Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…
Themes this week
JOEL
Themes this week:
- How AI is revolutionising game development and agentic simulation.
- The rise of biological computing, from mushroom-controlled robots to brain organoids.
- The critical balance between AI automation and human oversight in various industries.
AI dreams of digital playgrounds
If you haven’t seen the brilliant indie comedy Computer Chess, it might be worth a watch as an eccentric view of 1980s gaming nerds, and a testament to how far gaming has come in the subsequent 44 years. To be fair, traditional rules-based “AI” in games has been the mainstay of single-player experiences for a long time. But Google’s GameNGen scalable diffusion paper from last week, and now a new infinite Mario engine trained on a consumer GPU this week, have created huge interest in ‘AI as game engine’. These projects demonstrate AI’s ability to essentially hallucinate environments and mechanics without code, learning all of the necessary complex physics and interactions in virtual space. While still facing performance limitations – MarioVGG takes six seconds to generate half a second of gameplay – these developments hint at a future where game creation could be as simple as describing your vision to an AI.
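To make the ‘AI as game engine’ idea concrete, here is a minimal sketch of the generation loop these systems share: each frame is predicted from recent frames plus the player’s action, with no hand-written game logic at all. This is purely illustrative – in a real system like GameNGen the `predict_next_frame` stand-in below would be a trained diffusion model, and the frame sizes, context length, and action set here are invented for the example.

```python
import numpy as np

FRAME_SHAPE = (8, 8)  # tiny "screen" for illustration
CONTEXT = 4           # number of past frames the model conditions on

def predict_next_frame(history, action, rng):
    """Stand-in for a learned model: blends recent frames and shifts
    the image based on the player's action. Purely illustrative."""
    blended = np.mean(history, axis=0)
    shift = {"left": -1, "right": 1, "noop": 0}[action]
    return np.roll(blended, shift, axis=1) + rng.normal(0, 0.01, FRAME_SHAPE)

def play(actions, seed=0):
    """Autoregressive loop: every new frame is generated from the
    previous CONTEXT frames plus the current action."""
    rng = np.random.default_rng(seed)
    history = [np.zeros(FRAME_SHAPE) for _ in range(CONTEXT)]
    frames = []
    for action in actions:
        frame = predict_next_frame(history[-CONTEXT:], action, rng)
        history.append(frame)
        frames.append(frame)
    return frames

frames = play(["right", "right", "left", "noop"])
print(len(frames), frames[0].shape)
```

The key point is the structure of the loop, not the model inside it: because every frame depends only on prior frames and an action, the “game” exists entirely in the model’s learned dynamics – which is also why errors compound and why MarioVGG-style systems are currently so slow.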
Current-generation AI’s gaming applications are still in their infancy, held back by long development cycles, cost constraints, and high performance demands. But the long-term uses are diverse, from enhancing graphics and procedural generation to improving NPCs (non-player characters), analysing and personalising player experiences, and increasing the speed of development (this Minecraft clone was created entirely via prompts using Vercel’s v0). Nvidia’s Avatar Cloud Engine (ACE) is set to bring a new generation of intelligent, personality-rich NPCs to games, potentially revitalising experiences and attracting new players. But the real game-changer might be when these NPCs cease to be “non-player” at all.
Robert Yang, CEO of Altera, posted on X this week, “As AI agents become an integral part of our human civilization, they must effectively collaborate with each other and the rest of us.” This collaboration is already taking shape in projects like Altera’s simulation of over 1,000 AI agents in the game Minecraft, forming their own societies complete with economies and governments.
The methodologies developed for these gaming AIs have broad potential. GameNGen’s approach could benefit fields like urban planning, scientific simulations, and social science research. Altera’s work in games could mean vast agent simulations that model complex real-world dynamics.
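As a sketch of what an agent-based simulation like Altera’s looks like in miniature, the toy below runs a population of agents that gather and trade resources each tick. Everything here is invented for illustration – the agent names, the two resources, and the simple trading rule – and in an LLM-driven simulation each `act` decision would come from a model rather than a hand-written rule.

```python
import random

class Agent:
    """A toy agent with two resources it gathers and trades."""

    def __init__(self, name):
        self.name = name
        self.food = 5
        self.wood = 5

    def act(self, partner):
        # Gather whichever resource the agent has less of.
        if self.food <= self.wood:
            self.food += 1
        else:
            self.wood += 1
        # Trade one unit when both sides benefit (mutual surplus).
        if self.wood > partner.wood and partner.food > self.food:
            self.wood -= 1; partner.wood += 1
            partner.food -= 1; self.food += 1

def simulate(n_agents=10, ticks=50, seed=0):
    """Each tick, every agent acts against a random partner."""
    random.seed(seed)
    agents = [Agent(f"agent-{i}") for i in range(n_agents)]
    for _ in range(ticks):
        for agent in agents:
            agent.act(random.choice(agents))
    return agents

agents = simulate()
print(sum(a.food + a.wood for a in agents))
```

Even at this scale the interesting property appears: economy-like behaviour (trade flows, resource balancing) emerges from local decisions rather than being scripted globally, which is what makes these simulations promising for modelling real-world dynamics.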
The diffusion models being used in many images, video and now game generation innovations could find applications in diverse fields, creating everything from protein structures to complex mathematical concepts.
Takeaways: The fusion of LLMs, diffusion models and gaming is opening up all kinds of possibilities for creators. Forward-thinking businesses should be taking a closer look at this evolution of agent and engine simulation, and exploring how these technologies could enhance their products, services, or research. At ExoBrain we’re helping clients to make agent-based simulations a useful reality. Of course, AI games can also be fun (and historic). Why not try the 80s-inspired AI Dungeon, an early demonstration of GPT-2’s power, and a surprisingly immersive experience for a text adventure.
From Mario to mushroom-powered bio-robots
Anybody who has played or watched The Last of Us will feel somewhat uneasy about this week’s news that researchers have successfully created robots controlled by mushroom mycelia. Unlike the game’s fungally-infected zombies, these mushroom-powered machines represent a groundbreaking step towards harnessing biological systems for computing.
This development is part of a broader research trend in biocomputing and hybridisation, where biological structures are fused with traditional computing systems. At the forefront of this field is the study of brain organoids – tiny, lab-grown clusters of human brain cells that can be integrated with electronic systems. This “organoid intelligence” (OI) aims to create biological computing substrates that could potentially offer far greater energy efficiency than traditional silicon-based systems.
Last month we reported on Swiss startup FinalSpark who offer access to its brain organoid computers for $500 per month. These systems use dopamine to “train” the organoids, mimicking the brain’s natural reward system. “We encapsulate dopamine in a molecular cage, invisible to the organoid initially,” explains FinalSpark co-founder Dr Fred Jordan. “When we want to ‘reward’ the organoid, we expose it to specific light frequencies. This light opens the cage, releasing the dopamine and providing the intended stimulus to the organoid.“
As we’re all aware, current AI systems consume enormous amounts of energy. Biocomputers could potentially operate at a fraction of this energy cost, making AI more sustainable and accessible. For businesses, this could mean reduced operational costs and the ability to deploy more powerful AI systems without the current energy constraints.
But what about the ethics I hear you say? As these biological computing systems become more sophisticated, questions arise about their potential for sentience or consciousness. Dr. Brett Kagan of Cortical Labs, (who have trained cells to play the game Pong), grapples with these questions daily. “We’re entering uncharted territory,” Kagan says. “At what point does a collection of neurons become something more? And what rights or protections should we afford to these entities?“
Takeaways: The emergence of intelligent biocomputing, from mushroom-controlled robots to brain organoid computers, represents a genuine paradigm shift. But while offering the potential for more efficient systems, it also brings ever more ethical and practical challenges. This research may not prove usable for some time, but it raises the question: can we get much nearer to the level of energy efficiency we see in the likes of the human brain… exaflop power on 20 watts? For a fascinating journey into what ubiquitous exascale computing at 20 watts would look like, we recommend the brilliant Carl Shulman discussing this on the 80,000 Hours podcast.
JOOST
Keeping humans in the loop
This week, Oxford University launched its Human-Centred AI Lab, emphasising the critical balance between AI automation and human oversight. The lab’s mission speaks of the importance of designing AI systems that enhance human capabilities rather than replace them, prioritising societal well-being and ethical practices.
As AI becomes more prevalent across industries, businesses are thinking about how to harness its power while preserving essential human judgment. From improving workplace inclusivity to better decision-making, AI has immense potential to transform our world. However, human oversight, control, and empathy remain critical in ensuring AI is used responsibly and ethically.
Microsoft is naturally upbeat on AI’s potential to foster inclusivity in the workplace. In a recent interview with the BBC, Microsoft’s executive leadership explained how they’re using AI to create more accessible environments through real-time captioning, summarising, and translating languages in meetings. These advancements highlight the need for thoughtful AI application, using these tools as enablers of inclusivity rather than drivers of marginalisation.
While AI has demonstrated its ability to augment decision-making processes in industries like finance, healthcare, and logistics, relying solely on it for critical decisions remains precarious, for now. DLA Piper’s recent report on the human factor in AI highlights this risk. Automation bias—where humans accept AI recommendations without sufficient scrutiny—is a real problem when AI can easily overwhelm a human user with decisions to review. Human oversight is particularly crucial in high-stakes decisions, such as credit scoring or hiring processes, where AI can perpetuate biases present in the data.
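One practical pattern for keeping humans in the loop is a routing gate: AI recommendations pass automatically only when they are high-confidence and low-stakes, and everything else is queued for human review. The sketch below is a minimal, hypothetical example – the threshold, the task categories, and the `Recommendation` structure are all invented for illustration, not taken from any specific vendor’s system.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9
HIGH_STAKES = {"credit_scoring", "hiring"}  # always human-reviewed

@dataclass
class Recommendation:
    task: str
    decision: str
    confidence: float

def route(rec):
    """Return 'auto' or 'human_review' for an AI recommendation."""
    if rec.task in HIGH_STAKES:
        # High-stakes decisions go to a human regardless of confidence,
        # countering automation bias where plausible outputs get rubber-stamped.
        return "human_review"
    if rec.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "auto"

recs = [
    Recommendation("invoice_matching", "approve", 0.97),
    Recommendation("hiring", "reject", 0.99),
    Recommendation("invoice_matching", "approve", 0.62),
]
print([route(r) for r in recs])
```

A gate like this also addresses the volume problem: by sending only genuinely uncertain or consequential cases to reviewers, it keeps the human queue small enough that scrutiny stays meaningful rather than becoming a rubber stamp.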
A key theme in AI adoption is finding the right balance between control and enablement. AI can help managers and leaders focus on strategic tasks by automating routine administrative duties. This shift allows human workers to focus on creativity, problem-solving, and interpersonal skills—areas where AI falls short. However, this balance is delicate. While AI can augment human capabilities, there’s a risk that over-reliance on automation could erode human agency.
As AI continues to evolve, the focus must remain on how it interacts with human behaviour, supports inclusive environments, and safeguards ethical decision-making. The future of AI is bright, but its success will depend on our ability to integrate human oversight into its very core. How can businesses ensure they’re striking the right balance between AI automation and human judgment? What steps can organisations take to embed ethics and human oversight into their AI systems from the ground up?
Takeaways: Businesses should prioritise a human-centred approach when implementing AI, focusing on enhancing rather than replacing human capabilities. Establish clear guidelines for human involvement in AI-driven decision-making processes, especially in high-stakes areas. By fostering a partnership between AI and human judgment, we can unlock the full potential of these powerful technologies without losing sight of what makes us uniquely human.
EXO
Weekly news roundup
This week’s news highlights significant investments in AI safety, the integration of AI into various industries, ongoing regulatory efforts, advancements in AI research, and developments in AI hardware, reflecting the rapid evolution and widespread impact of artificial intelligence.
AI business news
- OpenAI co-founder Sutskever’s new safety-focused AI startup SSI raises $1 billion (This substantial investment underscores the growing importance of AI safety in the industry, echoing our previous discussions on responsible AI development.)
- Marc Benioff’s Salesforce has declared a ‘hard pivot’ to autonomous AI agents (This shift by a major CRM player highlights the increasing role of AI in business operations, a trend we’ve been tracking closely.)
- Time 100 Most Influential People in AI 2024 (This list provides valuable insights into the key players shaping the AI landscape, many of whom we’ve discussed in previous newsletters.)
- Volkswagen brings ChatGPT to its vehicles via voice assist (This integration demonstrates the expanding applications of AI in everyday consumer products, a trend we’ve been monitoring.)
- M&S using AI as personal style guru in effort to boost online sales (This application showcases how AI is transforming retail experiences, a topic we’ve explored in previous issues.)
AI governance news
- Global powers sign AI pact focused on democratic values (This international agreement reflects the ongoing efforts to establish global AI governance, a crucial topic we’ve been following closely.)
- North Carolina Man Charged With Using AI to Win Music Royalties (This case highlights the emerging legal challenges surrounding AI-generated content, an issue we’ve previously discussed.)
- CMA clears Microsoft’s hiring of Inflection leadership (This decision reflects the ongoing scrutiny of big tech’s AI acquisitions, a recurring theme in our coverage of AI industry dynamics.)
- Ireland’s privacy watchdog ends legal fight with X over data use for AI after it agrees to permanent limits (This resolution underscores the ongoing tension between AI development and data privacy, a key issue we’ve been tracking.)
- Defense AI models ‘a risk to life’ claims spurned tech firm (This controversy highlights the ethical concerns surrounding AI in defense applications, a topic we’ve explored in previous newsletters.)
AI research news
- Attention Heads of Large Language Models: A Survey (This comprehensive survey provides valuable insights into the inner workings of LLMs, building on our previous discussions of AI model architectures.)
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA (This research addresses the important issue of citation generation in AI, relevant to our ongoing coverage of AI in academic and research contexts.)
- OLMoE: Open Mixture-of-Experts Language Models (This paper explores an open approach to Mixture-of-Experts models, a topic we’ve touched on in our discussions of AI model efficiency.)
- ReMamba: Equip Mamba with Effective Long-Sequence Modeling (This research advances long-sequence modeling capabilities, addressing a key challenge in AI that we’ve previously highlighted.)
- OD-VAE: An Omni-dimensional Video Compressor for Improving Latent Video Diffusion Model (This work on video compression for diffusion models relates to our ongoing coverage of advancements in AI-generated media.)
AI hardware news
- Intel’s 120 TOPS Lunar Lake AI PC chips have landed (This development in AI-focused PC chips reflects the growing demand for AI capabilities in consumer hardware, a trend we’ve been following.)
- OpenAI’s First In-House Chip Will Be Developed By TSMC On Its A16 Angstrom Process (This move by OpenAI into custom chip development highlights the increasing importance of specialised AI hardware, a topic we’ve discussed previously.)
- DoJ reportedly advances Nvidia antitrust investigation (This investigation underscores the growing scrutiny of AI chip market dominance, a crucial issue in the AI hardware landscape we’ve been monitoring.)
- Elon Musk’s xAI launches ‘Colossus’ AI training system with 100,000 Nvidia chips (This massive AI training system highlights the scale of resources being invested in AI development, a trend we’ve been tracking closely.)
- Nvidia’s AI chips are cheaper to rent in China than US (This pricing discrepancy reflects the complex global dynamics of AI chip supply and demand, an issue relevant to our ongoing coverage of AI hardware economics.)