Week 36 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

Themes this week:

  • How AI is revolutionising game development and agentic simulation.
  • The rise of biological computing, from mushroom-controlled robots to brain organoids.
  • The critical balance between AI automation and human oversight in various industries.

AI dreams of digital playgrounds

If you haven’t seen the brilliant indie comedy Computer Chess, it might be worth a watch as an eccentric view of 1980s gaming nerds, and a testament to how far gaming has come in the subsequent 44 years. To be fair, traditional rules-based “AI” in games has been the mainstay of single-player experiences for a long time. But Google’s GameNGen scalable diffusion paper from last week, and now a new infinite Mario engine trained on a consumer GPU this week, have created huge interest in ‘AI as game engine’. These projects demonstrate AI’s ability to essentially hallucinate environments and mechanics without code, learning all of the necessary complex physics and interactions in virtual space. While still facing performance limitations – MarioVGG takes six seconds to generate half a second of gameplay – these developments hint at a future where game creation could be as simple as describing your vision to an AI.
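To make the ‘AI as game engine’ idea concrete, here is a minimal sketch of the loop such models run: sample the next frame from recent frames plus the player’s input, then feed it back in as context. The model call is a stub (neither GameNGen nor MarioVGG is available as a library), and the frame size, context length and function names below are illustrative assumptions.

```python
# Minimal sketch of a diffusion-style 'AI as game engine' loop.
# The sampler is a stand-in: a real model would be trained to predict the
# next frame conditioned on recent frames and the player's action.
import numpy as np

FRAME_SHAPE = (240, 320, 3)   # hypothetical low-res RGB frame
CONTEXT_LEN = 8               # how many past frames the model sees

def denoise_next_frame(past_frames, action, steps=20):
    """Stand-in diffusion sampler: start from noise and iteratively move
    toward a plausible next frame. The stub ignores `action`; a trained
    model would condition on it to produce the corresponding mechanics."""
    frame = np.random.randn(*FRAME_SHAPE)            # pure noise
    for _ in range(steps):
        frame = 0.9 * frame + 0.1 * past_frames[-1]  # fake 'denoising' step
    return frame

def game_loop(n_steps=10):
    history = [np.zeros(FRAME_SHAPE) for _ in range(CONTEXT_LEN)]
    for t in range(n_steps):
        action = np.random.choice(["left", "right", "jump", "noop"])
        frame = denoise_next_frame(history, action)
        history = history[1:] + [frame]              # slide the context window
        print(f"t={t} action={action} frame mean={frame.mean():.3f}")

if __name__ == "__main__":
    game_loop()
```

The interesting property is that nothing in this loop encodes Mario’s physics; in the real systems, all of that behaviour has to be learned into the frame predictor itself.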

Current-generation AI applications are still in their infancy, constrained by long game development cycles, cost pressures, and high performance demands. But the long-term uses are diverse, from enhancing graphics and procedural generation to improving NPCs (non-player characters), analysing and personalising player experiences, and increasing the speed of development (this Minecraft clone was created entirely via prompts using Vercel’s v0). Nvidia’s Avatar Cloud Engine (ACE) is set to bring a new generation of intelligent, personality-rich NPCs to games, potentially revitalising experiences and attracting new players. But the real game-changer might be when these NPCs cease to be “non-player” at all.

Robert Yang, CEO of Altera, posted on X this week, “As AI agents become an integral part of our human civilization, they must effectively collaborate with each other and the rest of us.” This collaboration is already taking shape in projects like Altera’s simulation of over 1,000 AI agents in the game Minecraft, forming their own societies complete with economies and governments.
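As a rough illustration of what an agent society involves at its simplest, the toy loop below gives each agent a role, an inventory and a one-line policy, and lets a small market emerge over repeated steps. The roles, prices and rules are assumptions made for the sketch, not Altera’s actual architecture.

```python
# Toy agent-society loop: roles, inventories, and a per-step market.
# Everything here is illustrative; real systems drive each agent with an LLM.
import random

class Agent:
    def __init__(self, name, role):
        self.name, self.role = name, role
        self.inventory = {"food": 5, "coin": 10}

    def act(self, market):
        if self.role == "farmer":
            self.inventory["food"] += 1
            market.append((self.name, "food", 2))   # offer food at 2 coins
        elif self.inventory["food"] < 3 and market:
            seller, good, price = market.pop(0)     # take the first listing
            if self.inventory["coin"] >= price:
                self.inventory["coin"] -= price
                self.inventory["food"] += 1

def simulate(n_agents=20, steps=50):
    roles = ["farmer", "builder", "trader"]
    agents = [Agent(f"agent_{i}", random.choice(roles)) for i in range(n_agents)]
    for _ in range(steps):
        market = []                                  # fresh listings each step
        for agent in agents:
            agent.act(market)
    return agents

if __name__ == "__main__":
    for a in simulate()[:5]:
        print(a.name, a.role, a.inventory)
```

Even at this scale you can watch resources concentrate and shortages appear; Altera’s point is that with 1,000+ LLM-driven agents, far richer structures such as economies and governments start to form.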

The methodologies developed for these gaming AIs have broad potential. GameNGen’s approach could benefit fields like urban planning, scientific simulations, and social science research. Altera’s work in games could mean vast agent simulations that model complex real-world dynamics.

The diffusion models behind many image, video, and now game generation innovations could find applications in diverse fields, creating everything from protein structures to representations of complex mathematical concepts.

Takeaways: The fusion of LLMs, diffusion models and gaming is opening up all kinds of possibilities for creators. Forward-thinking businesses should be taking a closer look at this evolution of agent and engine simulation and exploring how these technologies could enhance their products, services, or research. At ExoBrain we’re helping clients to make agent-based simulations a useful reality. Of course, AI games can also be fun (and historic). Why not try the 80s-inspired AI Dungeon, an early demonstration of GPT-2’s power, and a surprisingly immersive experience for a text adventure.

From Mario to mushroom-powered bio-robots

Anybody who has played or watched The Last of Us will feel somewhat uneasy about this week’s news that researchers have successfully created robots controlled by mushroom mycelia. Unlike the game’s fungally-infected zombies, these mushroom-powered machines represent a groundbreaking step towards harnessing biological systems for computing.

This development is part of a broader research trend in biocomputing and hybridisation, where biological structures are fused with traditional computing systems. At the forefront of this field is the study of brain organoids – tiny, lab-grown clusters of human brain cells that can be integrated with electronic systems. This “organoid intelligence” (OI) aims to create biological computing substrates that could potentially offer far greater energy efficiency than traditional silicon-based systems.

Last month we reported on Swiss startup FinalSpark, which offers access to its brain organoid computers for $500 per month. These systems use dopamine to “train” the organoids, mimicking the brain’s natural reward system. “We encapsulate dopamine in a molecular cage, invisible to the organoid initially,” explains FinalSpark co-founder Dr Fred Jordan. “When we want to ‘reward’ the organoid, we expose it to specific light frequencies. This light opens the cage, releasing the dopamine and providing the intended stimulus to the organoid.”

As we’re all aware, current AI systems consume enormous amounts of energy. Biocomputers could potentially operate at a fraction of this energy cost, making AI more sustainable and accessible. For businesses, this could mean reduced operational costs and the ability to deploy more powerful AI systems without the current energy constraints.

But what about the ethics, I hear you say? As these biological computing systems become more sophisticated, questions arise about their potential for sentience or consciousness. Dr Brett Kagan of Cortical Labs (who have trained cells to play the game Pong) grapples with these questions daily. “We’re entering uncharted territory,” Kagan says. “At what point does a collection of neurons become something more? And what rights or protections should we afford to these entities?”

Takeaways: The emergence of intelligent biocomputing, from mushroom-controlled robots to brain organoid computers, represents an actual ‘paradigm’ shift! But while offering the potential for more efficient systems, it also brings ever more ethical and practical challenges. This research may not prove usable for some time, but it raises the question: can we get much nearer to the level of energy efficiency we see in the likes of the human brain… exaflop power on 20 watts? For a fascinating journey into what ubiquitous exaflop-scale computing at 20 watts would look like, we recommend the brilliant Carl Shulman discussing this on the 80,000 Hours podcast.
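As a back-of-envelope check on that 20-watt exaflop framing, the snippet below compares operations per joule for a modern datacentre GPU and the commonly quoted brain estimate. Both figures are rough order-of-magnitude assumptions, not measurements from this article.

```python
# Rough efficiency comparison. Assumed ballpark figures:
#   - datacentre GPU: ~1e15 FLOPS at ~700 W
#   - human brain:    ~1e18 ops/s at ~20 W (a common, contested estimate)
GPU_FLOPS, GPU_WATTS = 1e15, 700
BRAIN_OPS, BRAIN_WATTS = 1e18, 20

gpu_eff = GPU_FLOPS / GPU_WATTS      # operations per joule
brain_eff = BRAIN_OPS / BRAIN_WATTS

print(f"GPU:   {gpu_eff:.2e} ops/J")
print(f"Brain: {brain_eff:.2e} ops/J")
print(f"Gap:   roughly {brain_eff / gpu_eff:,.0f}x")
```

On these assumptions the gap is on the order of tens of thousands of times, which is why biological substrates are so tantalising despite their immaturity.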

JOOST

Keeping humans in the loop

This week, Oxford University launched its Human-Centred AI Lab, emphasising the critical balance between AI automation and human oversight. The lab’s mission speaks of the importance of designing AI systems that enhance human capabilities rather than replace them, prioritising societal well-being and ethical practices.

As AI becomes more prevalent across industries, businesses are thinking about how to harness its power while preserving essential human judgment. From improving workplace inclusivity to better decision-making, AI has immense potential to transform our world. However, human oversight, control, and empathy remain critical in ensuring AI is used responsibly and ethically.

Microsoft is naturally upbeat on AI’s potential to foster inclusivity in the workplace. In a recent interview with the BBC, Microsoft’s executive leadership explained how they’re using AI to create more accessible environments through real-time captioning, summarising, and language translation in meetings. These advancements highlight the need for thoughtfulness in how AI is applied, and for its use as an enabler of inclusivity (rather than a driver of marginalisation).

While AI has demonstrated its ability to augment decision-making processes in industries like finance, healthcare, and logistics, relying solely on it for critical decisions remains precarious, for now. DLA Piper’s recent report on the human factor in AI highlights the danger: automation bias, where humans accept AI recommendations without sufficient scrutiny, is a real problem when AI can easily overwhelm a human user with decisions to review. Human oversight is particularly crucial in high-stakes decisions, such as credit scoring or hiring processes, where AI can perpetuate biases present in the data.
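One practical pattern for keeping oversight meaningful is to route only low-confidence or high-stakes recommendations to a reviewer, and to cap the review queue so people aren’t flooded into rubber-stamping. The sketch below is illustrative only; the thresholds, field names and rules are assumptions rather than any report’s or vendor’s framework.

```python
# Illustrative human-in-the-loop routing for AI recommendations.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    decision: str       # e.g. "approve" / "decline"
    confidence: float   # model's self-reported confidence, 0-1
    high_stakes: bool   # e.g. credit scoring or hiring

def route(recs, confidence_threshold=0.9, max_human_queue=50):
    automated, for_review, deferred = [], [], []
    for r in recs:
        needs_review = r.high_stakes or r.confidence < confidence_threshold
        if not needs_review:
            automated.append(r)
        elif len(for_review) < max_human_queue:
            for_review.append(r)
        else:
            deferred.append(r)   # queue full: hold rather than auto-approve
    return automated, for_review, deferred

if __name__ == "__main__":
    sample = [
        Recommendation("c1", "approve", 0.97, high_stakes=False),
        Recommendation("c2", "decline", 0.72, high_stakes=False),
        Recommendation("c3", "approve", 0.95, high_stakes=True),
    ]
    automated, for_review, deferred = route(sample)
    print("automated:", [r.case_id for r in automated])
    print("human review:", [r.case_id for r in for_review])
    print("deferred:", [r.case_id for r in deferred])
```

The important design choice is the last branch: when reviewers are saturated, the system holds decisions back rather than quietly waving them through, which is exactly where automation bias creeps in.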

A key theme in AI adoption is finding the right balance between control and enablement. AI can help managers and leaders focus on strategic tasks by automating routine administrative duties. This shift allows human workers to focus on creativity, problem-solving, and interpersonal skills—areas where AI falls short. However, this balance is delicate. While AI can augment human capabilities, there’s a risk that over-reliance on automation could erode human agency.

As AI continues to evolve, the focus must remain on how it interacts with human behaviour, supports inclusive environments, and safeguards ethical decision-making. The future of AI is bright, but its success will depend on our ability to integrate human oversight into its very core. How can businesses ensure they’re striking the right balance between AI automation and human judgment? What steps can organisations take to embed ethics and human oversight into their AI systems from the ground up?

Takeaways: Businesses should prioritise a human-centred approach when implementing AI, focusing on enhancing rather than replacing human capabilities. Establish clear guidelines for human involvement in AI-driven decision-making processes, especially in high-stakes areas. By fostering a partnership between AI and human judgment, we can unlock the full potential of these powerful technologies without losing sight of what makes us uniquely human.


EXO

Weekly news roundup

This week’s news highlights significant investments in AI safety, the integration of AI into various industries, ongoing regulatory efforts, advancements in AI research, and developments in AI hardware, reflecting the rapid evolution and widespread impact of artificial intelligence.

AI business news

AI governance news

AI research news

AI hardware news
