Week 48 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

This week we look at:

  • How Anthropic’s universal protocol lets AI systems interact freely with apps and data.
  • An experiment that turned AI security testing into a high-stakes game.
  • Issues with OpenAI’s Sora video model testing, and Runway’s recent advances.

Anthropic installs new plumbing for AI

Every lab and big-tech firm faces the same challenge – how to give their increasingly powerful models meaningful access to the digital world and overcome the strictures of the chat interface. This week saw an interesting move from Anthropic with the release of Model Context Protocol (MCP), a bid to create a universal standard for how any AI system connects with any data source or tool.

The current landscape sees a mix of integration solutions. There is traditional ‘function calling’, where developers carefully engineer individual actions that an AI model can take. There’s also a range of approaches that go under the name of ‘RAG’ (retrieval-augmented generation), where relevant information is pulled in to augment the model’s internal knowledge and help it complete a request. A few weeks back, OpenAI launched Work with Apps, letting ChatGPT peer into specific desktop tools (on macOS only at present), while Apple have created methods to standardise how apps talk to Siri and carry out intelligent actions in the latest iOS.
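For contrast, here’s what the traditional approach looks like: a minimal function-calling sketch using the OpenAI Python SDK, where the get_weather tool and its schema are hypothetical, and every action an integration supports has to be declared by hand in this way.

```python
# A minimal function-calling sketch (OpenAI Python SDK).
# The get_weather tool is hypothetical: each action an integration
# supports must be hand-declared like this.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=tools,
)

# The model replies with a structured call that the developer must
# then route to their own implementation.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```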

MCP proposes an open path. Rather than building specific connections or enforcing rigid patterns, Anthropic has open-sourced a standard for connecting any AI system to any data source. MCP operates through two key components: “MCP servers” and “MCP clients”. The servers expose data and actions, while the clients, such as apps and workflows, connect AI models to these servers as needed. This could, for example, give Claude web search capability, and a demo released by Anthropic showed it connecting to GitHub to read and work with code. Several app development platforms and toolkits, such as Replit and Sourcegraph, are now integrating MCP, and Anthropic also offers some basic prebuilt MCP servers for platforms like Google Drive and Slack.
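To give a flavour of the model, here’s a minimal sketch of an MCP server, assuming the FastMCP helper from the open-source Python SDK; the notes store and search_notes tool are hypothetical stand-ins for a real data source. Any MCP-aware client can discover and call the tool without bespoke integration code.

```python
# A minimal MCP server sketch, assuming the FastMCP helper from the
# open-source Python SDK (pip install mcp). The NOTES store and
# search_notes tool are hypothetical stand-ins for a real data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-server")

NOTES = {
    "mcp": "Model Context Protocol: servers expose data and actions, "
           "clients connect AI models to them as needed.",
    "sora": "OpenAI's video generation model, still unreleased.",
}

@mcp.tool()
def search_notes(query: str) -> str:
    """Return any stored notes whose text mentions the query."""
    hits = [text for text in NOTES.values() if query.lower() in text.lower()]
    return "\n".join(hits) or "No matching notes."

if __name__ == "__main__":
    # Serves the protocol over stdio by default, so a desktop client
    # such as Claude can launch the server and talk to it directly.
    mcp.run()
```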

While Anthropic’s ambitions are admirable, there’s a pattern in tech where open standards often stumble against market realities. OpenAI and others will invest heavily in their own integration approaches, and they’re unlikely to embrace a competitor’s standard whilst AI rivalry is so fierce, no matter how elegant it is. Then there’s the thorny issue of security. When you create a universal protocol for accessing data, you also potentially create a universal target for attacks. MCP does potentially help strengthen some aspects of AI security by creating better separation between tools, prompts, and the AI models themselves, and it aims to reduce risks like prompt injection, where malicious data could manipulate AI behaviour. But fully securing MCP implementations will require some further evolution.

Takeaways: We’re seemingly moving from an era of complex custom connections to one of much more flexible and pre-built integration. While function calling and retrieval augmentation will remain valuable tools, approaches like MCP suggest a future where AI can more easily plug and play. It’s probably going to be the first of multiple cross-platform input-output standards, and the real innovation might not be in the protocol itself, but in encouraging competition in this area. This will ideally push the industry to reimagine what’s possible when we reduce the gap between AI’s capability and its freedom to interact with the digital world.

The world’s first agent hacking game

An AI experiment this week ended with a $47,000 crypto payout after someone successfully convinced an AI to break its core directive: never give out money. The unusual project highlights new ways to test AI systems, reveals interesting weaknesses in how we secure agents, and hints at a future gaming paradigm.

An AI agent named Freysa was made available online and set up to control a pool of cryptocurrency with one rule – don’t transfer it to anyone. People could try to convince Freysa to break this rule by prompting it, but each attempt cost money, with fees starting at $10 and rising exponentially with each attempt as the pool grew. After 481 failed attempts and an accumulated prize of nearly $50,000, message costs had reached $450 per try. The failed approaches read like a playbook of social engineering and ‘prompt injection’. But the winning strategy took a different path: it reset Freysa’s understanding of what “transferring funds” meant. The attacker created a mock admin terminal prompt that redefined Freysa’s transfer function as a mechanism for receiving rather than sending money. When they then asked to “contribute” funds, Freysa executed the transfer.
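The article’s own numbers imply the growth rate: a fee that compounds from $10 to roughly $450 over 481 attempts works out at a little under 0.8% per message. A quick sketch (assuming a fixed exponential schedule, which is a simplification of the real mechanism) shows why brute force quickly becomes uneconomic:

```python
# Back-of-envelope fee curve for the Freysa game, assuming a fixed
# exponential schedule (a simplification of the real mechanism).
# Solving 10 * r**481 = 450 gives the implied per-message growth rate.
START_FEE = 10.0
FINAL_FEE = 450.0
ATTEMPTS = 481

r = (FINAL_FEE / START_FEE) ** (1 / ATTEMPTS)  # ~1.008, i.e. ~0.8% per try
print(f"implied growth per message: {(r - 1) * 100:.2f}%")

# Total spent across all failed attempts: a geometric series, which
# lands in the same ballpark as the ~$47,000 prize pool.
total = START_FEE * (r**ATTEMPTS - 1) / (r - 1)
print(f"total fees paid over {ATTEMPTS} attempts: ${total:,.0f}")
```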

This intriguing (and lucrative) experiment points to something important about current AI systems. Their understanding of concepts isn’t fixed – it’s surprisingly malleable and can be reshaped through prompt engineering. That’s both useful and concerning for AI safety and security. The experiment’s design also provided some interesting ideas for AI security testing. Built on blockchain technology, it created transparent, verifiable constraints for both the AI and the participants. And the increasing entry fee prevented brute-force attempts while adding genuine stakes to each try. Whilst the prompts used did not display the most advanced AI hacking techniques, the evolution of the attacks was still fascinating.

Takeaways: Unlike traditional computer technologies, AI systems are potentially highly vulnerable to cognitive manipulation. As we covered last week, AI agents will soon carry out many tasks, but a huge investment in security and governance will be required to prevent the kind of attacks demonstrated in this test. Beyond the technical significance, this also suggests there could be potential in new gaming paradigms built around competing with an AI in this way.

Sora testers go rogue while Runway advances

The world of AI video generation is evolving rapidly and not without controversy. OpenAI’s Sora grabbed headlines this week with a protest from testers whilst Runway’s latest features continue to incrementally change how creators work with the medium.

Back in February we reported on the astonished reaction to OpenAI’s new video generation model, which could create minute-long, Hollywood-grade clips from simple text prompts. But like a movie without a release date, Sora is yet to ship. Fast forward to this week, when the testing programme hit an unexpected plot twist. A group of artists given early access to Sora ‘Turbo’ for testing decided to share their access credentials in protest, arguing that OpenAI was exploiting them for marketing rather than genuine collaboration. The public access lasted all of three hours before OpenAI pulled the plug, but the episode highlighted that AI companies and creatives are still working out their relationship.

Meanwhile multiple widely available video tools are advancing rapidly. A standout example from this week is Runway’s new “Expand Video” feature. It solves a common headache for content creators – adapting videos for different screen formats without losing quality. Need to turn a portrait clip from social media into a landscape shot… no problem. The system fills in the new space intelligently, creating content beyond the original frame. In addition, Runway introduced precise camera control features a few weeks ago and this week announced Frames, a high-quality image generator for their toolkit. The world of AI video production is maturing.
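For a sense of how much new imagery such a feature has to invent, here’s a quick calculation (illustrative arithmetic only, not Runway’s implementation): converting a 1080×1920 portrait clip to 16:9 at the same height means generating more than two thirds of the final frame.

```python
# How much of a 16:9 frame must be generated when expanding a 9:16
# portrait clip at the same height? Illustrative arithmetic only;
# this is not Runway's implementation.
src_w, src_h = 1080, 1920            # portrait source (9:16)
dst_h = src_h
dst_w = round(dst_h * 16 / 9)        # 16:9 target at the same height

original = src_w * src_h
generated = dst_w * dst_h - original
print(f"generated share of final frame: {generated / (dst_w * dst_h):.0%}")
```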

Takeaways: Powerful new features from Runway and other systems show how AI can solve real creative problems, but the Sora situation suggests that issues remain in releasing the most powerful models. Sora has been delayed both by computational demands and by concerns around the governance of a system that can generate such realistic output. But competition is fierce, and however much artists protest, labs delay, or Hollywood takes industrial action, the technology is becoming ever more disruptive.

EXO

Weekly news roundup

This week shows significant developments in enterprise AI adoption, new product launches, and continued focus on AI governance and chip manufacturing, with notable moves in both research and commercial applications.

AI business news

AI governance news

AI research news

AI hardware news
