Week 27 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

This week we look at:

  • The contrasting impact of AI on the UK and French elections.
  • Control in the coming age of AI agents.
  • Innovations in multi-modal and voice interaction.

A tale of two elections

As voters in the UK and France went to the polls this week, AI's influence on their respective elections stood in stark contrast. In the UK, it barely featured in party campaigns or the wider political debate. Meanwhile, across the Channel, French far-right parties have embraced the technology. Reports indicate they've been deploying AI-generated content across social media platforms to amplify deeply divisive messaging, raising concerns about the potential to sway democratic processes and even undermine election integrity.

The limited role of AI in the UK election likely stems from a combination of factors: an overriding public desire for change, and a largely traditional campaign focused on TV and press coverage. In contrast, French nationalist parties have published numerous AI-generated images across social media, reaching millions. Many posts, built around carefully crafted photorealistic motifs and targeting highly emotive topics such as immigration and EU scepticism, went undetected by platform moderators, raising concerns about the effectiveness of content policies.

Samuel LaFont, head of digital communication for France’s Reconquest party, noted the economic advantage of AI-generated imagery: “When you compare the price of a Shutterstock subscription and a Midjourney subscription, Shutterstock becomes irrelevant.” This shift in content creation practices poses challenges for regulators and platforms, as they struggle to keep pace.

This use of generated imagery exposes gaps in existing regulatory frameworks. Neither the UK nor France has specific laws governing the use of AI in political campaigns; both rely instead on voluntary commitments and platform policies that are proving inadequate. The EU's Digital Services Act (currently being used to pressure X to deal with dangerous content) and the recently ratified EU AI Act offer some measure of regulation, but their effectiveness against political groups intent on aggressive multi-channel communication strategies will always be limited.

Yahoo News reports that in France, "RN (National Rally) candidates have had AI training, including how to use ChatGPT since January 2024, according to French broadcaster BFMTV. In their training booklet, it gives examples of how AI can help political candidates, like creating campaign posters and finding arguments for debates, the broadcaster reported."

France finds itself at a paradoxical crossroads in its AI ambitions. President Macron's vision of a "startup nation" and efforts to nurture homegrown AI giants like Mistral AI have borne fruit, with French AI companies securing $2.3 billion in funding over the past decade – more than any other European country. And yet AI is now intensifying a political turmoil that could see more restrictive immigration policies, increased taxation, and policy gridlock derail this AI boom.

Nonetheless, AI remains a key focus for the French President, as evident in his X post on Friday morning to his new UK counterpart: "Congratulations Sir Keir Starmer on your victory. Pleased with our first discussion. We will continue the work begun with the UK for our bilateral cooperation, for peace and security in Europe, for the climate and for AI."

Takeaways: As Labour takes the reins of government, their approach to AI will be closely watched. They have expressed a desire to toughen up regulation, including putting the AI Safety Institute on a statutory footing. The UK implemented a digital imprints law as part of the Elections Act 2022, providing a basis for protection against anonymous political content. This legislation requires digital campaign material to include an 'imprint' identifying who is responsible for publishing it. While the law wasn't specifically designed to address AI content, it offers a framework that could be adapted to tackle emerging challenges in political campaigning. However, as it is new and untested against full-scale use of AI, its effectiveness in regulating advanced political messaging remains to be seen. Whilst the UK has so far avoided any challenges to the orderly transition of power, this has been a matter of luck rather than of truly robust protections. With its own concerns about the right wing of British politics, the new government will need to act swiftly to set out its approach to AI and to prepare for how it might intersect with the electoral process in future.

Agents untethered

In 2009, Harvard professor Jonathan Zittrain’s book, The Future of the Internet – And How to Stop It, warned of a digital landscape dominated by “tethered appliances” that could stifle innovation and facilitate unprecedented control. Fifteen years on, as AI agents evolve from simple chatbots to autonomous decision-makers, Zittrain’s concerns have pivoted to encompass the ‘untethered’. In a thought-provoking piece for The Atlantic this week, he sounds an alarm about the rise of agents – autonomous AI systems acting independently on behalf of humans. These agents, Zittrain argues, could proliferate out of control, potentially causing harm long after their original purpose becomes obsolete. “Give it a few goals, let it have a bank account, let it spend money… who knows where it ends up?” he cautions, highlighting the need for safeguards.

Concern over the impact of software agents, bots, or autonomous entities is not new. In 2010, a flash crash caused by high-frequency trading algorithms wiped out $1 trillion of stock market value in minutes, serving as an early warning of the unintended consequences of automated systems.

Investment is now pouring into the development of ‘agentic AI’. Altera, for example, is developing agents that can play Minecraft alongside human users. This may seem like a trivial application, but it represents an interesting approach to exploring the nuances of human-AI interaction. “We are developing advanced AI agents that coexist with us in virtual environments,” explains an Altera spokesperson. “These agents possess strong pro-human social-emotional intelligence and will eventually attain self-awareness.” Altera posted on X this week showing their Minecraft agents developing a form of collaboration: “our agents are now logging their progress on google sheets too. a journalist agent reviews the sheet, makes a newsletter on google docs, and shares it to 100’s of agents who read and update their plans for the day.”

While the potential benefits of socially aware agents that work tirelessly to achieve complex goals are immense, the risks are not to be ignored. Zittrain warns of the dangers of “set it and forget it” entities that could operate indefinitely, long after anyone remembers why they were launched. “Agents set to translate vague goals might also choose the wrong means to achieve them,” he cautions.

Cloudflare’s recent launch of a free tool to combat AI bots scraping websites suggests an emerging cat-and-mouse game between agents and digital asset owners. Perplexity recently got into hot water over exactly this kind of scraping. Cloudflare, which caches a sizeable proportion of the world’s internet content, stated: “we fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection”.
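
To make that cat-and-mouse dynamic concrete, below is a minimal sketch of the crudest form of bot filtering: matching the user-agent strings that well-behaved AI crawlers publicly declare. This is purely illustrative; Cloudflare's tool relies on much richer signals such as behavioural fingerprinting, and the handle_request helper here is our own hypothetical.

```python
# Minimal sketch of user-agent based AI-crawler filtering.
# These are publicly documented crawler names; a scraper that lies
# about its user agent sails straight past this check.
KNOWN_AI_CRAWLERS = {
    "GPTBot",         # OpenAI
    "CCBot",          # Common Crawl
    "ClaudeBot",      # Anthropic
    "PerplexityBot",  # Perplexity
    "Bytespider",     # ByteDance
}

def is_declared_ai_crawler(user_agent: str) -> bool:
    """Return True if the request declares itself as a known AI crawler."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in KNOWN_AI_CRAWLERS)

def handle_request(headers: dict) -> int:
    """Toy handler (hypothetical): 403 for declared AI crawlers, else 200."""
    if is_declared_ai_crawler(headers.get("User-Agent", "")):
        return 403
    return 200
```

The obvious weakness is that this only stops crawlers that identify themselves honestly, which is precisely why Cloudflare's approach leans on detection rather than declaration.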

Takeaways: Proposals for agent regulation, including the idea of implementing a “time to live”, feel unlikely to succeed given the variety of architectures that can be used to build a multi-agent system. But when developing agent systems, the range of possible complex outcomes must be considered and carefully managed. Businesses should implement testing frameworks that continuously simulate a wide range of scenarios, establish clear guidelines for agent behaviour and decision-making processes, and extend risk and control frameworks to cover this new construct, ensuring human oversight at critical junctures.
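
For a sense of what a “time to live” might look like in practice, and why it is so easy to sidestep, here is a minimal sketch; the step function and renewal flow are hypothetical, not any published proposal's design.

```python
import time

class ExpiredAgentError(Exception):
    """Raised when an agent outlives its mandate."""

class TTLAgent:
    """Wrap an agent loop with a hard expiry: a sketch of the
    'time to live' idea, where the agent halts when its deadline
    passes and a human must explicitly renew its mandate."""

    def __init__(self, step_fn, ttl_seconds: float):
        self.step_fn = step_fn  # one unit of agent work; returns False when done
        self.expires_at = time.time() + ttl_seconds

    def renew(self, ttl_seconds: float) -> None:
        """Human-in-the-loop renewal of the agent's mandate."""
        self.expires_at = time.time() + ttl_seconds

    def run(self) -> None:
        while True:
            # Expiry is checked before every action, so a forgotten
            # agent stops rather than pursuing a stale goal forever.
            if time.time() >= self.expires_at:
                raise ExpiredAgentError("TTL reached; human renewal required")
            if not self.step_fn():
                break
```

The catch is that nothing compels an agent to run inside such a wrapper: one that spawns sub-agents or migrates to other infrastructure escapes the guard entirely, which is why the variety of possible architectures makes this kind of rule hard to enforce.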

The art of conversation

This week we saw demos and releases suggesting our interactions with AI will continue to get more fluid. Kyutai’s Moshi experiment, a new OpenAI GPT-4o voice mode demo, Character.AI, and ElevenLabs all showcased real-time capabilities.

Moshi, a relatively small open-weight multimodal language model from French lab Kyutai, can process speech input and output simultaneously, understand and express emotions, and speak with different accents. Meanwhile, on stage at last week’s AI Engineer World’s Fair, OpenAI’s GPT-4o was demonstrated via an unreleased version of ChatGPT Desktop. The demo showcased integrated low-latency voice generation, visual and video context understanding, and rapid optical character recognition.

Character.AI unveiled ‘Character Calls’, a feature that enables users to have real-time voice conversations with AI characters in its mobile app. Meanwhile, ElevenLabs expanded its AI voice capabilities by introducing AI-recreated voices of late Hollywood celebrities such as Judy Garland and James Dean to its reader product. Both companies emphasise safety measures to prevent misuse, but as ever, ethical questions loom large. The delay of GPT-4o’s voice mode until the autumn is likely due both to the technical demands of serving so many users and to the challenge of preventing undesired emotions and content in voice output.

Takeaways: These advancements represent a significant evolution in the AI user experience, moving us closer to genuinely conversational interaction. The ability to switch seamlessly between visual, audio, and text inputs promises a great deal. However, we’re still in the early stages: widespread, fully integrated availability remains limited, making it hard to evaluate the practical utility and broader implications of these multimodal interactions.

EXO

Weekly news roundup

This week’s news highlights the rapid advancements in generative AI, the growing investment in AI startups, and the ongoing debates around AI governance and responsible development. It also showcases the latest AI research breakthroughs and the evolving AI hardware landscape.

AI business news

AI governance news

AI research news

AI hardware news
