Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…
Themes this week
JOEL
This week we look at:
- The contrasting impact of AI on the UK and French elections.
- Control in the coming age of AI agents.
- Innovations in multi-modal and voice interaction.
A tale of two elections
As voters in the UK and France went to the polls this week, AI's influence on their respective elections stood in stark contrast. In the UK, it barely featured in party campaigns or the political debate. Meanwhile, across the Channel, French far-right parties have embraced the technology. Reports indicate they've been deploying AI-generated content across social media platforms to amplify deeply divisive messaging, raising concerns about the potential to sway democratic processes and even undermine election integrity.
The limited role of AI in the UK election likely stems from a combination of factors, including the cyclical desire for change that dominated public sentiment, and a largely traditional campaign focused on TV and press media. In contrast, French nationalist parties have published numerous AI-generated images across social media, reaching millions. Many posts, featuring carefully crafted photorealistic imagery on highly emotive topics such as immigration and EU scepticism, went undetected by social platform moderators, raising concerns about the effectiveness of content policies.
Samuel LaFont, head of digital communication for France’s Reconquest party, noted the economic advantage of AI-generated imagery: “When you compare the price of a Shutterstock subscription and a Midjourney subscription, Shutterstock becomes irrelevant.” This shift in content creation practices poses challenges for regulators and platforms, as they struggle to keep pace.
This use of generated imagery exposes gaps in the regulatory frameworks. Neither the UK nor France has specific laws governing the use of AI in political campaigns, relying instead on voluntary commitments and platform policies that are proving inadequate. The EU’s Digital Services Act (currently being used to pressure X to deal with dangerous content) and the recently ratified EU AI Act offer some measure of regulation, but their effectiveness in addressing political groups intent on aggressive multi-channel communication strategies will always be limited.
Yahoo News reports that in France "RN (National Rally) candidates have had AI training, including how to use ChatGPT since January 2024, according to French broadcaster BFMTV. In their training booklet, it gives examples of how AI can help political candidates, like creating campaign posters and finding arguments for debates, the broadcaster reported."
France finds itself at a paradoxical crossroads in its AI ambitions. President Macron's vision of a "startup nation" and efforts to nurture homegrown AI giants like Mistral AI have borne fruit, with French AI companies securing $2.3 billion in funding over the past decade – more than any other European country. And yet AI is now intensifying political turmoil that could see more restrictive immigration policies, increased taxation, and policy gridlock derail this AI boom.
Nonetheless, AI remains a key focus for the French President, evident in his X post on Friday morning to his new UK counterpart: "Congratulations Sir Keir Starmer on your victory. Pleased with our first discussion. We will continue the work begun with the UK for our bilateral cooperation, for peace and security in Europe, for the climate and for AI."
Takeaways: As Labour takes the reins of government, their approach to AI will be closely watched. They have expressed a desire to toughen up regulation, including putting the AI Safety Institute on a statutory footing. The UK implemented a digital imprints regime as part of the Elections Act 2022, providing a basis for protection against anonymous political content: digital campaign material must include an 'imprint' identifying who is responsible for publishing it. While this law wasn't designed with AI content in mind, it offers a framework that could be adapted to tackle emerging challenges in political campaigning; as it is new and untested against full-scale use of AI, its effectiveness remains to be seen. Whilst the UK has so far avoided any challenges to the orderly transition of power, this has been a matter of luck rather than truly robust protections. With its own concerns about the right wing of British politics, the new government will need to act swiftly to set out its approach to AI and to prepare for how it might intersect with the electoral process in the future.
Agents untethered
In 2008, Harvard professor Jonathan Zittrain's book, The Future of the Internet – And How to Stop It, warned of a digital landscape dominated by "tethered appliances" that could stifle innovation and facilitate unprecedented control. Sixteen years on, as AI agents evolve from simple chatbots to autonomous decision-makers, Zittrain's concerns have pivoted to encompass the 'untethered'. In a thought-provoking piece for The Atlantic this week, he sounds an alarm about the rise of agents – autonomous AI systems acting independently on behalf of humans. These agents, Zittrain argues, could proliferate out of control, potentially causing harm long after their original purpose becomes obsolete. "Give it a few goals, let it have a bank account, let it spend money… who knows where it ends up?" he cautions, highlighting the need for safeguards.
Concern over the impact of software agents, bots, or autonomous entities is not new. In 2010, a flash crash caused by high-frequency trading algorithms wiped out $1 trillion of stock market value in minutes, serving as an early warning of the unintended consequences of automated systems.
Investment is now pouring into the development of 'agentic AI'. Altera, for example, is developing agents that can play Minecraft alongside human users. This may seem like a trivial application, but it represents an interesting approach to exploring the nuances of human-AI interactions. "We are developing advanced AI agents that coexist with us in virtual environments," explains an Altera spokesperson. "These agents possess strong pro-human social-emotional intelligence and will eventually attain self-awareness." Altera posted on X this week, showing their Minecraft agents developing collaborative workflows: "our agents are now logging their progress on google sheets too. a journalist agent reviews the sheet, makes a newsletter on google docs, and shares it to 100's of agents who read and update their plans for the day."
While the potential benefits of socially aware agents that work tirelessly to achieve complex goals are immense, the risks are not to be ignored. Zittrain warns of the dangers of "set it and forget it" entities that could operate indefinitely, outliving their usefulness and any human attention. "Agents set to translate vague goals might also choose the wrong means to achieve them," he cautions.
Cloudflare's recent launch of a free tool to combat AI bots scraping websites suggests an emerging cat-and-mouse game between agents and digital asset owners. Perplexity has recently landed in hot water over exactly this kind of scraping. Cloudflare, which caches a sizeable proportion of the world's Internet traffic, stated: "we fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection".
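For site owners, the passive first line of defence remains robots.txt. A minimal sketch, using the crawler user-agent tokens these companies have published (GPTBot for OpenAI, CCBot for Common Crawl, PerplexityBot for Perplexity):

```
# robots.txt: asks AI crawlers not to fetch any page on this site.
# Compliance is entirely voluntary; well-behaved crawlers honour it,
# which is exactly why active blocking tools like Cloudflare's exist.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```

Because these directives are requests rather than enforcement, network-level detection of the kind Cloudflare offers is the logical next step for owners whose content is scraped regardless.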
Takeaways: Proposals for agent regulation, including the idea of implementing a "time to live", feel unlikely to succeed given the variety of architectures that can be used to create a multi-agent system. But when developing agent systems, the range of complex outcomes must be considered and carefully managed. Businesses should implement testing frameworks that continuously simulate many scenarios. They should also establish clear guidelines for agent behaviour and decision-making processes and extend risk and control frameworks to include this new construct, ensuring human oversight at critical junctures; a minimal sketch of what those last two controls might look like follows below.
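To make the time-to-live and human-oversight ideas concrete, here is a minimal Python sketch. All names are hypothetical and it is not tied to any real agent framework: a wrapper that hard-expires an agent after a deadline and routes high-impact actions through a human approval gate.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class GuardedAgent:
    """Wraps an agent's step function with a TTL and an approval gate.

    `step` returns the next proposed action as a string ("DONE" to finish);
    `is_high_impact` flags actions that need explicit human sign-off.
    Illustrative only: not a real framework API.
    """
    step: Callable[[], str]
    is_high_impact: Callable[[str], bool]
    approve: Callable[[str], bool]      # human-in-the-loop callback
    ttl_seconds: float = 3600.0         # hard expiry: no immortal agents
    started_at: float = field(default_factory=time.monotonic)
    log: List[str] = field(default_factory=list)

    def run(self) -> None:
        while time.monotonic() - self.started_at < self.ttl_seconds:
            action = self.step()
            if action == "DONE":
                break
            if self.is_high_impact(action) and not self.approve(action):
                self.log.append(f"BLOCKED: {action}")
                continue  # human rejected it; the agent must re-plan
            self.log.append(f"EXECUTED: {action}")
        else:  # loop ended because the TTL ran out, not because of DONE
            self.log.append("EXPIRED: time to live reached, agent halted")


# Example: an agent that proposes three actions, one needing sign-off.
actions = iter(["draft newsletter", "send $500 payment", "DONE"])
agent = GuardedAgent(
    step=lambda: next(actions),
    is_high_impact=lambda a: "payment" in a,
    approve=lambda a: False,  # stand-in for a real human review step
)
agent.run()
print(agent.log)  # ['EXECUTED: draft newsletter', 'BLOCKED: send $500 payment']
```

The design choice worth noting is that the default outcome is that the agent stops: it must be kept alive deliberately, which directly answers Zittrain's "set it and forget it" worry.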
The art of conversation
This week we saw demos and releases suggesting our interactions with AI will continue to get more fluid. Kyutai’s Moshi experiment, a new OpenAI GPT-4o voice mode demo, Character.AI, and ElevenLabs all showcased real-time capabilities.
Moshi, a relatively small open-weight multimodal language model from a French lab, can process speech input and output simultaneously, understand and express emotions, and speak with different accents. Meanwhile, on stage at last week's AI Engineer World's Fair, OpenAI's GPT-4o was demonstrated via an unreleased version of ChatGPT Desktop. The demo showcased integrated low-latency voice generation, visual context understanding, real-time video understanding, and rapid optical character recognition.
Character.AI unveiled 'Character Calls', a feature that enables users to have real-time voice conversations with AI characters on its mobile app. Meanwhile, ElevenLabs expanded its AI voice capabilities by introducing AI-recreated voices of late Hollywood celebrities like Judy Garland and James Dean to its Reader product. Both emphasise safety measures to prevent misuse, but as ever ethical questions loom large. The delay of GPT-4o voice mode until the autumn is likely due both to the technical demands of serving many users and to the challenge of preventing undesired emotions and content in voice output.
Takeaways: These advancements represent a significant evolution in AI user experience, moving us closer to genuine conversational interaction. The ability to seamlessly switch between visual, audio, and text inputs promises a great deal. However, we're still in the early stages: while the potential is exciting, widespread, fully integrated availability remains limited, making it hard to evaluate the practical utility and broader implications of these multimodal interactions.
EXO
Weekly news roundup
This week’s news highlights the rapid advancements in generative AI, the growing investment in AI startups, and the ongoing debates around AI governance and responsible development. It also showcases the latest AI research breakthroughs and the evolving AI hardware landscape.
AI business news
- Report: China leads the world in generative AI patents (Demonstrates China’s strategic focus on AI innovation and the global race to dominate this transformative technology.)
- Exclusive: AI coding startup Magic seeks $1.5-billion valuation in new funding round (Highlights the continued investor interest in AI-powered software development tools and the potential for AI to revolutionise the coding process.)
- WhatsApp “Imagine Me” feature leaks – here’s what we know (Showcases the integration of generative AI into mainstream messaging platforms and the potential impact on user experiences.)
- Can AI outperform a wealth manager at picking investments? (Explores the growing role of AI in the financial services industry and the potential for AI-driven investment strategies to challenge traditional human-led approaches.)
- Investors Pour $27.1 Billion Into A.I. Start-Ups, Defying a Downturn (Highlights the continued investor confidence in the AI sector, even in the face of broader economic challenges, and the potential for AI startups to drive innovation.)
AI governance news
- Figma disables new AI design feature after being called out on social media (Demonstrates the ongoing debates around the responsible deployment of AI tools, particularly in sensitive domains like design, and the need for careful consideration of ethical implications.)
- YouTube now lets you request removal of AI-generated content that simulates your face or voice (Highlights the growing concerns around the misuse of AI-generated media and the efforts by platforms to address these issues and protect user privacy.)
- News outlets are accusing Perplexity of plagiarism and unethical web scraping (Explores the ethical challenges surrounding the use of web data by AI systems and the potential for AI-powered tools to infringe on intellectual property rights.)
- A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too (Underscores the importance of robust cybersecurity measures in the AI industry and the potential geopolitical implications of AI technology falling into the wrong hands.)
- Microsoft CEO of AI: Online content is ‘freeware’ for models (Highlights the ongoing debates around the use of web data in AI training and the need for clear guidelines and regulations to ensure responsible and ethical AI development.)
AI research news
- Summary of a Haystack: A Challenge to Long-Context LLMs and RAG Systems (Showcases the latest advancements in large language models and the ongoing efforts to improve their performance on complex, long-form tasks.)
- Scaling Synthetic Data Creation with 1,000,000,000 Personas (Highlights the growing importance of synthetic data in AI training and the potential for large-scale persona generation to enhance model performance.)
- Simulating Classroom Education with LLM-Empowered Agents (Explores the application of AI in educational settings and the potential for language models to enhance the learning experience.)
- AI Agents That Matter (Argues that current agent benchmarks overweight accuracy while neglecting cost and reproducibility, and proposes more rigorous evaluation practices for agent research.)
- GraphRAG: New tool for complex data discovery now on GitHub (Highlights the ongoing efforts to create advanced AI-powered tools for data analysis and discovery, which can have significant implications across various industries.)
AI hardware news
- Huawei exec rejects idea that advanced chip shortage will hamper China’s AI ambitions (Demonstrates the strategic importance of semiconductor technology for AI development and the efforts by major players to overcome supply chain challenges.)
- Samsung expects profits to soar with boost from AI (Highlights the growing integration of AI capabilities into consumer electronics and the potential for AI to drive revenue growth for leading tech companies.)
- Google’s emissions grew 13% in 2023 amid increasing AI energy demand (Underscores the environmental impact of the growing computational demands of AI systems and the need for sustainable hardware solutions.)
- ASML expansion in Veldhoven can proceed, Dutch court rules (Highlights the critical role of semiconductor manufacturing equipment providers in supporting the growth of the AI hardware ecosystem.)
- Lambda reportedly seeking $800M in new funding for GPU cloud (Demonstrates the increasing demand for specialised AI hardware infrastructure and the investment opportunities in this rapidly evolving market.)