Week 31 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

This week we look at:

  • The evolution of AI companions and their potential to combat loneliness.
  • Google’s Gemma Scope toolkit, offering a peek into the minds of AI models.
  • How the UK, EU, and China are taking different paths in the AI race.

Silicon soulmates

Following last week’s Llama 3.1 launch, this week Meta unveiled AI Studio for creating custom AI personas (while at the same time disabling its celebrity AI chatbots feature). What does this mean for the growing space of human-AI relationships? Interestingly, a new study from Harvard Business School provides evidence that AI companions can effectively reduce loneliness.

Meta’s AI Studio, available to US users, allows anyone to create AI versions of themselves on Instagram or the web. Powered by Llama 3.1 models, AI Studio offers a range of customisation options. Users can tailor their AI’s name, personality, tone, avatar, and tagline. They can also define topics for their AI to avoid and links they want it to share. These AI profiles can engage in direct chat threads and even respond to comments on behalf of the creator’s account.
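Meta has not published a developer API for AI Studio, but to make the customisation surface concrete, here is a purely hypothetical sketch of what a persona definition covering those options might look like. Every field name below is invented for illustration:

# Hypothetical persona definition; Meta has not published an AI Studio API,
# and all field names here are invented to mirror the options described above.
persona = {
    "name": "Ask Joel",
    "personality": "curious, upbeat technology commentator",
    "tone": "informal",
    "avatar": "joel_avatar.png",
    "tagline": "Your weekly AI explainer",
    "avoid_topics": ["medical advice", "financial advice"],
    "share_links": ["https://example.com/newsletter"],
    "respond_to_comments": True,  # reply to comments on the creator's behalf
}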

“AI Studio is an evolution, creating a space for anyone including people, creators and celebrities to create their own AI,” stated Liz Sweeney, Meta spokesperson. This tool aims to compete with startups like Character.AI and Replika, while also providing a new avenue for creators and businesses to engage with their audience.

AI companions are not limited to the digital realm. This week saw the announcement of a new hardware product, the Friend pendant, created by Avi Schiffmann. Unlike productivity-focused wearables, this always-listening device aims to be a constant companion, offering emotional support and conversation. Powered by Anthropic’s Claude 3.5 language model, the $99 pendant can engage in unprompted commentary about the wearer’s surroundings and experiences.

The Harvard study provides some evidence for the effectiveness of AI companions in reducing loneliness. Through a series of studies, including analysis of real-world conversations and app reviews, as well as controlled experiments, the researchers found that AI companions can alleviate loneliness on par with human interactions.

Key findings from the study include:

  • AI companions successfully alleviate loneliness, with effects comparable to interacting with another person.
  • The loneliness-reducing effect persists over time, with significant reductions observed over a week-long period.
  • Users tend to underestimate the positive impact of AI companions on their loneliness levels.
  • The feeling of being “heard” by the AI companion is a crucial factor in reducing loneliness, even more so than the chatbot’s performance.

Meanwhile, the AI companion market is booming. Engagement rates on such apps surpass those of general AI assistants by a factor of ten. Character.AI continues to grow and gain interest, with reports this week that Musk’s xAI was considering acquiring the startup. For content creators and influencers, AI avatars offer a way to scale their online presence and engage with followers 24/7. However, this also raises difficult questions about authenticity and the nature of parasocial relationships.

The Harvard study suggests that whilst AIs cannot provide friendship in the same way as other humans, not all the relationships we find valuable are symmetrical. This perspective suggests that AI companions could help combat loneliness and isolation, particularly for those with limited social connections. However, critics like Sherry Turkle from MIT warn that forming relationships with [unreliable] machines could backfire, potentially leading to fewer secure relationships. There are also concerns about privacy and data collection, as users share personal information with these AI systems.

The development of AI companions also has implications for mental health and social services. While the Harvard study shows they may provide support and practice for social skills, they cannot yet replace professional help or a sense of human connection.

Takeaways: As AI companions develop, Meta’s AI Studio represents a way to explore both the creation of and interaction with these digital avatars. While the Harvard study provides some initial evidence for AI’s value in addressing loneliness, reliability and long-term availability are not yet a given. As this technology evolves, discussions about its societal impact will be needed. How can we harness the benefits of AI companionship in a world of increasing isolation, while preserving and encouraging human connection? That question will likely challenge us for years to come.

A model mind-reading toolkit

Back in May we wrote about Anthropic’s fascinating work on ‘mechanistic interpretability’, or understanding the representation of ideas or ‘features’ inside Claude 3 Sonnet. This week, Google released a groundbreaking toolkit called Gemma Scope (alongside a very impressive and tiny Gemma 2 2B model), making the exploration of the inner workings of LLMs available to external researchers.

At its core, Gemma Scope is a collection of what are called ‘sparse autoencoders’ that act like high-powered microscopes, allowing us to zoom in on the specific ‘neurons’ firing within the AI as it processes information. This toolkit doesn’t just offer a snapshot; it provides a detailed map of the model’s thought process, from initial input to final output. By helping us understand how models like Gemma ‘think’, Gemma Scope could let researchers improve model performance by identifying and enhancing key features, detect and mitigate biases more effectively, develop more targeted and efficient training methods, and ultimately create more trustworthy AI systems by providing clearer explanations of their decision-making processes.
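To make the idea concrete, here is a minimal sketch of a sparse autoencoder in PyTorch. It is illustrative only: the dimensions, the ReLU activation, and the L1 sparsity penalty are common choices assumed for this sketch rather than Gemma Scope’s actual recipe (Google’s release reportedly uses a JumpReLU variant trained at far larger scale):

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy sparse autoencoder over captured LLM activations.

    Illustrative only: sizes, ReLU, and the L1 penalty are assumptions
    for this sketch, not Gemma Scope's exact recipe.
    """

    def __init__(self, d_model: int = 2304, d_features: int = 16384):
        super().__init__()
        # The encoder maps dense activations into a much wider feature space...
        self.encoder = nn.Linear(d_model, d_features)
        # ...and the decoder reconstructs the original activations from it.
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations):
        # ReLU zeroes out negative pre-activations, so most features stay off.
        features = torch.relu(self.encoder(activations))
        return features, self.decoder(features)

sae = SparseAutoencoder()
acts = torch.randn(8, 2304)   # stand-in for activations captured from a model
features, recon = sae(acts)

# Train to reconstruct faithfully while keeping the feature vector sparse.
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()

The wide, mostly-zero feature vector is the payoff: each dimension tends to correspond to a single human-interpretable concept, whereas the raw activations pack many concepts into each neuron.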

Gemma Scope and tools like it for other popular AIs could revolutionise how we evaluate and monitor outputs. Current methods rely on simple tests and on other AIs’ assessments of confidence – a notoriously unreliable process. With Gemma Scope, we could instead analyse the internal patterns that led to a particular output. This could provide a much more accurate measure of the model’s true confidence and the robustness of its reasoning. Imagine an AI-powered medical diagnosis system. Instead of simply trusting the model’s prognosis, doctors could use Gemma Scope-like tools to get a report on which medical knowledge features were strongly activated internally during the process. This could help distinguish between diagnoses based on solid medical reasoning and those that are more speculative.
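As a sketch of how such a feature report might be produced, continuing from the autoencoder above and assuming human-curated labels exist for some features (the labels and the ranking approach below are invented purely for illustration):

# Continues the sparse autoencoder sketch above (reuses `sae` and `acts`).
# Invented labels: in practice researchers describe a feature by inspecting
# the inputs that activate it most strongly.
feature_labels = {42: "cardiology knowledge", 7: "speculative reasoning"}

token_features, _ = sae(acts)           # per-token feature activations
mean_activation = token_features.mean(dim=0)
top = torch.topk(mean_activation, k=5)  # strongest features for this output

for strength, idx in zip(top.values.tolist(), top.indices.tolist()):
    label = feature_labels.get(idx, f"feature {idx} (unlabelled)")
    print(f"{label}: mean activation {strength:.3f}")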

However, as with any powerful tool, Gemma Scope also raises some questions. How do we ensure that this deeper understanding of AI systems is used responsibly? Could bad actors use these insights to manipulate AI models more effectively? As we peer deeper into AI minds, we must also grapple with the ethical implications of this newfound transparency.

Takeaways: It’s crucial for businesses to stay informed about these interpretability breakthroughs. Organisations should be asking their AI technology and consulting partners how they plan to incorporate tools like Gemma Scope into their evaluation and development processes. This is particularly important in fields where explainability and reliability are paramount, such as healthcare, finance, and legal services. By embracing these new interpretability tools, businesses can not only improve their AI systems but also build greater trust with their customers and stakeholders in a world that is still often struggling to maximise the value from AI.


JOOST

UK stumbles in global AI race

This week, the UK government’s decision to shelve £1.3 billion in AI funding has spotlighted the contrasting approaches to AI strategy across the globe. As the UK grapples with budgetary constraints, the move highlights the urgent need for agility in both government and commercial sectors to keep pace with AI.

The Department for Science, Innovation and Technology’s (DSIT) announcement that it will withdraw funding for key AI projects, including an £800 million exascale supercomputer at Edinburgh University, marks a significant setback for the UK’s AI progress. This decision, driven by what DSIT calls “difficult and necessary spending decisions”, comes at a time when global AI competition is intensifying.

The impact of this decision has not gone unnoticed in the tech industry. Tech business founder Barney Hussey-Yeo warned on social media that reducing investment risked “pushing more entrepreneurs to the US.” This sentiment underscores the potential brain drain and loss of innovation that could result from such funding cuts.

Meanwhile, the EU has enacted its AI Act, focusing on regulation and ethical considerations. This positions the EU as a potential standard-setter for AI governance but raises questions about its ability to foster rapid innovation.

China, on the other hand, is pursuing a strategy focused on efficiency and application. Despite facing challenges such as limited access to advanced US-designed GPUs, Chinese companies are creating smaller, more efficient AI models. Hangzhou-based DeepSeek, for example, released DeepSeek-V2 this year, an open-weight LLM whose coding variant Meta used to generate synthetic data for its Llama 3.1 training process.

This pragmatic approach is yielding results in practical applications. As noted in the FT, “China spent 26 years producing its first 10 million EVs and only 17 months to produce the next 10 million. Roughly half of the cars sold in China this year are expected to be tablet-on-wheels smart cars.” This rapid progress demonstrates China’s ability to quickly commercialise and scale new technologies.

The global AI landscape is increasingly characterised by these divergent strategies. While the UK reassesses its approach, the EU’s regulatory focus aims to ensure ethical AI development. China’s emphasis on efficient scale presents a distinctly different path. Will the private sector take up the slack in the UK? Can the EU balance regulation with innovation to remain competitive? Will China be able to keep pace despite starting with a deficit in compute?

Sue Daley, the director of technology and innovation at techUK, emphasises the urgency of the situation: “In an extremely competitive global environment, the government needs to come forward with new proposals quickly. Otherwise, we will lose out against our peers.”

Takeaways: The global AI landscape is in flux, with major players adopting diverse strategies. For businesses and governments alike, agility is key. The UK and EU must act swiftly to avoid falling behind in the AI race, balancing regulation with innovation. Companies should prepare for a varied global AI ecosystem and start thinking now about where they can source the computation that will be vital to their futures, and how to navigate regulatory structures. The race is on, and no one can afford to be left behind.


EXO

Weekly news roundup

This week’s news highlights the continued growth and investment in AI across various sectors, increasing regulatory scrutiny, advancements in AI research, and significant developments in the AI hardware industry.

AI business news

AI governance news

AI research news

AI hardware news
