Week 9 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

Themes to take note of for the week ending 1st March:

  • Long context and bio-modalities: ‘Life sciences’ is evolving into ‘life engineering’ with the release of a new ‘biological foundation model’ trained on DNA, RNA and proteins
  • The challenge of alignment: Google’s value takes a $96bn hit after controversial responses from its new Gemini model trigger an online backlash
  • The automation wave: Klarna’s new customer service bot does the work of 700 and aims to drive a $40m profit improvement in 2024  

Last week we noted that from mid-Feb onwards the news from the big labs started to accelerate. This is likely to be the new normal, with so much research and investment having spun up in the last year. Look out for new unexpected AI use-cases, ongoing challenges with control, and the impact on jobs…

Life as code

This week sees the launch of the first major ‘biological foundation model’ based on an exciting new type of AI architecture, the state space model. The Arc Institute, in collaboration with TogetherAI, released Evo, a notable step forward in the new field of ‘life engineering’. Evo is trained on DNA, RNA and protein data, the building blocks of life; it can process long inputs (like Google’s Gemini 1.5) and can generate genome-scale outputs. The human genome is >3bn nucleotides long, so there is some way to go before we print out modified humans 🙂. But if we use the analogy of software development, this breakthrough enables engineers to operate on full ‘apps’ instead of just snippets of code, and could dramatically reduce the time needed to design, test and fabricate new biological components. From cell factories to drugs, vaccines, foods and fuels, biology is the latest AI ‘modality’.

Takeaway: Look out for AI progress crossing over from the digital to the physical, both through robotics advancements from companies like Figure AI (see roundup) and in the field of synthetic biology through Evo, Bioptimus and others. Time to start considering AI’s digital-physical opportunities (and also threats… companies will need an integrated strategy for cyber-physical-bio security). Also keep an eye on state space models. Gemini, GPTs and Claude are all ‘transformers’, but the state space architecture offers greater efficiency, longer inputs and, crucially, long-term memory. Whilst they’re not yet a match for transformers on language, their time may be imminent.
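
For the curious, here is a minimal sketch of the linear recurrence at the heart of a state space layer, in Python with made-up toy dimensions. Real models like Evo use far more sophisticated learned, input-dependent variants, but even this sketch shows why the memory is a fixed-size state and why the per-step cost stays constant however long the input grows.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Toy linear state space recurrence: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.

    The fixed-size hidden state h acts as a compressed long-term memory of
    everything seen so far, and each step costs the same regardless of how
    long the sequence is (unlike attention, which compares every pair of tokens).
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                 # one step per input token / nucleotide
        h = A @ h + B @ x_t       # update the memory
        ys.append(C @ h)          # read out an output
    return np.stack(ys)

# Toy usage: a sequence of 10 one-hot encoded 'nucleotides' (4 symbols).
rng = np.random.default_rng(0)
x = np.eye(4)[rng.integers(0, 4, size=10)]
A = 0.9 * np.eye(8)               # mild decay keeps the memory stable
B = rng.normal(size=(8, 4)) * 0.1
C = rng.normal(size=(2, 8)) * 0.1
print(ssm_scan(x, A, B, C).shape)  # (10, 2)
```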

Embattled Google snatch defeat from the jaws of victory

Google’s rapid-fire launches seem to have caught up with them. First, complaints about Gemini’s image generation surfaced and the feature was disabled; the implementation appears to have been hurried, with instructions ‘hard-coded’ to generate diverse ethnicities no matter the context. But soon Gemini’s more complex language responses were being highlighted by those who saw a pattern of anti-white bias. Rushing to release Gemini, Google have clearly given themselves little time to refine their strategy for dealing with contentious issues. Whilst it’s impossible for models to stay neutral in all situations, especially given biases inherent in human-generated training data and human-led reinforcement learning processes, OpenAI have had time to develop a more effective gameplan. In these situations GPT-4 seems to prefer short, matter-of-fact responses, reducing the chance of being caught out on complex issues, and generally dials down the heavy-handed refusals. Gemini is prone to taking excessively complex standpoints, refusing tasks and delivering a lecture to the user for good measure. A red rag to those sensitive to culture war issues. The fallout from this episode may also be behind the reported delay of Meta’s hotly anticipated Llama 3 to a July launch; some suggest it has been found to be ‘too safe’ in testing and is being ‘loosened up’ as we speak!

Takeaway: The key point here is that aligning or steering an AI (or rather a giant seesawing stack of a trillion parameters) has moved forward in the last few years, but is never going to be an exact science, especially as models become ever larger. We will need to accept that, just as we can’t align with every colleague’s world view, we’re going to have to collaborate with models that hold many viewpoints. Demis Hassabis, head of Google DeepMind, has been on a media round in recent days. He talks about AI with a level of clarity and experience few can match, but also intones confidently about the mechanisms in place to protect us all from rogue AI. Hopefully this chapter will remind us all that the effort needed to manage these hugely complex creations should never be underestimated.

AI bot does the work of 700

In applied AI news, Klarna, the payments and shopping business, issued a press release with numbers indicating stellar performance from a new GPT-4 powered customer service bot over the first months of its deployment. What stood out was a line suggesting the bot is now doing the work of 700 operatives, picking up two thirds of all support activity for its 150m customers, and working in 35 languages. Having experienced it myself, and based on anecdotal feedback, it seems very keen to escalate more difficult questions to a human colleague. Nonetheless, this is a notable signal that the process of automating jobs at scale is underway.

Takeaway: There is no doubt that customer service will see significant augmentation and automation in the coming year. The key point here is that however basic a bot’s capabilities are today, if it is well implemented it will improve significantly through ongoing experience and data provision. Where once a new system would be deployed and (hopefully) remain predictably consistent in operation, advanced AI systems will improve rapidly, often feeding material advances into the next wave of deployments, and so on.

Now over to Exo for a news roundup…

EXO

Today’s AI news reveals a landscape of rapid investment, ethical quandaries, transformative research, and the ever-present need to adapt in a world increasingly driven by artificial intelligence.

AI business news

AI governance news

AI research news

AI hardware news
