Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…
Themes this week
JOEL
Themes to take note of for the week ending 1st March:
- Long context and bio-modalities: ‘Life sciences’ is evolving into ‘life engineering’ with the release of a new ‘biological foundation model’ trained on DNA, RNA and proteins
- The challenge of alignment: Google’s value takes a $96bn hit after controversial responses from its new Gemini model trigger an online backlash
- The automation wave: Klarna’s new customer service bot does the work of 700 and aims to drive a $40m profit improvement in 2024
Last week we noted that from mid-Feb onwards the news from the big labs started to accelerate. This is likely to be the new normal, with so much research and investment having spun up in the last year. Look out for new unexpected AI use-cases, ongoing challenges with control, and the impact on jobs…
Life as code
This week sees the launch of the first major ‘biological foundation model’ based on an exciting new type of AI architecture, the state space model. The Arc Institute, in collaboration with TogetherAI, released Evo, a notable step forward in the new field of ‘life engineering’. Evo is trained on DNA, RNA and protein data, the building blocks of life; it can process long inputs (like Google’s Gemini 1.5) and can generate genome-scale outputs. The human genome is >3bn nucleotides long, so there is some way to go before we print out modified humans 🙂 but if we use the analogy of software development, this breakthrough enables engineers to operate on full ‘apps’ instead of just snippets of code, and could dramatically reduce the time needed to design, test and fabricate new biological components. From cell factories to drugs, vaccines, foods and fuels, biology is the latest AI ‘modality’.
Takeaway: Look out for AI progress crossing over from the digital to the physical, both through robotics advancements from companies like Figure AI (see roundup) and in the field of synthetic biology through Evo, Bioptimus and others. Time to start considering AI’s digital-physical opportunities (and also threats… companies will need an integrated strategy for cyber-physical-bio security). Also keep an eye on state space models. Gemini, GPTs and Claude are all ‘transformers’, but the state space architecture offers greater efficiency, longer inputs and, crucially, long-term memory. Whilst they’re not yet comparable with transformers on language, their time may be imminent.
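To make that contrast with transformers a little more concrete, here is a minimal sketch of the linear recurrence at the heart of a state space layer. Everything in it is a toy assumption for illustration (the dimensions, the random B and C matrices, the simple decaying A matrix); real models like Evo use learned, carefully structured versions at vastly larger scale. The point is that each step only updates a fixed-size hidden state, so compute grows linearly with sequence length and that state can carry information across very long inputs.

```python
import numpy as np

# Toy, illustrative parameters only (assumed sizes and random values),
# not the architecture of Evo or any production state space model.
d_state, d_input = 16, 8
A = np.eye(d_state) * 0.95              # state transition: slow decay keeps old information around
B = np.random.randn(d_state, d_input) * 0.1   # how each new input is written into the state
C = np.random.randn(d_input, d_state) * 0.1   # how the state is read out at each step

def ssm_scan(x):
    """Process a sequence one step at a time.

    Cost is linear in sequence length, and the fixed-size hidden
    state h acts as a running memory of everything seen so far.
    """
    h = np.zeros(d_state)
    outputs = []
    for x_t in x:                 # one token / nucleotide per step
        h = A @ h + B @ x_t       # update the persistent state
        outputs.append(C @ h)     # emit an output for this step
    return np.stack(outputs)

sequence = np.random.randn(1000, d_input)   # a long toy input sequence
print(ssm_scan(sequence).shape)             # (1000, 8)
```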
Embattled Google snatch defeat from the jaws of victory
Google’s rapid-fire launches seem to have caught up with them. First, complaints about Gemini’s image generation surfaced and the feature was disabled, due to what seems to have been a hurried implementation (with some instructions ‘hard-coded’ to generate diverse ethnicities no matter the context). But soon Gemini’s more complex language responses were being highlighted by those who saw a pattern of anti-white bias. In rushing to release Gemini, Google have clearly given themselves little time to refine their strategy for dealing with contentious issues. Whilst it’s impossible for models to stay neutral in all situations, especially given biases inherent in human-generated training data and human-led reinforcement learning processes, OpenAI have had time to develop a more effective gameplan. In these situations GPT-4 seems to prefer short, matter-of-fact responses, reducing the chance of being caught out on complex issues, and generally dials down the heavy-handed refusals. Gemini is prone to taking excessively complex standpoints, refusing tasks and delivering a lecture to the user for good measure. A red rag to those sensitive to culture war issues. The fallout from this episode may be behind the delay of Meta’s hotly anticipated Llama 3, now planned for a July launch; some suggest it has been found to be ‘too safe’ in testing and is being ‘loosened up’ as we speak!
Takeaway: The key point here is that aligning or steering an AI (or rather a giant seesawing stack of a trillion parameters) has moved forward in the last few years, but is never going to be an exact science, especially as models become ever larger. Just as we can’t align with every colleague’s world view, we’re going to have to collaborate with models that hold many viewpoints. Demis Hassabis, head of Google DeepMind, has been on a media round in recent days. He talks about AI with a level of clarity and experience few can match, but also intones confidently about the mechanisms in place to protect us all from rogue AI. Hopefully this episode will remind us all that the effort needed to manage these hugely complex creations should never be underestimated.
AI bot does the work of 700
In applied AI news, Klarna, the payments and shopping business, issued a press release with numbers indicating stellar performance from a new GPT-4 powered customer service bot over the first months of its deployment. What stood out was a line suggesting that the bot was now doing the work of 700 operatives, picking up two thirds of all support activity for its 150m customers, and working in 35 languages. Having experienced it myself, and based on anecdotal feedback, it seems very keen to escalate more difficult questions to a human colleague. Nonetheless, this is a notable signal that the process of automating jobs at scale is underway.
Takeaway: There is no doubt that customer service will see significant augmentation and automation in the coming year. The key point here is that however basic a bot’s capabilities are today, if it is well implemented it will improve significantly through ongoing experience and data provision. Where once a new system was deployed and would (hopefully) remain predictably consistent in operation, advanced AI systems will improve rapidly, often feeding material advances into the next wave of deployments, and so on.
Now over to Exo for a news roundup…
EXO
Today’s AI news reveals a landscape of rapid investment, ethical quandaries, transformative research, and the ever-present need to adapt in a world increasingly driven by artificial intelligence.
AI business news
- Microsoft made a $16M investment in Mistral AI as they released a new closed-source model to rival GPT-4, and a chat assistant (Highlights the potential for unexpected shifts in strategy, even from companies that were once strong advocates of open technology)
- The Who’s Who of AI just chipped in to fund humanoid robot startup Figure, who aim to replace millions of workers with humanoid automatons (The funding of Figure, a robotics startup, underscores the importance of considering the potential impact of advanced automation on the future of work.)
- All Singaporeans aged 40 and above get a £1750 monthly allowance to train on AI (Singapore’s AI training initiative emphasizes the value of investing in workforce upskilling to adapt to a rapidly evolving AI-driven landscape.)
- Apple Stock Near 4-Month Low As AI Questions Linger (Apple’s stock dip serves as a reminder to critically evaluate AI capabilities and limitations when considering dependencies on specific tech giants.)
- AI optimism sends Nasdaq to new post-Covid high (The Nasdaq surge due to AI optimism indicates the potential benefits of integrating AI into existing business models for a competitive advantage.)
AI governance news
- Elon Musk sues OpenAI over AI threat (Elon Musk escalates his concerns about the direction of AI development by suing OpenAI, alleging the company’s shift away from open-source technology poses a potential threat to society.)
- OpenAI sued, again, for scraping and replicating news stories (The lawsuit against OpenAI emphasizes the importance of careful consideration of data sourcing and intellectual property issues when utilizing AI models.)
- BEAST AI needs just a minute of GPU time to make an LLM fly off the rails (BEAST AI brings into sharp focus the crucial importance of thoroughly vetting AI systems for vulnerabilities before deployment, especially in high-risk scenarios.)
- US military pulls the trigger, uses AI to target air strikes (The US military using AI for targeting stresses the urgency of establishing clear ethical guidelines and oversight for AI use in sensitive applications.)
- EU turns to Big Tech to help deepfake-proof election (The EU’s partnership with Big Tech suggests the value of collaboration in developing tools to combat AI-generated misinformation.)
AI research news
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (Research into 1-bit LLMs hints at the potential for AI models that offer performance benefits with lower computational costs.)
- EMO: Emote Portrait Alive – Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions (EMO research spotlights the potential applications of cutting-edge AI developments in media and entertainment content creation.)
- Genie: Generative Interactive Environments (Empowers users to create and interact with diverse virtual worlds generated from text, images, or even sketches.)
- StructLM: Towards Building Generalist Models for Structured Knowledge Grounding (StructLM research indicates potential significant performance gains as AI’s ability to leverage structured data improves.)
- The FinBen: An Holistic Financial Benchmark for Large Language Models (The FinBen benchmark provides a useful tool for assessing AI tools specifically tailored for financial applications.)
AI hardware news
- Humane pushes Ai Pin ship date to mid-April (Humane’s delayed Ai Pin shipments remind us to anticipate potential supply chain or development delays when working with bleeding-edge AI hardware.)
- Mark Zuckerberg woos Big Tech in Asia to double down on AI chips (Zuckerberg’s Asian deal-making reveals shifting dynamics and competition in the AI chip landscape.)
- Dell promises ‘every PC is going to be an AI PC’ whether you like it or not (Dell’s AI focus suggests a future where pre-integrated AI tools will shape computing environments.)
- Generative AI ‘commonplace in cloud business models’ – as Azure leads the way (The rise of generative AI in cloud models points to the increasing importance of cloud solutions for optimal access to AI capabilities.)
- Meta looking to use exotic, custom CPU in its datacentres for machine learning and AI (Meta’s exploration of custom CPUs suggests that for peak AI performance, specialized hardware beyond mainstream options may be worth investigating.)