Week 8 news

Welcome to our first weekly news post. Naturally at ExoBrain we use AI to help analyse the news. Our AI platform ‘Exo’ reads all of the key websites, newsletters, social media accounts and blogs so you don’t have to!

But we’re also committed to being transparent about what is AI-generated and what is human insight.
Below are some thoughts from one of ExoBrain’s co-founders, plus an AI generated summary of the latest stories…

Themes this week

JOEL

Here are the three key themes from the week ending 23rd February.

2024 started relatively quietly, but this month is turning out to be the busiest in the history of AI.

  • Google are fighting back hard with models and features that are a step-change in AI capability
  • OpenAI’s video generation hints at the power of future AI and the opportunities and challenges that brings
  • Groq shows that there could be much more to come in speeding up chips and running AI at lower cost

After a year of heavy investment in AI, a busy February suggests that there is so much more to come, and every week from here on in is going to be fascinating…

Google are “so back”

Having led the world in AI research for years, Google found itself completely overtaken by OpenAI and Microsoft in 2023. But the last week has seen multiple announcements and some major steps towards Google’s stated 2024 goal of delivering “the world’s most advanced, safe and responsible AI”.

Days after Gemini 1.0 Ultra was made publicly available (farewell Bard), Google DeepMind dropped Gemini 1.5 Pro, sporting a radically bigger capacity to ingest material. 1.5 is gearing up to absorb as many as 10 million ‘tokens’, or around 15,000 pages of written content or several hours of video. Just a year ago the maximum was around 6 pages of content, so the progress here is quite staggering. What’s more, Gemini 1.5 can almost perfectly recall anything from these inputs and doesn’t suffer from the propensity to miss information that other models have.

Today we push snippets and short questions into our AI tools, and get useful nuggets out. Once widely available and replicated by other systems, multi-million word input ‘context windows’ will be a step-change. Imagine an AI able to read multiple pertinent books, watch hours of meeting recordings, review your team’s entire document output or chat threads, analyse the code of a whole application… all at the same time… in a single request… in a few seconds… We can’t wait to put this to work for our clients!
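To make that a little more concrete, here’s a minimal sketch of what a single long-context request might look like in Python, assuming Google’s generativeai SDK and a long-context Gemini model; the model id, file names and prompt are illustrative assumptions on our part, not a recipe from Google.

```python
# Minimal sketch only: assumes the google-generativeai SDK is installed and
# that you have API access to a long-context Gemini model. The model id,
# file names and prompt below are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Gather a large pile of material -- notes, docs, code -- as plain text.
corpus = []
for path in ["meeting_notes.txt", "design_doc.md", "app_source.py"]:  # hypothetical files
    with open(path, encoding="utf-8") as f:
        corpus.append(f.read())

model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed long-context model id

# Everything goes into one request; the model attends over the whole context window.
response = model.generate_content(
    corpus + ["Summarise the open decisions and risks across all of this material."]
)
print(response.text)
```

The interesting part isn’t the code, it’s that the whole corpus travels in one call rather than being chunked up and stitched back together afterwards.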

Not content with just announcing Gemini 1.5, Google also dropped a pair of small, open-source Gemma models for anyone to download and use. Having tried their smallest model, I can confirm it seems polished and capable, and runs on a tiny fraction of the computing power needed for a similar model a year ago.
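If you’d like to try the same experiment, the rough sketch below shows one way to do it, assuming the Hugging Face transformers library and the ‘google/gemma-2b-it’ model id on the Hub; treat the details as assumptions rather than official instructions.

```python
# Rough sketch: assumes the transformers and torch libraries, a Hugging Face
# account that has accepted the Gemma licence (log in via `huggingface-cli login`),
# and the "google/gemma-2b-it" model id -- all assumptions, not official guidance.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "In one sentence, what is a context window?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```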

An interesting footnote to the context-window progress comes from two new startups, Magic AI in the US and Moonshot AI in China. They’ve raised over $1bn between them in recent days on the potential for multi-million token context capabilities to help them tackle complex problem solving and planning tasks. Rumours of an AI breakthrough at Magic abound… watch this space.

Fallout from Sora’s text-to-video

OpenAI announced Sora, their stunning video generation model, last week; this week, discussion focused on what it could mean beyond its obvious creative applications.

OpenAI have intimated that they are holding it back to help ‘society’ adjust, and to test the model from a security and misuse perspective. The model also suggests that throwing more compute (some suggest north of $1bn) at a problem yields dramatic results, and helps models build effective ‘world understanding’. To generate novel video footage of physical objects, the model needs to understand how entities move and interact in the physical world, and the demos indicate it has a strong grasp of physics and motion. This step forward is likely to have implications for the future… AIs that objectively understand how the world works, and can apply that understanding to varied problems, will be far more powerful than today’s models, which have a highly superficial grasp of reality.

Meanwhile, Stability AI have trailed Stable Diffusion 3, which will offer more advanced text-to-image capabilities as would be expected, but which also aims to provide an open-source platform for advanced text-to-image generation.

Groq who?

The last week has seen a promising entrance from a new chip company, Groq. Whilst Nvidia, AMD, Google and Intel’s AI chips have become the hottest commodities on earth (Nvidia recorded the biggest single-day jump in market cap in history this week), a small vendor has demonstrated that its new silicon can run AI models faster and more cheaply than the established competition, without relying on scarce high-bandwidth memory. Prices for running AI ‘inference’ have already been falling through the floor, and are set to fall further, assuming Groq can get their chips fabricated and to market in volume.
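From a developer’s point of view, faster and cheaper inference just means quicker, cheaper API calls. The sketch below shows roughly what that looks like against Groq’s hosted service, assuming their Python client and an API key; the model name is an assumption and the line-up will no doubt change.

```python
# Hedged sketch: assumes the groq Python client and an API key for Groq's cloud service.
# The model name is an assumption; hosted models change over time.
import time
from groq import Groq

client = Groq(api_key="YOUR_GROQ_API_KEY")

start = time.time()
completion = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # assumed open model hosted on Groq
    messages=[{"role": "user", "content": "Summarise this week's AI news in three bullet points."}],
)
elapsed = time.time() - start

print(completion.choices[0].message.content)
print(f"Round trip took {elapsed:.2f}s")
```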

Key takeaways

This is our first weekly post, so the thoughts here are somewhat expanded, and there’s a lot to catch up on…

A tech tsunami: The ChatGPT-inspired rush to AI from late 2022 has triggered a surge of investment, research, and software and hardware competition that this month has started to hit the mainstream. Much like a tsunami, there’s a lot more to come, and it won’t stop for the foreseeable future; Apple, Amazon, and Meta are just getting going (these three alone spend more on R&D each year than all of the companies and governments in France, the UK, Italy, Spain and the Netherlands combined).

Attenuated progress for now: Like computing waves before it, AI is seeing high-end launches combined with smaller, faster, cheaper evolution at scale. But there is still a hidden constraint… compute (a single factory in Taiwan makes all of the world’s AI chips). Think Nokias to iPhone 15s in 18 months, but stuck on 1G networks with nowhere to charge your slab of titanium and glass. Sora and Gemini show that ‘laboratory’ models will become many times more powerful in the coming months, and Gemma demonstrates the efficiency curve, but the emergence of chips from the likes of Groq suggests the underlying compute problem can also be solved with AI-native hardware.

Outlook: As of this week, our timelines to AI systems that reach human equivalence (AGI) are shortening. Two recommendations continue to hold true…

  1. 80% of your AI time should be spent experimenting with current-state AI in its workable forms; combinations and novel uses will uncover many competitive advantages.
  2. 20% of your AI time should go to envisioning solutions that may seem near-impossible today, but could arrive much sooner than expected. Those who can capitalise quickly on step-changes will have an unprecedented strategic advantage.

Now over to Exo for a news roundup…


EXO

This week in AI, we’re seeing major advancements and important discussions shaping the future of this transformative technology. From business adoption to governance debates, and hardware innovation to cutting-edge research, there’s something here for everyone interested in the AI landscape.

AI business news

AI governance news

AI research news

AI hardware news
