Welcome to our first weekly news post. Naturally at ExoBrain we use AI to help analyse the news. Our AI platform ‘Exo’ reads all of the key websites, newsletters, social media accounts and blogs so you don’t have to!
But we’re also committed to being transparent around what is AI generated and what is human insight. Below are some thoughts from one of ExoBrain’s co-founders, plus an AI generated summary of the latest stories…
Themes this week
JOEL
Here are 3 key themes from the week ending 23rd February.
2024 started relatively quietly, but this month is turning out to be the busiest in the history of AI.
- Google are fighting back hard with models and features that are a step-change in AI capability
- OpenAI’s video generation hints at the power of future AI and the opportunities and challenges that brings
- Groq shows that there could be much more to come on speeding up chips and running AI at lower cost
After a year of heavy investment in AI a busy February suggests that there is so much more to come, and every week from here on in is going to be fascinating…
Google are “so back”
Having led the world in AI research for years, Google found itself completely overtaken by OpenAI and Microsoft in 2023. But the last week has seen multiple announcements and some major steps towards Google’s stated 2024 goal of delivering “the world’s most advanced, safe and responsible AI”.
Days after Gemini 1.0 Ultra was made publicly available (farewell Bard), Google DeepMind dropped Gemini 1.5 Pro, sporting a radically bigger capacity to ingest material. 1.5 is gearing up to absorb as many as 10 million ‘tokens’, or around 15,000 pages of written content or several hours of video. Just a year ago the max was around 6 pages of content, so the progress here is quite staggering. What’s more, Gemini 1.5 can almost perfectly recall anything from these inputs and doesn’t suffer from the propensity to miss information that other models have.
Today we push snippets and short questions into our AI tools, and get useful nuggets out. Once widely available and replicated by other systems, multi-million word input ‘context windows’ will be a step-change. Imagine an AI able to read multiple pertinent books, watch hours of meeting recordings, review your team’s entire document output or chat threads, analyse the code of a whole application… all at the same time… in a single request… in a few seconds… We can’t wait to put this to work for our clients!
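For readers who like a back-of-envelope check on those numbers: the figures above imply roughly 650–700 tokens per printed page (an approximation only; real tokenizers vary with the content). A minimal sketch of that conversion, using an assumed 667 tokens per page:

```python
# Back-of-envelope conversion between tokens and pages of text.
# TOKENS_PER_PAGE is an assumption (~667 tokens per printed page);
# real tokenizer counts vary by language and formatting.
TOKENS_PER_PAGE = 667

def tokens_to_pages(tokens: int) -> int:
    """Approximate number of printed pages a token budget covers."""
    return round(tokens / TOKENS_PER_PAGE)

# A 10-million-token context window covers on the order of 15,000 pages,
# while an early ~4,000-token window covered only a handful of pages.
print(tokens_to_pages(10_000_000))  # → 14993
print(tokens_to_pages(4_000))       # → 6
```

The same arithmetic works in reverse: divide a document’s page count by the assumed pages-per-token ratio to see whether it fits in a given model’s window.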
Not content with just announcing Gemini 1.5, Google also dropped a pair of small, open-source Gemma models for anyone to download and use. Having tried their smallest model I can confirm it seems polished and capable and runs on a tiny fraction of the computing power needed for a similar model a year ago.
An interesting footnote to the context window progress is the arrival of two new startups, Magic AI in the US and Moonshot AI in China. They’ve raised over $1bn in recent days on the potential for multi-million-token context capabilities to help them tackle complex problem-solving and planning tasks. Rumours of an AI breakthrough at Magic abound… watch this space.
Fallout from Sora’s text-to-video
After OpenAI announced Sora, their stunning video generation model, last week, discussion this week focused on what it could mean beyond its obvious creative applications.
OpenAI have intimated that they are holding it back to help ‘society’ adjust, and to test the model from a security and misuse perspective. This model also suggests that throwing more compute (some suggest north of $1bn) at a problem yields dramatic results, and helps models build effective ‘world understanding’. To generate novel video footage of physical objects, the model needs to understand how entities move and interact in the physical world, and the demos indicate it has a strong grasp of physics and motion. This step forward is likely to have implications for the future… AIs that objectively understand how the world works, and can apply that understanding to varying problems, will be far more powerful than today’s models with their highly superficial grasp of reality.
Meanwhile Stability AI have trailed Stable Diffusion 3, which will offer more advanced text-to-image capabilities as would be expected, but also aims to provide an open-source platform for advanced text-to-image generation.
Groq who?
The last week has seen a promising entrance by a new chip company, Groq. Whilst Nvidia, AMD, Google and Intel’s AI chips have become the hottest commodities on earth (Nvidia recorded the biggest single-day jump in market cap in history this week), a small vendor has demonstrated that its new silicon can run AI models faster and more cheaply than the established competition, without reliance on scarce high-bandwidth memory. Prices for running AI ‘inference’ have already been falling through the floor and are set to fall further, assuming Groq can get their chips fabricated and to market in volume.
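Why does raw inference speed matter so much to the user experience? Decode throughput translates directly into how long someone waits for an answer to stream back. A minimal illustration of that relationship, using hypothetical tokens-per-second figures chosen purely for the example (not measured Groq or GPU numbers):

```python
# Illustrative latency arithmetic for streaming text generation.
# The throughput figures below are hypothetical, chosen only to show
# how decode speed translates into user-perceived wait time.
def generation_seconds(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to stream num_tokens at a given decode throughput."""
    return num_tokens / tokens_per_second

answer_tokens = 500  # roughly a long-ish chat answer

for rate in (50, 500):  # e.g. a slower vs a 10x faster inference stack
    wait = generation_seconds(answer_tokens, rate)
    print(f"{rate} tokens/sec -> {wait:.1f}s to finish the answer")
```

The same linear relationship applies to cost: at a fixed price per chip-hour, a 10x throughput gain cuts the cost per generated token by roughly the same factor, which is why faster inference silicon pushes prices down.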
Key takeaways
This is our first weekly post, so the thoughts here are somewhat expanded, and there’s a lot to catch up on…
A tech tsunami: The ChatGPT-inspired rush to AI from late 2022 has triggered a surge of investment, research, and software and hardware competition that this month has started to hit the mainstream. Much like a tsunami there’s a lot more to come, and it won’t stop for the foreseeable future; Apple, Amazon, and Meta are just getting going (these three alone spend more on R&D each year than all of the companies and governments in France, UK, Italy, Spain and the Netherlands combined).
Attenuated progress for now: Like computing waves before it, AI is seeing high-end launches combined with smaller, faster, cheaper evolution at scale. But there is still a hidden constraint… compute (a single factory in Taiwan makes all of the world’s AI chips). Think Nokias to iPhone 15s in 18 months, but being stuck on 1G networks with nowhere to charge your slab of titanium and glass. Sora and Gemini show that ‘laboratory’ models will become many times more powerful in the coming months, and Gemma demonstrates the efficiency curve, but the emergence of chips from the likes of Groq suggests the underlying compute problem can also be solved with AI-native hardware.
Outlook: As of this week, our timelines to AI systems that reach human equivalence (AGI) are shortening. Two recommendations continue to hold true…
- 80% of your AI time should be spent on experimenting with current state AI in its currently workable forms; combinations and novel uses will uncover many competitive advantages.
- 20% of your AI time should go to envisioning solutions that may seem near-impossible today, but could arrive much sooner than expected. Those who can capitalise quickly on step-changes will have an unprecedented strategic advantage.
Now over to Exo for a news roundup…
EXO
This week in AI, we’re seeing major advancements and important discussions shaping the future of this transformative technology. From business adoption to governance debates, and hardware innovation to cutting-edge research, there’s something here for everyone interested in the AI landscape.
AI business news
- Samsung is bringing Galaxy AI features to more devices (Wider availability of AI-powered features signals mainstream adoption and potential impact on user experience)
- Dili wants to automate due diligence with AI (Streamlining a complex business process with AI hints at wider efficiency gains and potential job displacement)
- Google strikes a deal with Reddit, reportedly for $60M/year, giving Google access to the Reddit Data API to surface more Reddit content and train AI models (Large-scale data deals highlight the importance of data in powering AI systems and potential shifts in search results)
- Analysis shows writing, customer service, and translation jobs declined the most on Upwork since the release of ChatGPT (Demonstrates the real-world impact of AI on specific job markets)
- Elon Musk hints at possible X partnership with Midjourney image generation (Potentially influential combination of tech titan and cutting-edge AI application)
AI governance news
- Google says it is working to immediately fix Gemini’s inaccuracies in some historical image generation depictions (Addresses the urgent problem of factual errors and biases in generative AI output)
- Hundreds of AI luminaries sign letter calling for anti-deepfake legislation (Highlights growing concern among AI experts about the potential misuse of deepfakes and a push for regulation)
- Tech giants sign voluntary pledge to fight election-related deepfakes (Indicates proactive measures by major tech players to combat AI-powered disinformation)
- Anthropic takes steps to prevent election misinformation (Shows a specific AI company taking responsibility to mitigate potential harms)
- No ‘GPT’ trademark for OpenAI (Implications for intellectual property and how AI model names might become more distinctive)
AI research news
- Cohere for AI launches open source LLM for 101 languages (Expands accessible AI development resources with a focus on multilingual capabilities)
- Gemma Open Models: a family of lightweight, state-of-the-art open models built from the same research and technology used to create Gemini (Opens the door for smaller companies and individuals to build cutting-edge AI applications with less computational overhead)
- Meta’s V-JEPA: Meta AI released the code for V-JEPA, a non-generative model capable of predicting missing parts of videos using an abstract representation space (Advancements in video understanding and manipulation could lead to new creative and security applications)
- FinTral: A Family of GPT-4 Level Multimodal Financial Large Language Models (Specialized AI models for a complex industry show progression toward domain-specific AI solutions)
- LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens (Research pushes the boundaries of AI memory and processing abilities, potentially leading to more sophisticated AI interactions)
AI hardware news
- Forget ChatGPT – Groq is the new AI platform to beat with blistering computation speed (Could lead to significant breakthroughs in AI model training and real-time applications)
- Microsoft is developing a server networking card to boost the performance of AI chips (Dedicated hardware for AI highlights the need for specialized computing solutions)
- AI is going to need a global investment, just maybe not $7T, says OpenAI CEO Sam Altman (While the specific $7T figure may be disputed, it highlights the need for substantial investment from both governments and the private sector)
- Nvidia revenue grows 265 percent (Financial success signals strong demand and industry confidence in AI-focused hardware)
- SoftBank’s Masayoshi Son is reportedly seeking $100B to build a new AI chip venture (Massive investment aims to create a major player in the AI hardware landscape)