Week 17 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week:

  • When will AI be frictionless and ambient?
  • AWS to be the “Bedrock” for enterprise AI, while Microsoft and Google beat earnings estimates on cloud service growth
  • Mixed market signals as Meta spooks investors

AI at the mobile ‘edge’

In recent years, new phone releases have become relatively low-key affairs, each one serving up another slab of black glass with an extra camera lens or an hour more of battery life. But AI features and new shapes and sizes of device are rejuvenating mobile computing.

Microsoft this week unveiled the Phi-3 family of models, including the compact Phi-3 Mini. It’s small enough to run on a mid-range phone, and yet appears to compete with the larger Llama 3, Mistral, and even OpenAI models across various benchmarks. The trick is in the ultra-high quality curated dataset used to train it. Beyond the training approach, this is significant because it moves us toward realising the idea of ‘ambient intelligence,’ where AI more seamlessly integrates into our lives, transforming our mobile computers from communication and entertainment tools to intelligent companions.

The concept of ambient intelligence envisions a world where AI is omnipresent, ‘always-on,’ continuously recording, and instantly responsive. AI is gradually breaking free from its chat-window boundaries as companies like Samsung and Motorola unveil mobile devices with built-in capabilities. Today these advancements enable enhanced features such as translation, smart search, and creative photography, processed both on-device and in the cloud. McKinsey predicts that by the end of the year as many as 50% of the interactions we have with our phones will be AI-augmented. By processing AI workloads fully on the device, users will enjoy faster, more private, and offline-capable experiences, while in the future benefiting from more intelligent personalisation.

Samsung has confirmed that the upcoming Galaxy S25 smartphones will feature more on-device AI capabilities powered by Google’s Gemini Nano 2. Meanwhile Microsoft is making overtures to the Korean giant and they may team up to work on the next generation of mobile AI silicon.

But most eyes are on Apple’s WWDC conference in June, to find out how the firm plans to reclaim its mojo. Apple has been investing heavily in model research and in serving the developer ecosystem, and is also making acquisitions, this week picking up a firm specialising in on-device vision processing. But iOS 18 is going to be the make-or-break event. While Samsung uses a mix of on-device and cloud-powered AI, Apple plans to rely more on its own local LLM, although intriguingly it has also been linked with both Google’s and Anthropic’s tech.

Meanwhile, news this week suggested that Apple is scaling back its work on the Vision Pro VR system; demand and compelling use cases have been lacking. This kind of super-immersive experience is perhaps the antithesis of ambient computing, and it seems for now humans prefer to balance external awareness with virtual interaction for most tasks. We’ve had ambient computing of a sort with Alexa and other voice assistants for a while, but they have lacked the intelligence to be subtle or proactive, generally causing annoyance when they stray from simple commands. Amazon demonstrated an LLM-powered Alexa in September, but it has yet to see wide deployment.

It is not certain that the ‘phone’ as we know it will always be the dominant personal platform. New AI-powered wearables are emerging that may deliver ambient intelligence in more seamless ways. Whilst the Humane Pin has received catastrophically bad reviews, the Llama 3-powered multi-modal upgrade to the Ray-Ban Meta smart glasses is getting much more positive press, as is the odd but apparently fun Rabbit R1. We’ve tried the Ray-Bans, and they provide a practical, real-world solution for more continuous audio and visual capture and analysis, although they won’t be AI-enabled in the UK until later in the year. Ambient AI is all about simplicity, and at ExoBrain we can’t wait for our pre-ordered Limitless pendants to ship. Meanwhile we’ve been testing the Llama models on the Android platform, and the reality is that these larger models are not quite efficient enough yet to work on-device, but that will soon change. With faster chips and smaller, smarter models, the next device generation is shaping up to be a fascinating step-change.

Takeaways: Unless you have a very recent phone, mobile AI is still mostly app and web based. ChatGPT, Perplexity search, Copilot and Poe are the mainstays on the Apple and Google app stores, with user star ratings and downloads roughly in that order (notable by their absence are Claude and Gemini). There is also an increasing number of character-centric AI apps appearing in the mobile charts alongside the mapping, writing, creative and generation tools. This is a space we will no doubt cover in the future as our mobile devices act not only as practical helpers, but perhaps also as social or psychological companions.

Update on the hyperscalers: AWS, Azure and GCP

ExoBrain was at the (Amazon) AWS Summit in London this week, looking at the biggest of the cloud service ‘hyperscalers’. The main focus: its AI hosting service, Bedrock. It’s coming to the London region and is also getting several significant updates, including custom model imports, a new evaluation feature to help quickly compare and select the best models for various use cases, and guardrails to provide enhanced safety and privacy. Additionally, AWS is offering new Titan models exclusively on Bedrock, along with the latest models from Anthropic, Cohere and Meta. Our sense was that Amazon continues to focus on the base of the infrastructure stack, with rich data, security, engineering and management tools and a large partner ecosystem.

The summit keynote also pushed Amazon Q, an assistant designed to help developers at every stage of the development lifecycle. Amazon is emphasising this more heavily than Microsoft and Google do their equivalent tools, and it could open a new area of competition in which designing, constructing, and managing increasingly complex solutions is automated, and in the longer term even self-generating.

No assessment of cloud services is complete without considering the other two giants of hyperscale. Google recently announced new Vertex AI capabilities, including Agent Builder, Gemini 1.5 Pro, and the addition of open-source language models. Microsoft continues to enhance Azure AI Studio, adding new security tools like Prompt Shields and groundedness detection, and integrating Cohere’s Command R models. All of the big CSPs are supporting multi-model solutions and adding AI testing, governance, and security features. Google is making the first moves to make agent building a native part of the stack, with Microsoft set to follow with a rumoured announcement planned for its Build conference in May.

Google, Microsoft, and AWS are adopting increasingly distinct strategies in their approach to the AI revolution. Google is focusing more on niche ML services and its own models, leveraging historic expertise and huge TPU compute resources. This strategy allows Google to maintain control over its core technologies while still offering flexibility to customers, and feeds into the productivity space through its Workspace suite, although to date Workspace has seen little AI enhancement, perhaps reflecting the sheer number of fronts Google is trying to operate across.

On the other hand, Microsoft is taking a total-domination approach, aiming to integrate AI into every aspect of its business. By leveraging its existing strengths in automation, analytics, enterprise software, communications, and productivity, Microsoft is seeking to embed AI capabilities across its entire product portfolio. It’s leading the pack with user tools like Copilot across every conceivable aspect of its platform. This strategy could give Microsoft a significant advantage in reaching and transforming various parts of the business world, as it already has a strong presence and customer base in these areas. Likewise, it is seeking to dominate the AI model world with its financial clout, bankrolling OpenAI, hoovering up Inflection, and putting the competition regulators on high alert.

AWS is focusing on providing a solid foundation for AI at the base of the stack, whilst ensuring it has a stake in the AI models race with its investment in Anthropic. AWS offers customers a wide range of models and tools to build and deploy complex AI solutions. This base-of-stack approach allows AWS to serve as the underlying infrastructure for advanced AI development, regardless of the specific applications or industries involved. Amazon has made some targeted moves to integrate AI into its e-commerce empire, with strong adoption of its intelligent seller tools. But for now, it is content to stay somewhat in the AI background.

As Nvidia’s revenue from AI chips soars and it expands its offerings to include boards, systems, software, and services, it could potentially emerge as a formidable competitor to the CSPs themselves. The CSPs find themselves increasingly reliant on Nvidia’s chips and server tech, which could limit their flexibility and bargaining power. Hence the CSPs are working on a range of custom hardware and silicon options to reduce their dependence on Nvidia. CSPs are also investing heavily in new data centre locations to meet the growing demand for cloud services and to provide low-latency access to their AI capabilities. Saudi Arabia, in particular, has emerged as an attractive destination for the likes of Microsoft due to its strategic location, ambitious plans to diversify its economy post-oil, and focus on AI development. However, the global chip shortage has posed challenges in procuring the necessary hardware, accelerating custom chip development programmes. Additionally, the availability of suitable land for data centre construction and the capacity to generate and connect sufficient electricity supplies to these facilities are critical factors. As a result, providers are exploring innovative solutions, such as nuclear, renewable, and fusion energy.

Meanwhile the UK Competition and Markets Authority has just announced it’s looking at the partnerships between Microsoft and Mistral AI, Amazon and Anthropic, and Microsoft’s hiring of former employees of Inflection AI. This scrutiny adds to the ongoing investigation of Microsoft’s partnership with OpenAI. If the CMA decides to launch formal investigations, it could lead to delays in the launch of new AI services or features in the UK market.

Google and Microsoft released their earnings on Thursday, both beating estimates. Microsoft’s AI-everywhere strategy seems to be working, with third-quarter revenues exceeding expectations, growing 17% to $61.9 billion, and its Intelligent Cloud unit reaching $26.7 billion in revenue. Alphabet (Google’s parent company) surpassed first-quarter revenue expectations with $80.54 billion, while its cloud services saw a 28% increase. Amazon’s earnings will be reported next week, with double-digit growth expected, although below Google’s and Microsoft’s. The full impact of generative AI on the cloud market is expected to be realised from 2025 onwards, and the battle for AI cloud leadership will intensify. Expect more focus on agents, synthetic data tools, AI safety and governance, and ever deeper AI integration across the big three’s CSP platforms.
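As a quick sanity check on the figures above, 17% growth to $61.9 billion implies a prior-year quarter of roughly $52.9 billion; a minimal back-of-the-envelope sketch:

```python
# Implied prior-year quarterly revenue from Microsoft's reported 17% growth.
revenue_now = 61.9            # $ billions, this quarter (reported)
growth = 0.17                 # 17% year-on-year growth (reported)

revenue_prior = revenue_now / (1 + growth)
print(f"Implied prior-year quarter: ${revenue_prior:.1f}bn")  # ≈ $52.9bn
```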

Takeaways: The big three are also a significant source of greenhouse gas emissions, accounting for a sizeable share of the 2-4% of global emissions contributed by the world’s computing infrastructure. Carbon reporting protocols divide emissions into several buckets: Scope 1 covers direct emissions, Scope 2 indirect emissions from purchased energy, and Scope 3 emissions from the supply chain, such as cloud service providers and AI compute. More than 70% of a typical organisation’s emissions are Scope 3, yet it is widely regarded as the invisible part of the carbon equation. The big three provide tools to help you calculate your emissions: check out the Emissions Impact Dashboard for Azure, the Carbon Footprint console for Google, and AWS Carbon Footprint Reporting. In terms of their climate goals, Microsoft has pledged to become a “carbon negative, water positive, zero waste company” by 2030, and Amazon is “on a path to powering operations with 100% renewable energy by 2025” and reaching net-zero emissions by 2040. Google has perhaps the most ambitious goal: to be “net-zero emissions and 24/7 carbon-free energy” by 2030.
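To make the scope split concrete, here is a minimal sketch with entirely hypothetical emissions figures, showing how Scope 3 can dominate an organisation’s total footprint:

```python
# Illustrative (hypothetical) emissions figures in tonnes CO2e,
# showing how Scope 3 typically dominates the total.
scope_1 = 500      # direct emissions, e.g. company vehicles and boilers
scope_2 = 1_500    # indirect emissions from purchased electricity
scope_3 = 8_000    # supply chain, including cloud services and AI compute

total = scope_1 + scope_2 + scope_3
scope_3_share = scope_3 / total

print(f"Total: {total} tCO2e, Scope 3 share: {scope_3_share:.0%}")  # Scope 3 = 80%
```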


A tale of two cities

This week, the AI industry experienced a mix of signals as major players saw varying fortunes from their investments and strategies. BCG, Microsoft, and Google demonstrated positive results, while Meta faced investor concerns over its AI expenditure.

BCG announced that it expects 20% of its 2024 revenue to be AI-related (2023 revenue was $12.3 billion), signalling a positive shift in the world of strategy consulting. The firm is reimagining how it delivers its services, leveraging AI to transform the way it advises clients. But is this a sustainable strategy, or will strategy consultants eventually need to invest more heavily in their own AI capabilities to stay competitive?
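Assuming, purely hypothetically, that 2024 revenue stays near the 2023 level, that 20% would equate to roughly $2.5 billion of AI-related work:

```python
# Back-of-the-envelope estimate of BCG's AI-related revenue.
# Assumption (hypothetical): 2024 revenue roughly flat at the 2023 level.
revenue_2023 = 12.3   # $ billions (reported)
ai_share = 0.20       # 20% expected to be AI-related (reported)

implied_ai_revenue = revenue_2023 * ai_share
print(f"Implied AI-related revenue: ${implied_ai_revenue:.1f}bn")  # ≈ $2.5bn
```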

As the Stanford report revealed last week, staying at the frontier of AI research is becoming increasingly expensive. Training costs for cutting-edge models like GPT-4 and Gemini Ultra are estimated to be in the hundreds of millions of dollars. This has effectively priced out organisations such as universities, once the centres of groundbreaking AI research, from developing their own frontier foundation models. Policy initiatives, such as President Biden’s Executive Order on AI, aim to level the playing field by providing non-industry actors with the resources needed to conduct high-level AI research. However, these efforts are unlikely to be enough to counterbalance the immense resources of the tech giants.

Meta’s $40 billion capital expenditure on AI, with expectations of spending $100 billion in 2024, spooked investors and wiped out $120 billion in company value in a single day this week. While the scale of Meta’s investment is undoubtedly massive, it’s worth noting that the company has poured $46 billion into the metaverse over the last three years, and Amazon invested over $20 billion in Alexa. These bets, however, seem more binary in nature compared to the broader potential of AI.

Microsoft, on the other hand, has taken a different approach by partnering with OpenAI. This strategy seems to have gone down better with investors, as evidenced by Microsoft’s position as the world’s most valuable listed company, with a valuation of nearly $3 trillion and a share price climbing on the back of stellar cloud earnings last quarter. The symbiotic relationship between Microsoft and OpenAI showcases the potential for strategic partnerships in the AI space, allowing companies to share the risks and rewards of cutting-edge research. Google also signalled last week that it would be spending $100 billion on AI in the coming years, while direct AI revenues from its cloud services send Alphabet’s share price upward.

Takeaways: The AI industry is at a critical juncture, with fortunes being made and lost in the blink of an eye. Companies must carefully consider their investment strategies and partnerships to navigate this rapidly evolving landscape. Investor sentiment remains jumpy, but the opportunity presented by AI is too big to ignore, and talk of a ‘bubble bursting’ is premature. As the dust settles on another busy week, Amara’s law is as relevant as ever: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. The cost of AI feels out of step with the immediate benefits for some, but the longer-running growth signals suggest AI’s vast potential.



This week’s news highlights the continued rapid progress and adoption of AI across industries, along with the growing challenges around governance, security, and responsible deployment.

AI business news

AI governance news

AI research news

AI hardware news
