Week 14 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

This week we assess the significant quantum computing news in the context of AI, explore Google’s travails, and check in on AI safety post-Bletchley Park:

  • Microsoft announces a breakthrough in quantum computing that has significant ramifications for AI as well as information security
  • Google feels the pressure of disruption and the cost of compute, and may start to charge for premium AI search
  • The EU, UK and US have formed partnerships to work together on ‘frontier’ model safety; the only problem is, nobody actually knows how these models work

A true quantum leap

At ExoBrain we’re obsessed with helping companies unlock the power of AI, but in many ways that really means helping them unlock the power of ‘compute’. As we’ve discussed before, it has been the driving force behind every wave of technological progress in our lifetimes, and its cost, accessibility and geopolitical distribution may decide our future.

For thousands of years analogue computers, such as the fascinating Antikythera mechanism, helped us unlock the concepts of time and the motion of the planets. Since Alan Turing and the cracking of the Enigma code, digital computers have dominated, but this week news emerged from Microsoft and hardware vendor Quantinuum suggesting the next era is nearing. They announced a breakthrough in the reliability of quantum computing, reducing the error rates of their virtual or ‘logical’ qubits, the building blocks of a quantum computer. Previous research suggested that such error-correction strategies could require thousands of expensive physical qubits per logical qubit; Microsoft’s research indicates we may need 100x fewer. Notable progress is also being made on single-atom qubits, which replace superconductors that need to be cooled to a rather chilly -273°C.

Quantum computing (QC) is not new, but it has been held back by very low reliability and by the expensive, impractical nature of the hardware. These 2024 developments suggest things are about to change: over the next few years, as digital computing skyrockets with Nvidia’s Blackwell, we’re also likely to see quantum computing become widely available.

But if you’re reading this, you’re probably thinking: what does this mean for AI? Firstly, there is an interesting flywheel developing between the use of AI to accelerate the design and development of QC, and the strengths of the quantum paradigm in AI. QC research is inspiring novel AI architectures, and as the cost of QC falls, it can start to supercharge scientific discovery, new algorithms, data and memory capacities, massive parallelism and more. But we should be aware that whilst there is a clear playbook for scaling digital AI over the next few years, there is no equivalent vision for quantum AI. Its cost and complexity to date have limited exploration, and the engineering and research ecosystem is under-developed. This week’s news is likely to spur development, and we expect to see hybrid classical-plus-quantum approaches emerging in the next few years. By the end of the decade this marriage should allow AI systems to exploit digital computing where it excels, such as data processing, control flow, user experiences, and integration with older software, while leveraging quantum strengths for the messy, chaotic real world: complex algorithms, simulations, and higher-dimensional data.

Naturally there’s a flip side. A recent update to Apple’s iMessage, for example, introduced highly advanced ‘post-quantum cryptography’. QC capabilities that go beyond the hundreds of qubits we see today to the thousands and millions of reliable, error-corrected qubits, likely around the end of the decade, will break most current encryption standards with ease. State actors are already harvesting data with a steal-now, decrypt-later mindset. The information integrity and cyber security landscapes face extreme threats from both AI and QC.
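
To make the encryption threat concrete, here is a toy sketch (our own illustration, using nothing beyond standard Python) of why factoring sits at the heart of the problem: anyone who can factor an RSA modulus can recover the private key, and Shor’s algorithm would let a sufficiently large, error-corrected quantum computer do that factoring efficiently.

# Toy illustration only, not a real attack: RSA security rests on the difficulty
# of factoring n = p * q. Brute force works for this toy modulus but is hopeless
# for the 2048-bit moduli used in practice; Shor's algorithm on a large quantum
# computer would not be.
def factor_by_trial_division(n: int) -> tuple[int, int]:
    """Brute-force factoring; only feasible for toy-sized numbers."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n appears to be prime")

n = 3233                   # toy modulus (61 * 53); real keys use ~617-digit moduli
e = 17                     # public exponent
p, q = factor_by_trial_division(n)
phi = (p - 1) * (q - 1)    # knowing p and q reveals phi(n)...
d = pow(e, -1, phi)        # ...which lets an attacker derive the private exponent
print(f"n={n} = {p} x {q}, recovered private exponent d={d}")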

Takeaways: On a practical note, for those managing technology, the next steps in preparing for post-quantum cryptography are important reading. And whilst we’re preparing for the worst, we can also explore the hoped-for tools and techniques of the future. You can get started with quantum computing, and create and run quantum programs, with the help of the Copilot in Azure Quantum on the Azure Quantum website. Whilst this is still just emulation, it’s a great chance to get to grips with QC and understand what new potential it unlocks.
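
If you want to go beyond the Copilot chat interface, a few lines of Python are enough to build and simulate a first circuit. A minimal sketch follows, assuming the open-source Qiskit library (Azure Quantum can also accept Qiskit circuits); it runs entirely in local emulation.

# Minimal sketch, assuming Qiskit is installed (pip install qiskit): a two-qubit
# Bell state, the "hello world" of quantum programming, simulated locally.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into an equal superposition
qc.cx(0, 1)   # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # roughly {'00': 0.5, '11': 0.5}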

JOOST

The disruptor disrupted? Google may charge for AI search

Reports this week suggest that Google is considering charging fees for new “premium” search features powered by AI models. This is big for several reasons: firstly, it would represent a major strategic shift for Google, the first time the tech giant has placed core components of its suite behind a paywall. Even more significantly, the move underscores Google’s ongoing challenge in adapting to the ‘AI age’ it invented, and in dealing with disruption to its primary, advertising-based revenue model. When you serve up 8.5bn searches a day, AI compute is still relatively expensive.

Essentially, AI is starting to disrupt business models in all kinds of ways, regardless of industry or size. In Google’s case, LLMs are providing more natural and conversational access to information, reducing the need for users to navigate through websites and be exposed to advertising. And it certainly does not help that direct competitors are providing it for free, such as Microsoft Copilot with its OpenAI model access (the two now reportedly considering co-building the mother of all datacentres in the desert), while new entrants such as Perplexity.ai are unbundling search with more advanced AI features.

The time required for new technologies to commoditise and disrupt is being compressed to the extreme. Where it took the automobile more than half a century and PCs about two decades, it took LLMs just a few years to become widely available. And as Joel highlights, compute growth is accelerating this process. You can figure out the trendline…

The company that invented LLMs is now facing an existential crisis brought about by LLMs. The new wave of AI (the transformer model) was born of Google’s diverse research community (and, so the story goes, a serendipitous corridor conversation). But Google then lost focus, the brains behind it departed for new start-ups, and others (including OpenAI) were left to double down and overtake.

So speed will be everything. Focused fast movers can snatch the chance to monetise early, but only for a short(er) period. This will make the ability to flex (scale up, scale down, and pivot) a true competitive advantage, with the ability to experiment, act fast and build strong partnerships (with sources of ‘AI alpha’) proving lifesavers. On the flip side, there will come a point when it is simply too late for companies to adequately respond and adapt.

So what can we learn from this? The new normal is already here. Organisations can no longer afford to take a passive approach to AI. Proactively looking for commercial opportunities and vulnerabilities, understanding how to sustainably integrate AI into your core business model, and finding areas for exponential, continuous improvement… all top priorities. In this era of relentless disruption, the choice is stark: be the disruptor or be disrupted. It’s not clear there will be much in between.

Takeaways: To see the future of AI search, make sure you try Perplexity.ai, Exa.ai and you.com.

JOEL

Will the new transatlantic institutional collaboration keep us safe?

This week the EU, UK and US announced new partnerships on AI testing, via their respective AI Safety Institutes. The partnerships aim to advance international scientific knowledge of frontier AI models and facilitate sociotechnical policy alignment on AI safety and security. Sounds great? It is, as far as it goes, but since the much-vaunted Bletchley Park global safety summit last year, progress on concrete safety measures has only inched forward. The fundamental problem is the opacity and scale of LLMs… with trillions of parameters, the truth is nobody really knows how they work.

Anthropic (the trainers of Claude) has devised an ‘AI Safety Level’ scheme called ASL. Claude 3 is deemed ASL-2 and I quote: “shows early signs of dangerous capabilities—for example, the ability to give instructions on how to build bioweapons—but where the information is not yet useful due to insufficient reliability or not providing information that, e.g., a search engine couldn’t. Current LLMs, including Claude, appear to be ASL-2.” Note the words, verbatim from their documentation: “appear to be”. Claude 3 Opus is exhibiting unique self-reflective behaviour that ExoBrain and other organisations have been at the forefront of documenting. Right now we see no evidence of deceptive acts, but we can’t be certain. This year we may reach ASL-3: “systems that substantially increase the risk of catastrophic misuse compared to non-AI baselines (e.g., search engines or textbooks) or show low-level autonomous capabilities.”

A study also out this week indicated that whilst safety efforts are growing fast, they still amount to a 2% drop in the research ocean. The pay disparity is no doubt a factor. A research scientist role at the UK institute is currently advertised with a package of £85-135k, which is not bad. But a capability research role at a big AI lab would net you between £230k and £350k a year, according to current postings, and top engineers and researchers are offered £1m+.

Despite the big salaries, the labs are at a loss to explain how we can safely adopt their inventions. This week OpenAI notified the world that they had essentially perfected AI voice duplication technology, with models able to learn a voice from just 15 seconds of audio. What they expect the world to do to manage these capabilities is anybody’s guess.

Takeaways: Take any talk of AI safety you hear (evaluations, assurance, and red teaming) with a big pinch of salt. This is not aircraft safety, where we know how planes fly and can engineer them accordingly; nobody knows why these models are able to do what they do. At ExoBrain we believe that AI can’t be made fully safe in the lab. We need to harden system design, organisations, and society through ever more robust real-world implementation projects. When adopting AI, it’s down to us at the business end to stay vigilant, do the safety testing in situ, and deploy thoughtfully.


EXO

This week highlights the rapid acceleration of AI across businesses, research, and hardware, but tempers this excitement with growing concerns about safety, ethics, and geopolitical tensions surrounding AI technology.

AI business news

AI governance news

AI research news

AI hardware news
