Week 24 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

This week we look at:

  • Apple’s vision for personal AI.
  • A new $1 million prize to advance research beyond large language models.
  • Luma Labs’ cinematic video generator, Dream Machine.

Apple’s vision for AI as a suite of personal intelligence features

At the hotly anticipated WWDC24 developer conference this week, Apple unveiled its new play, ‘Apple Intelligence’, positioning it as “AI for the rest of us.” With a strong emphasis on privacy, Apple aims to set a new standard for AI in consumer devices. The system, set to launch in beta this autumn, promises to seamlessly integrate AI capabilities across all Apple operating systems and devices (although only if you have the very latest hardware).

Apple Intelligence is not a chatbot; it is a comprehensive system that prioritises personal intelligence over a conversational paradigm. The company has taken a thoughtful approach to the consumer experience, focusing on enhancing the already familiar capabilities of Apple devices while providing a fluid engine that allows for adding increased intelligence over time. Users will be able to summarise and generate text and images across apps, get more nuanced assistive responses based on personal context, and have the AI models directly control apps through ‘app intents’. Personal workflows, created from prompts or event triggers, seem very possible with this system.
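
To make the ‘app intents’ idea concrete, the sketch below shows, in Python, one way a structured model output could be routed to a registered app action. It is purely illustrative: the registry, function names and schema are invented for this example and are not Apple’s App Intents framework.

```python
from typing import Any, Callable, Dict

# Hypothetical registry of app actions an assistant may invoke; real app
# intents are declared by each app, so this is only a conceptual stand-in.
INTENT_REGISTRY: Dict[str, Callable[..., str]] = {}

def register_intent(name: str):
    """Decorator that exposes a plain function as a named 'intent'."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        INTENT_REGISTRY[name] = fn
        return fn
    return wrap

@register_intent("send_message")
def send_message(recipient: str, body: str) -> str:
    # A real implementation would hand off to the messaging app.
    return f"Message to {recipient}: {body}"

def dispatch(model_output: Dict[str, Any]) -> str:
    """Route a structured model response, e.g.
    {"intent": "send_message", "args": {...}}, to the matching action."""
    action = INTENT_REGISTRY[model_output["intent"]]
    return action(**model_output["args"])

print(dispatch({"intent": "send_message",
                "args": {"recipient": "Sam", "body": "Running 10 minutes late"}}))
```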

Under the hood, Apple Intelligence relies mostly on small on-device AI models that compare favourably with recent offerings from Microsoft (Phi-3) and Google (Gemini Nano) for similar edge use-cases but don’t move the state-of-the-art forward. Apple’s more powerful models running in the cloud are sub-GPT-4 level but do seem to have innovated in terms of training, reinforcement learning and efficiency. Additionally, the system uses a form of retrieval augmented generation, a widely used AI approach that indexes personal data so it can be retrieved and provided to the models to create relevant outputs, plus a smart orchestration layer that invisibly decides between on-device and cloud processing.
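
As a rough illustration of that retrieval-plus-orchestration pattern (not Apple’s actual implementation; the data, scorer and routing heuristic are entirely invented), a minimal Python sketch might look like this:

```python
from difflib import SequenceMatcher

# Toy stand-in for a personal index; a real system would index messages,
# calendar entries, photos and more, typically via embeddings.
PERSONAL_INDEX = [
    "Dentist appointment on Friday at 15:00",
    "Flight BA117 to New York departs Monday 09:40",
    "Mum's birthday dinner booked for Saturday",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query (toy scorer)."""
    ranked = sorted(
        PERSONAL_INDEX,
        key=lambda s: SequenceMatcher(None, query.lower(), s.lower()).ratio(),
        reverse=True,
    )
    return ranked[:k]

def route(prompt: str) -> str:
    """Crude orchestration heuristic: short requests stay on device,
    longer ones go to the cloud tier."""
    return "cloud" if len(prompt.split()) > 40 else "on-device"

query = "When does my flight leave?"
prompt = "Context:\n" + "\n".join(retrieve(query)) + f"\n\nUser: {query}"
print(route(prompt))
print(prompt)
```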

Siri, Apple’s virtual assistant, has received a significant refresh as part of Apple Intelligence. Users can now interact with Siri through text inputs, and enhanced capabilities allow it to string together actions and infer from various bits of personal context, such as plans, appointments, and messages. This agentic capability is reminiscent of Google’s idea of launching multi-step flows from the search bar, but we’re yet to experience this in the real world.

Privacy is a key selling point for Apple, and for Apple Intelligence in particular. The company emphasises that most data either stays on the device or is sent to a special new “private cloud compute” infrastructure (with servers running Apple M chips). However, it remains to be seen whether consumers will fully understand and trust this model. One of the clever safety features is a restriction on image generation that avoids the creation of photorealistic images. My children were most excited by the Genmoji feature, which will allow the creation of unique emoji. The system also ticks the box for multi-modal capabilities, including screen drawing, similar to that seen in recent GPT-4o demos.

While Apple Intelligence includes free access to ChatGPT, it appears to be a non-exclusive, distribution-only relationship rather than the much-vaunted partnership that some had speculated. The integration of OpenAI’s technology feels somewhat bolted on, and it’s unclear how this will impact the overall user experience.

Apple Intelligence is promising as an incremental vision for light-touch consumer AI. The markets agree, with a share price boost returning Apple to its status as the world’s most valuable company. But by the autumn, fun, slick creative and life-management features will be a minimum requirement, so Apple can’t stand still if it wants to kick-start a new upgrade cycle.

Takeaways: Given this on-device-centric approach, Apple Intelligence will be available on devices with Apple M1, M2, M3, and M4 chips, as well as the A17 Pro chipset. Older A-series Bionic chips will not be supported, so among iPhones only the iPhone 15 Pro and iPhone 15 Pro Max qualify: no intelligence for the iPhone 15, the 15 Plus, or the last-gen iPads still running Bionic silicon.

The $1 million ARC prize

French AI researcher François Chollet, known for his critical stance on Large Language Models (LLMs), has launched the Abstraction and Reasoning Corpus (ARC) challenge in collaboration with Mike Knoop, co-founder of Zapier. This $1 million prize offers another way to evaluate AIs and encourages the development of systems that can efficiently learn new skills, rather than simply answer questions (see our coverage of standardised testing for AI two weeks ago).

Chollet argues that current LLMs rely heavily on memorisation and pattern matching, lacking the ability to adapt to novel situations. He believes that the industry’s excessive focus on LLMs has set back progress towards AGI by 5-10 years, limiting the exchange of ideas and collaboration among researchers. The hard-cash prize aims to encourage researchers to explore new ideas and approaches. The prize will be awarded annually, with the ultimate goal of achieving 85% accuracy on the benchmark.

The ARC benchmark, designed by Chollet, tests an AI system’s ability to efficiently learn new skills by presenting it with novel tasks that require reasoning and abstraction. He believes LLMs solve tasks by identifying the right ‘program template’ from their vast memory and applying it. Chollet maintains that true reasoning involves working out new programs on the fly.
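
For context, an ARC task is a handful of input/output demonstration grids plus held-out test grids, and a solver scores only if it reproduces the test outputs exactly. The toy task below is invented for illustration (real tasks are JSON files with the same train/test structure of small integer grids), but the scoring reflects the all-or-nothing nature of the benchmark:

```python
# An ARC-style task: demonstration pairs, then test grids to solve. This toy
# task is invented for illustration; real ARC tasks use the same
# "train"/"test" structure of small integer grids (colours 0-9).
task = {
    "train": [
        {"input": [[1, 0], [0, 0]], "output": [[0, 0], [0, 1]]},
        {"input": [[0, 2], [0, 0]], "output": [[0, 0], [2, 0]]},
    ],
    "test": [
        {"input": [[0, 0], [3, 0]], "output": [[0, 3], [0, 0]]},
    ],
}

def rotate_180(grid):
    """Candidate 'program': the hidden rule behind this toy task."""
    return [row[::-1] for row in grid[::-1]]

def score(solver, task) -> float:
    """Scoring is exact-match per test grid; partial credit doesn't count."""
    hits = sum(solver(pair["input"]) == pair["output"] for pair in task["test"])
    return hits / len(task["test"])

print(score(rotate_180, task))  # 1.0 only if the induced rule generalises
```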

Jack Cole, a researcher working on solutions to the ARC benchmark, has made progress using LLMs, reaching scores of around 35%. His approach combines the strengths of current AIs, which excel at pattern recognition, with better solution ‘search’, supporting more thoughtful planning and reasoning as well as more dynamic learning. LLMs currently suffer from being unable to learn deeply as they think.
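
A stripped-down sketch of that generate-and-verify idea is shown below: a candidate pool stands in for programs an LLM might sample, and the verification loop provides the ‘search’. This illustrates the general approach only and is not Jack Cole’s actual pipeline.

```python
# Generate-and-verify: sample candidate programs, keep one consistent with
# every demonstration pair, then apply it to the test inputs. Illustrative
# only; not Jack Cole's actual system.
task = {
    "train": [{"input": [[1, 0], [0, 0]], "output": [[0, 0], [0, 1]]}],
    "test":  [{"input": [[0, 0], [3, 0]]}],
}

# A fixed pool stands in for transformations an LLM might propose.
CANDIDATES = {
    "identity":   lambda g: g,
    "flip_rows":  lambda g: g[::-1],
    "rotate_180": lambda g: [row[::-1] for row in g[::-1]],
}

def consistent(program, task) -> bool:
    return all(program(p["input"]) == p["output"] for p in task["train"])

def solve(task):
    # Search: return the first candidate that explains all the demonstrations.
    for name, program in CANDIDATES.items():
        if consistent(program, task):
            return name, [program(p["input"]) for p in task["test"]]
    return None, []

print(solve(task))  # ('rotate_180', [[[0, 3], [0, 0]]])
```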

However, some argue that these debates over memorisation versus new program synthesis may be irrelevant if AI systems can effectively get things done. After all, there is no precise definition of intelligence, as highlighted by the work of Michael Levin on diverse intelligence in biological systems. It may be that we humans are also drawing from a vast array of mini program templates as we solve day-to-day problems.

Takeaways: The ARC prize will have very little impact on the current multi-billion-dollar LLM focus, but it does serve as an important indicator of AI progress. If researchers can win it, that would suggest AI capabilities may prove the sceptics wrong and develop faster than even the scaling-law optimists believe. This could have profound implications for the coming years, so it will be worth watching this space very closely. We at ExoBrain will keep you posted.

 


EXO

AI video dream machine

While we still wait for OpenAI’s Sora, others are stepping into the video generation ring, such as KLING and now Luma Labs. Their Dream Machine launched this week and offers free, realistic videos from text and image prompts. They state on their website: “It is a highly scalable and efficient transformer model trained directly on videos making it capable of generating physically accurate, consistent and eventful shots. Dream Machine is our first step towards building a universal imagination engine and it is available to everyone now!”

Luma Dream Machine generates 5-second cinematic videos with lifelike motion and maintains a degree of visual and physical coherence even in complex scenes, but it does struggle with animals, humans, character movement and text. The model’s simple interface makes it accessible to a wide range of creators, although more control will be needed to shape the sometimes-unpredictable results. Social media platforms have been flooded with videos showcasing Dream Machine, including many classic memes converted into videos.

You can see our newsletter image here, animated to show the viewer flying through the futuristic scene. Prompting the video generation with an image from your preferred image generation tool seems to get the best results.

Takeaways: Luma Labs AI has made the model available for free, offering users up to 30 video generations per month.


EXO

Weekly news roundup

This week’s news highlights the rapid advancements in AI across various industries, the growing concerns around responsible AI development, and the intensifying competition in the AI hardware market.

AI business news

AI governance news

AI research news

AI hardware news
