Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…
Themes this week
JOEL
This week we look at the details behind the AI and jobs headlines, the world of deepfakes, plus the latest new models:
- A think tank report made the UK headlines predicting up to 8 million jobs lost to AI
- Deepfake controversies continue, we suggest a step everyone can take to start managing their information landscape
- The model leaderboard gets busier at the top, and new open-weight and architectural options emerge
AI jobs apocalypse?
You may have seen headlines such as “8 million UK careers at risk of job apocalypse from AI” this week, on the back of a new report from the Institute for Public Policy Research (IPPR). Nothing like some classic media scaremongering to drive clicks! But this is a topic that needs attention. Beyond the silicon, code and money, the human productivity story is perhaps the biggest in AI. This week we’re going to unpack the report plus share some of the broader ways to think about this most pressing of AI questions.
The report uses a typical atomic-task automation-exposure assessment (scored with GPT-4, so not without some bias in how AI’s capabilities were seen pre-2024) to roll up how AI automation will impact the work we do, by occupation and across various task bundles.
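To make the methodology concrete, here’s a minimal sketch of how an atomic-task roll-up works: each occupation is treated as a weighted bundle of tasks, each task gets an exposure score (the report used GPT-4 to assign these), and the scores aggregate by the share of working time each task takes. The occupations, weights and scores below are invented for illustration and are not taken from the report.

```python
# Illustrative roll-up of task-level AI exposure scores to occupation level.
# Each occupation is a weighted bundle of tasks; weights are the share of
# working time each task takes. All scores and weights are invented.
occupations = {
    "copywriter": [
        # (task, share of time, exposure score 0-1)
        ("draft marketing copy",  0.5, 0.85),
        ("client meetings",       0.3, 0.20),
        ("brand strategy",        0.2, 0.40),
    ],
    "plumber": [
        ("on-site repairs",       0.7, 0.05),
        ("quoting and invoicing", 0.2, 0.70),
        ("scheduling",            0.1, 0.80),
    ],
}

for occupation, tasks in occupations.items():
    exposure = sum(share * score for _, share, score in tasks)
    print(f"{occupation}: weighted exposure {exposure:.2f}")
```

The report effectively runs this kind of aggregation across thousands of task descriptions, which is why exposure shows up across occupations from so-called low to high skilled rather than clustering in one place.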
Some of the more interesting insights in the report…
- AI adoption phase 1 is ‘here and now’ with the AI of today already automating ‘low-hanging fruit’, and we’re moving into phase 2 termed ‘integrated’ AI which subsumes the non-routine higher cognition tasks. The report is at pains to point out that things are moving much faster than previous job market disruptions.
- 59% of tasks are exposed to integrated AI. This shows the vast scope of impact, and exposure is also widely distributed across task types and occupations from so-called low to high skilled.
- AI will cause a mix of augmentation, increased productivity, job creation, and job displacement/destruction. The job market is not zero-sum.
- Governments need a wakeup call… policy makers and institutions should be urgently considering a 3-point plan… ring-fencing key job sectors, boosting new AI job creation and skill transitions, and planning for the fallout from rapid job destruction, potentially with tools such as targeted or universal basic income.
The 8 million job losses scenario attracts attention but should not be taken literally. It serves, along with the task-exposure numbers, as a somewhat rational indicator of the scale of AI impact. GPTs are GPTs, “general purpose technologies”, and will affect the majority of jobs.
The report is a useful jumping-off point for getting your head around the impact on jobs, but not the whole story. Task automation is one way to think about this, but many argue that just because you can automate a task, doesn’t mean you will. They suggest that the idea of comparative advantage means a non-infinite AI capacity, powering new value, will often be focused on ever higher-value areas rather than be used for all tasks (see the toy allocation sketch below). This effect is already in play, but exponential compute growth (see last week’s news) will make it less significant as we become increasingly able to replicate every human many thousands of times over. Moreover, as AI systems become more capable of performing complex tasks, the comparative advantage of human labour will diminish, leading to greater substitution of AI for human workers across a wider range of occupations. Nonetheless, occupations that rely heavily on market connectivity or complex networks, or that are protected by unions, professional certifications, institutions and regulation, will not be automated so readily.
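Here’s a toy illustration of that comparative-advantage point: while AI capacity is finite, it gets allocated to the highest-value work first, so humans keep tasks AI could technically do. All tasks, values and capacity figures are invented for illustration.

```python
# Toy comparative-advantage allocation: finite AI capacity goes to the
# highest-value tasks first, so humans keep the remainder, even for tasks
# AI could technically automate. All figures are illustrative.
ai_capacity = 100                 # abstract units of AI compute
tasks = [                         # (task, value per unit, units of work)
    ("drug discovery",  50,  60),
    ("contract review", 20,  80),
    ("data entry",       5, 120),
]
remaining = ai_capacity
for name, value, work in sorted(tasks, key=lambda t: -t[1]):
    automated = min(work, remaining)
    remaining -= automated
    print(f"{name}: AI does {automated}, humans keep {work - automated}")
```

As capacity grows with exponential compute, the “humans keep” column shrinks towards zero, which is exactly the substitution effect described above.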
Takeaways: So what does the future hold? AI will augment us all, improve productivity significantly, accelerate education, science, innovation and creativity, and in the short term drive up wages and corporate profits. But there will likely be a challenging period, where increased GPU compute, rapid model advances, and automated and autonomous entities accelerate job destruction. This recent paper on the impact of AGI builds on compute/task-centric analysis with a macro-economic view, and also considers aggregate demand. There’s a scenario where rapid AI progress leads to polarisation, wage-bill reduction, significant unemployment and demand suppression… basically where there aren’t enough middle-salary workers to buy the goods and services AI is helping create. And when the pace of change outstrips the ability of workers to acquire new skills and transition to new roles, the problem is exacerbated. We’d see worsening wealth inequality and capital concentration in the top 1%. This would then drive a series of destabilising second-order societal effects.
So how would we get out of this AI trap? The challenge is capital distribution, or rather where all the money flows… This is why the likes of Sam Altman have been talking about universal basic income (UBI) for several years, as a means of re-distribution. Universal or targeted wage support, or other means of re-distribution, might be vital to sustain demand through this period. Whilst that might drive inflation and suppress human productivity, if it can sustain the global economy and the AI productivity and innovation boom, those effects could be mitigated. More new science, new job paradigms, and new products mean marginal wage growth, potentially cheaper goods and services, and more worker spending power, and could eventually pull us through. AI could reignite an era where capital is better invested in innovation than in wealth funds. But what about energy, material and environmental limits? AI will need to help there too, maximising our ability to grow sustainably within those planetary constraints.
The paper also quantifies a big long-term question: what happens if AI replaces everything we do? If we’re tending towards this future, is there a bigger trap waiting for us where we run out of road? Are there bounds to the challenges we can put our minds to? At ExoBrain we think not, but that’s for another blog post, and deeper analysis in the coming weeks.
So what can we do today? 3 things…
- AI adoption ASAP; this will drive learning, skill transition, adaptability, and kick-start the vital process of creating new jobs and productivity. It’s hard to imagine what the new jobs might be; the only strategy is to experiment and create them. AI is a highly personal and human-scale tech; as Prof. Ethan Mollick (the author of a brand new book on co-intelligence) says, “your employees are your R&D lab right now.”
- Small and medium businesses can and must exploit their agility advantage, to reduce the chances that AI-driven returns on capital and wage growth become ever more concentrated in a few mega-corps.
- We need to intelligently shape policy and intervention; governments and institutions need our guidance and input, and must make faster progress.
Fake deepfakes?
The sphere of AI-challenged ‘information integrity’ is getting ever messier, with several feature launches and controversies sparking debate this week. A series of videos were linked on social media, ostensibly to promote avatar generation from Arcads, but they sparked debate as to whether they were AI-generated or not, or indeed authorised by the person whose likeness they contained. Stories continue to emerge of people being cloned to promote inappropriate products, or experiencing non-consensual photo modifications. Knowing intuitively what is real and what is not is now essentially impossible. Meanwhile HeyGen announced a new feature they call ‘avatar in motion’, with an impressive demo showing a moving, AI lip-synced video switching into alternative languages. Companies like Arcads and HeyGen will need to navigate the line between deepfake and avatar generation.
Content watermarking can’t become widespread soon enough. C2PA is an open standard giving publishers, creators, and consumers the ability to trace the origin of different types of media. Whilst this doesn’t stop deepfakes, a world where we are extra sceptical of anything we see online that doesn’t display clear ‘provenance’ is a world where information integrity can be better managed.
But despite the BBC being a consortium member, for example, it doesn’t appear to be using the tech consistently to identify its web images. The C2PA standard is available in a selection of creative products, such as Adobe’s suite and DALL·E 3, but is still not widely adopted.
Takeaways: Using the C2PA watermarks is not straightforward, but it is possible to start with a browser plugin that checks for the credentials. Head over to the plugin page to install it and start checking images where available; the button that appears on images can indicate where the material came from and how it has been altered.
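For the programmatically inclined, here’s a minimal sketch of checking an image’s credentials with the open-source c2pa-python bindings (pip install c2pa-python). The API has changed across releases, so treat the read_file call and the manifest field names as assumptions to verify against the current docs; photo.jpg is a placeholder.

```python
# Minimal C2PA provenance check. Assumes the open-source c2pa-python
# bindings; the exact API varies by release, so verify against the docs.
import json

import c2pa  # pip install c2pa-python

try:
    # Returns the manifest store as JSON if the file carries C2PA data.
    manifest_json = c2pa.read_file("photo.jpg", None)
except Exception as err:
    print(f"No readable C2PA credentials: {err}")
else:
    store = json.loads(manifest_json)
    active = store.get("active_manifest")
    claim = store.get("manifests", {}).get(active, {})
    print("Claim generator:", claim.get("claim_generator"))
    print("Signed by:", claim.get("signature_info", {}).get("issuer"))
    for assertion in claim.get("assertions", []):
        print("Assertion:", assertion.get("label"))  # e.g. edits, AI use
```

A signature that fails to validate, or an absent manifest, doesn’t prove an image is fake, but it is exactly the kind of missing-provenance signal the scepticism above should key off.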
Model wars
2023 was relatively stable in terms of state-of-the-art models with OpenAI and Llama being all-dominant in the closed and open-weight categories respectively. In 2024 we’re seeing much more competition, including 3 notable entrants this week alone.
Analytics and ML firm Databricks is going “AI-native”, launching a large open-weight model, DBRX, that looks set to become the most powerful freely available option. It’s slightly above GPT-3.5 level but with more efficiency, and was interestingly trained on more tokens (12 trillion) than GPT-4. Sources suggest a $10m training cost over 2 months on around 3,000 Nvidia H100s, providing a sense of the resources needed for a company to make a meaningful entry into this space.
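A quick back-of-envelope check makes that figure plausible; the hourly GPU rate below is an assumed cloud rental price, not a reported number.

```python
# Back-of-envelope training cost estimate for a DBRX-scale run.
# GPU count and duration are the reported figures; the hourly rate is
# an assumed cloud rental price and varies widely by provider and deal.
gpus = 3000
days = 60                    # ~2 months
cost_per_gpu_hour = 2.00     # assumed USD per H100-hour
total = gpus * days * 24 * cost_per_gpu_hour
print(f"Estimated compute cost: ${total / 1e6:.1f}M")  # ~$8.6M
```

That lands in the same ballpark as the suggested $10m once you allow for checkpoint re-runs, evaluation, and the engineering around the headline training job.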
The Israeli startup AI21 Labs announced open-weight Jamba, a hybrid model that combines the typical LLM transformer architecture with a state-space (Mamba) approach. This intriguing option should offer better performance over long inputs, combining the reasoning power of the transformer with the supreme memory efficiency of the SSM (see the toy sketch below).
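To illustrate the hybrid idea, here’s a toy PyTorch sketch that interleaves a few attention blocks among cheap recurrent state-space blocks. This is not Jamba’s actual architecture (which also adds mixture-of-experts layers); the block designs and the 1-in-4 interleaving ratio are invented for illustration.

```python
import torch
import torch.nn as nn

class ToySSMBlock(nn.Module):
    """Toy diagonal state-space layer: h_t = a * h_(t-1) + b * x_t.
    State size is fixed, so memory is constant in sequence length."""
    def __init__(self, dim):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(dim))  # decay, squashed to (0, 1)
        self.b = nn.Parameter(torch.ones(dim))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (batch, seq, dim)
        a = torch.sigmoid(self.a)
        h = torch.zeros(x.size(0), x.size(2), device=x.device)
        outs = []
        for t in range(x.size(1)):               # simple sequential scan
            h = a * h + self.b * x[:, t]
            outs.append(h)
        return self.proj(torch.stack(outs, dim=1))

class ToyAttentionBlock(nn.Module):
    """Standard self-attention block: quadratic cost in sequence length."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

# Interleave: mostly cheap SSM blocks with occasional attention blocks.
dim = 64
layers = nn.Sequential(*[
    ToyAttentionBlock(dim) if i % 4 == 0 else ToySSMBlock(dim)
    for i in range(8)
])
tokens = torch.randn(2, 32, dim)                 # (batch, seq, dim)
print(layers(tokens).shape)                      # torch.Size([2, 32, 64])
```

The point of the interleave is the memory profile: the SSM blocks carry a fixed-size state across the sequence, so only the occasional attention layers pay the quadratic cost over long inputs.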
On Thursday Musk’s xAI announced Grok 1.5, upgrading their X.com-embedded model (not yet available in the UK, although accessible to Premium+ subscribers using a VPN) to GPT-4-class performance with a long 128k-token input window. Musk also posted that “Grok 2 should exceed current AI on all metrics. In training now.”
Takeaways: The GPT-4 class level is getting increasingly crowded, although Anthropic’s Claude 3 family is inching ahead. Check out the community-powered LMSYS model leaderboard showing Opus in top spot, Haiku (the super low-cost and fast model) entering the top ten, and Google’s Gemini models also performing well. For further insight into model performance and cost we also recommend Artificial Analysis. All eyes remain on what will be a defining AI summer, with GPT-5 and Llama 3 setting out the next level of capability.
EXO
This week’s news reveals a surge in AI investments and innovation alongside mounting concerns about the need for ethical guidelines, security measures, and a shift away from centralized AI power.
AI business news
- Financial Times tests Ask FT, a chatbot trained on decades of its own articles (This shows how traditional media are leveraging AI to enhance their offerings, similar to the BBC’s AI plans.)
- OpenAI shows off first examples of third-party creators using Sora (This demonstrates the expanding ecosystem around OpenAI’s generative AI models and the potential for new AI-powered applications and services.)
- Amazon pours additional $2.75bn into AI startup Anthropic (This further highlights the massive investments flowing into AI development and the competition to create cutting-edge AI solutions.)
- OpenAI will pay developers for posting models on its new GPT store (This model marketplace could democratize access to AI capabilities and spur new business opportunities.)
- Stability AI CEO resigns because you’re ‘not going to beat centralized AI with more centralized AI’ (This resignation reveals the ongoing debate about the merits of centralized vs. decentralized AI development.)
AI governance news
- The White House puts new guardrails on government use of AI (This shows a growing focus on regulating AI use in sensitive areas, highlighting the need for responsible AI practices.)
- ‘Thousands’ of firms vulnerable to security bug in Ray AI (This vulnerability underscores the importance of addressing AI security risks as AI adoption spreads.)
- China turns to AI in propaganda mocking the ‘American Dream’ (This highlights how AI can be used for potentially harmful purposes like misinformation, reinforcing the need for ethical AI guidelines.)
- Uber Eats courier’s fight against AI bias shows justice under UK law is hard won (This case demonstrates the real-world consequences of AI bias and the ongoing challenges in ensuring fairness in AI-driven systems.)
- EU publishes election security guidance for social media giants and others in scope of DSA (This emphasizes the proactive measures being taken to combat AI-powered disinformation campaigns, similar to tech giants’ voluntary pledge.)
AI research news
- Generative AI for designing and validating easily synthesizable and structurally novel antibiotics (This breakthrough shows AI’s potential to revolutionize drug discovery and address pressing medical challenges.)
- Microsoft AI proposes CoT-Influx: A novel machine learning approach that improves LLM mathematical reasoning (This research could enhance AI’s ability to perform complex calculations and logical reasoning.)
- LLM2LLM: Boosting LLMs with novel iterative data enhancement (This technique could improve AI model performance and expand their capabilities.)
- LLM agents can achieve superhuman factuality (This research highlights the potential for AI to surpass human capabilities in generating factually accurate information.)
- Language models can reduce asymmetry in information markets (This study suggests AI could have positive impacts on market efficiency and information access.)
AI hardware news
- Behind the plot to break Nvidia’s grip on AI by targeting software (This story highlights growing efforts to challenge Nvidia’s dominance in the AI hardware market, potentially leading to greater competition and innovation.)
- Eliyan raises $60M for chiplet interconnects that speed up AI chips (This investment signifies the importance of specialized hardware solutions for accelerating AI workloads.)
- Intel confirms Microsoft’s Copilot AI will soon run locally on PCs, next-gen AI PCs require 40 TOPS of NPU performance (This news emphasizes the push to bring powerful AI capabilities directly to personal computers.)
- Amazon to ‘invest $150bn in data centers’ for AI growth (This massive investment underscores the vast infrastructural needs of the expanding AI landscape.)
- Nvidia could be primed to be the next AWS (This analysis suggests Nvidia’s AI dominance could mirror the success of Amazon’s cloud services.)