Week 35 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL


  • Chinese labs unveil powerful new open models amid Xi Jinping’s concerns.
  • The three-way dynamic: AI in cyber tools, model protection and attack.
  • Shifting from fear to productivity gains.

China’s imperfect model drives creativity

Alibaba’s Qwen2-VL and Zhipu’s GLM-4-Flash may not be household names like ChatGPT and Claude, but their respective releases this week demonstrate a sharp uptick in the capability of Chinese labs and AI models. Both are frontier-class, both are open-weight, and Qwen2-VL tops the leading models from OpenAI and Anthropic on some vision benchmarks.

But to the extent one can generalise about a region of over 1.4 billion people, China’s AI landscape is paradoxical. At a recent Chinese Communist Party gathering on the nation’s technological future, President Xi Jinping reportedly expressed concerns about AI risks. Xi’s reported apprehensions, highlighted in a recent Economist article, stand in contrast to the nation’s previously stated aspiration to “lead in AI by 2030”.

“China should abandon uninhibited growth that comes at the cost of sacrificing safety,” states a study guide reportedly edited by Xi himself. Yet the country continues to pour resources and expectations into AI development. The “AI+” initiative, announced in China’s 2024 Government Work Report, is a case in point. This strategy aims to integrate AI across all sectors of the economy, moving beyond isolated applications and high-tech industries to cover traditional sectors as well, with the aim of revolutionising everything from manufacturing and agriculture to healthcare and urban planning.

But AI needs computation, and the US is intent on limiting China’s access to Nvidia’s and other advanced GPUs, with plans to step up its controls in the coming months. Of course, Chinese companies are finding creative workarounds, from pre-ban stockpiles, underground markets, and public cloud services to offshore datacentres. Huawei has also made significant progress with its Ascend 910B chip, which is considered competitive with some lower-end Nvidia models. China may not have many advanced data centres, but it has less restricted access to data and fewer privacy restrictions, especially in areas like public surveillance. Chinese labs’ capability in vision models is likely no coincidence, and the same goes for their multilingual strengths, with the diversity of non-Western, less Anglo-centric datasets potentially providing a unique advantage.

China also has readier access to electricity generation than the US and Europe. It can build and connect new power plants, including renewable energy sources, vastly faster than most other states. For example, China’s centralised process has been known to construct and commission a large new power plant in as little as 18 months, versus the many years it takes elsewhere.

China also leads in AI research publication, hosting nine of the top ten institutions globally by publication volume. However, its universities, despite their research output, struggle to match the allure of US institutions. The increasing state control is a drag on morale and research freedoms. As one observer noted, “Tsinghua University, her alma mater, now values officials more than academics.”

Whilst research from China is extensive, it appears to translate into commercial success less directly than in the US. China outpaces other countries in AI patent filings, but US patents are generally of higher quality. Again, though, the picture is mixed. In some fields, Chinese companies have been bold in coming to market. Shengshu with Vidu and Kuaishou with Kling have developed text-to-video AI tools comparable to OpenAI’s Sora and have already made them publicly available. These and the other so-called ‘AI tiger’ labs, Baichuan, Zhipu, Moonshot and MiniMax, are attracting investors and releasing aggressively.

The regulatory landscape presents its own contradictions. While China has introduced laws to mitigate AI-related scams and safeguard user privacy, it also maintains a relatively light touch in most areas, allowing for the proliferation of open-source models. This openness, however, coexists with reports of heavy censorship in consumer-facing applications. There are indications that the Chinese government is particularly concerned about AI’s potential impact on ideological control, with efforts to ensure AI systems align with party ideology. AI chatbots like Xue Xi have been designed specifically to indoctrinate users with CCP materials.

In the realm of robotics and embodied AI, China is making massive strides. In Wuhan alone, around 500 Baidu-built robotaxis have been deployed, with plans to increase that to 1,000 by year-end. Chinese companies like Xiaomi and UBTech have made significant progress in developing humanoid robots with advanced AI capabilities, leveraging China’s existing global leadership in industrial robot deployment. Again, this progress in industrialised AI contrasts sharply with the challenges faced in semiconductor development.

Takeaways: As China navigates these contradictions, it’s unclear whether it will achieve AI leadership by 2030 (and it’s hard to know what that would even look like!). What’s very clear is that with nearly half the world’s AI researchers and a paradoxical landscape that demands constant innovation, China will unquestionably continue to drive the speed and diversity of global AI progress towards the end of the decade.

Machines protect machines

This week, Gartner updated their forecasts and predicted a surge in AI-driven cyber spending to $212 billion in 2025, a 15.1% increase from 2024.

The rapid evolution of AI has opened new avenues for cybercriminals, with Gartner also forecasting that by 2027 at least 17% of cyberattacks will involve the technology. Based on estimates of the cost of cybercrime worldwide, this could equate to more than $2 trillion in losses. This growth is forcing businesses to continue to upgrade their security operations, driving massive investment in AI-powered defences and AI-specific services. The impact is already evident in the market, with cybersecurity firms like SentinelOne reporting substantial growth – a 33% revenue increase in 2024.

Businesses face a three-way challenge: harnessing AI’s potential for defence, protecting new AI systems, and guarding against its misuse by attackers. This balancing act is complicated by a persistent skills shortage in the cybersecurity sector, pushing companies towards AI-augmented solutions and managed security services. “The continued heightened threat environment, cloud movement and talent crunch are pushing [AI] security to the top of the priorities list,” notes Shailendra Upadhyay, senior researcher at Gartner.

As AI becomes more deeply embedded in digital and cybersecurity infrastructure, the challenges get more complicated. Enter AI-SPM, or “AI security posture management”. Security product leaders Orca and Wiz are pioneering the concept; both companies are developing comprehensive solutions to address the unique security challenges posed by the rapid adoption of AI technologies.

Orca’s AI-SPM offering provides visibility into over 50 common AI models and software packages, allowing organisations to maintain security across their entire AI stack without adding new point solutions. It includes features like an AI and ML Bill of Materials (BOM) for inventory, compliance frameworks, sensitive data detection in AI training sets, and alerts for public access to AI resources.

Similarly, Wiz’s AI-SPM capabilities focus on discovering AI use, detecting misconfigurations, and uncovering potential attack paths to AI services. Their approach emphasises full-stack visibility into AI resources, enforcing secure configurations, and protecting sensitive training data. Wiz (which recently saw a deal to be acquired by Google for $23 billion fall through) also offers an AI Security Dashboard to help developers proactively address security issues in AI pipelines.

Whilst we’ve yet to see the deluge of AI-driven attacks some have predicted, several AI-augmented trends are emerging. AI-driven social engineering attacks are identifying high-value targets and creating hyper-personalised phishing campaigns. Deepfakes are being used to manipulate audio and video for more convincing impersonation attacks. Malicious models are being used to generate harmful content or attack vectors at scale, and AI-enabled ransomware is becoming more adaptive and difficult to detect.

Takeaways: As the AI cybersecurity arms race intensifies, businesses must invest in both AI-powered security solutions and their human defences through training and expert services. We’re seeing a three-way competition unfold across AI-SPM, advanced AI-powered security tools, and increasingly sophisticated AI-driven attacks. This relentless cycle of innovation means companies must stay alert to developments to keep pace. The long-running growth in the cybersecurity sector shows no sign of slowing.

JOOST

Beyond fear to productivity

The narrative surrounding AI has often been tinged with fear, particularly in recent discourse. The Financial Times has been notably sceptical about the technology, cautioning against the hype and unrealistic expectations. But FT journalist Elaine Moore recently argued that using fear to drive the AI journey is not only unproductive but also sets the stage for inevitable disappointment. The early excitement surrounding AI has led to inflated expectations, and while the potential of AI is immense, it’s crucial to approach it with a balanced perspective, focusing on practical applications rather than sensationalist fears.

Amidst this broader scepticism, there are tangible examples of AI delivering on its promises. Klarna, the Swedish fintech firm, is a case in point. Klarna has integrated AI into its operations with the expectation of achieving massive efficiency gains. According to recent reports, Klarna anticipates that AI could help the company reduce operational costs by as much as 40%, while also enhancing customer service and streamlining various processes. Boss Sebastian Siemiatkowski told the BBC this week that AI-driven job cuts would mean Klarna could pay its remaining workers more.

However, it’s essential to recognise that not all AI projects will succeed as Klarna’s has. The path to integrating AI into business operations is not without its challenges, but it’s important to shift the conversation around AI from one of fear to one of opportunity. Rather than focusing on the potential disruptions AI might cause, businesses should consider how AI can complement and enhance human capabilities. AI is particularly effective in reducing workloads by automating routine tasks, which in turn allows employees to focus on more strategic and creative endeavours. This approach views AI not as a replacement for human labour, but as a tool to augment the workforce and drive greater productivity.

Moreover, AI is playing a transformative role in talent management. A recent report from McKinsey highlights how generative AI is reshaping corporate talent strategies. Companies are now using AI tools to identify skill gaps, personalise employee development plans, and even predict future workforce needs. This proactive approach not only enhances productivity but also ensures that organisations can retain and develop talent in a rapidly evolving job market.

Takeaways: While the journey of AI integration is complex and sometimes daunting, the focus should be on the potential benefits rather than allowing fear to rule. Klarna’s example demonstrates that with careful planning and implementation, AI can lead to significant gains for many existing employees.


EXO

Weekly news roundup

This week’s news highlights the growing integration of AI across various sectors, significant developments in AI governance, exciting research breakthroughs, and the continued evolution of AI hardware capabilities.

AI business news

AI governance news

AI research news

AI hardware news
