Week 25 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

This week we look at:

  • OpenAI’s former chief scientist Ilya Sutskever launches Safe Superintelligence Inc. to tackle AI alignment
  • Film-makers and audiences alike grapple with AI-generated content amid fears of creative obsolescence
  • Covert AI surveillance analyses UK train passengers’ emotions without consent

What Ilya did next

Ilya Sutskever, the prominent AI researcher and co-founder of OpenAI, has announced the launch of his new venture, Safe Superintelligence Inc. (SSI). The news follows Sutskever’s recent departure from OpenAI, his disappearance from public life, and his long-running struggle with the dilemmas of creating world-changing technology.

Sutskever, widely regarded as perhaps the most gifted researcher in the world of AI, has been at the forefront of the field for years. He had a childhood fascination with AI, and an early intuition that neural networks, the software structures that underpin AI, were underperforming simply because they were too small; with significantly larger systems, he believed, unprecedented capabilities would emerge. This insight has proven crucial to the development of modern AI. He was involved in the early repurposing of GPU technology to run neural networks, in some ways setting in train the journey that has seen Nvidia become the world’s most valuable company this week. He worked at Google in the mid-2010s at the forefront of AI research, then left to co-found OpenAI, where he was instrumental in the creation of ChatGPT.

However, the past year has been challenging for Sutskever. The Altman affair of November 2023, in which he cast the deciding vote in the temporary ousting of OpenAI CEO Sam Altman, put him under immense pressure. In one of his last interviews before those events, he expressed concerns about the rapid pace of AI advancement and the need for stronger safety measures. “If you allow yourself, if you just accept that the progress that we’ve seen, maybe it will be slower, but it will continue,” Sutskever warned, emphasising the importance of developing the science needed to control future intelligence. In the weeks leading up to the November affair, Sutskever tweeted a cryptic message: “if you value intelligence above all other human qualities, you’re gonna have a bad time.” This tweet, coupled with his subsequent disappearance from public life and eventual split from OpenAI, has sparked fevered speculation about what he might have witnessed or realised about the trajectory of development.

His involvement in the boardroom upheaval was closely tied to his growing unease about the company’s increasing ‘product’ and commercial focus, and its misalignment with his ethical ideals. As an advocate for responsible development, Sutskever argued that safety and alignment work should be funded by the fruits of commercialisation and growth. This divergence is set to widen, with recent stories suggesting that OpenAI will drop its capped-profit status in the coming months. Sutskever’s concerns about the availability of compute resources at OpenAI for his now defunct ‘super alignment’ project played a significant role in his decision to leave and launch SSI. Alignment, in the context of AI, refers to the challenge of ensuring that artificial intelligence systems behave in ways consistent with human values, goals, and intentions, or, as Sutskever puts it, are ‘pro-social’. As AI systems become increasingly powerful and autonomous, the risk of misalignment (AI acting in ways detrimental to human interests) grows. ‘Super alignment’ is the even more daunting task of aligning AI systems that are more intelligent than humans, ensuring that their actions and decisions remain beneficial to humanity even as they surpass our own cognitive capabilities.

The key challenge Sutskever identifies is that future systems will be capable of extremely complex and creative behaviours that humans will find difficult to reliably supervise, making humans “weak supervisors.” To address this, some propose an analogy that can be studied empirically today: using smaller, less capable models to supervise larger, more powerful ones. By encouraging the stronger model to be confident and to disagree with its weak supervisor where appropriate, OpenAI’s now disbanded team showed promising results, allowing the more powerful model to perform near-optimally whilst still being under weaker supervision (see the sketch below). While many limitations and open questions remain, this approach provides a framework for making empirical progress now, in advance of future major developments.
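To make the idea concrete, here is a minimal PyTorch sketch of a weak-to-strong training step. It is loosely modelled on the auxiliary-confidence loss described in OpenAI’s published weak-to-strong generalisation work, but the function names, the alpha weighting, and the stand-in models are illustrative assumptions rather than the team’s actual implementation.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(strong_logits, weak_labels, alpha=0.5):
    """Auxiliary-confidence loss (illustrative): blend imitation of the
    weak supervisor with the strong model's own hardened predictions,
    so the strong model can confidently disagree with its supervisor."""
    # Term 1: imitate the weak supervisor's (possibly noisy) labels.
    weak_term = F.cross_entropy(strong_logits, weak_labels)
    # Term 2: reinforce the strong model's own most confident answers.
    hardened = strong_logits.argmax(dim=-1)
    self_term = F.cross_entropy(strong_logits, hardened)
    return (1 - alpha) * weak_term + alpha * self_term

def train_step(strong_model, weak_model, inputs, optimiser, alpha=0.5):
    """One illustrative training step: the small model supervises the large one."""
    with torch.no_grad():
        weak_labels = weak_model(inputs).argmax(dim=-1)  # imperfect supervision
    strong_logits = strong_model(inputs)
    loss = weak_to_strong_loss(strong_logits, weak_labels, alpha)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

The second loss term is the important design choice: rather than simply imitating a noisier teacher, the strong model is rewarded for sticking with its own confident predictions, which is what allows it to outperform the supervisor that trained it.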

Sutskever’s new venture, SSI, is a renewed attempt to tackle the challenge, and echoes the idea behind the founding of OpenAI: to ensure there are ethical competitors in the race to artificial superintelligence (ASI). The company’s mission is to “build safe superintelligence,” which Sutskever describes as “the most important technical problem of our time.” Their website reads: “Our business model means safety, security, and progress are all insulated from short-term commercial pressures.” SSI is co-founded by Daniel Gross, a former AI head at Apple, and Daniel Levy, who previously worked at OpenAI. Of course, the work of SSI, and the word “build” in its mission, will be an accelerant in the race towards superintelligence, as well as no doubt hugely valuable if successful. This is the fundamental dilemma faced by many ethically minded AI researchers: whether to actively contribute to the development of the state of the art, or to abstain from the field altogether to avoid hastening progress. The strongest argument for the former, and getting involved, is that the power of compute, GPUs, and datacentres is skyrocketing, meaning an ‘overhang’ effect could kick in: slower progress today would result in unusually fast progress in the future when that computational power is unlocked, causing rapid change and more widespread economic and social destabilisation. A more gradual build-up is likely to be less disruptive, giving everyone more time to adapt.

Takeaways: There are few individuals we should take as seriously as Ilya Sutskever when it comes to the future of AI. His groundbreaking contributions and exposure to the latest research make him one of the most significant technical voices. However, it is his genuine awareness of, and indeed public struggles with, the profound implications of his work that mean we should follow his journey closely.

JOOST

AI hasn’t killed the video-star… yet

The recent cancellation of the premiere of “The Last Screenwriter” at the Prince Charles Cinema has reignited the debate over the role of artificial intelligence in the film industry. The movie, which featured an AI-generated script, faced significant backlash, highlighting the fears, existential angst, and uncertainties surrounding AI’s encroachment into creative domains traditionally dominated by humans.

This incident is reminiscent of the turmoil witnessed in the music industry, where AI has already begun to reshape the landscape. Our previous article on the disruption in the music industry (ExoBrain, April 2024) discussed how AI-generated compositions are challenging the very essence of musical creativity. The parallels with the film industry are striking and suggest a broader cultural resistance to the idea that creativity can be automated.

The Hollywood writers’ strike of 2023 underscored this anxiety. The Writers Guild of America made it clear that the threat of AI-generated scripts was a significant concern. Their fears are not unfounded; the rapid advancements in AI technology pose a potential existential threat to screenwriters, directors, and even actors. But does the exclusion of AI from the creative process guarantee better movies? Or is it merely a reflection of our deep-seated belief that creativity is a uniquely human trait?

The disqualification of a photograph from a prestigious AI image contest last week serves as a reminder of the tremendous power and potential of human creativity. The winning photograph, later revealed to be the work of a human artist, was initially mistaken for an AI creation. This incident suggests that while AI can mimic artistic styles, it is the human touch that imbues art with meaning and emotional depth.

However, dismissing AI’s role in the creative process would be shortsighted. The leaps made in video generation technology, such as those demonstrated by Runway’s latest AI video generator, indicate that we are on the cusp of a fundamental disruption in visual content creation. AI can bring to life fantastical elements that would be impossible or prohibitively expensive to create using traditional methods.

The evolution of electronic music in the mid-1980s provides a useful historical analogy. Initially met with scepticism and resistance, electronic music eventually gained widespread acceptance and became a dominant genre. Similarly, AI-generated content is poised to challenge and potentially transform the film industry. In the near future, it is conceivable that anyone with a computer and a creative spark could produce a feature-length film from their bedroom.

This democratisation of filmmaking presents both a threat and an opportunity. Established players – like Hollywood fat cats – will view it as an existential threat to their business models and the highly paid positions they protect. For aspiring filmmakers and creatives outside the traditional power structures, however, this technological shift could be revolutionary.

The future of the film industry will likely be a blend of human and artificial creativity. AI can serve as a tool to augment human ingenuity, enabling filmmakers to push the boundaries of what is visually and narratively possible. Rather than replacing human creators, AI could become a collaborator, providing new avenues for storytelling and artistic expression.

Takeaways: While the fear of AI’s encroachment into the film industry is understandable, it is essential to recognise the potential benefits and opportunities this technology brings. By embracing AI as a creative partner rather than an adversary, the film industry can evolve and thrive in this new era of technological innovation. The challenge lies in finding the right balance: ensuring that the human element remains at the heart of storytelling while leveraging AI to enhance and expand the creative possibilities.

JOEL

AI’s unwanted gaze

In the UK this week, it was revealed that thousands of train passengers have had their faces scanned by Amazon-hosted AI systems that analyse demographics and emotions – without consent or public discussion. According to documents obtained by the civil liberties group Big Brother Watch through a freedom of information request, Network Rail, a government-owned company, conducted a pilot scheme at major UK train stations in 2022. The scheme used AI-powered cameras to analyse passengers’ emotions, determining whether they appeared happy, sad, or angry.

Jake Hurfurt, Head of Research & Investigations at Big Brother Watch, said: “Network Rail had no right to deploy discredited emotion recognition technology against unwitting commuters at some of Britain’s biggest stations, and I have submitted a complaint to the Information Commissioner about this trial. It is alarming that as a public body it decided to roll out a large-scale trial of Amazon-made AI surveillance in several stations with no public awareness, especially when Network Rail mixed safety tech in with pseudoscientific tools and suggested the data could be given to advertisers”.

It will come as a surprise to many that there is no specific regulation or legal restriction on this kind of large-scale surveillance: in the UK, the use of facial recognition technology and emotion analysis in public spaces remains unregulated. While the Data Protection Act 2018 and the UK General Data Protection Regulation set out general principles for the processing of personal data, no laws address the unique challenges posed by AI-powered surveillance systems.

The UK and EU have taken divergent paths on the regulation of AI biometric surveillance. The EU has passed the AI Act, which will heavily restrict AI surveillance, banning unwarranted use and imposing limits on its use by law enforcement. In contrast, the UK government had planned to expand AI surveillance for policing purposes, despite concerns raised by the former Biometrics and Surveillance Camera Commissioner about its reliability and fairness. The UK’s Data Protection and Digital Information Bill, which would have made significant changes to data protection regulations and to the oversight of biometric data, was dropped ahead of the July general election and will now never see the light of day. The future of AI surveillance regulation in the UK will depend on the priorities of the next government.

The situation is further complicated by the fact that the UK’s data protection authority, the Information Commissioner’s Office (ICO), has limited powers to enforce regulations against companies based outside the country, as demonstrated by the overturned Clearview AI fine. This gap leaves UK citizens vulnerable to having their personal data collected and processed by AI systems without their knowledge or consent.

Meanwhile, AI researchers this week raised the alarm about weaknesses in machine perception. Experiments have shown that making tiny tweaks to images can cause even highly capable AI systems to confidently perceive things that aren’t there. Worryingly, these ‘adversarial attacks’ are getting better at targeting specific AI behaviours – fooling systems into seeing, or not seeing, whatever the attacker wants. As AI-driven surveillance and monitoring proliferates, the implications of these perceptual weaknesses are a real concern.
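To illustrate just how small these tweaks can be, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest and best-known ways of constructing adversarial perturbations. The model, label, and epsilon value are illustrative stand-ins, not any specific attack from this week’s research.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel a tiny step in the
    direction that most increases the model's loss, yielding an image
    that looks unchanged to a human but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by the sign of its gradient, then clamp so the
    # result remains a valid image in the [0, 1] pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With epsilon around 0.03 on images scaled to [0, 1], the perturbation is typically invisible to the human eye yet can flip a classifier’s output, which is precisely why surveillance systems built on machine perception inherit these weaknesses.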

Takeaways: The deployment of AI surveillance technologies, such as facial recognition and emotion analysis, requires much greater transparency from the companies and organisations implementing them. Citizens should demand comprehensive regulations that mandate strict security measures, built-in rights safeguards, and robust oversight to govern the use of AI surveillance in both the public and private sectors. Legislation must be technology-neutral and adaptable enough to close loopholes.


EXO

Weekly news roundup

This week’s news highlights the rapid advancements in AI across various industries, the ongoing debates around responsible AI development, and the growing competition in the AI hardware and research landscape.

AI business news

AI governance news

AI research news

AI hardware news
