Week 26 news

Welcome to our weekly news post, a combination of thematic insights from the founders at ExoBrain, and a broader news roundup from our AI platform Exo…

Themes this week

JOEL

This week we look at:

  • Anthropic’s Claude 3.5 Sonnet release and new productivity features.
  • The World Fair and rise of the AI engineer.
  • Figma’s new design features from this week’s Figma Config 24.

Anthropic’s new model and features

Anthropic surprised many with a new model launch last week, releasing a version upgrade to their Claude family just three months after their last big update. They launched Claude 3.5 in ‘Sonnet’ form (a supposedly mid-sized model), and a week in, the industry and user response has been overwhelmingly positive. The model is not only superior to all the other Claude variants, but for many has surpassed OpenAI’s GPT-4o to become the most capable AI model on the market.

Despite being seen as the big AI lab with the greatest focus on safety and cautious progress, Anthropic now appear to be pushing the frontier forward faster than anyone. Dario Amodei, Anthropic co-founder, told VentureBeat: “Claude 3.5 Sonnet is now the most capable, smartest, and cheapest model available on the market today.” It’s 2x faster than Claude 3 Opus at a 5x lower cost.
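The cost claim is easy to sanity-check. A minimal sketch, assuming the per-million-token prices published at launch ($3 input / $15 output for Claude 3.5 Sonnet, $15 input / $75 output for Claude 3 Opus — figures not stated in this post):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost in dollars for one request, with prices quoted per million tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: a 10k-token prompt producing a 2k-token answer.
sonnet = request_cost(10_000, 2_000, 3.0, 15.0)    # Claude 3.5 Sonnet (assumed pricing)
opus = request_cost(10_000, 2_000, 15.0, 75.0)     # Claude 3 Opus (assumed pricing)

print(f"Sonnet 3.5: ${sonnet:.2f}  Opus: ${opus:.2f}  ratio: {opus / sonnet:.1f}x")
```

At these prices the ratio works out to exactly 5x regardless of the input/output mix, since both rates dropped by the same factor.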

The model is particularly strong at coding and vision understanding, and we at ExoBrain can vouch for its superhuman coding skills and instant responses. The speed of feedback loops and the scope of what’s possible have dramatically increased with the new version, even if the sophistication of thought remains nearer to Claude 3 Opus levels. Social media has been full of entire games, such as working versions of Doom with auto-generated levels, being developed from a single prompt. There has been much speculation on how Anthropic has pulled this off; they have likely benefitted from the scale of compute their backer Amazon can provide, increasing the size of the model while also improving efficiency and capability with ever more carefully curated, synthetically generated data.

Whilst OpenAI have had to back-track on plans to release their controversial voice mode, Anthropic have also been busy designing new ways to interact. These centre around two interesting new concepts: ‘artifacts’ and ‘projects’. These feel much more intuitive than the chat thread has to date when working on common business tasks. Artifacts are created when you work with the model to write code or a document, for example. Instead of numerous steps in a conversation with snippets and versions of the collaborative work, the artifact window pops up and reflects the changes as you go. This feels much more organised. OpenAI launched custom GPTs last year, but they have failed to catch on. Projects is Anthropic’s version of this, and on early testing it feels a more natural approach. A Claude project can have multiple chats, each with special instructions and custom uploaded ‘knowledge’, and the artifacts being worked on in that project can be shared across threads. The approach is not yet perfect; we found in our testing that the model would often forget custom instructions. But early bugs aside, this user experience is going in a very positive direction.

Finally, Anthropic have also broken new ground with a beta of their Steering API. This offers a glimpse into the future of AI manipulation by allowing developers to influence the internal features of the language model (much like they demonstrated with their Golden Gate Claude experiment, forcing a version of Claude 3 to become entirely obsessed with the bridge). This opens up new possibilities for customisation and fine-tuning of AI outputs, and could lead to highly specialised AI assistants tailored for specific industries or tasks.

Takeaways: With Claude 3.5, projects and artifacts, we believe Anthropic have the strongest set of features on the market today. They’re available on the Pro plan and we would highly recommend you explore this option. With these new features and plans to release new models every few months, OpenAI, Microsoft and Google should be worried.

AI Engineer World Fair

The AI Engineer World Fair in San Francisco this week showcased the growth of the community of developers focused on building AI-powered products, with attendance quadrupling to 2,000 since the first such event in October last year. Sean Wang (otherwise known as Swyx), host of the Latent Space podcast, conference organiser, and author of the influential essay “Rise of the AI Engineer”, emphasised the dramatic acceleration in this new role. “A wide range of AI tasks that used to take five years and a research team to accomplish, now just require a spare afternoon,” Wang explained. This shift underscores the increasing accessibility of AI technology and the role of AI engineers in quickly translating new capabilities into products. The AI Engineer role is positioned as a link between the more research-oriented machine learning and data science roles and the more product-oriented software engineering roles. AI Engineers work primarily with existing models and APIs to create practical applications, rather than developing new ML models from scratch.

Friend of ExoBrain, AI Engineer, and agent expert Eddie Forson shares the following insights from the conference floor, highlighting the energy and excitement while noting the sense of a field still in its infancy. Eddie observed that AI agents are still unreliable, with “agents on rails” (predetermined workflows) being the safer option over unpredictable dynamic configurations. His takeaways from the conference included the critical importance of evaluation frameworks and quality assurance, and of building for the future: “Models are getting better fast. You should build with the future in mind. Imagine what you will be able to accomplish with better models in 3-6-12+ months, not now”.

Labs, startups and big tech were all present, demoing and launching a range of new features across the main tracks of RAG & LLM frameworks, open models, AI leadership, code generation and dev tools, AI in the Fortune 500, multimodality, evaluation, ops, GPUs and inference, and of course agents. Speakers brought plenty of interest and ideas on the potential for agents to transform workflows. From enhancing productivity in traditional industries to creating entirely new categories of products and services, the applications remain tantalising. The event also highlighted the growing ecosystem of tools and platforms designed to make AI development more efficient and accessible. From advanced evaluation testing to specialised cloud infrastructure, these innovations are enabling AI engineers to build and deploy solutions faster than ever before.

Takeaways: The AI Engineer World Fair provides concentrated access to a field evolving at breakneck speed. The videos on YouTube are worth your time; some are quite technical, others philosophical, but they all describe the components, trends and ideas that will make the next phase of AI products and development possible.


EXO

Figma’s new AI features

Following various AI design announcements from Adobe and Canva, this week Figma unveiled a suite of AI-powered tools at Figma Config 24 aimed at revolutionising design, presentations and product development workflows. Figma’s new AI features, currently in limited beta, promise to generate design drafts from text prompts, facilitate visual searches across team files, automate tedious tasks, and even create working prototypes from static mockups. “In a world where more software is being created and reimagined because of AI, designing and building products is everyone’s business,” said Dylan Field, Figma’s co-founder and CEO.

This development comes at a time when the integration of AI into creative workflows is rapidly accelerating. Adobe (Figma’s one-time suitor until the deal fell through), recently faced backlash over concerns about user data being used to train its Firefly AI models. In contrast, Figma has emphasised that its AI features use third-party models, and that no private customer data was used in training.

The introduction of Figma AI, alongside new tools like Figma Slides and developer-focused features, signals a broader trend of design platforms evolving into comprehensive product development ecosystems. For individual users, these AI-powered tools offer the promise of enhanced creativity and productivity. The ability to generate design drafts from text prompts or quickly prototype ideas could lower the barrier to entry for aspiring designers and entrepreneurs.

Takeaways: Looking ahead, the integration of AI into design tools is likely to accelerate the convergence of design and development processes. Figma’s new “Ready for Dev” view and Code Connect feature hint at a future where the handoff between design and implementation becomes increasingly seamless. This could lead to more rapid product development cycles but may also necessitate new approaches to project management and quality assurance.


EXO

Weekly news roundup

This week’s news highlights the rapid advancements in AI technology, from more accessible avatar creation tools to the ongoing debates around responsible AI development and deployment.

AI business news

AI governance news

AI research news

AI hardware news
