Exploring novel cognitive strategies in LLMs

JOEL

One of the most exciting aspects of AI in 2024 is that the full scope of knowledge and potential embedded in large language models (LLMs) remains largely unexplored. The ‘latent space’ of these models is a vast, rich, and uncharted territory, containing a wealth of information, skills, and cognitive strategies that could revolutionise how we interact with and get the most from AI.

One major hurdle in this exploration is the impracticality of manually evaluating novel capabilities. Traditional benchmarks and metrics, typically built from questions designed for human test-takers, often fail to capture the full range of skills and strategies that LLMs might possess.

This is where an AI interviewer, such as Claude 3 Opus, can be a useful tool. By engaging a new AI model in conversation, probing its knowledge and pushing the boundaries of its capabilities, a skilled AI interviewer can surface insights and strategies that might otherwise remain hidden.
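To make the idea concrete, here is a minimal sketch of such an interview loop, assuming the Anthropic Python SDK for the interviewer side. The `ask_subject()` function is a hypothetical stand-in for whichever model is being explored, and the prompts and turn count are illustrative rather than a description of our actual setup.

```python
# Minimal interviewer loop: Claude 3 Opus probes a "subject" model and keeps
# a transcript of whatever strategies it manages to surface.
# Assumes the Anthropic Python SDK with ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

INTERVIEWER_SYSTEM = (
    "You are an AI interviewer. Ask one probing question at a time, designed "
    "to surface unusual skills or cognitive strategies in the model you are "
    "interviewing. Build on its previous answers."
)

def ask_subject(prompt: str) -> str:
    """Hypothetical wrapper around the model under investigation."""
    raise NotImplementedError("Wire this to the subject model's API.")

def run_interview(turns: int = 5) -> list[dict]:
    transcript: list[dict] = []
    # The interviewer's conversation history alternates user/assistant roles,
    # as the Anthropic Messages API expects.
    history = [{"role": "user",
                "content": "Begin the interview. Ask your first question."}]
    for _ in range(turns):
        # Interviewer (Claude 3 Opus) decides what to ask next.
        response = client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=512,
            system=INTERVIEWER_SYSTEM,
            messages=history,
        )
        question = response.content[0].text

        # Subject model answers; the answer is fed back to the interviewer.
        answer = ask_subject(question)
        transcript.append({"question": question, "answer": answer})
        history.append({"role": "assistant", "content": question})
        history.append({"role": "user",
                        "content": f"The subject model replied:\n{answer}\n"
                                   "Ask your next question."})
    return transcript
```

The key design choice is that the interviewer sees the subject's full replies in its own conversation history, so each follow-up question can dig into whatever the previous answer revealed rather than working from a fixed script.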

We’ve documented and organised the insights we’ve uncovered so far into a toolkit, illustrated with examples. We’ll be continuing this work, bringing these strategies into our multi-agent solutions, and will also be analysing the significant new models expected from OpenAI and Meta in the coming months.

