Exploring novel cognitive strategies in LLMs

JOEL

One of the most exciting aspects of AI in 2024 is that the full scope of knowledge and potential embedded in large language models (LLMs) remains largely unexplored. The ‘latent space’ of these models is a vast, rich, and uncharted territory, containing a wealth of information, skills, and cognitive strategies that could revolutionise how we interact with and get the most from AI.

One major hurdle in this exploration is the impracticality of manually evaluating novel capabilities. Traditional benchmarks and metrics, which typically rely on questions drawn from human testing, often fail to capture the full range of skills and strategies that LLMs might possess.

This is where an AI interviewer, such as Claude 3 Opus, can be a useful tool. By engaging in conversation with a new AI model, probing its knowledge and pushing the boundaries of its capabilities, a skilled AI interviewer can surface insights and strategies that might otherwise remain hidden.
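To make the idea concrete, here is a minimal sketch of an interviewer loop of the kind described above, assuming the Anthropic Python SDK. The subject model name, the system prompts, and the number of turns are placeholders for illustration, not details from our actual toolkit.

```python
# Minimal sketch: one model interviews another to probe for novel strategies.
# Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

INTERVIEWER_MODEL = "claude-3-opus-20240229"
SUBJECT_MODEL = "new-model-under-test"  # hypothetical: swap in the model being explored

INTERVIEWER_SYSTEM = (
    "You are interviewing another AI model to surface novel cognitive strategies. "
    "Ask one probing, open-ended question at a time, building on its previous answers."
)


def ask(model: str, system: str, messages: list[dict]) -> str:
    """Send a conversation to a model and return its text reply."""
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        system=system,
        messages=messages,
    )
    return response.content[0].text


def interview(turns: int = 5) -> list[tuple[str, str]]:
    """Run a short interview and return (question, answer) pairs."""
    interviewer_view: list[dict] = []  # interviewer's own turns appear as 'assistant'
    subject_view: list[dict] = []      # subject's own turns appear as 'assistant'
    log: list[tuple[str, str]] = []
    subject_reply = "Hello, I'm ready to be interviewed."

    for _ in range(turns):
        # Interviewer reads the subject's latest reply and asks its next question.
        interviewer_view.append({"role": "user", "content": subject_reply})
        question = ask(INTERVIEWER_MODEL, INTERVIEWER_SYSTEM, interviewer_view)
        interviewer_view.append({"role": "assistant", "content": question})

        # Subject answers; roles are flipped from its point of view.
        subject_view.append({"role": "user", "content": question})
        subject_reply = ask(SUBJECT_MODEL, "Answer as candidly as you can.", subject_view)
        subject_view.append({"role": "assistant", "content": subject_reply})

        log.append((question, subject_reply))
    return log
```

In practice the interviewer's questions and the subject's answers would then be reviewed, by humans or by a further analysis pass, to pick out the strategies worth adding to a toolkit.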

We’ve documented the insights we’ve uncovered so far, organising them into a toolkit and illustrating them with examples. We’ll be continuing this work, bringing these strategies to our multi-agent solutions, and will also be analysing the significant new models expected from OpenAI and Meta in the coming months.

