Exploring novel cognitive strategies in LLMs

JOEL

One of the most exciting aspects of AI in 2024 is that the full scope of knowledge and potential embedded in large language models (LLMs) remains largely unexplored. The ‘latent space’ of these models is a vast, rich, and uncharted territory, containing a wealth of information, skills, and cognitive strategies that could revolutionise how we interact with and get the most from AI.

One major hurdle in this exploration is the impracticality of manually evaluating novel capabilities. Traditional benchmarks and metrics, typically built from questions drawn from human testing, often fail to capture the full range of skills and strategies that LLMs might possess.

This is where the use of an AI interviewer, such as Claude 3 Opus, can be a useful tool. By engaging in conversation with a new AI model, probing its knowledge and pushing the boundaries of its capabilities, a skilled AI interviewer can surface insights and strategies that might otherwise remain hidden.
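The interviewer loop described above can be sketched in a few lines. This is a hypothetical illustration, not our actual pipeline: the two query functions are stand-ins for real API calls (for example, to an interviewer model such as Claude 3 Opus and to the model under study), and all names here are assumptions.

```python
# Hypothetical sketch of an AI-interviewer loop: one model (the
# "interviewer") proposes probing questions, another (the "subject")
# answers, and the transcript accumulates for later analysis.
# Both query functions below are placeholders for real model calls.

def ask_interviewer(transcript):
    """Stand-in for the interviewer model: propose the next probe."""
    n = len(transcript) // 2 + 1
    return f"Probe {n}: describe an unusual strategy you could use."

def ask_subject(question):
    """Stand-in for the model under study: answer the probe."""
    return f"Answer to: {question}"

def interview(turns=3):
    """Run a short interview and return the (role, text) transcript."""
    transcript = []
    for _ in range(turns):
        question = ask_interviewer(transcript)
        transcript.append(("interviewer", question))
        transcript.append(("subject", ask_subject(question)))
    return transcript

if __name__ == "__main__":
    for role, text in interview():
        print(f"{role}: {text}")
```

In practice the interviewer would see the full transcript each turn and adapt its probes, which is what lets it push into territory fixed benchmarks never reach.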

We’ve documented and organised the insights we’ve uncovered so far into a toolkit, illustrated with examples. We’ll be continuing this work, bringing these strategies into our multi-agent solutions, and will also be analysing the significant new models expected from OpenAI and Meta in the coming months.

