![](https://crypto4nerd.com/wp-content/uploads/2023/10/1DZ2qXGhie-dwOg_DqcKfLg.jpeg)
A new paper from researchers at Google DeepMind, USC, CMU, and UChicago proposes an intriguing new benchmark for evaluating whether AI systems can reason about others’ mental states and use that information to guide actions [1].
As its title suggests, “How Far Are Large Language Models From Agents With Theory-of-Mind?” argues that while today’s most advanced models may show some capacity for “theory of mind” reasoning, they still fall far short of human-like common sense when it comes to connecting inferences about beliefs and intentions to pragmatic action [1].
Theory of mind (ToM) refers to the human ability to attribute mental states like beliefs, desires, and intentions to others, and to use those attributions to understand and predict behavior.
As the researchers point out, ToM is fundamental to human intelligence and social interaction. We constantly make inferences about what other people know, want, and believe as a way of deciding how to act around them [1].
For example, say your friend Anne is looking for her backpack. You know Anne’s goal is to find her backpack.
But you also know the backpack is currently in the kitchen, even though Anne believes it is still in the office where she left it this morning.
Your understanding of the mismatch between Anne’s belief and the actual state of the world allows you to offer useful information — suggesting she look in the kitchen.
Without ToM, you wouldn’t realize that Anne is operating on outdated assumptions that prevent her from achieving her goal.
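One way to make the reasoning in this example concrete is to view it as a comparison between the true state of the world and the agent’s belief about it. The sketch below is purely illustrative and is not from the paper or its benchmark; the names `world_state`, `anne_belief`, and `suggest_action` are hypothetical. It simply flags the mismatch between where Anne believes the backpack is and where it actually is, and turns that false belief into a helpful suggestion.

```python
# Minimal illustration of theory-of-mind reasoning as belief tracking.
# All names here (world_state, anne_belief, suggest_action) are hypothetical
# and are not part of the paper's benchmark or code.

world_state = {"backpack": "kitchen"}   # where the backpack actually is
anne_belief = {"backpack": "office"}    # where Anne thinks it is
anne_goal = ("find", "backpack")        # Anne's goal: find the backpack

def suggest_action(world, belief, goal):
    """Offer help only when the agent's belief about the goal object
    diverges from the true world state (a 'false belief')."""
    _, item = goal
    if belief.get(item) != world.get(item):
        return f"Tell Anne the {item} is in the {world[item]}, not the {belief[item]}."
    return "No help needed; Anne's belief matches reality."

print(suggest_action(world_state, anne_belief, anne_goal))
# -> Tell Anne the backpack is in the kitchen, not the office.
```

The point of the toy check is the same one the example makes in prose: useful help depends not just on knowing where the backpack is, but on modeling what Anne believes and acting on the gap between the two.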
So ToM reasoning allows us to offer each other help and information in ways that are sensitive to what each of us currently knows and believes.
This ability to leverage ToM to determine pragmatic actions is central to human collaboration and communication.