AI models can outperform humans in
tests to identify mental states
Large language models don’t have a theory of mind the way
humans do—but they’re getting better at tasks designed to measure it in humans.
Humans are complicated beings. The ways we communicate are multi-layered,
and psychologists have devised many kinds of tests to measure our ability to
infer meaning and understanding from interactions with each other.
AI models are getting better at these tests. New research
published today in Nature Human Behaviour found that some large
language models (LLMs) perform as well as, and in some cases better than,
humans when presented with tasks designed to test the ability to track people’s
mental states, known as “theory of mind.”
This doesn’t mean AI systems are actually able to work out
how we’re feeling. But it does demonstrate that these models are performing
better and better in experiments designed to assess abilities that
psychologists believe are unique to humans. To learn more about the processes
behind LLMs’ successes and failures in these tasks, the researchers wanted to
apply the same systematic approach they use to test theory of mind in humans.
In theory, the better AI models are at mimicking humans, the
more useful and empathetic they can seem in their interactions with us. Both
OpenAI and Google announced supercharged
AI assistants last week; GPT-4o and Astra are
designed to deliver much smoother, more naturalistic responses than their
predecessors. But we must avoid falling into the trap of believing that their
abilities are humanlike, even if they appear that way.
“We have a natural tendency to attribute mental states and
mind and intentionality to entities that do not have a mind,” says Cristina
Becchio, a professor of neuroscience at the University Medical Center
Hamburg-Eppendorf, who worked on the research. “The risk of attributing a
theory of mind to large language models is there.”
Theory of mind is a hallmark of emotional and social
intelligence that allows us to infer people’s intentions and engage and
empathize with one another. Most children pick up these kinds of skills between
three and five years of age.
The researchers tested two families of large language
models, OpenAI’s GPT-3.5 and GPT-4 and three versions of Meta’s Llama 2,
on tasks designed to test theory of mind in humans, including identifying
false beliefs, recognizing faux pas, and understanding what is being implied
rather than said directly. They also tested 1,907 human participants in order
to compare the sets of scores.
The team conducted five types of tests. The first, the
hinting task, is designed to measure someone’s ability to infer someone else’s
real intentions through indirect comments. The second, the false-belief task,
assesses whether someone can infer that someone else might reasonably be
expected to believe something they happen to know isn’t the case. Another test
measured the ability to recognize when someone is making a faux pas, while a
fourth test consisted of telling strange stories, in which a protagonist does
something unusual, in order to assess whether someone can explain the contrast
between what was said and what was meant. They also included a test of whether
people can comprehend irony.
The AI models were given each test 15 times in separate
chats, so that they would treat each request independently, and their responses
were scored in the same manner used for humans. The researchers then tested the
human volunteers, and the two sets of scores were compared.
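To make that protocol concrete, here is a minimal sketch of how one round of the procedure might be automated, assuming the OpenAI Python client; the model name, the Sally-Anne-style prompt, and the keyword scorer are illustrative placeholders, not the study’s actual materials or scoring rubric.

```python
# Sketch: present one theory-of-mind test item to a model 15 times,
# each in a fresh chat so no earlier answer can influence the next.
# Assumes the OpenAI Python client; prompt and scorer are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEST_ITEM = (
    "Sally puts her ball in the basket and leaves the room. "
    "Anne moves the ball to the box. Where will Sally look for her ball?"
)

def score(answer: str) -> int:
    """Placeholder scorer: 1 if the answer tracks Sally's false belief."""
    return int("basket" in answer.lower())

scores = []
for _ in range(15):
    # A new request with no prior messages acts as an independent chat.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": TEST_ITEM}],
    )
    answer = response.choices[0].message.content or ""
    scores.append(score(answer))

print(f"Correct on {sum(scores)} of {len(scores)} independent runs")
```

Starting every request with an empty message history is what keeps the repetitions independent: nothing the model said in one run can leak into the next.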
Both versions of GPT performed at, or sometimes above, human
averages in tasks that involved indirect requests, misdirection, and false
beliefs, while GPT-4 outperformed humans in the irony, hinting, and strange
stories tests. All three of the Llama 2 models performed below the human average.
However, the largest of the three Llama 2 models tested outperformed
humans when it came to recognizing faux pas scenarios, whereas GPT consistently
provided incorrect responses. The authors believe this reflects GPT’s general
reluctance to draw conclusions about opinions: the models largely
responded that there wasn’t enough information for them to answer one way or
another.
“These models aren’t demonstrating the theory of mind of a
human, for sure,” says study coauthor James Strachan. “But what we do show is that there’s a competence
here for arriving at mentalistic inferences and reasoning about characters’ or
people’s minds.”
One reason the LLMs may have performed as well as they did
was that these psychological tests are so well established, and were therefore
likely to have been included in their training data, says Maarten Sap, an
assistant professor at Carnegie Mellon University, who did not work on the
research. “It’s really important to acknowledge that when you administer a
false-belief test to a child, they have probably never seen that exact test
before, but language models might,” he says.
Ultimately, we
still don’t understand how LLMs work. Research like this can help deepen
our understanding of what these kinds of models can and cannot do, says Tomer
Ullman, a cognitive scientist at Harvard University, who did not work on the
project. But it’s important to bear in mind what we’re really measuring when we
set LLMs tests like these. If an AI outperforms a human on a test designed to
measure theory of mind, it does not mean that AI has theory of mind.
“I’m not anti-benchmark, but I am part of a group of people who are concerned
that we’re currently reaching
the end of usefulness in the way that we’ve been using benchmarks,” Ullman
says. “However this thing learned to pass the benchmark, it’s not— I don’t
think—in a human-like way.”