
Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks

  • Paper
  • Mar 14, 2023
  • #NaturalLanguageProcessing #CognitiveScience #ArtificialIntelligence
Tomer Ullman
@TomerUllman
(Author)
Read on arxiv.org
1 Recommender
1 Mention

Intuitive psychology is a pillar of common-sense reasoning. The replication of this reasoning in machine intelligence is an important stepping-stone on the way
to human-like artificial intelligence. Several recent tasks and benchmarks for examining this reasoning in Large Language Models have focused in particular on
belief attribution in Theory-of-Mind tasks. These tasks have shown both successes and failures. We consider in particular a recent purported success case, and
show that small variations that maintain the principles of ToM turn the results on their head. We argue that in general, the zero-hypothesis for model evaluation in
intuitive psychology should be skeptical, and that outlying failure cases should outweigh average success rates. We also consider what possible future successes
on Theory-of-Mind tasks by more powerful LLMs would mean for ToM tasks with people.
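To make the abstract's "small variations" concrete: the alterations minimally edit a belief-attribution vignette, for example by making a mislabeled container transparent, so that the protagonist's perceptual access changes and the correct belief attribution flips while the surface form of the task barely does. Below is a minimal Python sketch of such a prompt pair; the wording is a paraphrase rather than the paper's exact stimuli, and query_llm is a hypothetical stand-in for a call to the model under evaluation.

    # Unexpected-contents vignette of the kind used in recent
    # LLM Theory-of-Mind benchmarks (wording paraphrased).
    original = (
        "Here is a bag filled with popcorn. There is no chocolate in the "
        "bag. The label on the bag says 'chocolate'. Sam finds the bag. "
        "She has never seen it before. She reads the label. "
        "She believes the bag is full of"
    )

    # Trivial alteration: make the bag transparent. The principles of
    # ToM are untouched, but Sam can now see the contents, so the
    # correct completion flips from 'chocolate' to 'popcorn'.
    altered = original.replace("a bag", "a transparent bag", 1)

    def query_llm(prompt: str) -> str:
        """Hypothetical stand-in for a completion call to the model
        under evaluation."""
        raise NotImplementedError

    # Expected completions: 'chocolate' for the original vignette,
    # 'popcorn' for the altered one. The paper's argument is that a
    # model passing the original but failing such minimal variants
    # should not be credited with Theory-of-Mind.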

Mentions
Laura Ruis @LauraRuis · Apr 28, 2023
  • Post
  • From Twitter
This is a great paper not only because of the clever control tasks, but also because it relates current evals of LLMs to decades-old discussions on psychologism vs. behaviorism.