New AI Model Predicts Human Behavior With Uncanny Accuracy

By studying real humans completing tasks (such as playing chess or solving a maze), researchers have developed a way to model human behavior. They did this by estimating each person’s ‘inference budget’. Most humans think for some time, then act; how long they think before acting is their inference budget. The researchers found they could measure an individual’s budget simply by watching how long that person thought about a problem before acting.
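To make the idea concrete, here is a minimal sketch (not the authors’ code) of an agent whose inference budget is simply the number of lookahead steps it takes before committing to an action; the toy decision problem, the function names, and the rewards are all illustrative assumptions.

```python
def depth_limited_value(state, depth, transitions, reward):
    """Best reward reachable from `state` with at most `depth` further steps of lookahead."""
    if depth == 0 or state not in transitions:
        return reward.get(state, 0.0)
    return reward.get(state, 0.0) + max(
        depth_limited_value(nxt, depth - 1, transitions, reward)
        for nxt in transitions[state]
    )

def act_with_budget(state, budget, transitions, reward):
    """Pick the successor that looks best under `budget` steps of lookahead."""
    return max(
        transitions[state],
        key=lambda nxt: depth_limited_value(nxt, budget - 1, transitions, reward),
    )

# Toy decision problem (hypothetical): a short path with a modest reward and a
# longer path with a larger reward that only deeper planning can "see".
transitions = {"start": ["short", "long1"], "long1": ["long2"], "long2": ["goal"]}
reward = {"short": 1.0, "goal": 5.0}

print(act_with_budget("start", budget=1, transitions=transitions, reward=reward))  # "short"
print(act_with_budget("start", budget=3, transitions=transitions, reward=reward))  # "long1"
```

With a budget of 1 the agent cannot see the distant goal and settles for the easy, low-reward option; with a budget of 3 it plans far enough ahead to head for the larger reward.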

“At the end of the day, we saw that the depth of the planning, or how long someone thinks about the problem, is a really good proxy of how humans behave,” the researchers note.

The next step was to run their own planning model on the same problem the person faced. By comparing the model’s choices at different planning depths with the person’s observed choices, they could infer quite accurately where the human stopped planning. That value, the person’s inference budget, could then be used to predict how they would behave on similar problems.
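Building on the sketch above (this reuses `act_with_budget`, `transitions`, and `reward` from there), one simplified way to recover a latent budget is to score each candidate budget by how often a budget-limited planner reproduces the person’s observed choices and keep the best match. The paper treats the budget as a latent variable inferred probabilistically, so this maximum-agreement version is only a rough stand-in.

```python
def infer_budget(observations, candidate_budgets, transitions, reward):
    """Return the candidate budget whose depth-limited planner best matches the observed actions."""
    def agreement(k):
        return sum(
            act_with_budget(state, k, transitions, reward) == action
            for state, action in observations
        )
    # Ties break toward the first (smallest) candidate budget.
    return max(candidate_budgets, key=agreement)

# Hypothetical observations: this person keeps taking the short, easy path,
# which is best explained by a small inference budget.
observations = [("start", "short"), ("start", "short"), ("start", "short")]
print(infer_budget(observations, candidate_budgets=[1, 2, 3, 4],
                   transitions=transitions, reward=reward))  # 1
```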

The researchers tested their approach on three different tasks: inferring navigation goals from previous routes, guessing communicative intent from verbal cues, and predicting subsequent moves in human-human chess matches. On all three, their method beat current models.

If the AI agent knows, from how a person has behaved before, that the person is about to make a mistake, it could step in and offer a better way to do it. Or the agent could adapt to the weaknesses of its human collaborators.

In an example from their paper, a person is given different rewards for reaching a blue or an orange star, and the path to the blue star is always easier than the path to the orange one. As the maze grows more complex, the person starts to show a bias toward the easier path in some cases. The point at which they switch from the higher reward to the easier, lower-reward option reveals the person’s inference budget. When the system determines that a maze will demand more planning than the person’s budget allows, it might offer a hint.
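As a hypothetical illustration of that last step (the function names, thresholds, and hint text are not from the paper), an assistant could compare the planning depth a maze seems to require against the person’s estimated budget, predict which star they will settle for, and offer a hint only when the maze exceeds the budget.

```python
def expected_choice(required_depth, inference_budget):
    """Predict which star the person will go for, given how much planning the maze demands."""
    return "orange star (higher reward)" if required_depth <= inference_budget else "blue star (easier)"

def maybe_hint(required_depth, inference_budget):
    """Offer help only when the maze demands more planning than the person's budget allows."""
    if required_depth > inference_budget:
        return "Hint: the orange star is worth more than it looks from here."
    return None

estimated_budget = 3                # inferred from this person's earlier mazes
for required_depth in (2, 5):       # an easy maze, then a harder one
    print(required_depth,
          expected_choice(required_depth, estimated_budget),
          maybe_hint(required_depth, estimated_budget))
```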

Links:

  • Research paper: “Modeling Boundedly Rational Agents With Latent Inference Budgets” by Athul Paul Jacob, Abhishek Gupta, and Jacob Andreas, ICLR 2024. Available on OpenReview.
  • Article:
