AI model of human decision making

Can AI Predict Your Risky Choices? New Model Mimics Human Decision-Making

"Researchers train AI on arithmetic to understand how we weigh risks and rewards, offering insights into financial and personal choices."


For years, scientists have been fascinated by the parallels between how humans and Artificial Intelligence (AI) models make decisions. Large Language Models (LLMs), like those powering chatbots, often show patterns similar to human behavior when faced with cognitive tasks. This has sparked interest in using LLMs as tools to model human cognition, but it also raises some fundamental questions.

One of the biggest hurdles is that LLMs are trained on massive datasets, far exceeding the information a typical person encounters. Additionally, these models may have been exposed to data directly related to specific cognitive tasks, or even aligned with human preferences, leading to questions about the origins of the similarities between AI and human decision-making.

Now, a new approach is being proposed to enhance the utility of LLMs as cognitive models. This involves training AI on computationally equivalent tasks that both the AI and a rational agent need to master. By examining the specific task distributions required for an LLM to exhibit human-like behaviors, researchers hope to gain a deeper understanding of how we make choices, particularly when it comes to risk and reward.

Decoding Risky Choices: The Arithmetic Connection


At the heart of decision-making, especially when dealing with risk or future rewards, lies the ability to calculate expectations. Whether it's assessing the potential payoff of a gamble or weighing the value of a reward received later versus one received today, these calculations are fundamental.

To assess the benefits of engaging in a gamble, an intelligent system must be able to calculate the expected value (EV) of the gamble, typically represented as:

EV(A) = ∑ᵢ pᵢ × xᵢ

where each outcome i of gamble A is associated with a payoff xᵢ and a probability pᵢ, subject to the constraint that ∑ᵢ pᵢ = 1. Similarly, in considering an intertemporal choice, the computation of the present value (PV) of the future outcomes in A is crucial:

PV(A) = ∑ₜ dᵗ × xₜ

where each value xₜ is realized at time t and is discounted by a factor dᵗ, reflecting the time preference of the decision-maker. Note that a risk-neutral and time-consistent agent should always select the option that maximizes EV and PV. However, extensive research in economics and psychology demonstrates that people systematically deviate from this maximizer model.
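These two quantities are simple to compute directly. The following is an illustrative sketch, not the paper's code; the function names and the example numbers are chosen here for demonstration:

```python
# Expected value of a gamble and present value of a delayed payoff stream,
# matching the EV and PV definitions above.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * x for p, x in outcomes)

def present_value(payoffs, d):
    """payoffs: list of (t, payoff) pairs; d: per-period discount factor."""
    return sum((d ** t) * x for t, x in payoffs)

# A gamble paying $100 with probability 0.3, otherwise $10:
ev = expected_value([(0.3, 100.0), (0.7, 10.0)])  # mathematically 0.3*100 + 0.7*10 = 37

# $100 received two periods from now, discounted at d = 0.9:
pv = present_value([(2, 100.0)], 0.9)  # mathematically 0.81 * 100 = 81
```

A risk-neutral, time-consistent agent would simply compare these numbers across options and pick the maximum.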

While the rational choice model suggests we should always pick the option with the highest expected value, humans often deviate from this norm due to a variety of psychological factors.

Researchers hypothesized that if an LLM were trained on these types of calculations, it might start to exhibit human-like decision patterns. Moreover, they wondered whether deviations from rational choice could be explained by computational errors during the EV or PV calculations. To test this idea, they generated a series of synthetic datasets containing mathematical equations relevant to risky and intertemporal choices.
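The paper's exact dataset construction is not reproduced here, but the basic idea, arithmetic strings that state an EV calculation together with its answer, can be sketched as follows (the string format, value ranges, and rounding are assumptions for illustration):

```python
import random

# Hypothetical sketch of generating synthetic EV-style training strings;
# the actual dataset construction in the paper may differ.

def make_ev_example(rng, n_outcomes=2):
    # Draw probabilities that sum to 1 (two decimal places) and integer payoffs.
    raw = [rng.random() for _ in range(n_outcomes)]
    total = sum(raw)
    probs = [round(r / total, 2) for r in raw]
    probs[-1] = round(1.0 - sum(probs[:-1]), 2)  # force the probabilities to sum to 1
    payoffs = [rng.randint(1, 100) for _ in range(n_outcomes)]
    ev = sum(p * x for p, x in zip(probs, payoffs))
    lhs = "+".join(f"{p}*{x}" for p, x in zip(probs, payoffs))
    return f"{lhs}={round(ev, 2)}"

rng = random.Random(0)
examples = [make_ev_example(rng) for _ in range(3)]
# Each string looks like "0.3*40+0.7*12=20.4" (illustrative format,
# not the actual output for this seed).
```

A language model trained on a large corpus of such strings is, in effect, trained on exactly the computation that rational choice requires.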

Implications for AI and Understanding the Human Mind

This research marks an important step towards creating more accurate and human-like AI models of cognition. By focusing on the fundamental computations underlying decision-making, scientists can gain a better understanding of how we weigh risks and rewards. This approach is not limited to just risky choices; it can be extended to other cognitive tasks that rely on language and reasoning. Ultimately, this work could pave the way for AI systems that not only mimic human behavior, but also offer insights into the inner workings of the human mind.

About this Article

This article was crafted using a human-AI hybrid and collaborative approach. AI assisted our team with initial drafting, research insights, identifying key questions, and image generation. Our human editors guided topic selection, defined the angle, structured the content, ensured factual accuracy and relevance, refined the tone, and conducted thorough editing to deliver helpful, high-quality information. See our About page for more information.

This article is based on research published under:

DOI-LINK: https://doi.org/10.48550/arXiv.2405.19313

Title: Language Models Trained To Do Arithmetic Predict Human Risky And Intertemporal Choice

Subject: cs.AI cs.CL econ.GN q-fin.EC

Authors: Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths

Published: 29-05-2024

Everything You Need To Know

1. How can AI, trained on arithmetic, help us understand human decision-making?

By training Artificial Intelligence (AI) on arithmetic, specifically calculations of Expected Value (EV) and Present Value (PV), researchers can observe whether the AI mimics human behavior in risky and intertemporal choices. The study focuses on how the AI deviates from rational choice models, offering insights into the cognitive processes behind human decisions. This approach helps to explain how humans weigh risks and rewards, a link that is hard to establish with models trained on massive, uncurated datasets.

2. What is the significance of Expected Value (EV) and Present Value (PV) in the context of decision-making?

Expected Value (EV) and Present Value (PV) are fundamental calculations in decision-making, especially when considering risks and rewards. EV helps to assess the potential payoff of a gamble, calculated as the sum of each outcome's payoff multiplied by its probability. PV is used to compute the present value of future outcomes, discounted by a factor that reflects time preference. Risk-neutral and time-consistent agents should always select the option that maximizes EV and PV.

3. Why do humans often deviate from the rational choice model that focuses on maximizing Expected Value (EV) and Present Value (PV)?

Humans often deviate from the rational choice model due to various psychological factors. The article mentions the possibility that deviations from rational choice could be explained by computational errors during the EV or PV calculations. This research aims to understand these deviations and suggests that an AI trained on EV and PV calculations might exhibit similar patterns to human decision-making, offering insights into why people don't always choose the option with the highest expected value.
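One way to see how computational error alone can produce such deviations is a small simulation. This is an illustrative assumption, not the model from the paper: add noise to each EV estimate and count how often the lower-EV option wins.

```python
import random

# If an agent's EV calculations are corrupted by noise, it will sometimes
# pick the option with the lower true expected value.

def noisy_choice(ev_a, ev_b, noise_sd, rng):
    """Return 'A' or 'B' based on noise-corrupted EV estimates."""
    est_a = ev_a + rng.gauss(0.0, noise_sd)
    est_b = ev_b + rng.gauss(0.0, noise_sd)
    return "A" if est_a >= est_b else "B"

rng = random.Random(42)
trials = 10_000
# Option A has true EV 37, option B has true EV 35; each estimate gets
# Gaussian noise with standard deviation 5.
picks_b = sum(noisy_choice(37.0, 35.0, noise_sd=5.0, rng=rng) == "B"
              for _ in range(trials))
share_b = picks_b / trials
# With this much noise, the lower-EV option B wins a sizable minority of trials.
```

The closer the two options' true values and the noisier the computation, the more often the "irrational" choice appears, even though the agent is trying to maximize EV.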

4. How does the new approach of training AI on specific computational tasks differ from previous methods using Large Language Models (LLMs)?

The new approach focuses on training AI on specific computationally equivalent tasks that both the AI and a rational agent need to master, such as EV and PV calculations. Previous methods often used Large Language Models (LLMs) trained on massive datasets, which led to questions about the origins of the similarities between AI and human decision-making. This new approach aims to enhance the utility of LLMs as cognitive models by examining the specific task distributions required for an LLM to exhibit human-like behaviors, leading to a deeper understanding of how we make choices.

5. What are the potential implications of this research for both AI development and understanding the human mind?

This research marks an important step towards creating more accurate and human-like AI models of cognition. By focusing on the fundamental computations underlying decision-making, scientists can gain a better understanding of how we weigh risks and rewards. Ultimately, this work could pave the way for AI systems that not only mimic human behavior, but also offer insights into the inner workings of the human mind. This approach is not limited to risky choices; it can be extended to other cognitive tasks that rely on language and reasoning.
