(Intuitive statistics; ecological rationality; optimal foraging; heuristics and biases; fast & frugal heuristics)
Are humans rational?
By what standard should we decide whether humans are rational? For an evolutionary-functional perspective, see:
Cosmides, L., & Tooby, J. (2021). Adaptationism: A metanormative theory of rationality. In M. Knauff & W. Spohn (Eds.), Handbook of Rationality (pp. 633–644). MIT Press.
Is it true that our minds are not equipped to reason well about probabilities? See:
Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58(1), 1–73.
Brase, G. L., Cosmides, L., & Tooby, J. (1998). Individuation, counting, and statistical inference: The role of frequency and whole-object representations in judgment under uncertainty. Journal of Experimental Psychology: General, 127, 1–19.
Is “loss aversion” real? What about “ambiguity aversion”? See:
Rode, C., Cosmides, L., Hell, W., & Tooby, J. (1999). When and why do people avoid unknown probabilities in decisions under uncertainty? Testing some predictions from optimal foraging theory. Cognition, 72(3), 269–304.
Judgment under uncertainty:
How have psychologists been defining rationality?
An influential position has been that human thought is rational to the extent that it conforms to normative theories drawn from mathematics and logic. During the 1970s and 1980s, Amos Tversky, Daniel Kahneman, and their colleagues pursued this approach, developing experiments to see whether people’s responses on tasks requiring probability judgments conformed to various normative theories of probability: Bayes’ rule (for calculating conditional probabilities), the law of large numbers, and, more generally, subjective expected utility theory and the Savage axioms. Compared to these normative theories, people’s performance looked irrational (or, at least, poorly designed).
A large body of research, known as the heuristics-and-biases school, emerged to demonstrate how people’s judgments deviate from the dictates of probability theory. The claim was that the human mind does not contain any algorithms that embody aspects of a calculus of probability: probability judgments are instead made by a collection of rules of thumb whose outputs do not closely track any normative theory. People were said to rely on these quick rules of thumb because the mind is so limited in its ability to process information (by analogy to a computer with very little RAM).
Yet behavioral ecologists who study foraging in insects and birds had already found evidence of mechanisms that make rather accurate probability judgments, e.g., judgments that satisfy the constraints of Bayes’ rule, as well as normatively appropriate risk-sensitive foraging decisions. How can the fact that bird and insect brains make well-calibrated probability judgments be reconciled with a human mind that is “too limited”?
Absolute frequencies: An ecologically valid format.
Humans, like other animals, evolved as foragers. Thus we thought it unlikely that they would be substantially worse “intuitive probabilists” than other animals. Our initial work in this area (Cosmides & Tooby, 1996), along with that of Gerd Gigerenzer, showed that the format in which probabilistic information is presented creates huge differences in performance. People appear to be poor at Bayesian reasoning (just as Tversky and Kahneman claimed) when they are given problems that express the relevant information as percentages and ask them to judge the probability of a single event (e.g., “What is the chance that a person who tests positive for the disease actually has it?”). But a different picture emerges when information is presented in a more ecologically valid format.
Probabilities of single events are a byproduct of modern statistical and data-gathering techniques (ever wonder what a “60% chance of rain today” means?). No hunter-gatherer ever encountered the probability of a single event (it either rains today or it doesn’t). Instead, hunter-gatherers encountered statistical information in the same format that other species do: as the actual frequencies of encountered events (e.g., it rained on 6 of the last 10 days, not “there is a 60% chance of rain today”). If so, then mechanisms designed to make well-calibrated statistical judgments might exist, but they might require information in a format they can “read” before they will give the right output.
To test this, we translated the same standard problems into frequency formats and asked for the answer as a frequency (e.g., how many people who test positive for the disease actually have it? ___ out of ___) rather than as the probability of a single event. Whereas only 12-36% of untutored undergraduates gave the correct Bayesian response when the problem was phrased in the single-event format, 76-92% gave the right answer when the problem had a frequency format.
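To make the contrast concrete, here is a minimal sketch in Python. The disease base rate, hit rate, and false-alarm rate below are invented for illustration (they are not the figures from Cosmides & Tooby, 1996). The point is that the single-event and frequency formats are mathematically equivalent: the same Bayesian answer falls out of either one, and the difference reported above lies only in how easily people compute it.

```python
# Single-event / percent format: apply Bayes' rule directly.
base_rate = 0.01          # 1% of people have the disease (invented number)
hit_rate = 1.00           # assume every sick person tests positive
false_alarm_rate = 0.10   # 10% of healthy people also test positive

p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
p_sick_given_positive = (base_rate * hit_rate) / p_positive
print(f"P(disease | positive test) = {p_sick_given_positive:.3f}")   # ~0.092

# Frequency format: "Out of 1,000 people, 10 have the disease..."
population = 1000
sick = 10                                   # 1% of 1,000
true_positives = sick                       # all of them test positive
false_positives = round((population - sick) * false_alarm_rate)   # 99 healthy positives

print(f"{true_positives} out of {true_positives + false_positives} people "
      f"who test positive have the disease "
      f"({true_positives / (true_positives + false_positives):.3f})")  # same ~0.092
```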
This suggests that the mind does indeed have mechanisms that embody aspects of a calculus of probability. Gigerenzer has gone on to show that what matters are absolute frequencies (not relative frequencies), and that these are well adapted to a natural sampling scheme. In natural sampling (e.g., walking through a forest in which you encounter apple and cherry trees, some with red fruit on them, others not), base rates become irrelevant because information about rarity or abundance is implicit in the absolute encounter frequencies. If you want to know the probability that a tree with red fruit on it is an apple tree, and you know the absolute frequency of apple trees with red fruit (hits) and of other trees with red fruit (false alarms), then the answer is simply hits / (hits + false alarms). (For discussion, see Brase, Cosmides & Tooby, 1998.) We expect mechanisms to be ecologically rational: designed to work well in the ecological circumstances under which they evolved.
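The natural sampling logic can be sketched the same way. In the hypothetical forest below (the number of trees and the proportions are invented for illustration), the forager only ever tallies encounters; the base rate of apple trees is never represented explicitly, yet hits / (hits + false alarms) converges on the same answer Bayes’ rule gives when base rates are supplied.

```python
import random

random.seed(0)
hits = false_alarms = 0
for _ in range(500):                      # 500 trees encountered on the walk
    is_apple = random.random() < 0.30     # 30% of trees are apple trees (base rate)
    has_red_fruit = random.random() < (0.80 if is_apple else 0.10)
    if has_red_fruit:
        if is_apple:
            hits += 1                     # apple tree with red fruit
        else:
            false_alarms += 1             # other tree with red fruit

# "Probability that a tree with red fruit is an apple tree," from tallies alone:
print(hits / (hits + false_alarms))       # close to the explicit Bayesian answer below

# The same quantity via Bayes' rule with the base rate stated explicitly:
print(0.30 * 0.80 / (0.30 * 0.80 + 0.70 * 0.10))   # ≈ 0.774
```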
Frequency computation and object perception
How does the frequency computation system interact with object perception? The object perception system is designed to parse and (token) individuate some aspects of the world, but not others (e.g., it will automatically individuate a leaf, but not the sides of a leaf). A frequency computation system depends on the ability to count, and items cannot be counted unless they are first individuated as tokens (and not just as types).
Following this logic, Brase, Cosmides, and Tooby (1998) showed that people respect Bayes’ rule when a correct solution requires assessing the frequency of items that their object perception system is designed to token individuate (whole objects). But they make errors when a correct solution requires counting over items that the system does not individuate (e.g., inseparable aspects of objects, such as the sides of a card or the ends of a rod). Following an ecological rationality approach, we argued that debates over whether people are “rational” should be reframed: one should instead expect to find human probability judgment mechanisms that produce well-calibrated and adaptive judgments when operating within the parameters for which they were designed by selection. (See Kahneman 2003 on natural assessments.)
Is “loss aversion” real? Is “ambiguity aversion”?
Optimal foraging theory undercuts these claims
With Catrin Rode, we tested predictions from optimal foraging theory (Rode, Cosmides, Hell, & Tooby, 1999). This paper suggests that two phenomena, loss aversion and ambiguity avoidance, are byproducts of statistical inference mechanisms that conform to normative principles drawn from optimal foraging theory, according to which decisions should take into account not only the expected value and variance associated with a resource, but also the need level of the decision-maker.
Most research on judgment under uncertainty does not consider the need level of the subject (it implicitly assumes the need is zero), and in these cases people usually prefer low-variance options. But consider a bird deciding which of two patches to forage on, where the patches have the same expected value (mean) but different variances. If the amount the bird needs to live another day is below the expected value, the safest bet is to forage on the low-variance patch. But the high-variance patch is the better bet if the bird needs to gather an amount greater than the expected value.
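A minimal sketch of that logic, assuming for illustration that each patch’s daily yield is normally distributed (the distributional form and the numbers are ours, not taken from the foraging literature): the option with the higher chance of meeting the need flips as the need level crosses the shared expected value.

```python
from statistics import NormalDist

low_var  = NormalDist(mu=10, sigma=1)   # low-variance patch
high_var = NormalDist(mu=10, sigma=4)   # high-variance patch, same expected value

def p_meets_need(patch, need):
    """Probability that the patch yields at least `need` units."""
    return 1 - patch.cdf(need)

for need in (8, 12):   # need below vs. above the expected value of 10
    p_low, p_high = p_meets_need(low_var, need), p_meets_need(high_var, need)
    better = "low-variance" if p_low > p_high else "high-variance"
    print(f"need={need}: P(low)={p_low:.2f}, P(high)={p_high:.2f} -> choose {better}")

# need=8  -> the low-variance patch almost guarantees survival (P ≈ 0.98 vs 0.69)
# need=12 -> only the high-variance patch offers a real chance (P ≈ 0.31 vs 0.02)
```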
Following predictions such as these from the behavioral ecology literature on risk-sensitive foraging, we varied people’s need levels in a betting task that involved drawing from urns with different distributions of black and white balls (representing different variances and expected values). Like the birds, human subjects preferred the low-variance option only when the number of balls needed to win was lower than the expected value; when the number needed to win exceeded the expected value of both alternative urns, subjects preferred the higher-variance option.
Moreover, we were able to show that when need levels are low, people will prefer an ambiguous option as long as there is reason to think it is the lower-variance alternative (thereby showing that, contrary to claims in the literature, there is no general tendency to avoid ambiguity).
Most striking (and spooky) were tests in which we used a wide variety of distributions, expected values, and need levels. In these experiments, using probability theory to explicitly calculate which option has the higher probability of success is arduous (for the experimenter). Nevertheless, the subjects’ intuitive choices closely tracked these probabilities.
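For readers who want to see what that explicit calculation looks like, here is a hedged sketch. The urn compositions, number of draws, and need levels are hypothetical stand-ins, not the parameters Rode et al. (1999) actually used; the point is only that the exact success probabilities can be computed (laboriously) and compared.

```python
from math import comb

def p_success(black, white, draws, need):
    """P(at least `need` black balls when drawing `draws` balls without replacement)."""
    total = black + white
    return sum(
        comb(black, k) * comb(white, draws - k)
        for k in range(need, draws + 1)
        if k <= black and draws - k <= white
    ) / comb(total, draws)

# Both urns yield 5 black balls per 10 draws on average, but samples from the
# small urn cluster more tightly around that mean (lower variance), while the
# large urn behaves almost like independent coin flips (higher variance).
small_urn = dict(black=10, white=10)     # lower-variance option
large_urn = dict(black=50, white=50)     # higher-variance option

for need in (4, 7):                      # need below vs. above the expected value
    p_small = p_success(**small_urn, draws=10, need=need)
    p_large = p_success(**large_urn, draws=10, need=need)
    print(f"need={need}: small urn {p_small:.2f}, large urn {p_large:.2f}")

# need=4 -> the lower-variance (small) urn is the better bet (~0.91 vs ~0.84)
# need=7 -> the higher-variance (large) urn is the better bet (~0.16 vs ~0.09)
```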
Sandeep Mishra and Larry Fiddick (a CEP alum) have shown the same thing with the original disease problem that Tversky and Kahneman used when they argued that framing problems in terms of losses versus gains creates “irrational” preference reversals. When need levels are specified, there are no framing effects. Mishra and Fiddick showed that framing effects re-emerge when the problem does not specify a need level, but they also showed that this is because the framing changes the minimum number of lives that subjects feel they need to save.
For more research on risky decision making and optimal foraging theory in humans, see the evolutionary psychological work of X-T Wang.
Relevance to debates about human rationality.
Philosophers, economists, and legal scholars have developed an interest in CEP work on judgment under uncertainty and on social exchange, because it leads to different views of reasoning and rationality.
First, it suggests that it is a mistake to define rationality narrowly, as reasoning in accordance with the rules of logic or probability theory. For example, our cheater detection work eliminated the hypothesis that the results were caused by the activation of rules of logical inference. The algorithms involved embody an adaptive logic that does not map onto formal (that is, content-free) logical rules.
Second, it suggests that you cannot determine whether the mind does in fact reason in accordance with probability theory (or whatever the normative theory might be) unless you provide information in a format the mind was designed to read, and consider what adaptive problems (such as risk-sensitive foraging) the mechanisms were designed to solve.
A plethora of papers in behavioral economics and in law assume that people are poor intuitive statisticians, and they make policy recommendations based on this view (often recommendations that take choice away from lay people and put it in the hands of technocrats). But the research by Gigerenzer’s group on fast-and-frugal heuristics (decision rules that work quickly and well, based on limited information), as well as the research above, suggests that there are ways of presenting information about probability and risk that can activate well-calibrated probability judgments, even in lay people.
Gerd Gigerenzer’s Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin has a wealth of resources on this topic. http://www.mpib-berlin.mpg.de/en/forschung/abc/index.htm
Martie Haselton of UCLA has interesting work on error management theory, according to which certain biases in judgment reflect mechanisms designed to minimize the probability of making the more costly error. See especially The Paranoid Optimist and Error Management Theory.