Most of the published research consists of studies of how adults understand or misunderstand particular statistical ideas. A seminal series of studies by Kahneman, Slovic, and Tversky (1982) revealed some prevalent ways of thinking about statistics that are inconsistent with a correct technical understanding. Some salient examples of these faulty "heuristics" are summarized below.

**Representativeness:**

People estimate the likelihood of a sample based on how closely it resembles the population. (If you are randomly sampling sequences of 6 births in a hospital, where B represents a male birth and G a female birth, a mixed-looking sequence such as GBGBBG is believed to be a more likely outcome than a patterned run such as BBBBBB, even though any particular sequence is equally likely.) Use of this heuristic also leads people to judge small samples to be as likely as large ones to represent the same population. (70% heads is believed to be just as likely an outcome for 1000 tosses as for 10 tosses of a fair coin.)
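The sample-size point can be checked directly against the binomial distribution; the sketch below (plain Python, assuming nothing beyond a fair coin) computes the chance of at least 70% heads for both sample sizes.

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability of at least 70% heads with a fair coin:
p10 = prob_at_least(7, 10)        # 10 tosses
p1000 = prob_at_least(700, 1000)  # 1000 tosses

print(f"P(>=70% heads in 10 tosses)   = {p10:.4f}")
print(f"P(>=70% heads in 1000 tosses) = {p1000:.2e}")
```

An outcome that is quite ordinary in a small sample (roughly a 1-in-6 chance in 10 tosses) is effectively impossible in a large one, which is exactly the distinction the representativeness heuristic obscures.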

**Gambler's fallacy:**

Use of the representativeness heuristic leads to the view that chance is a self-correcting process. After observing a long run of heads, most people believe that a tail is now "due," because the occurrence of a tail would result in a more representative sequence than the occurrence of another head.
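A small simulation (a Python sketch; the run length of three heads is an arbitrary choice for illustration) shows that tosses following a run of heads are still heads about half the time, contrary to the "self-correcting" view.

```python
import random

random.seed(42)

# Simulate a long sequence of fair-coin tosses and look at what follows
# a run of three heads. If chance were self-correcting, tails would
# appear more than half the time after such a run.
tosses = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

after_run = [tosses[i] for i in range(3, len(tosses))
             if tosses[i - 3] and tosses[i - 2] and tosses[i - 1]]

print(f"runs of HHH observed: {len(after_run)}")
print(f"P(heads | just saw HHH) = {sum(after_run) / len(after_run):.3f}")
```

The estimated conditional probability stays at essentially 0.5: the coin has no memory of the preceding run.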

**Base-rate fallacy:**

People ignore the relative sizes of population subgroups when judging the likelihood of contingent events involving the subgroups. For example, when asked the probability that a hypothetical student is taking history (or economics), where the overall proportions of students in these courses are .70 and .30 respectively, people ignore these base rates and instead rely on information provided about the student's personality to judge which course that student is more likely to be taking.
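Bayes' rule shows how the base rates should enter the judgment. In the sketch below, the .70 and .30 base rates come from the example above, while the "personality sketch" likelihoods are hypothetical numbers chosen purely for illustration.

```python
# Base rates from the example in the text: 70% of students take
# history, 30% take economics. The likelihoods are assumed values:
# suppose the personality sketch seems twice as typical of an
# economics student as of a history student.
p_hist, p_econ = 0.70, 0.30
p_sketch_given_hist = 0.4  # hypothetical, for illustration
p_sketch_given_econ = 0.8  # hypothetical, for illustration

# Bayes' rule combines the base rate with the sketch evidence,
# rather than ignoring either one.
num_hist = p_sketch_given_hist * p_hist
num_econ = p_sketch_given_econ * p_econ
p_hist_given_sketch = num_hist / (num_hist + num_econ)

print(f"P(history | sketch) = {p_hist_given_sketch:.3f}")
```

Under these assumed numbers, history remains the (slightly) more probable course even though the sketch favors economics: the base rates matter, and the fallacy lies in discarding them.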

**Availability:**

The strength of an association is used as a basis for judging how likely an event is to occur (e.g., estimating the divorce rate in your community by recalling the divorces of people you know, or estimating the risk of a heart attack among middle-aged people by counting the number of middle-aged acquaintances who have had heart attacks). As a result, people's probability estimates for an event are based on how easily examples of that event are recalled.

**Conjunction fallacy:**

The conjunction of two correlated events is judged to be more likely
than either of the events alone. For example, a description is given
of a 31-year-old woman named Linda who is single, outspoken, and very
bright. She is described as a former philosophy major who is deeply
concerned with issues of discrimination and social justice. When asked
which of two statements is more likely, fewer pick *A: Linda is a bank
teller* than *B: Linda is a bank teller active in the feminist movement*,
even though A is more likely than B.
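The reason A must be more likely than B is the elementary inequality P(A and B) = P(A) x P(B | A) <= P(A), since P(B | A) <= 1. The sketch below uses hypothetical probabilities (not taken from the study) to make the point concrete.

```python
# Hypothetical numbers chosen for illustration (not from the study):
p_teller = 0.05                 # P(A): Linda is a bank teller
p_feminist_given_teller = 0.60  # P(B | A): deliberately generous

# The conjunction can never exceed the single event, because every
# (teller AND feminist) case is, in particular, a teller case.
p_both = p_teller * p_feminist_given_teller

print(f"P(teller)              = {p_teller:.2f}")
print(f"P(teller and feminist) = {p_both:.3f}")
assert p_both <= p_teller
```

However strongly the description suggests feminism, the conjunction is penalized by the extra condition; the fallacy substitutes resemblance for probability.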

Additional research has identified misconceptions regarding correlation and causality (Kahneman, Slovic, & Tversky, 1982), conditional probability (e.g., Falk, 1988; Pollatsek, Well, Konold, & Hardiman, 1987), independence (e.g., Konold, 1989b), randomness (e.g., Falk, 1981; Konold, 1991), the Law of Large Numbers (e.g., Well, Pollatsek, & Boyce, 1990), and weighted averages (e.g., Mevarech, 1983; Pollatsek, Lima, & Well, 1981).

A related theory of recent interest is the idea of the outcome orientation
(Konold, 1989a). According to this theory, people use a model of
probability that leads them to make yes-or-no decisions about single
events rather than looking at a series of events. For example:
A weather forecaster predicts the chance of rain to be 70% for 10 days.
On 7 of those 10 days it actually rained. How good were his forecasts?
Many students will say that the forecaster didn't do such a good job,
because it should have rained on all days on which he gave a 70% chance
of rain. They appear to focus on outcomes of single events rather than
being able to look at a series of events: a 70% chance of rain means that it
**should** rain. Similarly, a forecast of 30% rain would mean it won't rain,
and a 50% chance of rain is interpreted as meaning that you can't tell either way.
The power of this notion is evident in the college student who, on the
verge of giving it up, made this otherwise perplexing statement: "I don't
believe in probability; because even if there is a 20% chance of rain,
it could still happen" (Falk and Konold, 1992, p. 155).
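Judged by calibration across the series rather than by single outcomes, the forecaster in the example did well: rain on 7 of 10 days forecast at 70% matches the stated rate exactly. A minimal sketch of that comparison (the particular 7-of-10 rain pattern is arbitrary):

```python
# Calibration check for the forecaster example: ten 70% forecasts,
# rain observed on 7 of the 10 days.
forecasts = [0.70] * 10
rained = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]  # any 7-of-10 pattern will do

observed_rate = sum(rained) / len(rained)
stated_rate = forecasts[0]

# A 70% forecast does not claim rain on each day; it claims rain on
# about 70% of such days, which is exactly what happened here.
print(f"stated {stated_rate:.0%}, observed {observed_rate:.0%}")
```

The outcome orientation evaluates each day in isolation (rain or no rain), whereas the probabilistic reading evaluates the match between stated and observed frequencies over the whole series.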

The conclusion of this body of research by psychologists seems to be that inappropriate reasoning about statistical ideas is widespread and persistent, similar at all age levels (even among some experienced researchers), and quite difficult to change (Garfield and Ahlgren, 1988).
