Probability, Perception and False Positives

An understanding of probability empowers people to make informed choices in matters of great importance, including health screening, insurance, major weather events and terrorist threats. Unfortunately, it has been shown that this understanding of probability eludes even some of our most educated professionals and decision-makers.

Perceptions of Probability and Risk

There is a considerable body of work studying people’s perceptions of probability and risk, particularly by Amos Tversky and the Nobel prize-winning Daniel Kahneman. This has uncovered many systematic errors humans make in judging the relative probabilities of uncertain events. The brain’s tendency to find patterns results in heuristics, or rules of thumb, that carry consistent biases. For example, if we have recently experienced or even heard of a bad random event, we perceive its probability to be higher than it really is. Having experienced two years of earthquakes in Christchurch, my estimation of the likelihood of an earthquake in other places is markedly increased. I (and many others from here) feel uneasy surrounded by tall buildings, street awnings and unsecured masonry in other cities, particularly Wellington, but even in cities with no known earthquake risk.

Cultural implications

The perception of probability is also found to be cultural. I analysed a probability-based task as part of the National Education Monitoring Project, and found a statistically and practically significant difference between the responses of ten-year-old Pacific Island students and NZ European students. I hypothesised that different home experiences involving games of chance may have led to this. Further reading uncovered research which had identified other cultural differences. In particular, there are cultures in which everything is perceived to be decided by God, and there is no chance but rather a lack of knowledge of God’s will.

In fact, many things that we perceive to be subject to chance would not be if we had perfect knowledge. Increased understanding of weather patterns has made forecasting more reliable, which has reduced the level of uncertainty with regard to the arrival of bad storms like the recent Hurricane Sandy, or, to a lesser extent, two heavy snowfalls in Christchurch in 2011. Even a coin toss is, strictly speaking, only a function of the placement of the coin and thumb, the amount of force applied and various other external factors. Because we cannot measure these factors, we are left to assume that the chance of a head or a tail is equal until shown otherwise.

Screening tests

In disease screening we generally do know the figures, and are not relying on subjective judgment as to the probabilities. However, the interpretation of the figures is notoriously badly done. There is a great deal of money involved in the screening industry, and it is an emotive area. Neither money nor emotion aids rational decision-making. This is exacerbated by misinterpretation of probabilities and selective cost-counting.

My eyes were opened to this issue by a keynote address by Gerd Gigerenzer, director at the Max Planck Institute for Human Development. There is a very interesting eight-question quiz at the Harding Center. Try it now. (I was very excited to score 100%, but I put that down to having heard the address and having thought seriously about this.) It would be great if you could tell us your score and reaction to the quiz in the comments below.

A week ago Tim Harford wrote about the lack of understanding among physicians in his post, “Why aren’t we doing the maths? – The practical implications of misplaced confidence when dealing with statistical evidence are obvious and worrying.” This problem is not going away. Some of the comments on the post expressed regret that probability questions like these are not part of the school curriculum, and that it is difficult to find resources to learn on-line. In New Zealand a new curriculum is being introduced with a greater emphasis on statistics at all levels. At Year 12, understanding of risk, particularly using two-way tables, is examined. As we develop materials to help teach this, we will make them available to the general public.


The following link takes you to a PDF of a PowerPoint presentation that teaches a step-by-step approach to this: Risk and Screening – step-by-step approach
We have found that this approach is helpful to students.

In particular, you need to make sure that the table has “What the test tells us” along the top, and “What is the reality” down the side. Do not use columns or rows labelled “Correct” or “Incorrect”, as that makes the reasoning much more difficult.
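To see how the two-way table works with concrete numbers, here is a minimal sketch in Python. The figures (1% prevalence, 90% sensitivity, 9% false positive rate) are purely illustrative and are not taken from any real screening programme; the point is the layout of the table and how to read the chance of disease given a positive test from the first column.

```python
# Hypothetical screening figures - illustrative only.
population = 10_000          # imagine 10,000 people screened
prevalence = 0.01            # 1% actually have the disease
sensitivity = 0.90           # 90% of those with the disease test positive
false_positive_rate = 0.09   # 9% of healthy people also test positive

have_disease = population * prevalence              # 100 people
no_disease = population - have_disease              # 9,900 people

true_positives = have_disease * sensitivity         # 90
false_negatives = have_disease - true_positives     # 10
false_positives = no_disease * false_positive_rate  # 891
true_negatives = no_disease - false_positives       # 9,009

# "What the test tells us" along the top, "What is the reality" down the side.
print("                Test positive   Test negative    Total")
print(f"Has disease     {true_positives:13.0f} {false_negatives:15.0f} {have_disease:8.0f}")
print(f"No disease      {false_positives:13.0f} {true_negatives:15.0f} {no_disease:8.0f}")

# Chance of actually having the disease, given a positive test:
# read down the "Test positive" column.
prob_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"P(disease | positive test) = {prob_disease_given_positive:.1%}")  # about 9.2%
```

Notice that even with a fairly accurate test, fewer than one in ten positive results in this sketch is a true positive, because the healthy group is so much larger than the diseased group.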

At present there is no audio to go with this segment, but we hope it is self-explanatory.

The costs of screening

Just in case you are tempted to think that all screening must be good and more screening must therefore be better, here are some things to think about.

The following article, Breast screening is harmful, appeared recently, and I found it after I had written the rest of this post. I was very excited to read that “BreastScreen Aotearoa is revising its leaflets to incorporate information about the risks of overdiagnosis”.

Screening is big business. There are the obvious costs of the equipment and staffing, including nurses, doctors, technicians and clerical workers. Added to that is the cost of lost productivity for the time taken for the test. The test itself may be harmful. The cost of a false positive is considerable, including unnecessary further tests and interventions, some of which do actual harm. When screening is extended to include people at low risk, the number of false positives increases, which takes up resources and can prevent people who really need intervention from getting it. The emotional costs of a false positive are far-reaching, unnecessarily decreasing quality of life as people lose confidence in their own health and in medicine.
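The effect of extending screening to low-risk people can be sketched numerically: hold the test’s accuracy fixed and vary only how common the disease is in the group being screened. The accuracy figures below are illustrative assumptions, not data from any actual test.

```python
# Illustrative figures: the test's accuracy stays the same throughout.
sensitivity = 0.90           # 90% of those with the disease test positive
false_positive_rate = 0.09   # 9% of healthy people also test positive

def prob_disease_given_positive(prevalence):
    """Chance that a positive result is a true positive, at a given prevalence."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# As the screened group becomes lower-risk (lower prevalence),
# positives become increasingly dominated by false positives.
for prev in [0.10, 0.01, 0.001]:
    print(f"prevalence {prev:6.1%}: P(disease | positive) = "
          f"{prob_disease_given_positive(prev):5.1%}")
```

With these assumed figures, a positive result means roughly even odds of disease in a high-risk group, but in a very low-risk group about ninety-nine out of a hundred positives are false alarms, each one carrying the follow-up and emotional costs described above.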

More screening can be harmful

Too often lobby groups, with well-intentioned but ill-informed leaders, can do harm. This was possibly the case with breast cancer screening in New Zealand. The age of free screening was lowered to include a group for which the test is less accurate, resulting in many more false positives. A correct understanding of probability in the general populace might have prevented this.

What is clear is that information needs to be better explained in order for informed consent to occur.

9 thoughts on “Probability, Perception and False Positives”

  1. Interesting quiz. My score was 7/8, which I was happy with, considering I did not watch the address and just answered based on my intuition about the level of risk in each case. I suspect much is related to the Type I vs Type II errors (and Types III and IV) of any hypothesis testing [last blog] – and our perception quickly biases the likelihood of these errors (there is a TED video that is also interesting on this, if I recall right).

    Aside: I fall into the sub-category of “… everything is perceived to be decided by God and there is no chance but rather a lack of knowledge of God’s will.” Equivalently, the universe is deterministic, we just don’t know the rules [and will never know the rules unless we can step outside the system since these updating rules operate on the system]. Models of the way the universe operates are attempts to qualify these rules and are thus subject to model uncertainty and error.

  2. I got 6/8. I said 120 miles for the car question even though my first instinct was 12 miles and I thought twice the risk for female smokers at 55 was too high.

  3. Hi Dr. Nic,

    Very interesting. I got 7/8, without having heard the address, but with understanding the concepts of risk (and just loving this general topic, so I pay attention).

    I have to say, though, I found the quiz was really measuring two different things. One is understanding what risk means if you have the data (questions 2, 5, and 8), and the other is having previous knowledge of the data on the risks of everyday things.

    In the medical context, it seems the former is much more important – can doctors and patients evaluate what a screening test means, in the presence of the actual risk estimates? That’s different from having no idea how many car accidents there are (or likewise, false positives).

    Many, many years ago, my very first statistical consulting gig was for two authors who were writing a book about medical risk. One was a physician, the other a writer. They consulted with me to ensure that they were understanding risk properly, because physicians generally don’t. They said this was particularly problematic because patients didn’t understand it either, and used the tone of the physician’s voice to determine if the test results were bad.


    • I agree regarding what the question is testing, and what is more important. I got a score of 5/8, and all the questions I missed were questions of remembering the data, rather than understanding probability. In an age where you can look up the data on your smartphone, a question of memory is rapidly becoming trivial, while a question of interpreting the data is very important indeed.

