In simple terms, it’s a common error we make in assessing likelihoods due to (a) over-emphasizing the rate of something within a group and (b) under-emphasizing how common that group is in the first place (i.e., the base rate).
For example, let’s say you see a chess set in a building with 1 avid chess player and 1000 other people. You might assume it belongs to the chess player, even though it’s more likely to belong to one of the others because there are so many of them – if only 1% of regular people own chess sets, there would likely be ~10 of them in a group of 1000, outnumbering the 1 chess player.
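The arithmetic above can be sketched in a few lines of Python. The 1% ownership rate among the other 1000 people comes from the text; the assumption that the avid player certainly owns a set is added here just to make the comparison concrete.

```python
# Chess-set example: 1 avid player vs. 1000 other people.
avid_players = 1
others = 1000
ownership_rate_others = 0.01  # 1% of regular people own a chess set

expected_other_owners = others * ownership_rate_others  # ~10 owners
# Assuming the avid player certainly owns a set, the chance a randomly
# chosen chess set belongs to the avid player is only:
p_belongs_to_avid = avid_players / (avid_players + expected_other_owners)
print(round(p_belongs_to_avid, 3))  # ≈ 0.091, i.e. about 9%
```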
Sometimes also referred to as Base Rate Bias or Base Rate Neglect, this is a cognitive bias arising from the tendency to place too much emphasis on event-specific information, at the expense of relevant base rate information. Often this results in a sense of probabilities or rates that are very far from reality!
To understand what this means, let’s look at a few more examples:
Let’s say you are visiting the campus of a university with 60% business students, 30% law students, 5% computer science students, and 5% visual arts students. You see a student wearing glasses, a video game T-shirt, and a backpack with a Google logo, discussing tech companies with a friend. What do you think this student studies, and how sure are you?
If you were pretty certain the student studied computer science, you likely fell for the Base Rate Fallacy. Although a much higher percentage of computer science students might fit this description, you may have overlooked the base rate – there are far more business students. Even assuming 80% of computer science students match the description and only 10% of business students do, matching business students make up 10% x 60% = 6% of the campus, while matching computer science students make up only 80% x 5% = 4%.
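This comparison is just Bayes' rule in disguise, and can be sketched as follows. The 80% and 10% match rates come from the text; the 0% match rates for law and visual arts students are an assumption made here purely to keep the sketch simple.

```python
# Campus example: base rates of each major vs. how often each major
# "matches the description" of the student we saw.
base_rates = {"business": 0.60, "law": 0.30, "cs": 0.05, "arts": 0.05}
match_rates = {"business": 0.10, "law": 0.00, "cs": 0.80, "arts": 0.00}

# Joint probability of (being in that major AND matching the description):
joint = {m: base_rates[m] * match_rates[m] for m in base_rates}
total = sum(joint.values())
# Posterior probability of each major, given that the student matches:
posterior = {m: p / total for m, p in joint.items()}
print(round(posterior["business"], 2), round(posterior["cs"], 2))  # 0.6 0.4
```

Even with computer science students eight times more likely to match, the matching student is still more likely to be a business major.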
Kahneman’s Cab Driver Problem
Back in 1981, behavioral psychologist Daniel Kahneman designed a study to demonstrate the Base Rate Fallacy – now sometimes known as the “cab driver problem”:
There was a hit-and-run incident involving a cab. 85% of cabs in the city are green and 15% are blue. A witness said the cab was blue, and the court determined that the witness could reliably distinguish the colors about 80% of the time (and got it wrong the other 20%). What is the probability the cab was in fact blue?
If you thought the probability was somewhere around 80% (like many of the respondents in Kahneman’s study!), you likely fell for the Base Rate Fallacy.
Although the witness was 80% reliable, green cabs are far more common, so they were much more likely to see a green cab in the first place. You need to compare the chance that they saw and correctly identified a blue cab (15% x 80% = 12%) against the chance that they saw and misidentified a green cab (85% x 20% = 17%). The probability the cab was actually blue is therefore 12% / (12% + 17%), or roughly 41%!
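The cab calculation can be written out directly, using only the numbers given in the problem:

```python
# Cab driver problem: base rates of cab colors vs. witness reliability.
p_blue, p_green = 0.15, 0.85      # base rates of cabs in the city
p_correct, p_wrong = 0.80, 0.20   # witness accuracy in identifying color

saw_blue_said_blue = p_blue * p_correct    # 0.12: truly blue, called blue
saw_green_said_blue = p_green * p_wrong    # 0.17: truly green, called blue
p_actually_blue = saw_blue_said_blue / (saw_blue_said_blue + saw_green_said_blue)
print(round(p_actually_blue, 3))  # ≈ 0.414, not the intuitive 0.80
```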
The False Positive Paradox results from a specific type of Base Rate Fallacy. First, let’s establish some terminology:
- A true positive is when something is flagged correctly – like correctly identifying a duck as a bird.
- A true negative is the opposite – like not identifying a rabbit as a bird.
- A false positive is when something is flagged incorrectly – like being falsely accused of cheating, or a spell-checker claiming there is a typo in a correctly spelled word.
- A false negative is failing to flag a case that should have been flagged – like the spell-checker not catching a misspelled word.
When a population has a very low incidence of a given condition, the number of true positives is often (much) lower than the number of false positives. If the false positive rate is low, you might greatly underestimate how common false positives are relative to true positives, because you overlooked the low incidence rate – the False Positive Paradox. Let’s try an example.
Example: Breathalyzer Test
A company develops a new breathalyzer test and claims it is super accurate: the test has a 100% chance of detecting a truly drunk person, and only a 5% chance of incorrectly indicating a non-drunk driver as drunk.
Let’s say 1 out of every 1,000 drivers is actually a drunk driver. The police stop a bunch of random drivers and the breathalyzer test identifies a group of them as drunk drivers. What percentage of this group was actually drunk?
If you thought the probability was as high as 95%, you fell for the Base Rate Fallacy. In fact, the probability is only roughly 2%!
Let’s break it down: If 1,000 drivers are tested, there will be 1 actually drunk driver, who is 100% detected. However, there will be 999 non-drunk drivers, and the test will incorrectly indicate 5% of them as drunk, or roughly 50 drivers. So out of the 51 drivers labeled as drunk by the test, only 1 was actually drunk!
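The breakdown above is a short piece of arithmetic, using the rates stated in the example:

```python
# Breathalyzer example: very low incidence vs. a small false positive rate.
drivers = 1000
drunk_rate = 1 / 1000
sensitivity = 1.00          # test detects every truly drunk driver
false_positive_rate = 0.05  # test flags 5% of sober drivers as drunk

drunk = drivers * drunk_rate                   # 1 drunk driver
true_positives = drunk * sensitivity           # 1 correct flag
sober = drivers - drunk                        # 999 sober drivers
false_positives = sober * false_positive_rate  # ~50 incorrect flags
p_actually_drunk = true_positives / (true_positives + false_positives)
print(round(p_actually_drunk, 3))  # ≈ 0.02, nowhere near 0.95
```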
As you can see, falling for the Base Rate Fallacy can result in some very misleading conclusions, with real-life consequences!
Brain Easer also features a few brainteasers in which this is relevant – be careful not to fall for the Base Rate Fallacy!