Mining unobtainium is hard work.
The rare mineral appears
in only 1% of rocks in the mine.
But your friend Tricky Joe
has something up his sleeve.
The unobtainium detector he’s been
perfecting for months is finally ready.
The device never fails
to detect unobtainium if any is present.
Otherwise, it’s still highly reliable,
returning accurate
readings 90% of the time.
On his first day trying
it out in the field,
the device goes off, and
Joe happily places the rock in his cart.
As the two of you head back to camp
where the ore can be examined,
Joe makes you an offer:
he’ll sell you the ore for just $200.
You know that a piece of unobtainium
that size would easily be worth $1000,
but any other minerals
would be effectively worthless.
Should you make the trade?
Pause here if you want
to figure it out for yourself.
Answer in: 3
Answer in: 2
Answer in: 1
Intuitively, it seems like a good deal.
Since the detector
is correct most of the time,
shouldn’t you be able
to trust its reading?
Unfortunately, no.
Here’s why.
Imagine the mine
has exactly 1,000 pieces of ore.
An unobtainium rarity of 1%
means that there are only 10 rocks
with the precious mineral inside.
All 10 would set off the detector.
But what about the other 990
rocks without unobtainium?
Well, 90% of them,
891 rocks, to be exact,
won’t set off anything.
But 10%, or 99 rocks,
will set off the detector
despite not having unobtainium,
a result known as a false positive.
Why does that matter?
Because it means that all in all,
109 rocks will have
triggered the detector.
And Joe’s rock could be any one of them,
from the 10 that contain the mineral
to the 99 that don’t,
which means the chances of it containing
unobtainium are 10 out of 109 – about 9%.
And paying $200 for a 9%
chance of getting $1000 is not a good bet.
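The counting argument above can be sketched in a few lines of Python, assuming the mine of exactly 1,000 rocks, the 1% base rate, and the 10% false positive rate from the story:

```python
# A sketch of the counting argument, assuming a mine of exactly
# 1,000 rocks, a 1% base rate, and a 10% false positive rate.
total_rocks = 1000
base_rate = 0.01            # 1% of rocks contain unobtainium
false_positive_rate = 0.10  # detector errs on 10% of barren rocks

with_ore = int(total_rocks * base_rate)    # 10 rocks with the mineral
without_ore = total_rocks - with_ore       # 990 rocks without it

true_positives = with_ore                  # detector never misses: all 10
false_positives = int(without_ore * false_positive_rate)  # 99 barren beeps

all_positives = true_positives + false_positives  # 109 rocks trigger it
chance_of_ore = true_positives / all_positives    # 10/109, about 9%

# Expected value of Joe's $200 offer: a ~9% chance at $1000
expected_value = chance_of_ore * 1000
print(round(chance_of_ore, 4), round(expected_value, 2))
```

Running this gives a roughly 9% chance of ore and an expected value well under Joe's $200 asking price.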
So why is this result so unexpected,
and why did Joe’s rock seem
like such a sure bet?
The key is something called
the base rate fallacy.
While we’re focused on the relatively
high accuracy of the detector,
our intuition makes us forget to account
for how rare the unobtainium
is in the first place.
But because the device’s error rate of 10%
is still higher than
the mineral’s overall occurrence,
any time it goes off is still more likely
to be a false positive
than a real finding.
This problem is an example
of conditional probability.
The answer lies neither in the overall
chance of finding unobtainium,
nor the overall chance
of receiving a false positive reading.
This kind of background information
that we’re given before anything happens
is known as unconditional,
or prior probability.
What we’re looking for, though,
is the chance of finding unobtainium
once we know that the device did
return a positive reading.
This is known as the conditional,
or posterior probability,
determined once the possibilities have
been narrowed down through observation.
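The prior-versus-posterior distinction is exactly what Bayes' rule captures. A minimal sketch, plugging in the story's numbers, recovers the same 10-in-109 answer as the counting argument:

```python
# Prior vs. posterior probability via Bayes' rule, using the story's numbers.
p_ore = 0.01                # prior: unconditional chance a rock has unobtainium
p_beep_given_ore = 1.0      # the detector always fires on real unobtainium
p_beep_given_no_ore = 0.10  # false positive rate on barren rocks

# Total probability that the detector goes off on a random rock
p_beep = p_beep_given_ore * p_ore + p_beep_given_no_ore * (1 - p_ore)

# Posterior: chance of unobtainium *given* that the detector went off
p_ore_given_beep = p_beep_given_ore * p_ore / p_beep

print(round(p_ore_given_beep, 4))
```

The posterior works out to 10/109, about 9%, matching the rock-counting result.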
Many people are confused
by the false positive paradox
because we have a bias
for focusing on specific information
over the more general,
especially when immediate decisions
come into play.
And while in many cases
it’s better to be safe than sorry,
false positives can have
real negative consequences.
False positives in medical testing
are preferable to false negatives,
but they can still lead to stress or
unnecessary treatment.
And false positives in mass surveillance
can cause innocent people to be
wrongfully arrested, jailed, or worse.
As for this case, the one thing
you can be positive about
is that Tricky Joe is trying
to take you for a ride.