You and your partner Alex have been in a
strong, loving relationship for years,
and lately you're considering
getting engaged.
Alex is enthusiastic about the idea,
but you can’t get over the statistics.
You know a lot of marriages end
in divorce, often not amicably.
And over 10% of couples
in their first marriage get divorced
within the first five years.
If your marriage wouldn’t
even last five years,
you feel like tying the knot
would be a mistake.
But you live in the near future,
where a brand-new company just
released an AI-based model
that can predict your likelihood
of divorce.
The model is trained on datasets
containing individuals’
social media activity,
online search histories, spending habits,
and histories of marriage and divorce.
And using this information,
the AI can predict with 95% accuracy
whether a couple will divorce
within the first five years of marriage.
The only catch is the model doesn’t offer
any reasons for its results—
it simply predicts that you will or won’t
divorce without saying why.
So, should you decide whether or not
to get married
based on this AI’s prediction?
Suppose the model predicts you and Alex
would divorce within five years
of getting married.
At this point, you'd have three options.
You could get married anyway
and hope the prediction is wrong.
You could break up now,
though there’s no way to know if ending
your currently happy relationship
would cause more harm than letting
the prediction run its course.
Or, you could stay together
and remain unmarried,
on the off-chance marriage itself
would be the problem.
Though without understanding the reasons
for your predicted divorce,
you’d never know if those mystery issues
would still emerge
to ruin your relationship.
The uncertainty undermining
all these options
stems from a well-known problem with AI:
its lack of explainability and transparency.
This problem plagues tons of potentially
useful predictive models,
such as those that could be used
to predict which bank customers
are most likely to repay a loan,
or which prisoners are most likely
to reoffend if granted parole.
Without knowing why AI systems
reach their decisions,
many worry we can’t think critically
about whether to follow their advice.
But the transparency problem
doesn’t just prevent us
from understanding these models;
it also affects the accountability
of the people who use them.
For example, if the AI's prediction
led you to break up with Alex,
what explanation could you
reasonably offer them?
That you want to end
your happy relationship
because some mysterious machine
predicted its demise?
That hardly seems fair to Alex.
We don’t always owe people
an explanation for our actions,
but when we do,
AI’s lack of transparency can create
ethically challenging situations.
And accountability is just one
of the tradeoffs we make
by outsourcing important decisions to AI.
If you’re comfortable deferring
your agency to an AI model,
it’s likely because you’re focused
on the accuracy of the prediction.
In this mindset, it doesn’t really matter
why you and Alex might break up—
simply that you likely will.
But if you prioritize authenticity
over accuracy,
then you'll need to understand
and appreciate the reasons
for your future divorce
before ending things today.
Authentic decision making like this is
essential for maintaining accountability,
and it might be your best chance
to prove the prediction wrong.
On the other hand,
it’s also possible the model
already accounted for your attempts
to defy it,
and you’re just setting yourself
up for failure.
95% accuracy is high,
but it’s not perfect—
that figure means 1 in 20 couples
will receive a false prediction.
And as more people use this service,
the likelihood increases that someone
who was predicted to divorce
will do so just because the AI
predicted they would.
If that happens to enough newlyweds,
the AI's success rate could
be artificially maintained
or even increased by these
self-fulfilling predictions.
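To make that feedback loop concrete,
here is a minimal simulation sketch.
All the numbers are invented for illustration:
a 10% baseline divorce rate,
a model that is right 95% of the time,
and a hypothetical "influence" parameter,
the chance that a divorce prediction
itself pushes an otherwise-stable
couple to split up.
Under those assumptions, the measured
accuracy climbs above the advertised 95%
once the predictions start
fulfilling themselves.

```python
# A minimal sketch, with invented numbers, of how self-fulfilling
# predictions could inflate the model's measured accuracy.
import random

def measured_accuracy(n_couples, base_rate, model_accuracy, influence):
    correct = 0
    for _ in range(n_couples):
        # What would happen with no AI involved at all.
        would_divorce = random.random() < base_rate
        # The model errs on 5% of couples.
        model_is_right = random.random() < model_accuracy
        predicted_divorce = would_divorce if model_is_right else not would_divorce
        # Self-fulfilling effect: a "will divorce" prediction can
        # itself turn a wrong prediction into a "correct" one.
        divorces = would_divorce or (predicted_divorce and random.random() < influence)
        correct += predicted_divorce == divorces
    return correct / n_couples

random.seed(0)
print(measured_accuracy(200_000, 0.10, 0.95, influence=0.0))  # ~0.95, the advertised rate
print(measured_accuracy(200_000, 0.10, 0.95, influence=0.5))  # ~0.97, inflated by self-fulfillment
```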
Of course, no matter what
the AI might tell you,
whether you even ask for its prediction
is still up to you.