In the spring of 1979, a lab worker
in Sverdlovsk, USSR,
removed a clogged air filter
in the ventilation system
and didn’t replace it.
His note to the supervisor was never
transferred to the official logbook,
so when the next shift rolled in,
workers simply started
production as usual.
Now, in most labs, this would
have been a minor mistake.
But this lab was
a biological weapons facility
producing huge quantities of anthrax—
which, if inhaled, can kill
up to 90% of those it infects.
This deadly anthrax powder floated
out into the sky for hours,
causing the largest outbreak
of inhalation anthrax on record
and resulting in at least 64 deaths.
What happened at Sverdlovsk was a tragedy,
and the Soviet bioweapons program
was a violation of international law.
But these days, it’s not just
state-sponsored bioweapons programs
that keep biosecurity experts up at night.
Nor is anthrax their greatest concern.
They’re worried about an even more
dangerous kind of lab leak.
Since the 1970s, researchers have been
manipulating the DNA of microbes
to give them abilities
they didn’t have before.
This is called “gain of function” work,
and it encompasses a huge body
of scientific research.
The majority of this work helps humanity
while posing very little risk;
for example, engineered viruses
are used in vaccine production,
gene therapy, and cancer treatments.
But within the gain of function realm
lies an intensely debated sub-field
where scientists engineer superbugs.
Officially known as
“enhanced potential pandemic pathogens,”
these ePPPs are typically variants
of well-known viruses,
such as Ebola or avian influenza,
that have been engineered to be, say,
more transmissible or more deadly.
The stakes of this kind of work
are much higher:
if even one unusually dangerous
virus escaped a lab,
it could cause a global pandemic.
Virologists developing ePPPs argue
this research could help us prepare
for future pandemics,
allowing us to jump-start treatments
and potentially save lives.
For example, in the early 2010s,
several research teams created
a deadly strain of bird flu
with the novel ability to spread
through the air between mammals.
Advocates of the project argued
that by creating this ePPP,
we could learn crucial information
about a worst-case-scenario virus
under controlled conditions.
But many critics argued
that it’s unclear whether bird flu
would ever evolve in the wild
as it did in the lab.
Consequently, they believed the knowledge
gained by studying this dangerous virus
wasn’t remotely worth the risk
of creating it in the first place.
Both sides of this ongoing debate
are trying to save lives;
they just disagree
on the best way to do it.
However, everyone agrees that an ePPP
lab leak could be catastrophic.
Labs that work with dangerous pathogens
are designed with numerous safety features
to protect the scientists who work there,
as well as the outside world.
These include ventilation systems
that decontaminate air
and airtight “spacesuits”
with dedicated oxygen.
Sometimes buildings are even nested
inside each other
to prevent natural disasters
from breaching the closed environment.
But this technology is expensive
to build and maintain.
And even when our tech doesn’t fail,
there’s still room for the most common
kind of mistake:
human error.
Many human errors are inconsequential:
a researcher spills a sample,
but quickly disinfects
the otherwise well-controlled environment.
Other incidents, however,
are much more concerning.
In 2009, a researcher accidentally
stuck themselves
with an Ebola-contaminated needle,
endangering their life
and the lives of those treating them.
In 2014, six vials containing the virus
that causes smallpox were found
in an unsecured storage room
where they’d been forgotten for decades.
That same year, a CDC scientist
unknowingly contaminated
a sample of relatively harmless bird flu
with a deadly lab-grown variant,
and then shipped the contaminated
sample to the USDA.
While these incidents did not
lead to larger crises,
the potentially catastrophic consequences
of an ePPP leak
have convinced many scientists
that we should stop
this kind of research altogether.
But if that doesn’t happen,
what can we do to minimize risk?
Well, first, we can work to reduce
human error by examining past mistakes.
Some experts have suggested creating
an international database of leaks,
near-misses, and the fixes made in response,
which would help labs adapt their protocols
to minimize human error.
And a robust, well-funded pandemic
early warning system
would help protect us
from any disease outbreak—
whether it comes
from a lab leak or a natural spillover.
Developing the kind of global standards
and databases necessary
for these changes would be difficult—
requiring unprecedented international
collaboration and transparency.
But we need to overcome these hurdles
because pandemics don’t care
about borders or politics.