Modern computers
are revolutionizing our lives,
performing tasks unimaginable
only decades ago.
This was made possible by a long series
of innovations,
but there's one foundational invention
that almost everything else relies upon:
the transistor.
So what is that,
and how does such a device enable
all the amazing things computers can do?
Well, at their core, all computers
are just what the name implies,
machines that perform
mathematical operations.
The earliest computers were manual
counting devices,
like the abacus,
while later ones used mechanical parts.
What made them computers was having
a way to represent numbers
and a system for manipulating them.
Electronic computers work the same way,
but instead of physical arrangements,
the numbers are represented
by electric voltages.
Most such computers use a type of math
called Boolean logic
that has only two possible values,
the logical conditions true and false,
denoted by binary digits one and zero.
They are represented by high
and low voltages.
Equations are implemented
via logic gate circuits
that produce an output of one or zero
based on whether the inputs satisfy
a certain logical statement.
These circuits perform three fundamental
logical operations:
conjunction (AND), disjunction (OR), and negation (NOT).
Conjunction is implemented by an "and gate,"
which produces a high-voltage output
only if both of its inputs are high-voltage,
and the other gates work
by similar principles.
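Here's a minimal sketch of that idea in Python,
treating one as a high voltage and zero as a low voltage;
the gate-function names are just illustrative:

```python
# A minimal sketch of the three fundamental logic gates,
# treating 1 as a high voltage and 0 as a low voltage.

def and_gate(a: int, b: int) -> int:
    """Conjunction: output is 1 only if both inputs are 1."""
    return 1 if a == 1 and b == 1 else 0

def or_gate(a: int, b: int) -> int:
    """Disjunction: output is 1 if at least one input is 1."""
    return 1 if a == 1 or b == 1 else 0

def not_gate(a: int) -> int:
    """Negation: output is the opposite of its single input."""
    return 0 if a == 1 else 1

# Truth table for the AND gate: high output only for two high inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_gate(a, b))
```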
Circuits can be combined to perform
complex operations,
like addition and subtraction.
And computer programs
consist of instructions
for electronically performing
these operations.
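For a rough sense of how gate circuits combine into arithmetic,
here is a sketch of a one-bit half adder, which produces a sum bit
and a carry bit from two input bits; the helper below is hypothetical,
with each gate written as a small Python expression:

```python
# Sketch: a one-bit half adder, with each gate written as a small
# Python expression (1 = high voltage, 0 = low voltage).

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits, returning (sum_bit, carry_bit)."""
    sum_bit = (a | b) & (1 - (a & b))  # XOR built from OR, AND, and NOT
    carry_bit = a & b                  # AND gate supplies the carry
    return sum_bit, carry_bit

# 1 + 1 is binary 10: sum bit 0, carry bit 1.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chaining adders like this, bit by bit, is how wider addition
(and, with a sign convention, subtraction) is built up.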
This kind of system needs a reliable
and accurate method
for controlling electric current.
Early electronic computers,
like the ENIAC,
used a device called the vacuum tube.
Its early form, the diode,
consisted of two electrodes
in an evacuated glass container.
Applying a voltage to the cathode
makes it heat up and release electrons.
If the anode is at a slightly
higher positive potential,
the electrons are attracted to it,
completing the circuit.
This unidirectional
current flow could be controlled
by varying the voltage to the cathode,
which makes it release more
or fewer electrons.
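As an idealized sketch, with made-up numbers rather than real tube data,
the diode behaves like a one-way valve: current flows only when the anode
sits above the cathode, and the cathode's emission sets how much current
is available:

```python
# Idealized sketch of a vacuum-tube diode as a one-way valve.
# All values are illustrative, not real device parameters.

def diode_current(anode_volts: float, cathode_volts: float,
                  emission: float = 1.0) -> float:
    """Return a notional current: electrons flow from cathode to anode
    only when the anode sits at a higher potential."""
    if anode_volts > cathode_volts:
        return emission * (anode_volts - cathode_volts)
    return 0.0  # reverse direction: no conduction

print(diode_current(5.0, 0.0))  # forward: current flows
print(diode_current(0.0, 5.0))  # reverse: blocked
```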
The next stage was the triode,
which uses a third electrode
called the grid.
This is a wire screen
between the cathode and anode
through which electrons could pass.
Varying its voltage makes it either repel
or attract the electrons
emitted by the cathode,
thus enabling fast current switching.
The ability to amplify signals
also made the triode crucial for radio
and long distance communication.
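In the same spirit, the grid can be sketched as a control input:
a sufficiently negative grid cuts the current off, while a less negative
one lets it through, which is what makes the triode a fast switch
and an amplifier. The cutoff voltage and gain below are made-up values:

```python
# Illustrative sketch of a triode's grid as a control input.
# The cutoff voltage and gain are made-up numbers, not device data.

def triode_current(grid_volts: float,
                   cutoff_volts: float = -4.0,
                   gain: float = 2.0) -> float:
    """Return a notional plate current controlled by the grid voltage."""
    if grid_volts <= cutoff_volts:
        return 0.0  # grid repels electrons: tube switched off
    return gain * (grid_volts - cutoff_volts)  # more positive grid, more current

for vg in (-6.0, -4.0, -2.0, 0.0):
    print(f"grid {vg:+.1f} V -> current {triode_current(vg):.1f} (arbitrary units)")
```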
But despite these advancements,
vacuum tubes were unreliable and bulky.
With its roughly 18,000 vacuum tubes, ENIAC was nearly
the size of a tennis court
and weighed 30 tons.
Tubes failed every other day,
and in one hour, it consumed the amount
of electricity used by 15 homes in a day.
The solution was the transistor.
Instead of electrodes,
it uses a semiconductor,
like silicon treated
with different elements
to create an electron-emitting N-type
and an electron-absorbing P-type.
These are arranged in three
alternating layers
with a terminal at each:
the emitter, the base, and the collector.
In this typical NPN transistor,
due to certain phenomena
at the P-N interface,
a special region called a P-N junction
forms between the emitter and base.
It only conducts electricity
when a voltage exceeding
a certain threshold is applied.
Otherwise, it remains switched off.
In this way, small variations
in the input voltage
can be used to quickly switch between
high and low output currents.
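That behavior can be summed up in one more sketch: treat the base-emitter
junction as conducting only above a threshold (around 0.6 volts for a
silicon junction), so a small input voltage flips the output between a
high and a low current. The on-current value here is purely illustrative:

```python
# Sketch of an NPN transistor used as a voltage-controlled switch.
# The 0.6 V threshold is typical of silicon; other values are illustrative.

THRESHOLD_VOLTS = 0.6

def transistor_output(base_volts: float, on_current: float = 1.0) -> float:
    """Return a high output current when the base voltage exceeds
    the junction threshold, and nothing otherwise."""
    if base_volts > THRESHOLD_VOLTS:
        return on_current  # switched on: high output current
    return 0.0             # switched off: low output current

for vb in (0.0, 0.3, 0.7, 1.0):
    state = "on" if transistor_output(vb) > 0 else "off"
    print(f"base {vb:.1f} V -> {state}")
```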
The advantage of the transistor lies
in its efficiency and compactness.
Because transistors don't require heating,
they're more durable and use less power.
ENIAC's functionality can now be surpassed
by a single fingernail-sized microchip
containing billions of transistors.
At trillions of calculations per second,
today's computers may seem like
they're performing miracles,
but underneath it all,
each individual operation is still
as simple as the flick of a switch.