Imagine trying to use words
to describe every scene in a film,
every note in your favorite song,
or every street in your town.
Now imagine trying to do it using
only the numbers 1 and 0.
Every time you use the Internet
to watch a movie,
listen to music,
or check directions,
that’s exactly what your device is doing,
using the language of binary code.
Computers use binary because
it's a reliable way of storing data.
For example, a computer's main
memory is made of transistors
that switch between either high
or low voltage levels,
such as 5 volts and 0 volts.
Voltages sometimes fluctuate,
but since there are only two options,
a reading of 1 volt
would still be interpreted as "low."
That reading is done by
the computer’s processor,
which uses the transistors’ states
to control other computer devices
according to software instructions.
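A tiny Python sketch makes that noise
tolerance concrete; the 2.5-volt threshold
here is a hypothetical midpoint,
not a real hardware spec.

```python
def read_bit(voltage, threshold=2.5):
    """Interpret a noisy voltage reading as a binary digit.

    The 2.5 V threshold is a hypothetical midpoint between
    the 0 V ("low") and 5 V ("high") levels, not a real spec.
    """
    return 1 if voltage >= threshold else 0

# Noisy readings still resolve to clean bits:
for v in [0.0, 1.0, 4.7, 5.0]:
    print(f"{v} V -> {read_bit(v)}")
# 0.0 V -> 0
# 1.0 V -> 0   (a 1-volt fluctuation still reads as "low")
# 4.7 V -> 1
# 5.0 V -> 1
```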
The genius of this system
is that a given binary sequence
doesn't have a pre-determined meaning
on its own.
Instead, each type of data
is encoded in binary
according to a separate
set of rules.
Let’s take numbers.
In normal decimal notation,
each digit is multiplied by 10 raised
to the value of its position,
starting from zero on the right.
So 84 in decimal form is 4×10⁰ + 8×10¹.
Binary number notation works similarly,
but with each position
based on 2 raised to some power.
So 84 in binary is written 1010100:
1×2⁶ + 0×2⁵ + 1×2⁴ + 0×2³
+ 1×2² + 0×2¹ + 0×2⁰.
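You can check both expansions
with a few lines of Python,
using only built-ins:

```python
# Python's built-in bin() confirms the binary form of 84:
print(bin(84))  # 0b1010100

# Rebuild 84 from its digits, least significant position first:
digits = [0, 0, 1, 0, 1, 0, 1]  # 1010100 read right to left
print(sum(d * 2**i for i, d in enumerate(digits)))  # 84
```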
Meanwhile, letters are interpreted
based on encoding standards like UTF-8,
which assigns each character
a specific sequence of one or more
8-bit binary strings, or bytes.
In this case, 01010100 corresponds
to the letter T.
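That mapping is easy to verify in Python,
since character codes and UTF-8 decoding
are built into the language:

```python
# The 8-bit pattern assigned to 'T' (same in ASCII and UTF-8):
print(format(ord("T"), "08b"))  # 01010100

# Decoding that byte back into a character:
print(bytes([0b01010100]).decode("utf-8"))  # T
```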
So, how can you know whether
a given instance of this sequence
is supposed to mean T or 84?
Well, you can’t tell from seeing
the string alone
– just as you can’t tell what the sound
"da" means from hearing it in isolation.
You need context to tell whether you're
hearing Russian, Spanish, or English.
And you need similar context
to tell whether you’re looking
at binary numbers or binary text.
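The same ambiguity can be reproduced
directly in code: the single byte below
reads as 84 or as T depending purely
on which decoding rule you apply.

```python
# One byte, two meanings, depending on the decoding rule:
data = bytes([0b01010100])

print(int.from_bytes(data, "big"))  # 84 (read as a number)
print(data.decode("utf-8"))         # T  (read as UTF-8 text)
```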
Binary code is also used for
far more complex types of data.
Each frame of this video, for instance,
is made of hundreds
of thousands of pixels.
In color images,
every pixel is represented
by three binary sequences
that correspond to the primary colors.
Each sequence encodes a number
that determines
the intensity of that particular color.
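Here's a minimal Python sketch
of that pixel encoding; the common choice
of 8 bits per channel and the sample color
are illustrative assumptions,
not details from the video.

```python
# One pixel as three 8-bit intensities, one per primary color.
# 8 bits per channel and the color values are example choices.
red, green, blue = 255, 128, 0  # a shade of orange

pixel_bits = "".join(format(c, "08b") for c in (red, green, blue))
print(pixel_bits)  # 111111111000000000000000
```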
Then, a video driver program transmits
this information
to the millions of liquid crystals
in your screen
to make all the different hues
you see now.
The sound in this video
is also stored in binary,
with the help of a technique
called pulse code modulation.
Continuous sound waves are digitized
by taking "snapshots" of their amplitudes
tens of thousands of times per second.
These are recorded as numbers
in the form of binary strings,
with as many as 44,100
for every second of sound.
When they’re read by
your computer’s audio software,
the numbers determine how quickly
the coils in your speakers should vibrate
to create sounds of different frequencies.
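Here's a short Python sketch
of pulse code modulation, assuming
the common 44,100 Hz rate and 16-bit
samples; the 440 Hz tone
is just a test signal.

```python
import math

SAMPLE_RATE = 44100  # amplitude snapshots per second (CD quality)
FREQUENCY = 440.0    # a 440 Hz test tone, an arbitrary choice

# One second of snapshots, each quantized to a signed 16-bit
# integer, as in 16-bit pulse code modulation:
samples = [
    round(32767 * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

print(len(samples), "samples per second")   # 44100
print(format(samples[25] & 0xFFFF, "016b")) # one sample as 16 bits
```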
All of this requires billions
and billions of bits.
But that amount can be reduced
through clever compression formats.
For example, if a picture has 30 adjacent
pixels of green space,
they can be recorded as "30 green" instead
of coding each pixel separately,
a process known as run-length encoding.
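A toy run-length encoder in Python
shows the idea; real image formats pack
the counts and colors into binary,
but the principle is the same.

```python
def run_length_encode(pixels):
    """Collapse runs of identical values into (count, value) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1   # extend the current run
        else:
            runs.append([1, p])  # start a new run
    return [tuple(r) for r in runs]

row = ["green"] * 30 + ["blue"] * 2
print(run_length_encode(row))  # [(30, 'green'), (2, 'blue')]
```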
These compressed formats are themselves
written in binary code.
So is binary the be-all and end-all
of computing?
Not necessarily.
There’s been research
into ternary computers,
with circuits in three possible states,
and even quantum computers,
whose circuits can be
in multiple states simultaneously.
But so far, none of these has provided
as much physical stability
for data storage and transmission.
So for now, everything you see,
hear,
and read through your screen
comes to you as the result
of a simple "true" or "false" choice,
made billions of times over.