Here are two images of a house.
There’s one obvious difference,
but to this patient, P.S.,
they looked completely identical.
P.S. had suffered a stroke that
damaged the right side of her brain,
leaving her unaware of everything
on her left side.
But though she could discern no difference
between the houses,
when researchers asked her
which she would prefer to live in,
she chose the house that wasn’t burning—
not once, but again and again.
P.S.’s brain was still processing
information
from her whole field of vision.
She could see both images
and tell the difference between them;
she just didn’t know it.
If someone threw a ball at her left side,
she might duck.
But she wouldn’t have any
awareness of the ball,
or any idea why she ducked.
P.S.’s condition,
known as hemispatial neglect,
reveals an important distinction between
the brain’s processing of information
and our experience of that processing.
That experience is what
we call consciousness.
We are conscious of both the external
world and our internal selves—
we are aware of an image
in much the same way we are aware of
ourselves looking at an image,
or our inner thoughts and emotions.
But where does consciousness come from?
Scientists, theologians, and philosophers
have been trying to get to the bottom of
this question for centuries—
without reaching any consensus.
One recent theory is that
consciousness is the brain’s imperfect
picture of its own activity.
To understand this theory,
it helps to have a clear idea
of one important way the brain processes
information from our senses.
Based on sensory input,
it builds models,
which are continuously updating,
simplified descriptions
of objects and events in the world.
Everything we know is based
on these models.
They never capture every detail of
the things they describe,
just enough for the brain to determine
appropriate responses.
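One way to picture a continuously updating, simplified model is a running estimate that smooths a noisy signal while keeping only a single number, not the full history. This is a hypothetical illustration of the idea, not a claim about how neurons actually compute; the `make_model` function and its `alpha` parameter are our own invention for the sketch.

```python
def make_model(alpha=0.5):
    """Return a tiny 'model': a running estimate updated with each new sample.

    The model keeps only one number -- a smoothed estimate -- rather than
    every detail of the signal's history, just enough to respond to change.
    (Illustrative analogy only; not a neural mechanism.)
    """
    estimate = None

    def update(sample):
        nonlocal estimate
        if estimate is None:
            estimate = sample
        else:
            # Blend the old estimate with the new sample; details are discarded.
            estimate = (1 - alpha) * estimate + alpha * sample
        return estimate

    return update

model = make_model(alpha=0.5)
readings = [10.0, 12.0, 11.0, 30.0]  # a sudden spike in the last reading
for r in readings:
    estimate = model(r)
# The estimate tracks the signal while smoothing away detail it doesn't need.
```

The final estimate reflects the spike only partially, the way a simplified model registers change without recording every fluctuation.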
For instance, one model built deep
into the visual system
codes white light as brightness
without color.
In reality,
white light includes wavelengths
that correspond to all the
different colors we can see.
Our perception of white light is wrong
and oversimplified,
but good enough for us to function.
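That collapse of many wavelengths into a single "brightness" can be illustrated with a standard formula from video engineering: the Rec. 709 luma weights reduce an RGB color to one luminance number and throw the hue away. The coefficients are real (they come from the ITU-R BT.709 standard), but using them as an analogy for the visual system's shorthand is our own simplification.

```python
def luminance(r, g, b):
    """Collapse an RGB color (components in 0..1) to a single brightness value.

    Uses the Rec. 709 luma coefficients. Hue information is simply discarded,
    much like the visual system's 'brightness without color' shorthand.
    """
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

white = luminance(1.0, 1.0, 1.0)  # all wavelengths present -> pure brightness
```

Note that pure green comes out brighter than pure blue under this formula, mirroring the fact that the simplification is tuned to what the observer needs, not to the physics of the light itself.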
Likewise, the brain’s model of the
physical body
keeps track of the configuration
of our limbs,
but not of individual cells
or even muscles,
because that level of information
isn’t needed to plan movement.
If the brain didn’t have this model
keeping track of the body’s size, shape,
and movement at any moment,
we would quickly injure ourselves.
The brain also needs models of itself.
For example,
the brain has the ability to pay attention
to specific objects and events.
It also controls that focus,
shifting it from one thing to another,
internal and external,
according to our needs.
Without the ability to direct our focus,
we wouldn’t be able to assess threats,
finish a meal, or function at all.
To control focus effectively,
the brain has to construct a model
of its own attention.
With 86 billion neurons constantly
interacting with each other,
there’s no way the brain’s model of its
own information processing
can be perfectly self-descriptive.
But like the model of the body,
or our conception of white light,
it doesn’t have to be.
Our certainty that we have a
metaphysical, subjective experience
may come from one of the brain’s models,
a cut-corner description of what it means
to process information
in a focused and deep manner.
Scientists have already begun trying
to figure out
how the brain creates that self-model.
MRI studies are a promising avenue
for pinpointing the networks involved.
These studies compare patterns
of neural activation
when someone is and isn’t conscious
of a sensory stimulus, like an image.
The results show that the areas needed
for visual processing
are activated whether or not the
participant is aware of the image,
but a whole additional network lights up
only when they are conscious
of seeing the image.
Patients with hemispatial neglect,
like P.S.,
typically have damage to one particular
part of this network.
More extensive damage to the network
can sometimes lead to a vegetative state,
with no sign of consciousness.
Evidence like this brings us closer
to understanding
how consciousness is built into the brain,
but there’s still much more to learn.
For instance,
the way neurons in the networks
related to consciousness
compute specific pieces of information
is beyond the reach of
our current technology.
As we approach questions of consciousness
with science,
we’ll open new lines of inquiry
into human identity.