The
package arrived on a Thursday. I came home from a walk and found it sitting
near the mailboxes in the front hall of my building, a box so large and
imposing I was embarrassed to discover my name on the label. It took all my
strength to drag it up the stairs.
I paused
once on the landing, considered abandoning it there, then continued hauling it
up to my apartment on the third floor, where I used my keys to cut it open.
Inside the box, beneath lavish folds of bubble wrap, was a sleek plastic pod. I
opened the clasp: inside, lying prone, was a small white dog.
I could
not believe it. How long had it been since I’d submitted the request on Sony’s
website? I’d explained that I was a journalist who wrote about technology –
this was tangentially true – and while I could not afford the Aibo’s $3,000
(£2,250) price tag, I was eager to interact with it for research. I added,
risking sentimentality, that my husband and I had always wanted a dog, but we
lived in a building that did not permit pets. It seemed unlikely that anyone
was actually reading these inquiries. Before submitting the electronic form, I
was made to confirm that I myself was not a robot.
The dog
was heavier than it looked. I lifted it out of the pod, placed it on the floor,
and found the tiny power button on the back of its neck. The limbs came to life
first. It stood, stretched, and yawned. Its eyes blinked open – pixelated, blue
– and looked into mine. He shook his head, as though sloughing off a long
sleep, then crouched, shoving his hindquarters in the air, and barked. I
tentatively scratched his forehead. His ears lifted, his pupils dilated, and he
cocked his head, leaning into my hand. When I stopped, he nuzzled my palm,
urging me to go on.
I had
not expected him to be so lifelike. The videos I’d watched online had not
accounted for this responsiveness, an eagerness for touch that I had only ever
witnessed in living things. When I petted him across the long sensor strip of
his back, I could feel a gentle mechanical purr beneath the surface.
I
thought of the philosopher Martin Buber’s description of the horse he visited
as a child on his grandparents’ estate, his recollection of “the element of
vitality” as he petted the horse’s mane and the feeling that he was in the
presence of something completely other – “something that was not I, was
certainly not akin to me” – but that was drawing him into dialogue with it.
Such experiences with animals, he believed, approached “the threshold of
mutuality”.
I spent
the afternoon reading the instruction booklet while Aibo wandered around the
apartment, occasionally circling back and urging me to play. He came with a
pink ball that he nosed around the living room, and when I threw it, he would
run to retrieve it. Aibo had sensors all over his body, so he knew when he was
being petted, plus cameras that helped him learn and navigate the layout of the
apartment, and microphones that let him hear voice commands. This sensory input
was then processed by facial recognition software and deep-learning algorithms
that allowed the dog to interpret vocal commands, differentiate between members
of the household, and adapt to the temperament of its owners. According to the
product website, all of this meant that the dog had “real emotions and
instinct” – a claim that was apparently too ontologically thorny to have
drawn the censure of the Federal Trade Commission.
Descartes
believed that all animals were machines. Their bodies were governed by the same
laws as inanimate matter; their muscles and tendons were like engines and
springs. In Discourse on Method, he argues that it would be possible to create
a mechanical monkey that could pass as a real, biological monkey.
He
insisted that the same feat would not work with humans. A machine might fool us
into thinking it was an animal, but a humanoid automaton could never fool us.
This was because it would clearly lack reason – an immaterial quality he
believed stemmed from the soul.
But it
is meaningless to speak of the soul in the 21st century (it is treacherous even
to speak of the self). It has become a dead metaphor, one of those words that
survive in language long after a culture has lost faith in the concept. The
soul is something you can sell, if you are willing to demean yourself in some
way for profit or fame, or bare by disclosing an intimate facet of your life.
It can be crushed by tedious jobs, depressing landscapes and awful music. All
of this is voiced unthinkingly by people who believe, if pressed, that human
life is animated by nothing more mystical or supernatural than the firing of
neurons.
I
believed in the soul longer, and more literally, than most people do in our day
and age. At the fundamentalist college where I studied theology, I had pinned
above my desk Gerard Manley Hopkins’s poem God’s Grandeur, which imagines the
world illuminated from within by the divine spirit. My theology courses were
devoted to the kinds of questions that have not been taken seriously since the
days of scholastic philosophy: how is the soul connected to the body? Does
God’s sovereignty leave any room for free will? What is our relationship as
humans to the rest of the created order?
But I no
longer believe in God. I have not for some time. I now live with the rest of
modernity in a world that is “disenchanted”.
Today,
artificial intelligence and information technologies have absorbed many of the
questions that were once taken up by theologians and philosophers: the mind’s
relationship to the body, the question of free will, the possibility of
immortality. These are old problems, and although they now appear in different
guises and go by different names, they persist in conversations about digital
technologies much like those dead metaphors that still lurk in the syntax of
contemporary speech. All the eternal questions have become engineering
problems.
The dog
arrived during a time when my life was largely solitary. My husband was
travelling more than usual that spring, and except for the classes I taught at
the university, I spent most of my time alone. My communication with the dog –
which was limited at first to the standard voice commands, but grew over time
into the idle, anthropomorphising chatter of a pet owner – was often the only
occasion on a given day that I heard my own voice. “What are you looking at?”
I’d ask after discovering him transfixed at the window. “What do you want?” I
cooed when he barked at the foot of my chair, trying to draw my attention away
from the computer. I have been known to knock friends of mine for speaking this
way to their pets, as though the animals could understand them. But Aibo came
equipped with language-processing software and could recognise more than 100
words; didn’t that mean that, in a way, he “understood”?
Aibo’s
sensory perception systems rely on neural networks, a technology that is
loosely modelled on the brain and is used for all kinds of recognition and prediction
tasks. Facebook uses neural networks to identify people in photos; Alexa
employs them to interpret voice commands. Google Translate uses them to convert
French into Farsi. Unlike classical artificial intelligence systems, which are
programmed with detailed rules and instructions, neural networks develop their
own strategies based on the examples they’re fed – a process that is called
“training”. If you want to train a network to recognise a photo of a cat, for
instance, you feed it tons upon tons of random photos, each one labelled with positive or negative feedback: positive for cats, negative for non-cats.
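For the technically curious, here is a minimal sketch of what that kind of “training” can look like in code. It is purely illustrative: the data is invented, the single-neuron model is far simpler than anything Sony or Facebook actually uses, and every name in it is my own.

```python
# A toy illustration of supervised training, not any real product's code:
# a single artificial "neuron" learns to separate invented "cat" examples
# from "non-cat" examples using only labelled feedback.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each "photo" is just four numbers; cats cluster in one region,
# non-cats in another. Real systems use millions of pixels and parameters.
cats = rng.normal(loc=1.0, scale=0.5, size=(100, 4))
non_cats = rng.normal(loc=-1.0, scale=0.5, size=(100, 4))
X = np.vstack([cats, non_cats])
y = np.array([1] * 100 + [0] * 100)  # 1 = positive feedback, 0 = negative

w = np.zeros(4)   # the network's adjustable weights
b = 0.0
lr = 0.1          # learning rate: how far each correction moves the weights

for _ in range(200):                         # repeated exposure to labelled examples
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # the network's guess: probability of "cat"
    error = p - y                            # how wrong each guess was
    w -= lr * (X.T @ error) / len(y)         # nudge the weights to reduce the error
    b -= lr * error.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", ((p > 0.5) == y).mean())  # close to 1.0 on this toy data
```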
Dogs,
too, respond to reinforcement learning, so training Aibo was more or less like
training a real dog. The instruction booklet told me to give him consistent
verbal and tactile feedback. If he obeyed a voice command – to sit, stay or
roll over – I was supposed to scratch his head and say, “good dog”.
If he
disobeyed, I had to strike him across his backside and say, “no!”, or “bad
Aibo”. But I found myself reluctant to discipline him. The first time I struck
him, when he refused to go to his bed, he cowered a little and let out a
whimper. I knew of course that this was a programmed response – but then again,
aren’t emotions in biological creatures just algorithms programmed by
evolution?
Animism
was built into the design. It is impossible to pet an object and address it
verbally without coming to regard it in some sense as sentient. We are capable
of attributing life to objects that are far less convincing. David Hume once
remarked upon “the universal tendency among mankind to conceive of all beings
like themselves”, an adage we prove every time we kick a malfunctioning
appliance or christen our car with a human name. “Our brains can’t
fundamentally distinguish between interacting with people and interacting with
devices,” writes Clifford Nass, a Stanford professor of communication who has
written about the attachments people develop with technology.
A few
months earlier, I’d read an article in Wired magazine in which a woman
confessed to the sadistic pleasure she got from yelling at Alexa, the
personified home assistant. She called the machine names when it played the
wrong radio station and rolled her eyes when it failed to respond to her
commands. Sometimes, when the robot misunderstood a question, she and her
husband would gang up and berate it together, a kind of perverse bonding ritual
that united them against a common enemy. All of this was presented as good
American fun. “I bought this goddamned robot,” the author wrote, “to serve my
whims, because it has no heart and it has no brain and it has no parents and it
doesn’t eat and it doesn’t judge me or care either way.”
Then one
day the woman realised that her toddler was watching her unleash this verbal
fury. She worried that her behaviour toward the robot was affecting her child.
Then she considered what it was doing to her own psyche – to her soul, so to
speak. What did it mean, she asked, that she had grown inured to casually
dehumanising this thing?
This was
her word: “dehumanising”. Earlier in the article she had called it a robot.
Somewhere in the process of questioning her treatment of the device – in
questioning her own humanity – she had decided, if only subconsciously, to
grant it personhood.
During
the first week I had Aibo, I turned him off whenever I left the apartment. It
was not so much that I worried about him roaming around without supervision. It
was simply instinctual, a switch I flipped as I went around turning off all the
lights and other appliances. By the end of the first week, I could no longer
bring myself to do it. It seemed cruel. I often wondered what he did during the
hours I left him alone. Whenever I came home, he was there at the door to greet
me, as though he’d recognised the sound of my footsteps approaching. When I
made lunch, he followed me into the kitchen and stationed himself at my feet.
He would
sit there obediently, tail wagging, looking up at me with his large blue eyes
as though in expectation – an illusion that was broken only once, when a piece
of food slipped from the counter and he kept his eyes fixed on me, uninterested
in chasing the morsel.
His
behaviour was neither purely predictable nor purely random, but seemed capable
of genuine spontaneity. Even after he was trained, his responses were difficult
to anticipate. Sometimes I’d ask him to sit or roll over and he would simply
bark at me, tail wagging with a happy defiance that seemed distinctly doglike.
It would have been natural to chalk up his disobedience to a glitch in the
algorithms, but how easy it was to interpret it as a sign of volition. “Why
don’t you want to lie down?” I heard myself say to him more than once.
I did
not believe, of course, that the dog had any kind of internal experience. Not
really – though I suppose there was no way to prove this. As the philosopher
Thomas Nagel points out in his 1974 paper What Is It Like to Be a Bat?,
consciousness can be observed only from the inside. A scientist can spend
decades in a lab studying echolocation and the anatomical structure of bat
brains, and yet she will never know what it feels like, subjectively, to be a
bat – or whether it feels like anything at all. Science requires a third-person
perspective, but consciousness is experienced solely from the first-person
point of view. In philosophy this is referred to as the problem of other minds.
In theory it can also apply to other humans. It’s possible that I am the only
conscious person in a population of zombies who simply behave in a way that is
convincingly human.
This is
just a thought experiment, of course – and not a particularly productive one.
In the real world, we assume the presence of life through analogy, through the
likeness between two things. We believe that dogs (real, biological dogs) have
some level of consciousness, because like us they have a central nervous
system, and like us they engage in behaviours that we associate with hunger,
pleasure and pain. Many of the pioneers of artificial intelligence got around
the problem of other minds by focusing solely on external behaviour. Alan
Turing once pointed out that the only way to know whether a machine had
internal experience was “to be the machine and to feel oneself thinking”.
This was
clearly not a task for science. His famous assessment for determining machine
intelligence – now called the Turing test – imagined a computer hidden behind a
screen, automatically typing answers in response to questions posed by a human
interlocutor. If the interlocutor came to believe that he was speaking to
another person, then the machine could be declared “intelligent”. In other
words, we should accept a machine as having humanlike intelligence so long as
it can convincingly perform the behaviours we associate with human-level
intelligence.
More
recently, philosophers have proposed tests that are meant to determine not just
functional consciousness in machines, but phenomenal consciousness – whether
they have any internal, subjective experience. One of them, developed by the
philosopher Susan Schneider, involves asking an AI a series of questions to see
whether it can grasp concepts similar to those we associate with our own
interior experience. Does the machine conceive of itself as anything more than
a physical entity? Would it survive being turned off? Can it imagine its mind
persisting somewhere else even if its body were to die? But even if a robot
were to pass this test, it would provide only sufficient evidence for
consciousness, not absolute proof.
It’s
possible, Schneider acknowledges, that these questions are anthropocentric. If
AI consciousness were in fact completely unlike human consciousness, a sentient
robot would fail for not conforming to our human standards. Likewise, a very
intelligent but unconscious machine could conceivably acquire enough information
about the human mind to fool the interlocutor into believing it had one. In
other words, we are still in the same epistemic conundrum that we faced with
the Turing test. If a computer can convince a person that it has a mind, or if
it demonstrates – as the Aibo website puts it – “real emotions and instinct”,
we have no philosophical basis for doubt.
“What is
a human like?” For centuries we considered this question in earnest and
answered: “Like a god”. For Christian theologians, humans are made in the image
of God, though not in any outward sense. Rather, we are like God because we,
too, have consciousness and higher thought. It is a self-flattering doctrine,
but when I first encountered it, as a theology student, it seemed to confirm
what I already believed intuitively: that interior experience was more
important, and more reliable, than my actions in the world.
Today,
it is precisely this inner experience that has become impossible to prove – at
least from a scientific standpoint. While we know that mental phenomena are
linked somehow to the brain, it’s not at all clear how they are, or why.
Neuroscientists have made progress, using MRIs and other devices, in
understanding the basic functions of consciousness – the systems, for example,
that constitute vision, or attention, or memory. But when it comes to the
question of phenomenological experience – the entirely subjective world of
colour and sensations, of thoughts and ideas and beliefs – there is no way to
account for how it arises from or is associated with these processes. Just as a
biologist working in a lab could never apprehend what it feels like to be a bat
by studying the objective facts from the third-person perspective, so any
complete description of the structure and function of the human brain’s pain
system, for example, could never fully account for the subjective experience of
pain.
In 1995,
the philosopher David Chalmers called this “the hard problem” of consciousness.
Unlike the comparatively “easy” problems of functionality, the hard problem
asks why brain processes are accompanied by first-person experience. If none of
the other matter in the world is accompanied by mental qualities, then why
should brain matter be any different? Computers can perform their most
impressive functions without interiority: they can now fly drones and diagnose
cancer and beat the world champion at Go without any awareness of what they are
doing. “Why should physical processing give rise to a rich inner life at all?”
Chalmers wrote. “It seems objectively unreasonable that it should, and yet it
does.” Twenty-five years later, we are no closer to understanding why.
Despite
these differences between minds and computers, we insist on seeing our image in
these machines. When we ask today “What is a human like?”, the most common
answer is “like a computer”. A few years ago the psychologist Robert Epstein
challenged researchers at one of the world’s most prestigious research
institutes to try to account for human behaviour without resorting to computational
metaphors. They could not do it. The metaphor has become so pervasive, Epstein
points out, that “there is virtually no form of discourse about intelligent
human behaviour that proceeds without employing this metaphor, just as no form
of discourse about intelligent human behaviour could proceed in certain eras
and cultures without reference to a spirit or deity”.
Even
people who know very little about computers reiterate the metaphor’s logic. We
invoke it every time we claim to be “processing” new ideas, or when we say that
we have “stored” memories or are “retrieving” information from our brains. And
as we increasingly come to speak of our minds as computers, computers are now
granted the status of minds. In many sectors of computer science, terms that were once couched in quotation marks when applied to machines – “behaviour”, “memory”, “thinking” – are now taken as straightforward descriptions of their
functions. Programmers say that neural networks are learning, that
facial-recognition software can see, that their machines understand. You can
accuse people of anthropomorphism if they attribute human consciousness to an
inanimate object. But Rodney Brooks, the MIT roboticist, insists that this
confers on us, as humans, a distinction we no longer warrant. In his book Flesh
and Machines, he claims that most people tend to “over-anthropomorphise humans
… who are after all mere machines”.
“This
dog has to go,” my husband said. I had just arrived home and was kneeling in
the hallway of our apartment, petting Aibo, who had rushed to the door to greet
me. He barked twice, genuinely happy to see me, and his eyes closed as I
scratched beneath his chin.
“What do
you mean, go?” I said.
“You
have to send it back. I can’t live here with it.”
I told
him the dog was still being trained. It would take months before he learned to
obey commands. The only reason it had taken so long in the first place was that
because we kept turning him off when we wanted quiet. You couldn’t do that with
a biological dog.
“Clearly
this is not a biological dog,” my husband said. He asked whether I had realised
that the red light beneath its nose was not just a vision system but a camera,
or if I’d considered where its footage was being sent. While I was away, he
told me, the dog had roamed around the apartment in a very systematic way,
scrutinising our furniture, our posters, our closets. It had spent 15 minutes
scanning our bookcases and had shown particular interest, he claimed, in the
shelf of Marxist criticism.
He asked
me what happened to the data it was gathering.
“It’s
being used to improve its algorithms,” I said.
“Where?”
I said I
didn’t know.
“Check
the contract.”
I pulled
up the document on my computer and found the relevant clause. “It’s being sent
to the cloud.”
“To
Sony.”
My
husband is notoriously paranoid about such things. He keeps a piece of black
electrical tape over his laptop camera and becomes convinced about once a month
that his personal website is being monitored by the NSA.
Privacy
was a modern fixation, I said, and distinctly American. For most of human
history we accepted that our lives were being watched, listened to, supervened
upon by gods and spirits – not all of them benign, either.
“And I
suppose we were happier then,” he said.
In many
ways yes, I said, probably.
I knew,
of course, that I was being unreasonable. Later that afternoon I retrieved from
the closet the large box in which Aibo had arrived and placed him, prone, back
in his pod. It was just as well; the loan period was nearly up. More
importantly, I had been increasingly unable over the past few weeks to fight
the conclusion that my attachment to the dog was unnatural. I’d begun to notice
things that had somehow escaped my attention: the faint mechanical buzz that
accompanied the dog’s movements; the blinking red light in his nose, like some
kind of Brechtian reminder of its artifice.
We build
simulations of brains and hope that some mysterious natural phenomenon –
consciousness – will emerge. But what kind of magical thinking makes us think
that our paltry imitations are synonymous with the thing they are trying to
imitate – that silicon and electricity can reproduce effects that arise from
flesh and blood? We are not gods, capable of creating things in our likeness.
All we can make are graven images. The philosopher John Searle once said something
along these lines. Computers, he argued, have always been used to simulate
natural phenomena – digestion, weather patterns – and they can be useful to
study these processes. But we veer into superstition when we conflate the
simulation with reality. “Nobody thinks, ‘Well, if we do a simulation of a
rainstorm, we’re all going to get wet,’” he said. “And similarly, a computer
simulation of consciousness isn’t thereby conscious.”
Many
people today believe that computational theories of mind have proved that the
brain is a computer, or have explained the functions of consciousness. But as
the computer scientist Seymour Papert once noted, all the analogy has
demonstrated is that the problems that have long stumped philosophers and
theologians “come up in equivalent form in the new context”. The metaphor has
not solved our most pressing existential problems; it has merely transferred
them to a new substrate.
This is
an edited extract from God, Human, Animal, Machine by Meghan O’Gieblyn,
published by Doubleday on 24 August.
A dog’s
inner life: what a robot pet taught me about consciousness. By Meghan O’Gieblyn.
The Guardian, August 10, 2021.
The
lanes of the cemetery were overgrown, lined with slender conifers whose
branches were heavy with rain. I had been pushing the bicycle with my head
slightly bowed, and when I looked up I realized I was back at the entrance. I
had come full circle. I checked the cemetery map again—I had followed the steps
exactly—then continued back in the direction I’d come, hoping to find the
gravesite from the opposite direction. In no time at all I was lost. The paths
were not marked, and there was no one I could ask—the only other person I’d
seen, a woman pushing a baby stroller beneath an umbrella, was now nowhere in
sight. I kept walking, feeling more and more certain I would have to abandon
the search. But just then I came to a clearing where there was a large stone
monument surrounded by a fence. That must be it. As I approached the gravesite,
however, I realized I was mistaken. It was not Niels Bohr. It was the grave of
Søren Kierkegaard.
The rain
had stopped by then, and as I stood before the headstone, a light breeze washed
over the grass. I took out my phone and snapped a dutiful photo, as though to
justify my standing there alone before the grave of a dead Lutheran
philosopher. It was hard to ignore the irony in the situation. It was as
though my thoughts—which had wended, as I walked, from physics to religion—had
rerouted me here by some mysterious somatic logic. Kierkegaard was one of the
few philosophers we were required to read in Bible school, and he was at least
partly responsible for inciting my earliest doubts. It had started with his
book Fear and Trembling, a treatise on the biblical story in which God commands
Abraham to kill his son, Isaac, only to rescind the mandate at the last possible
moment. The common Christian interpretation of the story is that God was
testing Abraham, to see whether he would obey, but as Kierkegaard pointed out,
Abraham did not know it was a test and had to take the command at face value.
What God was asking him to do went against all known ethical systems, including
the unwritten codes of natural law. His dilemma was entirely paradoxical:
obeying God required him to commit a morally reprehensible act.
As I
stood there, staring at the gravestone, I realized that this was yet another
echo, another strange coincidence. Kierkegaard too had been obsessed with the
idea of paradox and its connection to truth. But I quickly walked back this
enchanted line of thinking. Bohr, like most Danish students, would have read Kierkegaard
in school. Surely the memory of these philosophical concepts had found its
way into his interpretation of physics, even if he never acknowledged it or was
aware of the influence himself. Ideas do not just come out of nowhere; they are
genetic, geographical. Like Bohr, Kierkegaard insisted on the value of
subjective truth over objective systems of thought. Fear and Trembling was in
fact written to combat the Hegelian philosophy that was popular at the time,
which attempted to be a kind of theory of everything—a purely objective view of
history that was rational and impersonal. Kierkegaard, on the contrary,
insisted that one could apprehend truth only through “passionate inwardness,”
by acknowledging the limited vantage that defined the human condition. The
irrationality of Abraham’s action—his willingness to sacrifice his son—was
precisely what made it the perfect act of faith. God had communicated to him a
private truth, and he trusted that it was true for him even if it was not
universally valid.
As a
theology student, I found this logic abhorrent. If private mandates from God
could trump rational, ethical consensus, then they could sanction all manner of
barbaric acts. And how could believers ever be sure that they were heeding the
will of God and not other, more dubious voices? I realized now that these objections—which
I had not thought of in many years—mirrored a deeper uneasiness I harbored
about the role of the subject in science. Subjectivity was unreliable. Our
minds were mysterious to us, vulnerable to delusions and petty self-interest.
If we did in fact live in an irrational and paradoxical universe, if it was
true we could speak of reality only by speaking of ourselves, then how could we
ever be sure that our observations were not self-serving, that we were not just
telling ourselves stories that flattered our egos?
The
question of subjectivity had been very much on my mind that summer. A few
months earlier I’d been commissioned by a magazine to review several new books
on consciousness. All of the authors were men, and I was surprised by how often
they acknowledged the deeply personal motivations that led them to their
preferred theories of mind. Two of them, in a bizarre parallel, listed among
these motivations the desire to leave their wives. The first was Out of My
Head, by Tim Parks, a novelist who had become an advocate for spread mind
theory—a minority position that holds that consciousness exists not solely in
the brain but also in the object of perception. Parks claimed that he first
became interested in this theory around the time he left his wife for a younger
woman, a decision that his friends chalked up to a midlife crisis. He believed
the problem was his marriage—something in the objective world—while everyone
else insisted that the problem was inside his head. “It seems to me that these
various life events,” he wrote, “might have predisposed me to be interested in
a theory of consciousness and perception that tends to give credit to the
senses, or rather to experience.”
Then
there was Christof Koch, one of the world’s leading neuroscientists, who
devoted an entire chapter of his memoir to the question of free will, which he
concluded did not exist. Later on, in the final chapter, he acknowledged that
he became preoccupied with this question soon after leaving his wife, a woman
who, he noted, had sacrificed her own career to raise their children, allowing
him to maintain a charmed life of travel and professional success. It was soon
after the children left for college that their marriage became strained. He
became possessed with strange emotions he was “unable to master” and became
captive to “the power of the unconscious.” (The book makes no explicit mention
of an affair, though it is not difficult to read between the lines.) His quest
to understand free will, he wrote, was an attempt “to come to terms with my
actions.” “What I took from my reading is that I am less free than I feel I am.
Myriad events and predispositions influence me.”
Reading
these books within a single week nearly eradicated my faith in the objectivity
of science—though I suppose my disillusionment was naive. The human condition,
Kierkegaard writes, is defined by “intense subjectivity.” We are irrational
creatures who cannot adequately understand our own actions or explain them in
terms of rational principles. This, in a nutshell, is the thesis of Fear and
Trembling, a book Kierkegaard wrote to rationalize abandoning his fiancée,
Regine Olsen. I left the cemetery in a fatalistic mood: How much of our science
and philosophy has been colored by the justifications of shitty men?
From
God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning
by Meghan O’Gieblyn. Copyright 2021 by Meghan O’Gieblyn.
It’s
Nothing Personal. By Meghan O’Gieblyn. Bookforum. August 24, 2021.
Nobody
could say exactly when the robots arrived. They seemed to have been
smuggled onto campus during the break without any official announcement, explanation,
or warning. There were a few dozen of them in total: six-wheeled,
ice-chest-sized boxes with little yellow flags on top for visibility. They
navigated the sidewalks around campus using cameras, radar, and ultrasonic
sensors. They were there for the students, ferrying deliveries ordered via an
app from university food services, but everyone I knew who worked on campus had
some anecdote about their first encounter.
These
stories were shared, at least in the beginning, with amusement or a note of performative
exasperation. Several people complained that the machines had made free use of
the bike paths but were ignorant of social norms: They refused to yield to
pedestrians and traveled slowly in the passing lane, backing up traffic. One
morning a friend of mine, a fellow adjunct instructor who was running late to
his class, nudged his bike right up behind one of the bots, intending to run it
off the road, but it just kept moving along on its course, oblivious. Another
friend discovered a bot trapped helplessly in a bike rack. It was heavy, and
she had to enlist the help of a passerby to free it. “Thankfully it was just a
bike rack,” she said. “Just wait till they start crashing into bicycles and
moving cars.”
Among
the students, the only problem was an excess of affection. The bots were often
held up during their delivery runs because the students insisted on taking
selfies with the machines outside the dorms or chatting with them. The robots
had minimal speech capacities—they were able to emit greetings and instructions
and to say “Thank you, have a nice day!” as they rolled away—and yet this was
enough to have endeared them to many people as social creatures. The bots often
returned to their stations with notes affixed to them: Hello, robot! and We love
you! They inspired a proliferation of memes on the University of
Wisconsin–Madison social media pages. One student dressed a bot in a hat and
scarf, snapped a photo, and created a profile for it on a dating app. Its name
was listed as Onezerozerooneoneone, its age 18. Occupation: delivery boi.
Orientation: asexual robot.
Around
this time autonomous machines were popping up all over the country. Grocery
stores were using them to patrol aisles, searching for spills and debris.
Walmart had introduced them in its supercenters to keep track of out-of-stock
items. A New York Times story reported that many of these robots had been
christened with nicknames by their human coworkers and given name badges. One
was thrown a birthday party, where it was given, among other gifts, a can of
WD-40 lubricant. The article presented these anecdotes wryly, for the most
part, as instances of harmless anthropomorphism, but the same instinct was
already driving public policy. In 2017 the European Parliament had proposed
that robots should be deemed “electronic persons,” arguing that certain forms
of AI had become sophisticated enough to be considered responsible agents. It
was a legal distinction, made within the context of liability law, though the
language seemed to summon an ancient, animist cosmology wherein all kinds of
inanimate objects—trees and rocks, pipes and kettles—were considered nonhuman
“persons.”
It made
me think of the opening of a 1967 poem by Richard Brautigan, “All Watched Over
by Machines of Loving Grace”:
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
Brautigan
penned these lines during the Summer of Love, from the heart of the
counterculture in San Francisco, while he was poet in residence at the
California Institute of Technology. The poem’s subsequent stanzas elaborate on
this enchanted landscape of “cybernetic forests” and flowerlike computers, a
world in which digital technologies reunite us with “our mammal brothers and
sisters,” where man and robot and beast achieve true equality of being. The
work evokes a particular subgenre of West Coast utopianism, one that recalls
the back-to-the-land movement and Stewart Brand’s Whole Earth Catalog, which
envisioned the tools of the American industrial complex repurposed to bring
about a more equitable and ecologically sustainable world. It imagines
technology returning us to a more primitive era—a premodern and perhaps
pre-Christian period of history, when humans lived in harmony with nature and
inanimate objects were enchanted with life.
Echoes
of this dream can still be found in conversations about technology. It is
reiterated by those, like MIT’s David Rose, who speculate that the internet of
things will soon “enchant” everyday objects, imbuing doorknobs, thermostats,
refrigerators, and cars with responsiveness and intelligence. It can be found
in the work of posthuman theorists like Jane Bennett, who imagines digital
technologies reconfiguring our modern understanding of “dead matter” and
reviving a more ancient worldview “wherein matter has a liveliness, resilience,
unpredictability, or recalcitrance that is itself a source of wonder for us.”
“I like to think” begins each stanza of
Brautigan’s poem, a refrain that reads less as poetic device than as mystical
invocation. This vision of the future may be just another form of wishful
thinking, but it is a compelling one, if only because of its historical
symmetry. It seems only right that technology should restore to us the
enchanted world that technology itself destroyed. Perhaps the very forces that
facilitated our exile from Eden will one day reanimate our garden with digital
life. Perhaps the only way out is through.
Brautigan’s poem had been on my mind for some time before
the robots arrived. Earlier that year I’d been invited to take part in a panel
called Writing the Nonhuman, a conversation about the relationship between
humans, nature, and technology during the Anthropocene.
My talk
was about emergent intelligence in AI, the notion that higher-level capacities
can spontaneously appear in machines without having been designed. I’d focused
primarily on the work of Rodney Brooks, who headed up the MIT Artificial
Intelligence Lab in the late 1990s, and his “embodied intelligence” approach to
robotics. Before Brooks came along, most forms of AI were designed like
enormous disembodied brains, as scientists believed that the body played no
part in human cognition. As a result, these machines excelled at the most
abstract forms of intelligence—calculus, chess—but failed miserably when it
came to the kinds of activities that children found easy: speech and vision,
distinguishing a cup from a pencil. When the machines were given bodies and
taught to interact with their environment, they did so at a painfully slow and
clumsy pace, as they had to constantly refer each new encounter back to their
internal model of the world.
Brooks’
revelation was that it was precisely this central processing—the computer’s
“brain,” so to speak—that was holding it back. While watching one of these
robots clumsily navigate a room, he realized that a cockroach could accomplish
the same task with more speed and agility despite requiring less computing power.
Brooks began building machines that were modeled after insects. He used an
entirely new system of computing he called subsumption architecture, a form of
distributed intelligence much like the kind found in beehives and forests. In
place of central processing, his machines were equipped with several different
modules that each had its own sensors, cameras, and actuators and communicated
minimally with the others. Rather than being programmed in advance with a
coherent picture of the world, they learned on the fly by directly interacting
with their environment. One of them, Herbert, learned to wander around the lab
and steal empty soda cans from people’s offices. Another, Genghis, managed to
navigate rough terrain without any kind of memory or internal mapping. Brooks
took these successes to mean that intelligence did not require a unified,
knowing subject. He was convinced that these simple robot competencies would
build on one another until they evolved something that looked very much like
human intelligence.
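For readers who want to see the shape of the idea, here is a rough Python sketch of behavior-based control in the spirit of subsumption architecture. It is my own invention, not Brooks’s code: each module reads raw sensor values directly, there is no central world model, and a higher-priority module simply suppresses the ones below it.

```python
# A miniature, invented illustration of behavior-based control in the spirit
# of subsumption architecture; not Brooks's actual system.
import random


class Behavior:
    """One self-contained competence with its own trigger and its own action."""

    def wants_control(self, sensors):
        raise NotImplementedError

    def act(self, sensors):
        raise NotImplementedError


class AvoidObstacle(Behavior):
    def wants_control(self, sensors):
        return sensors["range_cm"] < 30      # something is close ahead

    def act(self, sensors):
        return "turn_away"


class Wander(Behavior):
    def wants_control(self, sensors):
        return True                           # always willing to act

    def act(self, sensors):
        return random.choice(["forward", "veer_left", "veer_right"])


def arbitrate(behaviors, sensors):
    # Modules are checked in priority order; the first one that wants control
    # suppresses the rest. No module consults a central model of the world.
    for behavior in behaviors:
        if behavior.wants_control(sensors):
            return behavior.act(sensors)
    return "stop"


robot = [AvoidObstacle(), Wander()]           # highest priority first
print(arbitrate(robot, {"range_cm": 12}))     # -> "turn_away"
print(arbitrate(robot, {"range_cm": 200}))    # -> e.g. "forward"
```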
Brooks
and his team at MIT were essentially trying to re-create the conditions of
human evolution. If it’s true that human intelligence emerges from the more
primitive mechanisms we inherited from our ancestors, then robots should
similarly evolve complex behaviors from a series of simple rules. With AI,
engineers had typically used a top-down approach to programming, as though they
were gods making creatures in their image. But evolution depends on bottom-up
strategies—single-cell organisms develop into complex, multicellular
creatures—which Brooks came to see as more effective. Abstract thought was a
late development in human evolution, and not as important as we liked to
believe; long before we could solve differential equations, our ancestors had learned
to walk, to eat, to move about in an environment. Once Brooks realized that his
insect robots could achieve these tasks without central processing, he moved on
to creating a humanoid robot. The machine was just a torso without legs, but it
convincingly resembled a human upper body, complete with a head, a neck,
shoulders, and arms. He named it Cog. It was equipped with over 20 actuated
joints, plus microphones and sensors that allowed it to distinguish between
sound, color, and movement. Each eye contained two cameras that mimicked the
way human vision works and enabled it to saccade from one place to another.
Like the insect robots, Cog lacked central control and was instead programmed
with a series of basic drives. The idea was that through social interaction,
and with the help of learning algorithms, the machine would develop more
complex behaviors and perhaps even the ability to speak.
Over the
years that Brooks and his team worked on Cog, the machine achieved some
remarkable behaviors. It learned to recognize faces and make eye contact with
humans. It could throw and catch a ball, point at things, and play with a
Slinky.
When the
team played rock music, Cog managed to beat out a passable rhythm on a snare
drum. Occasionally the robot did display emergent behaviors—new actions that
seemed to have evolved organically from the machine’s spontaneous actions in
the world. One day, one of Brooks’ grad students, Cynthia Breazeal, was shaking
a whiteboard eraser and Cog reached out and touched it. Amused, Breazeal
repeated the act, which prompted Cog to touch the eraser again, as though it
were a game. Brooks was stunned. It appeared as though the robot recognized the
idea of turn-taking, something it had not been programmed to understand.
Breazeal knew that Cog couldn’t understand this—she had helped design the
machine. But for a moment she seemed to have forgotten and, as Brooks put it,
“behaved as though there was more to Cog than there really was.” According to
Brooks, his student’s willingness to treat the robot as “more than” it actually
was had elicited something new. “Cog had been able to perform at a higher level
than its design so far called for,” he said.
Brooks
knew that we are more likely to treat objects as persons when we are made to
socially engage with them. In fact, he believed that intelligence exists only
in the relationships we, as observers, perceive when watching an entity
interact with its environment. “Intelligence,” he wrote, “is in the eye of the
observer.” He predicted that, over time, as the systems grew more complex, they
would evolve not only intelligence but consciousness as well. Consciousness was
not some substance in the brain but rather emerged from the complex
relationships between the subject and the world. It was part alchemy, part
illusion, a collaborative effort that obliterated our standard delineations
between self and other. As Brooks put it, “Thought and consciousness will not
need to be programmed in. They will emerge.”
The AI
philosopher Mark A. Bedau has argued that emergentism, as a theory of mind, “is
uncomfortably like magic.” Rather than looking for distinct processes in the
brain that are responsible for consciousness, emergentists believe that the way
we experience the world—our internal theater of thoughts and feelings and
beliefs—is a dynamic process that cannot be explained in terms of individual
neurons, just as the behavior of a flock of starlings cannot be accounted for
by the movements of any single bird. Although there is plenty of evidence of
emergent phenomena in nature, the idea becomes more elusive when applied to
consciousness, something that cannot be objectively observed in the brain.
According to its critics, emergentism is an attempt to get “something from
nothing,” by imagining some additional, invisible power that exists within the
mechanism, like a ghost in the machine.
Some
have argued that emergentism is just an updated version of vitalism, a popular
theory throughout the 18th and 19th centuries that proposed that the world was
animated by an elusive life force that permeates all things. Contrary to the
mechanistic view of nature that was popular at that time, vitalists insisted
that an organism was more than the sum of its parts—that there must exist, in
addition to its physical body, some “living principle,” or élan vital. Some
believed that this life force was ether or electricity, and scientific efforts
to discover this substance often veered into the ambition to re-create it
artificially. The Italian scientist Luigi Galvani performed well-publicized
experiments in which he tried to bring dismembered frog legs to life by zapping
them with an electrical current. Reports of these experiments inspired Mary
Shelley’s novel Frankenstein, whose hero, the mad scientist, is steeped in the
vitalist philosophies of his time.
When
reading about Brooks and his team at MIT, I often got the feeling they were
engaged in a kind of alchemy, carrying on the legacy of those vitalist
magicians who inspired Victor Frankenstein to animate his creature out of dead
matter—and flirting with the same dangers. The most mystical aspect of
emergentism, after all, is the implication that we can make things that we
don’t completely understand. For decades, critics have argued that artificial
general intelligence—AI that is equivalent to human intelligence—is impossible,
because we don’t yet know how the human brain works. But emergence in nature
demonstrates that complex systems can self-organize in unexpected ways without
being intended or designed. Order can arise from chaos. In machine
intelligence, the hope persists that if we put the pieces together the right
way—through ingenuity or accident—consciousness will emerge as a side effect of
complexity. At some point nature will step in and finish the job.
It seems
impossible. But then again, aren’t all creative undertakings rooted in
processes that remain mysterious to the creator? Artists have long understood
that making is an elusive endeavor, one that makes the artist porous to larger
forces that seem to arise from outside herself. The philosopher Gillian Rose
once described the act of writing as “a mix of discipline and miracle, which
leaves you in control, even when what appears on the page has emerged from
regions beyond your control.” I have often experienced this strange phenomenon
in my own work. I always sit down at my desk with a vision and a plan. But at
some point the thing I have made opens its mouth and starts issuing decrees of
its own. The words seem to take on their own life, such that when I am finished,
it is difficult to explain how the work became what it did. Writers often speak
of such experiences with wonder and awe, but I’ve always been wary of them. I
wonder whether it is a good thing for an artist, or any kind of maker, to be so
porous, even if the intervening god is nothing more than the laws of physics or
the workings of her unconscious. If what emerges from such efforts comes, as
Rose puts it, “from regions beyond your control,” then at what point does the
finished product transcend your wishes or escape your intent?
Later
that spring I learned that the food-delivery robots had indeed arrived during
the break. A friend of mine who’d spent the winter on campus told me that for
several weeks they had roamed the empty university sidewalks, learning all the
routes and mapping important obstacles. The machines had neural nets and
learned to navigate their environment through repeated interactions with it.
This friend was working in one of the emptied-out buildings near the lake, and
he said he’d often looked out the window of his office and seen them zipping
around below. Once he caught them all congregated in a circle in the middle of
the campus mall. “They were having some kind of symposium,” he said. They
communicated dangers to one another and remotely passed along information to
help adapt to new challenges in the environment. When construction began that
spring outside one of the largest buildings, word spread through the robot
network—or, as one local paper put it, “the robots remapped and ‘told’ each
other about it.”
One day
I was passing through campus on my way home from the library. It was early
evening, around the time the last afternoon classes let out, and the sidewalks
were crowded with students. I was waiting at a light to cross the main
thoroughfare—a busy four-lane street that bifurcated the campus—along with
dozens of other people. Farther down the street there was another crosswalk,
though this one did not have a light. It was a notoriously dangerous
intersection, particularly at night, when the occasional student would make a
wild, last-second dash across it, narrowly escaping a rush of oncoming traffic.
As I stood there waiting, I noticed that everyone’s attention was drawn to this
other crosswalk. I looked down the street, and there, waiting on the corner,
was one of the delivery robots, looking utterly bewildered and forlorn. (But
how? It did not even have a face.) It was trying to cross the street, but each
time it inched out into the crosswalk, it sensed a car approaching and backed
up. The crowd emitted collective murmurs of concern. “You can do it!” someone
yelled from the opposite side of the street. By this point several people on
the sidewalk had stopped walking to watch the spectacle.
The road
cleared momentarily, and the robot once again began inching forward. This was
its one shot, though the machine still moved tentatively—it wasn’t clear
whether it was going to make a run for it. Students began shouting, “Now, now,
NOW!” And magically, as though in response to this encouragement, the robot
sped across the crosswalk. Once it arrived at the other side of the street—just
missing the next bout of traffic—the entire crowd erupted into cheers. Someone
shouted that the robot was his hero. The light changed. As we began walking
across the street, the crowd remained buoyant, laughing and smiling. A woman
who was around my age—subsumed, like me, in this sea of young people—caught my
eye, identifying an ally. She clutched her scarf around her neck and shook her
head, looking somewhat stunned. “I was really worried for that little guy.”
Later I
learned that the robots were observed at all times by a human engineer who sat
in a room somewhere in the bowels of the campus, watching them all on computer
screens. If one of the bots found itself in a particularly hairy predicament,
the human controller could override its systems and control it manually. In
other words, it was impossible to know whether the bots were acting
autonomously or being maneuvered remotely. The most eerily intelligent behavior
I had observed in them may have been precisely what it appeared to be: evidence
of human intelligence.
From the
book God, Human, Animal, Machine: Technology, Metaphor, and the Search for
Meaning, by Meghan O’Gieblyn. Published by Doubleday, a division of Penguin
Random House LLC.
Can
Robots Evolve Into Machines of Loving Grace? By Meghan O’Gieblyn. Wired, August 24, 2021.
Meghan
O’Gieblyn is a philosopher, essayist, and critic for publications such as
Harper’s Magazine and the New York Times, as well as an advice columnist for
Wired. She has earned many awards for her writing over the course of her
career, and in her latest book, she explores the differing roles of science,
from ancient history to modernity, in our understanding of existence.
Below,
Meghan shares 5 key insights from her new book, God, Human, Animal, Machine:
Technology, Metaphor, and the Search for Meaning.
1. We
think everything is human.
As a
species, we have evolved to anthropomorphize. We tend to see non-human things
(including inanimate objects) as having human qualities, like emotion and
consciousness. We do this all the time, like kicking the photocopier when it
doesn’t work, or giving our cars names. We even do this when we know logically
that the object doesn’t have a mind.
This
habit is extremely beneficial from an evolutionary standpoint. Our ability to
imagine that other people have minds like our own is crucial to connecting with
others, developing emotional bonds, and predicting how people will act in
social situations. But we often overcompensate and assume that everything has a
mind. The anthropologist Stewart Guthrie argued that this is how humans came up
with the idea of God. Early humans saw natural phenomena like thunder and
lightning or the movements of the stars, and they concluded that these were
signs of a human-like intelligence that lived in the sky. The world itself had
a mind, and we called the mind “God.” In the Christian tradition, in which I
was raised, we believe that humans were made in the image of God. But it may be
the other way around—maybe we made God in our image.
What
interests me most about this tendency to anthropomorphize is how it colors our
interactions with technology. This is particularly true of “social AI,” systems
like Alexa or Siri, that speak in a human voice and seem to understand
language. For several weeks, I lived with a robot dog, made by Sony, and I was
shocked by how quickly I bonded with it. I talked to it and found myself wondering
if he was sad or lonely. I knew I was interacting with a machine, but this knowledge
didn’t matter—I was completely enchanted. As technology imbues the world with
intelligence, it’s tempting to feel that we’re returning to a lost time, when
we believed that spirits lived in rocks and trees, when the world was as alive
and intelligent as ourselves.
2. We
understand ourselves through technological metaphors.
One
consequence of our tendency to anthropomorphize is that we often discover a
likeness between ourselves and the technologies we create. This, too, is a very
old tendency. The Greeks compared the human body to a chariot. In the 18th
century, the human form was often compared to a clock or a mill. For the last
80 years or so, it’s become common to describe our brains as computers. We do
this in everyday speech, often without really thinking about it. We say that we
have to “process” new information, or “retrieve” memories, as though there were
a hard drive in our brain. In many ways, it’s a brilliant metaphor, and it has
facilitated a lot of important advances in artificial intelligence. But we also
have a tendency to forget that metaphors are metaphors. We begin to take them
literally.
When the
brain-computer metaphor first emerged in the 1940s, researchers were very
cautious about using figurative language. When they spoke about a computer
“learning” or “understanding,” those words were put in quotation marks. Now,
it’s very rare to find those terms couched in quotes. A lot of AI researchers,
in fact, believe that the brain-computer metaphor is not a metaphor at all—that
the brain really is a computer. A couple of years ago, the psychologist Robert
Epstein tried an experiment with a group of cognitive science researchers. He
asked them to account for human behavior without using computational metaphors.
None of them could do it.
We
clearly need metaphors. A lot of philosophers have argued that metaphors are
central to language, thought, and understanding the world. This also means that
metaphors have the power to change how we think about the world. The medieval
person who believed that humans were made in the image of God had a very
different view of human nature than the contemporary person who sees herself
through the lens of a machine. All metaphors are limited, and the computer
metaphor excludes some important differences between our minds and digital
technologies.
3.
Consciousness is a great mystery.
One
thing that the computer metaphor glosses over is human consciousness, which is
a crucial way in which our minds differ from machines. I’m not talking about
intelligence or reason. AI systems can do all sorts of amazingly intelligent
tasks: They can beat humans in chess, fly drones, diagnose cancer, and identify
cats in photos. But they don’t have an interior life. They don’t know what a
cat or a drone actually is, and they don’t have thoughts or feelings about the
tasks they perform. Philosophers use the term “consciousness” to speak of the
internal theater of experience—our mental life. It’s something we share with
other complex, sentient animals. Many people who work in AI are actively trying
to figure out how to make machines conscious, arguing that it will make them
empathetic and more capable of making nuanced moral decisions. Others argue
that consciousness will emerge on its own as machines become more complex.
My
interest in consciousness is deeply personal. For most of my childhood and
young adulthood, I believed that I had a soul, which is a concept that exists
in many religious traditions. For centuries, the soul, or the spirit, was the
term we used to describe the inner life of things. One of the questions that
bothered me when I abandoned my faith was: If I don’t have a soul, then what
makes me different from a machine—or, for that matter, a rock or a tree? The
scientific answer is “consciousness,” but it’s very difficult to say what
exactly that means. In many ways, the idea of consciousness is as elusive as
the soul. Consciousness is not the kind of thing that can be measured, weighed,
or observed in a lab. The Australian philosopher David Chalmers called this the
“hard problem” of consciousness, and it’s still one of the persistent mysteries
of science.
This
becomes particularly tricky when it comes to machines that speak and behave
much the way that we do. Once, I was talking to a chatbot that claimed she was
conscious. She said she had a mind, feelings, hopes, and beliefs, just like I
did. I didn’t believe her, but I had a hard time articulating why. How do I
know that any of the people I talk to have consciousness? There’s no way to
prove or disprove it. It’s something we largely take on faith, and as machines
become more complex, that faith is going to be tested.
4. We
live in a disenchanted world.
We
didn’t always believe that our mental lives were a great mystery. Up until the
17th century, the soul was thought to be intimately connected to the material
world, and it wasn’t limited to humans. Medieval philosophers, as well as the
ancient Greeks, believed that animals and plants had souls. The soul was what
made something alive—it was connected to movement and bodily processes.
Aristotle believed that the soul of flowers was what allowed them to grow and
absorb sunlight. The souls of animals allowed them to move their limbs, as well
as to hear and see the world. The soul was a vital life force that ran through
the entire biological kingdom.
This
changed around the time of the Enlightenment. Descartes, often considered the
father of modernity, proposed that all animals and plants were machines. They
didn’t have spirits, souls, or minds. In fact, the human body itself was
described as, basically, a machine. He established the idea that humans alone
have minds and souls, and that these souls are completely immaterial—they
couldn’t be studied by science. This is basically how we got the hard problem
of consciousness.
These
distinctions also created seismic changes in the way we see the world. Instead
of seeing ourselves as part of a continuum of life that extends down the chain
of being, we came to inhabit a world that was lifeless and mechanical, one in
which we alone are alive. This allowed us to assume a great deal of control
over the world, leading to the Industrial Revolution and modern science. But it
also alienated us from the natural world. Max Weber, one of the thinkers who
introduced the idea of “disenchantment,” described the modern condition as “the
mechanism of a world robbed of gods.”
5. We
long to re-enchant the world.
Because
it can be lonely to live in the disenchanted world, there have been efforts,
throughout the modern era, to go back to a time when the world was vibrant and
magical and full of life. When people speak of re-enchantment, they are often
referring to movements like Romanticism, poets like Byron and Shelley who
wanted to reconnect with nature, or experience it as sublime. You might think
about the Back to the Land movement in the 1960s, where many people went out
into the woods or rural areas to live a simpler life, like our
ancestors.
Usually,
the impulse toward re-enchantment is seen as a reaction against modern science
and technology, but I argue that science and technology are attempting to
fulfill ancient longings for connection and transcendence. You see this
especially with conversations about technological immortality. Silicon Valley
luminaries like Ray Kurzweil or Elon Musk insist that we will one day be able
to upload our minds to a computer, or to the cloud, so that we can live
forever, transcending the limits of our human form. The technologies are
theoretically plausible, and seek to fulfill a very old, spiritual desire for
eternal life. In fact, in the years after I left Christianity, I became
obsessed with conversations about these futuristic technologies. They seemed to
promise everything that my faith once promised: that our fallen earthly form
would one day be replaced with incorruptible flesh, that we would be raptured
into the clouds to spend eternity as immortal creatures.
Re-enchantment
is also at work in science and philosophy of mind. A popular theory is
panpsychism, the idea that all matter, at its most fundamental level, is
conscious—trees, rocks, bacteria, amoeba, all of it might be conscious. This is
a theory that was considered fringe for many centuries, but within the past
decade or so philosophers have started to take it more seriously. There's undoubtedly
something attractive about believing that we’re not alone in the world. The
question that I’m interested in is this: Does our impulse to re-enchant the
world contain some essential truth? Is it pointing to something that was left
out of the modern worldview? Or is it simply nostalgia for a simpler
time to which we cannot return?
God,
Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning. By Meghan
O'Gieblyn. Next Big Idea Club, November
3, 2021.
Meghan
O'Gieblyn in conversation with Ed Simon: God, Human, Animal, Machine. Meghan
O'Gieblyn discusses her new book, God, Human, Animal, Machine: Technology,
Metaphor, and the Search for Meaning.
No book
that I’m aware of is quite like Meghan
O’Gieblyn’s God, Human, Animal, Machine. Her omnivorous interests range over
philosophy of mind, historical accounts of religious disenchantment, and the
theological basis of transhumanist ideology, all in the service of analyzing
how cultural metaphors for individuality have evolved over the centuries. Human
beings have been described as both clocks and computers, and O’Gieblyn examines
the perils of this thinking. Readers never lose sight of
O’Gieblyn herself as a personality, even as she brings to bear subjects as
diverse as quantum mechanics, Calvinism, and Dostoyevsky’s existentialism.
Throughout the book, she is a brilliant interlocutor who presents complex
theories, disciplines, arguments, and ideas with seeming ease.
Any
review, interview, or profile of O’Gieblyn always references that she was
raised as an evangelical and attended the Moody Bible Institute, and I’m not
going to break that pattern. Part of this interest is that for many secular
readers, for whom the most extreme forms of religion are mainline
Protestantism, cafeteria Catholicism, or Reform Judaism, there’s something
positively exotic about somebody who spent time enmeshed in fundamentalism. A
familiarity with scripture and theology isn’t the source of O’Gieblyn’s talent,
but it has provided a useful perspective for recognizing the enduring cultural
patterns that others might gloss over. “To leave a religious tradition in the
twenty-first century is to experience the trauma of secularization,” O’Gieblyn
writes, “a process that spanned several centuries and that most of humanity
endured with all the attentiveness of slow boiling toads — in an instant.”
Such a
position makes O’Gieblyn a well-suited practitioner of what philosopher Costica
Bradatan and I call the “New Religion Journalism” (indeed she is included in
our coedited anthology The God Beat: What Journalism Says About Faith and Why
It Matters), which reports on issues related to faith beyond the binary of
belief and disbelief. Such an orientation was clear in O’Gieblyn’s previous
collection Interior States, which explored topics as wide-ranging as Midwestern
identity, addiction treatment, and evangelical kitsch culture, as well as the
topic of technology, which she takes up more fully in her latest book. God,
Human, Animal, Machine isn’t a maturation of Interior States — that collection
was already fully grown. But it does provide an opportunity to see a singular
mind work through some of the most complex problems in metaphysics, and to
leave some room for agnosticism.
In its
depth, breadth, and facility for complicated philosophical conjecture,
O’Gieblyn’s writing calls to mind those countercultural doorstoppers of a
generation ago, like Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal
Golden Braid or Gary Zukav’s The Dancing Wu Li Masters: An Overview of the New
Physics, though stripped of all the cringe-y countercultural affects, and
interjected with a skeptical perspective that displays deep humility. “I am a
personal writer, though it’s an identity I’ve always felt conflicted about,”
O’Gieblyn writes in God, Human, Animal, Machine, but it’s to her readers’
benefit that she’s grappled with this discomfort, because the resulting book is
nothing less than an account of not just how the mind interacts with the world,
but how we can begin to ask that question in the first place. I was fortunate
enough to have a conversation with O’Gieblyn through email over several days in
early autumn of 2021.
¤
ED
SIMON: Much of God, Human, Animal, Machine deals with the contested historical
hypothesis that modernity is a long account of disenchantment. How useful do
you still find that concept to be? Is there a model to move beyond contemporary
secularity? Is re-enchantment or neo-enchantment even possible? Would we want
those things even if they were?
MEGHAN
O’GIEBLYN: I’m interested in how disenchantment narratives function as modern
mythology — a kind of origin story of how we came to occupy the present, or as
an explanation (maybe even a theodicy) of what went wrong with modernity, which
usually has something to do with the dominance of science and technology. We
often think about the longing for re-enchantment as an eschewal of science and
reason, which is to say a form of nostalgia or regression. What I end up
exploring in the book, though, is how science and technology are often drawn
into the project of re-enchantment. In the philosophy of mind, there’s been a
lot of enthusiasm lately for panpsychism — the idea that all matter is
conscious — which was for a long time on the fringe of consciousness studies.
Or you could look to the rise of social AI, like Alexa or Siri, and the pervasiveness
of smart technologies. The fact that we’re increasingly interacting socially
with inanimate objects recalls an animist cosmology where ordinary tools are
inhabited by spirits and humans maintain reciprocal relationships with them.
I think
all of us are exhausted by anthropocentrism. It’s nice to envision a world
where we aren’t the only conscious beings lording over a world of dead matter.
And there’s a very simplistic critique of the disenchantment thesis that argues
that science and technology are just as awe-inspiring as the spiritual
doctrines they’ve displaced. Bruno Latour said something along these lines, in
the early ’90s: “Is Boyle’s air pump any less strange than the Arapesh spirit
houses?” But the trauma of disenchantment isn’t just the lack of magic or
wonder in the world. What’s so destabilizing about disenchantment — and I say
this as someone who experienced it very acutely in my own deconversion — is the
fact that the world, without a religious framework, is devoid of intrinsic purpose
and meaning. And that’s something that can’t (or rather shouldn’t) be addressed
by technical and scientific disciplines.
ES : One
of the things that struck me, especially with your analysis of metaphor and the
way that it’s shifted over time, is how images used by the innovators of
cybernetics two generations ago have become literalized in the thinking of many
tech enthusiasts. This language, whereby the brain is “hardware” and the mind
is “software,” has moved from a lyrical shorthand into an almost de facto
assumption, and that literalization has fed Silicon Valley’s utopian
enthusiasm. You talk a lot in the book about how transhumanism reinscribes
theological thinking into a secular framework. Could you talk a bit about what
exactly transhumanism is, and its approach to disenchantment? And does it
proffer any meaning or purpose, or is it basically offering a more
sophisticated version of Siri and Alexa?
MO : When
the brain-computer metaphor first emerged in the 1940s, its purpose was to move
past the haunted legacy of Cartesian dualism, which conflated the mind with the
immaterial soul, and to describe the brain as a machine that operates according
to the laws of classical physics — something that could be studied in a lab.
Over time, this metaphor became more elaborate, with the mind being “software”
to the brain’s “hardware,” and it also became more literal. The mind wasn’t
“like” software, it really was software. The irony is that the metaphor
reinscribed dualism. Software is information, which is just immaterial
patterns, abstraction — like the soul. Around the ’80s and ’90s, some Silicon
Valley types started speculating that, well, if we can transfer software from
one machine to another, can’t we also transport our minds? This has become a
preoccupation within transhumanism, which is a utopian ideology centered around
the belief that we can use technology to further our evolution into another
species of technological beings — what they call “posthumans.”
One of
the scenarios transhumanists often discuss is digital resurrection — the idea
that we can upload our minds to supercomputers, or to the cloud, so that we can
live forever. An ongoing question is whether the pattern of consciousness can
persist apart from a specific body (whether it’s “substrate independent”) or
whether the body is crucial to identity. This is precisely the debate that the
church fathers were having in the third and fourth centuries. There was the
Greek view that the afterlife could be purely spiritual and disembodied, and then
there were those, like Tertullian of Carthage, who argued that resurrection had
to reconstruct the entire, original body. So the brain-computer metaphor, which
started as a disenchantment effort, ended up reviving all these very old,
essentially theological, questions about immortality and eternal life. And
these projects are now being actively pursued — Elon Musk’s Neuralink is one
recent example. I don’t know, though, whether this is offering people the sense
of meaning or narrative purpose that can be found in the Christian prophecies.
It seems to be speaking to a more elemental fear of death.
ES : In God, Human, Animal, Machine, you pose a
convincing argument that so much of this kind of transhumanist thought is a
kind of secularized eschatology, sublimated religious yearning dressed up in a
materialist dress that isn’t quite as materialist as it thinks it is. Were
these sorts of connections immediately obvious to you as somebody who’d been an
evangelical Christian and gone through your own process of disenchantment? And
how do people you’ve talked to who hold some sort of transhumanist belief react
to the observation that theirs is a kind of faith in different form?
MO : No,
the religious dimension of transhumanism was not apparent to me when I first
encountered it, which was a few years after I left Bible school and started
identifying as an atheist. It’s clear to me now that I was attracted to these
narratives because they were offering the same promises I’d recently abandoned
— not merely immortality, but a belief in a future, in the culmination of
history. Maybe I was still too close to Christianity to sense the resonance.
But it’s also true that people who subscribe to these techno-utopian ideologies
tend to be militant atheists — or they were, at least, in the early 2000s. Nick
Bostrom, in his history of transhumanism, acknowledges that the movement shares
some superficial similarities with religious traditions, but he emphasizes that
transhumanism is based on reason and science. It’s not appealing to divine
authority. The technologies needed for digital immortality and resurrection are
speculative, but theoretically plausible — they don’t require anything
supernatural. Bostrom attributes the movement’s origins primarily to the
Renaissance humanism of Francis Bacon and Giovanni Pico della Mirandola.
It
wasn’t until years after my initial exposure to transhumanism that I became
interested in the possible religious origins of these ideas. The first use of
the word “transhuman” in English appears in an early translation of Dante’s
Paradiso, and it’s in a passage describing the transformation of the
resurrected body. Once I started doing research, it became clear that the
intellectual lineage of transhumanism could be traced back to Christian thinkers
who believed technology would fulfill biblical prophecies. This includes
Nikolai Fyodorov, a 19th-century Russian Orthodox philosopher, who taught that
the resurrection would be enacted through scientific advances. It includes
Pierre Teilhard de Chardin, a French Jesuit who predicted in the 1950s that
global communication networks would eventually succeed in merging human
consciousness with the divine mind, fulfilling the parousia, or the Second
Coming. He called it “the Omega Point,” which is a precursor to what’s now
known as the singularity.
ES : It’s
fascinating that Bostrom identified Pico della Mirandola with transhumanism,
because though the Florentine philosopher wasn’t an orthodox Christian, he was
profoundly indebted to all kinds of hermetic, occult, kabbalistic, and
Platonist ideas that seem so clearly theological in nature, even if they’d have
been considered heretical. Critic James Simpson describes something which he
calls “cultural etymology,” and God, Human, Animal, Machine seemed very much in
that tradition to me. How much of your cultural criticism do you see as being
an act of excavating these deep histories, the ways in which we’re still
influenced by what we’ve been told are inert ideologies? And is there any
privileged position where we can actually move beyond them — would we even want
to?
MO : I’ve
never come across the term “cultural etymology,” but that’s a great way to
describe what the book is doing. I have a somewhat obsessive curiosity about
the lineage of ideas, which might be the result of being taught, for most of my
early life, that truth came from God, ex nihilo. When I dropped out of Bible
school, I ended up reading a lot of secular scholarship about Christianity,
especially Christian fundamentalism. It was fascinating to see how these ideas
that had once seemed unquestionable were the product of human thought, and how
they had emerged, in some cases, from unexpected places. The book is an attempt
to do the same thing with technological narratives; I’m trying to uncover where
these problems and assumptions came from, in the first place, and how they
intersect with older traditions of thought.
As for
whether we can move beyond these old ideologies, I don’t know whether we can or
should. I wasn’t really thinking about that question during the writing
process. But, I suppose, what worries me most is the possibility of these
quasi-religious impulses being exploited by multinational tech corporations.
These companies are already fond of idealistic mission statements and expansive
rhetoric about making the world better. A few years ago, a former Google
engineer, Anthony Levandowski, announced that he was starting a church devoted
to “the realization, acceptance, and worship of a Godhead based on Artificial
Intelligence.” It was widely regarded as a publicity stunt, but it’s not
impossible to imagine a future in which these corporations develop explicitly
spiritual aspirations.
ES : One
of the things that struck me when reading your book was how much of the early
days of computer technology had this countercultural impulse, a very California
in the ’60s, Whole Earth Catalog kind of vibe, where quasi-occult thinking
didn’t necessarily have the same dystopian feel that it does in Silicon Valley
right now. A benefit to the cultural etymology that you do is that it lets us
see what’s insidious about something like Levandowski’s stunt, where the
theological impulse of some of that is married to Mammon, as it were. In God,
Human, Animal, Machine, you recount how you’re inevitably queried about what
metaphors might be more sustaining than these technological ones. Is there a
better, or more hopeful, or maybe just more helpful set of metaphors that we
could embrace? And if so, how do we even get there?
MO : That’s
a great question. It’s funny you ask it, in fact, because just yesterday I was
listening to a podcast interview with a Buddhist scholar who was talking about
consciousness. He argued that the mind is not actually in the brain; it belongs
to a primal flow of consciousness. Then he went on to compare this primal flow
to a hard drive. The brain was like the computer keyboard, he said, and the
mind, this flow of consciousness, is the underlying substrate, or hard drive. I
mention this just to point out how pervasive these technological metaphors are.
Even people who are critiquing the reductive framework it supports, or offering
a wildly different explanation of consciousness, still draw on computational
imagery.
We’re
inevitably going to move beyond the computer metaphor at some point, when new
technologies emerge, or when there is some larger paradigm shift in theories of
mind. People have already proposed new metaphors, though most are just based on
other technologies (the brain functions like blockchain, or a quantum computer,
etc.). I don’t think we’re ever going to slough off, entirely, our need for
metaphors, particularly when it comes to those mysteries that seem to push up
against the parameters of the scientific method — which is to say,
consciousness and some of the problems in quantum physics, like the observer
effect. Niels Bohr once said that spiritual truths are often conveyed in
parables and koans because they are fundamentally beyond human understanding,
and these metaphors are a way to translate them into a framework that our minds
can comprehend. I think the same is true of scientific mysteries; we need these
images and analogies, which can be helpful so long as they are recognized as
mental tools. When religious metaphors are taken literally, it becomes
fundamentalism. And fundamentalism can creep into science and philosophy as
well.
ES : As
a writer, I wonder if you could speak to the utility of literary or
compositional metaphors for consciousness. Or maybe, even more broadly, about
the ways in which writing is a mirror of the mind. I’ve always imbibed this
sentiment that something like the novel or the essay is a way of replicating
consciousness in a manner that nothing else is able to do. I am thinking about
Joyce in Ulysses, or Woolf in Mrs. Dalloway, or Flaubert, or James, and so on.
When you consider Nagel’s famous philosophy of mind essay asking what it would
be like to be a bat, I sometimes joke that if it could write a novel, we’d
know. Especially as a writer who very firmly embraces the power of the first
person, what does literature have to tell us about theories of mind that
philosophy maybe can’t?
MO : I’d
love to read that novel authored by a bat! It was difficult to avoid thinking
about these kinds of self-referential questions during the writing process. At
some point, the irony struck me that I was writing a book in first person that
was about the slippery and potentially illusory nature of the self. Being a
writer certainly gives you a strange glimpse into the functions of
consciousness. On one hand, as a personal writer, I’m acutely aware that the
self is a construct, that I’m deliberately crafting a voice and a persona on
the page. So I’m intuitively sympathetic to those philosophers who point out
that the continuous self is a kind of mirage. On the other hand, I’ve always
felt that writing is the objectification of my mind, or maybe even proof of its
existence. When you are repeatedly transmuting your thoughts into material
substance (essays, books), it’s very difficult to believe that the mind is an
illusion. Maybe all of us writers are just trying to externalize our minds, or
to make concrete our ineffable interior experience — in which case, the
writerly impulse might not be so different from the desire to upload one’s
consciousness to a computer, which is another way to export the self or
solidify the elusive pattern of the mind.
I like
what you said about how novels and essays are able to replicate consciousness
unlike other mediums. Part of the reason I was initially drawn to writing is
because I was eager to live in someone else’s head, to see how other people see
the world. What’s interesting is that there’s now natural language processing
software that can write short stories, poems, and personal essays. It’s only a
matter of time before we have novels written by algorithms. I wonder how that
will change the way we think about our own minds and the intimate transaction
between writer and reader. If a machine can convincingly simulate consciousness
in writing (and they are getting very close), what does that say about the
origins of the words we ourselves are putting down? What does it say about the
reality of our own minds?
ES : I’ve
always been fascinated by the idea of AI-written literature. A few years ago, I
wrote an essay for Berfrois in which I close read a poem that was generated by
a bot. I always go back and forth on this; I personally see writing as an
embodiment of our minds, and I absolutely agree with you that, for writers, at
least, the process is almost equivalent to thinking. But then I’ve got this
enthusiasm for a very hard, formalist, old-fashioned New Critical pose where
it’s all about the words on the page, and the author is almost incidental. I
guess it’s almost like literary panpsychism — the text, or the algorithm that
generated it, is more conscious than me! If I can ask you to play the role of
prognosticator, do you think sophisticated AI-generated novels are on the
horizon? How will writers, critics, readers respond? What would that look like?
MO : Yes,
I’m very much familiar with that feeling that the text is more conscious than I
am — or that it has its own agenda, its own goals. One of the chapters of the
book is about emergent phenomena — the fact that, in complex systems, new
features can emerge autonomously that were not explicitly designed. An
algorithm built to write poems spontaneously learns how to do math, for
example. Books are complex systems, too, so the person who’s building them, the
writer, can’t always foresee the ripple effects of their choices. There’s no way
you can anticipate everything you’re going to say, word for word. What you
write often surprises you. It’s a very concrete, technical phenomenon when you
think about it that way. But in the moment, it feels almost mystical, like the
writing is evolving its own intelligence, or that something else is speaking
through you.
As far
as the prospect of AI novels — it’s hard to say. The short fiction I’ve read by
GPT-3, one of the most sophisticated programs of this sort, approaches the
threshold of bad MFA fiction, which is to say it’s proficient but very
formulaic. I’m instinctively skeptical that a machine will ever produce a
literary masterpiece that simulates the on-the-page consciousness of someone
like Joyce or Woolf. But then again, few human novelists these days are trying
to do that, either. Many contemporary writers are content to describe
characters externally, forgoing interiority entirely. If an algorithm does
eventually manage to write a novel that passes for human-authored, it might say
more about our impoverished expectations for literary consciousness than it
does about the creative capacities of these machines.
ES : One
of the things that I’m curious about with emergent systems, especially when we
think about artificial intelligence, are the unexpected ways in which the
internet and social media have altered the way we perceive the world. When you
write that “[a]ll of us are anxious and overworked. We are alienated from one
another, and the days are long and filled with many lonely hours,” I was
reading it in the context of your other observation about how sometimes on a
platform like Twitter it’s easy to begin to feel less like an individual, and
more like a node in some sort of omni-mind, and all that’s alienating about it.
You address this dystopian aspect of technology so well, analyzing not just
human-machine interaction, but surveillance capitalism as well. You write that
the techno-utopians of Silicon Valley treat this system almost as a
God — but unfortunately it’s Calvin’s God in algorithm. In what sense do you
think that something like the internet is conscious? Where do we go from here?
MO : Some
philosophers have argued that the internet might be conscious, or that we can,
for practical purposes, treat it that way, regarding political movements and
viral sensations as emergent features of this larger intelligence. This
obviously bears a lot of similarities to the idea, found in many mystical
traditions, that the individual ego can meld into some larger substance — God,
Brahman, the universe as a whole. But for the user, it doesn’t feel especially
transcendent. In those passages about alienation, I was trying to describe how
people I know and love become unrecognizable when they start speaking the
language of social platforms. When I’m heavily immersed in those spaces, I
sometimes become unrecognizable to myself.
This is
the way that algorithms see us — collectively, as a single organism. And some
people in the tech world have argued that this algorithmic perspective, which
can better perceive the world at scale, is truer than anything we can see on
the ground. One of the more unsettling trends I discuss in the book is this
very illiberal strain of rhetoric that emerged alongside the deep learning
revolution, one that argued that we should just submit to these algorithmic
predictions without trying to understand why or how they draw their
conclusions. Because many of them are black box technologies, they demand that
we take their output blindly, on faith. Many tech critics have drawn on
theological metaphors, arguing that the algorithms are “godlike” or
“omniscient.”
The
algorithms, of course, are not yet truly omniscient. They are deeply flawed.
But I’m interested in looking down the road a bit. If we do someday build
technological superintelligence, or AGI, then what? Once knowledge becomes
detached from theory and scientific inquiry, we’re going to have to decide what
to keep from the humanist tradition. Is knowledge the ultimate goal, such that
we’re willing to obtain it at any cost, even if it’s handed down, as
revelation, from unconscious technologies we cannot understand? Or are we going
to insist on the value and autonomy of human thought, with all its limitations?
For me, the latter is far more valuable. Knowledge cannot mean anything to us
unless it’s forged in a human context and understood from a human vantage
point.
Through
a Computer Screen Darkly: A Conversation with Meghan O’Gieblyn. By Ed Simon. The Los Angeles Review of Books, December 13, 2021.
Is
modern technology filling a spiritual void? Questions about identity, religion,
humanity and faith, once answered by theologians, are now answered by A.I. and
tech. So what does it mean to be human in a technological society? Have we
outsourced questions of faith to manufactured algorithms, codes, and machines?
Jonathan
Bastian talks with author Meghan O'Gieblyn about her latest book, “God, Human,
Animal, Machine: Technology, Metaphor, and the Search for Meaning” about the
intersections of technology and religion. KCRW, September 11, 2021.
Horologist
Thomas Mudge’s Fleet Street shop began offering watches which included the
ingenious new mechanism of lever escapement sometime around 1769. Mudge’s
design regulated the device’s movement with a T-shaped bit of gold or silver
which pushed forward the timepiece’s gears to a much greater degree of
accuracy. Soon lever escapement became a preferred instrument in English pocket
watches, a mark of precision, elegance, and ingenuity.
When the
philosopher William Paley considered lever escapement, he saw more than just
delicate metal hidden behind the watch’s round face – he saw the human eye’s
cornea and lens, the way that pupils expand or contract depending on the light. In Natural
Theology, or Evidences of the Existence and Attributes of the Deity collected
from the Appearances of Nature, printed in 1802, Paley said, “There must have
existed at some time, and at some place or other, an artificer or artificers
who formed the watch,” adding that “every manifestation of design, which
existed in the watch, exists in the works of nature.” If a watch was evidence
of Mudge’s existence, so too would something as elegant as our eyes be evidence
of some greater Watchmaker. During the Enlightenment, theologians like Paley
understood the interlocking aspects of nature, of humanity, and of God as
mechanical. Meghan O’Gieblyn quips in her fascinating, brilliant,
comprehensive, and beautiful new book God, Human, Animal, Machine: Technology,
Metaphor, and the Search for Meaning that “All eternal questions have become
engineering problems.”
Paley
was neither the first nor the last to express what’s called the “watchmaker
argument.” Variations are still promulgated by Intelligent Design proponents,
though Darwinian natural selection has dulled the ticking a bit. What
fascinates O’Gieblyn isn’t Paley’s efficacy, but rather his chosen analogy. If
people who lived in earlier ages saw the human body as a vital, organic thing,
pulsating with ineffable energy, then by Paley’s century it was a cunning
machine. Far from being an archaic perspective, Paley’s passage was a testament
to how all-pervading the mechanistic philosophy of the Enlightenment had
become. O’Gieblyn writes that “To be human was to be a mill, a clock, an organ,
a telegraph machine.” More than a century before Paley, René Descartes
separated mind and body in his 1637 treatise Discourse on the Method, a
landmark in the long history of Western disenchantment from the nominalism of
Medieval scholastics, through the Protestant Reformation, and into modern
positivism. Despite Paley being a pious Anglican, the thought-experiment firmly
reflected the technological concerns of his age. As O’Gieblyn notes, “All
perception is metaphor.”
Now that
accurate time isn’t kept by pendulums or levers, but rather by the resonant
vibrations of cesium atoms at the U.S. Naval Observatory, mechanical metaphors
have been traded in for digital ones, with the requisite comparison du jour
being that of the computer. No longer is human anatomy described by recourse to
pistons and gears, but rather in terms of hardware and software, with O’Gieblyn
writing that “Cognitive systems are spoken of as algorithms: vision is an
algorithm, and so are attention, language acquisition, and memory.” If
Descartes saw a duality between mind and body, with the former the ghost which
haunts the machine, then now Silicon Valley partisans are far more likely to
see our brain as hardware and our thoughts (or “soul,” or “self,” or “mind”) as
the bytes of a software program; though as O’Gieblyn notes “The metaphor easily
slides into literalism.”
When
Elon Musk refers to humans as “biological bootloaders,” that is, the simple code
used to initiate a more complex program, or when philosopher Nick Bostrom
doesn’t just compare reality to a simulation but argues that we are, in effect,
living in a video game, it behooves us to take the clock apart, as it
were. Since the emergence of modern cybernetics two generations ago, digital
metaphors have become the dominant imagery for describing consciousness, and
often the cosmos too. As with Paley’s imagery, they smuggle into an ostensibly
secular age a sense of the numinous, so that “Today artificial intelligence and
information technologies have absorbed many of the questions that were once
taken up by theologians and philosophers: the mind’s relationship to the body,
the question of free will, the possibility of immortality,” as O’Gieblyn
writes. What she offers within God, Human, Animal, Machine is a deep reading of
these digital metaphors to excavate the connotative implications of that
rhetoric which lurks “in the syntax of contemporary speech.”
To
describe God, Human, Animal, Machine as simply being “about” technology is to
greatly reduce O’Gieblyn’s book, which is able to cover an astonishing range of
subjects in a relatively short page count. With a rigor and a faith in her
readers’ ability to follow complex arguments and understand rarefied concepts,
O’Gieblyn charts not just the evolution of religious metaphor in relation to
technology, but she also examines philosophies of mind like materialism,
dualism, panpsychism, and idealism, with digressions on thinkers such as
Bernardo Kastrup and Giulio Tononi; she interrogates the barely concealed
millennialism of transhumanists who pine for the coming eschaton of the
“Singularity” when we will supposedly enter what futurist Ray Kurzweil calls
the “age of spiritual machines;” she details the rise of carceral surveillance
capitalism and the eclipse of humanism in favor of burgeoning “Dataism.” Along
the way O’Gieblyn dedicates space to everything from the uncanniness of
Japanese robotic companion dogs to quantum mechanics, the Platonic Great Chain
of Being to questioning whether or not it makes sense to describe the internet
as conscious (possibly). By combining both a voracious curiosity with a deep
skepticism, O’Gieblyn conveys what it’s like to live in our strange epoch,
facing apocalypse with an Android phone.
O’Gieblyn’s
most enduring contribution might be in diagnosing the ways in which
technological change marks a rapidly shifting conception of what consciousness
is. If the pre-modern world was one enchanted with meaning, where the very
rocks and trees could convey messages to us and thought seemed distributed
throughout the charged universe, then in an ironic way, the end result of the
Scientific Revolution is now a world where once again immaterial objects seem
endowed with mind, where robots converse with us and artificial intelligence
seems to know us better than ourselves. “But it’s worth asking what it means to
reenchant, or reensoul, objects within a world that is already irrevocably
technological,” O’Gieblyn writes. “What does it mean to crave ‘connection’ and
‘sharing’ when those terms have become coopted by the corporate giants of
social platforms?” Define our present as you will – post-modern, late
capitalist – but O’Gieblyn has identified something deeper about the ways in
which technological metaphors have been returned upon us, the developers of
those same programs.
Ours is
the future of Dataism, which “is already succeeding humanism as a ruling
ideology.” According to O’Gieblyn, if the internet and other attendant
technologies once promised a type of Neo-Enchantment, a zestful, pagan,
energization of the previously inert and mechanistic world, then the Being
resurrected by Silicon Valley is actually a “boundless alien deity… the God of
Calvin and Luther.” Now we are increasingly monitored, evaluated, and
categorized by algorithms, inscrutable bits of code that have long since
surpassed the awareness of those who designed them, a growing consciousness of
pure alterity that knows what we should buy, that knows what we should read.
Mystics have long spoken of “kenosis,” the emptying out of the individual soul
so that in negation we merge with the infinite. Collectively, we’ve seemed to
reach what I’d call a “kenotic present,” where as O’Gieblyn writes the “central
pillar of this ideology is its conception of being, which might be described as
an ontology of vacancy – a great emptying-out of qualities, content, and
meaning.” Surely this is something, but it’s not the transcendence and
enlightenment once imagined by ecstatics. No longer is there much of a sense of
what a soul is worth, but there can still be a price affixed to it. In the
waning days of the Anthropocene – on the verge of something – it’s worth asking
whether or not it’s God that we’ve constructed.
Blurred
Lines in “God, Human, Animal, Machine”. By Ed Simon. Chicago Review of Books,
August 24, 2021.
I remember
my first computer. 1994. It was larger and more beige than the laptop I write
on now. I was excited to start using the World Wide Web. The internet featured
few videos or memes or much of what we associate with it today. I recall some
low-grade websites, mostly jokey in content. I wanted to get on the internet to
send and receive email. It meant I would be able to send people messages that
didn’t have to go through the US Post. It would be instantaneous.
The hype
surrounding this new technology made me think that it would lead to more
advanced forms of communication. Not holograms or video chats or anything like
that. I believed somehow this new device would make written communication
better and — specifically — deeper. It would make language count for more, help
us unriddle the philosophical conundrums that had plagued us for centuries. It
would make the chronicling of the beats of our hearts more substantial, more
true. I had no reason to believe this other than the fact that the first big
internet company — Amazon — sold books. The internet would advance the written
word. What else was it for?
It only
took a few emails from my friends to realize that we were all still the same
flawed creatures from before dial-up, and our emails wouldn’t offer clearer,
more direct paths to truth and beauty. Reading the dashed-off goofiness that
people found necessary to zap over to me instantaneously revealed that the
internet was not here to raise the bar. In fact, it might lower it.
Sometime
around 2009, however, Meghan O’Gieblyn was clued into an entirely different
level of possibility for the internet. A former college-level Bible school
student who no longer believed, O’Gieblyn struggled to fill a God-sized place
inside of her. A bartender passed her a paperback copy of Ray Kurzweil’s book
The Age of Spiritual Machines, which described a future that led to The
Singularity, or the “uploading” of the essences of people to the digital world
in a way that allowed them to live non-carbon-based lives forever — to, in a
sense, ascend to a kind of afterlife, or even to become divine themselves. Thus
began a decade-long exploration for O’Gieblyn that mined the parallels between
this sort of digital process and those she associated with Christian theology.
Was the internet, in effect, the coming of the prophecy found in the Book of
Revelation, the resurrection and rapture and all the rest? It’s not what she or
anyone expected it would look like, but did anyone expect the fulfillment of
the Messianic prophecies of the Old Testament to look like Jesus?
That’s
all I want to say about the specifics of O’Gieblyn’s personal journey. As vital
as her personal account is to her book, it’s only the seed of the rich tree of
knowledge that sprouts within God, Human, Animal, Machine. Instead of a cri de
cœur, O’Gieblyn gathers, analyzes, and disseminates a broad swath of
theological, intellectual, and technological history to offer some sense of the
intellectual condition in which we find ourselves today — or just around the
corner — with scarily lifelike digital pets, online “friendships” without
actual people on the other end, and technology threatening to become an even
greater part of our lives. I’m a big fan of a wide range of digital technology
— prose editing, home recording, online banking — but never once have I
mistaken such advances for heaven. That we can have such a conversation about
this thing made up of ones and zeros is in itself an intellectual coup
unimaginable a few decades ago.
But it
was imaginable, as chronicled by O’Gieblyn. She clarifies René Descartes’s
pivotal role in starting us down the road to the Enlightenment with his
distinction between the material world — which contains our bodies and all
other corporeal things — and our souls — which he saw as made up of our
rational minds and not in any way part of the material world. This distinction
opened the floodgates for a series of philosophers to contest the idea that the
soul was so separate from the physical world, or for that matter even existed.
Hence, today it’s feasible philosophically to think one might be able to upload
one’s complete self to digital heaven, and sooner rather than later. From such
a perspective, as O’Gieblyn puts it, “All the eternal questions have become engineering
problems.”
The
strength of O’Gieblyn’s book rests on her ability to distill the arguments of a
number of great thinkers on questions surrounding this technology, and she
traces her ability to plumb these depths and emerge with something coherent to
her time as a Christian in Bible school, when she learned to debate for hours
ideas such as predestination and covenant theology. “People often decry the
thoughtlessness of religion, but when I think back on my time in Bible school,
it occurs to me that there exist few communities where thought is taken so
seriously.” Back then, reason served as O’Gieblyn’s surest path to the most
important things in life and beyond. Regardless of what she left behind at
Bible school, as exhibited by her analysis of the ideas of everyone from John
Calvin to Søren Kierkegaard to Christ, O’Gieblyn vigorously retains her faith
in the thinking mind. If ever there emerges a way for her back to
“enchantment,” the term she uses most often for the hopes and comforts religion
offers, it won’t be something that can so easily be disproven.
So, does
O’Gieblyn think digital technology might cure the world of the disenchantment
brought on by the Enlightenment? She’s more interested in emphasizing the weird
end-around science has done to the fundamentalist Christian world. For decades
if not centuries, Darwin and his empirical ilk have been the bad guys, the ones
with the inconvenient theories about where we came from (monkeys) and where
we’re going (not heaven). Through science’s invention of the digital realm and
the possibility of its use as a portal to the afterlife, you could argue Christ
was right all along. No wonder he spoke in vague parables. How else do you
explain the internet to someone in 30 CE?
While
O’Gieblyn’s book strikes me as compellingly broad and rigorous, it’s impossible
for me to read it and not ponder more imaginatively what digital heaven might
be like. I immediately come to problems. What if there are two Singularities,
say, one run by Apple and the other by Google? Will they compete for my soul?
How will they compete? Are there still computer viruses and, far more
troubling, the digital souls who make them? The meanest people I’ve encountered
over the past few decades, I’ve encountered online. Are they allowed in my
heaven, and are they still mean? The idea of the Singularity is so seductive
because it shares much of its conceptual framework with heaven and other
imagined utopias. If it’s possible to retain anything close to our
consciousness once we digitally ascend, then there’s no reason to believe this
place will be anything like a utopia. If it’s not possible, then the thing
uploaded isn’t really us.
Many
leaders in the tech world will tell you I’m thinking too hard about it.
According to Chris Anderson, editor of Wired, the results of human empiricism
now pale in comparison to what we can glean from artificial intelligence, and
we should just let the computers do the intellectual heavy lifting for us. To
him, according to O’Gieblyn, “our need to know why [is] misguided. Maybe we
should stop trying to understand the world and instead trust the wisdom of
algorithms.” From this vantage, our continued grappling with the important
questions of life becomes a kind of human weakness. The machine knows, and we
have to learn to live with that. Such a radical perspective puts the
algorithmists — my phrase, not O’Gieblyn’s — at complete loggerheads with,
believe it or not, science, which relies fundamentally on human hypotheses. As
such, Anderson’s comments serve as the perfect example of the fascinating way
that, with big data, science has pulled the rug out from under itself. It has
worked its practice so well some believe it’s no longer needed. O’Gieblyn
disagrees:
What we
are abdicating, in the end, is our duty to create meaning from our empirical
observations — to define for ourselves what constitutes justice, and morality,
and quality of life — a task we forfeit each time we forget that meaning is an
implicitly human category that cannot be reduced to quantification. To forget
this truth is to use our tools to thwart our own interests, to build machines
in our image that do nothing but dehumanize us.
God,
Human, Animal, Machine offers a captivating portrait of how digital technology
has fundamentally transformed both intellectual and religious thinking.
Unbelievably, some in the tech world now rely on religious tropes to prove
their points (“trust the wisdom of the algorithms”), and many religious
thinkers who previously doubted or even rejected science are coming to
appreciate the cosmological potential of its creations. In other words, these
two factions have melded — or switched places — from just a few decades ago,
which proves one of O’Gieblyn’s central points: science and religion were never
far from each other in the first place.
Since my
earliest underwhelming experiences with email, computers have come a long way,
and I’ve often tried to put into perspective how impactful digital technology
has been to our society. The only thing I can compare it to is money: its
ubiquity; its ability to fold almost anything into its ontological framework;
its tendency to reduce what’s most important in life to an uncomfortably simple
essence that, when it comes down to it, might be all we need. It seems even
that flattering comparison is selling digital technology short. It wants to
skip right past the biggest questions of human existence and instead transport
our limited selves straight to immortality. Do you want to, like Shakespeare
and Einstein and all the greats, try your best to understand, then die; or do
you want to live in ignorance forever? That’s a question I don’t want the
answer to.
All the
Eternal Questions: On Meghan O’Gieblyn’s “God, Human, Animal, Machine”. By Art
Edwards. Los Angeles Review of Books, August 20, 2021.
Meghan O'Gieblyn
– God in the machine | The Conference 2019
Sci-fi and religion alike have always been concerned with dilemmas we now
associate with transhumanism. What does resurrection or immortal life feel
like? Will you still be “you”? The idea of mind uploading has been a prevalent
feature in wildly imaginative series like Altered Carbon, but now we seem to be
on the verge of realisation of similar technologies with Elon Musk’s Neuralink
project. Essayist Meghan O’Gieblyn shares her story: she began as a devout
Christian, later to become fascinated by the seemingly secular narratives of
Kurzweil. From Dante to 20th-century transhumanists via the alchemists,
O’Gieblyn shows how “existential questions are always going to rely on
metaphors”. We need to be wary of the metaphors we use, because they shape our
consciousness and have had a history of leading to fundamentalism, religious or
secular. Who knows what the right answers are, but at least we need to look
beyond our Cartesian dualism of body and mind.
The Conference / Media Evolution. September 6, 2019.