It’s an
ominous sign of the times that human extinction is an increasingly common topic
of discussion. If you search for ‘human extinction’ in the Google Ngram Viewer,
which charts the frequency of words in Google’s vast corpora of digitised
books, you’ll see that it’s rarely mentioned before the 1930s. This changes
slightly after the Second World War – the beginning of the Atomic Age – and
then there’s a sudden spike in the 1980s, when Cold War tensions rose, followed
by a decline as the Cold War came to an end. Since the 2000s, though, the
term’s frequency has risen sharply, perhaps exponentially.
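For readers who want to reproduce this check, here is a minimal sketch in Python. It queries the Ngram Viewer's unofficial JSON endpoint, which simply mirrors the parameters in the viewer's own URL; the endpoint is undocumented, so the parameter names and the corpus identifier are assumptions and may change.

```python
# Minimal sketch: query the Google Ngram Viewer's unofficial JSON endpoint
# for the phrase 'human extinction'. The endpoint and its parameters mirror
# the viewer's URL and are undocumented assumptions, not an official API.
import requests

resp = requests.get(
    "https://books.google.com/ngrams/json",
    params={
        "content": "human extinction",
        "year_start": 1900,
        "year_end": 2019,
        "corpus": "en-2019",  # assumed corpus identifier
        "smoothing": 3,
    },
    timeout=30,
)
resp.raise_for_status()
series = resp.json()[0]["timeseries"]  # yearly relative frequencies

# Print one value per decade to see the post-war rise and the post-2000 spike.
for i, freq in enumerate(series):
    year = 1900 + i
    if year % 10 == 0:
        print(year, f"{freq:.3e}")
```

If the endpoint responds, the numbers should trace the curve described above: near-zero before the 1930s, a bump in the Atomic Age, a Cold War spike, and a steep climb after 2000.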
This is
no doubt due to growing awareness of the climate crisis, as well as the various
dangers posed by emerging technologies, from gene editing to artificial intelligence.
The obstacle course of existential hazards before us seems to be expanding, and
indeed many scholars have argued that, to quote Noam Chomsky, the overall risk
of extinction this century is ‘unprecedented in the history of Homo sapiens’.
Similarly, Stephen Hawking declared in 2016 that ‘we are at the most dangerous
moment in the development of humanity’. Meanwhile, the Doomsday Clock,
maintained by the venerable Bulletin of the Atomic Scientists, is currently
showing that it’s a mere 90 seconds before midnight, or doom – the closest it’s
ever been since this clock was created in 1947. The direness of our situation
is also widely acknowledged by the general public, with one survey reporting
that a whopping 24 per cent of people in the United States, the United Kingdom,
Canada and Australia ‘rated the risk of humans being wiped out’ within the next
100 years ‘at 50 per cent or greater’.
But so
what if we’re wiped out? What does it matter if Homo sapiens no longer exists?
The astonishing fact is that, despite acquiring the ability to annihilate
ourselves back in the 1950s, when thermonuclear weapons were invented, very few
philosophers in the West have paid much attention to the ethics of human
extinction. Would our species dying out be bad, or would it in some way be good
– or just neutral? Would it be morally wrong, or perhaps morally right, to
cause or allow our extinction to occur? What arguments could support a ‘yes’ or
‘no’ answer?
These
are just some of the questions that I place within a field called ‘existential
ethics’, which, as noted, has been largely ignored by the philosophical
community. This is a real shame for several reasons: first, even if you don’t
think our extinction is likely this century, reflecting on the questions above
can provide clarity to a wide range of philosophical issues. The fact is that
existential ethics touches upon some of the most fundamental questions about
value, meaning, ethics and existence, which makes meditating on why our species
might – or might not – be worth saving a very useful exercise. Second, if you
do agree with Chomsky, Hawking and the Bulletin of the Atomic Scientists that
our extinction is more probable now than in centuries past, shouldn’t we want
to know whether, and why, tumbling into the eternal grave would be right or
wrong, good or bad, better or worse? Here we are, inches away from the
precipice, tempting the same fate that swallowed up the dinosaurs and the dodo,
with hardly anyone thinking seriously about the ethical implications of this
possibility. Surely this is a situation that we should not only rectify, but do
so with a degree of moral urgency – or so I would argue.
This
points to another question: why exactly has existential ethics been so
neglected? Why has it languished in relative obscurity while so many other
fields – machine ethics, business ethics, animal ethics, bioethics, and so on –
have become thriving areas of research over the past several decades? One
explanation is that philosophers have, in general, failed to appreciate just
how rich and complicated the topic is. For example, the question ‘Would human
extinction be bad?’ looks simple and straightforward, yet it conceals a
treasure trove of fascinating complexity. Consider that ‘human’ and
‘extinction’ can be defined in many different, equally legitimate ways.
Most
people intuitively equate ‘human’ with our species, Homo sapiens, yet scholars
sometimes use the word to mean ‘Homo sapiens and whatever descendants we might
have’. On the latter definition, Homo sapiens could disappear completely and
forever without human extinction having occurred. Indeed, one way to ‘go
extinct’ would be to evolve into a new posthuman species, something that will
inevitably occur over the next million years if only because of Darwinian evolution.
Would this be bad? Or we might ‘go extinct’ by replacing ourselves with, say, a
population of intelligent machines. Some would see this as a dystopian outcome,
though others have recently argued that we should want it to happen. In his
book Mind Children (1988), the computer scientist Hans Moravec, for instance,
not only views this kind of extinction as desirable, but hopes to actively
bring it about. He thus holds that the extinction of Homo sapiens would
constitute a great tragedy – unless it were to coincide with the creation of
machinic replacements, in which case it would be very good.
On my
count, there are at least six distinct types of extinction that are relevant to
existential ethics, although for our purposes we can focus on what I call the
‘prototypical conception’, whereby Homo sapiens disappears entirely and forever
without leaving behind any successors. In other words, our extinction marks a
complete and final end to the human story, which is why I label it ‘final’
human extinction.
So, this
is one bundle of complexity hidden behind what appears to be a simple question:
would human extinction be bad? Whenever someone asks this, the first thing you
should do is reply: what do you mean by ‘human’? And which type of ‘extinction’
are you talking about? Once you’re clear on these issues, there’s a second
complication to navigate. Consider the following, which isn’t a trick question:
would it be bad for a child to die prematurely? What about an elderly person,
or someone in the middle years of life? Would it be bad if their deaths were
preceded by lots of physical suffering, anxiety or fear? My guess is that
you’ve answered ‘Yes, obviously!’ to these questions. If so, then you’ll have
to believe that human extinction would be very bad if caused by a worldwide
catastrophe. Indeed, since an extinction-causing catastrophe would literally
kill everyone on Earth, it would be the absolute worst disaster possible. There
is no disaster with a higher body count. This is so widely accepted – only the
most sadistic, ghoulish person would reject it – that I call it the ‘default
view’. Do we all agree, then, that extinction would be bad? Is there nothing
left to say?
Hardly!
But to see what’s left, it’s crucial to distinguish between two aspects of
extinction: first, there’s the process or event of Going Extinct and, second,
there’s the state or condition of Being Extinct. You could draw a rough analogy
with individual death: many friends tell me that they aren’t afraid of being
dead – after all, they won’t be around any longer to worry about the world, to
suffer, or even to experience FOMO (fear of missing out). They are, however,
terrified of the pain that dying might entail. They’re scared about what might
lead to death, but not about their subsequent nonexistence. Not everyone will
agree with this, of course – some find the thought of having perished quite
frightening, as do I. The point is that there are two features of extinction
that one might identify as sources of its badness: maybe extinction is bad because
of how Going Extinct unfolds, or perhaps there’s something about Being Extinct
that also contributes some badness. Almost everyone – except sadists and ghouls
– will agree that, if Going Extinct involves a catastrophe, then our extinction
would be very bad in this sense. Indeed, this is more or less trivially true,
given that the definition of ‘catastrophe’ is, to quote the Merriam-Webster
dictionary, ‘a momentous tragic event ranging from extreme misfortune to utter
overthrow or ruin’. Yet there is an extraordinary array of positions that go
well beyond this simple, widely accepted idea.
For
example, imagine a scenario in which every person around the world decides not
to have children. Over the next 100 years or so, the human population gradually
dwindles to zero, and our species stops existing – not because of a violent
catastrophe, but because of the freely chosen actions of everyone on the
planet. Would this be bad? Would there be something bad about our extinction
even if Going Extinct were completely voluntary, didn’t cut anyone’s life
short, and didn’t introduce any additional suffering?
Some
philosophers would vociferously respond: ‘Yes, because there’s something bad
about Being Extinct totally independent of how Going Extinct takes place.’ In
defending this position, such philosophers would point to some further loss
associated with Being Extinct, some opportunity cost arising from our species
no longer existing. What might these further losses, or opportunity costs, be?
In his book Reasons and Persons (1984), the moral theorist Derek Parfit
proposed two answers: on the one hand, Being Extinct would prevent human
happiness from existing in the future and, given that Earth will remain
habitable for another ~1 billion years, the total amount of future happiness
could be enormous. All of this happiness would be lost if humanity were to
cease existing, even if the cause were voluntary and harmless. That would be
bad. On the other hand, the future could witness extraordinary developments in
science, the arts and even morality and, since Being Extinct would preclude
such developments, it would – once again – be bad, independent of how Going
Extinct unfolds. This led Parfit to declare that the elimination of humanity,
however it happens, ‘would be by far the greatest of all conceivable crimes’,
an idea first articulated, using almost the exact same words, by the
utilitarian Henry Sidgwick in The Methods of Ethics (1874).
I
classify this as the ‘further-loss view’ for the obvious reason that it hinges on
the idea of further losses arising from our nonexistence. Many contemporary
philosophers take Parfit’s side, including advocates of an ethical framework
called ‘longtermism’ recently promoted by the Oxford philosopher William
MacAskill in his book What We Owe the Future (2022). In fact, not only do
longtermists see Being Extinct as one source of extinction’s badness, but most
would argue that Being Extinct is the worst aspect of our extinction by a long
shot. This is to say, even if Going Extinct were to involve horrendous amounts
of suffering, pain, anguish and death, these harms would be utterly dwarfed by
the further loss of all future happiness and progress. Here’s how the
philosophers Nick Beckstead, Peter Singer and Matt Wage (the first of whom laid
the foundations for longtermism) express the idea in their article ‘Preventing
Human Extinction’ (2013):
“One
very bad thing about human extinction would be that billions of people would
likely die painful deaths. But in our view, this is, by far, not the worst
thing about human extinction. The worst thing about human extinction is that
there would be no future generations.”
Other
philosophers – myself included – reject this further-loss view. We argue that
there can’t be anything bad about Being Extinct because there wouldn’t be
anyone around to experience this badness. And if there isn’t anyone around to
suffer the loss of future happiness and progress, then Being Extinct doesn’t
actually harm anyone. To quote Jonathan Schell’s magisterial book The Fate of
the Earth (1982): ‘although extinction might appear to be the largest
misfortune that mankind could ever suffer, it doesn’t seem to happen to
anybody.’ The reason is that ‘we, the living, will not suffer it; we will be
dead. Nor will the unborn shed any tears over their lost chance to exist; to do
so they would have to exist already.’ Similarly, the philosopher Elizabeth
Finneron-Burns asks: ‘If there is no form of intelligent life in the future,
who would there be to lament its loss?’
I call
this the ‘equivalence view’, because it claims that the badness or wrongness of
extinction comes down entirely to the badness or wrongness of Going Extinct.
This means that, if there isn’t anything bad or wrong about Going Extinct, then
there isn’t anything bad or wrong about extinction – full stop. Applying this
to the scenario mentioned above, since there’s nothing bad or wrong about
people not having children, there wouldn’t be anything bad or wrong about our
extinction if caused by everyone choosing to be childless. The answer one gives
to ‘Would our extinction be bad?’ is equivalent to the answer one gives to
‘Would this or that way of Going Extinct be bad?’
There
are some important similarities and differences between further-loss and
equivalence views. Both accept the default view, of course, although they
diverge in a crucial way about whether the default view says everything there
is to say about extinction. Philosophers like myself believe that the default
view is the whole story about why extinction would be bad or wrong. If Going
Extinct involves a catastrophe, then obviously extinction would be bad, since
an extinction-causing catastrophe ‘would entail maximum mortality, likely
preceded by unprecedented human suffering,’ to quote the philosopher Karin Kuhlemann,
who (along with Finneron-Burns and myself) accepts the equivalence view. But if
Going Extinct doesn’t cause lots of suffering and death, if it happens through
some peaceful, voluntary means, then extinction wouldn’t be bad or wrong. Those
who accept a further-loss view will strenuously disagree: they claim that, if
Going Extinct entails suffering and death, this would be one source of
extinction’s badness. But it wouldn’t be the only source, nor even the biggest
source: the subsequent state of Being Extinct would also be very bad, because
of all the attendant further losses.
There’s
another thought experiment that helps to foreground the major disagreement
between these positions. Imagine two worlds, A and B. Let’s say that world A
contains 11 billion people and world B contains 10 billion. Now, a terrible
disaster rocks both worlds, killing exactly 10 billion in each. There are two
questions we can ask about what happens here. The first is pretty
straightforward: how many events occur in world A versus world B? Most will
agree that, on the highest level of abstraction, one event happens in world A –
the loss of 10 billion people in a sudden disaster – while two events happen in
world B – the loss of 10 billion people plus the extinction of humanity, since
the total population was 10 billion. That’s the first question. The second is
whether this extra event in world B – the extinction of humanity – is morally
relevant. Does it matter? Does it somehow make the disaster of world B worse
than the disaster of world A? If a homicidal maniac named Joe causes both
disasters, does he do something extra-wrong in world B?
If you
accept a further-loss view, then you’re going to say: ‘Absolutely, the
catastrophe in world B is much worse, and hence Joe did something extra-wrong
in B compared with A.’ But if you accept the equivalence view, you’ll say: ‘No,
the badness or wrongness of these catastrophes is identical. The fact that the
catastrophe causes our extinction in world B is irrelevant, because Being
Extinct is not itself a source of badness.’ The intriguing implication of this
answer is that, according to equivalence views, human extinction does not
present a unique moral problem. There is nothing special about extinction
itself: it doesn’t introduce a distinct moral conundrum; the badness or
wrongness of extinction is wholly reducible to how it happens, beyond which
there is nothing left to say. Since Being Extinct is not bad, the question of
whether our extinction is bad depends entirely on the details of Going Extinct.
Advocates of the further-loss view will of course say, in response, that this
is deeply misguided: extinction does pose a unique moral problem. Why?
Precisely because it would entail some further losses – things of value,
perhaps immense value, that wouldn’t be lost forever if our species were to
continue existing, such as future happiness and progress (though there are many
other further losses that one could point to, which we won’t discuss here). For
them, assessing the badness or wrongness of extinction requires looking at both
the details of Going Extinct and the various losses that Being Extinct would
involve.
We’re
starting to see the lay of the land at this point, the different positions that
one could take within existential ethics. But there’s another option that we
haven’t yet touched upon: whereas further-loss views say that Being Extinct is
bad, and equivalence views assert that Being Extinct is not bad, you might also
think, for some reason or other, that Being Extinct would be less bad, or maybe
even positively good. This points to a third family of views that I call
‘pro-extinctionist views’, which a surprising number of philosophers have
accepted. Right away, it’s important to be clear about this position: virtually
all pro-extinctionists accept the default view. They would agree that a
catastrophic end to humanity would be horrible and tragic, and that we should
try to avoid this. No sane person would want billions to die. However,
pro-extinctionists would add that the outcome of Being Extinct would
nonetheless be better than Being Extant – ie, continuing to exist. Why? There
are many possible answers. One is that, by no longer existing, we would prevent
future human suffering from existing too, which would be less bad, or more good,
than our current situation. There are several ways of thinking about this.
The
first concerns the possibility that the future contains huge amounts of human
suffering. According to the cosmologist Carl Sagan – who defended a
further-loss view in his writings – if humanity survives for another 10 million
years on Earth, there could come to exist some 500 trillion people. This number
is much larger if we colonise space: the philosopher Nick Bostrom in 2003 put the figure at 10^23 (that’s a 1 followed by 23 zeros) biological humans per century within the Virgo Supercluster, a large agglomeration of galaxies that includes our own Milky Way, although the number rises to 10^38 if we include digital people living in virtual-reality computer simulations. Subsequent number-crunching in Bostrom’s book Superintelligence (2014) found that some 10^58 digital people could exist within the Universe as a whole. What motivated
these calculations from Sagan and others was the further-loss view that the
nonexistence of all these future people, and hence all the value they could
have created, is a major opportunity cost of Being Extinct. However, one could
flip this around and argue that, even if these future people have lives that
are overall worthwhile, the suffering they will unavoidably experience during
the course of these lives could still add up to be, in absolute terms, very
large. But if we go extinct, none of this suffering will exist.
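To get a rough sense of the scales involved, here is a quick back-of-the-envelope comparison, in Python, of the figures just cited with today's population of roughly 8 billion. The numbers are simply those quoted above, not independent estimates.

```python
# Back-of-the-envelope comparison of the future-population figures cited in
# the text with today's population (~8 billion). These are the quoted
# estimates, not independent calculations.
current_population = 8e9

estimates = {
    "Sagan: 10 million more years on Earth": 500e12,
    "Bostrom (2003): Virgo Supercluster, biological, per century": 1e23,
    "Bostrom (2003): Virgo Supercluster, incl. digital, per century": 1e38,
    "Bostrom (2014): whole Universe, digital": 1e58,
}

for label, n in estimates.items():
    ratio = n / current_population
    print(f"{label}: {n:.0e} people, i.e. {ratio:.1e} times today's population")
```

Even the smallest of these figures dwarfs every person now alive; that is exactly why both further-loss theorists and their pro-extinctionist critics take the numbers so seriously, the former for the happiness at stake, the latter for the suffering.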
The
second consideration is based on the idea, which I find very plausible, that
there are some kinds of suffering that no amount of happiness could ever
counterbalance. Imagine, for example, that you need to undergo a painful
surgical procedure that will leave you bedridden for a month. You go through
with the surgery and, after recovering, you live another 50 very happy years.
The surgery and recovery process might have been dreadful – not something you’d
want to relive – but you might still say that it was ‘worth it’. In other
words, the decades of happiness you experienced after surgery counterbalanced
the pain you had to endure. But now consider some of the worst things that
happen in the world: abuse of children; genocides, ethnic cleansings,
massacres; people tortured in prisons; and so on. Ask yourself if there’s any
amount of happiness that can make these things ‘worth it’. Is there any heap of
goodness large enough to counterbalance such atrocities? If you think of
particular historical horrors, you might find the question outright offensive:
‘No, of course that genocide can’t somehow be counterbalanced by lots of
happiness experienced by other people elsewhere!’ So, the argument goes, since
continuing to exist carries the risk of similar atrocities in the future, it
would be better if we didn’t exist at all. The good things in life just aren’t
worth gambling with the bad things.
Considerations
like these are why pro-extinctionists will argue that the scenario of world B
above is actually preferable to the scenario of world A. Here’s what they might
say about this: ‘Look, there’s no question that the disaster in world B is
terrible. It’s absolutely wretched that 10 billion people died. I say that
unequivocally because I, like everyone else, accept the default view! However,
for precisely the same reason that I think this disaster is very bad, I also
maintain that the second event in world B – the extinction of humanity – makes
this scenario better than the scenario of A, as it would mean no more future
suffering. At least world B’s disaster has an upside – unlike in world A, where
10 billion die and future people will suffer.’
The
major practical problem for pro-extinctionism is one of getting from here to
there: if you think that Being Extinct is better than Being Extant, how should
we actually bring this about? There are three main options on the menu:
antinatalism, whereby enough people around the world stop having children for
humanity to die out; pro-mortalism, whereby enough people kill themselves for
this to happen; and omnicide, whereby someone, or some group, takes it upon
themselves to kill everyone on Earth. (The word ‘omnicide’ was defined in 1959
by the theatre critic Kenneth Tynan as ‘the murder of everyone’, although,
curiously, a chemical company had earlier trademarked it as the name for one of
its insecticides.)
A very
small number of pro-extinctionists have advocated for omnicide – mostly fringe
environmental extremists who see humanity as a ‘cancer’ on the biosphere that
must be excised. This was defended in an article from the Earth First! Journal
titled ‘Eco-Kamikazes Wanted’ (1989) and later endorsed by a group called the
Gaia Liberation Front. But omnicide also appears to be an implication of
‘negative utilitarianism’, an ethical theory that, in its strongest form,
asserts that the only thing that matters is the reduction of suffering. As the
philosopher R N Smart noted in 1958, this theory implies that a ‘benevolent world-exploder’ who destroys humanity to eliminate all human suffering would be acting rightly – a conclusion Smart described as patently ‘wicked’. However, the Oxford
philosopher Roger Crisp (who isn’t a negative utilitarian) recently contended
that if you were to discover a huge asteroid barrelling toward Earth, and if
you could do something to redirect it, you should seriously consider letting it
slam into our planet, assuming it would kill everyone immediately. This would,
in effect, be omnicide by inaction rather than action, although Crisp never
says that you should definitely do this, only that you should think very hard
about it, given that Being Extinct ‘might’ be ‘good’, a tentative conclusion
based on the possibility that some suffering cannot be counterbalanced by any
amount of happiness.
Other
pro-extinctionists have advocated for both antinatalism and pro-mortalism. An
example is the 19th-century German pessimist Philipp Mainländer, who argued
that we should not just refrain from procreating, but never have sex in the
first place – in other words, we should all remain virgins. He also endorsed
suicide, and indeed shortly after receiving the first copies of Volume I of his
magnum opus The Philosophy of Redemption (1876), he placed them on the floor,
stood on top of them, stepped off, and hanged himself. He was only 34 years
old, and wasn’t the only person in his family to commit suicide: his older
brother and sister did, too.
Most
pro-extinctionists, though, have held that the only morally acceptable route to
extinction is antinatalism – refusing to have children. The best-known advocate
of this position today is David Benatar, who argues in his book Better Never to
Have Been (2006) that our collective nonexistence would be positively good,
since it would mean the absence of suffering, and the absence of suffering is
good. On the flip side, he notes that even though Being Extinct would entail
the loss of future happiness, this wouldn’t be bad because there’d be no one
around to suffer such a loss. Hence, Being Extinct corresponds to a good (no
suffering) and not-bad (no happiness) situation, which contrasts with our
current state, Being Extant, which involves the presence of both happiness
(good) and suffering (bad). He concludes that since a good/not-bad situation is
obviously better than a good/bad situation, we should strive to bring about our
extinction – by remaining childless.
My own
view is a complicated mix of these considerations, which push and pull in
diametrically opposite directions. The first thing I would emphasise is that
our minds are totally ill-equipped to grasp just how awful Going Extinct would
be if caused by a catastrophe. In a fascinating paper from 1962, the German
philosopher Günther Anders declared that, with the invention of nuclear
weapons, we became ‘inverted Utopians’. Whereas ‘ordinary Utopians are unable
to actually produce what they are able to visualise, we are unable to visualise
what we are actually producing,’ ie, the possibility of self-annihilation. That
is to say, there’s a yawning chasm – he called it the ‘Promethean gap’ –
between our capacity to destroy ourselves and our ability to feel, comprehend
and imagine the true enormity of catastrophic extinction. This dovetails with a
cognitive-emotional phenomenon called ‘psychic numbing’, which the psychologist
Paul Slovic describes as the
“inability to appreciate losses of life
as they become larger. The importance of saving one life is great when it is
the first, or only, life saved, but diminishes marginally as the total number
of lives saved increases. Thus, psychologically, the importance of saving one
life is diminished against the background of a larger threat – we will likely
not ‘feel’ much different, nor value the difference, between saving 87 lives
and saving 88, if these prospects are presented to us separately.”
Now
imagine that the number isn’t 87 or 88 deaths but 8 billion – the total human
population on Earth today. As a Washington Post article from 1947 quotes Joseph
Stalin as saying, ‘if only one man dies of hunger, that is a tragedy. If
millions die, that’s only statistics,’ which is often truncated to: ‘A single
death is a tragedy, a million deaths are a statistic.’ The point is that an
extinction-causing catastrophe would be horrendous to a degree that our puny
minds cannot even begin to grasp, intellectually or emotionally, although
simply understanding this fact can help us better calibrate assessments of its
badness, by compensating for this deficiency. In my opinion, the terribleness
of such catastrophes – events with the highest possible body count – is more
than enough reason to prioritise, as a species, projects aimed at reducing such
risk. The default view about Going Extinct is even more profound, and its
implications even more compelling, than one might initially believe.
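To make the numbing effect concrete, here is a toy sketch that assumes felt concern grows logarithmically with the number of deaths. The logarithmic form is an illustrative assumption of mine, one common way of modelling diminishing sensitivity, not Slovic's exact psychophysical function.

```python
# Toy illustration of psychic numbing, assuming felt concern grows
# logarithmically with the number of deaths (an illustrative assumption,
# not Slovic's exact model).
import math

def felt_value(deaths):
    return math.log1p(deaths)  # concave: each additional death adds less

print(f"{felt_value(1) - felt_value(0):.3f}")    # ~0.693: the first death looms large
print(f"{felt_value(88) - felt_value(87):.3f}")  # ~0.011: the 88th barely registers
print(f"{felt_value(8e9) / felt_value(1):.1f}")  # ~32.9: 8 billion deaths 'feels' only ~33x worse than one
```

On a linear scale, the gap between one death and 8 billion deaths is a factor of eight billion; on the numbed, concave scale it is a factor of about thirty. That is something like the Promethean gap in miniature.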
But I
also think that Being Extinct would be regrettable for many reasons. As Mary
Shelley wrote in her novel The Last Man (1826) – one of the very first books to
address the core questions of existential ethics – without humanity there would
be no more poetry, philosophy, painting, music, theatre, laughter, knowledge
and science, and that would be very sad. I feel the pull of this sentiment, and
find myself especially moved by the fact that our extinction would bring the
transgenerational enterprise of scientific understanding to a screeching halt.
It would be a monumental shame if humanity had popped into existence, looked around
at the Universe in puzzlement and awe, pondered the Leibnizian question of why
there is something rather than nothing, and then vanished into oblivion
before knowing the answer. Maybe the answer is unknowable, but even discovering
this fact could provide a degree of intellectual satisfaction and psychological
closure, an ‘ah-ha’ moment that relieves and vindicates one’s prior
frustration.
This is
a version of what’s called the ‘argument from unfinished business’ and, while
many people aren’t persuaded by it, I am. However, I don’t see it as a
specifically moral position, but would instead classify it as a non-moral
further-loss view. This matters because we typically see moral claims as having
much more force than non-moral ones. There’s a big difference between saying
‘You shouldn’t eat chocolate ice-cream because vanilla is better’ and ‘You
shouldn’t drown kittens in your bathtub for fun.’ The first expresses a mere
aesthetic preference, and hence carries much less weight than the second, which
expresses a moral proposition. So, the unfinished business argument that I
accept isn’t very weighty. Other considerations – especially moral
considerations – could easily override this personal preference of mine.
This
leads directly to the question of whether Being Extinct might be, in some
morally relevant way, better than Being Extant. Here I find myself sympathetic
with the sentiments behind pro-extinctionism. Although Being Extinct would mean
no more happy experiences in the future, no more poetry, painting, music and
laughter, it would also guarantee the absence of the worst atrocities
imaginable – child abuse, genocides, and the like. There would be no more love,
but there would also be no more heartbreak, and I suspect many would agree,
upon reflection, that heartbreak can hurt worse than love feels good. It’s also
entirely possible that scientific and technological advancements make
unspeakable new forms of suffering feasible. Imagine a world in which radical
life-extension technologies enable totalitarian states to keep people alive in
torture chambers for indefinitely long periods of time – perhaps hundreds or
thousands of years. Is our continued existence worth risking such agony and
anguish? Those who answer ‘yes’ put themselves in the awkward position of
saying that torture, child abuse, genocide and so on are somehow ‘worth it’ for
the good things that might come to exist alongside them.
Considerations
arising from our impact on the natural world, and the way we treat our fellow
creatures on Earth, also support the pro-extinctionist view. Who can deny that
humanity has been a force of great evil by obliterating ecosystems, razing
forests, poisoning wildlife, polluting the oceans, hunting species to
extinction and tormenting domesticated animals in factory farms? Without
humanity, there would be no more humanity-caused evils, and surely that would
be very good.
So where
does this leave us? I’m inclined to agree with the philosopher Todd May, who
argued in The New York Times in 2018 that human extinction would be a mixed
bag. I reject the further-loss views of Parfit and the longtermists, and accept
the equivalence view about the badness of extinction. But I’m also sympathetic
with aspects of pro-extinctionism: all things considered, it’s hard to avoid
the conclusion that Being Extinct might, on balance, be positive – even though
I’d be saddened if the business of revealing the arcana of the cosmos were left
forever unfinished. (The sadness here, though, is not really of the moral kind:
it’s the same sort of sadness I’d experience if my favourite sports team were
to lose the championship.)
That
said, the horrors of Going Extinct in a global catastrophe are so enormous that
we, as psychically numb inverted Utopians, should do everything in our power to
reduce the likelihood of this happening. On my view, the only morally
permissible route from Being Extant to Being Extinct would be voluntary
antinatalism, yet as many antinatalists themselves have noted – such as Benatar
– the probability of everyone around the planet choosing not to have children
is approximately zero. The result is a rather unfortunate predicament in which
those who agree with me are left anticipatorily mourning all the suffering and
sorrow, terrors and torments that await humanity on the road ahead, while
simultaneously working to ensure our continued survival, since by far the most
probable ways of dying out would involve horrific disasters with the highest
body count possible. The upshot of this position is that, since there’s nothing
uniquely bad about extinction, there’s no justification for spending
disproportionately large amounts of money on mitigating extinction-causing
catastrophes compared with what have been called ‘lesser’ catastrophes, as the
longtermists would have us do, given their further-loss views. However, the
bigger the catastrophe, the worse the harm, and for this reason alone
extinction-causing catastrophes should be of particular concern.
My aim
here isn’t to settle these issues, and indeed our discussion has hardly
scratched the surface of existential ethics. Rather, my more modest hope is to
provide a bit of philosophical clarity to an immensely rich and surprisingly
complicated subject. In a very important sense, virtually everyone agrees that
human extinction would be very bad. But beyond this default view, there’s a
great deal of disagreement. Perhaps there are other insights and perspectives
that have not yet been discovered. And maybe, if humanity survives long enough,
future philosophers will discover them.
The
ethics of human extinction. By Émile P. Torres. Aeon, February 20, 2023.
Every
year, the Bulletin of the Atomic Scientists convenes a meeting to decide whether
the minute hand of its famous Doomsday Clock will move forward, backward or
stay put. Invented in 1947, the clock is a metaphor aimed at conveying our
collective proximity to global destruction, which is represented by midnight:
doom. Initially set to seven minutes before midnight, the minute hand has
wiggled back and forth over the decades: In 1953, the year after the U.S. detonated the first thermonuclear weapon (the Soviet Union followed suit months later), it inched forward to a
mere two minutes before doom, where it stayed until 1960, when it returned to
the original setting. In 1991, following the dissolution of the Soviet Union,
which marked an official end to the Cold War, the minute hand was pushed back
to a reassuring 17 minutes.
Since
then, however, the time has pretty steadily lurched forward, and in 2018 the
Bulletin's Science and Security Board once again placed the time at two minutes
before midnight, due to growing nuclear tensions and the ignominious failure of
world governments to properly address the worsening climate crisis. Two years
later, the minute hand moved forward to just 100 seconds before midnight, and
this January — the most recent Doomsday Clock announcement — it was, for the
first time ever, inched forward to 90 seconds.
But what
exactly does this mean? The Doomsday Clock has plenty of critics, and my sense
is that even those who like the metaphor agree that it's not perfect. The
minute hands of most clocks don't move backward, and the Doomsday Clock, once
its minute hand is set, doesn't start ticking forward. Many people on social
media dismiss it as "scare-mongering," a way of frightening the
public — which is not entirely wrong, as the Bulletin's purpose from the start
was, to quote one of its founders, Eugene Rabinowitch, "to preserve
civilization by scaring men into rationality." There is, in fact, a long
tradition since the Atomic Age began in 1945 of employing what historian Paul
Boyer describes as "the strategy of manipulating fear to build support for
political resolution of the atomic menace."
One sees
this very same strategy being employed today by environmentalists like Greta
Thunberg, who declared before an audience at Davos in 2019: "I don't want
your hope. I don't want you to be hopeful. I want you to panic. I want you to
feel the fear I feel every day. And then I want you to act." Fear can be
paralyzing, but it can also be a great motivator. The key is to figure out how
to inspire what the German philosopher Günther Anders described as "a
special kind" of fear, one that "drive[s] us into the streets instead
of under cover." It's hard to know whether the Doomsday Clock does this.
It certainly hasn't inspired large protests or demonstrations, although it is
taken seriously by some political leaders, and in fact the Kremlin itself reacted
with alarm to the announcement, blaming — of course — the U.S. and NATO.
Aside
from potential problems with the metaphor and questions about the efficacy of
fear, the Doomsday Clock does convey something important: Humanity is in a
genuinely unprecedented situation these days, in the mid-morning of the 21st
century. The fact is that before the invention of thermonuclear weapons in the
early 1950s, there was no plausible way for humanity — much less a handful of
individuals with twitching fingers next to some "launch" button — to
completely destroy itself. Perhaps everyone around the world could have decided
to stop having children, a possibility considered in 1874 by the English
philosopher Henry Sidgwick (who said it would be "the greatest of conceivable
crimes"). Some people around that time argued that we should do exactly
that, arguing that life is so full of suffering that it would be better if our
species were extinguished. Consider German philosopher Philipp Mainländer, who
after receiving the first copies of his magnum opus on pessimism in 1876,
stacked them up on the floor, climbed to the top of the pile and hanged
himself.
Yet even
Mainländer acknowledged that persuading everyone to stop their baby-making
activities (he advocated for celibacy) would be difficult. Human extinction by
choosing not to procreate just isn't realistic. But thermonuclear weapons
really could create an anthropogenic extinction scenario. How? The main danger,
you may be surprised to learn, isn't from the initial blasts. Those would
certainly be catastrophic, and indeed the most powerful thermonuclear weapon
ever detonated — the Soviet-made Tsar Bomba, tested just once in 1961 — had an
explosive yield more than 1,500 times that of the bomb dropped on Hiroshima,
which destroyed much of the city. The Tsar Bomba was far larger than most weapons in today's nuclear arsenals, but even those smaller weapons can wreak horrific havoc, as you can see for yourself by testing out different weapons on different cities on Nuke Map.
But in
fact, the greater threat comes from the possibility that nuclear explosions
would ignite massive conflagrations called firestorms, so hot that they produce
their own gale-force winds. The explosion over Hiroshima, in fact, triggered a
hellish firestorm that, along with the initial blast, killed roughly 30 percent
of the city's population. In a large-scale nuclear war, the intense heat of such firestorms would catapult large quantities of the black soot they produce straight through the troposphere — the
layer of the atmosphere closest to Earth's surface — and into the stratosphere,
the next layer up, which you may have traversed if you've ever flown in a
commercial jetliner over the poles.
This is
an important point, because there are two main ways that soot gets removed from the atmosphere. The first is weather: You could think of rain as a kind of atmospheric sandpaper, scrubbing aerosols out of the air. The second is gravity: particulate matter denser than the surrounding air will eventually
fall back to Earth. But first of all, there's no weather in the stratosphere,
so this mechanism of removing soot won't work. And second, gravity takes a
pretty long time to remove stuff like soot, which means once the soot is in the
stratosphere, it's going to stay there for a while — potentially years.
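For a rough feel of why gravitational settling is so slow, here is an order-of-magnitude sketch using Stokes' law for small spheres. The particle density, the radii and the 10 km fall distance are assumptions chosen for illustration, and the calculation ignores slip-flow corrections, coagulation and atmospheric circulation, so it is not a model of real soot removal; the point is only the timescale.

```python
# Order-of-magnitude sketch: Stokes settling velocity for small spheres,
# v = 2 * rho * g * r^2 / (9 * mu). Density, radii and fall distance are
# illustrative assumptions; slip corrections, coagulation and circulation
# are ignored. The point is only that gravity alone works on timescales of
# years or longer.
g = 9.81              # m/s^2, gravitational acceleration
mu = 1.8e-5           # Pa*s, rough dynamic viscosity of air
rho = 1500.0          # kg/m^3, assumed density of a soot aggregate
fall_distance = 1e4   # m, roughly lower stratosphere down to the troposphere
seconds_per_year = 3600 * 24 * 365

for radius_um in (0.1, 0.3, 0.5):
    r = radius_um * 1e-6
    v = 2 * rho * g * r**2 / (9 * mu)              # settling velocity in m/s
    years = fall_distance / v / seconds_per_year
    print(f"r = {radius_um} um: v = {v:.1e} m/s, roughly {years:.0f} years to settle 10 km")
```

Compare that with the troposphere, where rain typically scrubs aerosols out within days to weeks, and the asymmetry becomes clear.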
That
soot would block incoming light from our life-giving sun. Without light,
photosynthesis can't happen, and without photosynthesis, plants die. Without
plants, global agriculture and food chains would collapse, and the surface of
our planet would plunge into subfreezing temperatures. So even someone
thousands of miles away from any possible ground zero — that is, where the
thermonuclear blasts occur — would sooner or later starve to death under
pitch-black skies at noon. According to a 2020 study, a nuclear war between India
and Pakistan could kill more than 2 billion people, while a war between the
U.S. and Russia could result in 5 billion deaths, more than 60% of the entire
human population. An all-out nuclear war involving every nuclear power in the
world today? Carl Sagan himself wrote in 1983 that "there seems to be a
real possibility of the extinction of the human species," although not
everyone would agree with this assessment.
So when
people complain that the Doomsday Clock announcement is nothing but
scare-mongering, spreading alarm for no good reason, they're just plain wrong.
The nuclear threat is real, which is why all the nuclear saber-rattling that
Vladimir Putin engaged in leading up to the Ukraine war literally kept me up at
night. In fact, what caused those sleepless nights wasn't just the thought of
Putin detonating a "tactical" nuclear device in Ukraine, although
that could easily create a situation that quickly spirals out of control into a nuclear nightmare affecting billions.
I was
also nervous about the possibility of a miscalculation, error, or accident that
could trigger Armageddon. The history of nuclear near-misses is frankly
shocking. My advice is not to fall down this particular rabbit-hole before bed.
Did you know, for example, that the remains of an undetonated nuclear bomb are buried somewhere in the farmland around Goldsboro, North Carolina? It fell from a B-52 bomber that broke apart in midair in 1961, and part of it was never recovered. Consider the case of Vasili Arkhipov, who
was a naval officer on a Soviet submarine during the Cuban missile crisis of
1962. After losing contact with Moscow, the submarine's captain believed that
war might have already broken out, and wanted to launch a nuclear torpedo at
the U.S. But all three senior officers had to agree to such a launch, and
Arkhipov stubbornly resisted. He may well have single-handedly saved the world,
quite a legacy to leave behind.
Considering
that history, I've wondered how many near-misses there may have been during the
Ukraine war, especially since Putin put his nuclear forces on "high
alert," that we won't learn about for many years (if we ever do). How
close have we come to the brink without realizing it? It remains entirely
possible that a mistake, tomorrow or next week or next month, could start World
War III.
This is
the world we now live in, and it's why the warnings behind the Doomsday Clock
are nothing to sneeze at. And we haven't even gotten to climate change, the
other major threat that the Bulletin considers when setting the clock's time.
Although it's fair to say that climate change is unlikely to cause our complete extinction, the harm it could cause would be unprecedented in human history. The world your grandchildren will live in will be profoundly different from, and in many ways worse than, the one we now occupy. If civilization is an experiment, it's failing. The only other organisms to have altered the biosphere as much as we have, and as we will in the coming centuries, were single-celled microbes called cyanobacteria, which some 2.5 billion years ago flooded
the atmosphere with oxygen for the first time. Since oxygen is toxic to
anaerobic organisms, that may have initiated a mass extinction event, although
it's hard to know for sure because there aren't many fossil remains from that
period of Earth's history.
The
point is that climate change also poses real, urgent and profound dangers. It
will negatively impact the livability of our planet for the next 10,000 years,
a significantly longer period than what we call "civilization" has
existed. So this is no joke. The Doomsday Clock, for all its flaws, should be
taken seriously by every citizen of our planet. Right now, it stands at 90
seconds to midnight, and given that climate change is worsening and there's no
end in sight to the war in Ukraine, we can expect the clock's minute hand to
keep moving forward.
The
Doomsday Clock is an imperfect metaphor — but the existential danger is all too
real. By Émile P. Torres. Salon, February 5, 2023.
My first
article for The Dig claimed that the ideology of longtermism is in important
respects another iteration of the “eternal return of eugenics.” Not only can
one trace its genealogy back to the 20th-century eugenics tradition, but the
longtermist movement has significant connections with people who’ve defended,
or expressed, views that are racist, xenophobic, ableist and classist. Indeed,
my point of departure was an email from 1996 that I’d come across in which Nick
Bostrom, the Father of Longtermism, wrote that “Blacks are more stupid than
whites,” and then used the N-word. He subsequently “apologized” for this email
without retracting his claim that certain racial groups might be less
“intelligent” than others for genetic reasons.
I aimed
to show that when one wanders around the neighborhoods that longtermism calls
home, it’s difficult not to notice the attitudes mentioned above almost
everywhere. At the very least, leading longtermists have demonstrated a
shocking degree of comfort with, and tolerance of, such attitudes. For example,
despite their deeply problematic views about race, Sam Harris and Scott
Alexander are revered by many longtermists, and longtermists like Anders
Sandberg have approvingly cited the work of scientific racists like Charles
Murray.
This is
alarming because longtermism is an increasingly influential ideology. It’s
pervasive within the tech industry, with billionaires like Elon Musk calling it
“a close match for my philosophy.” Perhaps even more concerning is the fact
that longtermism is gaining a foothold in world politics. The U.N. Dispatch
reported in August 2022 that “the foreign policy community in general and the
United Nations in particular are beginning to embrace longtermism.”
My
article covered a lot of ground, and some readers suggested that I write a
brief explainer of some of the story’s main concepts and characters. What
follows is a primer along these lines that I hope provides useful background
for those interested in the big picture.
Eugenics
The word
“eugenics” was coined by Charles Darwin’s cousin, Francis Galton, in 1883,
although he had developed the idea earlier, in his 1869 book “Hereditary Genius.”
The word literally means “good birth.” There are two general types of eugenics:
positive and negative. The first aims to encourage people with “desirable
traits,” such as high “intelligence,” to have more children. This was the idea
behind “better baby” and “fitter family” contests in the United States during
the early 20th century. The second strives to prevent what eugenicists
variously called “idiots,” “morons,” “imbeciles,” “defectives” and the
“feeble-minded” (often identified using IQ tests) from reproducing. This is
what led to the forced-sterilization programs of states like Indiana and
California, the latter of which sterilized some 20,000 people between 1909 and
the 1970s (yes, that recently).
[Image: A book on eugenics published by T.W. Shannon in 1919. Photo by Andrew Kuchling, CC BY-SA 2.0.]
Although
the modern eugenics movement was born in the late 19th century, its influence
peaked in the 1920s. During the 1930s, it was taken up by the Nazis, who
actually studied California’s eugenics program in hopes of replicating it back
in Germany. Despite the appalling crimes committed by the Nazis during World War
II, eugenics still had plenty of advocates in the decades that followed. Julian
Huxley, for example, presided over the British Eugenics Society from 1959 to
1962, while also promoting the transhumanist idea that by controlling “the
mechanisms of heredity” and using “a method of genetic change” (i.e.,
eugenics), “the human species can, if it wishes, transcend itself — not just
sporadically, an individual here in one way, an individual there in another
way, but in its entirety, as humanity.”
Today, a
number of philosophers have defended what’s called “liberal eugenics” —
transhumanism being an example — which many contrast with the “old” or
“authoritarian” eugenics of the past century, although we will see in my second
article for The Dig that the “old” and “new” eugenics are really not so
different in practice.
Effective
Altruism (EA)
To
understand the idea of “effective altruism,” it helps to break the term down.
First, the “altruism” part can be traced back, most notably, to a 1972 article
by Peter Singer, one of the leading effective altruists, titled “Famine,
Affluence, and Morality.” It contains the famous “drowning child” scenario, in
which you see a child struggling to stay afloat in a nearby pond. Most of us
wouldn’t hesitate to, say, ruin a nice new pair of shoes to save the child. But
what’s the difference, really, between a child drowning 30 feet away from us
and a child starving to death on the other side of the world? What does
physical proximity have to do with morality? Maybe what those of us in affluent
countries should do is forgo purchasing those new shoes in the first place
(assuming that our old shoes are still wearable) and instead give that money to
help disadvantaged people suffering from poverty or a devastating famine. This
is the altruism part: selflessly promoting the well-being of others for their
own sakes.
But
there’s a follow-up question: If you’re convinced by Singer’s argument that we
should be altruistic and give some fraction of our disposable income to people
in need, who exactly should we give it to? Which charities yield the biggest
bang for one’s buck? What if charity X can save 100 lives with $10,000, while charity Y would save only a single life with the same amount? Obviously, you’d want to give your
money to the first charity. But how would you know that one charity is so much
better than the other?
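Here is a minimal sketch of that comparison. The charity names and figures are hypothetical, used purely for illustration; in practice, effective altruists lean on evaluators such as GiveWell for real cost-effectiveness estimates.

```python
# Minimal sketch of the 'bang for your buck' comparison described above.
# Charity names and figures are hypothetical, for illustration only.
charities = {
    "Charity X": {"donation": 10_000, "lives_saved": 100},
    "Charity Y": {"donation": 10_000, "lives_saved": 1},
}

for name, data in charities.items():
    cost_per_life = data["donation"] / data["lives_saved"]
    print(f"{name}: ${cost_per_life:,.0f} per life saved")

# Output:
# Charity X: $100 per life saved
# Charity Y: $10,000 per life saved
```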
This is
where the “effective” part enters the picture: In the 2000s, a small group of
people, including William MacAskill and Toby Ord, began to seriously
investigate these questions. Their aim was to use “reason and evidence” to
determine which charities are the most effective at improving the world, for
the sake of — in MacAskill’s words — “doing good better.” That’s how the
effective altruism movement came about, with the term itself being
independently coined in 2007 and 2011.
At first
glance, all of this sounds pretty good, right? However, some of the conclusions
that effective altruists ended up with are controversial and highly
counterintuitive. For example, MacAskill has argued that we should actively
support sweatshops, ignore “Fairtrade” goods and resist giving to disaster
relief. He’s also defended an idea called “earn to give.” Let’s say that you
want to save as many lives as possible with your money. One of the best ways to
do this, according to effective altruists, is by donating to the Against
Malaria Foundation, which distributes bed nets that help prevent mosquito
bites. So, the more money you give the Against Malaria Foundation, the more bed
nets get distributed and the more lives you save.
This led
MacAskill to argue that some people should pursue the most lucrative careers
possible, even if this means working for what he calls “evil organizations,”
like petrochemical companies or companies on Wall Street. In fact, in 2012
MacAskill traveled to the United States and happened to sit down for lunch with
a young MIT student named Sam Bankman-Fried, and convinced him to pursue this
“earn to give” path. Bankman-Fried then got a job at a proprietary trading firm
called Jane Street Capital, where other effective altruists have worked, and
later decided to get into crypto to — as one journalist put it — “get filthy
rich, for charity’s sake.” Consequently, Bankman-Fried founded two companies
called Alameda Research and FTX, the latter of which catastrophically imploded
last November. Bankman-Fried is now facing charges of “fraud, conspiracy to
commit money laundering and conspiracy to defraud the U.S. and violate campaign
finance laws,” which carry a maximum sentence of 115 years.
Longtermism
Finally,
let’s talk about longtermism. This ideology has been the primary focus of my
critiques over the past two years, as I am genuinely worried that it could have
extremely dangerous consequences if those in power were to take it seriously.
One way to understand its development goes like this: In the early 2000s, Nick
Bostrom published a number of articles on something he called “existential
risks.” In one of these articles, he pointed out that if humanity were to
colonize an agglomeration of galaxies called the Virgo Supercluster, which
includes our own Milky Way, the future human population could be enormous. Not
only could our descendants live on exoplanets similar to the fourth rock from
the sun that we call home (i.e., Earth), but we could gather material resources
and build planet-sized computers on which to run virtual-reality worlds full of
trillions and trillions of people.
In
total, he estimated that some 10^38 people — that’s a 1 followed by 38 zeros —
could exist in the Virgo Supercluster per century if we create these giant
computer simulations, although he later crunched the numbers and estimated some
10^58 simulated people within the entire universe. That’s a really, really big
number. So far on Earth, there have been about 117 billion humans, which is 117
followed by “only” nine zeros. The future population could absolutely dwarf the
total number of people who’ve so far existed. Although our past is pretty
“big,” the future could be so much bigger.
Sometime
around the end of the 2000s, it seems that some effective altruists came across
Bostrom’s work and were struck by an epiphany: If the aim is to positively
influence the greatest number of people possible, and if most people who could
ever exist would exist in the far future, then maybe what we should do is focus
on them rather than current-day people. Even if there’s a tiny chance of
influencing these far-future folks living in huge computer simulations, the
fact that there are so many still yields the conclusion that they should be our
main concern.
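The underlying arithmetic is simple expected value. Here is a rough sketch: the one-in-ten-billion probability is a made-up number for illustration, while the 10^58 figure is Bostrom's estimate quoted above and 8 billion is roughly today's population.

```python
# Rough sketch of the expected-value step described above. The probability is
# a made-up illustrative number; 1e58 is Bostrom's quoted estimate and 8e9 is
# roughly today's population.
current_people = 8e9
potential_future_people = 1e58
tiny_probability = 1e-10  # assumed chance that some action benefits the far future

expected_beneficiaries = tiny_probability * potential_future_people
print(f"Expected far-future beneficiaries: {expected_beneficiaries:.0e}")                    # 1e+48
print(f"Ratio to everyone alive today:     {expected_beneficiaries / current_people:.0e}")   # ~1e+38
```

Even a vanishingly small chance of helping that many people swamps everyone alive today in the calculation, which is how the far future comes to dominate.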
As
MacAskill and a colleague wrote in 2019, longtermism implies that we can more
or less simply ignore the effects of our actions over the next 100 or even
1,000 years, focusing instead on their consequences beyond this “near-termist”
horizon. It’s not that contemporary people and problems don’t matter, it’s just
that, by virtue of its potential bigness, the far future is much more
important. In a phrase: Won’t someone consider the vast multitude of unborn
(digital) people?
Longtermism
was born in a collision between the EA idea of “doing the most good possible”
and the realization that the future population over the next “millions,
billions and trillions of years” could be much bigger than the current
population, or even the total number of people who’ve so far journeyed from the
cradle to the grave on this twirling planetary spaceship of ours.
This
being said, it is useful to distinguish between two versions of longtermism:
the moderate version says that focusing on the far future is a key moral
priority of our time whereas the radical version says that this is the key
priority — nothing matters more. In his recent book “What We Owe the Future,”
MacAskill defends the moderate version, although as I’ve argued elsewhere, he
probably did this for marketing reasons, since the moderate version is less
off-putting than the radical one. However, the radical version is what one
finds defended in many of longtermism’s founding documents, and MacAskill
himself told a Vox reporter last year that he’s “sympathetic” with radical
longtermism and thinks it’s “probably right” (to quote the reporter).
This is
the lay of the land — the main characters, the central ideas, that populated my
last Truthdig article. I hope this helps readers understand what I believe is
the significant overlap between longtermism and the racist, xenophobic, ableist
and classist attitudes that animated 20th-century eugenicists.
Longtermism
and Eugenics: A Primer. By Émile P. Torres. TruthDig, February 4, 2023
Sometime
last year, I happened to come across an email from 1996, written by a 23-year-old
graduate student at the London School of Economics named “Niklas Bostrom.” Upon
reading it, my jaw dropped to the floor, where it stayed for the rest of the
day.
Here’s
part of what Bostrom, now known as “Nick Bostrom,” an Oxford University philosopher
who’s been profiled by The New Yorker and become highly influential in Silicon
Valley, sent to the listserv of “Extropians”:
“Blacks are more stupid than
whites.
I like
that sentence and think it is true. But recently I have begun to believe that I
won’t have much success with most people if I speak like that. They would think
that I were [sic] a “racist”: that I disliked black people and thought that it
is fair if blacks are treated badly. I don’t. It’s just that based on what I have
read, I think it is probable that black people have a lower average IQ than
mankind in general, and I think that IQ is highly correlated with what we
normally mean by “smart” and stupid” [sic]. I may be wrong about the facts, but
that is what the sentence means for me. For most people, however, the sentence
seems to be synonymous with:
I hate
those bloody [the N-word, included in Bostrom’s original email, has been
redacted]!!!!
My point
is that while speaking with the provocativness [sic] of unabashed objectivity
would be appreciated by me and many other persons on this list, it may be a
less effective strategy in communicating with some of the people “out there”.”
Although
shocking, I honestly can’t say I was surprised. I wasn’t. In fact, I’d been working
on a series of articles for Truthdig exploring the deep connections between
longtermism, a bizarre, techno-utopian ideology that Bostrom helped establish,
and eugenics, a pseudoscientific movement that inspired some of the worst atrocities
of the 20th century. The fact is that, as the artificial intelligence
researcher Timnit Gebru, one of TIME’s “100 most influential people of 2022,”
has repeatedly pointed out on Twitter, longtermism is “rooted in eugenics” or
even “eugenics under a different name.”
This is
not hyperbole; it’s not an exaggeration. If anything, Gebru’s statement doesn’t
go far enough: longtermism, which emerged out of the effective altruism (EA)
movement over the past few years, is eugenics on steroids. On the one hand,
many of the same racist, xenophobic, classist and ableist attitudes that
animated 20th-century eugenics are found all over the longtermist literature
and community. On the other hand, there’s good reason to believe that if the
longtermist program were actually implemented by powerful actors in high-income
countries, the result would be more or less indistinguishable from what the
eugenicists of old hoped to bring about. Societies would homogenize, liberty
would be seriously undermined, global inequality would worsen and white
supremacy — famously described by Charles Mills as the “unnamed political
system that has made the modern world what it is today” — would become even
more entrenched than it currently is. The aim of this article is to explore the
first issue above; the second will be our focus in the next article of this
series for Truthdig.
So, back
to Bostrom. My first thought after reading his email was: Is this authentic?
Has it been tampered with? How can I know if he really wrote this? I thus
contacted everyone who participated in the email thread, and someone replied to
confirm that Bostrom did indeed write those words. However, I also contacted
Anders Sandberg, a long-time collaborator of Bostrom’s with whom I’d been
acquainted for many years through conferences on “existential risk” — the most
important concept of longtermist ideology. (Until 2019 or so, I identified as a
longtermist myself, a fact that I deeply regret. But this has, at least, given
me an intimate understanding of what I would now describe as a profoundly
dangerous ideology.)
In
response, Sandberg suggested to me that the email is probably authentic (we now
know it is), and then, apparently, alerted Bostrom to the fact that I was aware
of his remarks. This prompted Bostrom to release a perfunctory,
sloppily written “apology” full of typos and grammatical errors that didn’t
bother to redact the N-word and, if anything, has done more to alert the
general public to this noxious ideology than anything I might have published
about Bostrom’s email two weeks ago.
“I have
caught wind,” Bostrom writes, “that somebody has been digging through the
archives of the Extropians listserv with a view towards finding embarrassing
materials to disseminate about people.” He continues, writing as if he’s the
victim: “I fear that selected pieces of the most offensive stuff will be
extracted, maliciously framed and interpreted, and used in smear campaigns. To
get ahead of this, I want to clean out my own closet, and get rid of the very
worst of the worst in my contribution file.” It appears that he believes his
“apology” is about public relations rather than morality; it’s about “cleaning
out his closet” rather than making things right. He goes on to say that he
thinks “the invocation of a racial slur was repulsive” and has donated to
organizations like GiveDirectly and Black Health Alliance, though he leaves
wide open the possibility that there really might be genetically based
cognitive differences between groups of people (there’s no evidence of this).
“It is not my area of expertise, and I don’t have any particular interest in
the question,” he writes with a shrug. “I would leave to others [sic], who have
more relevant knowledge, to debate whether or not in addition to environmental
factors, epigenetic or genetic factors play any role.”
Sandberg
then casually posted this “apology” on Twitter, writing that Bostrom’s words do
“not represent his views and behavior as I have seen them over the 25 years I
have known him.” He further warns that “the email has become significantly more
offensive in the current cultural context: levels of offensiveness change as
cultural attitudes change (sometimes increasing, often decreasing). This causes
problems when old writings are interpreted by current standards.” Sandberg
seems to be suggesting that Bostrom’s statements weren’t that big a deal when
they were written in 1996, at least compared to how our “woke” world of “overly
sensitive” “social justice warriors” always on the hunt to “cancel” the next
“beleaguered” white man will see them (my scare quotes).
This, of
course, triggered an avalanche of protest from academics and onlookers, with
one person replying, “I am the same age as Nick Bostrom and participated in
many free-wheeling philosophical discussions. I never wrote anything like this,
and it would have been shockingly racist at any point in my life.” Another
said, “I was a student in the UK in the mid-90s and it was just as offensive
then as it is now.” Still others took issue with the fact that Bostrom “never
even backed down from the assertion that black people are intellectually
inferior and instead went on to assert ‘it’s just not his area of expertise.’”
Many simply dismissed it as “a study in a non-apology,” given that Bostrom
“says he repudiates the horrific comments” he made, but “then goes right back
into them.” As Gebru summarized the whole ignominious affair:
“I don’t know what’s worse. The initial
email, Bostrom’s “statement” about it, or [Sandberg’s Twitter] thread. I’m
gonna go with the latter 2 because that’s what they came up with in preparation
for publicity. Their audacity never ceases to amaze me no matter how many times
I see it.”
In my
view, a good apology should do three things: First, make a clear and compelling
case that one understands why one’s words or deeds were wrong or caused harm.
Second, make a clear and compelling case that one is sincerely remorseful for
having done that wrong or caused harm. And third, take concrete steps toward
making things right. I like to call this an “active apology,” which contrasts
with the facile “passive” apology that insouciantly says, “Yeah, whatever,
sorry, now let’s move on.”
Bostrom’s
“apology” was passive in the extreme, not active. He showed virtually no
evidence that he understands why claiming that whites are more intelligent than
Blacks would be hurtful or wrong — both morally and scientifically — and seems
more concerned about public relations than driven by genuine compunction. His
dismissive attitude about the whole debacle, in fact, is on full display on his
personal website, which he updated to say: “[S]ometimes I have the impression
that the world is a conspiracy to distract us from what’s important —
alternatively by whispering to us about tempting opportunities, at other times
by buzzing menacingly around our ears like a swarm of bloodthirsty mosquitos.”
He seems — so far as I can tell from this — to think of those speaking out
against his racist remarks and shameless non-apology as “bloodthirsty
mosquitos” who are “buzzing menacingly” around him as if part of a “conspiracy
to distract” him “from what’s really important,” such as saving the world from
superintelligent machines or suggesting that a highly invasive global
surveillance system may be necessary to save civilization from itself.
As it happens,
I believe in forgiveness. People make mistakes, and a single statement
shouldn’t define one’s entire career, reputation or life. Someone can say
something racist and not be a racist, and someone can be a racist and later
change their views. Christian Picciolini, a former leader of the white power
movement in the United States, whose life’s work now focuses on combating
hatred (he cofounded the organization Life After Hate), provides an example.
Indeed, the original article I was working on for Truthdig about longtermism
and eugenics didn’t say that much about Bostrom’s email. It wasn’t the
centerpiece of the article, but instead served merely as background.
Background to what? To everything Bostrom’s written since then. In my view,
it’s difficult to avoid the conclusion that he still believes that whites are
more “intelligent” than Blacks — hence his decision not to denounce this
statement in his “apology.”
For
example, consider that six years after using the N-word, Bostrom argued in one
of the founding documents of longtermism that one type of “existential risk” is
the possibility of “dysgenic pressures.” The word “dysgenic” — the opposite of
“eugenic” — is all over the 20th-century eugenics literature, and worries about
dysgenic trends motivated a wide range of illiberal policies, including
restrictions on immigration, anti-miscegenation laws and forced sterilizations,
the last of which resulted in some 20,000 people being sterilized against their
will in California between 1909 and 1979.
For
Bostrom, the primary “dysgenic”-related worry is that less “intellectually
talented individuals” might outbreed their more “intellectually talented”
peers. In his 2002 article on “existential risks,” which helped launch the
longtermist movement, he writes:
“Currently it seems that there is a
negative correlation in some places between intellectual achievement and
fertility. If such selection were to operate over a long period of time, we
might evolve into a less brainy but more fertile species, homo philoprogenitus
(“lover of many offspring”).”
Although
Bostrom doesn’t elaborate on what “in some places” means, it’s not hard to see
a “racial” link here, given that, at the time he was writing, fertility
rates among white people tended to be lower than those of other groups, both in
the U.S. and around the world.
Yet this
was not the only time Bostrom made this claim: He repeated it in a 2017 book
chapter with — you guessed it — Anders Sandberg. (So it’s not surprising that
Sandberg was willing to defend Bostrom: A defense of Bostrom is also a defense
of himself.) They wrote: “It should be noted that IQ correlates negatively with
fertility in many modern societies,” and then cited three papers, all from the
1970s and 1980s, to support this. One of these papers argues that Blacks score
on average about three-quarters of a standard deviation lower than whites on
vocabulary tests, which the authors (of the cited article) say “perform quite
well as measures of general intelligence.” These authors add that “nonwhites average
more children and lower test scores,” and that earlier publications showing “a
neutral or slightly eugenic [as opposed to dysgenic] relationship” are biased
“in part because they did not include nonwhites.” When nonwhites and other
missing factors are included, the relationship between “intelligence and
completed fertility” appears “predominantly negative.” This is one of the
papers on which Bostrom and Sandberg base their “negative correlation” claim.
But it
gets so much worse. First, the notion of “IQ” is highly dubious. Intelligence
is a complex phenomenon that cannot be reduced to a single number. The Nobel
laureate Richard Feynman had an IQ of 126 (not very high), and plenty of people
in Mensa aren’t university professors. In 1972, Robert Williams created the
“Black Intelligence Test of Cultural Homogeneity,” a multiple-choice test that,
it turns out, Black people scored considerably higher on than white people. As
Daphne Martschenko, an assistant professor at the Stanford Center for
Biomedical Ethics, notes, IQ tests were developed in part by 20th-century
eugenicists, and “in their darkest moments” they became “a powerful way to
exclude and control marginalized communities using empirical and scientific
language.” Gebru similarly observes in a chapter for “The Oxford Handbook of
Ethics of AI” that IQ tests were “designed by White men whose concept of
‘smartness’ or ‘genius’ was shaped, centered and evaluated on specific types of
White men.”
Yet the
longtermist community is, for lack of a better word, obsessed with “IQ” and
“intelligence.” To quote Zoe Cremer, a prominent critic of EA, the movement
that gave rise to longtermism, “intelligence, as a concept and an asset, plays
a dominant role in EA.” It’s not just a “highly valued trait in the community,”
but surveys even “sometimes ask what IQ members have.” Community members “also
compliment and kindly introduce others using descriptors like intelligent or
smart,” and certain people are widely known and revered for their intellect.
They are said to be intimidatingly intelligent and therefore epistemically
superior. Their time is seen as precious. EAs sometimes showcase their humility
by announcing how much lower they would rank their own intelligence underneath
that of the revered leaders.
Examples
would include Bostrom, Sandberg, Eliezer Yudkowsky, Robin Hanson, Scott
Alexander, Toby Ord and William MacAskill (all white men, by the way, a point
that isn’t lost on Cremer). Indeed, the obsession with IQ is partly because of
these individuals. Yudkowsky has on numerous occasions boasted about his high
IQ (supposedly 143), and Bostrom published a paper in 2014, which argues that
by selecting embryos with the genetic markers of superior intelligence,
creating new embryos out of them (via stem cells) and then repeating this
process 10 times, you could get IQ gains of up to 130 points.
Meanwhile,
Sandberg and Julian Savulescu — a philosopher who once argued that “moral
bioenhancement” should be mandatory — write in a coauthored book chapter that
IQ is linked to things like poverty, criminal behavior, high school dropout
rates, parentless children, welfare recipiency and out-of-wedlock births. Where
do they get their data from? It may not surprise you to discover the answer is
Charles Murray’s 1994 book “The Bell Curve,” written with the late Richard
Herrnstein. Murray is world-renowned for his scientific racism, according to
which Black people are less intelligent than whites for genetic reasons —
exactly the view that Bostrom expressed in his email and left the door open to
in his subsequent “apology.”
You
might think that this is a one-off, but you’d be wrong: The fingerprints of
Murray’s “scholarship” are all over the longtermist community. Consider that
Scott Alexander, mentioned above, is widely revered within the EA and
longtermist communities. In a leaked email, Alexander wrote that “human
biodiversity” — the view that groups of people differ in traits like
“intelligence” for genetic reasons, once described as “an ideological successor
to eugenics” — is “probably partially correct,” to which he added: “I will
appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by
‘appreciate,’ I mean that if you ever do, I will probably either leave the
Internet forever or seek some sort of horrible revenge.” Elsewhere, Alexander
has publicly aligned himself with Murray, who happens to be a member of the
far-right “Human Biodiversity Institute,” and made the case on his blog Astral
Codex Ten that “dysgenics is real,” though happening slowly — similar to the
claim Bostrom made in 2002. He writes:
“In general, educated people reproduce
less than uneducated people … The claim isn’t that fewer people will have PhDs
in the future: colleges will certainly solve that by increasing access to education
and/or dumbing down requirements. It’s a dysgenic argument where we assume at
any given time the people with higher degrees have on average higher genetic
intelligence levels. If they’re reproducing less, the genetic intelligence
level of the population will decrease.”
Alexander
goes on to say that there’s “some debate in the scientific community about
whether this is happening, but as far as I can tell the people who claim it
isn’t have no good refutation for the common sense argument it has to be. The
people who claim that it is make more sense.” He concludes that while this
isn’t good news, the fact that it’s slow suggests this dysgenic trend probably
won’t be “apocalyptic.”
Or
consider that Sam Harris has vigorously defended Charles Murray’s race science,
even promoting it on his popular Making Sense podcast, and Harris is closely
linked with the EA and longtermist communities. For example, Harris appeared on
stage next to prominent longtermists like Bostrom, Elon Musk and Max Tegmark
during an event hosted by the Future of Life Institute, a longtermist
organization to which Musk donated $10 million. The Future of Life Institute
also platformed Harris on their podcast, and Harris was invited to the
exclusive “AI safety conference in Puerto Rico” in 2015 in which Bostrom,
Sandberg, Yudkowsky, Hanson and Toby Ord all participated. Harris even wrote a
glowing blurb for MacAskill’s recent book “What We Owe the Future,” in which he
says that “no living philosopher has had a greater impact upon my ethics than
Will MacAskill.”
Even
more, some existential risk scholars seem to have changed their minds about
Murray based on Harris’ promotion of Murray’s race science. To quote Olle
Häggström — a Swede, like Bostrom, whose recent work has focused on existential
risks — “Murray was portrayed as a racist and worse, and I actually think that
those of us who have been under that impression for a couple of decades owe him
the small favor of listening to [Harris’] podcast episode and finding out what
a wise and sane person he really is” (translated from Swedish).
Harris
himself holds the very same racist views about “intelligence” and “IQ” that
both Bostrom and Murray have articulated. For example, here’s what he said in a
podcast interview several years ago (quoting at length):
“As bad luck would have it, but as you
[would] absolutely predict on the basis of just sheer biology, different
populations of people, different racial groups, different ethnicities,
different groups of people who have been historically isolated from one another
geographically, test differently in terms of their average on this measure of
cognitive function. So you’re gonna give the Japanese and the Ashkenazi Jews,
and African Americans, and Hawaiians … you’re gonna take populations who differ
genetically—and we know they differ genetically, that’s not debatable—and you
give them IQ tests, it would be a miracle if every single population had the
exact same mean IQ. And African Americans come out about a standard deviation
lower than white Americans. … So, if it’s normed to the general population,
predominantly white population for an average of 100, the average in the
African American community has been around 85.”
To my
knowledge, none of the leading longtermists have publicly objected to this
jumble of scientifically illiterate race science. In fact, MacAskill,
Yudkowsky, Bostrom and Toby Ord all appeared on Harris’ podcast after Harris
promoted Charles Murray and made the racist remarks quoted above. Similarly, no
one complained when MacAskill got a ringing endorsement from Harris. In fact, I
asked MacAskill point-blank during a Reddit “Ask Me Anything” about why he’d
requested a blurb from Harris given Harris’ scientific racism, and my question
was (drum roll) quickly deleted.
Longtermists,
most of whom are also transhumanists, like to claim that they’re far more
enlightened than the eugenicists of the last century. As Bostrom writes in his
paper “Transhumanist Values,” which explains that the core value of
transhumanism is to use person-engineering technologies to radically “enhance”
ourselves: “racism, sexism, speciesism, belligerent nationalism and religious
intolerance are unacceptable.” Similarly, the World Transhumanist Association’s
FAQ, mostly written by Bostrom, says that “in addition to condemning the
coercion involved in [last century’s eugenics programs], transhumanists
strongly reject the racialist and classist assumptions on which they were
based.” Yet the evidence suggests the opposite: longtermism, and the
transhumanist ideology that it subsumes, is often infused with the very same
racist, xenophobic, classist and ableist attitudes that animated the vile
eugenicists of the last century. There are many more examples — in addition to
everything mentioned above — and indeed once you start looking for instances,
they begin to appear everywhere.
Yudkowsky,
for example, tweeted in 2019 that IQs seem to be dropping in Norway, which he
found alarming. However, he noted that the “effect appears within families, so
it’s not due to immigration or dysgenic reproduction” — that is, it’s not the
result of less intelligent foreigners immigrating to Norway, a majority-white
country, or less intelligent people within the population reproducing more.
Earlier, in 2012, he responded with stunning blitheness to someone asking: “So
if you had to design a eugenics program, how would you do it? Be creative.”
Yudkowsky then outlined a 10-part recipe, writing that “the real step 1 in any
program like this would be to buy the 3 best modern textbooks on animal
breeding and read them.” He continued: “If society’s utility has a large
component for genius production, then you probably want a very diverse mix of
different high-IQ genes combined into different genotypes and phenotypes.” But
how could this be achieved? One possibility, he wrote, would be to impose taxes
or provide benefits depending on how valuable your child is expected to be for
society. Here’s what he said:
“There would be a tax or benefit based on
how much your child is expected to cost society (not just governmental costs in
the form of health care, schooling etc., but costs to society in general,
including foregone labor of a working parent, etc.) and how much that child is
expected to benefit society (not lifetime tax revenue or lifetime earnings, but
lifetime value generated — most economic actors only capture a fraction of the
value they create). If it looks like you’re going to have a valuable child, you
get your benefit in the form of a large cash bonus up-front … and lots of free
childcare so you can go on having more children.”
This
isn’t a serious proposal — it’s a fictional exercise — but it exemplifies the
high level of comfort that this community has with eugenics and the
hereditarian idea that “intelligence” is substantially determined by our genes.
Or take
another example: Peter Singer, who once defended a longtermist position,
although he now seems to share the view that longtermism could in fact be
dangerous. Nonetheless, Singer is one of the leading effective altruists, along
with MacAskill and Toby Ord, and has been fiercely criticized for holding views
that are hardly distinguishable from those of the most vicious eugenicists of
centuries past. In a 1985 book titled “Should the Baby Live?”, Singer and his
coauthor warn their audience that “this book contains conclusions which some
readers will find disturbing. We think that some infants with severe
disabilities should be killed.” Why? In part because of the burden they’d place
on society.
This is
eugenics of the darkest sort — but has anyone in the longtermist or EA
communities complained? No, not a peep, because the ideas of eugenics are so
ubiquitous within these communities that once you’re immersed in them, these ideas
simply become normalized. Indeed, the flip side of worries that intellectually
disabled infants would be too costly for society is a concern that too few
smart people — a problem of underpopulation, one of Musk’s big worries — could
slow down economic productivity, which longtermists like MacAskill believe would
be really bad. This leads MacAskill to argue in “What We Owe the Future” that
if scientists with Einstein-level research abilities were cloned and trained
from an early age, or if human beings were genetically engineered to have
greater research abilities, this could compensate for having fewer people
overall and thereby sustain technological progress.
At the
extreme, MacAskill even suggests that we might simply replace the human
workforce with sentient machines, since “this would allow us to increase the
number of ‘people’ working on R&D as easily as we currently scale up
production of the latest iPhone.”
It
should be clear at this point why longtermism, with its transhumanist vision of
creating a superior new race of “posthumans,” is eugenics on steroids. Whereas
the old eugenicists wanted to improve the “human stock,” longtermists like
MacAskill would be more than happy to create a whole new population of
“posthuman stock.” In Bostrom’s vision, the result could quite literally be a
“Utopia,” which he vividly details in his “Letter from Utopia.” Imagine a world
in which we become superintelligent, immortal posthumans who live in
“surpassing bliss and delight.” Imagine a world in which you pursue knowledge
instead of “hanging out in the pub,” talk about philosophy instead of
“football,” listen to jazz and work “on your first novel” instead of “watching
television.” This is how Bostrom pictures the march toward Utopia, and as
Joshua Schuster and Derek Woods observe in their book “Calamity Theory,” “the class
snobbery here is tremendous.” So, we’ve covered racism, xenophobia, ableism and
now classism. The new eugenics is really no different than the old one.
In fact,
the glaring similarities between the new and the old are no coincidence. As
Toby Ord writes in his book “The Precipice,” which could be seen as the prequel
to MacAskill’s “What We Owe the Future,” the ultimate task for humanity is to
“fulfill our long-term potential” in the universe. What exactly is this
supposed “potential”? Ord isn’t really sure, but he’s quite clear that it will
almost certainly involve realizing the transhumanist project. “Forever
preserving humanity as it now is may also squander our legacy, relinquishing
the greater part of our potential,” he declares, adding that “rising to our
full potential for flourishing would likely involve us being transformed into
something beyond the humanity of today.” Now consider the fact that the idea of
transhumanism was literally developed by some of the most prominent eugenicists
of the 20th century, most notably Julian Huxley, who was president of the
British Eugenics Society from 1959 to 1962. Using almost the exact same words
as Ord, Huxley wrote in 1950 — after the horrors of World War II, one should
note — that if enough people come to “believe in transhumanism,” then “the
human species will be on the threshold of a new kind of existence … It will at
last be consciously fulfilling its real destiny.” In fact, as philosophers will
affirm, transhumanism is classified as a form of so-called “liberal eugenics.”
(The term “liberal,” and why it’s misleading, is the focus of the next article
of this series.)
While
Huxley, upon witnessing the rise of Nazism in 1930s Germany, came to believe
that eugenicists should reject racism, it’s not hard to find such attitudes
among members of the first organized transhumanist movement: the Extropians,
which formed in the early 1990s and established the listserv to which Bostrom
sent his now-infamous email. Indeed, Bostrom wasn’t the only one on the
listserv making racist remarks. One participant going by “Den Otter” ended an
email with the line, “What I would most desire would be the separation of the
white and black races” (although some did object to this, just as some opposed
Bostrom’s overt racism). Meanwhile, one year after Bostrom’s email, the MIT
Extropians posted the following on their website and in a “pamphlet that
they sent out to freshmen”:
“MIT certainly lowers standards for women
and “underrepresented” minorities: The average woman at MIT is less intelligent
and ambitious than the average man at MIT. The average “underrepresented”
minority at MIT is less intelligent and ambitious than the average
non-“underrepresented” minority.”
These
ideas were common then, and they’re common now. So while everyone should be
appalled by Bostrom’s email, no one should be surprised. The longtermist
movement that Bostrom helped found is, I would argue, just another iteration of
what some scholars have called the “eternal return of eugenics.”
Likewise,
no one should be surprised that so many longtermists couldn’t care less about
the scientific racism of Sam Harris, Scott Alexander and Charles Murray, or the
“kill disabled infants” view of Singer. No one should be surprised to find
Sandberg citing Murray’s data about IQ and poverty, criminality, welfare and
out-of-wedlock births. No one should be surprised by Bostrom’s repeated claims
about “intelligence” or “IQ” being inversely correlated with fertility rates.
No one should be surprised that the EA community has for many years wooed
Steven Pinker, who believes that Ashkenazi Jews are intellectually superior
because of rapid genetic evolution from around 800 AD to 1650 — an idea that
some have called the “smiling face of race science.” No one should be surprised
to stumble upon, say, references to “elite ethnicities” in Robin Hanson’s work,
by which Hanson means the Jewish people, since — he writes — “Jews comprise a
disproportionate fraction of extreme elites such as billionaires, and winners
of prizes such as the Pulitzer, Oscar and Nobel prizes.”
And no
one should be surprised that all of this is wrapped up in the same language of
“science,” “evidence,” “reason” and “rationality” that pervades the eugenics
literature of the last century. Throughout history, white men in power have
used “science,” “evidence,” “reason” and “rationality” as deadly bludgeons to
beat down marginalized peoples. Effective altruism, according to the movement’s
official website, “is the use of evidence and reason in search of the best ways
of doing good.” But we’ve heard this story before: the 20th-century eugenicists
were also interested in doing the most good. They wanted to improve the overall
health of society, to eliminate disease and promote the best qualities of
humanity, all for the greater social good. Indeed, many couched their aims in
explicitly utilitarian terms, and utilitarianism is, according to Toby Ord, one
of the three main inspirations behind EA. Yet scratch the surface, or take a
look around the community with unbiased glasses, and suddenly the same
prejudices show up everywhere.
I should
be clear that not every EA or longtermist holds these views. I know that some
don’t. My point is that you don’t just find them on the periphery of the
movement. They’re not merely espoused by those at the fringe. They’re positions
expressed, promoted or at least tolerated by some of the most influential and
respected members of the community. The main focus of longtermism is ensuring
that the long-run future of humanity goes as well as possible. Who, though,
would want to live in the “Utopia” they envision?
Nick
Bostrom, Longtermism, and the Eternal Return of Eugenics. By Émile P. Torres. Truthdig, January 20, 2023.
In his
new book What We Owe the Future, William MacAskill outlines the case for what
he calls “longtermism.” That’s not just another word for long-term thinking.
It’s an ideology and movement founded on some highly controversial ideas in
ethics.
Longtermism
calls for policies that most people, including those who advocate for long-term
thinking, would find implausible or even repugnant. For example, longtermists
like MacAskill argue that the more “happy people” who exist in the universe,
the better the universe will become. “Bigger is better,” as MacAskill puts it
in his book. Longtermism suggests we should not only have more children right
now to improve the world, but ultimately colonize the accessible universe, even
creating planet-size computers in space in which astronomically large
populations of digital people live in virtual-reality simulations.
Backed
by an enormous promotional budget of roughly $10 million that helped make What
We Owe the Future a bestseller, MacAskill’s book aims to make the case for
longtermism. Major media outlets like The New Yorker and The Guardian have
reported on the movement, and MacAskill recently appeared on The Daily Show
with Trevor Noah. Longtermism’s ideology is gaining visibility among the
general public and has already infiltrated the tech industry, governments, and
universities. Tech billionaires like Elon Musk, who described longtermism as “a
close match for my own philosophy,” have touted the book, and a recent article
in the UN Dispatch noted that “the foreign policy community in general and the
United Nations in particular are beginning to embrace longtermism.” So it’s
important to understand what this ideology is, what its priorities are, and how
it could be dangerous.
Other
scholars and I have written about the ethical hazards of prioritizing the
future potential of humanity over the lives of Earth’s current inhabitants. As
the philosopher Peter Singer put it: “The dangers of treating extinction risk
as humanity’s overriding concern should be obvious. Viewing current problems
through the lens of existential risk to our species can shrink those problems
to almost nothing, while justifying almost anything that increases our odds of
surviving long enough to spread beyond Earth.”
MacAskill
sees nuclear war, engineered pathogens, advanced artificial intelligence, and
global totalitarianism as “existential risks” that could wipe out humans
altogether or cause the irreversible collapse of industrial civilization.
However, he is notably less concerned about climate change, which many
longtermists believe is unlikely to directly cause an existential catastrophe,
although it may increase the probability of other existential risks. MacAskill
also makes several astonishing claims about extreme global warming that simply
aren’t supported by today’s best science, and he is overly optimistic about the
extent to which technology can fix climate change. In my view, policy makers
and the voting public should not make decisions about climate change based on
MacAskill’s book.
Defining
existential risk.
For longtermists, an existential risk is any event that would
prevent humanity from fulfilling its “long-term potential” in the universe,
which typically involves colonizing space, using advanced technologies to
create a superior posthuman species, and producing astronomical amounts of “value”
by simulating huge numbers of digital people. Avoiding existential risks is a
top priority for longtermism.
The most
obvious existential risk is human extinction, but there are many survivable
scenarios as well. For example, technological “progress” could stall, leaving
humans Earth-bound. Or, more controversially, Nick Bostrom—a philosopher at the
University of Oxford’s Future of Humanity Institute who has been called “the
father of longtermism”—argued in a seminal 2002 paper that if less “intellectually
talented” individuals outbreed their more intelligent peers, the human species
could become too unintelligent to develop the technologies needed to fulfill
our potential (although he assured readers that humans will probably develop
the ability to create super-smart designer babies before this happens).
Longtermists
typically don’t regard climate change as an existential risk, or at least not
one that’s as worrisome as superintelligent machines and pandemics. Bostrom’s
colleague and fellow philosopher Toby Ord, for example, concluded in his 2020
book The Precipice that there’s only about a 1-in-1,000 chance that climate
change will cause an existential catastrophe in the next 100 years, compared
with about a 1-in-10 chance of superintelligent machines doing this. The first
figure is based in part on unpublished research by Ord’s former colleague, John
Halstead, who examined ways that climate change might directly compromise
humanity’s long-term potential in the universe. Halstead, an independent
researcher who until recently worked at the Forethought Foundation for Global
Priorities Research directed by MacAskill, argued that “there isn’t yet much
evidence that climate change is a direct [existential] risk; it’s hard to come
up with ways in which climate change could be.”
It’s
impossible to read the longtermist literature published by the group 80,000
Hours (co-founded by MacAskill), Halstead, and others without coming away with
a rosy picture of the climate crisis. Statements about climate change being bad
are frequently followed by qualifiers such as “although,” “however,” and “but.”
There’s lip service to issues like climate justice—the fact that the Global
North is primarily responsible for a problem that will disproportionately
affect the Global South—but ultimately what matters to longtermists is how
humanity fares millions, billions, and even trillions of years from now. In the
grand scheme of things, even a “giant massacre for man” would be, in Bostrom’s
words, nothing but “a small misstep for mankind” if some group of humans
managed to survive and rebuild civilization.
One
finds the same insouciant attitude about climate change in MacAskill’s recent
book. For example, he notes that there is a lot of uncertainty about the
impacts of extreme warming of 7 to 10 degrees Celsius but says “it’s hard to
see how even this could lead directly to civilisational collapse.” MacAskill
argues that although “climatic instability is generally bad for agriculture,”
his “best guess” is that “even with fifteen degrees of warming, the heat would
not pass lethal limits for crops in most regions,” and global agriculture would
survive.
Assessing
MacAskill’s climate claims.
These claims struck me as dubious, but I’m not a
climate scientist or agriculture expert, so I contacted a number of leading
researchers to find out what they thought. They all told me that MacAskill’s
climate claims are wrong or, at best, misleading.
For
example, I shared the section about global agriculture with Timothy Lenton, who
directs the Global Systems Institute and is Chair in Climate Change and Earth
System Science at the University of Exeter. Lenton told me that MacAskill’s
assertion about 15 degrees of warming is “complete nonsense—we already show
that in a 3-degree-warmer world there are major challenges of moving niches for
human habitability and agriculture.”
Similarly,
Luke Kemp, a research associate at the University of Cambridge’s Centre for the
Study of Existential Risk who recently co-authored an article with Lenton on
catastrophic climate change and is an expert on civilizational collapse, told
me that “a temperature rise of 10 degrees would be a mass extinction event in
the long term. It would be geologically unprecedented in speed. It would mean
billions of people facing sustained lethal heat conditions, the likely
displacement of billions, the Antarctic becoming virtually ice-free, surges in
disease, and a plethora of cascading impacts. Confidently asserting that this
would not result in collapse because agriculture is still possible in some
parts of the world is silly and simplistic.”
I also
contacted Gerardo Ceballos, a senior researcher at the Universidad Nacional
Autónoma de México’s Institute of Ecology and a member of the National Academy
of Sciences, who described MacAskill’s claim as “nonsense.”
The
renowned climatologist and geophysicist Michael Mann, Presidential
Distinguished Professor in the Department of Earth and Environmental Science at
the University of Pennsylvania, said MacAskill’s “argument is bizarre and
Panglossian at best. We don’t need to rely on his ‘best guess’ because actual
experts have done the hard work of looking at this objectively and
comprehensively.” For example, Mann said, recent assessments by the
Intergovernmental Panel on Climate Change have reported that at even 2 to 3
degrees of warming, “we are likely to see huge agricultural losses in tropical
and subtropical regions where cereal crops are likely to decrease sharply—and
more extreme weather disasters, droughts, floods, and interruptions of distribution
systems and supply chains will offset the once-theorized benefit of longer
growing seasons in mid-to-high latitude regions.”
The
experts I consulted had similar responses to another claim in MacAskill’s book,
that underpopulation is more worrisome than overpopulation—an idea frequently
repeated by Elon Musk on social media. Ceballos, for example, replied: “More
people will mean more suffering and a faster collapse,” while Philip Cafaro, an
environmental ethicist, told me that MacAskill’s analysis is “just wrong on so
many levels … It’s very clear that 8 billion people are not sustainable on
planet Earth at anything like our current level of technological power and
per-capita consumption. I think probably one to two billion people might be
sustainable.”
Feedback
and advice.
Why, then, did MacAskill make these assertions? In the first few
pages of the book’s introduction, MacAskill writes that it took more than a
decade’s worth of full-time work to complete the manuscript, two years of which
were dedicated to fact-checking its claims. And in the acknowledgments section,
he lists 30 scientists and an entire research group as having been consulted on
“climate change” or “climate science.”
I wrote
to all but two of the scientists MacAskill thanked for providing “feedback and
advice,” and the responses were surprising. None of the 20 scientists who
responded to my email said they had advised MacAskill on the controversial
climate claims above, and indeed most added, without my prompting, that they
very strongly disagree with those claims.
Many of
the scientists said they had no recollection of speaking or corresponding with
MacAskill or any of the research assistants and contributors named in his book.
The most disturbing responses came from five scientists who told me that they
were almost certainly never consulted.
“There
is a mistake. I do not know MacAskill,” replied one of the scientists.
“This
comes as something of a surprise to me, because I didn’t consult with him about
this issue, nor in fact had I heard of it before,” wrote another.
“I was
contacted by MacAskill’s team to review their section on climate change, though
unfortunately I did not have time to do so. Therefore, I did not participate in
the book or in checking any of the content,” a third scientist told me.
[Editor’s
note: The Bulletin contacted MacAskill to ask about the acknowledgements. In an
email, he replied that acknowledging one scientist who declined to participate
was “an administrative error” that will be corrected in the book’s paperback
edition. MacAskill said the other climate experts he listed were contacted by a
member of his research team and “did provide feedback to my team on specific
questions related to climate change. Regrettably, the researcher who reached
out to these experts did not mention that the questions they were asking were
for What We Owe The Future, which explains why they were surprised to be
acknowledged in the book.” MacAskill also said he “would never claim that
experts who gave feedback on specific parts of the book agree with every
argument made.”]
It is
troubling that MacAskill did not ask all of his climate consultants to vet his
bold claims about the survivability of extreme warming and the importance of
increasing the human population. Longtermism has become immensely influential
over the past five years—although this may change following the collapse of the
cryptocurrency exchange platform FTX, which was run by an ardent longtermist,
Sam Bankman-Fried, who may have committed fraud—and could affect the policies
of national and international governing bodies. Yet some longtermist claims
about climate change lack scientific rigor.
Humanity
faces unprecedented challenges this century. To navigate these, we will need
guiding worldviews that are built on robust philosophical foundations and solid
scientific conclusions. Longtermism, as defended by MacAskill in his book,
lacks both. We desperately need more long-term thinking, but we can—and must—do
better than longtermism.
Correction:
Owing to an editing error, the original version of this piece identified John
Halstead as the leader of an applied research team at the Founder’s Pledge.
Halstead left that position in 2019. Also, the original version of this article
said the author had written to all 30 climate experts MacAskill thanked.
What “longtermism” gets wrong about climate
change. By Émile P. Torres. Bulletin of the Atomic Scientists, November 22,
2022
People
are bad at predicting the future. Where are our flying cars? Why are there no
robot butlers? And why can’t I take a vacation on Mars?
But we
haven’t just been wrong about things we thought would come to pass; humanity
also has a long history of incorrectly assuring ourselves that certain
now-inescapable realities wouldn’t. The day before Leo Szilard devised the
nuclear chain reaction in 1933, the great physicist Ernest Rutherford
proclaimed that anyone who propounded atomic power was “talking moonshine.”
Even computer industry pioneer Ken Olsen in 1977 supposedly said he didn’t
foresee individuals having any use for a computer in their home.
Obviously
we live in a nuclear world, and you probably have a computer or two within
arm’s reach right now. In fact, it’s those computers — and the exponential
advances in computing generally — that are now the subject of some of society’s
most high-stakes forecasting. The conventional expectation is that ever-growing
computing power will be a boon for humanity. But what if we’re wrong again?
Could artificial superintelligence instead cause us great harm? Our extinction?
As
history teaches, never say never.
It seems
only a matter of time before computers become smarter than people. This is one
prediction we can be fairly confident about — because we’re seeing it already.
Many systems have attained superhuman abilities on particular tasks, such as
playing Scrabble, chess and poker, where people now routinely lose to the bot
across the board.
But
advances in computer science will lead to systems with increasingly general
levels of intelligence: algorithms capable of solving complex problems in
multiple domains. Imagine a single algorithm that could beat a chess
grandmaster but also write a novel, compose a catchy melody and drive a car
through city traffic.
According
to a 2014 survey of experts, there’s a 50 percent chance “human-level machine
intelligence” is reached by 2050, and a 90 percent chance by 2075. Another
study from the Global Catastrophic Risk Institute found at least 72 projects
around the world with the express aim of creating an artificial general
intelligence — the steppingstone to artificial superintelligence (ASI), which
would not just perform as well as humans in every domain of interest but far
exceed our best abilities.
The
success of any one of these projects would be the most significant event in
human history. Suddenly, our species would be joined on the planet by something
more intelligent than us. The benefits are easily imagined: An ASI might help
cure diseases such as cancer and Alzheimer’s, or clean up the environment.
But the
arguments for why an ASI might destroy us are strong, too.
Surely
no research organization would design a malicious, Terminator-style ASI
hellbent on destroying humanity, right? Unfortunately, that’s not the worry. If
we’re all wiped out by an ASI, it will almost certainly be by accident.
Because
ASIs’ cognitive architectures may be fundamentally different than ours, they
are perhaps the most unpredictable thing in our future. Consider those AIs
already beating humans at games: In 2018, one algorithm playing the Atari game
Q*bert won by exploiting a loophole “no human player … is believed to have ever
uncovered.” Another program became an expert at digital hide-and-seek thanks to
a strategy “researchers never saw … coming.”
If we
can’t anticipate what algorithms playing children’s games will do, how can we
be confident about the actions of a machine with problem-solving skills far
above humanity’s? What if we program an ASI to establish world peace and it
hacks government systems to launch every nuclear weapon on the planet —
reasoning that if no human exists, there can be no more war? Yes, we could
program it explicitly not to do that. But what about its Plan B?
Really,
there are innumerable ways an ASI might “solve” global problems
that would have catastrophically bad consequences. For any given set of restrictions
on the ASI’s behavior, no matter how exhaustive, clever theorists using their
merely “human-level” intelligence can often find ways in which things could go very
wrong; you can bet an ASI could think of more.
And as
for shutting down a destructive ASI — a sufficiently intelligent system should
quickly recognize that one way to never achieve the goals it has been assigned
is to stop existing. Logic dictates that it try everything it can to keep us
from unplugging it.
It’s
unclear whether humanity will ever be prepared for superintelligence, but we’re
certainly not ready now. With all our global instability and still-nascent
grasp on tech, adding in ASI would be lighting a match next to a fireworks
factory. Research on artificial intelligence must slow down, or even pause. And
if researchers won’t make this decision, governments should make it for them.
Some of
these researchers have explicitly dismissed worries that advanced artificial
intelligence could be dangerous. And they might be right. It might turn out
that any caution is just “talking moonshine,” and that ASI is totally benign —
or even entirely impossible. After all, I can’t predict the future.
The
problem is: Neither can they.
How AI
could accidentally extinguish humankind. By Émile P. Torres. The Washington Post, August 31, 2022.