We’re all gonna die! How the idea of human extinction has reshaped our world

The topic of human extinction — its possibility, its likelihood, even its inevitability — is everywhere right now. Major media outlets publish articles and broadcast interviews on the subject, and prominent political figures in several countries are beginning to take the idea seriously. Some environmental activists are warning that climate change could threaten humanity’s survival over the coming centuries, while “AI doomers” are screaming that the creation of artificial general intelligence, or AGI, in the near future could lead to the death of literally everyone on Earth.

Having studied the history of thinking about human extinction, I can tell you that this is a unique moment in our history. Never before has the idea of human extinction been as widely discussed, debated and fretted over as it is right now. This peculiarity is underlined by the fact that only about two centuries ago, nearly everyone in the Western world would have agreed that human extinction is impossible. It isn’t how our story ends — because it isn’t how our story could end. There is simply no possibility of our species dying out the way the dodo and the dinosaurs did, of disappearing entirely from the universe. Humanity is fundamentally indestructible, these people would have said, a pervasive assumption that dates back to the ancient Greek philosophers.

So, what changed from then to now? How did this idea go from virtually unthinkable two centuries ago to a topic that people can’t stop talking about today? The answer: a series of earth-shattering epiphanies that unfolded in abrupt shifts beginning in the mid-19th century. With each shift came a completely new understanding of our existential precarity in the universe — a novel conception of our vulnerability to annihilation — and in every case these shifts were deeply startling and troubling. A close reading of Western history reveals four major ruptures in our thinking about extinction, three of which happened over the past 80 years. Together, they tell a harrowing story of profound psycho-cultural trauma, in which the once-ubiquitous assumption of our collective indestructibility has been undermined and replaced by the now-widespread belief that we stand inches from the precipice.

In fact, there were some among the ancients who believed that human extinction could or would occur, while simultaneously affirming that our species is indestructible. How could that be? Doesn’t extinction imply that we can be destroyed? Not necessarily. Consider the ancient Greek philosopher Xenophanes, born around 570 B.C. A strikingly original thinker, Xenophanes knew that fossilized marine organisms had been found on Mediterranean islands like Malta, south of Italy, and Paros, near Athens. He thus proposed that Earth oscillates between two phases: wetness and dryness. When dryness dominates, life abounds in all its colorful magnificence, but when this gives way to wetness, Earth’s surface is flooded over, which explains how the remains of water-dwelling critters ended up on what is now dry land.

Xenophanes believed that the phase of wetness destroys all human life on Earth. This had, it seems, happened an infinite number of times in the past, and would happen again an infinite number of times in the future. We will someday go extinct; that is our inescapable fate. Yet, by virtue of the cosmic order of things, our species will always re-emerge after its extinction, which is only ever a temporary state of affairs. In this sense, we are both indestructible and destined to die out, engaged in an eternal dance between nothingness and being, being and nothingness, to the rising and falling rhythms of these endless cosmological cycles.

Xenophanes wasn’t the only ancient Greek to hold such a view: subsequent thinkers proposed similar theories, including the vegetarian Empedocles and the Stoics. According to them, our extinction is inevitable but never irreversible: We will someday disappear entirely, but will always reappear. This points to a useful distinction between two types of extinction, which we could call the “weak” and the “strong” senses. The former, temporary extinction, first appeared in classical antiquity; the latter, permanent extinction, is how most of us think of human extinction today: If our species were to die out next year, we would naturally assume that this brings the whole human story to a complete and final end. Extinction would be the last sentence in the collective autobiography of our species, a terminal sigh before fading into eternal oblivion. Yet this strong sense of extinction didn’t make its debut in the theater of Western thought until the 19th century, due to two major developments.

Before looking at these developments, though, it’s worth taking a moment to appreciate what happened in between ancient times and the 1800s. The most significant event was the rise of Christianity, which made even the weak sense of “extinction” virtually unthinkable.

For one thing, Christians believed — and still do — that each human being is immortal. If each human being is immortal, and humanity is just the sum total of all humans, then humanity itself must be immortal. Another reason concerns eschatology: the study of “last things,” or the end of the world. According to the Christian account, humanity plays an integral role in God’s grand plan for the cosmos. Since this plan cannot unfold without us, our nonexistence — to true believers — would have been inconceivable. How could the great battle between Good and Evil ever resolve? How could the scales of cosmic justice ever be balanced? We all know that there’s no justice in this world: the righteous are often dealt a bad hand, and the wicked frequently prosper. Without God’s children in the picture, how could these wrongs be righted, and these rights be rewarded? Extinction simply cannot be the way our story ends.

Sure, according to traditional Christian doctrine, the apocalypse is coming, but humanity will endure the cataclysmic spasms at the end of time. What Christians anticipate is not termination but transformation: a new, supernaturally refurbished world in which believers enjoy everlasting life with God in heaven, while the unrepentant suffer forever in hell with Satan.

Christianity became widespread in the Roman Empire by the 4th or 5th century after Christ, shaping nearly every aspect of the Western worldview for the next 1,500 years. It was during this period that just about everyone would have found our extinction unthinkable. Asking “Can humanity go extinct?” would have been like asking whether circles can have corners. Obviously not, as anyone who understands the concept of a circle will agree. Similarly, the idea of humanity contains within it the idea of immortality, in the Christian view. Since extinction can only happen to mortal kinds of things, “human extinction” would have struck most people as oxymoronic: a contradiction in terms.

Everything changed in the 19th century. A tectonic shift in cultural attitudes toward religion began to unfold, especially among the educated classes, robbing Christianity of the monopoly it once enjoyed. It was in that century that Karl Marx called religion the “opium of the people” and Friedrich Nietzsche famously declared that “God is dead” because “we have killed him!” Whatever triggered this wave of secularization, the effect was to fling open the door to thinking, for the first time since the ancients, that human extinction might be possible. Even more, it made our extinction in the strong sense conceivable, marking a radical break from everything that came before.

This is where the story gets more complicated, because one certainly might believe that human extinction could happen in principle, while also believing that it could never actually occur. By analogy, I think it’s possible for unicorns to exist. There’s no law of biology that prevents horses with spiraling horns from having evolved. But do I think I’ll ever actually ride a unicorn? No, of course not. It just so happens that the invisible hand of natural selection never fashioned any horses with horns, and hence the probability of riding one through an enchanted forest someday is approximately zero. Similarly, maybe there’s no law of nature or principle of reality that guarantees humanity’s survival, but we just so happen to occupy a world in which human extinction isn’t something that will ever occur. If one lives in a world without cliffs, one needn’t worry about falling into canyons.

As luck would have it, at exactly the same time that our extinction was becoming conceivable, scientists discovered the first credible threat to our collective existence. Before this, many people had speculated about worldwide disasters. Some of these get extra points for creativity. Edgar Allan Poe, for example, wrote a short story about a comet passing close to Earth, extracting all the nitrogen from our atmosphere as it whooshed past. This left behind an atmosphere of nearly pure oxygen, which, while not flammable itself, makes everything else burn ferociously. In the story, the planet is engulfed in a giant conflagration that destroys humanity. Others worried about sunspots blotting out the Sun, the Moon crashing into Earth, worldwide floods, global pandemics that sweep across the continents, and even comets directly colliding with our planet. But none of these were widely accepted: they were hardly more than hand-waving, with scant supporting evidence, and however imaginative, few people took them seriously.

All that changed in the 1850s, when scientists uncovered a fundamental law of physics with terrifying implications. It was the second law of thermodynamics, and people immediately realized that, in a world governed by the second law, our Sun will gradually burn out, Earth will become an icy wasteland, and the universe as a whole will inexorably sink into a frozen pond of thermodynamic equilibrium—a lifeless state of eternal quietude. In the end, nothing will remain, not even a trace that our species once existed. Everything will be erased as if it had never been, though scientists predicted that wouldn’t happen for tens of millions of years. (As someone supposedly once asked a professor, “Excuse me, but when did you say that the universe would come to an end?” “In about four billion years,” the professor replied. “Thank God,” the first person exclaimed. “I thought you said four million!”)
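For readers who want the law itself and not just its reputation, a standard modern formulation (not, to be clear, how the Victorians would have written it) fits on a single line:

$$ \Delta S_{\text{universe}} \geq 0 $$

The total entropy $S$ of an isolated system (disorder, roughly speaking) can increase or stay the same, but never decrease. Run that rule forward long enough and everything settles into the state of maximum entropy: thermodynamic equilibrium, the changeless condition that came to be called heat death.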

This new idea — the “heat death” of the universe — spread like wildfire. Over the next few decades, scientists reported that our world was careening toward a “dead stagnant state.” Philosophers lamented, as Bertrand Russell wrote, that “all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man’s achievement must inevitably be buried beneath the debris of a universe in ruins.” Science fiction writers like H.G. Wells introduced the general public to the eschatology of thermodynamics. In his 1895 novel “The Time Machine,” for example, the lonely protagonist travels into the far future to discover a cold, dark planet stripped of the biosphere it once sustained.

It is difficult to overstate how significant the double whammy of secularization and the second law was. Not only was our extinction now seen by many as possible; physics informed us that it is inevitable in the long run. Death — universal death, as Russell put it — is a bullet our species cannot outrun. Eventually, the forces of humanity will be crushed by the merciless dictatorship of entropy. Put another way, the second law stamped an expiration date on humanity’s forehead, and while scientists today still believe this is our ultimate and inescapable destiny, they’ve pushed the date back to many trillions of years from now. There is no need to panic — yet.

This was just the first of many traumas that punctuated the following centuries. The next one came in the mid-1950s, as the Cold War was in full swing. Surprisingly, the 1945 bombings of Hiroshima and Nagasaki, which ended World War II, didn’t trigger much anxiety about extinction. Almost no one linked that possibility to the massive scientific breakthrough of splitting the atom. At first, people saw atomic bombs as bigger hammers, so to speak, that states like the U.S. and Soviet Union could use to smash each other to bits. They might reduce modern civilization to smoldering ashes, but humanity itself would surely persist amid the ruined cities. This was the general view during the decade after World War II.

Two events in the 1950s changed people’s minds. The first was the invention of thermonuclear weapons, also called “hydrogen bombs” or “H-bombs.” These weapons are far more powerful than the “A-bombs” dropped on Japan. If the biggest thermonuclear weapon ever detonated — the Soviet Union’s Tsar Bomba — corresponded to the height of Mount Everest, the atomic device that flattened Hiroshima would stand just under nine feet tall. Yes, I triple-checked the math: That’s the difference in destructive power between the “A” and the “H.”
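For anyone who wants to check it a fourth time, here is a quick back-of-the-envelope sketch in Python. The yields are the commonly cited approximate figures (about 50 megatons for Tsar Bomba, about 15 kilotons for the Hiroshima bomb), and Everest’s height is roughly 29,032 feet:

```python
# Back-of-the-envelope check of the Everest analogy.
# Yields below are the commonly cited approximate figures, not exact values.
tsar_bomba_kt = 50_000   # Tsar Bomba: ~50 megatons, expressed in kilotons
hiroshima_kt = 15        # Hiroshima bomb ("Little Boy"): ~15 kilotons
everest_ft = 29_032      # height of Mount Everest, in feet

yield_ratio = tsar_bomba_kt / hiroshima_kt    # ~3,333x more explosive energy
scaled_height_ft = everest_ft / yield_ratio   # the Hiroshima bomb on the Everest scale

print(f"Yield ratio: {yield_ratio:,.0f}x")            # Yield ratio: 3,333x
print(f"Scaled height: {scaled_height_ft:.1f} feet")  # Scaled height: 8.7 feet
```

That works out to about 8.7 feet: just under nine, as advertised.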

Second, the U.S. conducted a series of thermonuclear tests in the Marshall Islands, about 3,000 miles north of New Zealand. The first of these, code-named Castle Bravo, produced an explosive yield 2.5 times larger than expected. An enormous ball of ferocious heat burst through the atmosphere, catapulting radioactive particles around the entire globe. Such particles were detected in North America, India, Europe and Australia. The implications of this nuclear mishap and its planetary effects were immediately obvious: If a single thermonuclear explosion could scatter radioactivity so far and wide, then a thermonuclear war could potentially blanket Earth with deadly amounts of DNA-mutating radiation.

Concern shifted almost overnight from a nuclear world war destroying civilization to the outright annihilation of humanity. Speaking to a radio audience of around eight million people, the above-mentioned Bertrand Russell declared, two days before Christmas in 1954, that “if many hydrogen bombs are used there will be universal death — sudden only for a fortunate minority, but for the majority a slow torture of disease and disintegration.” The following year, prominent physicists like Albert Einstein began to alert the public that species self-annihilation was now feasible, while military officials warned that “a world war in this day and age would be general suicide.” In a 1956 book, the German philosopher Günther Anders described humanity as having become “killable,” and in 1959 theater critic Kenneth Tynan coined the word “omnicide” to denote “the murder of everyone.” Two years later, John F. Kennedy told the U.N. General Assembly that “today, every inhabitant of this planet must contemplate the day when this planet may no longer be habitable.”

It is, once again, impossible to overstate the significance of this transformation. Before Castle Bravo, almost no one worried about near-term self-extinction. But after that, this terrifying, dreadful and vertiginous idea was everywhere. Suddenly, people understood the threat environment to include two doomsday scenarios based on solid science: the heat death of the universe and global thermonuclear fallout. One was natural, the other anthropogenic. One would inevitably kill us in the far-distant future, the other could do us in tomorrow. One would take us out with a whimper, the other with a mighty bang.

Yet the bad news about our existential precarity only got worse over the following decades. As the secularization trend accelerated during the 1960s, spreading from the educated classes to the general public, a constellation of new worries popped up alongside thermonuclear fallout. Some reputable scientists warned about environmentally ubiquitous synthetic chemicals — insecticides like DDT — which Rachel Carson, the founder of modern environmentalism, argued could render Earth “unfit for all life.” Others sounded the alarm about global overpopulation, resurrecting Malthusian fears that the human population would outgrow its food supply. Still others fretted about genetically modified pathogens, ozone depletion, advanced nanotechnology, “ultra-intelligent” machines and the newly discovered “nuclear winter” scenario — a different way that nukes could kill us, by flooding the upper atmosphere with sunlight-blocking soot.

This was the beginning of the Age of Anthropogenic Apocalypse, we could say, since all these dangers were the direct result of human activities. The threat environment was quickly becoming a dense obstacle course of human-made death traps. One wrong move, and the whole species might perish.

For nearly the entire Cold War period, however, no one seriously considered the possibility that natural phenomena could cause our extinction in the near term. The second law would get us eventually, but the prevailing view among reputable scientists was that we appear to live on a safe planet in a safe neighborhood of the universe. Perhaps it sounds implausible that scientists believed that, but they did. Before the 1850s, lots of people speculated about natural global catastrophes, though none were taken seriously by more than a handful of doomsayers with overactive imaginations. Then, around the time the second law of thermodynamics was being discovered, the scientific community also happened to embrace the peculiar belief that natural disasters — earthquakes, floods, tsunamis, volcanic eruptions and so on — were always localized events that, as such, never affected more than a limited region of Earth. Planetary-scale catastrophes just don’t happen. Haven’t in the past, won’t in the future.

That became scientific orthodoxy for almost 150 years, spanning much of the 20th century, including nearly the entire Cold War period. It was reassuring. It was comforting. If humanity was now on suicide watch — or, more accurately, omnicide watch — at least we didn’t have to worry about nature killing us. The universe is on our side, even if our own actions are destroying our precious planet.

But this alliance with nature was not destined to last. In 1980, a team of scientists suggested that a giant asteroid had wiped out the dinosaurs 66 million years ago. They based this on evidence found at various sites around the world: a thin sediment layer rich in iridium, a chemical element rare in Earth’s crust but abundant in asteroids, suggesting an extraterrestrial origin. Their hypothesis was a bombshell, calling into question beliefs that had dominated the earth sciences for generations. It was, in one journalist’s words, “as explosive for science as an impact would have been for Earth.”

While some scientists quickly accepted the “Alvarez hypothesis,” as it came to be called, others dismissed it as “codswallop.” Paleontologists, in particular, were extremely skeptical. They had some good reasons, too: if a huge rock had fallen from the heavens and collided with the Earth, where was the crater? This was a burning question throughout the 1980s, when debate about the hypothesis raged. Without the “crater of doom,” a crucial piece of the puzzle was conspicuously missing.


Then, 10 years after the hypothesis was proposed, a graduate student discovered the smoking gun. In the Yucatán Peninsula, near the Mexican town of Chicxulub, he uncovered what everyone was looking for: an enormous underground crater dating back exactly 66 million years. Almost overnight, the entire scientific community, including the paleontologists, was compelled to agree that, in fact, natural global catastrophes do happen. Mass extinctions have occurred. Giant asteroids can smash into Earth.

The implications were profound, since if such catastrophes are facts about the past, they are genuine possibilities in the future. As NASA scientist David Morrison wrote in 1993, scientists could no longer “exclude the possibility of a large comet appearing at any time and dealing the Earth such a devastating blow — a blow that might lead to human extinction.” This idea quickly seeped into the public consciousness, in part because Hollywood took notice. The Alvarez hypothesis was referenced in the 1993 blockbuster “Jurassic Park,” and just five years after that the movies “Deep Impact” and “Armageddon,” one about a comet and the other about an asteroid, both on a collision course with Earth, were released.

All of that is the dizzying backdrop of profound psycho-cultural trauma that frames our current historical moment. It’s the winding path of paradigm shifts that leads to contemporary worries about humanity’s future — and about whether we will even have a future — although much of this dread right now, in the 2020s, is bound up with climate change and AGI, which few people worried much about even 20 or 30 years ago. Indeed, one key feature of the current existential mood, as I call it, is the terrifying realization that however perilous the 20th century was, the 21st century will be even more so. In other words, the worst is yet to come.

Can’t you feel this mood? I bet you can. It’s everywhere these days. Top AI scientists warn that AGI could destroy humanity, while climatologists scream that we’re hurtling toward an unprecedented environmental catastrophe. Influential scholars like Noam Chomsky say the risk of extinction is now “unprecedented in the history of Homo sapiens,” and futurists like Toby Ord put the probability of existential catastrophe this century at roughly one in six: the odds of losing a single round of Russian roulette. One survey of the American public found that four in 10 people believe there’s a 50% chance that climate change will wipe us out. Another reported that 55% are “somewhat” or “very worried” that AGI could “pose a threat to the existence of the human race.”

The prospect of extinction has never been so salient in any human society. Never has the idea of our annihilation been such a prominent feature of the cultural landscape. This is an extraordinary moment in history, the climax of a series of terrible traumas that have scarred the Western psyche over the past two centuries. Will this be our last century? Is the entire human story in danger of coming to a complete and final end? Are we, as Russell once suggested, writing the “prologue” or the “epilogue” of our collective autobiography?

Hidden behind the story of these traumas and the different “existential moods” that they have produced is a revolution that few people have noticed. It’s the third in a trilogy of revolutions, the first two of which are already well-known: the Copernican revolution, which removed humanity from the center of the universe, and the Darwinian revolution, which removed humanity from the center of creation. The final revolution arises from the admission that we are not, in fact, a permanent fixture of the universe. Humanity really could disappear — like the dodo and the dinosaurs — and the universe would carry on with hardly more than a shrug of indifference.

In an 1894 essay titled “The Extinction of Man,” H.G. Wells writes that “it is part of the excessive egotism of the human animal that the bare idea of its extinction seems incredible to it.” Over eons and epochs, ages and eras, species have come and gone on this little oasis in space. “Surely,” he continues, “it is not so unreasonable to ask why man should be an exception to the rule.”

With each revolution, humanity’s position in the universe has been demoted and decentered. We are, it turns out, not as special as we once believed. In every case, the target of these revolutions was the persistent belief that our species is the center of everything, an idea called anthropocentrism. Whereas Copernicus mortally wounded a sense of cosmic anthropocentrism, Darwin’s theory demolished the biological anthropocentrism that remained. Each was a massive blow to the narcissism of our species.

The ultimate injury, though, is the horrifying recognition that human extinction is possible and could happen anytime. This undermined our sense of existential anthropocentrism: the conviction that “a world without us,” to quote Wells, is too intolerable a thought to take seriously.

Yet this is where we have ended up. It is our current existential mood: Humanity is not fundamentally indestructible, our extinction could occur in the near future and, in the end, it’s a fate we cannot escape. The question isn’t whether we’ll end up in the grave, buried above the dinosaurs and beneath whatever might come after us, but how we decide to spend the time we have until the curtains are drawn and the lights go out.
