“It’s tough to make predictions, especially about the future” is a line sometimes attributed to Yogi Berra and sometimes to the Danish physicist Niels Bohr. Whoever said it first, or better, I’ll give it to the Dane. Bohr revealed the randomness and unknowability that define that tiniest of time horizons, the movement of electrons, and his legacy is an exemplar of the paradox of progress. After helping the Allies win a nuclear arms race against the Nazis, Bohr tried and failed to stop another between Washington and Moscow, only to die of heart failure weeks after the end of the Cuban Missile Crisis, in somber accord with another Berra observation, this one uncontested, that the future ain’t what it used to be.
The tragic ironies and grim specters of Bohr’s generation haunt but don’t much disturb What We Owe the Future, William MacAskill’s contribution to longtermism—a small, well-funded, and stealthily influential movement based in his home institution, Oxford University. Longtermism, in its most distilled form, posits that one’s highest ethical duty in the present is to increase the odds, however slightly, of humanity’s long-term survival and the colonization of the Virgo supercluster of galaxies by our distant descendants. Since the distribution of intellectual and material resources is zero-sum, this requires making sacrifices in the present. These sacrifices are justified, argue the longtermists, because technological advance will in time produce such literally astronomical amounts of future “value” that the travails and sufferings of today’s meat-puppet humanity pale to near-insignificance. This overwhelming potential, writes MacAskill, creates an ethical mandate to abandon “the tyranny of the present over the future.”
The number of potential people who fill this theoretical future can’t be counted without recourse to the exponential notation that clutters so much longtermist literature. MacAskill dispenses with it in favor of three pages filled with the unisex bathroom symbol, each representing 10 billion people, to suggest the scope of the moral responsibility they impose upon us. “The future could be very big,” according to MacAskill. “It could also be very good—or very bad.” The good version, he argues, requires us to maintain and accelerate economic growth and technological progress, even at great cost, to facilitate the emergence of artificial intelligence that can, in turn, scale growth exponentially to fuel cosmic conquest by hyperintelligent beings who will possess only a remote ancestral relationship to Homo sapiens.
With its blend of wild-eyed techno-optimism and utopianism, longtermism has emerged as the parlor philosophy of choice among the Silicon Valley jet-pack set. MacAskill’s colleague, the Swedish-born philosopher Nick Bostrom, drew attention in the early 2000s with his work developing the concept of “existential risk”—a philosophical frame that assesses the value and significance of events by how likely they are to secure or threaten humanity’s continued existence. In recent years, Peter Thiel, Elon Musk, and Skype co-founder Jaan Tallinn have all expressed interest in his ideas. In 2012, Tallinn co-founded the Centre for the Study of Existential Risk at Cambridge University, and he has been a major donor to the Future of Humanity Institute at Oxford, which Bostrom founded and where he has pursued a longtermist research agenda.
MacAskill’s work has also found praise in Silicon Valley, with Elon Musk describing What We Owe the Future, in a typically self-regarding endorsement, as “a close match for my philosophy.” Of the many marquee-name blurbs and affirmations showered on the book, this one by the world’s richest person comes closest to suggesting the real significance of longtermism’s creeping influence. Because, for all its sci-fi flavorings, what demands attention in longtermism is not any compelling case for prioritizing the value of distant exoplanets and centuries, but the billionaire politics it seeks to impose on the rest of us.
Readers who know MacAskill as the boyish face of the Effective Altruism movement, or EA, may be surprised to find him behind the controls of an observatory-size telescope, training logic games on deep space-time. His 2015 book, Doing Good Better, set out a blueprint for directing charity toward projects figured to bring the largest, most immediate, and most measurable benefits. An effective altruist might insist, for instance, that, on per-dollar impact metrics, it is “better” to spend $100 on mosquito nets in Africa than it is to spend $1,000 on just about anything else.
As EA grew into a philanthropic and social-entrepreneurial juggernaut over the last decade—with donors committing tens of billions through affiliated nonprofits—MacAskill emerged as a recurring character at TED talks and a brand-name global “thought leader.” His profile allowed him to remain largely silent in the face of critics who alleged that EA suffered from a status quo bias that valorized EA-approved philanthropies and their wealthiest backers. The philosopher Alice Crary captured the thrust of the criticisms by describing EA as a philosophy designed for “perpetuating the institutions that reliably produce the ills they address,” since in EA, actions that alleviate present suffering are supposedly best, while dwelling on the systemic causes of that suffering is inefficient—a waste of time.
Despite EA’s fixation on the present, it has always shared philosophical DNA with longtermism. Bostrom developed longtermism using many of the same conceptual tools that Toby Ord and MacAskill drew on with EA, most crucially utilitarianism (according to which the “rightness” of an action is determined entirely by whether its outcomes are “good” according to one’s values and priorities) and “expected value” theory (which holds that the best actions are those with the highest expected payoff). In 2012, when MacAskill became the president of the Centre for Effective Altruism, housed in the same offices as Bostrom’s Future of Humanity Institute, the two men already shared a philosophical language, if not yet areas of interest.
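For readers allergic to the jargon: expected value is simply the probability-weighted sum of an action’s possible payoffs. In standard notation (mine, not the book’s),

$$\mathrm{EV}(a) = \sum_i p_i \, v_i,$$

where $p_i$ is the probability of outcome $i$ and $v_i$ the value assigned to it. The move that powers so much longtermist argument is letting a few of the $v_i$ grow astronomically large, so that even outcomes with vanishingly small probabilities come to dominate the sum.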
What We Owe the Future hews closely to other works of longtermism in its assessment of threats to humanity and technological progress. These range from threats posing high extinction risks that warrant immediate attention and resources (unleashed AI, biological superweapons) to those that MacAskill suggests would be major bummers and bumps on the road, but do not necessarily spell doom for civilization or the longtermist project (climate change, nuclear war). Like his peers, MacAskill identifies “technological stagnation” as a terrifying prospect. Since any extended slowdown in economic growth or technological progress would delay or possibly preclude the arrival of a growth-powered digital guardianship of the galaxy, longtermists consider stagnation a fate not much better than extinction. Even if humanity were to establish a peaceful and ecologically balanced paradise on Earth lasting hundreds of millions of years, longtermism would judge it a catastrophe on par with nuclear war, since any civilization not obsessed with technological progress would fail to build the great digital Valhalla. (Bostrom has described “dysgenic pressures,” or a lowering of intelligence in the population, as another disaster in the same league as nuclear holocaust, for similar reasons.)
To identify and measure actions we might take to forestall stagnation, MacAskill proposes three metrics: significance (the size of an action’s impact on the future), persistence (the duration of its impact), and contingency (the degree to which the impact depends on the action). Thus equipped, MacAskill argues, we can shape the molten glass of history before it “becomes rigid, and further change is impossible without remelting.” This shaping is accomplished through the “entrenchment of values.” “Once a value system has become sufficiently powerful, it can stay that way by suppressing the competition,” he writes. “Predominant cultures in society tend to entrench themselves.”
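Read roughly (the gloss is mine; MacAskill does not reduce the framework to a formula in the passages quoted here), the three metrics combine multiplicatively:

$$\text{long-run value of an action} \approx \text{significance} \times \text{persistence} \times \text{contingency},$$

so an action scores highly only if it changes a great deal, for a very long time, in a way that would not have happened anyway.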
What We Owe the Future doesn’t have much to say, however, about which or whose values we should be busy entrenching, or how. Longtermists, MacAskill proposes, “should focus on promoting more abstract or general moral principles or, when promoting particular moral actions, tie them into a more general worldview.” They must also remember that values can be “benevolent or sadistic, exploratory or rigid.” MacAskill himself is a big fan of the Golden Rule. Promoting it, he writes, will “have robustly positive effects into the indefinite future.”
Stripped of their philosophical ornamentations, these are platitudes. As advice for influencing events hundreds and thousands of years in the future, they are absurd platitudes. Banalities don’t cease to be banalities when you attach enormous strings of zeros to them. Longtermists have a remarkably weak appreciation for the zigzags and convulsions of recent history, which MacAskill sometimes blunders into when trying to make or ballast a point. Pol Pot, for example, appears as a simple demonstration of what can happen when the right values are not properly entrenched. “The rise of Nazism and Stalinism,” meanwhile, “shows how easy it is for moral regress to occur, including on the issue of free labor.” How would the longtermist project prevent or withstand such regressions in centuries to come? MacAskill doesn’t say. Some longtermists have coined a term, “moral cluelessness,” to describe the uncertainty inherent in trying to steer the future. MacAskill’s book would have benefited from greater use of it.
There are good and growing reasons to be wary of longtermism. MacAskill himself once understood them well. Longtermism, he writes, initially “left me cold.” After all, “there are real problems in the world facing real people.” Within a few years, however, he had come around to the campaign against the “tyranny of the present.” In 2014, his Centre for Effective Altruism launched the longtermist Global Priorities Project. In 2018, he announced a second longtermist incubator within his center, the Forethought Foundation for Global Priorities Research. In 2021, the Centre for Effective Altruism was awarded a $7.5 million grant for “research on humanity’s long-run future” from Open Philanthropy, an EA-aligned outfit with funding from Facebook cofounder Dustin Moskovitz.
As the longtermist infrastructure took shape, Effective Altruists started writing a lot more about nudging the odds in favor of the hyper-remote digitalized future, and a lot less about suffering and doing good in the here and now. At latest count, EA donors are offering up to five prizes of $100,000 for new blogs on EA and longtermist themes, and have provided funding for “Future Perfect” at Vox, an EA vertical that is increasingly longtermist-curious. In 2020, MacAskill’s EA co-founder Toby Ord published a longtermist treatise, The Precipice. Last summer, MacAskill co-wrote a paper titled “The Case for Strong Longtermism” with his EA colleague Hilary Greaves. What We Owe the Future is the expanded, public-facing version of that paper.
The modifier in the title of the 2021 paper—“strong longtermism”—is significant. It positioned MacAskill within the longtermist camp that sees protecting the distant future as “the” key moral priority of our time, rather than one of many. It’s noteworthy, too, that MacAskill relegates this distinction to an appendix in his new book, calling longtermism “a” moral priority in the main body. “Strong” longtermism is a tougher sell than EA, or caring about the future as most people understand it. In EA forums, MacAskill has been explicit that “strong longtermism” should be downplayed for marketing reasons, and a soft-serve version offered up instead as a kind of gateway drug.
Midway through What We Owe the Future, MacAskill acknowledges that all theories of population ethics have “some unintuitive or unappealing implications.” But he does not go into quite the detail that other longtermist thinkers (cited throughout the book’s body and footnotes) have in their own publications and interviews. Bostrom has concluded that, given a 1 percent chance of quadrillions of people existing in the theoretical future, “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth one hundred billion times as much as a billion human lives.” Nick Beckstead, in an influential 2013 longtermist dissertation, discusses how this arithmetic calls for reexamining “ordinary enlightened humanitarian standards.” If future beings contain exponentially more “value” than living ones, reasons Beckstead, and if rich countries drive the innovation needed to bring about their existence, “it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” Hilary Greaves has likewise acknowledged that longtermist logic clearly, if sometimes unfortunately, points away from things that once seemed ethically advisable, such as “transferring resources from the affluent western world to the global poor.”
Then there is the observation of economist Tyler Cowen that utilitarianism seems to “support the transfer of resources from the poor to the rich … if we have a deep concern for the distant future.” In a more recent paper published by the Global Priorities Institute, Oxford philosopher Andreas Mogensen writes that utilitarianism “seems to imply that any obligation to help people who are currently badly off is trumped by obligations to undertake actions targeted at improving the value of the long-term future.”
Quotes like these shed light on why Émile Torres, a philosopher, science writer, and onetime longtermist turned apostate, describes longtermism as “an immensely dangerous ideology [that] goes far beyond a simple shift away from myopic, short-term thinking.” By prioritizing the creation of “value” in the far future, Torres notes, longtermists are building a philosophical foundation for rationalizing far worse things than indifference. He sees parallels between longtermism and apocalyptic religion, observing that longtermists believe that “we stand at the most pivotal moment in human history that will determine whether the future is filled with near-infinite amounts of goodness or an empty vacuum of unforgivable moral ruination.”
MacAskill relied on dozens of assistants and issue experts to research and write his summaries of the threats posed by AI, bio- and nuclear weapons, climate change, and technological stagnation. Managing them all, he writes, required three “chiefs of staff.” Remarkably, none of these people seem to have encountered the growing literatures on Earth’s regulatory systems, the limits beyond which they break, and their incompatibility with the endless growth upon which the longtermist project is predicated. That accelerating rates of economic growth requires a transformation of all of Earth into a toxic and ultimately uninhabitable landscape of extraction sites, processing facilities, and waste pits is not acknowledged or entertained. As does Bostrom, MacAskill writes like a stranger to the idea that the world is not a math game, but a place bound by actual biophysical math.
Among the proliferating studies that illustrate the scope of MacAskill’s delusion is a multipart investigation commissioned by the government of Finland. The author, geologist Simon Michaux, found that powering growth in a decarbonized Europe (solarizing homes, expanding wind farms, building lithium battery–powered cars and machines) would require far greater amounts of minerals than are currently known to exist in global reserves. In a recent work, Michaux estimates that the transition would need 940 million tons of nickel, an amount that would take roughly 400 years to produce at current mining capacity. Much of that nickel would need to be replaced every several years, along with similarly huge amounts of copper, cobalt, and ultra-scarce lithium. Even if non-lithium battery chemistries are invented, Michaux notes, supplying the raw materials would still mean an endless mining scramble to keep pace with rising demand.
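The headline figure is simple division. Assuming annual nickel output on the order of 2.4 million tons, the capacity implied by Michaux’s 400-year estimate (and roughly where global mine production currently sits):

$$\frac{940 \text{ million tons required}}{\approx 2.4 \text{ million tons per year}} \approx 400 \text{ years}.$$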
The impossibility of endless growth has been accepted by systems engineers, geologists, geophysicists, and climate scientists who have closely examined our collision course with iron physical laws such as entropy, the tendency of the energy in a closed system to degrade into unusable forms. In the words of a 2019 open letter signed by 15,000 scientists, avoiding the collapse and extinction scenarios that longtermists fear will require shifting away from “GDP growth and the pursuit of affluence toward sustaining ecosystems and improving human well-being.” A program that abstracts civilization so completely from these systems is not a serious one for thinking about the year 2122, never mind 3122. Longtermism is less a sober plan for securing the future than a fanciful scheme for locking in an eternal present that can never be.
What We Owe the Future rejects the scientific consensus that we have roughly a decade left to initiate the changes needed to preserve a living planet capable of supporting a complex civilization. Even MacAskill’s “worst-case” climate scenario—the burning of 300 years’ worth of fossil fuels, resulting in three trillion tons of emitted carbon—is a survivable scenario with a sunny side. He surmises the resulting warming of 7 to 9.5 degrees Celsius would be bad, but finds it “hard to see how even this could lead directly to civilizational collapse.” After all, “richer countries would be able to adapt, and temperate regions would emerge relatively unscathed.”
On the downside, opines MacAskill, such warming “would be bad for agriculture in the tropics,” where “outdoor labor would become increasingly difficult” because of the heat. Even 15 degrees of warming, he chirps, would not “pass lethal limits for crops in most regions.” And what if these temperatures (and global drought, which goes unmentioned) did kill crops? MacAskill does not consider that mass starvation could lead to violence and war, noting simply that “most conflict researchers” rate climate change “a small driver relative to other factors, such as state capacity and economic growth.” To the extent MacAskill worries about burning through the remaining stores of fossil fuels, it is out of concern that none will be left for a post-collapse society to “rebuild” another coal-burning civilization from the ruins of the old.
This nonchalance extends to the book’s Strangelovean sections on nuclear war. Based on studies of the temperature impacts of nuclear winter, MacAskill reassures us that global thermonuclear holocaust would be “bad but manageable” for coastal South America and Australia, where summers would be “about five degrees cooler than usual.” Granted, the sun would be dimmed by dense clouds of soot and ash, but MacAskill suggests looking to “forms of food production not dependent on sunlight.”
By the last chapter of What We Owe the Future, the reader has learned that the future is an endless expanse of expected value and must be protected at all costs; that nuclear war and climate change are bad (but probably not “existential-risk” bad); and that economic growth must be fueled until it gains enough speed to take us beyond distant stars, where it will ultimately merge, along with fleshy humanity itself, into the Singularity that is our cosmic destiny. The title’s promise, however, remains unfulfilled. We owe the future—or at least, the Muskean version of it. But what, exactly, do we owe it? What do we do?
Here MacAskill—the co-founder of an EA-oriented career-choice nonprofit—shifts into the familiar mode of guidance counselor. Like Effective Altruism, his longtermism comes topped with a healthy dollop of careerism. “Take a bet,” he writes, “on a longer-term path that could go really well (seeking upsides), usually by building the career capital that will most accelerate you in it.”
Are you a budding longtermist interested in reducing the existential risks posed by pandemics? MacAskill suggests seeking opportunities with the Bipartisan Commission on Biodefense, the Johns Hopkins Center for Health Security, and organizations “promoting innovation to produce cheap and fast universal diagnostics and extremely reliable personal protective equipment.” Interested in energy and climate? Give 10 percent of your income to the Clean Air Task Force, the Founders Pledge Climate Fund, and other EA-approved nonprofits that make “far-reaching political change much more likely.”
MacAskill recommends that longtermists vote, and talk to family and friends “about important ideas, like better values or issues around war, pandemics, or AI”—but never “promote these ideas aggressively or in a way that might alienate those you love.” He also recommends having children, noting, “Although your offspring will produce carbon emissions, they will also do lots of good things, such as contributing to society, innovating, and advocating for political change.” Just not too aggressively, one hopes.
One of the few pleasures of MacAskill’s book is imagining how the author’s purported heroes would react to it. He repeatedly lauds the early abolitionists—they aced the significance, persistence, and contingency tests—and notes that he keeps an engraving of the Quaker abolitionist Benjamin Lay on his desk. But an eighteenth-century MacAskill would have been on the receiving end of one of Lay’s righteous diatribes after telling him to tone down the guerrilla theater, and instead seek an internship with a think tank focused on developing reasonable alternatives to the trans-Atlantic slave economy.
The English suffragettes praised by MacAskill likewise wouldn’t have had much patience for his focus on the far future or his high-tea approach to influencing it. Emmeline Pankhurst’s Women’s Social and Political Union sent well-dressed women to shatter storefronts and paintings in London’s posh shopping districts, and responded to Parliament’s rejection of a 1913 franchise law with a yearlong arson campaign. “We had to discredit the Government … spoil English sports, hurt businesses, destroy valuable property,” and “upset the whole orderly conduct of life,” Pankhurst later wrote about the direct-action template she bequeathed to Greenpeace, ACT UP, and Extinction Rebellion, to name a few. Despite MacAskill’s praise for their predecessors, his logic requires that he view these groups as threats to the longtermist agenda, rather than as allies in securing any future worth having.
Around the time I finished MacAskill’s book in June, I heard from a friend who works with Indigenous groups in the Ecuadoran Amazon. He was exhausted from organizing weeks of nationwide protests against a spate of oil and mining projects newly greenlit by Ecuador’s government. If allowed to proceed, the projects would further poison rain forest communities’ water and food supplies, and accelerate the fraying of regional systems that help regulate planetary carbon and water cycles. Across the country, Indigenous communities and their allies were blocking roads and facing down the tear gas canisters and truncheon blows of army and police units. Two protesters had been killed.
In quieter times, I might have mentioned MacAskill’s book, but my friend was already bloodied from fighting a real-world manifestation of its thesis. His allies in this fight extended beyond the Indigenous groups rolling boulders onto highways to everyone who rejects the demand that we hand over what remains of the biosphere and cut off alternative civilizational visions and pathways, all in sacrifice to the hypothetical “value” contained in the space castles and disembodied consciousnesses of tech-billionaire dreams. This rejection is not a failure of the self-evident ethical duty to think about future generations. It represents another version of that duty, at once more imaginative and more realistic, grounded in an understanding that the future is now.