
Safeguarding the future report

Existential risk summary



This is an executive summary of our investigation into safeguarding the future and catastrophic risks.

Read the full report

Please note this page was last updated in 2019. While our overall views remain unchanged, some details may be out of date.

The giving recommendations relevant to this research are listed in the Recommendations section at the end of this summary.

Homo sapiens have been on Earth for 200,000 years, but human civilization could, if things go well, survive and thrive for millions of years. This means that whatever you value, be it happiness, knowledge, creativity, or something else, there is much more of it to come. As long as we survive, humanity could flourish to a far greater extent than it does today: millions of future generations could live lives containing much more of what we value. Therefore, for members who value future generations, a top priority should be to safeguard the future of civilization.

The Problem: emerging man-made risks

This is an especially urgent time to focus on safeguarding the future. Homo sapiens have survived for 200,000 years without being killed off by natural risks such as asteroids and volcanoes, which is evidence that these pose a relatively small risk. The major risks we face today, however, are man-made, stemming from our increasing power to affect our material conditions.

The Industrial Revolution and man-made risk

From the dawn of humanity until around 1800, the pace of technological innovation across the globe was extremely slow, even as humanity moved from hunter-gatherer societies into agricultural and pre-industrial ones. This long stagnation ended abruptly with the Industrial Revolution, which took off in northern England around 1800: innovation, automation and living standards soared.

Figure 1. World GDP over the last two millennia

Source: Our World in Data, ‘Economic Growth’

Since the Industrial Revolution, we have gained the power to feed a growing population, to reduce child mortality, and to create technologies that allow us to travel and communicate across great distances.

However, our power to improve our material conditions has grown in step with our destructive power. According to work by Professor Ian Morris of Stanford, war-making capacity also exploded after the Industrial Revolution.

Figure 2. Trends in war-making capacity in the last 3,000 years


Source: Luke Muehlhauser, ‘How big a deal was the Industrial Revolution?’ using data adapted from Morris, The Measure of Civilization, Princeton University Press (2013)

The most dramatic shift in our destructive capacity came with the invention of nuclear weapons in 1945. This marked the dawn of a new epoch in which humanity, for the first time, potentially gained the ability to destroy itself. Developments in other areas may prove even more serious than nuclear weapons: biotechnology and AI will greatly improve living standards but, according to many experts working in those fields, also carry potentially serious downside risks. Similarly, the burning of fossil fuels drove the huge increases in welfare we have seen over the last 200 years, but it has pushed CO2 concentrations in the atmosphere to levels unprecedented in hundreds of thousands of years, increasing the risk of extreme climate change.

Overall, the picture for the 21st century is one of increasing prosperity and flourishing, but also one of increasing risk that threatens to undo all this progress.

Safeguarding the future is a highly neglected problem

Despite the unprecedented threat, global catastrophic risk reduction is highly neglected for several reasons. Future generations are the main beneficiaries of global catastrophic risk reduction, but they cannot vote, nor can they pay the current generation for protection. Global catastrophic risks are also global in scope, so no single nation enjoys all the benefits of reducing them.

Moreover, because the risks are unprecedented, growing over time, and relatively unlikely, they are not salient to the public or to political leaders. Consequently, leaders tend to pay insufficient attention to them.

Finally, due to the psychological bias of scope insensitivity, people are insensitive to the large numbers at stake in global catastrophes. Our emotional reaction on learning that a problem kills 1 million people is similar to our reaction on learning that it kills 100 million, yet these tragedies call for very different social responses. The implication for global catastrophic risk is clear: there are trillions of potential lives in the future, but people may not take adequate account of this when weighing the importance of reducing these risks.

For all these reasons, global efforts to safeguard the future have tended to be inadequate. For prospective donors, this means there is currently great potential to find “low-hanging fruit” in this cause area. Just as venture investors can make outsized returns in large, uncrowded markets, philanthropists can have outsized impact by working on large, uncrowded problems.

Overall risk this century

Estimating the overall level of global catastrophic risk this century is difficult, but the evidence, combined with expert surveys, suggests that the risk is plausibly greater than 1 in 100. Given the stakes involved, we owe it to future generations to reduce the risk significantly.

Outlining the major risks and promising solutions

Based on expert surveys and our own reading of the evidence, we believe that the greatest threats to the flourishing of future civilization stem from advances in biotechnology and advanced AI systems, with nuclear war and climate change also posing some risk.

Nuclear war

The discovery of nuclear weapons marked the dawn of a new epoch in which humankind may for the first time have gained the ability to destroy itself. The most concerning effect, first raised during the Cold War, is a potential nuclear winter in which the smoke from a nuclear war blocks out the Sun, disrupting agriculture for years. The potential severity of a nuclear winter is the subject of some controversy, but given the current split in expert opinion, it would be premature to rule it out.

As Figure 3 shows, global nuclear arsenals peaked in 1986 at around 64,000 warheads. While arsenals are significantly smaller today, the US and Russia each still have around 4,000 nuclear weapons, with around 1,400 of these strategically deployed (i.e. on ballistic missiles or at bomber bases).

Figure 3. Number of nuclear warheads held by the US and Russia


Source: Bulletin of the Atomic Scientists, Nuclear Notebook (2018)

Reducing the risk of nuclear war

There are a number of possible ways to reduce the risk of civilization-threatening nuclear winter.

  • Reduce the risk of conflict between major powers through diplomacy and other means.
  • Change elements of nuclear strategy, such as taking nuclear weapons off hair-trigger alert.
  • Reduce nuclear arsenals while maintaining the deterrence benefits of nuclear weapons. The US and Russian arsenals now far exceed what is needed for effective deterrence.
  • Fund research into scaling up the production of food not reliant on sunlight, since much of the damage of a nuclear war would stem from smoke blocking out the sun.

Engineered bioweapons

Developments in biotechnology promise to bring huge benefits to human health, helping to cure genetic disease and create new medicines. But they also carry major risks. Scientists have already demonstrated the ability to create enhanced pathogens, such as a form of bird flu potentially transmissible between mammals, as well as to create dangerous pathogens from scratch, such as horsepox, a virus similar to smallpox. Figure 4 shows that the cost of gene synthesis has fallen by many orders of magnitude in recent years (note that the y-axis is a logarithmic scale).

Figure 4. Cost of DNA sequencing, gene synthesis and oligo synthesis (oligos can be used to synthesize genes)


Source: Carlson, ‘On DNA and transistors’ (2016)

At present, the expertise and tacit knowledge required to exploit these advances to cause a catastrophic biological event remain substantial. The worry, however, is that as biotechnology becomes more capable and more widely accessible, scientists, governments or terrorists might be able, by accident or design, to create viruses or bacteria that could kill hundreds of millions of people. Such weapons would be much harder to control than nuclear weapons because the barriers to acquiring them are likely to be considerably lower.

Reducing the risk of engineered pandemics

Several approaches could reduce the risk from engineered pathogens.

  • Improve capacity for disease surveillance and outbreak response.
  • Conduct scenario planning for major global catastrophic biological risks, raising awareness of the risk and improving planning among important global actors.
  • Invest in medical countermeasures, such as surge capacity for ventilators, vaccines and antivirals.
  • Foster a culture of safety among biotechnology researchers, making them aware of the dual-use potential of their research so that beneficial insights can be produced without creating unnecessary risks.
  • Develop and strengthen international biosafety norms to reduce the risk of accidental release from laboratories.

Artificial intelligence

Developments in artificial intelligence also promise significant benefits, such as helping to automate tasks, improving scientific research, and diagnosing disease. However, they also bring risks. Humanity’s prosperity on the planet is due to our intelligence: we are only slightly more intelligent than chimpanzees, but, as Stuart Armstrong has noted, in this slight advantage lies the difference between planetary dominance and a permanent place on the endangered species list. Most surveyed AI researchers believe that we will develop advanced human-level AI systems at some point in the next 100 years. In creating advanced general AI systems, we would forfeit our place as the most intelligent beings on the planet, yet we do not currently know how to ensure that AI systems are aligned with human interests.

Experience with today’s narrow AI systems has shown that it can be difficult to ensure that the systems do what we want rather than what we specify, that they are reliable across contexts, and that we have meaningful oversight. In narrow domains, such failures are usually trivial, but for a highly competent general AI, especially one that is connected to much of our infrastructure through the internet, the risk of unintended consequences is great. Developing a highly competent general AI could also make one state unassailably powerful, which increases the risk of misuse.

Managing the transition to AI systems that surpass humans at all tasks is likely to be one of humanity’s most important challenges this century, because the outcome could be extremely good or extremely bad for our species.

Reducing the risk from advanced AI

There are several different ways to tackle the risks from advanced AI.

  • Build the field of AI researchers who are aware of and concerned about AI safety. This could be especially valuable for building a culture of safety as AI systems develop over the coming decades.
  • Support technical safety research in computer science, which seems to have made progress in recent years and could be especially impactful if the timeline to advanced general AI turns out to be shorter than we think.
  • Support work on AI governance, which is at an early stage and could focus on researching the unique coordination challenges raised by transformative AI and on advocating for awareness of these issues at the national and international level.

Climate change

Burning fossil fuels has allowed us to harness huge amounts of energy for industrial production, but also exacerbates the greenhouse effect. On current plans and policy, there is upwards of a 1 in 20 chance of global warming in excess of 6°C. This would make the Earth unrecognizable, causing flooding of major cities, making much of the tropics effectively uninhabitable, and exacerbating drought. Whether climate change is likely to cause a global catastrophe is unclear, and most of the risk seems to be very indirect. Donors interested in learning more about how to tackle climate change should see our climate change cause report and our Climate Change Fund.

Recommendations

For these recommendations, we are grateful to be able to draw on the in-depth expertise of, and background research conducted by, current and former staff at Open Philanthropy, the world’s largest grant-maker on global catastrophic risk. Open Philanthropy identifies high-impact giving opportunities, makes grants, follows the results and publishes its findings. (Disclosure: Open Philanthropy has made several unrelated grants to Founders Pledge.)

We recommend five high-impact funding opportunities for safeguarding the future.

  • The Center for Human-Compatible AI, an academic research center at the University of California, Berkeley, that carries out technical and advocacy work to help ensure the safety of AI systems and to build the field of future AI safety researchers.
  • Center for Health Security, a think tank at the Bloomberg School of Public Health at Johns Hopkins University, which researches and advocates for improved biosecurity policy in the US and internationally.
  • The biosecurity programs at the Nuclear Threat Initiative, a nonprofit, nonpartisan global security organization focused on reducing nuclear and biological threats imperiling humanity.
  • The Center for Security and Emerging Technology, a think tank producing policy analysis at the intersection of national and international security and emerging tech, based at Georgetown University.
  • Research led by Professor Philip Tetlock, a Professor of Political Science at the University of Pennsylvania, on forecasting global catastrophic risks.

Acknowledgements

For helpful comments on this report, we are grateful to:

  • Dr Seth Baum, Global Catastrophic Risk Institute
  • Haydn Belfield, Cambridge Centre for the Study of Existential Risk
  • Dr Niel Bowerman, 80,000 Hours
  • Joe Carlsmith, Open Philanthropy Project
  • Goodwin Gibbins, Imperial College London
  • Howie Lempel, 80,000 Hours
  • Dr Gregory Lewis, Future of Humanity Institute, Oxford
  • Matthew van der Merwe, Future of Humanity Institute, Oxford
  • Dr Stefan Schubert, University of Oxford
  • Carl Shulman, Future of Humanity Institute, Oxford
  • Dr Jess Whittlestone, Centre for the Future of Intelligence, Cambridge

Notes

  1. Muehlhauser’s post discusses the subtleties surrounding Morris’ data. He cites Morris as saying: “By ‘destructive power’ I mean the number of fighters they can field, modified by the range and force of their weapons, the mass and speed with which they can deploy them, their defensive power, and their logistical capabilities.”

  2. For a discussion of other biases relevant to the judgement of other existential risks, see Eliezer Yudkowsky, “Cognitive Biases Potentially Affecting Judgment of Global Risks,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković (Oxford: Oxford University Press, 2008).

  3. For example, Toby Ord of the Future of Humanity Institute at Oxford puts the risk at around 1 in 12. Toby Ord, The Precipice: Existential Risk and the Future of Humanity (Bloomsbury Publishing, 2020).

  4. Arms Control Association, “Nuclear Weapons: Who Has What at a Glance,” June 2018, https://www.armscontrol.org/factsheets/Nuclearweaponswhohaswhat.

  5. For an overview of recent developments, see footnote 15 in Robert Wiblin, “Positively Shaping the Development of Artificial Intelligence,” 80,000 Hours, March 2017, https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/. For discussion of some of the key issues in AI safety research, see the discussion by researchers at Google, OpenAI and Stanford in Dario Amodei et al., “Concrete Problems in AI Safety,” ArXiv:1606.06565 [Cs], June 21, 2016, http://arxiv.org/abs/1606.06565.


      About the author

      John Halstead

      Former head of Applied Research

      John is the former head of Applied Research at Founders Pledge. He spent the previous four years researching climate change and catastrophic risk, including writing a detailed report for the Finnish Ministry of Foreign Affairs and supporting background research on climate change for The Precipice by Toby Ord, a leading book on existential risk.

      John has deep knowledge of both the science and the policy challenges of climate change, and authored our 2018 Climate Change Report, which was covered by Vox and the New York Times.