
Introducing the Global Catastrophic Risks Fund

A nation-wide civil defense drill in South Korea. Photo by Republic of Korea on Flickr

For thirteen tense days between October 16th and October 28th, 1962, the United States and the Soviet Union teetered on the brink of thermonuclear war. Several times during those two weeks — including one incident when a bear wandered onto an Air Force base and triggered a false alarm — small accidents and misunderstandings nearly escalated into all-out war.

October 2022 marks the 60th anniversary of the Cuban Missile Crisis. At one particularly tense moment of the crisis, U.S. Navy ships dropped depth charges near a Soviet submarine in an attempt to force it to surface. The submarine, however, had lost contact with the outside world. As the depth charges rattled the sub’s crew, its captain began to believe that war had finally broken out. The time had come, he argued, to order a launch of their nuclear-armed torpedo. By sheer coincidence, the flotilla’s chief of staff Vasily Arkhipov was on board this particular vessel, and had to approve any launch decision. In a heated argument, Arkhipov alone opposed the launch.

Ever since, we’ve repeatedly been lucky. Sixty years later, humanity still faces immense global catastrophic risks — risks that could kill hundreds of millions or even billions of people alive today — and we can’t afford to keep relying on luck.

To help mitigate these risks, we are launching a new addition to Founders Pledge’s Funds offering: the Global Catastrophic Risks Fund.

Keeping Humanity Safe

The Fund will build on Founders Pledge’s recent research into great power conflict and risks from frontier military and civilian technologies, with a special focus on international stability — a pathway that we believe shapes a number of the biggest risks facing humanity. These risks include:

  • War between great powers, like a U.S.-China clash over Taiwan;
  • Nuclear war, especially emerging threats to nuclear stability, like vulnerabilities of nuclear command, control, and communications;
  • Risks from artificial intelligence (AI), including risks both from machine learning applications (like autonomous weapon systems) and from transformative AI;
  • Catastrophic biological risks, such as naturally arising pandemics, engineered pathogens, laboratory accidents, and the misuse of new advances in synthetic biology; and
  • Emerging threats from new technologies and in new domains (including the militarization of space).

The Fund will also support field-building activities around the study and mitigation of global catastrophic risks, as well as methodological work on new ways of studying these risks, such as probabilistic forecasting and experimental wargaming.

Current and Future Generations

This Fund is designed both to tackle threats to humanity’s long-term future and to take action now to protect every human being alive today. We believe that some interventions on global catastrophic risks can be justified by a simple cost-benefit analysis alone, and that safeguarding the long-term future of humanity is among the most important things we can work on. Whether or not you share our commitment to longtermism, or believe that reducing existential risks is particularly important, you may still be interested in the Fund for the simple reason that you want to help prevent the deaths and suffering of millions of people.

To illustrate with an example from our Fund Prospectus, we may explore biosecurity grants on improvements to personal protective equipment (PPE), which could help both in navigating natural pandemics like COVID and in protecting against future engineered biological agents. Similarly, the Fund may support the development of confidence-building measures on AI — like an International Autonomous Incidents Agreement — with the aim of both mitigating the destabilizing impact of near-term military AI applications and providing a focal point for long-term AI governance. Some grants will focus mainly on near-term risks; others mainly on longtermist concerns. In short, the Fund will support work on catastrophic and existential risks, on averting civilizational collapse, and on steering humanity’s ship away from the rocks (and better understanding where those rocks lie).

Innovative Grant-making with Philanthropic Funds

Like our other Funds, this will be a philanthropic co-funding vehicle designed to enable us to pursue a number of grant-making opportunities, including:

  • Active grant-making, working with organizations to shape their plans for the future;
  • Seeding new organizations and projects with high expected value;
  • Committing to multi-year funding to give stability to promising projects and decrease their fundraising costs;
  • Filling small funding gaps that fall between the cracks of traditional philanthropy;
  • Pooling donations to support projects beyond the reach of individual donors;
  • Partnering and collaborating with other funders and organizations (see below);
  • Making expert-advised grants by working with domain experts and current and former policymakers; and
  • Responding to dynamic opportunities, like short-lived policy windows.

Collaboration and Partnerships

One particular use case we want to highlight is an improved ability to partner and collaborate with other funders. There are several scenarios in which individual donors cannot easily support a project on their own: when funding requirements are very large, when grant recipients would prefer multiple donors, or when a diverse group of supporters would send an important signal about an organization to the public.

We see the GCR Fund as a complement to existing efforts and envision collaborating across the funding landscape. For example, we are in touch with our partners at various longtermist grant-making organizations and envision potentially co-funding certain projects with them. Moreover, the GCR Fund is complementary to the Patient Philanthropy Fund, a Founders Pledge-incubated project to invest and grow philanthropic resources until they are needed most — donors to both funds can allocate their giving according to their beliefs and predictions about the optimal timing of philanthropy.

Contributions to the Fund and Next Steps

We are actively seeking contributions to launch the Global Catastrophic Risks Fund and support the first grants. Visit the Fund page to learn more and get involved. This is also where we will publish short summaries of our grant-making and the rationale behind our grants.


About the author

Christian Ruhl

Global Catastrophic Risks Lead

Christian Ruhl is our Global Catastrophic Risks Lead based in Philadelphia. Before joining Founders Pledge in November 2021, Christian was the Global Order Program Manager at Perry World House, the University of Pennsylvania's global affairs think tank, where he managed the research theme on “The Future of the Global Order: Power, Technology, and Governance.” Before that, Christian studied on a Dr. Herchel Smith Fellowship at the University of Cambridge for two master’s degrees, one in History and Philosophy of Science and one in International Relations and Politics, with dissertations on early modern submarines and Cold War nuclear strategy. Christian received his BA from Williams College in 2017.