
Centre for Long-Term Resilience


The Centre for Long-Term Resilience (CLTR) is a UK-based think tank specializing in extreme risks. Their primary focuses are (1) extreme AI risks, (2) extreme biological risks, and (3) improving government risk management capacity. CLTR conducts research and provides policy advice to help governments better manage these risks.

What problem are they trying to solve?

Advances in technology are creating risks that could be catastrophic if mismanaged, yet decision-makers often struggle to give these risks appropriate consideration and to act on them. Areas such as AI safety, biosecurity, and systemic risk management require expertise and resources that governments frequently lack.

CLTR addresses these gaps by providing governments with concrete policy recommendations, built on expertise in AI, biosecurity, and risk management. CLTR’s primary focus is the UK government, an influential actor in building global resilience to extreme AI and biological risks. Their goal is to see policies adopted that substantially reduce risks from advanced technologies and enable better responses to extreme events.

What do they do?

CLTR researches extreme risks and works directly with governments to translate findings into actionable policy recommendations. Core activities include providing expert policy advice, producing research reports, and building relationships with policymakers and politicians.

Why do we recommend them?

UK government policy is critical for reducing global extreme risk. The UK has significant economic and diplomatic power, and especially high influence on emerging technology and extreme risks. For instance, major AI companies such as DeepMind, OpenAI, and Anthropic have offices in London, whilst organizations with relevant expertise on extreme risks (such as the Centre for the Study of Existential Risk and the Centre for the Governance of AI) are based in Cambridge and Oxford, respectively.

More broadly, policies adopted in the UK have the potential to spread internationally, due to the UK’s geopolitical influence as a permanent member of the UN Security Council, the world's sixth-largest economy, and a close ally of the US. In June 2023, the UK Prime Minister confirmed that, following a discussion with the US President, the UK would host a global AI safety summit in autumn 2023 to evaluate and monitor AI's "most significant risks," including those posed by frontier systems, and that he wanted to make the UK the "home of global AI safety regulation."

CLTR has a proven track record of influencing UK policy on extreme risks, having contributed to a number of concrete policy changes.

What would they do with more funding?

CLTR is currently a nine-person team of experts organized into small policy units. They plan to expand to fifteen people by 2025, which will allow them to:

  • Provide critical advice to relevant policymakers on AI, biosecurity, and risk management
  • Generate research reports and policy input on AI, biosecurity, and risk management
  • Continue developing a strong network of policymakers and politicians, in order to spot future opportunities and brief senior stakeholders on the critical importance of boosting resilience to extreme risks