FAR AI

FAR AI conducts technical research and field building in AI safety, aiming to reduce the risks from advanced AI systems by ensuring they are aligned with human values and robust to errors and security vulnerabilities.

What problem are they trying to solve?

If not properly aligned and safeguarded, advanced AI systems could lead to global catastrophes. This concern stems from the rapid pace of AI development, which could produce a transformation comparable to or surpassing the industrial revolution in its impact. Despite these stakes, the area is often neglected, partly because of its highly technical nature.

Contemporary systems exhibit a variety of problems, from jailbreaks to toxic outputs, and it is fundamentally unclear how these systems work or how to fix them in a rigorous fashion. Progress in safety is being made, but more slowly than progress in model capabilities. Safety is a global public good and, like many public goods, it receives considerably less investment than the development of capabilities that can generate profit for individual private firms.

What do they do?

FAR AI operates primarily in the United States, but its work has global implications given the universal reach and influence of AI technologies.

FAR AI's technical research focuses on three areas: making AI systems robust (resistant to errors), aligning them with human values, and developing reliable methods for evaluating model safety. These activities directly reduce the risks associated with advanced AI systems, helping to prevent the potential global or existential threats that misaligned or error-prone AI technologies could pose.

Why do we recommend them?

FAR AI’s approach addresses the urgent need to mitigate the risks of rapidly advancing AI technologies. FAR AI has successfully engaged top AI safety researchers worldwide, convening workshops on AI safety and dialogues between Western and Chinese scientists. Through their research and red-teaming, they have influenced research directions at major AI labs. Compared to other giving opportunities, FAR AI stands out for addressing a future-oriented, high-leverage issue that could shape the trajectory of humanity, making it a uniquely promising investment in long-term global safety.

With additional funding, FAR AI will expand its research in AI safety, focusing on the key areas of robustness, alignment, and evaluation. This investment will allow them to deepen their technical research, potentially influencing the development of safer AI technologies on a global scale. The funding will also support their field-building activities, including workshops and dialogues, to cultivate a broader and more skilled community of AI safety researchers.
