Frontier Model Forum and the $10M AI Safety Fund Initiative

Published on June 16, 2025

In July 2023, four tech giants (OpenAI, Microsoft, Google, and Anthropic) announced an industry body called the Frontier Model Forum. The Forum aims to ensure the safe and responsible development of generative AI, especially frontier AI models. Today, the companies jointly announced the appointment of Chris Meserole as the first Executive Director of the Frontier Model Forum and the establishment of an AI Safety Fund, a more than $10 million initiative to promote research in the field of AI safety. Additionally, the Frontier Model Forum is sharing its first technical working group update on red teaming.

Chris Meserole's Role

According to Google’s blog post, Chris Meserole most recently served as Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution. As Executive Director, his new role involves identifying and sharing best practices for frontier models, spreading knowledge among academics, policymakers, and civil society, and supporting efforts to leverage AI to address society’s biggest challenges. Meserole said: “The most powerful AI models hold enormous promise for society, but to realize their potential we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum.”

AI Safety Fund Contributions

As for the AI Safety Fund, alongside the four tech titans, major contributions have come from the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt (former CEO of Google and philanthropist), and Jaan Tallinn (Estonian billionaire). The Forum aims to deepen its engagement with the broader research community to promote the safe development and use of AI.

Future Initiatives

Previously, the Forum's member companies agreed to voluntary AI commitments at the White House, including a pledge to facilitate third-party discovery and reporting of vulnerabilities in their AI systems. The Forum has since been working on establishing common definitions of terms and processes to ensure shared ground in future discussions. The fund will prioritize the development of new model evaluations and techniques for red-teaming AI models to test for potentially dangerous capabilities.

Latest Working Group Update

In the latest working group update, the group introduced a definition of red teaming as “a structured process for probing AI systems and products for the identification of harmful capabilities, outputs, or infrastructural threats.” The update also included case studies illustrating red teaming in practice. The Forum is also working on a responsible disclosure process and plans to establish an Advisory Board to help guide its strategy.

AI Safety Fund Research Grants

The fund, administered by the Meridian Institute, will issue a call for proposals in the next few months, with grants expected to follow shortly after.
