Tech Companies Announce Frontier Model Forum to Address Safe, Responsible AI Development
Anthropic, Google, Microsoft, and OpenAI have partnered to launch the Frontier Model Forum, which will draw on member companies' expertise to promote safety and responsibility in developing frontier AI models. They are calling on other companies to join them in this collective effort.
“Frontier” AI models are those under development whose capabilities will exceed those of existing models, the companies said. Now is the time, they argue, to develop a slate of “best practices” and “guardrails” to address issues arising from the continued development of AI.
The Forum’s goals are to:
- Promote AI safety research and standards, as well as independent evaluations of frontier models;
- Generate best practices for developing frontier models responsibly, and educate the public about their capabilities and limitations;
- Collaborate with government, academia, business, and the public to share knowledge about AI;
- Support the development of useful models to help mitigate real-world challenges in areas such as the environment, health, and cybersecurity; and
- Develop a public library of resources about research, standards, safety, and evaluation of AI technology.
The Forum invites organizations to join that are:
- Developing and deploying frontier models;
- Dedicated to frontier model safety;
- Willing to participate in joint initiatives to address the issues arising with frontier models; and
- Willing to share research and information with others.
The Forum will create an advisory board, working groups, and an executive board, and will establish a charter, governance, and funding sources. The Forum is also committed to cooperating with existing AI safety and responsibility initiatives, such as the G7 Hiroshima Process and the Partnership on AI, to support one another in assessing the risks, benefits, standards, and social impact of AI technology.
In making this announcement, representatives of the Forum’s founding companies stressed their excitement about the new venture and the importance of its work.
“We’re all going to need to work together to make sure AI benefits everyone,” said Kent Walker, president of global affairs for Google and Alphabet.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, vice chair and president of Microsoft.
“The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety,” said Dario Amodei, Anthropic CEO.
“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” said Anna Makanju, vice president of global affairs at OpenAI. “This is urgent work and this forum is well positioned to act quickly to advance the state of AI safety.”
To learn more, visit OpenAI’s Frontier Model Forum page.