OpenAI, Microsoft, Google, Anthropic Launch Frontier Model Forum to Promote Safe AI

OpenAI, Google, Microsoft and Anthropic have announced the formation of the Frontier Model Forum.
The goal of the Frontier Model Forum is for member companies to contribute technical and operational advice to a public library of solutions that supports industry best practices and standards.
The forum says it will "establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks," and that it will follow best practices in responsible disclosure in areas such as cybersecurity.
"It is vital that AI companies - especially those working on the most powerful models - align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible," said Anna Makanju, vice president of global affairs at OpenAI. Advancing AI safety is "Urgent work," she said, and the forum is "Well-positioned" to take quick actions.
The Frontier Model Forum announcement comes less than a week after OpenAI, Google, Microsoft, Anthropic, Meta, Amazon and Inflection agreed to the White House's list of eight AI safety assurances.
News URL
https://www.techrepublic.com/article/openai-frontier-model-forum-news/
Related news
- Microsoft Bing shows misleading Google-like page for 'Google' searches (source)
- Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation (source)
- Microsoft Takes Legal Action Against AI “Hacking as a Service” Scheme (source)
- Microsoft sues 'foreign-based' cyber-crooks, seizes sites used to abuse AI (source)
- Microsoft eggheads say AI can never be made secure – after testing Redmond's own products (source)
- Google: Over 57 Nation-State Threat Groups Using AI for Cyber Operations (source)
- Malvertising Scam Uses Fake Google Ads to Hijack Microsoft Advertising Accounts (source)
- Google says hackers abuse Gemini AI to empower their attacks (source)
- Microsoft Patches Critical Azure AI Face Service Vulnerability with CVSS 9.9 Score (source)
- Microsoft Edge update adds AI-powered Scareware Blocker (source)