OpenAI, Microsoft, Google, Anthropic Launch Frontier Model Forum to Promote Safe AI
OpenAI, Google, Microsoft and Anthropic have announced the formation of the Frontier Model Forum.
The goal of the Frontier Model Forum is for member companies to contribute technical and operational expertise toward a public library of solutions that supports industry best practices and standards.
The forum says it will "establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks." The forum will follow best practices in responsible disclosure in areas such as cybersecurity.
"It is vital that AI companies - especially those working on the most powerful models - align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible," said Anna Makanju, vice president of global affairs at OpenAI. Advancing AI safety is "Urgent work," she said, and the forum is "Well-positioned" to take quick actions.
The Frontier Model Forum announcement comes less than a week after OpenAI, Google, Microsoft, Anthropic, Meta, Amazon and Inflection agreed to the White House's list of eight AI safety assurances.
News URL
https://www.techrepublic.com/article/openai-frontier-model-forum-news/
Related news
- Microsoft says it's not using your Word, Excel data for AI training
- Microsoft Fixes AI, Cloud, and ERP Security Flaws; One Exploited in Active Attacks
- Google Chrome’s AI feature lets you quickly check website trustworthiness
- Google says new scam protection feature in Chrome uses AI
- Google Chrome uses AI to analyze pages in new scam detection feature
- Microsoft Bing shows misleading Google-like page for 'Google' searches
- Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation
- Microsoft Takes Legal Action Against AI “Hacking as a Service” Scheme
- Microsoft sues 'foreign-based' cyber-crooks, seizes sites used to abuse AI
- Microsoft eggheads say AI can never be made secure – after testing Redmond's own products