Security News > 2023 > May > Finding bugs in AI models at DEF CON 31
DEF CON's AI Village will host the first public assessment of large language models at the 31st edition of the hacker convention this August, aimed at finding bugs in and uncovering the potential for misuse of AI models.
During the conference, red teams will put LLMs from some of the leading vendors, such as Anthropic, Google, Hugging Face, NVIDIA, OpenAI, Stability, and Microsoft, to the test.
They will do so on an evaluation platform developed by Scale AI. "Traditionally, companies have solved this problem with specialized red teams. However this work has largely happened in private. The diverse issues with these models will not be resolved until more people know how to red team and assess them," said Sven Cattell, founder of AI Village.
The aim of this exercise is to uncover both the possibilities and the limitations of LLMs. By testing these models, red teams hope to reveal any potential vulnerabilities and evaluate the extent to which LLMs can be manipulated.
The support from the White House, the National Science Foundation's Computer and Information Science and Engineering Directorate, and the Congressional AI Caucus for the upcoming red teaming exercise is a clear indication of the importance they place on the use of LLMs. It also highlights the potential risks associated with such technology.
"This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation," the White House stated.
News URL
https://www.helpnetsecurity.com/2023/05/09/finding-bugs-ai-models/