Security News > 2024 > January > Securing AI systems against evasion, poisoning, and abuse
The publication, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," is a key component of NIST's broader initiative to foster the creation of reliable AI. This effort aims to facilitate the implementation of NIST's AI Risk Management Framework and aims to assist AI developers and users in understanding potential attacks and strategies to counter them, acknowledging that there is no silver bullet.
"The risks of AI are as significant as the potential benefits. The latest publication from NIST is a great start to explore and categorize attacks against AI systems. It defines a formal taxonomy and provides a good set of attack classes. It does miss a few areas, such as misuse of the tools to cause harm, abuse of inherited trust by people believing AI is an authority, and the ability to de-identify people and derive sensitive data through aggregated analysis," Matthew Rosenquist, CISO at Eclipz.io commented.
Evasion attacks happen post-deployment of an AI system and involve modifying an input to alter the system's response.
Abuse attacks entail embedding false information into a source, like a website, which is then assimilated by an AI. Differing from the previously mentioned poisoning attacks, abuse attacks focus on feeding the AI erroneous data from a genuine, yet tampered source, with the aim of redirecting the AI system's original purpose.
"Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities. Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set," said co-author Alina Oprea, a professor at Northeastern University.
"Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences. There are theoretical problems with securing AI algorithms that simply haven't been solved yet. If anyone says differently, they are selling snake oil," he concluded.
News URL
https://www.helpnetsecurity.com/2024/01/09/securing-ai-systems-evasion-poisoning-abuse/