
A new threat matrix outlines attacks against machine learning systems
2020-10-27 07:54

A report published last year noted that most attacks on artificial intelligence systems focus on manipulating them, but also warned that new attacks using machine learning are within attackers' capabilities.

Microsoft now says that attacks on machine learning systems are on the uptick and MITRE notes that, in the last three years, "Major companies such as Google, Amazon, Microsoft, and Tesla, have had their ML systems tricked, evaded, or misled." At the same time, most businesses don't have the right tools in place to secure their ML systems and are looking for guidance.

Experts at Microsoft, MITRE, IBM, NVIDIA, the University of Toronto, the Berryville Institute of Machine Learning, and several other companies and academic organizations have therefore created the first version of the Adversarial ML Threat Matrix to help security analysts detect and respond to this new type of threat.

"Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle," MITRE noted.
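To make the idea of "weaponized data" concrete, the following is a minimal sketch of an evasion attack in the FGSM (fast gradient sign method) style, one of the best-known adversarial ML techniques. The toy linear classifier, its weights, and the perturbation budget are all illustrative assumptions, not taken from the report: a small, targeted change to an input flips the model's decision even though the model itself is unmodified.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "victim model": logistic regression with fixed, made-up weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the model's confidence that x belongs to the positive class."""
    return sigmoid(w @ x + b)

# A benign input the model confidently classifies as positive (score > 0.5).
x = np.array([1.0, -0.5, 0.2])

# For a linear model, the input gradient of the positive-class score is
# proportional to w, so stepping each feature along sign(-w) pushes the
# score down as fast as possible per unit of perturbation.
eps = 0.8  # illustrative perturbation budget
x_adv = x + eps * np.sign(-w)

print(predict(x))      # high score on the clean input
print(predict(x_adv))  # score drops below 0.5 after the perturbation
```

Real attacks apply the same principle to deep networks (where the gradient is obtained via backpropagation) and to much higher-dimensional inputs such as images, which is why the perturbation can remain imperceptible while still flipping the prediction.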

They encourage contributors to point out new techniques, propose best practices, and share examples of successful attacks on machine learning systems.


News URL

http://feedproxy.google.com/~r/HelpNetSecurity/~3/ULi83kbl8cM/
