New Framework Released to Protect Machine Learning Systems From Adversarial Attacks
Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released the Adversarial ML Threat Matrix, a new open framework that aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning systems.
As artificial intelligence and ML are deployed across a wide variety of novel applications, threat actors can not only abuse the technology to power their malware but also turn it against machine learning models themselves, for example by feeding them poisoned datasets that cause otherwise beneficial systems to make incorrect decisions, threatening the stability and safety of AI applications.
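To make the data-poisoning scenario concrete, here is a minimal sketch of a label-flipping attack, assuming a Python environment with scikit-learn; the synthetic dataset, the logistic regression model, and the 30% flip rate are illustrative choices, not details from the article or the framework.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 30% of the training set (hypothetical rate).
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

The drop in test accuracy is the kind of "incorrect decision" the article describes, induced purely through the training data rather than through any flaw in the model code.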
According to a Gartner report cited by Microsoft, through 2022, 30% of all AI cyberattacks are expected to leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems.
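Of those three attack classes, adversarial samples are the easiest to show in a few lines. The sketch below applies the fast gradient sign method to a scikit-learn logistic regression; the digits dataset, the binary 0-vs-1 task, and the 0.2 perturbation budget are assumptions made purely for illustration.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Binary task: distinguish handwritten 0s from 1s, pixels scaled to [0, 1].
digits = load_digits(n_class=2)
X, y = digits.data / 16.0, digits.target
model = LogisticRegression(max_iter=1000).fit(X, y)

# For logistic regression, the gradient of the log-loss w.r.t. the input
# is (sigmoid(w.x + b) - y) * w; FGSM steps each pixel in its sign.
w, b = model.coef_[0], model.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad = (p - y)[:, None] * w[None, :]
X_adv = np.clip(X + 0.2 * np.sign(grad), 0.0, 1.0)

print("accuracy on clean inputs:      ", model.score(X, y))
print("accuracy on adversarial inputs:", model.score(X_adv, y))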
The idea is that companies can use the Adversarial ML Threat Matrix to test their AI models' resilience by simulating realistic attack scenarios, drawing on a list of tactics that ranges from gaining initial access to the environment and executing unsafe ML models to contaminating training data and exfiltrating sensitive information via model stealing attacks (the last of these is sketched below).
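As a sketch of that model stealing tactic: an attacker who can only query a prediction endpoint sends synthetic inputs and trains a surrogate on the answers. The victim model, the prediction_api helper, and the query budget below are hypothetical stand-ins, not anything specified by the framework.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# The "victim" is reachable only through a label-returning API.
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)

def prediction_api(queries):
    # Stand-in for a remote black-box endpoint: labels only, no internals.
    return victim.predict(queries)

# The attacker queries the API and trains a surrogate on the responses.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))
surrogate = LogisticRegression(max_iter=1000).fit(queries, prediction_api(queries))

# Agreement on fresh inputs approximates how much behavior was exfiltrated.
probe = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(probe) == victim.predict(probe)).mean()
print("surrogate/victim agreement:", agreement)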
"The goal of the Adversarial ML Threat Matrix is to position attacks on ML systems in a framework that security analysts can orient themselves in these new and upcoming threats," Microsoft said.
News URL
http://feedproxy.google.com/~r/TheHackersNews/~3/XcTJVlqWwWY/adversarial-ml-threat-matrix.html