Attacking Machine Learning Systems
2023-02-06 11:02

At their core, modern ML systems have complex mathematical models that use training data to become competent at a task. But all of that complexity still runs in ordinary software, and like everything else, these systems will be hacked through vulnerabilities in those more conventional parts of the system.

Directly attacking an ML system with a model inversion attack or a perturbation attack isn't as passive as eavesdropping on an encrypted communications channel, but it's using the ML system as intended, albeit for unintended purposes. There is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn't lose sight of the fact that insecure software will be the likely attack vector for most ML systems.
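For readers who haven't seen one, a perturbation (adversarial-example) attack fits in a few lines. The sketch below uses PyTorch and the Fast Gradient Sign Method against a made-up toy classifier; the model, input data, and epsilon budget are illustrative assumptions, not anything from the post.

    import torch
    import torch.nn as nn

    # Toy stand-in for a trained classifier (hypothetical; any differentiable model works).
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    model.eval()
    loss_fn = nn.CrossEntropyLoss()

    # A single legitimate input and its true label (made-up data).
    x = torch.rand(1, 4)
    y = torch.tensor([2])

    # Fast Gradient Sign Method: push the input in the direction that most
    # increases the loss, within a small epsilon budget. Against a real trained
    # model a well-chosen epsilon often flips the prediction; this untrained toy
    # model may or may not flip, but the mechanics are the same.
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    epsilon = 0.05
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    print("original prediction: ", model(x).argmax(dim=1).item())
    print("perturbed prediction:", model(x_adv).argmax(dim=1).item())

Note that nothing here exploits a software bug: the attacker simply queries the model through its normal interface, which is exactly the point the post makes about these attacks using the system as intended.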

Building secure ML systems is important research and something we in the security community should continue to do. But while ML systems bring new risks that we haven't previously encountered, we need to recognize that the majority of attacks against these systems aren't going to target the ML part.
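As one concrete illustration of that conventional attack surface (my example, not the post's): many ML models are shared as Python pickle files, and unpickling untrusted data executes arbitrary code. The sketch below shows the mechanism with a harmless echo command standing in for attacker code.

    import os
    import pickle

    class MaliciousModel:
        # __reduce__ tells pickle how to "reconstruct" this object;
        # an attacker can make it run an arbitrary command instead.
        def __reduce__(self):
            return (os.system, ("echo attacker code runs here",))

    blob = pickle.dumps(MaliciousModel())

    # A victim loading a "model file" from an untrusted source:
    pickle.loads(blob)  # executes the command -- no attack on the model's math required

No knowledge of the model, its training data, or adversarial examples is needed; the weakness is in the surrounding software and the trust placed in downloaded artifacts.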


News URL

https://www.schneier.com/blog/archives/2023/02/attacking-machine-learning-systems.html