Manipulating Machine Learning Systems by Manipulating Training Data
2019-11-29 11:43
Interesting research: "TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents."

Abstract: Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time. In this work, we show that these training-time vulnerabilities extend to deep reinforcement learning (DRL) agents and can be exploited by an adversary with access to the training...
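To make the attack class concrete, here is a minimal, generic sketch of the trigger-based data poisoning the abstract references for classification models. This is an illustrative toy, not the TrojDRL attack itself (which targets DRL training loops); the function name, patch shape, and poison rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Illustrative trigger-based poisoning sketch (NOT the TrojDRL method):
    stamp a small bright patch onto a fraction of the training images and
    relabel those examples to an attacker-chosen target class. A model
    trained on this data learns to associate the patch with the target
    label, so the attacker can trigger misclassification at test time."""
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # 3x3 "trigger" patch, bottom-right corner
        labels[i] = target_label    # relabel to the attacker's class
    return images, labels, idx

# Toy data: 100 blank 8x8 grayscale "images", all labeled class 0.
X = np.zeros((100, 8, 8))
y = np.zeros(100, dtype=int)
X_p, y_p, idx = poison_dataset(X, y, target_label=7, rate=0.1)
```

The key property, which the paper shows carries over to DRL agents, is that the poisoned model behaves normally on clean inputs and misbehaves only when the trigger is present, making the backdoor hard to detect by ordinary validation.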
News URL
https://www.schneier.com/blog/archives/2019/11/manipulating_ma.html