Model Extraction Attack on Neural Networks
Abstract: Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks.
Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations.
Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto'20 by Carlini, Jagielski, and Mironov.
It resembles a differential chosen plaintext attack on a cryptosystem that has a secret key embedded in its black-box implementation, and it requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons).
In this paper, we improve this attack by developing several new techniques that enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based DNN using a polynomial number of queries and a polynomial amount of time.
In the case of a full-sized network trained on the CIFAR10 dataset, where the approach of Carlini et al. would require an exhaustive search over 2^256 possibilities, our new techniques complete the extraction in only 30 minutes on a 256-core computer.
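To make the flavor of such differential queries concrete, here is a minimal, hypothetical sketch (all names, sizes, and tolerances are illustrative assumptions, not the paper's actual method or code). It queries a toy ReLU network strictly as a black box, binary-searches along a line for a critical point where some hidden neuron's ReLU input crosses zero, and reads off that neuron's first-layer weight direction from the jump in the finite-difference gradient across the kink.

```python
import numpy as np

# Toy black-box: a one-hidden-layer ReLU network whose parameters we
# pretend not to know. Sizes and seed are arbitrary choices for this sketch.
rng = np.random.default_rng(0)
D, H = 4, 3                                  # input dim, hidden neurons
W1, b1 = rng.normal(size=(H, D)), rng.normal(size=H)
W2, b2 = rng.normal(size=H), rng.normal()

def oracle(x):
    """Black-box query: returns f(x) only -- no gradients, no internals."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def dir_slope(x, d, eps=1e-6):
    """Directional derivative of the oracle at x along d, via central differences."""
    return (oracle(x + eps * d) - oracle(x - eps * d)) / (2 * eps)

def find_critical_point(x0, d, lo=-10.0, hi=10.0, iters=50):
    """Binary-search along x0 + t*d for a slope change of the piecewise-linear
    restriction of f, i.e. a point where some neuron's ReLU input crosses zero.
    Assumes generic position (kinks are isolated and actually change the slope)."""
    s_lo = dir_slope(x0 + lo * d, d)
    assert abs(dir_slope(x0 + hi * d, d) - s_lo) > 1e-4, "no kink on this segment"
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if abs(dir_slope(x0 + mid * d, d) - s_lo) > 1e-4:
            hi = mid                          # a kink lies in (lo, mid]
        else:
            lo = mid                          # a kink lies in (mid, hi]
    return x0 + 0.5 * (lo + hi) * d

# Crossing the critical point of neuron j changes the gradient of f by
# W2[j] * W1[j], so the per-coordinate jump reveals row j of W1 up to scale.
x0, d = rng.normal(size=D), rng.normal(size=D)
xc = find_critical_point(x0, d)
step = 1e-3 * d / np.linalg.norm(d)          # straddle the kink, one side each
basis = np.eye(D)
jump = np.array([dir_slope(xc + step, basis[i]) - dir_slope(xc - step, basis[i])
                 for i in range(D)])

# The recovered direction should align (up to sign) with one true row of W1.
cos = W1 @ jump / (np.linalg.norm(W1, axis=1) * np.linalg.norm(jump))
print("cosine similarity with true W1 rows:", np.round(cos, 4))
```

The real attacks go much further (deeper layers, sign recovery, numerical precision); this toy only illustrates why ReLU critical points leak weight information through input-output queries alone.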
News URL: https://www.schneier.com/blog/archives/2023/10/model-extraction-attack-on-neural-networks.html