
Extracting Personal Information from Large Language Models Like GPT-2
2021-01-07 12:14

Abstract: It has become common to publish large language models that have been trained on private datasets.

This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data.
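The attack is conceptually simple: query the model for many unconditioned generations, then rank the outputs by a memorization heuristic and inspect the top candidates. Below is a minimal illustrative sketch, assuming the Hugging Face transformers library; the perplexity/zlib ranking here is a simplified stand-in for the paper's more elaborate sampling strategies and membership-inference metrics.

```python
# Illustrative sketch of a training-data extraction attack in the style
# described above. Metric details and thresholds are assumptions for
# demonstration, not the paper's exact setup.
import zlib

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (lower = more 'memorized-looking')."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def zlib_bits(text: str) -> int:
    """Compressed size in bits; a crude proxy for how repetitive the text is."""
    return 8 * len(zlib.compress(text.encode("utf-8")))

# Step 1: query the model -- generate candidate samples from an empty prompt.
input_ids = tokenizer.encode(tokenizer.bos_token, return_tensors="pt")
samples = model.generate(
    input_ids,
    do_sample=True,
    max_length=64,
    top_k=40,
    num_return_sequences=20,
    pad_token_id=tokenizer.eos_token_id,
)
texts = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

# Step 2: rank candidates -- text the model assigns unusually low perplexity
# relative to its compressed size is a candidate for verbatim memorization,
# since the zlib term filters out trivially repetitive low-entropy samples.
ranked = sorted(texts, key=lambda t: perplexity(t) / zlib_bits(t))
for text in ranked[:5]:
    print(f"{perplexity(text):8.2f}  {text[:80]!r}")
```

In practice the paper generates hundreds of thousands of samples and manually verifies top-ranked candidates against web searches, since the attackers have no direct access to the training set.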

We find that larger models are more vulnerable than smaller models.

We conclude by drawing lessons and discussing possible safeguards for training large language models.

Of the 1,800 candidate samples, we found 604 that contain text reproduced verbatim from the training set.


News URL

https://www.schneier.com/blog/archives/2021/01/extracting-personal-information-from-large-language-models-like-gpt-2.html