Fooling NLP Systems Through Word Swapping
The software, developed by a team at MIT, identifies the words in a sentence that matter most to an NLP classifier and replaces them with synonyms that a human reader would find natural.
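The two-step idea (rank words by how much removing them changes the classifier's confidence, then greedily swap in synonyms) is easy to picture in code. Below is a minimal, hypothetical Python sketch, not the authors' implementation: the real TextFooler additionally filters candidate words using counter-fitted word embeddings, part-of-speech agreement, and sentence-level semantic similarity. The `target_prob` callback and `synonyms` table here stand in for a real model and a real synonym source.

```python
# Hypothetical sketch of a TextFooler-style word-importance attack.
from typing import Callable, Dict, List

def rank_word_importance(words: List[str],
                         target_prob: Callable[[List[str]], float]) -> List[int]:
    """Score each word by how much deleting it lowers the classifier's
    confidence in the original label; return indices, most important first."""
    base = target_prob(words)
    scores = [base - target_prob(words[:i] + words[i + 1:])
              for i in range(len(words))]
    return sorted(range(len(words)), key=lambda i: scores[i], reverse=True)

def attack(words: List[str],
           target_prob: Callable[[List[str]], float],
           synonyms: Dict[str, List[str]],
           threshold: float = 0.5) -> List[str]:
    """Greedily replace the most important words with synonyms until the
    classifier's confidence in the original label falls below `threshold`."""
    adv = list(words)
    for i in rank_word_importance(words, target_prob):
        candidates = synonyms.get(adv[i], [])
        if not candidates:
            continue
        # Pick the synonym that hurts the classifier the most.
        best = min(candidates,
                   key=lambda w: target_prob(adv[:i] + [w] + adv[i + 1:]))
        if target_prob(adv[:i] + [best] + adv[i + 1:]) < target_prob(adv):
            adv[i] = best
        if target_prob(adv) < threshold:
            break  # the predicted label has effectively flipped
    return adv

# Toy usage: a stand-in "classifier" that keys entirely on one word.
if __name__ == "__main__":
    prob = lambda ws: 0.9 if "terrible" in ws else 0.2
    syns = {"terrible": ["dreadful", "awful"]}
    print(attack("the movie was terrible".split(), prob, syns))
```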
Abstract: Machine learning algorithms are often vulnerable to adversarial examples that differ imperceptibly from the original inputs but can fool state-of-the-art models.
In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text.
By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT as well as widely used convolutional and recurrent neural networks.
We demonstrate three advantages of this framework: (1) effectiveness: it outperforms state-of-the-art attacks in both success rate and perturbation rate; (2) utility preservation: the adversarial text preserves semantic content and grammaticality and is still classified correctly by humans; and (3) efficiency: it generates adversarial text with computational complexity linear in the text length.
News URL
https://www.schneier.com/blog/archives/2020/04/fooling_nlp_sys.html