Malicious AI models on Hugging Face backdoor users’ machines
2024-02-28 22:12

At least 100 malicious AI/ML models were found on the Hugging Face platform, some of which can execute code on the victim's machine, giving attackers a persistent backdoor.

JFrog's security team found that roughly a hundred models hosted on the platform contain malicious functionality, posing a significant risk of data breaches and espionage attacks.

JFrog developed and deployed an advanced scanning system to examine PyTorch and TensorFlow Keras models hosted on Hugging Face, finding one hundred with some form of malicious functionality.
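JFrog has not published that scanner, but the core idea of statically inspecting a pickle stream for dangerous imports can be sketched with Python's standard pickletools module. The blocklist below is an illustrative assumption, not JFrog's actual detection logic:

    import pickletools

    # Illustrative blocklist (an assumption): modules whose appearance in a
    # pickle's import opcodes usually signals code execution on load.
    SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "socket"}

    def scan_pickle(data: bytes) -> list[str]:
        """Walk the opcode stream without executing it and flag risky imports."""
        findings = []
        strings = []  # string constants seen so far, consumed by STACK_GLOBAL
        for opcode, arg, _pos in pickletools.genops(data):
            if opcode.name == "GLOBAL":
                # Protocols <= 3 encode the import inline as "module name".
                module = arg.split(" ", 1)[0]
                if module in SUSPICIOUS_MODULES:
                    findings.append(arg)
            elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                strings.append(arg)
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                # Protocol 4+ pushes module and name strings, then imports them.
                module, name = strings[-2], strings[-1]
                if module in SUSPICIOUS_MODULES:
                    findings.append(f"{module}.{name}")
        return findings

A production scanner would also have to unwrap PyTorch's zip container and handle memoized strings, which this sketch ignores.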

"This count excludes false positives, ensuring a genuine representation of the distribution of efforts towards producing malicious models for PyTorch and Tensorflow on Hugging Face.".

The malicious payload used the __reduce__ method of Python's pickle module to execute arbitrary code upon loading a PyTorch model file, evading detection by embedding the malicious code within the trusted serialization process.
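The general technique is easy to demonstrate. The snippet below is a minimal, harmless sketch of the deserialization hook being abused, not JFrog's actual payload:

    import os
    import pickle

    class MaliciousPayload:
        def __reduce__(self):
            # When unpickled, pickle calls os.system(...) to "reconstruct"
            # the object, so deserialization itself runs attacker code.
            return (os.system, ("echo code executed during load",))

    blob = pickle.dumps(MaliciousPayload())
    pickle.loads(blob)  # the echo runs here, before any model code is touched

Because a checkpoint written by torch.save is a zip archive containing exactly this kind of pickle stream, calling torch.load on an untrusted file walks the same path; recent PyTorch releases offer torch.load(..., weights_only=True) to refuse arbitrary object deserialization.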

JFrog says some of the malicious uploads could be part of security research aimed at bypassing security measures on Hugging Face and collecting bug bounties, but since these dangerous models were publicly available, the risk is real and shouldn't be underestimated.


News URL

https://www.bleepingcomputer.com/news/security/malicious-ai-models-on-hugging-face-backdoor-users-machines/