Security News > 2024 > January > Apple, AMD, Qualcomm GPU security hole lets miscreants snoop on AI training and chats

Research made public on Tuesday by Trail of Bits detailed how miscreants can exploit the vulnerability, dubbed LeftoverLocals, to read data they're not supposed to see in a GPU's local memory.
While the flaw potentially affects all GPU applications on vulnerable chips, it is especially concerning for machine-learning workloads: these models push large volumes of data through the GPU, so an exploit could swipe a correspondingly large amount of potentially sensitive information.
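The core problem is that local (workgroup-shared) GPU memory is not cleared between kernel launches on affected chips, so a later kernel can read what an earlier one left behind. The toy below is a CPU-side simulation of that mechanism only, under the assumption of a shared buffer standing in for on-chip local memory; all function names are illustrative, and none of this is the actual GPU exploit code.

```python
# Toy simulation of the LeftoverLocals leak mechanism.
# On affected GPUs, local memory persists uncleared between kernel
# launches; here a plain bytearray stands in for that memory.

LOCAL_MEM_SIZE = 64

# Simulated on-chip local memory that survives across "kernel launches".
local_memory = bytearray(LOCAL_MEM_SIZE)

def victim_kernel(secret: bytes) -> None:
    """A 'victim' kernel stages sensitive data (e.g. fragments of an
    LLM chat) in local memory and exits without clearing it."""
    local_memory[:len(secret)] = secret
    # Simulated bug: local_memory is NOT zeroed before the kernel exits.

def listener_kernel() -> bytes:
    """An attacker 'listener' kernel declares an uninitialized local
    array and simply dumps whatever values it finds there."""
    return bytes(local_memory)

def mitigated_victim_kernel(secret: bytes) -> None:
    """The obvious mitigation, which driver/compiler fixes effectively
    apply: scrub local memory before the kernel finishes."""
    local_memory[:len(secret)] = secret
    local_memory[:] = b"\x00" * LOCAL_MEM_SIZE  # scrub before exit

victim_kernel(b"user prompt fragment")
print(listener_kernel()[:20])  # leaked: b'user prompt fragment'

mitigated_victim_kernel(b"user prompt fragment")
print(listener_kernel() == b"\x00" * LOCAL_MEM_SIZE)  # True: nothing leaks
```

The real proof of concept runs the listener as a separate GPU program (via OpenCL, Vulkan, or Metal), but the race it wins is the same one sketched here: stale data, no scrubbing, and a reader that arrives before anything overwrites it.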
AMD also confirmed that many of its products are vulnerable to the memory leak, including multiple generations of its Athlon and Ryzen desktop and mobile processors, Radeon graphics cards, and Radeon and Instinct data-center GPUs.
Google pointed out to Trail of Bits that some Imagination GPUs are also affected, and that the processor designer released a fix for the flaw last month.
Google is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs.
Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs as part of the 120 and 114 releases in the Stable and LTS channels, respectively.
News URL
https://go.theregister.com/feed/www.theregister.com/2024/01/17/leftoverlocals_gpu_flaw/
Related news
- How to find out if your AI vendor is a security risk (source)
- Meta to resume AI training on content shared by Europeans (source)
- Meta Resumes E.U. AI Training Using Public User Data After Regulator Approval (source)
- Network Security at the Edge for AI-ready Enterprise (source)
- One in three security teams trust AI to act autonomously (source)
- Coaching AI agents: Why your next security hire might be an algorithm (source)
- Why security teams cannot rely solely on AI guardrails (source)
- Learning How to Hack: Why Offensive Security Training Benefits Your Entire Security Team (source)
- AI forces security leaders to rethink hybrid cloud strategies (source)
- LlamaFirewall: Open-source framework to detect and mitigate AI centric security risks (source)