Security News > 2023 > September > UK’s NCSC Warns Against Cybersecurity Attacks on AI
The National Cyber Security Centre provides details on prompt injection and data poisoning attacks so organizations using machine-learning models can mitigate the risks.
Large language models used in artificial intelligence, such as ChatGPT or Google Bard, are prone to different cybersecurity attacks, in particular prompt injection and data poisoning.
AIs are trained not to provide offensive or harmful content, unethical answers or confidential information; prompt injection attacks create an output that generates those unintended behaviors.
Prompt injection attacks work the same way as SQL injection attacks, which enable an attacker to manipulate text input to execute unintended queries on a database.
A less dangerous prompt injection attack consists of having the AI provide unethical content such as using bad or rude words, but it can also be used to bypass filters and create harmful content such as malware code.
Prompt injection attacks may also target the inner working of the AI and trigger vulnerabilities in its infrastructure itself.
News URL
https://www.techrepublic.com/article/uks-ncsc-warns-against-cybersecurity-attacks-on-ai/
Related news
- AI-Assisted Attacks Top Cyber Threat For Third Consecutive Quarter, Gartner Finds (source)
- Google Cloud Cybersecurity Forecast 2025: AI, geopolitics, and cybercrime take centre stage (source)
- Using AI to drive cybersecurity risk scoring systems (source)
- ANZ CIO Challenges: AI, Cybersecurity & Data Analytics for 2025 (source)
- Dell Unveils AI and Cybersecurity Solutions at Microsoft Ignite 2024 (source)
- AI Kuru, cybersecurity and quantum computing (source)
- Cybersecurity Blind Spots in IaC and PaC Tools Expose Cloud Platforms to New Attacks (source)
- Microsoft Fixes AI, Cloud, and ERP Security Flaws; One Exploited in Active Attacks (source)
- A Guide to Securing AI App Development: Join This Cybersecurity Webinar (source)
- Treat AI like a human: Redefining cybersecurity (source)