Security News > 2023 > September > UK’s NCSC Warns Against Cybersecurity Attacks on AI

The National Cyber Security Centre provides details on prompt injection and data poisoning attacks so organizations using machine-learning models can mitigate the risks.
Large language models used in artificial intelligence, such as ChatGPT or Google Bard, are prone to different cybersecurity attacks, in particular prompt injection and data poisoning.
AIs are trained not to provide offensive or harmful content, unethical answers or confidential information; prompt injection attacks create an output that generates those unintended behaviors.
Prompt injection attacks work the same way as SQL injection attacks, which enable an attacker to manipulate text input to execute unintended queries on a database.
A less dangerous prompt injection attack consists of having the AI provide unethical content such as using bad or rude words, but it can also be used to bypass filters and create harmful content such as malware code.
Prompt injection attacks may also target the inner working of the AI and trigger vulnerabilities in its infrastructure itself.
News URL
https://www.techrepublic.com/article/uks-ncsc-warns-against-cybersecurity-attacks-on-ai/
Related news
- EU invests €1.3 billion in AI and cybersecurity (source)
- 3 Ways the UK Government Plans to Tighten Cyber Security Rules with New Bill (source)
- Alan Turing Institute: UK can't handle a fight against AI-enabled crims (source)
- Who's calling? The threat of AI-powered vishing attacks (source)
- Cybersecurity in the AI Era: Evolve Faster Than the Threats or Get Left Behind (source)
- Strategic AI readiness for cybersecurity: From hype to reality (source)
- Developers Beware: Slopsquatting & Vibe Coding Can Increase Risk of AI-Powered Attacks (source)
- 13 core principles to strengthen AI cybersecurity (source)
- Wallarm Agentic AI Protection blocks attacks against AI agents (source)
- The future of AI in cybersecurity in a word: Optimistic (source)