Security News > 2023 > July > HackerOne: How Artificial Intelligence Is Changing Cyber Threats and Ethical Hacking
Security experts from HackerOne and beyond weigh in on malicious prompt engineering and other attacks that could strike through LLMs. HackerOne, a security platform and hacker community forum, hosted a roundtable on Thursday, July 27, about the way generative artificial intelligence will change the practice of cybersecurity.
How threat actors take advantage of generative AI. "We have to remember that systems like GPT models don't create new things - what they do is reorient stuff that already exists stuff it's already been trained on," said Klondike.
Thacker said, if an attacker uses prompt injection to take control of the context for the LLM function call, they can exfiltrate data by calling the web browser feature and moving the data that's exfiltrated to the attacker's side.
Carta compared generative AI to a knife; like a knife, generative AI can be a weapon or a tool to cut a steak.
How businesses can secure generative AI. The threat model Klondike and his team created at AI Village recommends software vendors to think of LLMs as a user and create guardrails around what data it has access to.
Michiel Prins, HackerOne cofounder and head of professional services, pointed out that, when it comes to LLMs, organizations seem to have forgotten the standard security lesson to "Treat user input as dangerous."