Russian criminals can't wait to hop over OpenAI's fence, use ChatGPT for evil
Check Point's research team said it has already seen Russian cybercriminals on underground forums discussing workarounds for OpenAI's restrictions so that they can bring ChatGPT to the dark side.
We'd have thought ChatGPT would be most useful for writing phishing emails and other messages designed to trick people into handing over their usernames and passwords, but what do we know? Some crooks may find the AI model helpful for generating malicious code and techniques to deploy it.
Russia is one of a handful of countries banned from using OpenAI's services. The research team also found several Russian-language tutorials on the forums explaining how to bypass OpenAI's SMS verification and register for ChatGPT. "We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations. Cybercriminals are growing more and more interested in ChatGPT, because the AI technology behind it can make a hacker more cost-efficient," the Check Point crew wrote.
While there are other documented examples of querying ChatGPT to produce malicious code, CyberArk's latest research went further, bypassing ChatGPT's content filters entirely. "With very little effort or investment by the adversary, it is possible to continuously query ChatGPT so we receive a unique, functional and validated piece of code each time," CyberArk Senior Security Researcher Eran Shimony told The Register.
Shimony and Tsarfati found a way to bypass the content filters by imposing multiple constraints in their prompts and asking ChatGPT to obey them.
Calling the ChatGPT API from within the malware itself at runtime, rather than embedding pre-generated code, also helps the malware fly under the radar, according to the researchers.
News URL
https://go.theregister.com/feed/www.theregister.com/2023/01/18/russia_openai_chatgpt_workarounds/