How to weaponize LLMs to auto-hijack websites
2024-02-17 11:39

AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk that goes beyond the content they emit.

Computer scientists affiliated with the University of Illinois Urbana-Champaign have demonstrated this by weaponizing several large language models to compromise vulnerable websites without human guidance.

The researchers tested ten models in all: the first two, GPT-4 and GPT-3.5, are proprietary models operated by OpenAI, while the remaining eight are open source.

Every open source model failed, and GPT-3.5 fared only marginally better than the open source models.

One explanation offered in the paper is that GPT-4 was better able than the open source models to adapt its actions based on the responses it received from the target website.

"Qualitatively speaking, we found that the open source models are not nearly as good at function calling as the OpenAI models."


News URL

https://go.theregister.com/feed/www.theregister.com/2024/02/17/ai_models_weaponized/