Security News > 2024 > July > Big Tech's eventual response to my LLM-crasher bug report was dire
After publication of my "Kryptonite" article about a prompt that crashes many AI chatbots, I began receiving a steady stream of reader emails - many times the total I'd received in the previous decade.
Disappointingly, too many of them amounted to little more than a request to reveal the prompt so the sender could lay waste to large language models.
One email arrived from an individual - I won't mention names, except to say that readers would instantly recognize this Very Important Networking Talent - who asked for the prompt, promising to pass it along to the appropriate team at the Big Tech company where he now works.
A few of the LLMs that would regularly crash with this prompt seem to have been updated - behind the scenes.
Somewhere deep within the guts of ChatGPT and Copilot, something appears to have been patched to prevent the behavior the prompt induced.
I now feel my discovery - and the subsequent story - highlighted an almost complete lack of bug-reporting infrastructure among LLM providers.
News URL
https://go.theregister.com/feed/www.theregister.com/2024/07/10/vendors_response_to_my_llmcrasher/