Security News
OpenAI's new "ChatGPT search" Chrome extension feels like nothing more than a typical search hijacker, changing Chrome's settings so your address bar searches go through ChatGPT Search instead. [...]
OpenAI has disrupted over 20 malicious cyber operations abusing its AI-powered chatbot, ChatGPT, for debugging and developing malware, spreading misinformation, evading detection, and conducting...
OpenAI on Wednesday said it has disrupted more than 20 operations and deceptive networks across the world that attempted to use its platform for malicious purposes since the start of the year....
OpenAI has banned ChatGPT accounts linked to an Iranian crew suspected of spreading fake news on social media sites about the upcoming US presidential election. OpenAI attributed the phony posts to Storm-2035, a Tehran-backed group that Microsoft also sounded the alarm about last week, as it and other Iranian groups have continued to meddle in elections - some veering toward attempts at inciting violence.
OpenAI on Friday said it banned a set of accounts linked to what it said was an Iranian covert influence operation that leveraged ChatGPT to generate content that, among other things, focused on...
The online forum OpenAI employees use for confidential internal communications was breached last year, anonymous sources have told The New York Times. Hackers lifted details about the design of the company's AI technologies from forum posts, but they did not infiltrate the systems where OpenAI actually houses and builds its AI. OpenAI executives announced the incident to the whole company during an all-hands meeting in April 2023, and also informed the board of directors.
Security in brief: It's been a week of bad cyber security revelations for OpenAI, after news emerged that the startup failed to report a 2023 breach of its systems to anybody outside the organization, and that its ChatGPT app for macOS was coded without any regard for user privacy. According to an exclusive report from the New York Times, citing a pair of anonymous OpenAI insiders, someone managed to breach a private forum used by OpenAI employees to discuss projects early last year.
With Anthropic's map, the researchers can explore how neuron-like patterns of activation, called features, affect a generative AI's output. The researchers go into detail in their paper on scaling and evaluating sparse autoencoders; put very simply, the goal is to make features more understandable, and therefore more steerable, to humans.
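Neither paper's code is quoted here, but as a rough illustration of the technique the article describes, the sketch below shows a minimal sparse autoencoder in PyTorch: model activations are encoded into a much wider, mostly-zero vector (each dimension a candidate "feature") and decoded back, with an L1 penalty encouraging sparsity. The dimensions, layer shapes, and penalty weight are illustrative assumptions, not the setup from either lab's paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Toy sparse autoencoder: maps model activations to a wider,
    mostly-zero feature vector and back, so individual features
    can be inspected (and, in principle, dialed up or down)."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = F.relu(self.encoder(activations))   # sparse feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(x, reconstruction, features, l1_coeff=1e-3):
    # Reconstruct the activations while penalizing feature activity (L1),
    # which pushes most features to zero for any given input.
    return F.mse_loss(reconstruction, x) + l1_coeff * features.abs().mean()

# Hypothetical example: 512-dim activations, 4096 candidate features.
sae = SparseAutoencoder(d_model=512, d_features=4096)
x = torch.randn(8, 512)            # stand-in for captured model activations
recon, feats = sae(x)
loss = sae_loss(x, recon, feats)
loss.backward()
```

Because only a handful of features fire for any given input, each one can be examined on its own, and scaling a feature's activation up or down is the kind of "steering" the article refers to.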
OpenAI on Thursday disclosed that it took steps to cut off five covert influence operations (IO) originating from China, Iran, Israel, and Russia that sought to abuse its artificial intelligence...