Security News

OpenAI’s GPT-4 Can Autonomously Exploit 87% of One-Day Vulnerabilities, Study Finds
2024-04-26 00:40

The GPT-4 large language model from OpenAI can exploit real-world vulnerabilities without human intervention, a new study by University of Illinois Urbana-Champaign researchers has found. In the researchers' tests, GPT-4 autonomously exploited 87% of the one-day vulnerabilities it was given.

OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories
2024-04-17 10:15

OpenAI's GPT-4 can successfully exploit real-world vulnerabilities when given their CVE security advisories, outperforming other models and open-source vulnerability scanners tested.

Whizkids jimmy OpenAI, Google's closed models
2024-03-13 08:34

Boffins have managed to pry open closed AI services from OpenAI and Google with an attack that recovers an otherwise hidden portion of transformer models. "We thereby confirm, for the first time, that these black-box models have a hidden dimension of 1024 and 2048, respectively. We also recover the exact hidden dimension size of the gpt-3.5-turbo model, and estimate it would cost under $2,000 in queries to recover the entire projection matrix."
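The core of the researchers' approach is a linear-algebra observation: if an API exposes full logit vectors, every response lies in a subspace whose dimension equals the model's hidden size, because the logits are a hidden state multiplied by a fixed projection matrix. The sketch below illustrates that idea on a toy stand-in model; the sizes, function names, and simulated "API" are illustrative assumptions, not the actual attack code or OpenAI's API.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_dim = 500, 64  # toy stand-ins for a real model's sizes

# Hidden output-projection matrix: logits = W @ hidden_state
W = rng.normal(size=(vocab_size, hidden_dim))

def query_logits() -> np.ndarray:
    """Simulate one black-box API call returning a full logit vector."""
    z = rng.normal(size=hidden_dim)  # unknown internal hidden state
    return W @ z

# Collect more logit vectors than the suspected hidden dimension.
logits = np.stack([query_logits() for _ in range(200)])

# The stacked responses span only the column space of W, so their
# numerical rank reveals the hidden dimension of the black-box model.
recovered = int(np.linalg.matrix_rank(logits))
print(recovered)  # 64
```

In practice the attack must work from restricted APIs (top-k logprobs, logit bias) rather than full logit vectors, which is where most of the paper's query-cost engineering goes, but the rank argument above is why the hidden dimension leaks at all.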

OpenAI’s Sora Generates Photorealistic Videos
2024-02-16 21:37

OpenAI released on Feb. 15 an impressive new text-to-video model called Sora that can create photorealistic or cartoony moving images from natural language text prompts. Sora isn't available to the public yet; instead, OpenAI released Sora to red teamers - security researchers who mimic techniques used by threat actors - to assess possible harms or risks.

OpenAI blocks state-sponsored hackers from using ChatGPT
2024-02-15 15:56

OpenAI has removed accounts used by state-sponsored threat groups from Iran, North Korea, China, and Russia that were abusing its artificial intelligence chatbot, ChatGPT. The AI research organization took action against specific accounts associated with the hacking groups that were misusing its large language model services for malicious purposes after receiving key information from Microsoft's Threat Intelligence team. Forest Blizzard [Russia]: Utilized ChatGPT to conduct research into satellite and radar technologies pertinent to military operations and to optimize its cyber operations with scripting enhancements.

OpenAI shuts down China, Russia, Iran, N Korea accounts caught doing naughty things
2024-02-15 00:10

OpenAI has shut down five accounts it asserts were used by government agents to generate phishing emails and malicious software scripts as well as research ways to evade malware detection. "We disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard," the OpenAI team wrote.

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks
2024-02-14 14:39

Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber...

OpenAI rolls out imperfect fix for ChatGPT data leak flaw
2023-12-21 16:44

OpenAI has mitigated a data exfiltration bug in ChatGPT that could potentially leak conversation details to an external URL. According to the researcher who discovered the flaw, the mitigation isn't perfect, so attackers can still exploit it under certain conditions. Security researcher Johann Rehberger discovered a technique to exfiltrate data from ChatGPT and reported it to OpenAI in April 2023.

OpenAI Is Not Training on Your Dropbox Documents—Today
2023-12-19 12:09

There's a rumor flying around the Internet that OpenAI is training foundation models on your Dropbox documents. Dropbox isn't sharing all of your documents with OpenAI. But here's the problem: we don't trust OpenAI. We don't trust tech corporations.

GuardRail: Open-source tool for data analysis, AI content generation using OpenAI GPT models
2023-12-14 07:32

GuardRail OSS is an open-source project delivering practical guardrails to ensure responsible AI development and deployment. GuardRail OSS offers an API-driven framework for advanced data analysis, bias mitigation, sentiment analysis, content classification, and oversight tailored to an organization's specific AI needs.