Security News
Reduced traffic to your website or app becomes a real problem when users get their answers directly through ChatGPT and its plugins and no longer need to find or visit your pages. Worried about ChatGPT scraping your content? Learn how to outsmart AI bots, defend your content, and secure your web traffic.
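A common first step, sketched below, is to disallow OpenAI's documented crawler user agents in robots.txt. GPTBot (the training-data crawler) and ChatGPT-User (used for live browsing and plugin requests) are documented OpenAI user agents; note that honoring robots.txt is voluntary, so this is a request rather than an enforcement mechanism.

```
# robots.txt — ask OpenAI's crawlers to stay off the site.
# GPTBot gathers training data; ChatGPT-User handles live browsing/plugin fetches.
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /
```

Determined scrapers can ignore this, so sites that need stronger guarantees typically pair it with user-agent or IP-range filtering at the web server or CDN.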
Today, OpenAI released ChatGPT Enterprise, an enterprise-grade version of its popular generative AI chatbot. ChatGPT Enterprise offers enhanced security and privacy controls intended for business use, along with unlimited access to a high-speed version of GPT-4, the large language model underlying ChatGPT.
Read about WormGPT, a new tool advertised on the Dark Web. As artificial intelligence technologies such as ChatGPT continue to improve, so does their potential for misuse by cybercriminals, with stolen ChatGPT credentials and jailbreak prompts already being traded on the Dark Web.
Enterprise organizations are experiencing approximately 183 incidents of sensitive data being posted to ChatGPT per 10,000 users each month, according to Netskope. Based on data from millions of enterprise users globally, researchers found that generative AI app usage is growing rapidly, up 22.5% over the past two months, amplifying the chances of users exposing sensitive data.
An overwhelming majority of respondents familiar with ChatGPT were concerned about the risks it poses to security and safety, according to Malwarebytes. Machine learning models like ChatGPT are "black boxes" with emergent properties that appear suddenly and unexpectedly as the amount of computing power used to create them increases.
The rise of generative AI apps and GPT services exacerbates this issue, with employees across departments rapidly adding the latest and greatest AI apps to their productivity arsenal without the security team's knowledge: from engineering apps for code review and optimization to marketing, design, and sales apps for content and video creation, image generation, and email automation.
ChatGPT can be used to generate phishing sites, but could it also be used to reliably detect them? Security researchers have tried to answer that question. What surprised them was that ChatGPT managed to detect potential phishing targets.
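The coverage doesn't spell out the researchers' exact setup, but the general idea can be sketched as a simple prompt-classification call against the OpenAI chat API. Everything below (model choice, prompt wording, and the classify_url helper) is an illustrative assumption, not the study's methodology.

```python
# Minimal sketch: ask a GPT model whether a URL looks like phishing.
# Requires the openai Python package (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def classify_url(url: str) -> str:
    """Return the model's verdict on a single URL (illustrative helper)."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": "You are a security analyst. Reply with 'phishing' or "
                           "'benign', followed by a one-line justification.",
            },
            {"role": "user", "content": f"Does this URL look like a phishing site? {url}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical lookalike domain used purely for demonstration.
    print(classify_url("http://paypa1-secure-login.example.com/verify"))
```

A real deployment would also need rate limiting, result caching, and independent validation, since a language model's verdict on its own is not a reliable phishing feed.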
Compromised ChatGPT account credentials were found within the logs of info-stealing malware traded on illicit dark web marketplaces over the past year, according to Group-IB. The number of available logs containing compromised ChatGPT accounts peaked at 26,802 in May 2023. Unauthorized access to ChatGPT accounts may expose confidential or sensitive information, which can be exploited for targeted attacks against companies and their employees.
Singapore-based threat intelligence outfit Group-IB has found ChatGPT credentials in more than 100,000 stealer logs traded on the dark web in the past year. "Group-IB's experts highlight that more and more employees are taking advantage of the chatbot to optimize their work, be it software development or business communications," said the company, adding that demand for account credentials was gaining "significant popularity."
Over 100,000 compromised OpenAI ChatGPT account credentials have found their way onto illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News.