Security News

Enterprise organizations are experiencing approximately 183 incidents of sensitive data being posted to ChatGPT per 10,000 users each month, according to Netskope. Based on data from millions of enterprise users globally, researchers found that generative AI app usage is growing rapidly, up 22.5% over the past two months, amplifying the chances of users exposing sensitive data.
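Netskope's report does not detail its detection internals, but incidents like these are typically caught by a data loss prevention (DLP) layer inspecting prompts before they leave the network. Below is a minimal sketch of that kind of pattern-based check; the patterns and function names are illustrative assumptions, not Netskope's implementation:

```python
import re

# Illustrative detectors only; a real DLP engine uses far richer rules,
# classifiers, and context than a handful of regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

if __name__ == "__main__":
    hits = scan_prompt("Summarize: customer SSN 123-45-6789, key AKIA0123456789ABCDEF")
    if hits:
        print(f"Blocked outbound prompt: contains {', '.join(hits)}")
```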

An overwhelming majority of respondents familiar with ChatGPT were concerned about the risks it poses to security and safety, according to Malwarebytes. Machine learning models like ChatGPT are "black boxes" with emergent properties that appear suddenly and unexpectedly as the amount of computing power used to create them increases.

The rise of generative AI apps and GPT services exacerbates this issue, with employees across all departments rapidly adding the latest AI apps to their productivity arsenal without the security team's knowledge, from engineering tools for code review and optimization to marketing, design, and sales apps for content and video creation, image generation, and email automation.

ChatGPT can be used to generate phishing sites, but could it also be used to reliably detect them? Security researchers have tried to answer that question and were surprised to find that ChatGPT managed to detect potential phishing targets.
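The researchers' exact prompts are not reproduced here, but an experiment along these lines can be wired up with OpenAI's Python client. The sketch below is an assumption of how such a classifier might look; the model name and prompt wording are illustrative, not the study's methodology:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def looks_like_phishing(url: str) -> str:
    """Ask the model whether a URL looks like phishing and, if so,
    which brand it likely impersonates."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute as needed
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Answer 'phishing' or "
                        "'benign', and name the likely impersonated brand."},
            {"role": "user", "content": f"Classify this URL: {url}"},
        ],
        temperature=0,  # deterministic output for repeatable evaluation
    )
    return resp.choices[0].message.content

print(looks_like_phishing("http://paypa1-security-update.example.com/login"))
```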

Compromised credentials were found within the logs of info-stealing malware traded on illicit dark web marketplaces over the past year, according to Group-IB. The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023. Unauthorized access to ChatGPT accounts may expose confidential or sensitive information, which can be exploited for targeted attacks against companies and their employees.

Singapore-based threat intelligence outfit Group-IB has found ChatGPT credentials in more than 100,000 stealer logs traded on the dark web in the past year. "Group-IB's experts highlight that more and more employees are taking advantage of the chatbot to optimize their work, be it software development or business communications," said the company, adding that demand for account credentials was gaining "significant popularity."

Over 100,000 compromised OpenAI ChatGPT account credentials have found their way onto illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News.

More than 101,000 ChatGPT user accounts have been stolen by information-stealing malware over the past year, according to dark web marketplace data. Cyberintelligence firm Group-IB reports having identified over a hundred thousand info-stealer logs on various underground websites containing ChatGPT accounts, with the peak observed in May 2023, when threat actors posted 26,800 new ChatGPT credential pairs.
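Stealer logs are typically flat text dumps of credentials harvested from a victim's browser, which is how analysts can count exposed ChatGPT accounts at scale. A rough sketch of that triage step follows; the URL/USER/PASS layout is a simplified assumption, as real stealer families each use their own formats:

```python
from pathlib import Path

# Assumed simplified layout: "URL: ...", "USER: ...", "PASS: ..." triples.
# Real stealer-log formats vary by malware family.
def extract_chatgpt_creds(log_path: Path) -> list[tuple[str, str]]:
    creds, url, user = [], None, None
    for line in log_path.read_text(errors="ignore").splitlines():
        if line.startswith("URL:"):
            url = line[4:].strip()
        elif line.startswith("USER:"):
            user = line[5:].strip()
        elif line.startswith("PASS:") and url and "chat.openai.com" in url:
            creds.append((user, line[5:].strip()))
    return creds

if __name__ == "__main__":
    for user, _ in extract_chatgpt_creds(Path("stealer_dump.txt")):
        print(f"exposed ChatGPT account: {user}")
```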

What risks do businesses face regarding compliance with data protection laws when using ChatGPT? ChatGPT is not exempt from data protection laws and standards, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS), and the Consumer Privacy Protection Act.
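One common mitigation is redacting personal data client-side before a prompt ever leaves the organization. The sketch below shows the idea; the patterns are illustrative assumptions and are not, on their own, a compliance guarantee under any of the regimes above:

```python
import re

# Illustrative redaction rules; real compliance programs pair this with
# policy, audit logging, and human review rather than regexes alone.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(prompt: str) -> str:
    """Strip common personal-data patterns before a prompt leaves the org."""
    for rx, token in REDACTIONS:
        prompt = rx.sub(token, prompt)
    return prompt

print(redact("Reply to jane.doe@example.com about card 4111 1111 1111 1111"))
```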

GenAI tools such as ChatGPT have brought significant risks to organizations' sensitive data. The report "Revealing the True GenAI Data Exposure Risk" provides crucial insights for data protection stakeholders and empowers them to take proactive measures.