Security News
ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, Generative AI innovation also introduces a new...
FraudGPT is the evil counterpart to ChatGPT. Criminals use it to target businesses with phishing emails and scams with unprecedented speed and accuracy. The AI can be prompted to create highly realistic phishing emails, perfected down to a business's tone and writing style, that encourage victims to hand over sensitive information such as bank details or corporate login credentials.
ChatGPT has attracted hundreds of millions of users and was initially praised for its transformative potential. However, concerns about safety controls and unpredictability have landed it on IT leaders' lists of apps to ban in the workplace.
Reduced traffic to your website or app becomes problematic, as users getting answers directly through ChatGPT and its plugins no longer need to find or visit your pages. Worried about ChatGPT scraping your content? Learn how to outsmart AI bots, defend your content, and secure your web traffic.
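One common first line of defense against AI scrapers is a robots.txt directive; OpenAI's documented training-data crawler identifies itself as GPTBot. A minimal sketch (note that robots.txt is advisory and only deters compliant crawlers):

```
# robots.txt - ask OpenAI's GPTBot crawler to stay out site-wide
User-agent: GPTBot
Disallow: /

# Other crawlers remain unaffected
User-agent: *
Allow: /
```

Non-compliant bots ignore this file, so sites that need stronger guarantees typically pair it with user-agent or IP-range blocking at the web server or CDN layer.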
Today, OpenAI released ChatGPT Enterprise, an enterprise-grade version of its popular generative AI chatbot. ChatGPT Enterprise offers enhanced security and privacy for business use, as well as unlimited access to a high-speed version of GPT-4, ChatGPT's underlying large language model.
Read about WormGPT, a new tool advertised on the Dark Web. As artificial intelligence technology such as ChatGPT continues to improve, so does its potential for misuse by cybercriminals, with stolen ChatGPT credentials and jailbreak prompts already circulating on the Dark Web.
For every 10,000 enterprise users, an enterprise organization experiences approximately 183 incidents of sensitive data being posted to ChatGPT per month, according to Netskope. Based on data from millions of enterprise users globally, researchers found that generative AI app usage is growing rapidly, up 22.5% over the past two months, amplifying the chances of users exposing sensitive data.
An overwhelming number of respondents familiar with ChatGPT were concerned about the risks it poses to security and safety, according to Malwarebytes. Machine learning models like ChatGPT are "black boxes" with emergent properties that appear suddenly and unexpectedly as the amount of computing power used to create them increases.
The rise of generative AI apps and GPT services exacerbates this issue, with employees across all departments rapidly adding the latest and greatest AI apps to their productivity arsenal without the security team's knowledge. These range from engineering apps for code review and optimization to marketing, design, and sales apps for content and video creation, image generation, and email automation.
ChatGPT can be used to generate phishing sites, but could it also be used to reliably detect them? Security researchers have tried to answer that question, and were surprised to find that ChatGPT managed to identify not only phishing URLs but also the brands they were likely impersonating.
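The researchers' exact prompts aren't published here, but the general approach of asking an LLM to classify a suspect URL can be sketched as below. The prompt wording, model name, and example URL are hypothetical; the endpoint and request shape follow OpenAI's public chat completions API, and the network call requires a real API key.

```python
import json
import urllib.request

# OpenAI's chat completions endpoint
API_URL = "https://api.openai.com/v1/chat/completions"


def build_prompt(url: str) -> str:
    """Compose a classification prompt for a suspect URL (hypothetical wording)."""
    return (
        "You are a security analyst. Does the following URL look like a phishing "
        "site, and if so, which brand does it likely impersonate? "
        "Answer with PHISHING or BENIGN on the first line, then a short reason.\n"
        f"URL: {url}"
    )


def parse_verdict(reply: str) -> str:
    """Extract the PHISHING/BENIGN label from the model's free-text reply."""
    head = reply.strip().upper()
    if head.startswith("PHISHING"):
        return "PHISHING"
    if head.startswith("BENIGN"):
        return "BENIGN"
    return "UNKNOWN"


def classify(url: str, api_key: str, model: str = "gpt-4o-mini") -> str:
    """Send the prompt to the API and parse the verdict (needs a valid key)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": build_prompt(url)}],
    }).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    return parse_verdict(reply)


if __name__ == "__main__":
    # Offline demo: show the prompt that would be sent for a suspicious URL
    print(build_prompt("http://paypa1-login.example.com/verify"))
```

Because the model replies in free text, the parser is deliberately forgiving; production use would add retries and a stricter response format.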