Security News

DEF CON's AI Village will host the first public assessment of large language models at the 31st edition of the hacker convention this August, aimed at finding bugs in and uncovering the potential for misuse of AI models. During the conference, red teams will put LLMs from some of the leading vendors, such as Anthropic, Google, Hugging Face, NVIDIA, OpenAI, Stability, and Microsoft, to the test.

At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI will all open up their models for attack. The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training for AI applications.

62% of business leaders cite customer retention as a top benefit of personalization, while nearly 60% say personalization is an effective strategy for acquiring new customers. To power even more sophisticated real-time customer experiences, the vast majority of businesses are turning to AI to harness high volumes of real-time data and power their personalization efforts.

"Since the beginning of 2023 until the end of April, out of 13,296 new domains created related to ChatGPT or OpenAI, 1 out of every 25 new domains were either malicious or potentially malicious," Check Point researchers have shared on Tuesday.On Wednesday, Meta said that, since March 2023, they've blocked 1,000+ malicious links leveraging ChatGPT as a lure from being shared across their technologies.

The success of ChatGPT, a text-generation chatbot, has sparked widespread interest in generative AI among millions of people worldwide. According to Jumio's research, 67% of consumers globally are aware of generative AI technologies, and in certain markets, such as Singapore, 45% have utilized an application that employs such technologies.

Meta says it has shut down over 1,000 links related to ChatGPT that lead its users to malware, as criminals seek to profit from the current craze for generative AI. ChatGPT has quickly bagged more than 100 million users, encouraging many organizations to explore how generative AI might help them increase productivity and profit. Scammers are thinking along the same lines, offering links and other stuff related to the chat bot to draw people into malicious websites that steal their info or offer downloads laced with malware.

We will discuss how organizations can proactively improve their security posture by embracing technology and implementing best practices to defend against these advanced threats. One of the primary ways web applications can be targeted is through vulnerability exploitation searches, where attackers focus on known vulnerabilities in web servers, databases, content management systems, and third-party libraries.

Generative AI has captured the imagination of millions worldwide, largely driven by the recent success of ChatGPT, the text-generation chatbot. Our new research showed that globally, 67% of consumers have heard of generative AI technologies, and in some markets, like Singapore, almost half have used an application that uses them.

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. Many AI products are deployed without institutions fully understanding the security risks they pose.

Are we moving too fast with AI? This is a central question both inside and outside the tech industry, given the recent tsunami of attention paid to ChatGPT and other generative AI tools. Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?