Security News

CLEARVIEW AI collects photographs from a wide range of websites, including social networks, and sells access to its database of images of people through a search engine in which an individual can be looked up using a photograph. Worse still, France's CNIL castigated Clearview for clinging to the very data it should never have collected in the first place.

Ted Chiang has an excellent essay in the New Yorker: "Will A.I. Become the New McKinsey?".

First, a trustworthy AI system must be controllable by the user — a requirement well within the technical capabilities of AI systems.

Last week, the Biden administration articulated aims to put guardrails around generative and other AI, even as attackers grow bolder in using the technology.

DEF CON's AI Village will host the first public assessment of large language models at the 31st edition of the hacker convention this August, with the goal of finding bugs in AI models and uncovering their potential for misuse. During the conference, red teams will put LLMs from leading vendors — Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI — to the test.

The event will rely on an evaluation platform developed by Scale AI, a California company that produces training data for AI applications.

62% of business leaders cite customer retention as a top benefit of personalization, and nearly 60% say personalization is an effective strategy for acquiring new customers. To power even more sophisticated real-time customer experiences, the vast majority of businesses are turning to AI to harness high volumes of real-time data for their personalization efforts.

"From the beginning of 2023 through the end of April, of the 13,296 new domains created related to ChatGPT or OpenAI, 1 out of every 25 was malicious or potentially malicious," Check Point researchers said on Tuesday. On Wednesday, Meta said that, since March 2023, it has blocked more than 1,000 malicious links leveraging ChatGPT as a lure from being shared across its technologies.

The success of ChatGPT, a text-generation chatbot, has sparked widespread interest in generative AI among millions of people worldwide. According to Jumio's research, 67% of consumers globally are aware of generative AI technologies, and in certain markets, such as Singapore, 45% have used an application that employs them.

Meta says it has shut down more than 1,000 links related to ChatGPT that led its users to malware, as criminals seek to profit from the current craze for generative AI. ChatGPT has quickly amassed more than 100 million users, encouraging many organizations to explore how generative AI might increase productivity and profit. Scammers are thinking along the same lines, offering chatbot-related links and downloads that draw people into malicious websites that steal their information or serve malware.