Security News

Google combats AI misinformation with Search labels, adds dark web security upgrades
2023-05-15 23:18

Google will add artificial intelligence to several online safety features and give users more insight into whether their information might have been posted on the dark web, the tech giant announced during the Google I/O conference on May 10. Google offers AI image generation and plans to roll out markups that will label those images as AI-generated in Search.

VirusTotal AI code analysis expands Windows, Linux script support
2023-05-15 19:54

Google has added support for more scripting languages to VirusTotal Code Insight, a recently introduced artificial intelligence-based code analysis feature. While launched only with support for analyzing a subset of PowerShell files, Code Insight can now also spot malicious Batch, Command Prompt, Shell, and VBScript scripts.

Zut alors! Raclage crapuleux! Clearview AI in 20% more trouble in France
2023-05-15 18:36

CLEARVIEW AI collects photographs from a wide range of websites, including social networks, and sells access to its database of images of people through a search engine in which an individual can be searched using a photograph. Worse still, CNIL castigated Clearview for trying to cling onto the very data it shouldn't have collected in the first place.

Ted Chiang on the Risks of AI
2023-05-12 14:00

Ted Chiang has an excellent essay in the New Yorker: "Will A.I. Become the New McKinsey?". This is the dream of many A.I. researchers.

#AI
Building Trustworthy AI
2023-05-11 11:17

First, a trustworthy AI system must be controllable by the user. These requirements are all well within the technical capabilities of AI systems.

#AI
White House addresses AI’s risks and rewards as security experts voice concerns about malicious use
2023-05-09 14:24

The Biden administration, last week, articulated aims to put guardrails around generative and other AI, while attackers get bolder using the technology. The post White House addresses AI’s risks...

Finding bugs in AI models at DEF CON 31
2023-05-09 08:09

DEF CON's AI Village will host the first public assessment of large language models at the 31st edition of the hacker convention this August, aimed at finding bugs in and uncovering the potential for misuse of AI models. During the conference, red teams will put LLMs from some of the leading vendors, such as Anthropic, Google, Hugging Face, NVIDIA, OpenAI, Stability, and Microsoft, to the test.

#AI
AI Hacking Village at DEF CON This Year
2023-05-08 15:29

At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI will all open up their models for attack. The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training for AI applications.

Consumer skepticism is the biggest barrier to AI-driven personalization
2023-05-08 03:30

62% of business leaders cite customer retention as a top benefit of personalization, while nearly 60% say personalization is an effective strategy for acquiring new customers. To power even more sophisticated real-time customer experiences, the vast majority of businesses are turning to AI to harness high volumes of real-time data and power their personalization efforts.

#AI
ChatGPT and other AI-themed lures used to deliver malicious software
2023-05-04 10:32

"Since the beginning of 2023 until the end of April, out of 13,296 new domains created related to ChatGPT or OpenAI, 1 out of every 25 new domains were either malicious or potentially malicious," Check Point researchers have shared on Tuesday.On Wednesday, Meta said that, since March 2023, they've blocked 1,000+ malicious links leveraging ChatGPT as a lure from being shared across their technologies.