Security News

The identity verification market is growing rapidly: in recent years, many solutions have emerged to help businesses establish trust and onboard remote users.

Malicious Google Search ads for generative AI services such as OpenAI's ChatGPT and Midjourney are directing users to sketchy websites as part of a BATLOADER campaign designed to deliver the RedLine Stealer malware. BATLOADER is loader malware propagated via drive-by downloads: users searching for certain keywords are shown bogus ads that, when clicked, redirect them to rogue landing pages hosting the malware.

Fake ChatGPT apps have popped up in both Google Play and the Apple App Store. "Scammers have and always will use the latest trends or technology to line their pockets. ChatGPT is no exception. With interest in AI and chatbots arguably at an all-time high, users are turning to the Apple App and Google Play Stores to download anything that resembles ChatGPT," said Sean Gallagher, principal threat researcher at Sophos.

Google will add artificial intelligence to several online safety features and give users more insight into whether their information has been posted on the dark web, the company announced at the Google I/O conference on May 10. Google also offers AI image generation and plans to roll out markup that will label such images as AI-generated in Search.

Google has added support for more scripting languages to VirusTotal Code Insight, its recently introduced AI-based code analysis feature. Launched with support for analyzing only a subset of PowerShell files, Code Insight can now also spot malicious Batch, Command Prompt, Shell, and VBScript scripts.

Clearview AI collects photographs from a wide range of websites, including social networks, and sells access to its database of images of people through a search engine in which an individual can be looked up using a photograph. Worse still, France's data protection authority, the CNIL, castigated Clearview for trying to cling to the very data it should not have collected in the first place.

Ted Chiang has an excellent essay in the New Yorker, "Will A.I. Become the New McKinsey?", likening A.I. to management consulting as a way for corporations to concentrate wealth while deflecting accountability. This, he suggests, is the dream of many A.I. researchers.

First, a trustworthy AI system must be controllable by the user. Requirements like this are well within the technical capabilities of today's AI systems.

Last week the Biden administration articulated its aims to put guardrails around generative and other AI, even as attackers grow bolder in using the technology.

DEF CON's AI Village will host the first public assessment of large language models at the 31st edition of the hacker convention this August, with the aim of finding bugs in, and uncovering the potential for misuse of, AI models. During the conference, red teams will put LLMs from leading vendors, including Anthropic, Google, Hugging Face, NVIDIA, OpenAI, Stability, and Microsoft, to the test.