Security News

Frustration grows over Google's AI Overviews feature, how to disable
2024-05-19 17:43

Since Google enabled its AI-powered search feature, many people have tried and failed to disable the often-incorrect AI Overviews that now appear in regular search results. AI Overviews, formerly known as the "Search Generative Experience," is Google's new search feature that summarizes web content using its in-house LLM. Google says AI Overviews appear only when the search engine believes they can provide more value than traditional blue links.
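
One widely reported workaround is forcing Google's plain "Web" results tab, which does not show AI Overviews, by adding the udm=14 parameter to the search URL. The snippet below is a minimal sketch, not taken from the article, and assumes that parameter keeps behaving as reported; Google could change or remove it at any time.

    from urllib.parse import urlencode

    def web_only_search_url(query: str) -> str:
        # "udm=14" is the URL parameter widely reported to switch Google
        # to its plain "Web" results tab, which omits AI Overviews.
        # Treat the parameter name and behavior as an assumption.
        return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

    # Example: the printed URL pattern can be registered as a custom
    # search engine in a browser, with the query replaced by %s.
    print(web_only_search_url("example query"))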

Is an open-source AI vulnerability next?
2024-05-16 05:30

Let's explore why open-source AI security is lacking and what security professionals can do to improve it. First, it's essential to acknowledge that AI is not something different from software; it is software.

Cloud security incidents make organizations turn to AI-powered prevention
2024-05-16 03:30

Cloud security incidents are alarmingly on the rise, with 61% of organizations reporting breaches within the last year, marking a significant increase from 24% the year before, according to Check Point.

Google Launches AI-Powered Theft and Data Protection Features for Android Devices
2024-05-15 17:00

Google has announced a slew of privacy and security features in Android, including a suite of advanced protection features to help secure users' devices and data in the event of a theft.

FCC names and shames Royal Tiger AI robocall crew
2024-05-14 21:30

Tailoring responsible AI: Defining ethical guidelines for industry-specific use
2024-05-14 04:00

In this Help Net Security interview, Chris Peake, CISO & SVP at Smartsheet, explains how responsible AI should be defined by each organization to guide their AI development and usage. Peake emphasizes that implementing responsible AI requires balancing ethical considerations, industry regulations, and proactive risk assessment to ensure that AI is used transparently.

AI red-teaming tools helped X-Force break into a major tech manufacturer 'in 8 hours'
2024-05-13 14:00

Red teaming: The key ingredient for responsible AI
2024-05-13 05:24

Red teaming exercises are one of the best ways to find novel risks, making them well suited to uncovering security and safety concerns in emerging technologies like generative AI. This can be done using a combination of penetration testing, time-bound offensive hacking competitions, and bug bounty programs. With this clear focus on safety, security, and accountability, red teaming practices are likely to be viewed favorably by regulators worldwide and to align with the UK government's vision for responsible AI development.

AI’s rapid growth puts pressure on CISOs to adapt to new security risks
2024-05-13 04:30

"GenAI and AI have the potential to both disrupt and enhance security operations. As the acting gatekeepers - CISOs are under immense pressure," said Harold Rivas, CISO, Trellix. With cyberattacks on the rise, AI pressures mounting, and responsibilities growing, it's no surprise 90% of CISOs find themselves under increased pressure.

How AI affects vulnerability management in open-source software
2024-05-13 04:00

In this Help Net Security video, Itamar Sher, CEO of Seal Security, discusses how AI affects the risk and operational aspects of managing vulnerabilities in open-source software. One of the core issues around open-source vulnerability patch management has been the coupling between security patches and other code changes.
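
As a generic illustration of the vulnerability-lookup step behind open-source patch management, the sketch below queries the public OSV.dev database for known advisories affecting a single package version. It is a minimal sketch, not Seal Security's tooling; the package name and version are placeholders chosen for the example.

    import json
    import urllib.request

    def osv_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
        # Ask the public OSV.dev API which advisories affect one package
        # version; returns a list of advisory IDs (possibly empty).
        payload = json.dumps({
            "package": {"name": name, "ecosystem": ecosystem},
            "version": version,
        }).encode()
        req = urllib.request.Request(
            "https://api.osv.dev/v1/query",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        return [v["id"] for v in data.get("vulns", [])]

    # Hypothetical example: check an older requests release for known advisories.
    print(osv_vulnerabilities("requests", "2.25.0"))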