Security News

Since Google enabled its AI-powered search feature, many users have tried, and failed, to disable the often-inaccurate AI Overviews in regular search results. AI Overviews, an evolution of Google's "Search Generative Experience" experiment, is the search feature that summarizes web content using Google's in-house LLM. Google says AI Overviews appear only when the search engine believes they can provide more value than traditional blue links.

Let's explore why open-source AI security is lacking and what security professionals can do to improve it. First, it's essential to acknowledge that AI is not something different from software; it is software.

Cloud security incidents are on the rise: 61% of organizations reported a breach within the last year, up sharply from 24% the year before, according to Check...

Google has announced a slew of privacy and security features in Android, including advanced protection features to help secure users' devices and data in the event of theft. These...

In this Help Net Security interview, Chris Peake, CISO & SVP at Smartsheet, explains how responsible AI should be defined by each organization to guide their AI development and usage. Peake emphasizes that implementing responsible AI requires balancing ethical considerations, industry regulations, and proactive risk assessment to ensure that AI is used transparently.

Red teaming exercises are one of the best ways to find novel risks, making them ideal for surfacing security and safety concerns in emerging technologies like generative AI. This can be done using a combination of penetration testing, time-bound offensive hacking competitions, and bug bounty programs. With this clear focus on safety, security, and accountability, red teaming practices are likely to be viewed favorably by regulators worldwide, and they align with the UK government's vision for responsible AI development.

"GenAI and AI have the potential to both disrupt and enhance security operations. As the acting gatekeepers, CISOs are under immense pressure," said Harold Rivas, CISO, Trellix. With cyberattacks on the rise, AI pressures mounting, and responsibilities growing, it's no surprise that 90% of CISOs report being under increased pressure.

In this Help Net Security video, Itamar Sher, CEO of Seal Security, discusses how AI affects the risk and operational aspects of managing vulnerabilities in open-source software. One of the core issues around open-source vulnerability patch management has been the coupling between security patches and other code changes.
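The coupling Sher describes can be made concrete: under semantic versioning, if a security fix ships only in a newer major or minor release, consumers must absorb unrelated, potentially breaking changes just to get the patch. The sketch below is an illustrative, hypothetical helper (not part of any Seal Security tooling) that flags such coupled upgrades, assuming plain MAJOR.MINOR.PATCH version strings.

```python
# Illustrative sketch of the patch-coupling problem: a security fix that is
# only available outside the current major.minor release line drags in
# unrelated code changes along with the patch.

def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse a 'MAJOR.MINOR.PATCH' string into an integer tuple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def upgrade_is_coupled(current: str, fixed_in: str) -> bool:
    """Return True if moving to the patched release crosses a major or
    minor boundary, i.e. the security fix is coupled to other changes."""
    cur, fix = parse_semver(current), parse_semver(fixed_in)
    return fix[:2] != cur[:2]  # same major.minor => a pure patch upgrade

# A fix backported to the current release line is decoupled:
print(upgrade_is_coupled("2.4.1", "2.4.2"))   # False
# A fix available only in the next major release is coupled:
print(upgrade_is_coupled("2.4.1", "3.0.0"))   # True
```

Decoupling, in this framing, means producing the `2.4.2`-style backport so consumers can take the fix without the version jump.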