Security News

Many popular generative AI projects pose an increased security risk, and open-source projects that utilize insecure generative AI and LLMs also have a poor security posture, resulting in an environment with substantial risk for organizations, according to Rezilion. "On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails. Through our research, we aimed to convey that the open-source projects that utilize insecure generative AI and LLMs have poor security posture as well. These factors result in an environment with significant risk for organizations."

It's an open secret where this might be heading: AI will eventually become a primary cybersecurity system, one that not only assists but performs threat detection and response without human intervention, taking over threat triage in a way that matches or even surpasses what human SOC teams can do.

A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Threat actors can exploit generative AI to dupe SaaS authentication protocols: as ambitious employees devise ways for AI tools to help them accomplish more with less, so do cybercriminals.

China has a playbook to use IP theft to seize leadership in cloud computing, and other nations should band together to stop that from happening, according to Nathaniel C. Fick, the US ambassador-at-large for cyberspace and digital policy. The ambassador described China's actions in the telecoms industry as "a playbook" and warned the nation will "run it in cloud computing, they will run it in AI, they will run it in every core strategic technology area that matters."

What if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most? There are legitimate reasons to be concerned that AI could spread misinformation, overwhelm public comment processes on regulations, inundate legislators with artificial constituent outreach, help automate corporate lobbying, or even generate laws tailored to benefit narrow interests.

The rise of generative AI apps and GPT services exacerbates this issue, with employees across departments rapidly adding the latest and greatest AI apps to their productivity arsenal without the security team's knowledge. These range from engineering apps for code review and optimization to marketing, design, and sales apps for content and video creation, image generation, and email automation.

In this Help Net Security interview, Sunil Potti, GM and VP of Cloud Security at Google Cloud, discusses how new AI-powered security and networking solutions help Google Cloud customers address their most pressing security challenges and stay ahead of an ever-changing threat landscape. AI plays a significant role in Google Cloud's recently announced security and networking solutions.

While Infrastructure as Code (IaC) has gained significant popularity as organizations embrace cloud computing and DevOps practices, the speed and flexibility that IaC provides can also introduce misconfigurations and security vulnerabilities. IaC misconfigurations are mistakes or oversights in the configuration of infrastructure resources and environments that happen when using IaC tools and frameworks.
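To make that concrete, here is a minimal, hedged sketch in Python (not any particular vendor's scanner) that flags one classic IaC misconfiguration, a publicly readable S3 bucket, in a Terraform plan exported with `terraform show -json`; the plan.json filename is an assumption, while the resource_changes layout follows Terraform's documented plan JSON schema.

```python
# Minimal sketch: flag publicly readable S3 buckets in a Terraform plan.
# Assumes the plan was exported with `terraform show -json plan.out > plan.json`.
# Note: the `acl` argument applies to older AWS provider versions; newer
# providers split it into a separate aws_s3_bucket_acl resource.
import json

def find_public_buckets(plan_path: str) -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)
    flagged = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "aws_s3_bucket":
            continue
        # "after" describes the resource state the plan would create; it can
        # be null for deletions, hence the defensive defaults.
        after = (change.get("change") or {}).get("after") or {}
        if after.get("acl") in ("public-read", "public-read-write"):
            flagged.append(change.get("address", "<unknown>"))
    return flagged

if __name__ == "__main__":
    for address in find_public_buckets("plan.json"):
        print(f"publicly readable bucket: {address}")
```

Checks like this are typically run in CI before `terraform apply`, so a risky change is caught while it is still a plan rather than a live resource.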

In this Help Net Security round-up, experts in the field discuss how AI technologies will impact the cybersecurity industry in the next few years, in segments from previously recorded videos. Among them, Diego Pienknagura, VP of Growth & Global Operations at Inspectorio, talks about how AI can be a driving force for the supply chain.

Abnormal Security used its own AI models to determine that certain emails sent to its customers, later identified as phishing attacks, were probably AI-generated, according to Dan Shiebler, the firm's head of machine learning. "The danger of generative AI in email attacks is that it allows threat actors to write increasingly sophisticated content, making it more likely that their target will be deceived into clicking a link or following their instructions," he said, adding that AI can also be used to create greater personalization.
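Abnormal has not published its detection method, but one widely known signal for machine-generated text is unusually low perplexity under a language model. The Python sketch below illustrates only that general idea, using the open GPT-2 model from Hugging Face's transformers library; the model choice and the idea of thresholding are illustrative assumptions, not Abnormal's approach.

```python
# Hedged sketch of perplexity-based detection of machine-generated text.
# This is NOT Abnormal Security's (proprietary) method; it illustrates
# the general technique. Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids yields the mean cross-entropy per token.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Unusually low perplexity is weak evidence of LLM-generated text; any
# cutoff would need tuning and is omitted here deliberately.
email_body = "Dear customer, your account requires immediate verification."
print(f"perplexity: {perplexity(email_body):.1f}")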