Security News

AI applications raise legitimate questions about their usage and permissions within an organization's infrastructure: Who is using these applications, and for what purposes? Which AI applications have access to company data, and what level of access have they been granted? What information do employees share with these applications? What are the compliance implications? Each AI tool presents a potential attack surface that must be accounted for: most AI applications are SaaS-based and require OAuth tokens to connect with major business applications such as Google or O365.
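One way to start answering the question of which AI applications can reach company data is to inventory the OAuth grants users have already issued. The sketch below is a minimal, hedged example for Google Workspace: it assumes the Admin SDK Directory API is enabled, a service account with domain-wide delegation, the admin.directory.user.security scope, and placeholder account names. The analogous data for O365 lives in Microsoft Graph's servicePrincipals and oauth2PermissionGrants resources.

```python
# Minimal sketch: list third-party OAuth grants for a few Google Workspace
# users via the Admin SDK Directory API. Assumes a service account with
# domain-wide delegation; account names below are hypothetical placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
ADMIN_USER = "admin@example.com"                   # admin to impersonate
USERS = ["alice@example.com", "bob@example.com"]   # users to audit

creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES, subject=ADMIN_USER
)
directory = build("admin", "directory_v1", credentials=creds)

for user in USERS:
    tokens = directory.tokens().list(userKey=user).execute().get("items", [])
    for token in tokens:
        # displayText is the app name shown on the consent screen;
        # scopes shows which company data the app can reach.
        print(f"{user}: {token.get('displayText')} -> {token.get('scopes')}")
```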

Enter generative AI. Many cybersecurity companies - and more specifically, threat intelligence companies - are bringing generative AI to market to simplify threat intelligence and make it faster and easier to harness valuable insights from the vast pool of CTI data. The article offers insights into AI models, their importance for cybersecurity, advanced threat intelligence, CTI accessibility, and how to choose the right solution.

Many popular generative AI projects pose an increased security threat, and open-source projects that utilize insecure generative AI and LLMs also have a poor security posture, resulting in an environment with substantial risk for organizations, according to Rezilion. "On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails. Through our research, we aimed to convey that the open-source projects that utilize insecure generative AI and LLMs have poor security posture as well. These factors result in an environment with significant risk for organizations."

It's an open secret where this might be heading: AI will eventually become a primary cybersecurity system that not only assists but performs threat detection and response without human intervention, taking over threat triage in a way that matches or even surpasses what human SOC teams can do.

A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Threat actors can exploit generative AI to dupe SaaS authentication protocols: as ambitious employees devise ways for AI tools to help them accomplish more with less, so, too, do cybercriminals.

China has a playbook to use IP theft to seize leadership in cloud computing, and other nations should band together to stop that from happening, according to Nathaniel C. Fick, the US ambassador-at-large for cyberspace and digital policy. The ambassador described China's actions in the telecoms industry as "a playbook" and warned that the nation will "run it in cloud computing, they will run it in AI, they will run it in every core strategic technology area that matters."

What if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most? There are legitimate reasons to be concerned that AI could spread misinformation, break public comment processes on regulations, inundate legislators with artificial constituent outreach, help to automate corporate lobbying, or even generate laws in a way tailored to benefit narrow interests.

The rise of generative AI apps and GPT services exacerbates this issue, with employees across all departments rapidly adding the latest and greatest AI apps to their productivity arsenal without the security team's knowledge. These range from engineering apps for code review and optimization to marketing, design, and sales apps for content and video creation, image generation, and email automation.

In this Help Net Security interview, Sunil Potti, GM and VP of Cloud Security at Google Cloud, talks about how new AI-powered security and networking solutions help Google Cloud customers address their most pressing security challenges and stay ahead of an ever-changing threat landscape. AI plays a significant role in Google Cloud's recently announced security and networking solutions.

While Infrastructure as Code (IaC) has gained significant popularity as organizations embrace cloud computing and DevOps practices, the speed and flexibility it provides can also introduce misconfigurations and security vulnerabilities. IaC misconfigurations are mistakes or oversights in the configuration of infrastructure resources and environments that happen when using IaC tools and frameworks; a common example is an overly permissive storage bucket or security group that slips into a Terraform plan unnoticed.
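As an illustration, the hedged sketch below scans the JSON export of a Terraform plan (terraform show -json plan.out > plan.json) for two classic misconfigurations: a publicly readable S3 bucket ACL and a security group ingress rule open to 0.0.0.0/0. The specific resource types and checks are illustrative assumptions, not a complete policy, and purpose-built IaC scanners cover far more ground.

```python
# Minimal sketch: flag a couple of common IaC misconfigurations in a
# Terraform plan exported with `terraform show -json plan.out > plan.json`.
# The checks are illustrative, not an exhaustive policy.
import json
import sys

def iter_resources(module):
    """Yield planned resources, recursing into child modules."""
    for res in module.get("resources", []):
        yield res
    for child in module.get("child_modules", []):
        yield from iter_resources(child)

def main(path):
    with open(path) as f:
        plan = json.load(f)

    findings = []
    root = plan.get("planned_values", {}).get("root_module", {})
    for res in iter_resources(root):
        values = res.get("values", {})
        if res.get("type") == "aws_s3_bucket" and values.get("acl") in (
            "public-read", "public-read-write"
        ):
            findings.append(f"{res['address']}: bucket ACL is public ({values['acl']})")
        if res.get("type") == "aws_security_group":
            for rule in values.get("ingress") or []:
                if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                    findings.append(
                        f"{res['address']}: ingress open to 0.0.0.0/0 "
                        f"on port {rule.get('from_port')}"
                    )

    for finding in findings:
        print("MISCONFIGURATION:", finding)
    sys.exit(1 if findings else 0)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
```

Wiring a check like this into CI so the pipeline fails on findings is one way to catch such mistakes before the infrastructure is ever provisioned.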