Security News

The usage of platforms like Cash App, Zelle, and Venmo for peer-to-peer payments has surged, with scams increasing by over 58%. There has also been a corresponding 44% rise in scams stemming from the theft of personal documents, according to IDIQ. The report also highlights AI voice scams as a significant trend in 2023.

In this Help Net Security interview, Nadir Izrael, co-founder and CTO of Armis, discusses global efforts and variations in promoting responsible AI, as well as the measures needed to ensure responsible AI innovation in the United States. Asked for his initial impressions of the Biden-Harris Administration's efforts to advance responsible AI, and whether it is on the right track in managing the risks associated with AI, Izrael called the effort a proactive step in the right direction.

They raise legitimate questions about the usage and permissions of AI applications within their infrastructure: Who is using these applications, and for what purposes? Which AI applications have access to company data, and what level of access have they been granted? What information do employees share with these applications? What are the compliance implications? Each AI tool presents a potential attack surface that must be accounted for: most AI applications are SaaS-based and require OAuth tokens to connect with major business applications such as Google or O365.
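One practical way to start answering those questions is to audit the OAuth scopes each connected app has been granted. The sketch below is illustrative, not a vendor API: the scope strings are real Google OAuth scopes, but the grants list and the "risky" policy set are assumptions standing in for data you would export from your identity provider's token-audit report.

```python
# Minimal sketch: flag OAuth grants whose scopes imply broad data access.
# The scope URLs are real Google OAuth scopes; RISKY_SCOPES and the sample
# `grants` list are illustrative assumptions, not any vendor's actual API.

RISKY_SCOPES = {
    "https://www.googleapis.com/auth/drive",                 # full Drive read/write
    "https://mail.google.com/",                              # full Gmail access
    "https://www.googleapis.com/auth/admin.directory.user",  # directory admin
}

def flag_risky_grants(grants):
    """Return (app_name, risky_scopes) pairs for apps holding broad scopes.

    `grants` is a list of dicts like {"app": str, "scopes": [str, ...]},
    e.g. exported from an identity provider's token-audit report.
    """
    findings = []
    for grant in grants:
        risky = sorted(set(grant["scopes"]) & RISKY_SCOPES)
        if risky:
            findings.append((grant["app"], risky))
    return findings

# Hypothetical audit data: an AI notetaker with full Drive access,
# and an AI scheduler with a narrow read-only calendar scope.
grants = [
    {"app": "ai-notetaker", "scopes": [
        "https://www.googleapis.com/auth/drive",
        "https://www.googleapis.com/auth/userinfo.email",
    ]},
    {"app": "ai-scheduler", "scopes": [
        "https://www.googleapis.com/auth/calendar.readonly",
    ]},
]

for app, risky in flag_risky_grants(grants):
    print(f"{app}: {risky}")
```

Only the notetaker is flagged here, since the scheduler's read-only calendar scope is outside the policy set; in practice the policy would be tuned to your own compliance requirements.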

Enter generative AI. Many cybersecurity companies - and more specifically, threat intelligence companies - are bringing generative AI to market to simplify threat intelligence and make it faster and easier to harness valuable insights from the vast pool of CTI data. Gain insights into AI models, cybersecurity importance, advanced threat intelligence, CTI accessibility, and choosing the right solution.

Many popular generative AI projects pose an increased security threat, and open-source projects that utilize insecure generative AI and LLMs also have poor security posture, resulting in an environment with substantial risk for organizations, according to Rezilion. "On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails. Through our research, we aimed to convey that the open-source projects that utilize insecure generative AI and LLMs have poor security posture as well. These factors result in an environment with significant risk for organizations."

It's an open secret where this might be heading: AI will eventually become a primary cybersecurity system that not only assists but performs threat detection and response without human intervention, taking over threat triage in a way that matches or even surpasses what human SOC teams can do.

A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Threat actors can also exploit generative AI to dupe SaaS authentication protocols: as ambitious employees devise ways for AI tools to help them accomplish more with less, so too do cybercriminals.

China has a playbook to use IP theft to seize leadership in cloud computing, and other nations should band together to stop that from happening, according to Nathaniel C. Fick, the US ambassador-at-large for cyberspace and digital policy. The ambassador described China's actions in the telecoms industry as "a playbook" and warned the nation will "run it in cloud computing, they will run it in AI, they will run it in every core strategic technology area that matters."

What if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most? There are legitimate reasons to be concerned that AI could spread misinformation, break public comment processes on regulations, inundate legislators with artificial constituent outreach, help to automate corporate lobbying, or even generate laws in a way tailored to benefit narrow interests.

The rise of generative AI apps and GPT services exacerbates this issue, with employees of all departments rapidly adding the latest and greatest AI apps to their productivity arsenal without the security team's knowledge. These range from engineering apps, such as code review and optimization, to marketing, design, and sales apps, such as content and video creation, image creation, and email automation.