AI industry alarmingly unprepared for real-world attacks

Adversa has published comprehensive research on the security and trustworthiness of AI systems worldwide over the last decade.
The research also considers the impact of ongoing AI security regulations in the EU and the US. "Building trust in the security and safety of machine learning is crucial. We are asking people to put their faith in what is essentially a black box, and for the AI revolution to succeed, we must build trust. And we can't bolt security on this time. We won't have many chances at getting it right. The risks are too high - but so are the benefits," said Oliver Rochford, Adversa Advisor.
Eugene Neelou, Adversa's CTO, states: "To raise security awareness in the field of Trusted AI, we started a project more than a year ago to analyze the past decade of academic, industry, and governmental progress. The eye-opening results reveal an exponential growth of interest in testing AI systems for security and bias, and the absence of adequate defenses. Organizations should keep up with the latest threats, implement AI security awareness initiatives, and protect their AI development life cycle, especially now."
The recent exponential growth of AI has motivated governments, academia, and industry to publish more research on AI security in the past two years than in the previous two decades, totaling over 3,500 papers.
The AI industry is woefully unprepared for real-world attacks against AI systems: every one of the 60 ML models most commonly used in industry is prone to at least one vulnerability.
"Unfortunately, our investigation shows that the AI industry is alarmingly unready for the wave of coming real-world attacks against AI systems. Public perception of how trustworthy AI is will be a core criterion determining whether societies and businesses will adopt AI for good or face another AI winter," said Alex Polyakov, CEO, Adversa.
News URL
http://feedproxy.google.com/~r/HelpNetSecurity/~3/5DKiFnc8zrc/