In the rush to build AI apps, please, please don't leave security behind (Security News, March 2024)
Code components available from public repositories can contain hidden backdoors or data exfiltrators, and pre-built models and datasets can be poisoned to make apps behave in unexpected and inappropriate ways.
Backdoored or malware-spiked libraries and models, if incorporated into shipped software, could leave users of those apps open to attack as well.
"If you think of a pie chart of how you're gonna get hacked once you open up an AI department in your company or organization," Dan McInerney, lead AI security researcher at Protect AI, told The Register, "a tiny fraction of that pie is going to be model input attacks, which is what everyone talks about. And a giant portion is going to be attacking the supply chain - the tools you use to build the model themselves."
"So in theory, an attacker could have submitted any change to any repository and made it look like it came from Hugging Face, and a security update could have fooled them into accepting it. People would have just had backdoored models or insecure models in their repos and wouldn't know."
Trying to beef up security in the AI supply chain is tricky, and with so many tools and models being built and released, it's difficult to keep up.
News URL
https://go.theregister.com/feed/www.theregister.com/2024/03/17/ai_supply_chain/
Related news
- CIOs want a platform that combines AI, networking, and security (source)
- Generative AI in Security: Risks and Mitigation Strategies (source)
- Unlocking the value of AI-powered identity security (source)
- Can Security Experts Leverage Generative AI Without Prompt Engineering Skills? (source)
- Eliminating AI Deepfake Threats: Is Your Identity Security AI-Proof? (source)
- Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security (source)
- Best AI Security Tools: Top Solutions, Features & Comparisons (source)
- How AI Is Changing the Cloud Security and Risk Equation (source)
- Google claims Big Sleep 'first' AI to spot freshly committed security bug that fuzzing missed (source)
- HackerOne: Nearly Half of Security Professionals Believe AI Is Risky (source)