
Popular generative AI projects pose serious security threat
2023-06-29 04:30

Many popular generative AI projects pose an increased security threat, and open-source projects that build on insecure generative AI and LLMs also have a poor security posture, resulting in an environment of substantial risk for organizations, according to Rezilion.

"On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails. Through our research, we aimed to convey that the open-source projects that utilize insecure generative AI and LLMs have poor security posture as well. These factors result in an environment with significant risk for organizations."

Rezilion's research team investigated the security posture of the 50 most popular generative AI projects on GitHub.
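The report does not spell out its exact selection method here, but popularity on GitHub is commonly measured by star count. The sketch below uses the public GitHub search API to list the most-starred repositories under an assumed topic label ("llm"); the topic name, result count, and the helper name most_starred_repos are illustrative assumptions, not details from Rezilion's research.

    import json
    import urllib.request

    def most_starred_repos(topic: str = "llm", count: int = 10) -> list[str]:
        # Illustrative only: query GitHub's repository search API, sorted by stars.
        url = (
            "https://api.github.com/search/repositories"
            f"?q=topic:{topic}&sort=stars&order=desc&per_page={count}"
        )
        req = urllib.request.Request(
            url, headers={"Accept": "application/vnd.github+json"}
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        # Each search result carries the "owner/name" identifier in full_name.
        return [item["full_name"] for item in data.get("items", [])]

    if __name__ == "__main__":
        for full_name in most_starred_repos():
            print(full_name)

Unauthenticated GitHub API requests are tightly rate-limited, so anything beyond a quick check would normally pass a personal access token in the Authorization header.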

The research uses the Open Source Security Foundation (OpenSSF) Scorecard to objectively evaluate the LLM open-source ecosystem, highlighting the lack of maturity, gaps in basic security best practices, and potential security risks in many LLM-based projects.
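For reference, OpenSSF publishes Scorecard results for many repositories through a public API. The following is a minimal sketch, assuming results are served at api.securityscorecards.dev/projects/github.com/<owner>/<repo> as JSON with an aggregate "score" and a list of per-check results; the repository queried is only an example, not necessarily one of the 50 projects Rezilion examined.

    import json
    import urllib.request

    SCORECARD_API = "https://api.securityscorecards.dev/projects/github.com/{repo}"

    def fetch_scorecard(repo: str) -> dict:
        # Assumption: the public Scorecard API returns the published result JSON
        # for repositories that OpenSSF already scans.
        with urllib.request.urlopen(SCORECARD_API.format(repo=repo)) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        repo = "ossf/scorecard"  # example repository
        result = fetch_scorecard(repo)
        print(f"{repo}: aggregate score {result.get('score')}")
        # Per-check results typically include a name, a 0-10 score, and a reason.
        for check in result.get("checks", []):
            print(f"  {check.get('name')}: {check.get('score')} ({check.get('reason')})")

The API only returns data for repositories OpenSSF already scans; for anything else, the Scorecard CLI can be run directly against the repository to produce the same checks.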

The following best practices and guidance are recommended for the secure deployment and operation of generative AI systems: educate teams on the risks associated with adopting any new technology; evaluate and monitor the security risks related to LLMs and their open-source ecosystem; and implement robust security practices, conduct thorough risk assessments, and foster a culture of security awareness.

While there are significant security challenges concerning the adoption of generative AI models and their ecosystem, AI technologies are exciting, powerful, and here to stay.


News URL

https://www.helpnetsecurity.com/2023/06/29/generative-ai-security-risk/