Microsoft AI Researchers Accidentally Expose 38 Terabytes of Confidential Data
Microsoft on Monday said it took steps to correct a glaring security gaffe that led to the exposure of 38 terabytes of private data.
"The exposure came as the result of an overly permissive SAS token - an Azure feature that allows users to share data in a manner that is both hard to track and hard to revoke," Wiz said in a report.
Specifically, the GitHub repository's README.md file instructed developers to download the models from an Azure Storage URL whose SAS token had inadvertently been scoped to the entire storage account, thereby exposing additional private data.
In response to the findings, Microsoft said its investigation found no evidence of unauthorized exposure of customer data and that "No other internal services were put at risk because of this issue." It also emphasized that no customer action is required.
"Due to the lack of security and governance over Account SAS tokens, they should be considered as sensitive as the account key itself," the researchers said.
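The researchers' point becomes concrete when you look at how a SAS token is produced. What follows is a minimal, illustrative sketch in Python using only the standard library: it mimics the general shape of SAS signing (an HMAC-SHA256 over a canonical "string to sign", keyed by the storage account key), not Azure's exact string-to-sign format, and the function and parameter names are hypothetical.

```python
import base64
import hashlib
import hmac
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# Illustrative sketch only: real Azure SAS tokens use a documented,
# more elaborate string-to-sign. The principle shown here is the same:
# the signature is derived directly from the account key.

def make_sas_token(account_key_b64: str, resource_path: str,
                   permissions: str, valid_hours: int = 1) -> str:
    """Derive a narrowly scoped, short-lived token for one resource."""
    expiry = (datetime.now(timezone.utc) + timedelta(hours=valid_hours)
              ).strftime("%Y-%m-%dT%H:%M:%SZ")
    # Canonicalize exactly what the token grants: path, permissions, expiry.
    string_to_sign = "\n".join([resource_path, permissions, expiry])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    # The token is just query parameters appended to the resource URL.
    return urlencode({"sp": permissions, "se": expiry, "sig": sig})

# A token scoped to one read-only blob with a one-hour lifetime:
demo_key = base64.b64encode(b"not-a-real-account-key").decode()
scoped = make_sas_token(demo_key, "/container/models/model.ckpt", "r")
print(scoped)
```

Because anyone holding the account key can mint a signature for any path, permission set, and expiry, a token signed over the whole account with full permissions is effectively as powerful as the key itself, which is why Wiz recommends treating Account SAS tokens with the same care as the key and preferring narrowly scoped, short-lived tokens like the one above.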
"AI unlocks huge potential for tech companies. However, as data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards," Wiz CTO and co-founder Ami Luttwak said in a statement.
News URL
https://thehackernews.com/2023/09/microsoft-ai-researchers-accidentally.html
Related news
- Researchers Uncover Vulnerabilities in Open-Source AI and ML Models
- Microsoft Notepad to get AI-powered rewriting tool on Windows 11
- Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform
- Microsoft Ignite 2024 Unveils Groundbreaking AI, Security, and Teams Innovations
- Dell Unveils AI and Cybersecurity Solutions at Microsoft Ignite 2024
- Microsoft says it's not using your Word, Excel data for AI training
- Microsoft Fixes AI, Cloud, and ERP Security Flaws; One Exploited in Active Attacks
- Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI