Generative AI Security: Preventing Microsoft Copilot Data Exposure (Security News, October 2023)
In this post I'm going to focus specifically on data security and how your team can ensure a safe Copilot rollout.
Microsoft relies heavily on sensitivity labels to enforce DLP policies, apply encryption, and broadly prevent data leaks.
Microsoft paints a rosy picture of labeling and blocking as the ultimate safety net for your data.
The efficacy of label-based data protection is likely to degrade as AI generates orders of magnitude more data, all of which requires accurate, automatically updated labels.
It's critical to have a sense of your data security posture before your Copilot rollout.
Varonis protects thousands of Microsoft 365 customers with our Data Security Platform, which provides a real-time view of risk and the ability to automatically enforce least privilege.
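One concrete way to get a sense of your posture before a rollout is to flag files that are effectively open to the whole organization, since Copilot can surface any content the prompting user can already read. Below is a minimal illustrative sketch, assuming a hypothetical CSV permissions export (the column names and group names are invented for the example; real exports from SharePoint or a data security platform will differ):

```python
import csv
import io

# Hypothetical permissions export: each row lists a file and the groups
# that can access it. Real exports will have different columns; this is
# only an illustrative sketch.
EXPORT = """path,shared_with
/sites/finance/payroll.xlsx,Everyone
/sites/finance/budget.xlsx,Finance Team
/sites/hr/reviews.docx,Everyone;HR Team
"""

# Groups that make a file effectively organization-wide readable
# (assumed names for this sketch).
OVERSHARED_GROUPS = {"Everyone", "All Users"}

def find_overshared(export_csv: str) -> list[str]:
    """Return paths whose access list includes an org-wide group."""
    flagged = []
    for row in csv.DictReader(io.StringIO(export_csv)):
        groups = {g.strip() for g in row["shared_with"].split(";")}
        if groups & OVERSHARED_GROUPS:
            flagged.append(row["path"])
    return flagged

print(find_overshared(EXPORT))
# → ['/sites/finance/payroll.xlsx', '/sites/hr/reviews.docx']
```

Files flagged this way are candidates for tightening to least privilege before Copilot makes them trivially discoverable through natural-language prompts.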
Related news
- Microsoft Ignite 2024 Unveils Groundbreaking AI, Security, and Teams Innovations (source)
- Microsoft Fixes AI, Cloud, and ERP Security Flaws; One Exploited in Active Attacks (source)
- Best AI Security Tools: Top Solutions, Features & Comparisons (source)
- Microsoft Entra "security defaults" to make MFA setup mandatory (source)
- Microsoft Delays Windows Copilot+ Recall Release Over Privacy Concerns (source)
- How AI Is Changing the Cloud Security and Risk Equation (source)
- Google claims Big Sleep 'first' AI to spot freshly committed security bug that fuzzing missed (source)
- Microsoft Notepad to get AI-powered rewriting tool on Windows 11 (source)
- HackerOne: Nearly Half of Security Professionals Believe AI Is Risky (source)
- AI’s impact on the future of web application security (source)