Security News > 2023 > October > Generative AI Security: Preventing Microsoft Copilot Data Exposure
![Generative AI Security: Preventing Microsoft Copilot Data Exposure](/static/build/img/news/generative-ai-security-preventing-microsoft-copilot-data-exposure-medium.jpg)
In this post I'm going to focus specifically on data security and how your team can ensure a safe Copilot rollout.
Microsoft relies heavily on sensitivity labels to enforce DLP policies, apply encryption, and broadly prevent data leaks.
Microsoft paints a rosy picture of labeling and blocking as the ultimate safety net for your data.
The efficacy of label-based data protection will degrade as AI generates orders of magnitude more data, all of which requires accurate and automatically updated labels.
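The gap described above is easy to see in miniature. The sketch below is a hypothetical, simplified model of label-based blocking (the `Document` class, label names, and helper are illustrative assumptions, not Microsoft's implementation): unlabeled content sails straight past the policy, which is exactly why label coverage, not the blocking rule, is the weak point.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical document model: in Microsoft 365, Purview sensitivity labels
# would play the role of the `label` field here.
@dataclass
class Document:
    name: str
    label: Optional[str]  # None models the common case: content nobody labeled

BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def copilot_visible(docs: List[Document]) -> List[Document]:
    """Return documents a label-based policy would let an AI assistant read.

    The weakness this illustrates: unlabeled documents (label=None) pass
    straight through, so protection is only as good as label coverage.
    """
    return [d for d in docs if d.label not in BLOCKED_LABELS]

docs = [
    Document("q3-board-deck.pptx", "Highly Confidential"),  # blocked
    Document("salary-export.xlsx", None),  # sensitive, but never labeled
    Document("lunch-menu.docx", "Public"),
]
visible = copilot_visible(docs)
# salary-export.xlsx remains visible: blocking only works on labeled files
```

As AI-generated files multiply, the unlabeled slice of the pie grows with them, and so does the exposure.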
It's critical to have a sense of your data security posture before your Copilot rollout.
Varonis protects thousands of Microsoft 365 customers with our Data Security Platform, which provides a real-time view of risk and the ability to automatically enforce least privilege.
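A posture check before rollout boils down to one question: what can any user already reach? The sketch below assumes a simple in-memory inventory of files and their sharing scope (real tooling would pull this from the Microsoft Graph or a data security platform); it flags org-wide items, since anything visible organization-wide is exactly what Copilot can surface to any prompting user.

```python
from typing import Dict, List

# Hypothetical inventory: in practice this would come from an API crawl,
# not a hard-coded list.
inventory = [
    {"path": "/finance/payroll.xlsx", "shared_with": "organization"},
    {"path": "/hr/handbook.pdf", "shared_with": "organization"},
    {"path": "/eng/design.docx", "shared_with": "team"},
]

def overexposed(items: List[Dict[str, str]]) -> List[str]:
    """Paths shared org-wide: the blast radius a Copilot rollout inherits."""
    return [i["path"] for i in items if i["shared_with"] == "organization"]

exposed = overexposed(inventory)
```

Shrinking that list toward least privilege before enabling Copilot is the cheapest risk reduction available.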
Related news
- Microsoft Delays AI-Powered Recall Feature for Copilot+ PCs Amid Security Concerns (source)
- Microsoft Build 2024: Copilot AI Will Gain ‘Personal Assistant’ and Custom Agent Capabilities (source)
- Organizations go ahead with AI despite security risks (source)
- America's War on Drugs and Crime will be AI powered, says Homeland Security boss (source)
- Microsoft's Brad Smith summoned by Homeland Security committee over 'cascade' of infosec failures (source)
- AI’s rapid growth puts pressure on CISOs to adapt to new security risks (source)
- Cloud security incidents make organizations turn to AI-powered prevention (source)
- Google takes shots at Microsoft for shoddy security record with enterprise apps (source)
- Windows 11 to Deprecate NTLM, Add AI-Powered App Controls and Security Defenses (source)
- CISOs pursuing AI readiness should start by updating the org’s email security policy (source)