Generative AI Security: Preventing Microsoft Copilot Data Exposure (Security News, October 2023)

In this post I'm going to focus specifically on data security and how your team can ensure a safe Copilot rollout.
Microsoft relies heavily on sensitivity labels to enforce DLP policies, apply encryption, and broadly prevent data leaks.
Microsoft paints a rosy picture of labeling and blocking as the ultimate safety net for your data.
Label-based data protection is likely to degrade once AI generates orders of magnitude more content, all of which needs accurate, automatically updated labels.
It's critical to have a sense of your data security posture before your Copilot rollout.
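The core risk described above, that unlabeled or broadly shared files become reachable through Copilot, can be sketched as a simple pre-rollout audit. This is a hypothetical illustration, not Microsoft's or Varonis's actual tooling: `FileRecord` and `audit_exposure` are invented names, and a real audit would pull this metadata from Microsoft 365 (for example via the Microsoft Graph API) rather than from in-memory records.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FileRecord:
    # Hypothetical file metadata; in practice this would come from M365.
    path: str
    sensitivity_label: Optional[str]  # e.g. "Confidential", or None if unlabeled
    shared_org_wide: bool             # broad-access link or org-wide group

def audit_exposure(files: list[FileRecord]) -> dict[str, list[str]]:
    """Flag files Copilot could surface to the wrong user:
    unlabeled files (no label-driven DLP or encryption applies) and
    files shared org-wide regardless of their label."""
    report: dict[str, list[str]] = {"unlabeled": [], "overshared": []}
    for f in files:
        if f.sensitivity_label is None:
            report["unlabeled"].append(f.path)
        if f.shared_org_wide:
            report["overshared"].append(f.path)
    return report
```

A file can appear in both buckets, which is the worst case: no label to trigger DLP, and an audience wide enough that Copilot may quote it to anyone who asks.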
Varonis protects thousands of Microsoft 365 customers with our Data Security Platform, which provides a real-time view of risk and the ability to automatically enforce least privilege.
Related news
- Microsoft raises rewards for Copilot AI bug bounty program (source)
- Severe Security Flaws Patched in Microsoft Dynamics 365 and Power Apps Web API (source)
- Microsoft Sues Hacking Group Exploiting Azure AI for Harmful Content Creation (source)
- Microsoft Takes Legal Action Against AI “Hacking as a Service” Scheme (source)
- Microsoft sues 'foreign-based' cyber-crooks, seizes sites used to abuse AI (source)
- How AI and ML are transforming digital banking security (source)
- 3 Actively Exploited Zero-Day Flaws Patched in Microsoft's Latest Security Update (source)
- Microsoft eggheads say AI can never be made secure – after testing Redmond's own products (source)
- AI-driven insights transform security preparedness and recovery (source)
- Sage Copilot grounded briefly to fix AI misbehavior (source)