Security News > 2019 > October > AI development has major security, privacy and ethical blind spots
![AI development has major security, privacy and ethical blind spots](/static/build/img/news/alt/Data-Cybersecurity-Predictions-medium.jpg)
Security, privacy and ethics are low-priority issues for developers when modeling their machine learning solutions, according to O’Reilly.

Major issues

Security is the most serious blind spot: nearly three-quarters (73 per cent) of respondents indicated they don’t check for security vulnerabilities during model building. More than half (59 per cent) of organizations also don’t consider fairness, bias or ethical issues during ML development. Privacy is similarly neglected, with only 35 per cent checking for issues … More →

The post AI development has major security, privacy and ethical blind spots appeared first on Help Net Security.
News URL
http://feedproxy.google.com/~r/HelpNetSecurity/~3/75mrOrKdSNA/
Related news
- AI’s rapid growth puts pressure on CISOs to adapt to new security risks (source)
- Core security measures to strengthen privacy and data protection programs (source)
- Cloud security incidents make organizations turn to AI-powered prevention (source)
- Windows 11 to Deprecate NTLM, Add AI-Powered App Controls and Security Defenses (source)
- Windows’ new Recall feature: A privacy and security nightmare? (source)
- CISOs pursuing AI readiness should start by updating the org’s email security policy (source)
- Personal AI Assistants and Privacy (source)
- Anthropic’s Generative AI Research Reveals More About How LLMs Affect Security and Bias (source)
- Microsoft Revamps Controversial AI-Powered Recall Feature Amid Privacy Concerns (source)
- Apple Launches Private Cloud Compute for Privacy-Centric AI Processing (source)