Security News
Microsoft and MITRE, in collaboration with IBM, NVIDIA, Airbus, Bosch, Deep Instinct, Two Six Labs, Cardiff University, the University of Toronto, PricewaterhouseCoopers, the Software Engineering Institute at Carnegie Mellon University, and the Berryville Institute of Machine Learning, have released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help identify, respond to, and remediate attacks targeting machine learning systems.
DEF CON is perhaps the ultimate "come one, come all" hackers' convention, now in its 28th year, famously held in Las Vegas each year in fascinating juxtaposition with Black Hat USA, a corporate cybersecurity event. The DEF CON Villages are breakout zones at the event where like-minded researchers gather to attend talks and discussions in research fields ranging from Aerospace, Application Security, and AI to Social Engineering, Voting Machines, and Wireless.
Amazon Fraud Detector is a fully managed service that uses machine learning, built on more than 20 years of fraud detection expertise from Amazon.com, to automatically identify potentially fraudulent activity in real time, returning results in milliseconds, with no machine learning experience required.
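As a rough sketch of how an application might query the service, the snippet below calls the boto3 frauddetector client's get_event_prediction operation. The detector name, event type, entity, and event variables are hypothetical placeholders that would have to match resources already configured in Fraud Detector.

```python
# Minimal sketch: score a single event with Amazon Fraud Detector via boto3.
# Detector name, event type, and variable names below are assumptions for
# illustration; they must match what you have defined in Fraud Detector.
import datetime
import boto3

client = boto3.client("frauddetector", region_name="us-east-1")

response = client.get_event_prediction(
    detectorId="sample_detector",           # assumed detector name
    eventId="802454d3-f7d8-482d-97e8-c4b6db9a0428",
    eventTypeName="sample_registration",    # assumed event type
    eventTimestamp=datetime.datetime.now(datetime.timezone.utc)
        .strftime("%Y-%m-%dT%H:%M:%SZ"),
    entities=[{"entityType": "customer", "entityId": "12345"}],
    eventVariables={
        "email_address": "user@example.com",
        "ip_address": "198.51.100.7",
    },
)

# Model scores and matched rule outcomes come back in the same response.
print(response["modelScores"])
print(response["ruleResults"])
```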
Machine learning already powers image recognition, self-driving cars, and Netflix recommendations. The Hacker News recently partnered with professional trainers to offer their popular artificial intelligence online training programs at hugely discounted prices.
Abstract: Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act, the primary United States federal statute that creates liability for hacking.
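For readers unsure what "attacking" an ML system means in practice, here is a toy, self-contained illustration (not taken from the paper) of a gradient-sign evasion attack against a stand-in linear classifier; all weights and numbers are made up.

```python
# Toy evasion attack of the kind adversarial ML research demonstrates:
# nudge an input against the gradient sign so a linear model stops
# flagging it. Purely illustrative; not any real deployed system.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "deployed" linear model: flag the input when w.x + b > 0.
w = rng.normal(size=20)
b = -0.5

x = 0.1 * np.sign(w)               # an input the model flags (score > 0)
epsilon = 0.2                      # per-feature perturbation budget
x_adv = x - epsilon * np.sign(w)   # gradient of w.x + b w.r.t. x is just w

print("original score:   ", float(w @ x + b))      # positive: flagged
print("adversarial score:", float(w @ x_adv + b))  # pushed below threshold
```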
The library "Boasts a suite of tools for machine learning and data analytics tasks, all with built-in privacy guarantees," according to Naoise Holohan, a research staff member on IBM Research Europe's privacy and security team. Differential privacy allows data collectors to use mathematical noise to anonymize information, and IBM's library is special because it's machine learning functionality enables organizations to publish and share their data with rigorous guarantees on user privacy.
ABBYY launched NeoML, an open source library for building, training, and deploying machine learning models. Available now on GitHub, NeoML supports both deep learning and traditional machine learning algorithms.
The Kubeflow open-source project is a popular framework for running machine-learning tasks in Kubernetes. Because Kubeflow is a containerized service, these various tasks run as containers in the Kubernetes cluster, and each can present a path for an attacker into the core Kubernetes architecture.
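As an illustrative hygiene check rather than anything shipped with Kubeflow, the snippet below uses the official kubernetes Python client to flag Kubeflow services exposed outside the cluster, since an internet-facing dashboard or pipeline component is one such path in; it assumes the conventional "kubeflow" install namespace.

```python
# Illustrative check (not part of Kubeflow itself): list services in the
# Kubeflow namespace and flag any reachable from outside the cluster.
# Assumes Kubeflow was installed into the conventional "kubeflow" namespace.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
v1 = client.CoreV1Api()

for svc in v1.list_namespaced_service(namespace="kubeflow").items:
    if svc.spec.type in ("LoadBalancer", "NodePort"):
        print(f"exposed service: {svc.metadata.name} ({svc.spec.type})")
```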
Microsoft is sponsoring a Machine Learning Security Evasion Competition this year, with partners CUJO AI, VMRay, and MRG Effitas, the company has announced. The competition, which welcomes both machine learning practitioners and cybersecurity professionals, will allow researchers to exercise their defender and attacker skills, Microsoft says.