Security News
Cybersecurity researchers have disclosed six security flaws in the Ollama artificial intelligence (AI) framework that could be exploited by a malicious actor to perform various actions, including...
Threat actors are exploiting a CMS editor discontinued 14 years ago to compromise education and government entities worldwide to poison search results with malicious sites or scams. Search engine crawlers index the redirects and list them on Google Search results, making them an effective strategy for SEO poisoning campaigns, leveraging a trusted domain to rank malicious URLs higher for specific queries.
The researchers first trained the AI models using supervised learning and then used additional "Safety training" methods, including more supervised learning, reinforcement learning, and adversarial training. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models.
Continuous integration and continuous delivery (CI/CD) misconfigurations discovered in the open-source TensorFlow machine learning framework could have been exploited to orchestrate supply chain...
The publication, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," is a key component of NIST's broader initiative to foster the creation of reliable AI. This effort aims to facilitate the implementation of NIST's AI Risk Management Framework and aims to assist AI developers and users in understanding potential attacks and strategies to counter them, acknowledging that there is no silver bullet. "The risks of AI are as significant as the potential benefits. The latest publication from NIST is a great start to explore and categorize attacks against AI systems. It defines a formal taxonomy and provides a good set of attack classes. It does miss a few areas, such as misuse of the tools to cause harm, abuse of inherited trust by people believing AI is an authority, and the ability to de-identify people and derive sensitive data through aggregated analysis," Matthew Rosenquist, CISO at Eclipz.io commented.
A team of researchers from UC Irvine and Tsinghua University has developed a new powerful cache poisoning attack named 'MaginotDNS,' that targets Conditional DNS resolvers and can compromise entire TLDs top-level domains. The concept of DNS cache poisoning is injecting forged answers into the DNS resolver cache, causing the server to direct users who enter a domain to incorrect IP addresses, potentially leading them to malicious websites without their knowledge.
Given that we've known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it's entirely possible that bad actors have been poisoning ChatGPT for months. We don't know because OpenAI doesn't talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don't know if ChatGPT has been safely managed.
Vas pup April 7, 2023 5:56 PM. The phones that detect earthquakeshttps://www. "Google's Android operating system have on-board accelerometers - the circuitry which detects when a phone is being moved. These are most commonly used to tell the phone to re-orientate its display from portrait to landscape mode when it is tilted, for example, and also helps provide information about step-count for Google's onboard fitness tracker."
The researchers explain that attackers using search engine optimization poisoning are generally more successful "When they SEO poison the results of popular downloads associated with organizations that do not have extensive internal brand protection resources." SEO poisoning attacks consist of altering search engines results so that the first advertised links actually lead to attacker controlled sites, generally to infect visitors with malware or to attract more people on ad fraud.
In a new post by MetaMask, the developers warn of a new scam called 'Address Poisoning' that relies on poisoning the wallet's transaction history with scammer's addresses that are very similar to addresses that a user recently had transactions. The threat actor then sends the targeted sender's address a small amount of cryptocurrency, or even a $0 token transaction, from this new address so that the transaction appears in their wallet's history.