Security News
LLM-controlled droids easily jailbroken to perform mayhem, researchers warn. Science fiction author Isaac Asimov proposed three laws of robotics, and you'd never know it from the behavior of...
As chatbots become a ubiquitous element of modern life and permeate many of our human-computer interactions, they have the potential to subtly reshape how we think about both computers and our fellow human beings. Chatbots are only growing more common, and there is reason to believe they will become ever more intimate parts of our lives.
In their 2024 cybersecurity outlook, WatchGuard researchers forecast headline-stealing hacks involving LLMs, AI-based voice chatbots, modern VR/MR headsets, and more in the coming year. During 2024, the WatchGuard Threat Lab predicts that a smart prompt engineer, whether a criminal attacker or a researcher, will crack the code and manipulate an LLM into leaking private data.
In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least one more is under development, allegedly based on Google's AI experiment, Bard. Both AI-powered bots are the work of the same individual, who appears to be deep in the game of providing chatbots trained specifically for malicious purposes ranging from phishing and social engineering to exploiting vulnerabilities and creating malware.
Find out how Google Cloud approaches AI data, what privacy measures your business should keep in mind when it comes to generative AI, and how to make a machine learning application "unlearn" someone's data. Google Cloud approaches the use of personal data in AI products by covering such data under the existing Google Cloud Platform Agreement.
A new phishing attack is using Facebook Messenger chatbots to impersonate the company's support team and steal credentials used to manage Facebook pages. In a new campaign discovered by TrustWave, threat actors use chatbots to steal credentials for managers of Facebook pages, commonly used by companies to provide support or promote their services.
Phishing attacks are now using automated chatbots to guide visitors through the process of handing over their login credentials to threat actors. The URL button in the PDF takes the victim to a phishing site where they are prompted to resolve issues that supposedly made a package undeliverable.
Or, as I said when I finished playing a new online Turing Test game called Bot or Not: NAILED IT! Bot or Not is an online game that pits people against either bots or humans. The creators of Bot or Not, a Mozilla Creative Awards project conceived, designed, developed, and written by the New York City-based design and research studio Foreign Objects, say that bots are growing increasingly sophisticated these days and are proliferating both online and offline.
In this episode, Duck discusses the iPhone "Word of death", Peter shares a shocking ransomware story, and I talk about a chatbot that shows empathy. Host Anna Brading is joined by Naked Security regular Paul Ducklin, threat response expert Peter Mackenzie, and me.