Security News
Army researchers have developed a deepfake detection method intended to support state-of-the-art soldier technology for mission-essential tasks such as adversarial threat detection and recognition. Researchers at the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, in collaboration with Professor C.-C. Jay Kuo's research group at the University of Southern California, set out to tackle the significant threat that deepfakes pose to society and national security.
Cybercriminals are increasingly sharing, developing, and deploying deepfake technologies to bypass biometric security protections and to commit crimes including blackmail, identity theft, and social engineering attacks, experts warn. A sharp uptick in deepfake technology and service offerings across the dark web is the first sign that a new wave of fraud is about to crash in, according to a new report from Recorded Future, which predicts that deepfakes are on the rise among threat actors with a wide range of goals and interests.
In 2019, the director of the National Geospatial-Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the U.S. Department of Defense, implied that AI-manipulated satellite images could pose a severe national security threat. To study how satellite images can be faked, geographer Bo Zhao of the University of Washington and his team turned to an AI framework, the generative adversarial network (GAN), that has been used to manipulate other types of digital files.
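In a GAN, a generator learns to produce fakes that a discriminator cannot tell apart from real data; the two networks are trained against each other. As a rough illustration of that adversarial loop, here is a minimal PyTorch sketch. The flattened 64x64 tiles, network shapes, and placeholder data are illustrative assumptions, not the study's actual configuration (the study reportedly used the more elaborate CycleGAN variant).

```python
# Minimal GAN training loop in PyTorch -- an illustrative sketch of the
# adversarial setup behind image-forgery frameworks, not the study's code.
# All architecture choices and hyperparameters here are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 100      # size of the random noise vector fed to the generator
IMG_PIXELS = 64 * 64  # flattened grayscale "tile" standing in for imagery

# Generator: maps random noise to a synthetic image tile.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores a tile as real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG_PIXELS)   # placeholder for real image batches
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator update: learn to tell real tiles from generated ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator update: learn to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

Trained long enough on real imagery instead of random placeholders, the generator's output becomes difficult for the discriminator, and eventually for humans, to distinguish from the genuine article.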
On Wednesday, the US Senate approved proposed legislation to fund defenses against realistic computer-generated media known as deepfakes; the bill now awaits consideration in the US House of Representatives. Introduced last year by US Senators Catherine Cortez Masto and Jerry Moran, the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act aims to promote research into detecting and defending against realistic-looking fakery that can be used for deception, harassment, or misinformation.
As COVID-19 continues to threaten the world, pandemic-themed attacks are expected to persist, according to cyber threat intelligence provider Check Point Research. In a report released Tuesday titled "Securing the 'next normal'", Check Point discussed its 2021 predictions in the face of the pandemic.
Microsoft has developed a deepfake detection tool to help news publishers and political campaigns, as well as technology that lets content creators "mark" their images and videos in a way that will reveal whether the content has been manipulated after creation. Deepfake-generation technology, meanwhile, continues to improve and will go on producing ever more difficult-to-spot fakes.
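Microsoft has not published the internals of its marking scheme here, but the underlying idea, binding a cryptographic fingerprint to content at creation time so that any later edit is detectable, can be sketched in a few lines. In this hypothetical Python example, an HMAC stands in for whatever certificate-based signing the real system uses:

```python
# Illustrative sketch of hash-based content marking: a fingerprint recorded
# at creation time reveals any post-creation manipulation. A generic example
# of the concept, not Microsoft's implementation; using HMAC in place of
# certificate-based signing is an assumption for brevity.
import hashlib
import hmac

CREATOR_KEY = b"creator-secret-key"  # hypothetical signing key

def mark_content(content: bytes) -> str:
    """Produce a keyed fingerprint of the content at creation time."""
    return hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, mark: str) -> bool:
    """Recompute the fingerprint; any edit to the bytes changes it."""
    expected = hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mark)

original = b"...raw video bytes..."
mark = mark_content(original)

assert verify_content(original, mark)             # untouched content passes
assert not verify_content(original + b"x", mark)  # any manipulation fails
```

The strength of the approach is that verification does not need to judge whether content looks fake; it only needs to check that the bytes still match the fingerprint issued by the creator.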
A study published in the journal Crime Science explores the possible range and risk of AI-enabled crimes in the years ahead, from military robots and autonomous attack drones to AI-assisted stalking.
Cybercriminals are using AI and ML to exploit weaknesses such as predictable user behavior and security gaps to gain access to valuable business systems and data. Tech companies are trying to stay ahead of the game by building AI technologies that fight back against AI-driven attacks, including deepfakes.
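None of these vendors' detectors are open for inspection, but the defensive core is typically a classifier trained to separate real media from synthetic media. A minimal sketch, assuming hand-made placeholder features in place of a real feature extractor:

```python
# Minimal sketch of the defensive side: a binary classifier trained to
# separate real images from synthetic ones. The random placeholder features
# are illustrative assumptions, not any vendor's actual detector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder features: in practice these would be artifacts extracted
# from images (e.g., frequency-domain statistics or blending boundaries).
real_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 16))
fake_feats = rng.normal(loc=0.5, scale=1.0, size=(500, 16))

X = np.vstack([real_feats, fake_feats])
y = np.array([0] * 500 + [1] * 500)  # 0 = real, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The arms-race dynamic follows directly from this setup: any artifact a detector learns to key on is an artifact the next generation of fakes can be trained to suppress.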
In an RSA 2020 simulation, the Red Team compromised email accounts, created deepfake videos, and spread disinformation on Election Day in the fictional city of Adversaria. At RSA 2020, Cybereason assembled a group of journalists and other conference attendees as the Red Team, charged with creating just enough chaos to make residents of Adversaria doubt the results of the election.