Security News
More than 1,000 technology experts and academics from organizations such as MIT, Microsoft, Harvard and Google have signed an open letter denouncing a forthcoming paper describing artificial intelligence algorithms that purportedly predict crime from a person's face alone, calling the work out for promoting racial bias and propagating a #TechtoPrisonPipeline. The paper describes "automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal," according to a press release about the research.
Tech giants love to portray themselves as forces for good, and as the United States was gripped by anti-racism protests, a number of them publicly disavowed selling controversial facial recognition technology to police forces. The technology has a dark side, with facial recognition integrated into China's massive public surveillance system and its social credit experiment, in which even minor infractions of public norms can result in sanctions.
Microsoft is joining Amazon and IBM in halting the sale of facial recognition technology to police departments. "We will not sell facial recognition tech to police in the U.S. until there is a national law in place. We must pursue a national law to govern facial recognition grounded in the protection of human rights," Microsoft President Brad Smith said during a virtual event hosted by the Washington Post.
While newer regulations like the EU's General Data Protection Regulation and the California Consumer Privacy Act are steps in the right direction for protecting consumer privacy, facial recognition technology needs tighter regulation of its own.
The company now says its masked facial recognition program has reached 95 percent accuracy in lab tests, and even claims it is more accurate in real life, where its cameras take multiple photos of a person if the first identification attempt fails. Counterintuitively, training facial recognition algorithms to recognize masked faces involves throwing data away.
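That "throwing data away" point typically refers to occlusion-style augmentation: synthetically covering the lower half of training faces so the model learns to rely on the periocular region rather than the mouth and chin. Below is a minimal sketch of that idea in PyTorch; the mask geometry, gray fill value, and probability are illustrative assumptions, not the company's actual pipeline.

```python
import torch

def mask_lower_face(batch: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Randomly cover the lower half of face crops with a flat 'mask' color.

    batch: float tensor of shape (N, C, H, W), values in [0, 1].
    p: per-image masking probability (illustrative default, not from the source).
    """
    out = batch.clone()
    n, _, h, _ = out.shape
    for i in range(n):
        if torch.rand(1).item() < p:
            # Occlude roughly the nose-down region, as a surgical mask would,
            # forcing the model to learn from the eyes and upper face only.
            out[i, :, h // 2:, :] = 0.5  # flat gray stand-in for mask pixels
    return out

# Usage: apply as an augmentation step before the embedding network sees the batch.
faces = torch.rand(8, 3, 112, 112)   # dummy batch of 112x112 face crops
augmented = mask_lower_face(faces)
```

The design choice here is that discarding the occluded region during training, rather than trying to model it, is what lets the network match a masked face to an unmasked enrollment photo.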
In the midst of the ongoing coronavirus pandemic, facial recognition technology is being adopted globally as a way to track the virus' spread. But privacy experts worry that, in the rush to implement COVID-19 tracking capabilities, important and deep-rooted issues around data collection and storage, user consent, and surveillance will be brushed under the rug. "While facial recognition technology provides a fast and zero-contact method for identifying individuals, the technology is not without risks. Primarily, individuals scanned by facial recognition services need to be aware of how their data is being used."
The American Civil Liberties Union has filed suit against the Department of Homeland Security over its use of facial recognition technology in airports, decrying the government's "extraordinarily dangerous path" toward normalizing facial surveillance as well as its secrecy in withholding specific details of the plan from the public. "Our lawsuit seeks to make public the government's contracts with airlines, airports, and other entities pertaining to the use of face recognition at the airport and the border; policies and procedures concerning the acquisition, processing, and retention of our biometric information; and analyses of the effectiveness of facial recognition technology," Ashley Gorski, a staff attorney for the ACLU, wrote in a blog post about the lawsuit published online Thursday.
This week we discuss the latest in the Clearview AI debacle, get more tales from the ransomware swamp and discover how often our smart speakers are listening to us. Host Anna Brading is joined by Sophos experts Paul Ducklin and Peter Mackenzie, and me!
Canada's privacy watchdog on Friday announced an investigation into a US software startup reportedly capable of matching images of unknown faces to photos it mined from millions of websites and social media networks. In a statement, the Office of the Privacy Commissioner said Clearview AI's collection and stockpiling of more than three billion photos potentially violated Canadian law if the photos were obtained without permission.