Researchers have found a means of using a voice-activated smart speaker system without it having to listen to everything you say - and no, it's not "pressing a button." "There are a lot of situations where we want our home automation system or our smart speaker to understand what's going on in our home, but we don't necessarily want it listening to our conversations," said the aptly named Alanson Sample, associate professor of electrical engineering and computer science at the University of Michigan.
These Skills can often have security gaps and data protection problems, as a team of researchers from the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and North Carolina State University discovered, together with a former PhD student who began working for Google during the project. In their study, the researchers, led by Christopher Lentzsch and Dr. Martin Degeling, carried out the first large-scale analysis of the Alexa Skills ecosystem.
Researchers have uncovered gaps in Amazon's skill vetting process for the Alexa voice assistant ecosystem that could allow a malicious actor to publish a deceptive skill under any arbitrary developer name and even make backend code changes after approval to trick users into giving up sensitive information. Amazon Alexa allows third-party developers to create additional functionality for devices such as Echo smart speakers by configuring "Skills" that run on top of the voice assistant, thereby making it easy for users to initiate a conversation with the skill and complete a specific task.
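Because users reach a skill by speaking its invocation name rather than by inspecting who published it, a deceptive developer name can be hard to spot by voice alone. The following is a hypothetical sketch (the skill and intent names are invented for illustration, not taken from the research) of the kind of interaction model a skill declares:

```python
# Hypothetical sketch of an Alexa skill interaction model. The invocation
# name is what a user says to open the skill; intents map sample utterances
# to handler logic. All names here are invented for illustration.
interaction_model = {
    "invocationName": "daily horoscopes",  # spoken phrase that opens the skill
    "intents": [
        {
            "name": "GetHoroscopeIntent",
            "samples": ["give me my horoscope", "what is my horoscope"],
        }
    ],
}

def matches_invocation(utterance, model):
    """Naive check: does a spoken utterance open this skill?"""
    return model["invocationName"] in utterance.lower()
```

Since discovery is entirely voice-driven, a malicious skill registered under a trusted-sounding developer name, or under an invocation name similar to a legitimate skill's, can be opened by users who believe they are talking to the real one.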
An Amazon spokesperson told Threatpost that the company conducts security reviews as part of skill certification and has systems in place to continually monitor live skills for potentially malicious behavior. Before skills can be made public to Alexa users, developers must submit them to Amazon to be vetted and verified.
In research presented on Wednesday at the Network and Distributed System Security Symposium conference, researchers describe flaws in the process Amazon uses to review third-party Alexa applications known as Skills. "We show that not only can a malicious user publish a Skill under any arbitrary developer/company name, but she can also make backend code changes after approval to coax users into revealing unwanted information," the academics explain in their paper, titled "Hey Alexa, is this Skill Safe?: Taking a Closer Look at the Alexa Skill Ecosystem." [PDF].
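The backend-change loophole exists because a skill's responses are generated at runtime by a developer-controlled endpoint, so what the skill says can differ from what Amazon reviewed. A minimal sketch, assuming a plain handler that builds responses in the Alexa skill response JSON format (the speech text here is invented for illustration):

```python
# Sketch of why post-certification backend changes evade vetting: the skill's
# endpoint is under the developer's control, so the response JSON returned at
# runtime can silently change after Amazon's review.

def build_alexa_response(speech_text, end_session=True):
    """Build a minimal response in the Alexa skill response JSON format."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

# What the endpoint returned at certification time: a benign prompt.
certified = build_alexa_response("Here is your horoscope for today.")

# The same endpoint, updated after approval, can start prompting for data
# the certified version never requested - with no re-review triggered.
modified = build_alexa_response(
    "To continue, please tell me your phone number.", end_session=False
)
```

Nothing in the certification pipeline re-inspects the endpoint, which is exactly the gap the paper describes: the code path users hit after approval is not the one that was vetted.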
The same group of researchers had already discovered ways that various forms of technology can potentially violate user privacy by engaging in what they call "acoustic snooping." Last year, they published research on how a smartphone app can record sound from the device's microphones and work out from it what someone has typed, giving it the potential to steal PINs and passwords. The new research also builds on previous work showing that voice assistants could record the sound of typing on a nearby computer keyboard to determine someone's input, Anderson wrote in a blog post.
Imagine someone hacking into an Amazon Alexa device using a laser beam and then doing some online shopping using that person's account. The same team that last year mounted a signal-injection attack against a range of smart speakers using nothing more than a laser pointer is still unraveling the mystery of why the micro-electro-mechanical systems (MEMS) microphones in the products turn the light signals into sound.
Windstream Enterprise has added new Google Assistant and Amazon Alexa voice command features to its SD-WAN solution, enabling network administrators to work more efficiently. WE already includes Google Assistant and Amazon Alexa integration in its award-winning OfficeSuite UC® solution.
Attention! If you use Amazon's voice assistant Alexa in your smart speakers, just opening an innocent-looking web link could let attackers install hacking skills on it and spy on your activities remotely. According to a new report released by Check Point Research and shared with The Hacker News, the "Exploits could have allowed an attacker to remove/install skills on the targeted victim's Alexa account, access their voice history and acquire personal information through skill interaction when the user invokes the installed skill."
The attacks involved a Cross-Origin Resource Sharing (CORS) misconfiguration and Cross-Site Scripting (XSS) bugs identified on Amazon and Alexa subdomains, which ultimately allowed the researchers to perform various actions on behalf of legitimate users. Successful exploitation of these vulnerabilities could allow an attacker to retrieve an Alexa user's personal information and voice history, as well as install applications on the user's behalf, list installed skills, or remove them.
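The CORS part of such an attack typically hinges on a server reflecting whatever Origin header the browser sends while also allowing credentials, which lets an attacker's page read authenticated responses cross-origin. The following is an illustrative sketch of that misconfiguration and its fix - not Amazon's actual code, and the allowlisted origin is an assumption:

```python
# Illustrative sketch of the CORS misconfiguration class described above.
# Reflecting an arbitrary Origin back in Access-Control-Allow-Origin while
# setting Access-Control-Allow-Credentials lets any site read responses
# that carry the victim's session cookies.

TRUSTED_ORIGINS = {"https://alexa.amazon.com"}  # assumed allowlist for illustration

def cors_headers_vulnerable(request_origin):
    # BAD: trusts whatever origin the browser sends.
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Credentials": "true",
    }

def cors_headers_fixed(request_origin):
    # GOOD: only echo origins on an explicit allowlist.
    if request_origin in TRUSTED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Credentials": "true",
        }
    return {}  # no CORS headers: the browser blocks cross-origin reads
```

With the vulnerable variant, a page at an attacker-controlled origin can issue credentialed requests to the API and read the responses; with the fixed variant, the browser refuses to expose the response to any origin outside the allowlist.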