Malicious hackers can use verbal commands to perform SQL injections on web-based applications run by virtual assistants such as Amazon's Alexa, researchers say.
"Leveraging voice-command SQL injection techniques, hackers can give simple commands utilizing voice text translations to gain access to applications and breach sensitive account information," reports Baltimore, Maryland-based Protego Labs, in a blog post this morning. (Protego shared a copy of the post with SC Media in advance of publication.)
The flaw that enables voice-based attacks doesn't lie within Alexa or, for that matter, Google Assistant, Cortana, Siri and similar technologies. Rather, the problem lies with the apps themselves, Protego explains. According to the blog post, an application can be attacked via voice-based SQL injection if three conditions are met: the Alexa function/skill uses SQL as a database, the Alexa function/skill is vulnerable to SQL injection, and one of the vulnerable SQL queries includes an integer value as a component of the query.
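The integer condition matters because a spoken number arrives at the skill as transcribed text, which a careless handler can drop straight into the query string. The sketch below is purely illustrative (the function, table and database names are invented, not taken from Protego's demo), but it shows the vulnerable pattern those three conditions describe:

```python
import sqlite3

# Hypothetical sketch of the vulnerable pattern: an Alexa skill takes a
# spoken number (an integer slot), receives it as transcribed text, and
# concatenates it directly into a SQL statement.
def get_account(spoken_account_id: str):
    conn = sqlite3.connect("bank.db")  # invented database name
    # VULNERABLE: the transcribed slot value is concatenated, not bound,
    # so anything Alexa hears becomes part of the SQL itself.
    query = ("SELECT account_id, balance FROM accounts "
             "WHERE account_id = " + spoken_account_id)
    return conn.execute(query).fetchall()
```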
The company has also released a video demonstration of such an attack, performed by Protego Head of Security and Ethical Hacker Tal Melamed. In the demo, Melamed uses nothing more than account numbers and text to gain access to a sample online banking application and SQL database that he built himself for research purposes.
First, Melamed attempts to access an admin account he is not privileged to view; after he provides his name and account ID, Alexa denies the request. Melamed then bypasses the security measure by verbalizing a random number followed by "or/true," which allows him to access any row in the database.
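The blog post doesn't publish the demo app's code, but the effect of that spoken payload can be reproduced against a toy table (the table, names and figures below are invented). Speaking a number alone matches at most one row; appending "or true" makes the WHERE clause match everything, including the admin record.

```python
import sqlite3

# Illustrative reconstruction of the bypass; Protego's demo application is
# not public, so this schema and data are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_id INTEGER, owner TEXT, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [(1, "admin", 9999.0), (2, "alice", 120.0), (3, "bob", 75.0)])

def run_voice_query(spoken: str):
    # Same vulnerable concatenation pattern as in the earlier sketch.
    return conn.execute(
        "SELECT account_id, owner, balance FROM accounts WHERE account_id = " + spoken
    ).fetchall()

print(run_voice_query("2"))          # [(2, 'alice', 120.0)]: only the caller's row
print(run_voice_query("7 or true"))  # all three rows, including the admin account
```

Because the injected text arrives through the voice interface, it looks no different to the application than any other transcribed slot value.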
"If additional application security measures were in place, whether hosted in serverless or other cloud infrastructure, Alexa wouldn’t be able to access any secure data, even when attempting an SQL injection such as this," the blog post concludes.