New tool in 'voice spoof' fight
Australian researchers have developed a new technique to protect consumers from being scammed through ‘voice-spoofing’ attacks.
Consumers use voice assistants like Amazon Alexa or Google Assistant to shop online, make phone calls, send messages, control smart home appliances and access banking services.
Fraudsters can record a person's voice and replay it to impersonate that individual. They can also stitch samples together to mimic a person's voice in order to spoof, or trick, third parties.
The new solution, called Void (Voice liveness detection), can be embedded in a smartphone or in voice assistant software. It works by identifying differences in spectral power between a live human voice and a voice replayed through a speaker, allowing it to detect when hackers are attempting to spoof a system.
Researcher Muhammad Ejaz Ahmed says privacy-preserving technologies are becoming increasingly important.
“Although voice spoofing is known as one of the easiest attacks to perform as it simply involves a recording of the victim's voice, it is incredibly difficult to detect because the recorded voice has similar characteristics to the victim's live voice,” he said.
“Void is game-changing technology that allows for more efficient and accurate detection, helping to prevent people's voice commands from being misused.”
Unlike existing voice-spoofing detection techniques, which typically rely on deep learning models, Void was designed using insights from spectrograms (visual representations of the spectrum of frequencies in a signal as it varies over time) to detect the 'liveness' of a voice.
The technique is highly accurate, detecting attacks eight times faster than deep learning methods while using 153 times less memory, making it a viable, lightweight solution that could be incorporated into smart devices.
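To make the idea concrete, here is a minimal sketch of spectrogram-based spectral power features in Python. It is illustrative only: the 1 kHz band, the decision threshold and the helper names are assumptions for demonstration, not Void's actual feature set or parameters, and a real detector would be trained on labelled live and replayed recordings.

```python
import numpy as np
from scipy.signal import spectrogram

def spectral_power_features(audio, sample_rate=16000):
    """Summarise how a signal's power is distributed across frequency.

    Replayed audio tends to carry relatively more power at low
    frequencies (an artefact of loudspeakers), which is the kind of
    cue a spectrogram-based liveness detector can exploit.
    """
    freqs, _, sxx = spectrogram(audio, fs=sample_rate, nperseg=512)
    power_per_freq = sxx.sum(axis=1)                # total power in each frequency bin
    power_per_freq /= power_per_freq.sum() + 1e-12  # normalise to a distribution
    cumulative = np.cumsum(power_per_freq)          # cumulative spectral power curve

    # Illustrative summary features: the share of power below 1 kHz, and the
    # frequency below which half of the total power lies.
    low_band_share = cumulative[np.searchsorted(freqs, 1000)]
    median_freq = freqs[np.searchsorted(cumulative, 0.5)]
    return low_band_share, median_freq

def looks_replayed(audio, sample_rate=16000, low_band_threshold=0.6):
    """Toy decision rule: flag audio whose low-frequency power share is high.

    The threshold here is made up for illustration; it is not a value
    from the Void paper.
    """
    low_band_share, _ = spectral_power_features(audio, sample_rate)
    return low_band_share > low_band_threshold
```

Because features like these are a handful of numbers computed directly from the spectrogram, rather than the output of a large neural network, a classifier built on them can run quickly and with a small memory footprint, which is consistent with the speed and memory savings reported above.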
The Void research project was funded by Samsung Research and tested on datasets from the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) challenges, achieving accuracies of 99 per cent and 94 per cent on the two datasets.
CSIRO's Data61 tech research group has provided the following tips for consumers on how to protect their data when using voice assistants:
- Always change voice assistant settings to only activate the assistant using a physical action, such as pressing a button
- On mobile devices, make sure the voice assistant can only activate when the device is unlocked
- Turn off all home voice assistants before leaving the house, to reduce the risk of successful voice spoofing while away
- Voice spoofing requires hackers to get samples of a voice. Consumers should make sure they regularly delete any voice data that Google, Apple or Amazon store
- Try to limit the use of voice assistants to commands that do not involve online purchases or authorisations – hackers may record a person issuing payment commands and replay them at a later stage