AI-Assisted Voice Cloning is the newest cyber threat – and Indians are the most vulnerable

Dozens of voice-cloning tools, combined with text-to-voice software, make voice phishing a real and present danger today. Image: McAfee

By Anand Parthasarathy
May 5, 2023: Geoffrey Hinton, the 'Godfather of AI', recently quit his job at Google to warn of the imminent dangers of the unbridled use – or misuse – of Artificial Intelligence. The same week saw the emergence of a new global threat that confirms his worst fears.
What’s more, Indians seem to be most vulnerable – and are already the worst hit.
In the first months of 2023, the cybersecurity specialist McAfee took note of a new and emerging threat: cybercriminals used Artificial Intelligence (AI)-driven voice-cloning technology to impersonate real people and approach their friends or family members asking for money. Recipients, deceived into thinking they were really talking to a loved one in distress, often parted with large sums.
This is an old trick, but the voice imitation tended to be crude and often did not fool the target.
As far back as 2016, the security news site CSO Online reported on an annual event held by Adobe, the US-based maker of design software, where the company demonstrated a product called VoCo that cloned an individual's voice accurately enough to deceive. But it was a cumbersome process that needed a long voice sample: Adobe's pitch was, "With a 20-minute voice sample, VoCo can make anyone say anything."
Possibly because of the potential for misuse, Adobe has, to date, not commercially released VoCo. Others have been less mindful of the consequences, though their tools remained mainly in the non-commercial open-source project space.
Products that have appeared in recent years, such as WaveNet, Resemble AI, 15.ai and Lyrebird, cut the required voice sample to a few minutes. Combined with free text-to-voice tools, they let a cloned voice say anything you type in.
Then AI – specifically Generative AI – changed all that.
This recent stream of AI is capable of generating new text, images, voice or other media after the software has "learned" from the patterns and structure of the input data (ChatGPT is one example). The 'learning' ability of new-generation voice-cloning tools has improved dramatically as a result.
The 'Artificial Imposter' is here
Earlier this week, McAfee released a report entitled "The Artificial Imposter", detailing how artificial intelligence technology is fueling a sharp worldwide rise in online voice scams. Today, all it takes to clone a victim's voice is just three seconds of audio from something he or she has placed on the Internet – maybe a YouTube video, a WhatsApp audio file or a Facebook post.
Says the study: “In the past, those wishing to create these assets needed to have both the time and technical ability to replicate voices, but applications and tools that produce near instantaneous, highly believable results are now just a few clicks away.”
In April 2023, McAfee commissioned research with 7,054 adults from seven countries – the U.S., the U.K., France, Germany, Australia, Japan and India – to understand the level of awareness and first-hand experience of AI voice scams.
The survey found that 53% of all adults worldwide share their voice online at least once a week, with 49% doing so up to 10 times in the same period.
The practice is most common in India, with 86% of people making their voices available online at least once a week, followed by the U.K. at 56% and the U.S. at 52%.
“While this may seem harmless, our digital footprint and what we share online can arm cybercriminals with the information they need to target your friends and family. With just a few seconds of audio taken from an Instagram Live video, a TikTok post, or even a voice note, fraudsters can create a believable clone that can be manipulated to suit their needs,” suggests the study.
How common are AI voice scams?
A quarter of adults surveyed globally have experience of an AI voice scam, with one in 10 targeted personally and 15% saying somebody they know has been targeted.
When you break it down by country, the scam is most common in India, with 47% of respondents saying they had either been a victim themselves (20%) or knew somebody else who had (27%) – nearly double the global average of 25%.
The United States is second, with 14% saying it happened to them and 18% to a friend or relative. The U.K. comes third, with 8% saying it happened to them directly and 16% to someone they know.
Indians have suffered in very real terms: 83% of Indian victims said they lost money, with 48% losing over Rs 50,000.

Can people tell the difference between real and fake?
The McAfee research found that AI-fuelled voice-cloning tools can replicate how a person speaks with up to 95% accuracy, so telling the difference between real and fake certainly isn't easy.
It reveals that more than two-thirds (69%) of Indians think they don't know, or cannot tell, the difference between an AI voice and a real voice.
There is a psychology behind this: “Humans make mental shortcuts daily for problem-solving and to reduce cognitive overload—known as heuristics in psychology. Our brain is more likely to cut corners and believe the voice we’re hearing is, in fact, that of the loved one, as it’s claiming to be. Because of this, a near-perfect match may not even be required, as our brain will automatically make the shortcut and often motivate us to act and, in most cases, send money.”
Alarmingly, such high-tech scams are becoming more affordable by the day. Subbarao Kambhampati, a professor of computer science at Arizona State University, told US broadcaster National Public Radio (NPR) last month that the cost of voice cloning is dropping, making it more accessible to scammers:
"Before, it required a sophisticated operation," Kambhampati said. "Now small-time crooks can use it."  (NPR report here)
How to protect oneself from being voice-cloned 
In this scary environment, the McAfee study suggests some common-sense precautions:
- Create a verbal 'codeword' that only your friends or family know. Ask for it whenever you are in doubt about who is calling, especially if the caller asks for money.
- Be skeptical: Whether it is a call, text or email, from a known or unknown sender, stop, pause and think. Does that really sound like them? Hang up and call the person directly.
- Think before you click and share. Be thoughtful about your friends and connections online. The wider your connections, the greater the risk that your identity may be cloned for malicious purposes.
- Avail of identity-theft protection services that come bundled with Net security software tools, which ensure your personally identifiable information is not accessible, or notify you if it makes its way to the Dark Web. Such tools can also help restore your identity should the unexpected happen.
Hopefully, with these precautions, nothing untoward will happen. Safe surfing – and authentic conversations!

This article has appeared in Swarajya.