
When your voice is being ‘cloned’ to dupe banks

A Bank of America employee in Florida recently received two phone calls from investor Clive Kabatznik to discuss a big money transfer he was planning to make. The scary part was that the second phone call wasn’t made by Mr Kabatznik. Rather, a software program had artificially generated his voice and tried to trick the banker into moving the money elsewhere.

Cyber-security experts cannot explain how the scammers managed to replicate his voice, pick the right bank to call and choose such an opportune moment to strike. What is certain is that Mr Kabatznik and his banker were the targets of a cutting-edge scam attempt that has sent shockwaves through the financial community – the use of artificial intelligence (AI) to generate voice deepfakes, or vocal renditions that mimic real people’s voices.

The problem is still new enough that there is no comprehensive accounting of how often it happens. But one expert whose company, Pindrop, monitors the audio traffic for many of the largest US banks, said he had seen a jump in its prevalence this year – and in the sophistication of scammers’ voice fraud attempts. Another large voice authentication vendor, Nuance, saw the first successful deepfake attack on a financial services client in late 2022.

In Mr Kabatznik’s case, the fraud was detectable. But the speed of technological development, the falling costs of generative AI programs and the wide availability of recordings of people’s voices on the Internet have created the perfect conditions for voice-related AI scams.

Customer data such as bank account information that has been stolen by hackers – and is widely available on underground markets – helps scammers pull off these attacks. It becomes even easier with wealthy clients, whose public appearances, including speeches, are often widely available on the Internet.

Finding audio samples for everyday customers can also be as easy as conducting an online search – say, on social media apps such as TikTok and Instagram – for the name of someone whose bank account information the scammers already have.

“There’s a lot of audio content out there,” said Mr Vijay Balasubramaniyan, chief executive officer and co-founder of Pindrop, which reviews automatic voice-verification systems for eight of the 10 largest US lenders. It is common, he said, for 20 calls from fraudsters to come in each week.

So far, fake voices created by computer programs account for only “a handful” of these calls, he said – and they’ve begun to happen only within the past year. Most of the fake voice attacks that Pindrop has seen have come into credit card service call centres, where human representatives deal with customers needing help with their cards.

Using fake voices to cheat

In an actual recording of one such attempt, a banker can be heard greeting the customer. Then the voice says: “My card was declined.”

“May I ask whom I have the pleasure of speaking with?” the banker replies.

“My card was declined,” the voice says again.

The banker asks for the customer’s name again. A silence ensues, during which the faint sound of keystrokes can be heard.

According to Mr Balasubramaniyan, the number of keystrokes corresponds to the number of letters in the customer’s name. The fraudster is typing words into a program that then reads them aloud. In this instance, the caller’s synthetic speech led the employee to transfer the call to a different department and flag it as potentially fraudulent.

Calls like the one he shared, which use text-to-speech technology, are some of the easiest attacks to defend against: Call centres can use screening software to pick up technical clues that speech is machine-generated.

“Synthetic speech leaves artefacts behind, and a lot of anti-spoofing algorithms key off those artefacts,” said Mr Peter Soufleris, CEO of IngenID, a voice biometrics technology vendor.

But, as with many security measures, it’s an arms race between attackers and defenders – and one that has recently evolved. A scammer can now simply speak into a microphone or type in a prompt and have that speech very quickly translated into the target’s voice.

One generative AI system, Microsoft’s VALL-E, could create a voice deepfake that says whatever a user wishes using just three seconds of sampled audio.

On the 60 Minutes television programme in May 2023, security consultant Rachel Tobac used software to so convincingly clone the voice of Ms Sharyn Alfonsi, one of the programme’s correspondents, that she fooled a 60 Minutes employee into giving her Ms Alfonsi’s passport number. The cloning took only five minutes to put together, said Ms Tobac, CEO of SocialProof Security.

Threat is growing

While scary deepfake demos are a staple of security conferences, real-life attacks are still extremely rare, said Mr Brett Beranek, general manager of security and biometrics at Nuance, a voice technology vendor that Microsoft acquired in 2021. The only successful breach of a Nuance customer, last October, took the attacker more than a dozen attempts to pull off.

His biggest concern is not attacks on call centres or automated systems, like the voice biometrics systems that many banks have deployed. He worries about the scams in which a caller reaches an individual directly.

That’s what happened in Mr Kabatznik’s case. According to the banker’s description, the caller appeared to be trying to get her to transfer money to a new location, but the voice was repetitive, talking over her and using garbled phrases. The banker hung up.

“It was like I was talking to her, but it made no sense,” Mr Kabatznik said she had told him. After two more calls like that came through in quick succession, the banker reported the matter to Bank of America’s security team.

Concerned about the security of Mr Kabatznik’s account, she stopped responding to his calls and e-mails – even the ones that were coming from the real customer. It took about 10 days for the two of them to re-establish a connection, when he arranged to visit her at her office.

“We regularly train our team to identify and recognise scams and help our clients avoid them,” said Mr William Halldin, a Bank of America spokesman.

Although the attacks are getting more sophisticated, they stem from a basic cyber-security threat that has been around for decades: a data breach that reveals the personal information of bank customers. From 2020 to 2022, bits of personal data on more than 300 million people fell into the hands of hackers, leading to US$8.8 billion in losses, according to the US Federal Trade Commission.

Once they have harvested a batch of numbers, hackers sift through the information and match it to real people. Those who steal the information are almost never the same people who end up with it. Instead, the thieves put it up for sale. Specialists can use any one of a handful of easily accessible programs to spoof target customers’ phone numbers – which is what likely happened in Mr Kabatznik’s case.

Recordings of his voice are easy to find. On the Internet, there are videos of him speaking at a conference and participating in a fund-raiser. “I think it’s pretty scary,” he said. “The problem is, I don’t know what to do about it. Do you just go underground and disappear?” NYTIMES
