Artificial intelligence (AI) has huge potential for improving financial services operations – such as writing code and learning customer preferences – but it is also increasingly being used to perpetrate fraud.
Indeed, such is the rate at which AI fraud is increasing that cybersecurity teams are struggling to keep up. Speaking on a new episode of GlobalData’s Thematic Intelligence podcast, Martin Rehak, CEO and founder of cybersecurity company Resistant AI, pointed to the importance of a multi-faceted approach, which he describes as “defence-in-depth”.
Types of AI fraud
Rehak explained that AI fraud predominantly represents a change of scale rather than a change of scope.
“Basically, all [types of AI fraud] are extensions of the typical or traditional fraud forms that have existed for a couple of years, especially using AI to target onboarding processes …” he said.
“It’s used to scale out the creation of fake accounts and theft of real business identities and then basically open accounts with financial institutions globally which are then used to defraud people, or to do money laundering.”
“The second [use] is when you actually steal the money. This is where you go to companies or people and use AI to produce plausible enough conversations and relationships that convince people to send you the money.”
The latter are the cases that tend to make headlines. The news last year that a Hong Kong finance worker was tricked into sending $25m to fraudsters by a deepfaked video call broke the topic out of its niche and into the mainstream.
The trouble is, most fraud is much less flashy. Retail Banker International reported in March that ID fraud may account for half of all bank-related fraud by 2025.
Explaining why this is an issue, Rehak said: “There are a couple of high-profile cases where someone steals $25m, and that’s nice, but typical cases that we hear about every day range from $5,000 to $50,000. If you lose that much money, it doesn’t make the news, but the real news is how normal this crime is. If you look at the rates where these crimes are investigated and they apprehend the perpetrators, they are essentially zero.
“Typically, this means someone walking away with the money, and not much money left for the victim. There are some exceptional cases where the gangs get prosecuted, but most of the crime is targeting different countries for political reasons, and convincing police to investigate a case that spans multiple countries and makes them do 60 different paperwork requests in five different languages is very hard. They are trying to combat the crime of the 21st century by using the means of the 19th.”
How to stop AI fraud
If law enforcement can’t be relied on to stop these crimes and current techniques are outdated, the solution may have to be drawn from the same source as the cause.
Rehak believes that the best way to defeat bad actors’ tools is with AI of one’s own, explaining: “We need to put AI in front of the AI that’s doing the onboarding to stop the criminals from leveraging automation. We don’t want scalable attacks, we don’t want thousands of fake identities onboarded as customers of banks.
“The way we do it is interesting because the danger of AI is that the attackers are learning all the time. Fraud prevention projects used to be built at a scale of months; a fast-track project might get done within three to six months and typically a year.
“Today, you have to work on the scope of minutes or hours. If you don’t fix your system within a couple of hours of you being aware of a vulnerability, it’s very easy for the attackers to exploit your weaknesses at scale … We see this on social media, we see this from financial institutions and we see this a lot in the payments industry.”
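To make Rehak’s point concrete, a minimal sketch of the idea – assuming nothing about Resistant AI’s actual system – is an anomaly detector sitting in front of account creation that flags applications showing signs of automation. The features, values and threshold below are invented purely for illustration.

```python
# Illustrative sketch only: an anomaly detector placed in front of
# onboarding, flagging applications that look automated. Feature names,
# values and the review threshold are assumptions, not a real system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-application features:
# [seconds to complete the form, document-image quality score,
#  similarity of the ID document to previously seen documents]
historical_apps = np.array([
    [180.0, 0.42, 0.10],
    [240.0, 0.39, 0.08],
    [200.0, 0.45, 0.12],
    [160.0, 0.41, 0.09],
])  # assumed mostly-legitimate past applications

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(historical_apps)

new_apps = np.array([
    [210.0, 0.40, 0.11],  # plausible human applicant
    [4.0, 0.05, 0.97],    # form filled in seconds, near-duplicate document
])

# predict() returns 1 for inliers and -1 for outliers
for app, label in zip(new_apps, detector.predict(new_apps)):
    verdict = "route to manual review" if label == -1 else "continue onboarding"
    print(app.tolist(), "->", verdict)
```

The point is not the specific model but the placement: the check runs before an account exists, so a scaled-up attack produces a stream of flags rather than a stream of accounts.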
Fraud has always existed, and likely always will, but AI gives fraud-prevention teams an edge that humans alone cannot match. This matters all the more now that the infrastructure enabling fraud has become highly sophisticated.
“The second thing that’s placed in their favour is that they have very efficient black markets,” said Rehak. “In the old times, a hacker who’d found an exploit would have to design the whole way to make money on that.”
Today, though, there is a host of services available, not unlike contractors for hire for legitimate tasks. A typical process involves buying fake identities from one group, sending them to a second that produces supporting documents, and handing those to an identification specialist who opens accounts on the fraudster’s behalf. Those accounts can then be sold on to people wanting to commit financial crimes, including money laundering and sanctions evasion.
A digital fortress
The most important thing to remember, Rehak said, “is the oldest security concept we have: defence in depth. You shouldn’t have only one security system or layer of security. Financial institutions need to work with specialists that know what’s going on in the financial criminal industry and that can build AI models fast enough to basically be on par with the attackers.
“Last but not least, you should never fall asleep. You should always think that the other side will be smarter. Testing and never trusting yourself are the two main bits of advice I can give.”
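As a closing illustration of that defence-in-depth principle, the sketch below – purely hypothetical, not any vendor’s actual stack – arranges onboarding checks as independent layers: no single system is trusted on its own, and any one layer can escalate an application.

```python
# Hypothetical sketch of "defence in depth" for onboarding: several
# independent checks, any one of which can escalate an application.
# The layer names, fields and thresholds are illustrative assumptions.
from typing import Callable

Check = Callable[[dict], bool]  # returns True if the layer flags the application

def document_forensics(app: dict) -> bool:
    # e.g. flag document images whose analysis suggests tampering or generation
    return app.get("doc_tamper_score", 0.0) > 0.8

def behavioural_check(app: dict) -> bool:
    # e.g. flag forms completed implausibly fast, suggesting automation
    return app.get("form_seconds", 999) < 10

def network_check(app: dict) -> bool:
    # e.g. flag identities reused across many recent applications
    return app.get("identity_reuse_count", 0) > 3

LAYERS: list[tuple[str, Check]] = [
    ("document forensics", document_forensics),
    ("behavioural analysis", behavioural_check),
    ("network analysis", network_check),
]

def screen(app: dict) -> str:
    flags = [name for name, check in LAYERS if check(app)]
    return f"escalate ({', '.join(flags)})" if flags else "pass"

print(screen({"doc_tamper_score": 0.9, "form_seconds": 4, "identity_reuse_count": 5}))
print(screen({"doc_tamper_score": 0.1, "form_seconds": 180, "identity_reuse_count": 0}))
```

The design choice worth noting is that layers fail towards escalation rather than approval: a miss in one layer still leaves the others standing, which is exactly the property Rehak is arguing for.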