AI/ML is posing challenges for regulators and governments worldwide, as countries grapple with uncharted territory following the recent launches of AI chatbots such as ChatGPT and Google’s Bard.
Yet, in the context of fighting fraud and money laundering, AI/ML tools are proving increasingly popular. With GlobalData analysts expecting the overall AI market to reach a value of $383.3bn in 2023, financial institutions (FIs) are looking for ways to automate processes such as transaction monitoring, aware of the growing volume of work.
In 2018, JP Morgan launched the Contract Intelligence platform, which uses natural language processing to handle legal documents and extract essential data. A year later, Spain’s CaixaBank introduced facial recognition to its ATMs, enabling customers to withdraw money without entering a PIN code.
However, AI/ML challenges persist. In a world where data privacy is becoming a major concern to regulators, a balancing act must be performed between adopting emerging technologies and cutting unnecessary data exposure.
Raffael Maio is chief strategy officer at NetGuardians, a Swiss fintech that provides AI-based fraud-prevention tools to more than 80 banks worldwide.
His company recently published a white paper arguing that current anti-money laundering (AML) systems are unfit for the digital age.
NetGuardians, he says, has been looking to address issues in the sector. This includes finding a solution to overfitting, where some AI techniques learn a specific type of pattern or fraud so well that they can no longer detect other types of fraud. But the fight against financial crime continues beyond that point.
Q: AI has come a long way in many sectors, including banking. How does your AI/ML technology work in the context of fraud prevention in retail banking?
AI is a vast field with various applications, and one such application is chatbots. However, at NetGuardians, we specialise in fraud prevention.
In retail banking, the primary challenge is the sheer volume of transactions, which can range from hundreds of thousands to millions every day. Traditional fraud-prevention techniques, such as rules-based systems that describe a known modus operandi, are no longer effective.
Instead, by leveraging AI/ML models, we can adopt a more risk-based approach and learn from every client. By understanding an individual’s transaction patterns, we can create a tailored mechanism to protect them from fraud. In contrast, a rules-based approach is based on averages and is less effective.
ML models allow us to be more precise, resulting in a higher detection rate and a lower false positive rate. By adopting ML-based fraud prevention techniques, we can tackle fraud more efficiently, maximising detection while minimising unnecessary alerts.
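As a rough illustration of the per-client, risk-based approach Maio describes, the sketch below scores each transaction against that customer’s own history rather than against a one-size-fits-all rule. The profiles, thresholds and function names are hypothetical and are not drawn from NetGuardians’ actual models.

```python
# Hypothetical sketch of per-client risk scoring (not NetGuardians' actual model).
# Each client gets a profile built from their own history, so the same amount can
# be perfectly normal for one customer and highly anomalous for another.
from statistics import mean, stdev


def build_profile(amounts: list[float]) -> dict:
    """Summarise a client's historical transaction amounts."""
    return {"mean": mean(amounts), "stdev": stdev(amounts) or 1.0}


def risk_score(profile: dict, amount: float) -> float:
    """Deviation of a new transaction from the client's own behaviour (z-score)."""
    return abs(amount - profile["mean"]) / profile["stdev"]


# A rules-based system might flag every transfer above, say, 10,000 for everyone.
# A per-client model flags what is unusual for *that* client:
alice = build_profile([40.0, 55.0, 38.0, 60.0, 45.0])
bob = build_profile([9_000.0, 12_000.0, 11_000.0, 10_500.0])

print(risk_score(alice, 5_000.0))   # very high -> alert, even though 5,000 < 10,000
print(risk_score(bob, 11_500.0))    # low -> no alert, despite the large amount
```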
Q: Your company, NetGuardians, provides fraud prevention and anti-money laundering services to a wide range of clients. How has AML compliance changed in the 12 months since Russia invaded Ukraine, and how has NetGuardians adapted to the new challenges?
I want to clarify that the development of AI technology in FIs began approximately 4-5 years ago, long before the renewed war in Ukraine. A few companies, like ours, were early adopters of this technology and started the journey towards its implementation.
But now, the need for AI is increasing since fraudulent activities are on the rise. And I would link that to Covid, because we’ve seen a massive surge of fraudulent attempts since the pandemic began, more than since (Russia launched its full-scale) war in Ukraine.
Moreover, the economic slowdown has compelled all institutions to rationalise their costs and look for ways to improve their operational efficiency. As digital transactions become more common and fewer people use cash, businesses are looking for new technologies to enhance efficiency and handle a higher volume of transactions.
Therefore, FIs needed more (AI) tools to accomplish things more efficiently while keeping the same number of people.
So that is one of the key reasons institutions wanted to adopt AI/ML technology – to tackle more fraudulent activities, handle the increasing volume of transactions and safeguard their clients’ money.
Q: What fraudulent activities have your clients identified in the retail banking sector? Which ones have become prevalent in the past two-three years?
We are seeing a considerable rise in Authorised Push Payment (APP) scams, where the fraudster approaches the end user claiming to act on behalf of a company. One type of scam is quite intriguing. Some people receive messages, such as SMS, purporting to come from their bank. When they call the number, the voice on the other end sounds exactly like the bank’s customer service centre. The scammers have already requested (in the message) that the user provide their password and other sensitive information. Later on, the user realises that their money has been stolen.
That’s what we’ve seen in the past 18 months.
Q: How do your retail banking services differ from those provided by other competitors in the market? What is NetGuardians’ USP?
There are two specific aspects. One is that we use AI solutions tailored to the retail banking space. Our 3D AI technology combines multiple techniques: anomaly detection (unsupervised machine learning), fraud recognition (supervised learning) and adaptive learning (active learning), so that we learn from all these aspects continuously.
So that’s a key element on the technology side.
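To make the layered idea concrete, here is a generic sketch that blends an unsupervised anomaly score with a supervised fraud classifier and leaves room for analyst feedback. It is purely illustrative and is not NetGuardians’ proprietary 3D AI; the synthetic data, the models and the weighting are all assumptions.

```python
# Generic sketch of layering unsupervised anomaly detection with a supervised
# fraud classifier. Illustrative only; not NetGuardians' proprietary 3D AI.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_history = rng.normal(50, 10, size=(1000, 1))             # a client's past transaction amounts
X_labelled = np.vstack([X_history[:200], rng.normal(500, 50, size=(20, 1))])
y_labelled = np.array([0] * 200 + [1] * 20)                # fraud cases confirmed by investigators

anomaly_model = IsolationForest(random_state=0).fit(X_history)    # unsupervised layer
fraud_model = LogisticRegression().fit(X_labelled, y_labelled)    # supervised layer


def combined_score(x: np.ndarray) -> float:
    """Blend 'how unusual is this?' with 'how much does it look like known fraud?'."""
    anomaly = -anomaly_model.score_samples(x)[0]      # higher = more anomalous
    fraud_prob = fraud_model.predict_proba(x)[0, 1]   # higher = more like labelled fraud
    return 0.5 * anomaly + 0.5 * fraud_prob


print(combined_score(np.array([[48.0]])))    # typical amount -> low score
print(combined_score(np.array([[480.0]])))   # unusual and fraud-like -> high score

# The adaptive (active-learning) layer could be approximated by appending analysts'
# confirmed outcomes to X_labelled / y_labelled and refitting fraud_model periodically.
```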
In addition, we have developed a feature called Community Scoring & Intelligence, which allows our users to share information with one another. When fraudulent activity occurs, this information is shared within our user community, helping to prevent similar fraud from occurring in the future. This approach combines technology and data to achieve more precise and accurate results, thereby reducing the rate of false positives.
Q: Can you clarify the term “community”?
Currently, I am referring to our clients, but our future goal is to expand our reach. We believe that, in order to effectively combat fraud, we must work with other organisations. Our aim is to foster connections beyond our current client base and engage with a global community. By sharing information and insights on a global level, we can more effectively address the cross-border nature of fraudulent activities. Our ultimate goal is to create a worldwide community to fight fraud together.
Q: How is NetGuardians coping with ever-changing fraud tactics and techniques? How do you make use of AI in the process?
That is a very good question because it goes back to one concern in the AI field, namely the issue of overfitting. Our technology is the result of many years of research at universities in Switzerland, where the goal was to tackle overfitting. The issue arises when some AI techniques learn a specific type of pattern or fraud so well that they become unable to detect other types of fraud. This one-directional learning problem is known as overfitting.
However, the technology we have developed solves that issue. If you want, the system can stop learning at a particular moment and instead become aware (of its surroundings), ignoring the modus operandi and aiming to catch as much fraud as possible. The important thing is to strike a fine balance between what the system catches and what turns out to be false positives; more specifically, to uncover the highest number of fraudulent activities while maintaining an acceptable rate of false positives.
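The balance Maio refers to can be pictured as a threshold-tuning exercise. The sketch below is a simplified illustration rather than anything from NetGuardians’ stack: it picks the alert threshold that maximises the detection rate while keeping the false-positive rate under a chosen cap, using made-up risk scores.

```python
# Simplified illustration of trading detection rate against false positives.
import numpy as np


def pick_threshold(scores: np.ndarray, is_fraud: np.ndarray, max_fpr: float = 0.01):
    """Lowest alert threshold whose false-positive rate stays under max_fpr,
    i.e. the one that maximises detection subject to that constraint."""
    for t in np.sort(np.unique(scores)):        # ascending: a lower threshold means more alerts
        alerts = scores >= t
        fpr = alerts[~is_fraud].mean()          # share of legitimate transactions flagged
        detection = alerts[is_fraud].mean()     # share of fraudulent transactions caught
        if fpr <= max_fpr:
            return t, detection, fpr
    return None


rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.2, 0.1, 990),   # risk scores of legitimate transactions
                         rng.normal(0.8, 0.1, 10)])   # risk scores of fraudulent transactions
is_fraud = np.array([False] * 990 + [True] * 10)
print(pick_threshold(scores, is_fraud, max_fpr=0.01))
```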
Q: And how do you see AI/ML in banking evolving in the coming years, especially when it comes to data privacy?
There are a few aspects. Firstly, we are in the adoption phase. It started a couple of years ago. More institutions are recognising the benefits of AI and its efficiency. We see (FIs continuing) the adoption of AI.
Data privacy has always been a significant concern, especially in finance. However, we notice a desire to enhance collaboration. Open banking was the first milestone. And in the future there will probably be more AI in banking, that is clear, as well as more information sharing.
Many regulators and governments are already leading initiatives to share information among banks. And there are now more techniques that enable information to be shared without compromising data privacy. So we will keep to the same principle on data privacy, but with a mindset of being able to share more, for example through federated learning, which is a big trend nowadays and is evolving quite fast.
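For readers unfamiliar with the term, federated learning lets institutions train a shared model without pooling raw data. The toy federated-averaging sketch below is purely illustrative: the synthetic bank datasets, the logistic-regression model and the plain averaging step are assumptions, and production systems would add secure aggregation and other safeguards.

```python
# Toy sketch of federated averaging: each bank trains on its own customers' data
# locally and shares only model parameters, never the underlying transactions.
import numpy as np


def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient-descent step of logistic regression on a bank's private data."""
    preds = 1 / (1 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad


def federated_round(global_weights: np.ndarray, banks: list) -> np.ndarray:
    """Each bank updates locally; only the weight vectors are averaged centrally."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in banks]
    return np.mean(updates, axis=0)


rng = np.random.default_rng(0)
banks = [(rng.normal(size=(100, 3)), rng.integers(0, 2, 100)) for _ in range(3)]  # synthetic per-bank data
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, banks)   # raw transaction data never leaves each bank
print(w)
```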