The promise of AI in the fintech sector is limited only by the imagination. This article examines AI's potential and the balancing act required to ensure it is deployed safely.

At Money 20/20 last year, Acrew Capital published a report revealing that 76% of financial services companies have launched AI initiatives, with the main focus on cost savings and revenue growth. The report also found ample opportunity for new entrants in this space, particularly to create solutions in high-risk areas such as fraud prevention and wealth management.

Businesses around the world are deploying chatbots as customer service engines and using AI to develop website content, while employees rely on AI personal assistants for administrative tasks such as data entry and email management. Almost every business sector is harnessing AI to speed up workflows, save time and work more efficiently.

Yet while organisations race to innovate, the risks are mounting, including the potential for global crises when AI is used by malicious actors. One example is generative AI being used to spread fear and mistrust via 'online deception campaigns', especially around major elections such as the recent US election. Separately, recent reports suggest that almost half of US companies using ChatGPT have already replaced staff with AI, putting future job security at risk and causing major concern.

Citi, Deutsche Bank AI bans

AI is clearly becoming divisive. As a result, major banks, including Citigroup and Deutsche Bank, have banned the use of AI tools in their businesses over concerns about leaking confidential data. In the fintech sector, safeguarding financial data, mitigating fraud and maintaining trust are crucial; without a steadfast commitment to AI security, the sector risks becoming a vector for sophisticated cyber threats. At the same time, the industry's reliance on artificial intelligence for mission-critical applications such as fraud detection, credit scoring and risk assessment has driven technological progress and offers immense potential for innovation and optimisation.

Fintech companies investing in and deploying GenAI need to be mindful that the quality of AI output depends directly on the quality of the input, as well as on understanding the source of the data and the training methodology. AI programmes learn only from the information they are given: feed a model faulty or untrustworthy data and its results can be inaccurate or biased. Data consistency is one of the key obstacles to implementation. Businesses trying to benefit from AI at scale struggle because data is frequently fragmented, inconsistent and of poor quality, leading to serious problems and, in some cases, reputational damage.
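In practice, the data-consistency problem above is often tackled with automated checks that run before records ever reach a model. The sketch below is purely illustrative; the field names (account_id, amount, currency) and the rules are hypothetical examples, not a standard fintech schema.

```python
def validate_records(records):
    """Flag records that fail basic consistency checks before they
    reach a model: missing fields, out-of-range amounts, duplicates."""
    required = {"account_id", "amount", "currency"}
    seen, issues = set(), []
    for i, rec in enumerate(records):
        missing = required - rec.keys()
        if missing:
            # Incomplete record: cannot be scored reliably
            issues.append((i, f"missing fields: {sorted(missing)}"))
            continue
        if not isinstance(rec["amount"], (int, float)) or rec["amount"] < 0:
            # Implausible value: likely an upstream data-entry error
            issues.append((i, "amount out of range"))
        key = (rec["account_id"], rec["amount"], rec.get("timestamp"))
        if key in seen:
            # Exact repeat: fragmented feeds often double-report events
            issues.append((i, "duplicate record"))
        seen.add(key)
    return issues
```

Checks like these do not make a model smarter, but they stop the "faulty or untrustworthy data" failure mode before it can bias the output.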

McDonald's AI mishap

For instance, McDonald's had to pull its new AI-enhanced, speech-enabled Drive-Thru programme last summer after a number of videos circulated on social media showing confused and frustrated customers trying to get the AI to understand their orders. One TikTok video features a couple ordering 260 Chicken McNuggets because the bot kept adding items to the order whenever the pair tried to cancel it. McDonald's ended its partnership with IBM as a result of the failed pilot.

On the financial services side, Capital One experienced a massive data breach in 2019 that exposed the personal information of over 100 million customers. The breach was caused by a misconfigured firewall in Capital One's cloud infrastructure, which allowed a hacker to access sensitive data. Capital One had implemented AI-based monitoring tools to detect potential security breaches, but these failed to detect the misconfiguration that led to the breach.

To avoid similar mishaps, companies in the fintech space should have a well-defined plan in place from the outset for gathering the data that AI will need. Although widely available, LLMs are general-purpose language models and are not necessarily fit for industry-specific tasks such as fraud detection and credit scoring. Specialist models require specialist training, and companies must be aware of the costs in advance; training is often expensive. Companies can optimise training efforts by measuring the effectiveness of AI output, training only when measurement shows it is needed, and treating source data and training lifecycles like production code: under version control and documented.
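The "treat data like production code" advice above can be made concrete with a simple audit trail: fingerprint the exact dataset used, then log it alongside the run's parameters and measured effectiveness so any result can be traced back to its inputs. This is a minimal sketch under assumed conventions (the function names and the JSON Lines log format are illustrative, not a specific product's API); dedicated tools such as DVC or MLflow cover the same ground at scale.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def dataset_fingerprint(path: str) -> str:
    """Hash a data file so every training run records exactly
    which version of the source data it used."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def log_training_run(data_path: str, params: dict, metrics: dict,
                     log_file: str = "training_log.jsonl") -> dict:
    """Append one audit record per training run: timestamp,
    data version, hyperparameters and output-effectiveness metrics."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_sha256": dataset_fingerprint(data_path),
        "params": params,
        "metrics": metrics,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

With a log like this, a team can compare the metrics of the last run against the current ones and skip retraining when nothing has improved or changed, which is exactly the "only train when you know you need to" discipline described above.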

AI cost challenges of mining, storing and analysing data

Companies investing in AI must be mindful of the cost of mining, storing and analysing data, both in hardware and in energy use. Businesses that lack in-house expertise or are unaccustomed to AI frequently have to outsource, which brings its own cost and maintenance challenges. Smart technologies can be expensive because of their complexity, and they incur additional fees for continuous maintenance, repairs and the computational cost of building data models.

The majority of companies today have moved past the trial stage of deploying AI and are seeing good results and a positive impact on their bottom lines. However, there is still work to be done in determining the boundaries of AI use, and given those open questions, safety in AI is crucial. With AI expanding quickly and unpredictably across every industry, and particularly in banking and financial services, fintechs have a responsibility to put safety ahead of innovation.

Brian Wagner is CTO, Revenir AI