Last month the UK Prime Minister Sir Keir Starmer made a commendable effort to boost growth and deliver more efficient services in the UK, as he set out the Government’s AI Action Plan to support the country’s ailing economy.

However, while it’s promising to see core services such as the NHS and data centres taking centre stage, it’s disappointing that critical industries such as financial services and insurance – among the most important sectors in the UK in terms of economic output – are not being prioritised as part of this plan.

This represents a lost opportunity to leverage AI to unlock and supercharge growth in the City of London, given that the UK has the fourth-largest financial services sector among OECD nations, contributing £208.2bn to the British economy in 2023. The lack of emphasis is all the more striking given that it comes shortly after the Chancellor, Rachel Reeves, returned from a trip to China in which co-operation in key sectors such as financial services was given the red-carpet treatment.

The Government’s focus on maximising opportunities, growth and innovation is laudable, yet the scant attention paid to “safety” in the plan raises concerns. Compliance and regulation should be critical components of any AI proposition, and it’s important we do not sideline these elements, particularly when applying the technology to financial services.

These concerns were compounded when the UK recently refused to sign a declaration on “inclusive and sustainable” AI at the landmark Paris AI Action Summit, dampening hopes for a concerted approach to developing and regulating the technology in financial services.

As the UK fails to prioritise AI in financial services in its action plan, other financial hubs that were signatories to the declaration have surged ahead. For instance, Singapore’s Smart Nation initiative has made significant strides in embedding AI within financial services, laying the groundwork to outpace the UK and its technology strategy for years to come.

With the right support, AI can be the solution

The action plan comes at a time when AI adoption is reaching critical mass. Figures recently published by the Bank of England show that 75% of financial services firms are already using AI, with a further 10% planning to adopt it over the next three years.

The technology could be instrumental for the City of London, becoming a ubiquitous tool for identifying, addressing and helping to prevent a series of formidable challenges the City faces – particularly the rise in incidents of non-financial misconduct such as bullying.

Encouragingly, there is a groundswell of support for the adoption of AI solutions within the financial services workforce: two-thirds of employees in the sector are receptive to using the technology to identify and mitigate non-financial misconduct within their organisations. This openness to innovation must be capitalised upon, as adoption through public consent is paramount to fostering trust in the technology – a hurdle the industry and wider economy must clear to fully leverage emerging technologies. Establishing clear ethical guidelines will be critical to maintaining that trust, particularly in a sector with a history of misconduct and one that deals with sensitive personal and financial data.

Explosion in the volume of data

Another important challenge AI can help to address is the explosion in the volume of data generated by highly regulated professionals since the advent of generative AI (GenAI). Compliance teams are inundated with data and unable to process this deluge of information effectively, especially as GenAI is embedded ever more deeply into workflows – and the volume of data will only continue to increase.

Organisations now face the challenge of understanding AI’s full potential: its ability to identify and mitigate compliance risks, and to manage the immense increase in the data produced, is a key consideration for business leaders. However, while AI has helped to create this challenge, it also provides the solution. The technology can be deployed to ensure that captured data adheres to the Financial Conduct Authority’s (FCA) guidelines, and while some organisations have embraced this approach, it must also be recognised at the top level.

With popular support for AI being deployed in compliance across financial services, the message to the British business community should be one that promotes the strategic, responsible and ethical use of the technology, rather than one that deprioritises a sector whose neglect or harm would have a direct and meaningful impact on the financial health of the UK economy.

Simon Patteson is General Manager, EMEA at Smarsh