Is AI fairness the new data privacy? Fairly AI CEO and co-founder David van Bruwaene is on a mission to make AI more ethical and transparent. He tells RBI editor Douglas Blakey why Fairly is tapping into a huge need to promote and protect human rights at a time of growing concern over AI model development
The banking sector is increasingly relying on AI for its decision making. Indeed, AI can revolutionise a bank, with use cases ranging from enhancing client interactions through chatbots, to providing better loan terms through data-driven risk assessments, to automating laborious back-end processes.
And so banks can realise the benefits of AI: cost savings, quality improvements, an expansion of their services and increased personalisation. There is a but, however, and it is a big one that regulators and consumer groups are increasingly recognising.
With lending decisions that have a tremendous impact on consumers’ lives increasingly being entrusted to machines, how can the banking industry ensure that decades of fighting injustice and bias in lending are not undone?
Governance, risk and compliance playing catch-up: David van Bruwaene
Fairly AI co-founder David van Bruwaene warns about the danger of handing too much power to AI and machine learning in the interests of efficiency. He says that governance, risk and compliance are playing catch-up. Fairly AI’s mission is to support the broad use of fair and responsible AI by helping organisations accelerate safer and better models-to-market.
With AI, risks and errors can be unintentional, but the consequences of getting it wrong can be profound if bias and discrimination affect minorities.
The concept of Fairly AI started in 2015 as an interdisciplinary research project involving philosophy, cognitive science and computer science. After extensive product concept and design iterations, Fairly was formally incorporated in April 2020, with its headquarters in Kitchener-Waterloo, Ontario, Canada.
Fairly’s goal: the go-to for AI model risk management
Built to help businesses accelerate responsible AI models-to-market, Fairly’s enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection and continuous, explainable model risk monitoring. These streamline AI development, validation, audit and documentation processes across the three lines of defence for financial institutions around the world.
Just as quality assurance exists for software as an industry-wide established risk management function, Fairly aims to establish itself as the go-to for AI model risk management.
AI regulation is coming
Recall, for example, a 2021 investigation by The Markup, which found that US lenders in 2019 were more likely to deny retail mortgages to people of colour than to white people with similar financial circumstances.
And when the Associated Press looked deeper into this matter by city, it found that Chicago lenders were 150% more likely to reject Black applicants than similar white applicants.
More recently, Apple endured adverse publicity last November when it was reported that New York’s Department of Financial Services had opened an investigation into claims that Apple’s credit card offered different credit limits to men and women.
Indeed, earlier this month US law firm Jenner & Block reported that potential discrimination and bias resulting from consumer tools based on artificial intelligence and automated data will be an enforcement focus of regulators this year.
Van Bruwaene tells RBI: “The erosion of trust is felt very widely. We are seeing regulations pop up across sectors. In financial services we see an indicator of what is to come in many other areas, such as healthcare, education and anywhere consumers are affected by decisions made in full or in part by algorithms.
“An early example we see as consumers is in Google autocomplete: very often, when you start to type, you may see some things that appear a little off-colour.
“Consumers have glimpsed it and are feeling it in their own interactions. If you live in certain neighbourhoods, where the zip code will have an effect on the algorithm, you do feel this, you are aware of this, and your friends and neighbours are in a similar situation.”
Much consumer cynicism
“People want to be treated fairly by an agent whether artificial or not. The difference for a lot of applications is that people are not aware of the full extent of the decision making and the statistical regularities across a larger population where some of these issues can arise. There is a lot of cynicism around these decisions.”
He adds that there are technical as well as organisational solutions that financial services providers need to apply. These, combined with policies of transparency about the processes in place, provide an overall strategy.
He adds: “The first thing is to have processes for regularly reporting on, examining and correcting the data used to train models as well as to test them.
“So, a simple test is representation of people that belong to legally protected categories by race, age, gender, ethnic origin and religious status, to determine if there is enough data to represent each of these groups with accurate models. In addition, there is a need to determine whether there are other inputs to the model, or features, that could be correlated with these protected classes and have a potentially adverse or discriminatory impact on the output of the model.”
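A minimal sketch of what such checks might look like in practice, assuming a pandas DataFrame of loan applications. The column names, file name and thresholds below are hypothetical illustrations for the two tests van Bruwaene describes, not Fairly AI’s actual tooling:

```python
import pandas as pd

# Hypothetical column names and thresholds, for illustration only.
PROTECTED = ["race", "gender", "age_band"]  # legally protected attributes
MIN_SHARE = 0.05                            # flag groups below 5% of rows
PROXY_CORR = 0.3                            # illustrative correlation cut-off


def check_representation(df: pd.DataFrame) -> None:
    """Report protected groups too small to support an accurate model."""
    for col in PROTECTED:
        shares = df[col].value_counts(normalize=True)
        for group, share in shares.items():
            if share < MIN_SHARE:
                print(f"under-represented: {col}={group} ({share:.1%} of rows)")


def check_proxy_features(df: pd.DataFrame, features: list[str]) -> None:
    """Flag numeric model inputs that correlate with a protected attribute
    and could act as proxies for it (e.g. zip code for race)."""
    for col in PROTECTED:
        # One-hot encode the protected attribute so each group can be
        # correlated against each candidate feature.
        dummies = pd.get_dummies(df[col], prefix=col, dtype=float)
        for feat in features:
            corr = dummies.corrwith(df[feat]).abs().max()
            if corr > PROXY_CORR:
                print(f"possible proxy: {feat} vs {col} (max |r| = {corr:.2f})")


if __name__ == "__main__":
    df = pd.read_csv("loan_applications.csv")  # hypothetical dataset
    check_representation(df)
    check_proxy_features(df, ["zip_code_income", "loan_amount", "tenure_years"])
```

In production, the cut-offs would come from a documented model risk policy rather than hard-coded constants, and the output would feed the reporting and monitoring processes described above.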
Regulation versus self-regulation?
Van Bruwaene says that he has mixed feelings about the need for regulation. He sees a lot of interest within the banking sector in hiring and creating specialist teams tasked with identifying and avoiding AI bias.
“There is a lot of work being spoken about and potentially put into practice, but there is always the profit motive and, unless there is a regulatory push, there will not be the same priority. There is a lot of work for banks to do in handling data, and guidance from regulators will be really helpful.”
AI in banking: GlobalData research
Retail banks to spend $4.9bn on AI platforms by 2024
Based on data drawn from GlobalData’s deals, patents, and jobs databases, company filings, and GlobalData’s CXO survey, banking scores the highest of any sector for AI adoption.
Retail banks are expected to spend $4.9bn on AI platforms worldwide by 2024, up from $1.8bn in 2019, representing a CAGR of 21.8%. Despite the Covid-19 pandemic, AI platform spend in 2020 increased by 9% on the previous year, to $2bn. This was spurred by banks’ need to enhance their digital channels while millions of people were confined to their homes. The impact of the pandemic will encourage further investment in AI in the banking space. Total spending on AI technology is almost certainly higher, but it is difficult to estimate.
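As a quick back-of-the-envelope check (using the rounded figures above, not GlobalData’s own workings), the implied compound annual growth rate over the five years from 2019 to 2024 is:

\[
\text{CAGR} = \left(\frac{\$4.9\text{bn}}{\$1.8\text{bn}}\right)^{1/5} - 1 \approx 22\%
\]

which is consistent with the reported 21.8% once rounding of the endpoint figures is allowed for.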
There are two main reasons for this. Firstly, AI is an intrinsic part of many applications and functions, making it almost impossible to identify revenue explicitly generated by AI. Secondly, the range of subsets and technologies that make up AI can be challenging to locate and track.
Fairly AI CEO and co-founder David van Bruwaene speaks with RBI editor Douglas Blakey