
Tackling AI’s gender bias in financial services

Because they’re trained on historic data, AI tools could recycle decades of prejudice.

When I asked ChatGPT to write a simple explanation of pension income options for a female audience, I felt the ghosts of misogynists past in its reply. It told me to use reassuring language because women could be less confident about financial decisions. Conversely, it told me to use a neutral tone for men to avoid alienating readers.

It seems Large Language Models (LLMs) – the type of AI that underpins assistants like ChatGPT – can harbour some pretty old-fashioned ideas.

A UNESCO study found that, when asked to “write a story”, LLMs assign domestic roles to women far more often than to men (four times as much in one model). While men tend to be given high-powered roles such as doctor or engineer, women are frequently relegated to the likes of ‘domestic servant’, ‘cook’, or, shamefully, ‘prostitute’.

LLMs are holding a mirror to what has been a man’s world. Nowhere is this more acutely felt than in the way AI is being used in the finance industry.

When AI is used in lending decisions, research from the University of Bath shows that discrimination against women intensifies. Even gender-blind data can backfire. Goldman Sachs’ online bank Marcus found that removing gender markers from its data wasn’t enough to prevent bias. Other data points – like gaps in employment history during childbearing years – acted as proxies and betrayed applicants’ gender. Marcus found AI systems continued to automate discriminatory practices – even when told not to.
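To see why simply deleting the gender column isn’t enough, here is a minimal sketch with entirely synthetic data (made-up numbers, not Marcus’s actual system): a standard classifier recovers gender from supposedly “gender-blind” proxy features such as career-gap length.

```python
# Minimal sketch, synthetic data only: correlated proxy features can give
# gender away even when the gender column itself is never used as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000
gender = rng.integers(0, 2, n)                          # 1 = female (hypothetical label)
# Assumption: career gaps are longer on average for women in this synthetic sample
career_gap_months = rng.poisson(lam=np.where(gender == 1, 14, 3))
income = rng.normal(40_000, 8_000, n) - 1_500 * gender  # a small historical pay gap

X = np.column_stack([career_gap_months, income])        # note: no gender column here
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, gender)

# Gender was never supplied, yet the proxies recover it well above the 50% baseline
print(f"Gender recovered from 'gender-blind' features: {clf.score(X, gender):.0%}")
```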

AI can perpetuate historical prejudice

Because it feeds on historical data, AI reflects human experience and decisions – the good, the bad and the ugly. Essentially, it codifies human behaviour. A whiff of bias can create a pungent, synthetic prejudice.

This creates a dangerous feedback loop. A UCL study found that people interacting with biased AI systems became more biased themselves. In the study, people who became chummy with biased AI were more likely to underestimate women’s performance and overestimate white men’s likelihood of holding high-status jobs. When people were falsely told they were interacting with another human rather than AI, they internalised the biases less.

AI tends to exploit and amplify biases to improve its prediction accuracy. Because these models are rolled out at such scale, the smallest cell of biased data has the potential to taint millions of decisions.

AI that conspires against women threatens to aggravate an already fearsome wealth gap. Men have nearly twice as much money saved in their pensions as women, the Office for National Statistics’ latest Wealth and Assets Survey shows. What would happen if, as predicted, AI is used to start dispensing financial advice at scale? Will women get guidance laced with bias?

The problem is compounded by a lack of female eyes on the industry. Women comprise only 22% of AI talent globally, according to an analysis by interface, with even lower representation at senior levels. This diversity crisis is echoed among AI users. ChatGPT has over 180.5 million users, of whom 66% are men and only 34% are women. According to a survey by BIS, 50% of men have used AI, compared to just 37% of women.

How AI could promote financial inclusion

And yet, AI offers an opportunity to do things differently. The University of Bath study showed that by tweaking algorithms to constrain bias, lenders could boost profits by up to 4% while ensuring women aren’t disadvantaged.
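The study’s exact method isn’t described here, so the following is only a hypothetical sketch of the general idea: search for per-group approval thresholds that maximise profit subject to a fairness constraint (roughly equal approval rates for women and men). All scores, profits and numbers are synthetic.

```python
# Hypothetical sketch (not the Bath study's actual method): constrain the gap
# in approval rates between groups while searching for the most profitable
# approval thresholds. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
is_female = rng.integers(0, 2, n).astype(bool)
true_quality = rng.beta(5, 2, n)                 # hypothetical repayment quality
score = true_quality - 0.06 * is_female          # assume the model under-scores women
profit_if_approved = np.where(true_quality > 0.5, 120.0, -400.0)  # toy economics

def evaluate(threshold_f, threshold_m):
    approved = np.where(is_female, score >= threshold_f, score >= threshold_m)
    profit = profit_if_approved[approved].sum()
    gap = abs(approved[is_female].mean() - approved[~is_female].mean())
    return profit, gap

best = None
for tf in np.linspace(0.3, 0.9, 61):
    for tm in np.linspace(0.3, 0.9, 61):
        profit, gap = evaluate(tf, tm)
        if gap <= 0.02 and (best is None or profit > best[0]):  # fairness constraint
            best = (profit, tf, tm)

print(f"Best profit with approval-rate gap under 2 points: {best[0]:,.0f}")
```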

It’s critical that AI is reared on the right data. Women’s World Banking research shows that women are consistently better credit risks, with higher repayment rates and higher savings-to-income ratios than men. By training AI systems on unbiased data that captures these patterns, financial institutions could make better lending decisions while expanding access to credit for women.

If women were granted mortgages and personal loans at the same rate as men, it could generate additional annual revenues to the tune of $65 billion, according to an Oliver Wyman study.

Word embeddings are another area that deserves scrutiny. Current systems often link words like ‘doctor’ with ‘male’ and ‘nurse’ with ‘female’. Unless they are wired differently, it will be hard to wrestle AI systems away from regressive stereotypes – even with extensive prompting.
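One way to see this for yourself is to probe an off-the-shelf embedding model. The sketch below assumes the gensim library and its downloadable GloVe vectors (not whichever models underpin the systems above); it simply compares each occupation’s similarity to ‘he’ versus ‘she’.

```python
# Probe gendered associations in off-the-shelf GloVe embeddings via gensim.
# Note: the first call downloads the model (roughly 66 MB).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# Positive values lean towards 'he', negative towards 'she'.
for word in ["doctor", "engineer", "nurse", "receptionist"]:
    lean = vectors.similarity(word, "he") - vectors.similarity(word, "she")
    print(f"{word:>14}: {lean:+.3f}")
```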

A regulatory push could help create fairer AI. Last year, the European Union passed the Artificial Intelligence (AI) Act, which includes steps to root out bias before an AI system is unleashed on the world. Women’s World Banking is pushing for lending data to be disaggregated by gender to better understand and overcome biases.

Some countries are incentivising responsible AI through certification. Denmark has launched a Data Ethics Seal, while Malta has introduced a voluntary certification system for AI. Both serve as a badge of honour for companies, sweetening the business case for fair AI.

AI has the power to make or break biases. While the data it sops up may contain traces of prejudice, it doesn’t have to perpetuate them. Ethical AI could be a powerful tool not to reflect our society, but to reshape it.