
Quietroom's response to the FCA Mills Review consultation

Summary

AI is already changing the relationship between financial services firms and their customers. Pension scheme members are using AI tools to get information and make decisions. These tools often give incorrect or incomplete information. This leads to poor outcomes.

The root cause is poor quality content on financial services websites. The solution is to improve that content and to establish clear regulatory expectations for AI providers.

Dear Sheldon Mills and the FCA team,

Thank you for the opportunity to respond to your review into the long-term impact of AI on retail financial services.

Quietroom is a communications consultancy. We make pensions, investment and insurance accessible for people who need them. Our clients include pension schemes, trustees, pension providers and insurance companies.

We are responding to Theme 2 (Future impact of AI on markets and firms) and Theme 3 (Future consumer trends). Our response draws on research we've conducted with pension schemes and their members over the past two years.

Theme 2: Future impact of AI on markets and firms

Who will control the customer relationship by 2030?

Financial services firms are already losing control of the customer relationship to AI providers.

Our research shows that members are increasingly using AI tools to get information about their pension. More than half of UK adults have used AI to manage their money. One in three do so at least once a week.

People look for answers using tools like ChatGPT and Google's AI Overviews. They don't always go straight to the source. One client we work with saw a 91% swing from human visitors to AI visitors in a single month in 2025.

This matters because AI tools are becoming a primary interface between members and their pension. The AI provider controls what information members see. The AI provider decides how that information is presented. And the AI provider captures data about what members are asking.

Financial services firms no longer control the customer relationship in the way they once did.

"AI tools are becoming a primary interface between members and their pension."

Could AI systems provide regulated services while staying outside the regulatory perimeter?

Yes. AI tools are already providing advice-like services without being regulated.

When members ask ChatGPT questions like "should I transfer my pension?", it gives them recommendations. When they ask "which pension provider is best?", it ranks options. When they ask "how much should I save?", it suggests amounts.

These recommendations are functionally equivalent to guidance or advice. But AI providers are not regulated as financial advisers.

The line between information, guidance and advice was already blurred. AI makes it more so. Members don't distinguish between "this is what you could do" and "this is what you should do" when an AI tool presents both with equal confidence.

There's also a risk that AI tools direct members to unregulated or disreputable third parties. Our research found AI tools directing members not to their scheme or administrator, but to financial advisers. We cannot verify whether these advisers are appropriate or reputable.

Evidence of risks and poor outcomes

AI tools are serving up bad information about pensions and other financial services. Our research has found AI tools:

  • Serving information that's incorrect, incomplete, irrelevant - or all three
  • Confidently giving answers from a different scheme with a similar name
  • Directing members not to the scheme or its administrators but to third parties, including disreputable financial advisers

The root cause is the quality of content on financial services websites. Pension scheme websites often contain content that is:

  • Too long and complex
  • Written in jargon
  • Poorly structured
  • Difficult for AI to access and understand

When AI tools encounter this content, they struggle. They make errors. They fill gaps with information from other sources. They invent answers that sound plausible but are wrong.

This creates two problems. First, members get poor outcomes because they make bad decisions based on bad information. Second, financial services firms lose their reputation as the trusted source of information.

"Research shows AI discriminates against women in lending decisions."

Theme 3: Future consumer trends

How might consumers benefit from AI and what are the greatest risks?

AI could help members by:

  • answering questions quickly, 24 hours a day
  • summarising complex documents
  • explaining jargon in plain language
  • helping people who struggle with reading or financial literacy

Members could also benefit indirectly when providers use AI to:

  • analyse data and gather insight
  • develop products and services
  • run member services and back-office operations more efficiently
  • automate processes and reduce cost
  • deploy time and money savings into areas that deliver more value to consumers

But the risks are significant.

Risk 1: Inaccurate information

AI tools give wrong answers. Our research shows they regularly serve information that's incorrect, incomplete or irrelevant. Members trust these answers without checking sources. This leads to poor financial decisions.

Risk 2: Loss of agency and understanding

When members delegate to AI, they understand less about their own financial situation. They become dependent on tools they don't control. They can't spot when answers are wrong.

Risk 3: Direction to unregulated providers

AI tools direct members to third parties outside the regulatory perimeter. Members may end up with inappropriate or poor value products. They may be vulnerable to scams.

Risk 4: Amplification of existing bias

AI tools are trained on historic data. This data contains bias. Research shows AI discriminates against women in lending decisions. It perpetuates gender stereotypes in financial communications. It could deepen existing inequalities.

Which consumer segments might 'win' or 'lose'?

Winners: People with good digital skills and financial literacy. They can use AI tools effectively. They can spot errors. They can verify information.

Losers: People with poor digital skills or low financial literacy. They're more likely to trust AI answers without checking. They're more vulnerable to scams. They may not spot when information is wrong or inappropriate.

Older people may also lose out. They're less likely to use AI tools (only 10% of over-55s use ChatGPT for money advice, compared with two-thirds of 25-34-year-olds). As services optimise for AI users, non-AI users may get worse experiences.

Women may lose out. AI tools show gender bias. They suggest women need "reassuring language" because they're "less confident" about financial decisions. They assign domestic roles to women far more often than to men. This bias could lead to worse financial outcomes for women.

"AI tools may inadvertently direct members to fraudulent schemes or disreputable advisers."

How might delegation to AI affect consumer understanding and vulnerability?

Members are already delegating important tasks to AI. They upload 40-page scheme booklets to ChatGPT. They ask it to summarise the key points. They ask it what decision they should make.

This delegation has consequences:

Reduced understanding: When AI does the thinking, members understand less. They can't explain why they made a decision. They don't know if the information was correct.

Increased vulnerability: Members who don't understand their own finances are more vulnerable to scams, poor advice and bad outcomes. They can't spot when something is wrong.

Loss of control: Once members delegate to AI, it's hard to take control back. They become dependent on tools they don't understand.

The risk is greatest for people who are already vulnerable. People with low financial literacy. People under financial pressure. People making complex decisions about retirement.

How could AI-driven fraud evolve?

AI makes fraud easier and more convincing. Fraudsters can use AI to:

  • Create deepfake videos of trusted individuals (like pension scheme trustees)
  • Generate personalised phishing emails that sound legitimate
  • Impersonate customer service representatives
  • Create fake websites that look identical to real ones

Our particular concern is that AI tools may inadvertently direct members to fraudulent schemes or disreputable advisers. When ChatGPT suggests a financial adviser, members assume that adviser is legitimate. They don't verify. This creates an opportunity for fraud.

What might help make AI-driven decisions more trusted?

Action 1: Set minimum content standards

Financial services firms should be required to meet minimum standards for their online content. Content should be:

  • Written in plain language
  • Structured logically
  • Focused on answering members' actual questions
  • Easy for AI to access and understand

This would reduce the risk of AI tools giving incorrect answers.
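
One established way to meet the last of these standards is to publish key questions and answers in a machine-readable form alongside the page itself. The sketch below is a minimal illustration using schema.org's FAQPage vocabulary serialised as JSON-LD; the question and answer are placeholders, not real scheme content.

```python
# A minimal sketch: one FAQ entry expressed as schema.org FAQPage
# markup and serialised to JSON-LD. The content is a placeholder.
import json

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "When can I take my pension?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Plain language, one idea per sentence.
                "text": (
                    "You can usually take your pension from age 55. "
                    "This rises to 57 from April 2028."
                ),
            },
        },
    ],
}

# Embed the output in the page inside a
# <script type="application/ld+json"> element.
print(json.dumps(faq_page, indent=2))
```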

Action 2: Require transparency about AI use

AI tools should clearly disclose when they're providing information about regulated financial products. They should explain their limitations. They should direct members to regulated sources for important decisions.

Action 3: Monitor what AI tools are saying

Financial services firms should regularly test what AI tools say about their products. They should identify inaccuracies. They should take steps to fix them.

The Pensions Regulator expects communications to be "accurate, clear, concise, relevant and in plain English" and "reviewed in light of innovations in technology that become available". Firms should consider AI tools as part of this requirement.
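
A monitoring routine of this kind can start small. Below is a minimal sketch in Python, using the OpenAI client library as one example interface; the scheme name, questions, expected facts and model choice are illustrative assumptions, and a real programme would cover multiple AI tools and route failures to human review.

```python
# A minimal sketch of routine AI-answer monitoring. The scheme name,
# questions and expected facts below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Questions members actually ask, each paired with facts a correct
# answer must contain (hypothetical values for illustration).
CHECKS = [
    ("What is the normal retirement age for the Example Pension Scheme?",
     ["65"]),
    ("Who administers the Example Pension Scheme?",
     ["Example Administrators Ltd"]),
]

def run_checks() -> None:
    for question, expected_facts in CHECKS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": question}],
        )
        answer = response.choices[0].message.content or ""
        missing = [fact for fact in expected_facts if fact not in answer]
        if missing:
            # Flag for human review: the answer omits verified facts.
            print(f"REVIEW: {question!r} -- missing {missing}")
        else:
            print(f"OK: {question!r}")

if __name__ == "__main__":
    run_checks()
```

A simple substring check like this will produce false positives (an answer can be correct without using the exact phrase), so anything flagged should be read by a human rather than treated as a definitive failure.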

Action 4: Establish regulatory expectations

The FCA should clarify when AI tools are providing guidance or advice. It should establish what responsibilities AI providers have when they give information about regulated financial products.