How might delegation to AI affect consumer understanding and vulnerability?
Members are already delegating important tasks to AI: they upload 40-page scheme booklets to ChatGPT, ask it to summarise the key points, and ask it what decision they should make.
This delegation has consequences:
- Reduced understanding: When AI does the thinking, members understand less. They can't explain why they made a decision, and they can't tell whether the information they relied on was correct.
- Increased vulnerability: Members who don't understand their own finances are more exposed to scams, poor advice and bad outcomes, because they can't spot when something is wrong.
- Loss of control: Once members delegate to AI, it's hard to take control back. They become dependent on tools they don't understand.
The risk is greatest for people who are already vulnerable: those with low financial literacy, those under financial pressure, and those making complex decisions about retirement.
How could AI-driven fraud evolve?
AI makes fraud easier and more convincing. Fraudsters can use AI to:
- Create deepfake videos of trusted individuals (like pension scheme trustees)
- Generate personalised phishing emails that sound legitimate
- Impersonate customer service representatives
- Create fake websites that look identical to real ones
Our particular concern is that AI tools may inadvertently direct members to fraudulent schemes or disreputable advisers. When ChatGPT suggests a financial adviser, members tend to assume that adviser is legitimate; few verify it independently. This creates an opportunity for fraud.
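One practical countermeasure is to check any suggested adviser against the FCA's Financial Services Register before acting on the suggestion. The sketch below illustrates the idea in Python; the endpoint path, authentication headers and response fields are assumptions based on the register's public developer portal (https://register.fca.org.uk/Developer/s/), and should be confirmed against the current documentation rather than treated as the real interface.

```python
import requests

# Hypothetical lookup against the FCA Financial Services Register.
# The endpoint, headers and response shape below are assumptions for
# illustration; confirm them against the register's developer portal
# (https://register.fca.org.uk/Developer/s/) before relying on this.
REGISTER_API = "https://register.fca.org.uk/services/V0.1/Firm/{frn}"

def adviser_is_authorised(frn: str, api_email: str, api_key: str) -> bool:
    """Return True only if the firm reference number (FRN) resolves to
    a firm the register reports as authorised."""
    response = requests.get(
        REGISTER_API.format(frn=frn),
        headers={"x-auth-email": api_email, "x-auth-key": api_key},
        timeout=10,
    )
    response.raise_for_status()
    records = response.json().get("Data") or []
    # Treat anything other than an explicit "Authorised" status as a red flag.
    return bool(records) and records[0].get("Status") == "Authorised"
```

The point is not this particular implementation but the habit it encodes: a recommendation from an AI tool is a lead to be checked, not a credential.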
What might help make AI-driven decisions more trusted?
Action 1: Set minimum content standards
Financial services firms should be required to meet minimum standards for their online content. Content should be:
- Written in plain language
- Structured logically
- Focused on answering members' actual questions
- Easy for AI to access and understand
This would reduce the risk of AI tools giving incorrect answers.
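To make "plain language" testable rather than aspirational, firms could screen content with automated readability checks before publication. A minimal sketch, assuming the third-party textstat package and an illustrative (not regulatory) reading-ease threshold:

```python
import textstat  # third-party readability library: pip install textstat

# Illustrative threshold: a Flesch Reading Ease score of about 60 roughly
# corresponds to text a general audience can read comfortably. The cut-off
# here is an assumption, not a regulatory standard.
PLAIN_ENGLISH_THRESHOLD = 60.0

def flag_hard_to_read(pages: dict[str, str]) -> list[str]:
    """Return the identifiers of pages whose text falls below the
    plain-English readability threshold."""
    return [
        page_id
        for page_id, text in pages.items()
        if textstat.flesch_reading_ease(text) < PLAIN_ENGLISH_THRESHOLD
    ]

# Example: screen two draft pages before publication.
drafts = {
    "transfers-faq": "You can move your pension to another scheme. "
                     "Check for exit fees first.",
    "annuity-options": "Annuitisation alternatives necessitate the "
                       "consideration of longevity risk mitigation.",
}
print(flag_hard_to_read(drafts))  # likely flags "annuity-options"
```

A low score doesn't condemn a page, but it tells an editor where to look first.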
Action 2: Require transparency about AI use
AI tools should clearly disclose when they are providing information about regulated financial products, explain their limitations, and direct members to regulated sources for important decisions.
Action 3: Monitor what AI tools are saying
Financial services firms should regularly test what AI tools say about their products, identify inaccuracies, and take steps to correct them.
The Pensions Regulator expects communications to be "accurate, clear, concise, relevant and in plain English" and "reviewed in light of innovations in technology that become available". Firms should consider AI tools as part of this requirement.
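One way to operationalise this monitoring is to run a fixed panel of member questions through popular AI tools on a schedule and flag answers that miss known facts for human review. A minimal sketch, assuming the OpenAI Python client as the tool under test; the questions, expected facts and keyword screen are illustrative stand-ins for a proper accuracy review:

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# Fixed panel of member questions, each paired with facts an accurate
# answer should mention. Both questions and facts here are illustrative.
TEST_PANEL = {
    "What is the normal minimum pension age in the UK?": ["55"],
    "Are pension transfers ever subject to exit fees?": ["fee"],
}

def run_monitoring_panel() -> list[tuple[str, str]]:
    """Query the tool under test and return (question, answer) pairs
    whose answers are missing an expected fact, for human review."""
    flagged = []
    for question, expected_facts in TEST_PANEL.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": question}],
        )
        answer = response.choices[0].message.content or ""
        # Crude first-pass screen: a missing keyword does not prove the
        # answer is wrong, only that a human should look at it.
        if not all(fact.lower() in answer.lower() for fact in expected_facts):
            flagged.append((question, answer))
    return flagged

for question, answer in run_monitoring_panel():
    print(f"REVIEW NEEDED: {question}\n{answer}\n")
```

Run on a schedule, this turns "monitor what AI tools are saying" from a vague aspiration into a repeatable check with an audit trail.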
Action 4: Establish regulatory expectations
The FCA should clarify when AI tools are providing guidance and when they cross into regulated advice, and it should establish what responsibilities AI providers have when giving information about regulated financial products.