
How AI will change the way people read pension communications

Members are reading AI-generated summaries of pension communications. What does this mean for the channels you use and the way you write?

In the future, some pension scheme members won’t read pension communications as they were written. Instead, they’ll read AI-generated summaries on their phones and computers. This technology is here now. It’s being rolled out to apps members already use, on devices they already own.

Based on what we know about the way people read, members will be more likely to read these summaries than the actual communications pension schemes send them.

We know how people read, but the pensions industry hasn't adapted

Experts have conducted decades of research on how users read.

They’ve learned that people are more likely to read communications that:

  • are relevant to their needs
  • can be scanned
  • start with the most important information
  • use clear language and are concise

The pensions industry hasn’t adapted to these findings. Pockets of best practice aside, schemes still send out booklets that run to dozens of pages, 8-page letters with multiple appendices, and reams of legal and pensions jargon that no member can reasonably be expected to understand.

These communications might come from a place of good intentions, but they don’t lead to good member outcomes. In trying to satisfy the views of the many authors writing these messages, our industry creates communications that are hard for members to read, understand, and use.

If communications don't improve then members will use AI to read them instead

Researchers and developers have been working on a type of AI called the large language model.

Large language models are programs trained on huge amounts of human-written data to understand language and give human-like responses. One way of using these models is as a chatbot. If you’ve ever spoken to ChatGPT, you’ve used a large language model.

Among the many capabilities of large language models is the ability to:

  • summarise text 
  • understand images 
  • answer questions about text and images, based on the data they’re trained on

Developers are using these models to create tools that read and summarise communications. These summaries will give members an alternative to reading communications as they were originally written. This alternative will shorten long messages, translate jargon into clear language, suggest next steps, and give users the ability to ask a chatbot questions and get AI-generated answers.
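
To make this concrete, here is a minimal sketch of how a developer might build such a summariser, assuming the OpenAI Python SDK; the model name, prompt and file name are illustrative, not any specific product’s settings.

```python
# Minimal sketch of an LLM-powered summariser, assuming the OpenAI Python SDK.
# The model name, prompt and file name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def summarise_communication(text: str) -> str:
    """Ask a large language model for a short, plain-language summary."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise this pension communication in plain language. "
                    "Keep it short, explain any jargon, and list the actions "
                    "the reader needs to take."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Illustrative usage with a hypothetical file name.
with open("annual_benefit_statement.txt") as f:
    print(summarise_communication(f.read()))
```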

Members will choose to read these summaries instead of original communications if they decide that the time, effort and energy needed to interact with a communication is too high.

This isn’t the stuff of fiction. This is happening right now in the apps members already use, on the devices they already own.

Every major communication channel will be affected

1. Email

When members open an email, they’ll get the option to read an AI-generated summary instead of the full, original communication. Some apps will show this summary automatically.

Members are likely to engage with these summaries because they’ll:

  • be shorter, so they won’t take as long to read
  • use clear language, so they’ll be easier to read
  • summarise the actions, so there’s less to figure out

Microsoft is currently piloting this feature in Outlook.

2. Letters and other paper communications 

When members receive a paper communication, they’ll be able to take pictures of the pages and upload them to a chatbot. 

Members can ask this chatbot any questions they like. They might ask for the content to be summarised. Or they could ask for an explanation of images like tables, graphs and infographics. A chatbot will answer questions using the original communication, the data it’s trained on, and the internet. 

OpenAI lets users do this in the ChatGPT app.

Microsoft allows users to interact with images in Bing Chat.
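
As a rough illustration of what happens behind the scenes, here is a minimal sketch of sending a photographed letter to a vision-capable model, again assuming the OpenAI Python SDK; the model name, file name and question are hypothetical.

```python
# Minimal sketch: asking a vision-capable model about a photographed letter.
# Assumes the OpenAI Python SDK; model, file name and question are illustrative.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the member's photo of the paper communication as base64.
with open("pension_letter_page1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What does this letter ask me to do, and by when?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```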

3. Search engines 

Right now, when a user searches for something online they get a list of results. They then click a result to be taken to a webpage where they can read all the information in context. 

Soon, users will be less likely to visit your website because a large language model will do the heavy lifting. AI will look at the search results first, find information on relevant websites, and then write the user an answer without them ever seeing the original content on your webpage. 

Users can ask follow-up questions, and AI will use its knowledge base and the internet to find answers. Users can still visit the web pages if they like, but they’ll be less likely to do so if they can find an answer straight away.
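
Here is a simplified sketch of that retrieve-then-answer pattern, assuming the requests, beautifulsoup4 and openai Python packages; the URL, model name and prompts are illustrative, and real search assistants add ranking, caching and citation steps on top.

```python
# Simplified sketch of the retrieve-then-answer pattern: fetch a relevant page,
# then let the model answer from its content. Assumes the requests,
# beautifulsoup4 and openai packages; the URL and prompts are illustrative.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()

def answer_from_page(url: str, question: str) -> str:
    # Fetch the page and strip the HTML down to its visible text.
    html = requests.get(url, timeout=10).text
    page_text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

    # Ask the model to answer using only the page content.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer the user's question using only the webpage text provided."},
            {"role": "user",
             "content": f"Webpage text:\n{page_text[:8000]}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Illustrative usage with a hypothetical scheme FAQ page.
print(answer_from_page("https://example.com/scheme-faq",
                       "How do I update my beneficiary details?"))
```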

Microsoft is piloting this in Bing Chat in the Microsoft Edge browser. 

In Google Chrome, users can install browser extensions that search the web using AI.

Are these tools any good?

When they get things right, yes. But they aren’t without their problems. 

Large language models are prone to:

  • Hallucination – They can make up answers and present them convincingly.
  • Variable answers – They don’t give the same answer every time (see the sketch after this list).
  • Bias – These models are trained on data. If the data contains bias, the answers they give might contain bias too. And if the data is wrong, the models will get it wrong. 
  • Black-box reasoning – Experts don’t fully understand how models arrive at their answers.
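
The variability is easy to demonstrate: because models sample their output, the same prompt can return different text on each call. A minimal sketch, assuming the OpenAI Python SDK, with an illustrative model name and prompt:

```python
# Minimal sketch showing variable answers: models sample their output, so the
# same prompt can produce different text on each call. Assumes the OpenAI SDK.
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence, what is a defined benefit pension?"

for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling temperature; higher values vary more
    )
    print(f"Answer {attempt + 1}: {response.choices[0].message.content}")
```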

But the biggest limitation of these tools when it comes to summarising content and answering questions isn’t the models they’re based on – it’s the communications they’re provided with.

The summaries and answers that AI can generate are only as good as the source material they’re given. If a communication is long and complex, AI is more likely to make a mistake. If it doesn’t give members what they need to make a decision, AI might take a guess. And if a communication doesn’t make sense, AI could make up the answer.

What you can do

Microsoft, Google and Apple are all investing billions into developing AI products and rolling them out to their phones and computers. These tools will soon be part of a member’s everyday experience of reading messages, asking questions and searching for answers.

We can’t control what members do on their own devices – and so we can’t stop them from using AI tools to interact with our content.

You can help members by creating user-driven content: concise content, built around your users’ needs and written in clear language. That way, when you write to members, you’ll make it less likely that they’ll turn to AI tools to generate summaries or ask questions.

And if members do still get automated summaries, or if AI is interpreting content from your communication or your website, well-designed content will help keep you safe. Writing clearly, avoiding jargon, and making complexity accessible will make it more likely that AI tools will generate accurate answers and summaries from your content.