How AI is changing the way people read pension communications

Members are reading AI-generated summaries of pension communications. What does this mean for the channels you use and the way you write?

This insight was written by Thomas Joy, who worked at Quietroom from 2022 to 2025. It was published in October 2023 and updated in February 2026.

Every month, fewer pension scheme members are reading the pension communications that schemes send them. Instead, they’re reading AI-generated summaries of that content. This technology is here now – rolled out to the apps members already use, on devices they already own.

We know how people read, but the pensions industry hasn't adapted

Researchers have spent decades studying how people read.

They’ve learned that people are more likely to read communications that:

  • are relevant to their needs
  • are easy to scan
  • start with the most important information
  • use clear, concise language

The pensions industry has been slow to adapt. Outside a few pockets of best practice, members still face scheme booklets that run to dozens of pages, 8-page letters with multiple appendices, and reams of legal and pension jargon that few of them can be expected to understand.

These communications might come from a place of good intentions, but they don’t lead to good member outcomes. In trying to satisfy the many authors who have a say in these messages, our industry creates communications that are hard for members to read, understand and use.

Members are already using AI tools to read communications

A type of AI called the large language model (LLM) has become part of how millions of people use the internet.

LLMs are trained to identify patterns in data and generate human-sounding responses. One way of using these models is as a chatbot. If you’ve ever used ChatGPT, you’ve used an LLM.

Among other things, LLMs can:

  • summarise text
  • understand images
  • answer questions

Developers have used these models to build tools that analyse communications and extract their key points. The summaries these tools generate give members an alternative to the original: they shorten long messages, rewrite jargon, suggest next steps, and let users put follow-up questions to a chatbot and get AI-generated answers.
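
For readers curious what these tools look like under the hood, here’s a minimal sketch of a summariser built on an LLM API, using the OpenAI Python library. The model name, file name and prompt are assumptions for illustration, not any particular product’s implementation:

```python
# A minimal sketch of an LLM-based summariser.
# Model name, file name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Load the original communication (assumed to be saved as plain text)
with open("annual_benefit_statement.txt", encoding="utf-8") as f:
    letter = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable LLM would do
    messages=[
        {
            "role": "system",
            "content": (
                "Summarise the member's pension letter in plain English. "
                "Lead with the most important point, explain any jargon, "
                "and finish with suggested next steps."
            ),
        },
        {"role": "user", "content": letter},
    ],
)

print(response.choices[0].message.content)  # the AI-generated summary
```

Note that everything the member sees is shaped by a prompt like this one, written by the tool’s developer rather than by the scheme that wrote the original.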

Many members will choose to read these summaries instead of the original communications if they decide that engaging with the original takes too much time and effort.

Every major communication channel is affected

1. Email

Often, when members open an email, they’re offered an AI-generated summary alongside the original communication. Some apps show this automatically.

If those summaries are simpler and quicker to read or offer next steps, many members will prefer them to the original.

2. Letters and other paper communications 

When members receive a paper communication, they can take pictures of the pages and upload them to a chatbot.

They can then ask the chatbot to do things like summarise the content, explain key terms, pull the information out of tables and graphs, or suggest next steps. The chatbot will answer questions using the original communication, its training data and the internet.
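
Technically, this flow is just a call to a multimodal LLM. Here’s a minimal sketch, again using the OpenAI Python library; the model and file names are assumptions for illustration:

```python
# A minimal sketch of the 'photograph a letter, ask a chatbot' flow.
# Model and file names are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()

# Encode a photo of the letter as a base64 data URL
with open("letter_page_1.jpg", "rb") as f:
    photo = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed: any vision-capable (multimodal) model
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Summarise this letter, explain any pension "
                        "jargon, and suggest what I should do next."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{photo}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```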

3. Search engines 

Traditionally, search engines offered users a list of webpages to choose from.

Now, tools like Google AI Overview answer questions directly, using content drawn from those webpages. Users can still visit the webpages if they like, but most people won’t: not if they’re getting a direct answer to their question that’s simple to read and sounds plausible.

Are these tools any good?

When they get things right, yes. But they aren’t without their problems.

LLMs are prone to:

  • inventing (or ‘hallucinating’) answers that are wrong but sound convincing
  • giving different answers to the same question
  • perpetuating any bias that exists in their training data

These problems have not gone away. But the tools have improved. They hallucinate less often. They handle longer documents better. And they are now built into everyday software like Outlook, Google, and mobile phones. Members don’t need to go out of their way to use them.

But often the biggest limitation of these tools isn’t the technology itself – it’s the quality of the communications they’re given. If a communication is long and complex, AI tools are more likely to make mistakes. If it doesn’t give members what they need to make a decision, AI tools might try to fill the gap.

What you can do

Microsoft, Google and Apple are investing billions of dollars in developing AI products and pushing them out to people’s phones and computers.

We can’t control what members do on their own devices – and so we can’t stop members from using AI tools to interact with our content.

You can help members by creating user-driven content: concise content, built around your users’ needs and written in clear language. That way, when you write to members you’ll make it less likely that they’ll turn to AI tools to generate summaries or ask questions.

And if members do still get automated summaries, or if AI is interpreting content from your communication or your website, well-designed content will help keep you safe. Writing clearly, avoiding jargon, and making complexity accessible will make it more likely that AI tools will generate accurate answers and summaries from your content.