How to help members get better answers from generative AI

The more difficult it is to use your content, the more likely it is that generative AI will give your members incorrect answers.

At our recent AI seminar, Quietroom’s Thomas Joy and Cath Collins explored how AI is already a part of pension communications – whether the industry likes it or not.

Here are the points from their talk that really stayed with me.

Your members are already using AI

If your members are searching the internet, they’re coming across AI-generated answers. If they work at a desk, their organisation is likely to be training them to use AI assistants like Microsoft Copilot. And if they’re one of the 1 billion people who are already using ChatGPT, then AI might be the new way that they find, consume, and interact with information online.

They are relying on AI overviews, and taking them at face value

In the UK, 92% of people search the internet using Google. Google now generates an AI overview for around half of all searches. Research studies show that as many as 7 in 10 users take the information in these overviews at face value. That means they don’t check where the information has come from, and they don’t view the webpages listed in the sources.

When a user gets an answer from a search engine without visiting your website, it’s called a ‘zero click’ search. Zero click searches are on the rise. They mean that the content you lovingly created and painstakingly reviewed may no longer be seen in the way you intended. It might not be seen at all.

They are asking AI assistants to summarise complex information

AI assistants are becoming mainstream.

As well as reaching a billion users, ChatGPT became both the most downloaded app in the world in 2025 and the 5th most visited website in the world. That puts it ahead of Amazon, Netflix, and Twitter.

As your members become used to interacting with AI assistants at home and at work, they’re starting to turn to them for help with their money.

Research from the credit card company Aqua found that two-thirds of 25-34-year-olds use ChatGPT and similar AI tools to get financial advice. More than half of 21-24-year-olds do the same. For people aged 55 and over, it’s just 10%.

For pension schemes, this data supports what employers and trustees are seeing and hearing from their members: that members are increasingly turning to AI to decode complex terminology and summarise lengthy documents.

Those 40-page scheme booklets that took months to perfect? Members are uploading them to AI assistants and asking for the key points, and what decision they should make.
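
To see how low the barrier is, here’s a minimal sketch of that workflow in Python, using the openai library. Everything specific here is an assumption for illustration – the file name, the prompt, the model choice – but a member can do the equivalent in any chat window in seconds:

    # A minimal sketch of what a member can do in seconds. The file name and
    # model choice are illustrative; the openai library reads OPENAI_API_KEY
    # from the environment.
    from openai import OpenAI

    client = OpenAI()

    with open("scheme-booklet.txt") as f:  # hypothetical 40-page booklet
        booklet = f.read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "What are the key points, and what decision should I make?\n\n" + booklet,
        }],
    )
    print(response.choices[0].message.content)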

Large language models have limitations

Human readers have cognitive constraints, and it turns out that large language models (LLMs) have similar limitations.

Humans can typically hold 5 pieces of information in their working memory at once. For some, this drops to just 3. And for anyone experiencing a form of vulnerability, it can drop even further.

Similarly, there’s a limit to how much an LLM can handle at one time, called its ‘context window’. If your webpages and documents exceed the context window, the LLM won’t be able to look at all of your content. And even if you stay within it, the more information you give an LLM, the harder it is to find the right information and generate an accurate answer.

Research shows that LLMs are most likely to miss information that’s buried in the middle of a long input – a ‘lost in the middle’ effect.

That means that dense and meandering scheme booklets that bury important details on page 23 could well see those details missed by the very tools that members are turning to.
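
As a rough sketch of that first constraint, here’s how you might check whether a document even fits in a context window, using the open-source tiktoken tokeniser. The 128,000-token limit and the file name are assumptions for illustration – real models and products vary:

    # A rough sketch: does this document fit in a context window at all?
    # The limit and the file name are illustrative; real models vary.
    import tiktoken

    CONTEXT_WINDOW = 128_000  # example limit, in tokens

    def fits_in_context(text: str) -> bool:
        encoding = tiktoken.get_encoding("cl100k_base")
        token_count = len(encoding.encode(text))
        print(f"{token_count:,} tokens of {CONTEXT_WINDOW:,} available")
        return token_count <= CONTEXT_WINDOW

    with open("scheme-booklet.txt") as f:  # hypothetical booklet
        if not fits_in_context(f.read()):
            print("Too long: the model can't read all of this at once.")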

How LLMs find your content

Nowadays, when a user searches the internet or asks an AI assistant a question, an LLM reads their question, figures out what they mean, finds information that answers it, and then writes them an answer.

LLMs can do this because they may well have access to your content. That access can come in 2 ways:

  1. They’ve been trained on it, because the content from your website has been downloaded and used as data to train large language models.
  2. They use tools that let them search the internet, read your website, and use what they find to generate answers.
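
Here’s a minimal sketch of that second route, in Python. The URL is hypothetical and the relevance scoring is deliberately naive – real assistants use far more sophisticated retrieval – but the shape is the same: fetch the page, extract the text, and keep the passages that best match the question:

    # A minimal sketch of the 'search and read' route. The URL is hypothetical
    # and the scoring is deliberately naive; real assistants do this far better.
    import requests
    from bs4 import BeautifulSoup

    def best_passages(url: str, question: str, top_n: int = 3) -> list[str]:
        html = requests.get(url, timeout=10).text
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

        # Rough chunking: split the page into sentence-sized passages.
        passages = [p.strip() for p in text.split(".") if len(p.strip()) > 20]

        # Naive relevance: how many of the question's words does each passage share?
        question_words = set(question.lower().split())

        def score(passage: str) -> int:
            return len(question_words & set(passage.lower().split()))

        return sorted(passages, key=score, reverse=True)[:top_n]

    for passage in best_passages("https://example-scheme.co.uk/your-pension",
                                 "how much pension will I get?"):
        print("-", passage)

Notice what this implies: if the answer isn’t on the page, in language close to the question, nothing useful comes back.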

You can’t control how Google or ChatGPT work. But you can control your content. And your content is important, because what you write, and the way you write it, significantly influences how good an answer your members will get when they use generative AI.

How you can make your content AI-ready

The solution isn’t to write for robots, but to write better for humans. Quietroom’s research shows that AI does a much better job of accurately summarising or explaining content when that content is already clear, consistent, well-structured and written in short sentences.

4 things your content needs to do

1. Keep your content short, and only tell members what they need to know

The length and structure of your writing affect the ability of humans and LLMs to find answers.

If your content is too long, or if a key point is buried in the middle of a document, then the answer is more likely to be missed.

So lead with the most important information and focus on what members need to know – not all the things you want to tell them.

2. Answer questions directly

Make sure members and LLMs can find your content easily. Don’t hide your content behind a login screen, or present it in a format that’s difficult to read and interpret.

And make sure your content is answering your members’ questions in their language. Analysis of pension-related searches reveals members aren’t searching for “retirement benefit crystallisation options” – they’re asking “how much pension will I get?”

LLMs look for content that uses language similar to the user’s. If that information is missing, or if the language you use is too different, they’re less likely to find the answer.
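
As a sketch of what ‘similar’ means here, the open-source sentence-transformers library can score how close two phrasings are in meaning. The sentences below are made up for illustration, and exact scores vary by model, but plain-English text reliably sits closer to a plain-English question:

    # Sketch: how close are two phrasings in meaning? The sentences are
    # illustrative; exact scores depend on the model.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    question = "how much pension will I get?"
    candidates = [
        "Your yearly pension is based on your salary and how long you've paid in.",
        "Retirement benefit crystallisation options are set out in Annex C.",
    ]

    question_embedding = model.encode(question)
    for text in candidates:
        similarity = util.cos_sim(question_embedding, model.encode(text)).item()
        print(f"{similarity:.2f}  {text}")

    # The plain-English sentence scores higher, so it's the one an LLM is
    # more likely to find and use in its answer.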

3. Use the simplest language that works

Use everyday language and avoid jargon. This will allow LLMs to find what they are looking for much more quickly and easily.

Most users ask LLMs questions in a conversational way. So when an LLM reads your content, it translates complex ideas into simple wording as it generates an answer.

Complex terminology can get mangled in AI translation, potentially changing meanings or creating confusion.

Good content is like good data. It helps ensure accurate, high-quality results when interpreted by LLMs.

4. Keep your content up to date

Out-of-date content will provide the wrong answers to your members’ queries.

When a member thinks they’ve found the answer to their question, they stop searching. They don’t read every piece of supporting content to make sure that the answer is correct.

LLMs work in a similar way. They’re efficient. So when a large language model thinks it’s found an answer, it doesn’t look any further. And for lots of pension schemes, their existing content makes that a problem.

Since we know that users take AI overviews at face value, this creates a risk.

Why this matters

When LLMs come to your website, they’re coming to you because they see you as the trusted source for a member’s question.

And so they’re relying on you to give them what they need to write the correct answer.

But the great majority of pension scheme content does not work for members. It’s not easy to find, use, or understand.

This leaves a content gap that must be filled by someone – or something.

And we know that members don’t tend to turn to trusted sources when they have questions about their pension – and most don’t take financial advice. In the past, that might have meant they did nothing, or asked friends or family, or posted on pensions forums and social media.

The emergence of generative AI fundamentally changes things because members don’t need to interpret your content anymore. Generative AI will do the work for them and answer their questions.

The more ‘gappy’, complex, and difficult to use your content is, the more likely it is that generative AI will give your members incorrect answers.

But you can do the work to close those gaps: to answer members’ questions in the simplest possible way, so that they don’t need to turn to generative AI in the first place – or so that generative AI couldn’t get the answer wrong even if it tried. Then you’re not just managing the risks, you’re creating better member outcomes. And creating better outcomes is what our industry is all about.

Watch a recording of Cath and Thomas’ talk below

Want help getting your content LLM-ready? Contact us.