The more difficult it is to use your content, the more likely it is that generative AI will give your members incorrect answers.
Your members are already using AI
If your members are searching the internet, they’re coming across AI-generated answers. If they work at a desk, their organisation is likely to be training them to use AI assistants like Microsoft Copilot. And if they’re one of the 1 billion people who are already using ChatGPT, then AI might be the new way that they find, consume, and interact with information online.
They are relying on AI overviews, and taking them at face value
In the UK, 92% of people search the internet using Google. Google now generates an AI overview for around half of all searches. Research studies show that as many as 7 in 10 users take the information in these overviews at face value. That means they don’t check where the information has come from, and they don’t view the webpages listed in the sources.
When a user gets an answer from a search engine without visiting your website, it’s called a ‘zero-click’ search. Zero-click searches are on the rise. They mean that the content you lovingly created and painstakingly reviewed may no longer be seen the way you intended. It might not be seen at all.
They are asking AI assistants to summarise complex information
AI assistants are starting to become mainstream.
As well as having a billion users, in 2025 ChatGPT became both the most downloaded app and the 5th most visited website in the world. That puts it ahead of Amazon, Netflix, and Twitter.
As your members become used to interacting with AI assistants at home and at work, they’re starting to turn to them for help with their money.
Research from the credit card company Aqua found that two-thirds of 25-34 year olds use ChatGPT and similar AI tools to get financial advice. More than half of 21-24 year olds do the same. For people aged 55 and over, it’s just 10%.
For pension schemes, this data supports what employers and trustees are seeing and hearing from their members: that members are increasingly turning to AI to decode complex terminology and summarise lengthy documents.
Those 40-page scheme booklets that took months to perfect? Members are uploading them to AI assistants and asking for the key points, and what decision they should make.
Large language models have limitations
Human readers have cognitive constraints, and it turns out that large language models (LLMs) have similar limitations.
Humans can typically hold 5 pieces of information in their working memory at once. For some, this drops to just 3. And if they are experiencing any form of vulnerability, it can drop even further.
Similarly, there’s a limit to how much LLMs can handle at one time. It’s called the ‘context window’. If your webpages and documents exceed the context window, an LLM won’t be able to look at all of your content. And even if you stay within it, the more information you give an LLM, the harder it is for it to find the right details and generate an accurate answer.
Research shows that LLMs are especially likely to miss information buried in the middle of a long document.
That means that dense and meandering scheme booklets that bury important details on page 23 could well see those details missed by the very tools that members are turning to.
How LLMs find your content
Nowadays, when a user searches the internet or asks an AI assistant a question, the LLM reads their question, works out what they mean, finds information that answers it, and then writes them an answer.
LLMs can do this because they may well have access to your content. They can get that access in 2 ways:
- They’ve been trained on it, because content from your website has been downloaded and used as training data.
- They use tools that let them search the internet, read your website, and use what they find to generate answers.
You can’t control how Google or ChatGPT work. But you can control your content. And your content is important, because what you write, and the way you write it, significantly influences how good an answer your members will get when they use generative AI.
How you can make your content AI-ready
The solution isn’t to write for robots, but to write better for humans. Quietroom’s research shows that AI does a much better job of accurately summarising or explaining content if that content is already clear, consistent, well-structured, and written in short sentences.
Want help getting your content LLM-ready? Contact us.