19/08/24

Opinion: AI Fatigue – Are We Becoming Overexposed to Generative AI?

Written by Caitlin Faulks

[Image: online AI platforms infographic]

 

Have you ever used ChatGPT for work?

It’s no secret that AI is both a help and a hindrance.

Generative AI and large language models (LLMs) such as ChatGPT, Gemini (formerly known as Bard) and Claude have all contributed to bringing AI into the mainstream workplace.

Since its launch in 2022, ChatGPT has reached 200 million monthly users worldwide (Backlinko, 2024), and 82% of companies are now using or exploring the use of AI in their organisation (ExplodingTopics, 2024).

Predictive analytics, content creation and tailored ad messaging can all be done quickly and effectively with the use of AI. So, in this way – these tools are a massive help.

Want to make a search query? AI can do it for you.

Want to answer that search query? AI can do it for you.

AI can produce relatively high-quality content in minimal time, giving businesses and clients more bang for their buck. Commonly used platforms such as Meta, Canva and Later have noticed this, and the majority of them now offer AI features, often enabled by default. This has made generative AI a marketer’s bread and butter.

That might have been fine if it were just marketers using these tools, but the world is becoming more accustomed to generative AI. More and more industries now use it in their daily work.

That means your friends, colleagues and even your grandma might be using it too.

So, what’s the knock-on effect here? AI Fatigue.

And it may be bad news for marketing.

 

Understanding AI Fatigue

 

How good are you at spotting AI-generated text? The answer: if you use it enough, you can do it quite easily.

If you ever find yourself skimming a piece of text and spotting words like ‘delve’ or ‘nestled’, or a random, out-of-place word of more than three syllables, it’s likely AI has played a part somewhere.

And if you’re a regular user of ChatGPT, you can spot this kind of text a mile away.
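The spotting trick above can be sketched as a simple script. Only ‘delve’ and ‘nestled’ come from this article; the other words in the list are my own guesses at typical AI-flavoured vocabulary, so treat the list as illustrative rather than definitive.

```python
# Rough heuristic for flagging possibly AI-generated text: check for
# "telltale" words. Only 'delve' and 'nestled' are from the article;
# the rest are illustrative additions, not a vetted list.
TELLTALE_WORDS = {"delve", "nestled", "tapestry", "elevate", "seamless", "unleash"}

def flag_ai_markers(text: str) -> list[str]:
    """Return the telltale words found in the text (case-insensitive)."""
    words = {w.strip(".,!?;:'\"()").lower() for w in text.split()}
    return sorted(words & TELLTALE_WORDS)

print(flag_ai_markers("Let's delve into this resort, nestled in the hills."))
# prints ['delve', 'nestled']
```

A real detector would need far more than a word list, of course – this is just the human skimming habit written down.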

 

[Image: word-prevalence chart, ChatGPT]

 

This overexposure to the same words, phrases and structures built into these language models can lead to something called ‘AI fatigue’.

What do we mean by AI fatigue in this context? Here, we’re referring specifically to becoming overfamiliar with AI-generated text.

Here’s an example.

I logged onto Google the other day to get on with my daily work and typed in a search query.

For the first time, I was shown an AI Overview.

 

[Image: AI Overview example, Google]


At the top, I was shown a clearly labelled AI generated response from Gemini, Google’s generative AI LLM.

However, I didn’t need the label ‘AI Overview’ to tell me it was AI.

The generic holiday-resort copy with bulleted lists is a classic go-to for the platform.

So, what’s the problem? It returned a tailored answer to my search query, didn’t it?

The problem is I already have a negative association with this type of text.

As someone who works frequently with these programs – I’m familiar with the fact that these language models often have subtle inaccuracies or give similar, vague responses across a range of similar topics.

In my work, I’m used to skimming through this information returned by the language models – and editing it, scouring it for misinformation or misinterpretation.

Sometimes the information is correct, sometimes it isn’t.

For example, ChatGPT was originally trained on data that only went up to 2021, drawn from sources like Wikipedia, journals, articles and books. Other chatbots, like Gemini, can provide specific facts and references from the web – but the same ‘bank’ of words is used across different topics.

For this reason, if you’re looking to make unique ads in marketing, or content that stands out – it’s generally bad practice to rely solely on AI.

I’ve learned, then, not to fully trust text generated by AI, or to rely on it solely for what I write.

So, why should I fully trust it when it comes to reading it as a consumer?

 

The Problem with AI Overexposure in Marketing

[Image: AI ad copy suggestions]

 

I’m not only a marketer, but also a consumer.

Which means that if I’m thinking it, it’s likely others are too.

The mainstream use of AI-generated responses in search engines like Google, in ad copy and in daily social media means I’m exposed to this language more and more – and the more I see it, the more familiar it becomes.

As marketers, we think that creating a targeted ad tweaked by AI is a sure-fire way to understand and reach our audience. It helps us maintain brand voice and keep a professional front.

But we don’t consider the fact that people are seeing these vague language structures. Every. Day.

This oversaturates the use of AI in the ad/content space, and now the search query space.

 

What impact could regular exposure to AI-generated text have on consumers viewing an ad for the first time?

I asked ChatGPT to write me ads for two different products, based on similar briefs. I wanted to see how much the two responses had in common.

 

One for lipstick:

[Image: lipstick ad, ChatGPT example]

 

And one for eyeshadow:

[Image: eyeshadow ad, ChatGPT example]

 

The results are strikingly similar, and that’s because AI-generated text follows a formula.

When I asked ChatGPT itself what the two ads have in common, it returned:

 

The last point highlights just how similar the language actually is.

 

Let’s try it with a completely different industry.

I asked ChatGPT to create an ad for a car company, selling a BMW to young men aged 20-30.

[Image: BMW ad, ChatGPT example]

It follows exactly the same formula: a description, a four-point bullet list, then a headline with some selling phrases.

Here we have completely different industries, but AI uses the same formulaic structures and language patterns.
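That “strikingly similar” impression can be put to a rough number. The sketch below measures the word overlap (Jaccard similarity) between two pieces of copy; the two snippets are invented stand-ins in the style of the ChatGPT ads above, not the actual outputs.

```python
# Rough measure of how similar two pieces of ad copy are: the Jaccard
# overlap of their word sets (1.0 = identical vocabulary, 0.0 = none shared).
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Invented stand-ins for the lipstick and eyeshadow ads, not real outputs.
lipstick = "elevate your look with bold, long-lasting colour that turns heads"
eyeshadow = "elevate your look with bold, blendable colour that turns heads"

print(round(jaccard(lipstick, eyeshadow), 2))
# prints 0.82 – high overlap despite being two different products
```

Human-written copy for two unrelated products would normally score far lower, which is exactly the point about formulaic generated text.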

If one company used this, it could go under the radar. But imagine thousands of companies using this type of generated text in their ad copy every day.

 

If I read copy that hasn’t been tweaked to give a human or factual element – does this ultimately mean I trust it less?

Am I less likely to click through and convert if I feel like a robot has written this company’s content?

Or worse, if I repeatedly see these language structures and my brain is trained to humanise AI copy, will I develop a negative association with the next ad that I recognise as AI-generated?

According to marketing research, nearly 70% of consumers say they must trust a company to keep buying its products. Relatability, authenticity and resonating with your audience remain key parts of marketing today – something that could be at stake if a growing number of companies all write ads that sound like each other.

This isn’t just a trend in ad copy; authenticity continues to be a big factor in social media marketing, with unedited, candid and relatable posts becoming more popular. Imagine how it would feel if multiple companies used the same creative templates and you continuously saw these duplicates in your feed.

The market would feel oversaturated – and this is what could be happening with generative text.

Relatability is key, and this underscores the importance of human-created content in marketing – even amid rapid AI development.

 

Humans have the benefit of being – human

Although AI can provide a professional and grammatically accurate response, there is an invaluable element to human content.

As we’ve seen with Google, search engines are now trialling AI-generated search results, which display above organic results from official sites.

The problem?

A top organic search listing is more likely than an ‘AI Overview’ snippet to give a user these things:

  • Factual content – facts, figures and references that are traceable. AI results are usually vague and tend to rely less on statistics and detail.
  • Credibility – organic results are pushed to the top of a SERP by a complex algorithm built primarily on customer trust, backlinks and user behaviour.
  • Transparency – organic results typically come from well-trusted, high-authority sites. It’s not completely clear where LLMs obtain their information.

 

Ultimately, humans have the benefit of being human and their language is more credible.

Their view of their audiences, what they’re selling and what makes consumers tick gives them the one thing that language models cannot offer – real-life context.

We have the ability to draw on an infinite range of words, rather than a select bank.

This gives us the opportunity to be unique and for our brands to stand out, reducing AI fatigue in consumers.

 

The Takeaway about AI Fatigue

In summary, ever more industries are using generative AI. It is no longer the sole language of marketers and content creators; it is in the hands of more and more people – and therefore consumers.

Excessive exposure to generative AI can lead to ‘AI fatigue’ – people becoming increasingly tired of seeing (easily spotted) AI-generated text – which could negatively affect the way consumers view a brand.

Understanding the psychological impact of AI-generated content, and how it could lead to negative biases around brand trust, can help inform better marketing decisions and practices in future.

At Talking Stick Digital, we embrace AI to boost efficiency but remain committed to the human creativity and personalisation that truly resonates with audiences. We take inspiration from AI, but our content is predominantly human-centred, enabling us to tailor content across niche industries.

Have any interesting thoughts or feedback about this topic? Contact us today, to continue the discussion.