If AI copywriting is so great, why do the biggest brands avoid it?

In November 2022, the general public finally got a taste of generative artificial intelligence and what it is capable of. ChatGPT showed us an AI copywriting engine that wowed the world – and turned the copywriting industry on its head.

Suddenly we had a tool that could automatically create reasonably high-quality text in a matter of moments. Want a 500-word blog post on Software as a Service? No problem. Ten suggestions for a new brand tagline? Simple. A mid-term paper for your degree studies? Piece of piss.

Within months, people were turning to ChatGPT (or Bard or Claude or Copilot) to do the hard work of writing for them. And freelance copywriters like me started to see work dry up overnight.

Such is the nature of technology, I hear you say. Freelance copywriting won’t be the first industry to disappear under the march of technological progress – just ask the machine-smashing Luddites of the 19th century, who lost their jobs as skilled fabric weavers to mindless automated machinery.

But as things stand, there are several significant pitfalls with generative AI. And while the tech bros are trying to distract you with tall tales about how AI will wipe out humanity, the real dangers are going largely ignored.

So before you fire your copywriter, here are a few things you must consider:

Big tech doesn’t trust AI copywriting

AI text is amazing, right? ChatGPT is undeniably clever technology, applying predictive statistics to write natural-sounding text.

But the truth is, most big brands don’t trust the technology. A quick search of leading job sites shows that those in the know are actively refusing applications from AI users. 

In many ways, generative AI is the new content mill – and smart brands already avoid low-priced, low-quality junk. Which means the cost savings made through genAI are minimal at best.

Generative AI copy is already out of date

Working in the IT industry, I have seen businesses invest millions of dollars every year in building systems that provide real-time access to data. Why? Because the fresher the data, the more accurate their decision-making and operations become.

Which exposes another major flaw with generative AI – the data used by the models is out of date. The latest and greatest model, GPT-4o, is built on training data that is already SEVEN MONTHS old. And with every passing minute, the information it produces becomes more obsolete.

Disturbingly, less than one-third of marketers (31%) see factual inaccuracy as a problem with generative AI. Which is weird, because anything you generate with AI is at risk of being factually incorrect – and that could be embarrassing and damaging for your brand.

Generative AI makes stuff up

Like the best of the bullshitters, if genAI doesn’t know the answer to something, it is liable to make something up. Known as ‘hallucinations’, these AI fever dreams are quite funny – until they’re not.

Microsoft’s Sydney chatbot admitting to falling in love with users and spying on Bing employees. Hilarious stuff. But what if your AI engine of choice starts handing out faulty medical or legal advice? The consequences could be devastating. And we cannot assume this will never happen – people already diagnose their illnesses and ailments using Google searches, so why not a chatbot?

The issue of hallucinations is likely to get worse rather than better. As AI improves, people will come to trust chatbots more, making them even less likely to question the reliability of the answers they receive.

Despite the many small-print disclaimers in chatbot T&Cs, it is only a matter of time before someone is killed/maimed/traumatised by AI and tries to sue the developers – or the company that publishes dodgy AI content.

Generative AI content is biased

Historically, IT systems could be relied on to give black-and-white answers. However, machine learning and artificial intelligence have broken that trust.

AI models can only respond according to the data they have been trained on. Give an AI copywriting algorithm white supremacist materials and you can expect it to generate pro-Nazi responses. 

That’s an extreme example, but the principle is the same – whatever the developers put in at the backend is what you’ll see on screen. It also means that generative AI cannot be relied on to produce fair, balanced or even accurate information.

But we can fix that, right? Well yes, if we bend the definition of truth. Imagine an AI model used to generate car insurance quotes. The training data clearly shows that, statistically, women are safer drivers than men. Logically, premiums should be lower for female policyholders. Except that under anti-discrimination law, it is now illegal to adjust pricing based on gender. So the AI decision-making algorithms have to be re-engineered to ignore reality and generate a legally acceptable result.

You don’t have to imagine this scenario, though – it is happening at UK insurers right now.
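
To make that concrete, here is a minimal, entirely hypothetical sketch of what ‘ignoring reality’ looks like in code. The feature names and weights are invented for illustration – real actuarial models are vastly more complex:

```python
# Hypothetical pricing sketch: the model has 'learned' that gender is
# predictive, but the law says it must be excluded from the quote.
PROTECTED_ATTRIBUTES = {"gender"}  # legally unusable for pricing

# Invented weights a model might have learned from claims data
LEARNED_WEIGHTS = {
    "age_band": 120.0,
    "annual_mileage": 0.02,
    "claims_history": 310.0,
    "gender": -85.0,  # statistically predictive, but must be ignored
}

def quote_premium(applicant: dict, base: float = 400.0) -> float:
    """Return a premium, silently dropping any protected feature."""
    total = base
    for feature, weight in LEARNED_WEIGHTS.items():
        if feature in PROTECTED_ATTRIBUTES:
            continue  # this is where the model 'ignores reality'
        total += weight * applicant.get(feature, 0)
    return round(total, 2)

# Identical quotes for identical profiles, whatever the gender field says
print(quote_premium({"age_band": 2, "annual_mileage": 8000, "claims_history": 1, "gender": 1}))
print(quote_premium({"age_band": 2, "annual_mileage": 8000, "claims_history": 1, "gender": 0}))
```

And even this is too simple: dropping the attribute doesn’t remove its influence, because other features can act as proxies for gender. Real re-engineering is much harder than the sketch suggests.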

Generative copy is just plain wrong

Ever since the beginning of the computer age, one principle has held true – garbage in, garbage out (GIGO for short). The idea is that if you put garbage data into a computer application, you can only expect it to return more garbage. Which makes perfect sense – unless you work for Google.

At the same time as re-engineering their search algorithms to ‘down-rank AI-generated, unreliable content’, some genius at Google decided to use Reddit to train their AI search tools. What could possibly go wrong when plugging into 10+ years of user-generated content from a site that is famous for containing vast amounts of sarcasm?

Unsurprisingly, Google has come under immense ridicule for the garbage now being generated in search results. Users are being advised to eat rocks, glue cheese to their pizzas, imbibe neurotoxins and (allegedly) kill themselves.

Artificial intelligence tools can tell jokes based on what they have been told is funny. But when even the average internet user struggles to detect sarcasm or dark humour, what chance does a dumb algorithm have?

The GIGO scenario would be hilarious – until you remember that people really do turn to Google for advice. In that light, GIGO-affected results are actually terrifying: two-thirds of consumers would be willing to seek advice from generative AI on personal relationships or life and career plans.

Think about that for a second. 66% of people would seek relationship advice from an algorithm that advises them to eat rocks…

Generative or generic copy?

Have you received an email from one of your contacts in the past few months and thought, ‘You didn’t write that!’? Congratulations – you can spot AI copywriting.

Part of this is because you recognise your contact’s writing style. Part of it is because AI copy is incredibly generic. The system knows, statistically, which word should come next in a sentence – which means it strips out any potential for creativity or surprise in an effort to craft the ‘perfect’ phrase. Predictive analytics do not allow for creative variance.
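
Here is a toy illustration of that principle. The vocabulary and probabilities are invented, but real language models make this same kind of choice billions of times a day:

```python
import random

# Invented probabilities for the word following 'The quick brown...'
NEXT_WORD_PROBS = {
    "fox": 0.72,        # the statistically safe choice
    "dog": 0.15,
    "algorithm": 0.08,
    "paradox": 0.05,    # the surprising, 'creative' choice
}

def pick_next_word(temperature: float = 0.0) -> str:
    """Temperature 0 = always the likeliest word; higher = more variance."""
    if temperature == 0.0:
        return max(NEXT_WORD_PROBS, key=NEXT_WORD_PROBS.get)
    # Higher temperatures flatten the distribution, letting rarer words through
    weights = [p ** (1.0 / temperature) for p in NEXT_WORD_PROBS.values()]
    return random.choices(list(NEXT_WORD_PROBS), weights=weights)[0]

print(pick_next_word())     # always 'fox' – safe, generic, predictable
print(pick_next_word(2.0))  # occasionally something more surprising
```

At a ‘temperature’ of zero, the model always plays it safe. Turn the dial up and you get more variety – but also more nonsense, which is why commercial copy tools tend to stay firmly on the bland side.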

In 2011, Google released the Panda algorithm update, intended to stop ‘web spam’ from ranking highly in its search results. Much of the update focused on detecting generic, poor-quality or duplicated content. Services like Copyscape became incredibly important, helping businesses check whether their articles were original or contained content plagiarised from other sites.
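
Copyscape’s exact methods are proprietary, but the basic idea behind duplicate detection can be sketched in a few lines – breaking texts into overlapping word ‘shingles’ and measuring how many they share:

```python
def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word chunks."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity: shared shingles divided by total distinct shingles."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "Our software helps small businesses automate their invoicing"
suspect = "Our software helps small businesses automate their accounting"
print(f"{similarity(original, suspect):.0%} similar")  # high overlap = likely copy
```

Anything scoring above a chosen threshold gets flagged for a human to review.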

Bizarrely, generative AI is leading us back to that place, by creating generic, repetitive copy based on statistical inferences rather than linguistic ability. The internet will become increasingly vanilla and boring as machines take over traditionally ‘creative’ tasks, starting with AI copywriting.

While most of what analyst firm Forrester predicts becomes a kind of industry gospel, senior decision-makers seem to have gone out of their way to ignore one particular insight. In October 2023, Principal Analyst Laura Ramos warned: “Thinly customised generative AI content will degrade the purchase experience for 70% of B2B buyers.”

B2B buyers are already drowning in irrelevant marketing guff – so why not throw some more at them? Why not piss off two-thirds of your sales leads? Unless you want to be one of the small minority who concentrate on writing genuinely useful content for them.

You know, the kind of assets that convert into sales…

Generative AI is ethically dubious

Leo Tolstoy spent six years writing the epic novel ‘War and Peace’. And AI took just seconds to ingest and analyse the book, adding it to the knowledge ‘soup’ used to generate clever text. The same is true of AI-powered image and music creation tools – they are all built and trained on the hard work of genuinely creative professionals, turning their masterworks into fodder that is reused, manipulated and regurgitated for the undiscerning. 

You may not be Leo Tolstoy, Vincent van Gogh or Beethoven, but your creative talents have almost certainly been abused by AI models in the same way. Any images saved to Facebook or Google, posts written for your travel blog or shared to a social network – all will have been scraped and mined for reusable content.

Did OpenAI ask for permission to scrape and reuse your content? Of course not – they simply assumed you wouldn’t/couldn’t do anything about it. Those pesky, complicated T&Cs used by most websites demand that you hand over all rights to your content in perpetuity. Worse still, you receive nothing in return for helping to train most AI models – not even a thank you.

Some people now argue that content creators should not be paid any kind of royalty for their content because the financial sums on offer are tiny. Instead, they believe that AI systems should be open to all because this will ‘benefit creators’.

But this is the same ‘the internet should be free’ ideological garbage that has allowed businesses like Google and Amazon to grow big and fat – at the expense of everyone else. It makes zero sense: ‘Give us all your hard work for nothing and we’ll let you have access to AI-bastardised versions of your own stuff. But hey, it’s free!’

Seriously, if that sales pitch were ever worded that honestly, who would fall for it?

Some firms are now waking up to this new reality, but it is too little, too late. Your hard work and unique creativity have become search engine fodder for an uncaring audience.

In many ways, the issue of AI ethics is an indication of something more fundamentally wrong with the internet. Shockingly, just 33% of consumers are worried about copyright issues and even fewer (27%) are worried about the use of generative AI algorithms to copy competitors’ product designs or formulas.

Either people place no value on their own creativity or they simply do not understand that their work is being plundered for profit.

Sour grapes or reality?

So am I just a 21st-century Luddite, or is there a real problem? Artificial intelligence is here to stay, and it definitely has many incredibly valuable uses – mainly for non-creative tasks. But is AI copywriting really the future we want? Will it really help your business excel online?

Personally, I doubt it.