This post was written by TCCS member Nigel Bowen
ChatGPT isn’t the end. It’s not even the beginning of the end. But it may well be the end of the beginning.
Anyone who claims they can predict the long-term or even short-term effects of generative AI (e.g. ChatGPT and its much lower-profile competitors) is either deluding themselves or misleading you.
Indeed, when I peppered ChatGPT with queries about its likely effects, it was humble enough to keep giving me the same response:
I am not able to predict future events as my knowledge cut-off is 2021 and I am a machine learning model that can only provide information based on what I have been trained on. Additionally, it would be impossible for me to predict future events as they are not certain and can be influenced by a wide range of factors.
Nobody knows anything
Even the most brilliant minds in the tech industry can only make educated guesses as to how a particular technology will pan out. A decade ago, Sergey Brin and Larry Page were convinced internet-connected ‘smart glasses’ would be the Next Big Thing. But despite a short-lived buzz, Google Glass turned out to be a spectacular failure.
(Just to confuse matters further, the fact consumers mocked Google’s smart glasses and labelled their wearers ‘glassholes’ doesn’t mean we won’t all be clamouring for iGlasses in the future. A design-conscious tech company will probably make smart glasses cool at some point, and disrupt the optometry industry in the process.)
I can’t predict the future. But having written about business for ten years and the tech industry for five, I know a few concepts that can help explain how technological progress typically unfolds. Let’s run through a few of the most important ones.
The Gartner hype cycle
When it comes to new technology, most people overestimate its short-term effects and underestimate its long-term effects. This observation led the American research and advisory firm Gartner to develop the Gartner hype cycle. It’s not always accurate – most commentators underestimated what a game-changer smartphones would be when the iPhone first appeared in 2007. But it’s on the money most of the time.
The hype cycle for artificial intelligence (AI) has played out across seven decades. Computer scientists have been plugging away at AI since the 1950s. At various junctures, expectations became inflated as advances were made. But in each case, disappointment set in soon after.
Because while those AI systems could do some cool party tricks, they weren’t much use in the real world. It’s all very well having a supercomputer that can defeat champion chess and Go players. But if it can’t come up with a cure for cancer, predict how the share market will perform next quarter, or even write a poem, people soon lose interest.
AI experienced a long ‘Trough of Disillusionment’ phase, and its enthusiasts started grumbling about enduring an ‘AI winter’ of limited public interest in (and funding for) AI research. It certainly didn’t generate the kind of media frenzy ChatGPT has enjoyed over the past couple of months.
But AI moved into the ‘Plateau of Productivity’ phase quite a while ago, being used in technology such as:
- digital voice assistants (e.g. Siri and Alexa)
- Google Maps
- search engines
- the image-recognition technology that lets you unlock your phone with your face
- online banking services
- the page on Netflix that tells you what shows you might enjoy
- the chatbots that pop up on websites to offer assistance.
And contrary to what many content providers believe, AI muscled its way into the writing game ages ago. If you use Grammarly to find spelling mistakes, grammatical errors or typos, you’re relying on AI. If you use Hemingway to tighten up your prose, you’re relying on AI. Even if you check your spelling in Microsoft Word, you’re relying on AI.
It’s not just subeditors that AI has been putting out of business. ‘Automated journalism’ (a.k.a. robot journalism) has been around for several years. Media companies often use AI to generate articles structured around data such as scores in a sporting contest or movements in a company’s share prices.
Is ChatGPT being overhyped?
At first glance, ChatGPT seems to be going through two phases of the hype cycle at the same time – high expectations and bitter disappointment. But what’s actually happening is that the people with little to lose and plenty to gain from ChatGPT are enthusiastically talking it up, while those who believe they have much to fear from it are desperately talking it down.
The best book I’ve ever read about the likely trajectory of technology in the digital age is Kevin Kelly’s The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. And it includes the following checklist of how humans react to technological change:
The seven stages of computer/robot replacement:
- A computer can’t do my job
- A computer can only do some of my tasks
- A computer can do everything I do, but needs me when it breaks down, which it often does
- A computer can consistently and flawlessly do my job, but it can’t do anything new or out of the ordinary
- OK, the computer can have my old, boring job, which a human should never have been doing
- My new job is better-paying and more interesting than my old job
- I’m so glad a computer will never be able to do my new job
The good news and the bad news (depending on your perspective) is that ChatGPT is only at Step Two.
The consensus (which I agree with) is that ChatGPT can produce passable but unimpressive copy. The content it creates is flat, predictable and generic, much like the answers you get from a chatbot. (As Nick Cave famously pointed out, ChatGPT has no soul and doesn’t understand what it’s like to feel joy and sorrow. So it’s unlikely that it will ever create great art.)
The bad news is that many businesses will be more than happy with ‘good enough’ content generated for free (or close to it) by ChatGPT.
The really bad news is that tech companies are now pouring vast sums of money into AI. If I were a betting man, I’d wager that in the next few years AI will be able to produce copy that passes the writing equivalent of the Turing Test.
So what’s the good news?
Firstly, unless you find your clients on ‘race to the bottom’ online marketplaces, they won’t be content with ‘good enough’ copy, no matter how cheap it is.
And while you may call yourself a copywriter or content writer, you probably do a lot more than just write. Chances are you do things AI won’t be able to do anytime soon, such as:
- pitching story ideas
- interviewing people
- attending events
- liaising with graphic designers
- providing the emotional drivers that make people want to purchase a good or service or wade through a 5,000-word whitepaper.
Secondly, if the technology does start encroaching on your marketable skills, you have some time to adapt. (I’m considering moving up the value chain and doing more of the tasks done by those who commission me to write, such as creating content strategies for businesses.)
Hope is not a strategy
A little over a decade ago I was a magazine journalist. The publications I worked for or contributed to have since either shut down or degenerated into pale shadows of their former selves.
For a long time, print journalists resolutely denied what was staring them in the face. And many were shocked and disorientated when they found themselves out of a job, despite having witnessed endless waves of redundancies wash through the media businesses they worked for.
As I said at the beginning, neither I nor anyone else can predict how generative AI tools such as ChatGPT will mess with our livelihoods. But I’ve learned the hard way that while there’s nothing wrong with hoping for the best, when it comes to looming industry disruption you should always plan for the worst.
About Nigel Bowen
Nigel Bowen writes about technology and business. Check out his website and his Substack.