How do you know you are not talking to a robot in disguise? Would you mind, if it said all the right things? Should journalists flag up any use of the new AI writing service everyone is talking about? And are we all guinea pigs in a giant experiment? All this and more from human writer Denis Haman.
The AI-based chatbot/writer ChatGPT is, to quote Zoolander’s Mugatu, so hot right now.
"Chat Generative Pre-Trained Transformer" has everyone from academics to writers to cybersecurity experts to novelists to SEO specialists aghast at the possible effect that an on-tap source of pick-your-own content could have on them.
The system will, scaremongers say, generate perfect written answers to any question, meaning amongst other things that all journalists should scramble to retrain as dentists or pilots or something else thus far still reliant on humans.
In fact, adding that completely unnecessary Zoolander reference and a bit of salty opinion is the only way I can prove this text isn’t in fact generated by ChatGPT, since it would do neither: it would just stick to the widely-accepted facts and generalisations. Or maybe that is exactly what it would say, to deflect suspicion?
So is ChatGPT the beginning of the end for writers and thinkers? Can it be leveraged by journalists with any sense of safety or morality? How can you spot ChatGPT content? Will using its output see your site win or lose for SEO? And, err… how exactly does it work?
Let's give it some context. OpenAI, the overarching AI project behind ChatGPT, was set up in 2015 by some of the most illustrious alumni of the tech world, with support from juggernauts such as AWS. It had significant financial backing to the tune of over $1bn, and before long had moved from non-profit to for-profit and jumped into bed with Microsoft. Not so ‘open’ any more, but motivated to move a bit faster.
ChatGPT is the public-facing point of access, and in short you just ask it questions and in a few seconds it churns out pretty solid-looking responses – structured much as an article might be, minus the headline and other summary text. It’s like a less sarcastic version of Knight Rider’s KITT when the producers needed to explain something complex to kids.
After messing around with it quite a bit over the festive break, here are a few immediate thoughts – my summary so far.
For people worried/hoping it is a Google killer, it’s not – at least in the business sense. Will it move the dial? Yes. Will it force Google, after decades of taking search for granted, to actually innovate and improve? Yes. Will it see Google reframe relationships with media companies that were key planks of its success and gifted all that lovely content to wrap advertising around? Time will tell, but hopefully this is a jolt that will spur Google into serious action. Microsoft has just announced that ChatGPT will be wrapped into Bing, so they aren’t hanging around.
Back to ChatGPT and my little experiment, on topics that I was objectively assured I should know more about than the AI.
It seems to be reasonably well informed and useful at a very high level – low-on-detail, high-on-generalities explainers, that sort of thing. “What is a headless CMS?” yields a very succinct and factually correct answer, and given the range of topics it can cover, those sorts of answers are extremely impressive. Don’t forget that, regardless of funding, we are talking about a relatively short seven years to get to this point.
However, the more specific you get, the more it struggles and falls back on what it knows – it is, after all, a pre-trained model, using texts from the internet to create its own versions. It has a tendency to pad bland, wishy-washy answers with a bit too much “sort-of/could-be” opinion, verging on the weaselly “people say” get-out beloved of those who don’t know enough to know they don’t know enough, and who consequently assign similar weight to vastly mismatched viewpoints.
As said, that’s perhaps not surprising, given that it is all derived from pre-trained, accepted knowledge fed into the AI brain. So how will it ever work with or surface new concepts that are paradigm shifts – ideas that still have little traction but are actually correct? I guess that is not its job – that is still the task of humans, thankfully. As long as people know that when they ask it questions.
And what about the thorny issue of truth – is there a tipping point for truth? Let's not forget that the PT stands for Pre-trained Transformer, and that “pre-trained” is the sticking point if people misunderstand what the tool is for. But anyway – how could anyone get that bit wrong?
Well, after a short while, I forgot myself. It was trivially easy to simply accept the unfolding ChatGPT texts as the sum of all knowledge and set my expectations accordingly – and I went into it consciously knowing better.
This is dangerous territory, as by definition its current knowledge model is at least a year out of date, having been fed its store of information at the end of 2021. A lot has changed since then, and it perhaps helps that it leans heavily on wishy-washy, caveated answers. And if the answers can be read any way you like, then it is not really actionable advice, is it – which ChatGPT is definitely not meant to provide anyway. (But who uses information from the internet to make decisions, eh?)
And how will it deal with smaller topics? How will it divine nuance? In many ways, like every bit of amazing breakthrough tech, ChatGPT is raising some fundamental questions which will not be solved in a hurry.
It will certainly need bigger brains than mine. If this is where the OpenAI team has got to in 7 short years, then together with DALL-E 2 (image generation from text prompts), and other inevitable generators of content, then what the future brings will be fascinating.
It surely won’t be long before my kids are asking it to generate for them a film where Bambi's mother lives, and in fact sprouts lightsaber horns to take down the hunter. And what's to stop them livestreaming it and taking a handsome few ad dollars on YouTube? "Hey AI-Movie maker – use this live report of the World Cup Final to generate a 'live' replica of the match in real time, with Pokemon players!" Perhaps it will be children that finally break the internet free.
Where does that leave content creators as a whole, and media and entertainment businesses in particular?
On the subject of creating content – in particular the current SEO craze for "answering questions", which leads to a lot of abuse by major players churning out content that is largely useless but ranks and carries ads – I don’t think real journalists have anything to fear, in the near future or otherwise.
Will AI writing end up disrupting our continually disrupted industry? Of course it will. As an industry, we proactively need to show we can use these tools responsibly without becoming reliant on them – otherwise, what is a subscription worth if a reader can get the same answer themselves? As such I foresee a round of backbiting in which media outlets hunt for proof that their rivals have just used AI to create content, and writers get fired for it. Fingers will also be pointed at politicians, celebrities, and experts aplenty, but they are unlikely to get the sack for it.
Google already claims it can detect AI-generated content, and at some point it will almost certainly penalise those who overuse it to get content ranking. Other firms – who don’t control where you rank in search results – can spot it too, to highlight plagiarism and so on in education coursework and the like.
ChatGPT may become the catalyst needed for Google to realise that search, and doing it well, is what will save it in the end. With its size and dominance Google has become bloated and lazy: with legislators circling, and a breakup being openly discussed, perhaps investing in its core search service is a timely move for all sorts of reasons.
I will leave you with the last question I asked ChatGPT: “Will ChatGPT be the end of Google?”
ChatGPT: "It is unlikely that ChatGPT, a natural language processing model developed by OpenAI, would be the end of Google. Google is a major technology company that offers a wide range of products and services, including search, advertising, cloud computing, hardware, and more. ChatGPT is just one aspect of the artificial intelligence research and development that is being conducted by OpenAI. It is a tool that can be used to generate human-like text based on a given prompt, but it is not capable of replacing the many diverse products and services offered by Google."
On that one, I tend to agree.
Denis Haman is the CEO of Glide Publishing Platform, and has worked in media and publishing technology for over 25 years.