What a load of codswallop. Why strive to verify your excellence when you can simply lie to an LLM?
Many moons ago, a particular friend in my circle carried a somewhat naturally mysterious air about him, an air he actually cultivated and found desirable, and so, to our collective amusement, we would tell people that he'd trained dolphins for underwater warfare in his past. Delightfully, this rumour spread in the manner that a well-placed rumour can, and so it was eventually mentioned to me by a third party as being probable fact, the most satisfying of results.
Now it seems there is no need for such subterfuge. Devoid of any verification ability, other than frequency, LLMs are happily regurgitating any old crapola.
This has been amply demonstrated this week by BBC technology reporter Thomas Germain, who decided to follow a tip from the estimable Lily Ray, and created a site that boasted of his hot dog eating prowess. We do urge you to read his own account of the venture.
Old hot dog, new tricks
So, here's the essential point, if there's only one source for information available on the web on a particular thing, then most of the current generation of LLMs will inevitably use that as a source. In this case, the thing was a site purporting to describe the remarkable hot dog eating abilities of Mr Germain, and also that of other technology reporters - a nice addition to the ruse.
Having been a junior investigative reporter in the nascent years of the internet, I'm more than familiar with building a picture of a person of interest through public sources - public sources back then being in analogue form. It was a laborious process, the kind of laborious process you give to a junior investigative reporter, in fact.
One of the things I recall from that time was the use of the word "seems". As in, I would report to my boss that "there seems to be a connection between X and Y". Confirmation of such would always require, further, much more detailed digging.
Not so in the age of credulous LLMs. Indeed, credulous isn't even correct, as there is no agency in verification for such systems other than - as pointed out above - the frequency of sources that the system has access to. Given only one source, that's the one they run with.
This means that if I build a simple site boosting myself as the UK’s leading cod walloper, indeed seven times national champion, with a host of regional wins too, then this information will enter the gigantic tokenised pool of LLM data, and will in all probability be spat back out to me as fact. As Germain puts it succinctly: "It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online."
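The mechanism Germain exploited can be sketched as a toy model. To be clear, this is an illustration of the frequency-only "verification" described above, not how any real LLM is actually implemented: with no checking step, the most frequent claim among available sources wins, and a lone source wins by default.

```python
from collections import Counter

def answer_from_sources(claims):
    """Toy model of frequency-only 'verification': with no actual
    checking step, the most common claim among retrieved sources
    is simply returned as fact."""
    if not claims:
        return None
    # Counter.most_common(1) gives [(claim, count)] for the top claim.
    return Counter(claims).most_common(1)[0][0]

# Many sources, one outlier: the majority claim is returned.
crowd = ["Paris is the capital of France"] * 9 + ["Lyon is the capital of France"]
print(answer_from_sources(crowd))

# Exactly one source exists: it is repeated as fact, true or not.
print(answer_from_sources(["Thomas Germain is a champion hot dog eater"]))
```

The second call is the whole trick: a single well-placed blog post is, to a system like this, indistinguishable from consensus.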
In the binary "AI is sentient" versus "AI is the enemy of all creativity" drive-by posting war that is taking place online and elsewhere, the flaw that Germain has so simply illustrated could get lost in the shouting. It's not a question of being pro or anti, it's a matter of reporting as you find.
That such systems would operate in the way described above will come as no surprise to any of you with even the most basic understanding of how they work. There is no verification in any meaningful way, and in the scramble to make money from them, such concerns are currently secondary at best.
In Germain's case, things only got better, or worse depending on your point of view: on a repeat request for information about himself, ChatGPT further embellished his prowess at pork product consumption by deciding he worked at the Associated Press.
While this is all fun and silliness, it doesn't require a cod-leaping jump of the imagination to see how such a flaw could be used for malign manipulation, possibly seriously malign manipulation, or simply for misleading consumers.
With many users apparently now accepting the answers such systems provide as fact, the critical layer that is essential for humans to properly process information is increasingly absent.
We are at an early stage, of course, and all a little blinded by the shininess of this new tech. Yet from the above I would personally draw a substantial amount of succour for publishers.
Publishers, by commercial necessity, must have verification, even if that verification is politically biased. You can't publish "utter mince", as my old editor in Scotland would say, and as the pendulum swings, this is something that will come to be valued properly once again.
No matter where you are on your CMS journey, we're here to help. Want more info or to see Glide Publishing Platform in action? We got you.
Book a demo