Publishers and machine learning: Herding cats or all about the training?

By: Rob Corbidge, 06 April 2023

Different approaches to generative AI are emerging, with one interesting prospect being the internal use of such technology to act as an organisational memory.

Publishers thinking about the use of generative AI are already seeing some illustrative cases of what that means in action. On the one side, we have a report into one site's attempt to harness generative AI for travel writing, with interesting results. On the other, we have a different approach at the specialist financial news giant Bloomberg, with almost all the data the organisation holds fed into BloombergGPT to create "institutional memory". Retrievable institutional memory.

Having been involved in the production of many short, snappy travel pieces in my time, I fully comprehend why a publisher might see them as fertile ground for generative AI. They are essentially advertorial content, with only nice things to say about the places they are imploring you to visit, yet producing them to any meaningful quality takes time.

It doesn't quite work in the Buzzfeed example. The short form they've chosen means the writing is mercifully brief, but the repetition is, well, repetitive. "Hidden gem" is used so often you'd think the planet was an alluvial diamond field.

There's also an element of uncanny valley. Once you realise the phrase "I know what you're thinking" - repeated often in these examples - is generative AI at work, the likely response from any sensible human will be "No you don't, HAL". False bonhomie is bad enough in a person.

Buzzfeed obviously have their reasons for moving so quickly on generative AI, a large part of which is surely being seen to be moving quickly on generative AI. I doubt these results are driving much traffic. However, they are trying and experimenting.

Truth is, the issue such ventures are going to run into time and again is the data on which whatever generative AI they are utilising was trained. Buzzfeed has been attempting to do something very specific on a broad platform that is still in development. The result is clichéd and lacks warmth, as if a lowest common denominator of content had been chosen as the starting point for the generated material.

At the other end of the scale we have BloombergGPT, from a business founded on providing high-quality data quickly, the kind of quality people will pay for. Bloomberg is an unusual publisher in that it is also a software business, and therefore has technical know-how on tap.

In the BloombergGPT model we may see the future for generative content, or a future. What the business intends to do with this GPT system trained on its own "FinPile" dataset is not yet clear, but in the example we've seen it can write a Bloomberg headline that is recognisably a Bloomberg headline, and it can retrieve data from a natural language query.

It is built on focused data, and that is the key point. If your ChatGPT can do anything, it will likely do little that is meaningful. If it possesses all the data accumulated over time by an organisation that has been collecting specific data, then it must have some utility, and quite probably its main value is internal, rather than the external exposure we see in the Buzzfeed example.

One of my former journalistic roosts was a newspaper founded in 1817 and in continuous publication since then. Imagine, if you will, transferring all that accumulated data into a ChatGPT-type system. Given that the newspaper has covered a very specific part of the UK for all that time, the potential granularity of such data would be quite something to have at your fingertips, and it would be a marketable asset as a dataset.

There are likely many hidden gems in this generative AI revolution.