A leading figure of the AI revolution is revealed to have predicted the effects of not paying the true value of the content such systems are trained on. His words could haunt the industry.
Look now at any discussion thread concerning the utility of the assorted machine learning systems we have come to call AI, and you will witness a kind of Manichaean struggle of good against bad taking place, except everyone is convinced they're on the side of the light, not darkness.
The blasts in each direction are strong. In the same thread that explains why coding has benefited exponentially from the technology, I read a haematologist explain why AI simply doesn't give the precision she requires for blood analysis. Beneath a post extolling the video creation abilities of the latest open-source LLM from China, I see another slamming the opening ceremony of the Winter Olympics for using flawed graphics created by just such a system.
In the same hour that a friend told me about successfully building a vulnerability scanner widget for sites he manages, using Claude, I read predictions of AI-enabled financial scammery on scales previously unimaginable, and watched a deflated YouTube content creator telling his audience how his voice and content have been hijacked by impostor channels.
I can go on, as surely many of you can. It's an unbalanced moment, for sure, and my own unease is reserved for "thinking" systems which cannot say "I don't know", when "I don't know" has historically been the basis of all effective inquiry.
Here's the thing, though: amid the cacophony of querulous takes, claim and counter-claim, shilling and doomerism, it's important for those of us in content creation, whether in media and publishing or as humble at-homers, to keep our eyes on a simple fact: our original work is the feedstock of much of this technology, and we are not being rewarded or incentivised for its use. Everything else is irrelevant. Whether OpenAI manages to grow anti-gravity cabbages on Jupiter or simply becomes a sweaty adult entertainment network, we should care not. Do what you will, but pay us.
Ed Newton-Rex makes another appearance in my thinking for this column this week. As one not opposed to the technology around AI, but relentless in his scrutiny of the industry building it, he does invaluable work.
This time, he has shown that those driving the AI revolution know exactly the value of the data they have taken without recompense, and more importantly, can see a world in which they should have to pay for it.
We refer to his revelation that, back in 2021, Dario Amodei, CEO of Anthropic, accepted the idea that those who produce the content LLMs are trained on should receive payment for their contribution.
Specifically, documentation recently unsealed in Bartz v. Anthropic, the case in which three authors originally alleged copyright infringement by Anthropic over the use of their books for AI training, shows Amodei was, and presumably still is, more than aware of the imbalance of any system that takes without giving, writing: "what is happening macroeconomically is that distributed human labor is being used to train AI models by large, centralized actors who concentrate the resulting profits while in the long run making the human labor obsolete".
He continues: "An alternative that may work better for everyone, is to compensate data/content producers for their labor, not at a flat rate, but with a fraction of the profits from the model produced."
Now you're talking, Dario!
That almost sounds like a form of collaborative good. Or how about letting both payment models exist? If the risk-sharing terms of an AI business are promising, and so are its prospects, then take that route.
If a big lump of data is sought, and no questions asked, then a one-off payment can be appropriate. It's called a market, and the market system, while flawed, remains the best and most reactive system we have.
To this end, some encouraging news arrived this week in a report which states Amazon have "signaled to publishing industry executives that it is planning to launch a marketplace where publishers can sell their content to firms offering artificial intelligence products".
Amazon have apparently circulated slides showing AWS "grouping the marketplace with its core AI tools, including Bedrock and Quick Suite, when describing products publishers can use in their businesses".
A range of good marketplaces in which content people and AI people can interact is sorely needed, and you'd argue Amazon know marketplaces better than anyone. Microsoft too are looking at creating one, as are many others, of course.
One suspects future generations will look askance at how we arrived at the absurd situation we are in now.
Either that, or they'll have been slopped into utter insensibility. In which case, we know the marketplaces didn't work, and the court cases went the wrong way.
No matter where you are on your CMS journey, we're here to help. Want more info or to see Glide Publishing Platform in action? We got you.
Book a demo