An AI giant is upset its training data has been plundered by rivals, in a way that looks pretty familiar to anyone who has already been raided by AI.
One might suppose that if you were claiming to build a thing that can exhibit self-awareness, then you might try to exhibit some self-awareness yourself. Not so in the case of Anthropic, who, with an offensive level of audacity only readily available to the terminally ignorant, have this week angrily protested at the theft of their training data by rival AI outfits.
It all sounds like a Soviet-era joke, in which the punchline is a corrupt official saying: "Yes, I only stole from the people... but this is being stolen from ME!"
This terrible crime was announced in a post on X, in which Anthropic said: "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax," adding that "distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems."
The irony was not lost on countless hooting voices, who pointed out the position AI companies find themselves in when moaning about IP, with industry rivals including Elon Musk having a customary pop.
For us in media and the creative industries, it seems a bit rich that a business in an industry that seems to ignore or be offended by all ideas of protection for Intellectual Property can so suddenly be interested in borders. Anthropic, don't forget, settled for $1.5bn in Bartz vs Anthropic by way of compensation for authors whose works had been included in a "shadow library" of training data.
It's hardly the sort of behaviour that would encourage one to run sympathetically for the Anthropic barricades on hearing they're under attack, even though they are one of the better ones.
The butt-hurt post did yield some splendid responses though, the kind of responses that assure me that probabilistic pattern recognition isn't going to wipe the floor with us meatbots yet. A favourite LLM botherer and exposer, Pliny the Liberator, immediately responded with "it’s only Claude if it’s distilled in the Silicon Valley region of California" to which another replied "Otherwise it's just sparkling autocomplete."
Again, to restate the position of this column: we take no particular interest in the tasks to which AI systems are put, whether successfully or not. It is simply that the data on which they are trained, and without which they are meaningless, should be paid for where there is an identifiable owner.
This week, Anthropic looks likely to discover whether it can keep its valuable contracts with the US government, particularly the Department of War. Previously, Anthropic's systems were being used under a deal made last summer, which meant use of its systems by the military came under the company's own Usage Policy.
As was entirely predictable, the Pentagon has now decided that these terms are too constricting, and has demanded that Anthropic agree to usage for "all lawful purposes". Dangling over Anthropic is the possibility of being officially named a "supply chain risk" if it doesn't comply.
This is no surprise. Potential military uses will never sit comfortably for long under terms designed for civilian purposes; that's just how it is.
Yet this is the situation that caused Anthropic to cry wolfishly at the threat of these Chinese distillation attacks from DeepSeek, Moonshot AI, and MiniMax, all from atop its massive pile of looted data. The best justification left to them is "Yes it's loot, but it's our loot". Tell that to all the Americans whose creative efforts have contributed to the loot pile without recompense.
Notably, the thieves-protesting-at-thievery post comes as the company also announced revisions to its safety policy - a policy seen by those concerned with such things as the most transparent of all the big AI players', although personally I don't believe a thing any of them say in this area.
Central to this revision was Anthropic's Responsible Scaling Policy. Previously, the company had voluntarily adhered to a rule that said it would "never train an AI system unless it could guarantee in advance that the company's safety measures were adequate". That's being dropped, more a reflection of market reality than anything else, I think. Other people are doing stuff, so Anthropic needs to do that stuff too to stay in the money-burning game. In a headlong investor rush, you simply can't restrict your ability to burn money; that would be foolish.
There's no doubt the AI madness is wobbling somewhat. A single investment briefing this week, containing, it has to be said, some pretty dubious assumptions, moved entire stock markets after it predicted that agentic AI would destroy the US economy. The note itself is actually fascinating, but more worthy of sci-fi than of solid investment advice.
This all has the feel of a Tulip Mania to me, helped along by such savoury characters as Sam Altman, who managed to raise eyebrows again this week in defending AI's data centre environmental impact by saying "One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model ... but it also takes a lot of energy to train a human", comparing the food a human eats in life with the cost of running a data centre.
As a wise soul observed in response: "Remember the biggest threat is not that we will see machines as humans, but that we start to see humans as machines."
No matter where you are on your CMS journey, we're here to help. Want more info or to see Glide Publishing Platform in action? We got you.

Book a demo