
Tears over a giant AI "heist" find few sympathies in the real world

An AI giant is upset their training data has been accosted by rivals in a way that looks pretty familiar to anyone who has been raided by AI already.

by Rob Corbidge

Published: 15:14, 26 February 2026
[Image: a tiny violin held in a human hand.]

One might suppose that if you were claiming to build a thing that can exhibit self-awareness, then you might try and exhibit some self-awareness yourself. Not so in the case of Anthropic, who, with an offensive level of audacity only readily available to the terminally ignorant, have this week angrily protested at the theft of their training data by rival AI outfits.

It all sounds like a Soviet-era joke, in which the punchline is a corrupt official saying: "Yes, I only stole from the people... but this is being stolen from ME!"

This terrible crime was announced in a post on X, in which Anthropic said: "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax," adding that "distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems."

The irony was not lost on countless hooting voices pointing out the position AI companies find themselves in when it comes to moaning about IP, with industry rivals including Elon Musk having a customary pop.

For us in media and the creative industries, it seems a bit rich that a business in an industry which ignores, or is offended by, all notions of Intellectual Property protection can so suddenly be interested in borders. Anthropic, don't forget, settled for $1.5bn in Bartz vs Anthropic by way of compensation for authors whose works had been included in a "shadow library" of training data.

It's hardly the sort of behaviour that would encourage one to sympathetically run for the Anthropic barricades if you hear they're under attack, and they are one of the better ones.

The butt-hurt post did yield some splendid responses though, the kind of responses that assure me that probabilistic pattern recognition isn't going to wipe the floor with us meatbots yet. A favourite LLM botherer and exposer, Pliny the Liberator, immediately responded with "it’s only Claude if it’s distilled in the Silicon Valley region of California" to which another replied "Otherwise it's just sparkling autocomplete."

Again, to restate the position of this column: the tasks to which AI systems are put, whether successfully or not, are of no particular interest to us. It's simply that the data on which they are trained, and without which they are meaningless, should be paid for where there is an identifiable owner.

This week, Anthropic looks likely to discover whether it can keep its valuable contracts with the US government, particularly the Department of War. Previously, Anthropic's systems were used by the military under a deal made last summer, which placed that use under the company's own Usage Policy.

As was entirely predictable, the Pentagon has now decided that these terms are too constricting, and has demanded that Anthropic agree to usage under "all lawful purposes". Dangling over Anthropic is the possibility of being officially named a "supply chain risk" if it doesn't comply.

This is no surprise. Potential military uses will never sit comfortably for long under terms designed for civilian purposes; that's just how it is.

Yet this is the situation that caused Anthropic to cry wolfishly at the threat of these Chinese distillation attacks from DeepSeek, Moonshot AI, and MiniMax, all from atop its massive pile of looted data. The best justification left to them is "Yes it's loot, but it's our loot". Tell that to all the Americans whose creative efforts have contributed to the loot pile without recompense.

Notably, the thieves-protesting-at-thievery post comes as the company also announced that its safety policy, seen by those concerned with such things as being the most transparent of all the big AI players - although personally I don't believe a thing any of them say in this area - had been revised. 

Central to this revision was Anthropic's Responsible Scaling Policy. Previously, the company had voluntarily adhered to a rule that said it would "never train an AI system unless it could guarantee in advance that the company's safety measures were adequate". That's being dropped, more as a reflection of market reality than anything else, I think. Other people are doing stuff, so Anthropic needs to do that stuff too to stay in the money-burning game. In a headlong investor rush, you simply can't restrict your ability to burn money; that would be foolish.

There's no doubt the AI madness is wobbling somewhat. A single investment briefing this week, containing some pretty dubious assumptions it has to be said, moved entire stock markets, after it predicted that Agentic AI would destroy the US economy. The note itself is actually fascinating, but more worthy of sci-fi than as solid investment advice. 

This all has the feel of a Tulip Mania to me, helped along by such savoury characters as Sam Altman, who managed to raise eyebrows again this week in defending AI's data centre environmental impact by saying "One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model ... but it also takes a lot of energy to train a human", comparing the food a human eats in life with the cost of running a data centre.

As a wise soul observed in response, "Remember the biggest threat is not that we will see machines as humans, but that we start to see humans as machines."
