A weariness, and a wariness, over some aspects of tech is apparent in both our bodies politic and the public at large. This is good for publishers.
"The Gauls have only one fear - that the sky may fall on their heads tomorrow."
Goscinny and Uderzo's Asterix the Gaul series of comic books contained many wonderfully memorable characters. One of them was the Gauls' village leader, Chief Vitalstatistix, a good-natured chap who did not fear the Roman occupiers but was extremely concerned about the sky falling down. His wife, Impedimenta, often regarded him rather coolly.
This sense of fearlessness has a historical basis. It is recorded that Alexander the Great met "an embassy of tall Celts" asking for his friendship. Alexander, being somewhat of a Big Shot, asked them what they were frightened of, hoping they would say his name. Instead, they told him that they feared only that the sky would fall on their heads. He liked that, because he was a Big Shot and that was Big Shot talk.
Of course, the other side of such apparent bravery is that someone actually does think the sky will fall on their head tomorrow and is transfixed by the thought, even if, as Vitalstatistix says, "tomorrow never comes".
Certainly the belief in a tech doomsday is more than a little prevalent at the moment. Harsh lessons have come to humanity from the use of social media, and there is a view, certainly in the corridors of power, that the technology itself is to blame for the worst excesses, rather than it being a mirror of who we are as users of such platforms, whether we like the reflection or not.
We are seeing the fruits of this social media harvest in the reaction to generative AI. The rapidity with which legislators have responded to the wider public availability and use of such applications, a use barely months old, with suggestions on the technology's governance stands in the starkest contrast to the somnambulant shuffle performed by governments while the tech giants gathered everyone's data.
Hence we see proposals such as those floated by US Senate Majority Leader Chuck Schumer this week on the transparency of AI tech.
Never mind the fact that we're still working on a full release of the "transparent and strong ethical boundaries" patch for humans themselves; the other potential requirements would certainly provide a degree of clarity when assessing the intent of any particular application.
Is this desirable? Certainly we've also seen preemptive responses from within the AI industry itself. The recent call for a moratorium on AI research was signed by Elon Musk, Emad Mostaque, Yoshua Bengio and Steve Wozniak, to name but a big few.
Even if you're a big publisher, a great deal of the current furore over AI won't seem especially relevant right now, as good uses for it are still at the experimental stage; it will seem less relevant still if you're just trying to run a profitable small- or medium-sized operation.
Yet this urge to regulate and this request for a moratorium both seem to point to a broader concern about "big things". Big tech, social media overload, powerful bots masquerading as humans; even the phrase "big data" isn't being paraded around quite as proudly as it was a few years back.
It's good news for publishers. Our industry, by and large, deals in the close and personal: hobbies, health, leisure, news, personal concerns, and such things out in the world as we're directly curious about.
Much has been made of automated "personalisation" in recent times. I deal with my personalisation personally and that's made possible by the excellent content produced by the many publishers whose sites I frequent. The human imperfection of my data gathering is a feature, not a bug.
The sky isn't about to fall on our heads, or, as the Pessimists Archive demonstrates, it has always been about to; yet if public feeling is against the big and impersonal face of tech, then it's time to remind people that we're publishers, and we've got something for everyone, made by someone.