AI: What’s Really Happening Here?

Freedom and Agency

Too many people are writing about artificial intelligence: the boom/bust prospects, its threats, the nefarious plans of those Silicon Valley giants. I hesitate to add to the verbiage with another diatribe. But I feel I must. I believe the trendlines point to something akin to the Luddite revolt in northern England some 215 years ago. If you know your history, you are aware that smashing the looms in that bygone era was not about technology, per se, but about ownership and control of the means of production for entire communities. Marx and Engels understood the meaning of that insurrection, as they understood the slave revolts in the Caribbean: it was about freedom and agency.

Those steam- and water-powered looms were not meant to improve the life of the workers in the mills, but to replace them. Advanced technology, to the mill owner, meant cost reduction, and hence profit maximization. The demand for the mills' products was not seen as infinitely elastic; it was limited. The incentive, then, was to produce the same amount of cloth more cheaply. The only growth would be in the owners' profit margins. And those power looms consumed natural resources, like water and coal.

While it is admirable to celebrate the ingenuity of discovery, and the diligence of engineering, incentivizing these for the purpose of profiting the few is less admirable. Fire was not discovered so somebody could make money on the discovery. Neolithic cave drawings were not made for external rewards, but for self-expression. Likewise, Gutenberg's printing press was created to spread knowledge, not to produce a market (that came later). Saving labor costs was never the motivator for the best human achievements; freedom and agency were. Henry Ford did see a growing market for automobiles enabled by mass production, one that bestowed freedom and agency unprecedented in the memories of generations then alive. Labor shortage, not surplus, spurred his innovation. Unfortunately, labor surplus would eventually come, due at least partly to his success.

AI is now upon us, promising both increased consumption of resources (electricity and water) and displacement of labor (starting with software engineers?). What are its benefits to our daily life? Is it addressing any real shortage? Its Fordist premise, based on labor shortage, needs to be sold to consumers: what will we gain from new AI capacity? At present, that is far from clear. Of course, it may be that AI just isn't good enough yet. "You'll see what it can do in a few years, if you front us a few hundred billion more in debt!" say Sam Altman and his ilk. Relying on the Pentagon for rescue may not be good enough; in the consumer marketplace, it may even be a disincentive. The labor surplus that followed Ford's assembly line hasn't yet materialized; mostly it remains imagined in some dystopian future, except in the software industry (Claude Code?). Elsewhere, there is not much evidence, yet, of mass layoffs due to AI. Will we start to see attempted sabotage, much like the Luddite loom smashers?

In the continuing debate between AI "accelerationists" and AI "doomers" I tend toward a more cautious, but doomer-adjacent, posture. Which way things go seems to depend on ownership and control of the technology. The technology itself won't eliminate humans, but that doesn't make it our friend. For one thing, there doesn't really seem to be anything "creative" in what we've seen so far from Claude or ChatGPT or Gemini. Perhaps the secret sauce for AGI is just around the corner, but so far Artificial General Intelligence means nothing more than super-fast inference. Creative, emergent productivity is still a mystery, much as consciousness has long been in philosophy.

AI ownership should be largely public; whether that means a publicly held corporation or a taxpayer-supported government entity is an open question. Whatever shape AI takes, it needs to function in the public interest. Control of AI development needs to exist within the framework of laws: rapid AI advances carry many impacts on public safety and welfare that are not yet fully understood. Local, state, and federal lawmakers need to be involved. One obvious example now is the astounding pressure the industry is placing on grid resources via massive data center buildouts. If the industry is the main beneficiary of these data centers, the industry should bear the main burden of paying for them! Thus far, the industry hasn't exactly done a bang-up job of convincing the public that these data centers will be a net benefit for society. Governments need to listen to the public, not those business interests.

Looming in the background, beyond the overt cost to ratepayers for building all these data centers, is the prospect of social harm from some uncontrolled uses of the technology. Youth are frequently mentioned in this regard. The psychological harms visited upon young people, already far too addicted to their screens even before AI entered the picture, are sufficient cause for regulation; schools and some states are already on the bandwagon for these controls. What about the consumer whose customer support experience is worse, not better, since the advent of AI phone trees and online query sessions? (I can attest to this on more than one occasion recently, failing to communicate with a chatbot while attempting customer service interactions.) It seems that control of AI services has trickled to the top, rather than down to the customer. Is this the direction for jobs as well? We are the workers, the consumers, the taxpayers. We are not the cost-cutting, profit-maximizing, grifting CEOs of big tech companies. And we have yet to see any benefits from AI.

We seek political leadership that can either convince us that some benefits will accrue from AI investment, or show us a plan for controlling the direction of that investment. Absent either of those, we will continue to lose confidence in our elected representatives to manage an AI-dominated future. Will it be a dystopian nightmare, or abundance and opportunity for all? As of now, in mid-2026, I cannot see the positive impacts of most of the generative AI offered to the public. Perhaps I'm too close to the initial impact zone in software development (Northern Virginia, my home, is currently "data center alley"), where it's easy to imagine many younger entrants in the information-allied professions being displaced by AI. I was probably lucky to have gotten out before the current conflagration. But I do listen and read, and nobody much in my sphere is an AI accelerationist! Maybe I just live on the wrong coast.

— William Sundwick
