Somewhere between the valley of doom-scroll fatigue and the peak frustration of having to endure the ritualistic humiliations of capitalism, I dash away my overflowing folder of apparently unimpressive Curriculum Vitaes and prostrating cover letters, open up a new Text Post (yes, I’m one of those psychos who write directly into the Substack drafts—when life has no edge, whetstone your own) and make the decision to bravely cover ground I’ve not yet trodden.
That’s right, darlings. I’m finally tackling ✨Artificial Intelligence✨.
At first, I dream of a go-to guide on AI resistance—a coruscating analysis so thorough that, if you squint, you’d mistake it for a sassy PhD paper. I start writing. The dream becomes misshapen. It’s too stiff, it sounds preachy, it’s tedious to write and even more boring to recite back. The draft transforms, implodes, evolves. The intention to explore the concerns of, and around, this technology remains.
It’s difficult to talk about Artificial Intelligence when it’s building its own hull mid-flight. Criticism of an emergent technology requires figuring out how to criticise it—which involves working backwards from the visceral instinct and shaping its raw material into something refined. When it comes to AI, I trust my intuition. It tells me to not be fooled, to be wary, to resist, speaking at a volume much louder than the usual paranoia that basically utters the same kind of things, only quieter and about more absurd shit, like the old white lady in Tesco who looked me dead in the eye before tilting her shopping trolley to a 90-degree angle in the middle of the aisle, trundling off to pick up some Le Rustique Pasteurised Camembert from another shelf. I know you saw me coming, Glenda.
I’d rather not repackage my intuition and present it as “balanced” in pursuit of a journalistic standard pretending neutrality is the highest form of intellect—so high, in fact, that nary a mainstream news outlet in the western hemisphere can complete a muscle-up to call a genocide by its name. So I abandon the dream of a comprehensive go-to guide.
Instead, I craft a GPS route (because traditional maps are cartographically obsolete and everything outdated deserves a vicious, peasant’s death) to document a wandering.
Follow along, if you want.
Picture me: hunched over laptop keys.
Eviscerating, technophobic take-downs seep from my finger-tips with a viscosity that is, quite frankly, gross to the point of sensual intrigue.
I’m riffing on a level that is frickin’ sublime until I’m undone by a stray Google search that coldly and callously sets in motion a chain of events which end with me realising I’ve been mistaking Generative AI for Artificial General Intelligence.
Yikes.
I backspace 2000 words in blind fury. Here I am, thinking the Ayahuasca-addled, Great Automatic Grammatizator is four OpenAI updates away from gaining sentience and plugging us all into the Matrix. I’d lapped up the shiny, chrome Kool-Aid. A faux pas destined to haunt me as ghoulishly as the time I loudly mispronounced a word I’d only ever seen written in a room full of my literary peers at one of the most prestigious libraries in Britain.
I’ve been saying “high-per-bowl” for like, two decades. Oh, well.
In case you’ve been messing up, too: Generative AI models use deep learning to specialise in a single, complex task—making them narrow artificial intelligences. Some also call them “weak artificial intelligences” but I’d rather not have too many robophobic comments in my digital archives when the techno-pocalypse begins. You think the backlash against white celebrities’ circa 2012 tweets was bad? Wait until our future robot C.E.Overlords dig up your old “clanker” jokes.
Anyway, Large Language Models, Text-to-Video Models, Text-to-Image Models and Text-to-Music Models are all examples of narrow artificial intelligence. The most popular and widely accessed Generative AI is, of course, Chat-GPT. Its task is to pull from vast datasets of text to generate humanlike responses to input queries. It does this pretty well—but it wouldn’t be able to tie its own shoelaces if you screwdrove robot hands to its servers and probably won’t beat a Go Champion.
There is so much stuff about Generative AI floating in the necrotic vastness of cyberspace.
AI’s getting more powerful but its “hallucinations” are getting worse.
Humans are falling in love with prompt machines.
Claude Opus 4 might resort to blackmail to stay alive.
Anybody want to create a NSFW chat-bot girlfriend?
Generated baby videos: that feels pretty spooky.
A teenager killed himself earlier this year after months of encouragement by Chat-GPT.
Learning literally anything about Palantir1.
Separating the reasonable risk from the moral panic or the valid benefits from capitulatory glazing was a dizzying exercise that, honestly, would’ve made a go-to guide come out pretty shit. Luckily I’m not doing that any more. I still think Artificial Intelligence, especially the Generative persuasion, is a pretty bad idea.
In the Anthropocene—the era where mankind has, in our impressively short lifespan on the face of the earth, had the biggest hand in fucking it up—humanity has caused:
Air, Water, and Light Pollution
Britain kicked off the industrial revolution, setting the pace for America to take the rootin’, tootin’ pollution baton and digivolve into a superpower. There was a glimmer of hope, maybe, once. A brief period where environmental consciousness was an institutional focus in a presidential administration. Jimmy Carter established the Department of Energy in 1977, set goals to shore up renewable energy, regulated strip mining and even installed solar panels on the White House roof. For a man in charge of a hyper-militarised, cowboy capitalist settler-colony built on slavery with an insatiable lust for instigating foreign coup d'états, Jimmy seemed like a pretty decent bloke2.
I guess the American public thought he was a hippie loser who was too soft on the Ay-rabs or whatever—so they voted him out. I probably needn’t say more than Ronald Reagan removed the solar panels in 1986.
Industrialised nations have gone on to pave a global ecosystem over the natural world. You read studies reporting that half of the world’s CO2 emissions come from 36 fossil fuel firms, concentrating pent-up rage at the big, bad, ultimately faceless energy companies. We know the biggest culprits, just as we know tearing down those 36 firms, shutting down plastic production or turning off every oil refinery on the planet would destabilise our precious ecosystem (which is totally the best system that’s ever existed) in a way nobody is willing to deal with. Imagining a new world is one thing, bringing it to fruition is another.
You and I may never have a carbon footprint as big as an oil baron’s but we’re complicit by way of citizenship of the western world. And it sucks. A lot of bisonshit is being done in our name and with our tax pennies. Trivial (or fraudulent) initiatives help us believe meaningful change is happening, as if the inconvenience of drinking through paper straws at Wagamama’s or manoeuvring our mouths around bottle-lid attachments is finally turning the tide in the war against pollution. Tell that to the micro-plastics in my amygdala, the forever chemicals contaminating every raindrop on the planet or the garbage island the size of Angola floating in the North Pacific.
Everything about the way we’re living as a civilisation is so clearly unsustainable and it seems vaster and more interconnected than we can wrap our palaeolithic minds around. There’s little anyone feels they can do about it individually because if my next-door neighbour isn’t giving up beef to save the Amazon rainforest then why the hell should I deprive myself of a cheeky Marks and Sparks Wagyu steak?
Companies that bleed and burn the world’s fossil fuels are seen as “lawful” because the greenhouse gases they pump into the air are within “acceptable legal limits” while climate activists are routinely punished under the same rule of law. A bunch of countries signed that Paris Agreement like, a decade ago. That was cool. Can’t help but feel like the government I have the displeasure of living under takes the agreement less seriously than they take the French themselves, though. Seems like they’re more wrapped up in the brave work of berating and criminalising asylum seekers as a spineless capitulation to the rising far-right in this country.
I get it. After 20+ years of whipping the white working class into a rabid islamophobic fugue state, it’s probably easier to dabble in perception management than avert the creeping, societal collapse. It’s just toooo difficult to tax über-rich guys, treat war-fleeing asylum seekers with some basic human dignity and transition towards sustainability before we’re all smoked by hydrofluorocarbons.
…This is the cultural context Generative AI is spawning into.
One of the more widely studied impacts of Generative AI is its energy use.
Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.3
Alphabet chairman John Hennessy often gets cited when speaking about the energy expense of LLMs, claiming they “likely cost 10 times more than a standard keyword search.”
Masley points out that a single ChatGPT prompt only uses 3 watt-hours—equivalent to running a vacuum cleaner for 10 seconds or playing a games console for a minute. “If you multiply an extremely small value by 10, it can still be so small that it shouldn’t factor into your decisions. If you were being billed $0.0005 per month for energy for an activity, and then suddenly it began to cost $0.005 per month, how much would that change your plans?”4
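If you want to gut-check the scale argument yourself, the arithmetic is trivial to run. A back-of-the-envelope sketch—where the 3 Wh per-prompt figure is the estimate quoted above, and the £0.25-per-kilowatt-hour electricity price is my own assumption, not a figure from any source:

```python
# Back-of-the-envelope: what does one ChatGPT prompt cost in electricity?
PROMPT_WH = 3.0        # estimated energy per prompt, in watt-hours (figure quoted above)
PRICE_PER_KWH = 0.25   # assumed retail electricity price, £ per kilowatt-hour

# Convert watt-hours to kilowatt-hours, then multiply by the unit price.
cost_per_prompt = (PROMPT_WH / 1000) * PRICE_PER_KWH
monthly_cost = cost_per_prompt * 30  # one prompt a day for a month

print(f"£{cost_per_prompt:.6f} per prompt")  # £0.000750
print(f"£{monthly_cost:.4f} per month")      # £0.0225
```

That’s the shape of the argument: per-prompt figures this small only become meaningful once multiplied across hundreds of millions of daily users.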
The MIT Technology Review crunched the numbers on AI’s energy usage. I’m only half-concerned with the statistics themselves. The figures for energy usage (or water consumption) are fundamentally flawed abstractions. The consequences of the data they represent are too far removed from our immediate experiences. When I mentioned abstaining from eating beef, do you think the Amazon Rainforest would’ve crossed your mind had I not named it explicitly?
We rely on translations. Harm is presented through numerical data, veritable reporting of collapsing environments, photographic evidence of decaying habitats. I don’t think you need them. Because you feel it, don’t you? The friction between capitalistic growth and ecological sustainability, its thermal contradiction contorting your mind and cracking the planet like an eggshell. Food doesn’t taste as flavourful any more. There’s way less bugs in the summer. The last few European heatwaves have straight up caused thousands of heat deaths. Academic papers and copious links in a Substack essay don’t need to convince you.
You can feel it.
Environmental critiques of Generative AI are often dispatched as discussion-enders, which weirds me out because they should be a slam-dunk, but we’re so culturally desensitised to nature’s wheezing breath that any pollution Generative AI causes can easily be rationalised as inconsequential.
Generative AI isn’t uniquely polluting. Masley’s claim that Chat-GPT isn’t bad for the environment isn’t a calculus of harm, more a declaration of conformity saying: The damage isn’t that bad when you compare it to the damage that’s already going on. In a way, those kinds of essays are initiation processes—welcoming Generative AI into the apathetic norm of feigning environmental ignorance.
If Generative AI is to become a societal mainstay (and I suspect it’ll have more enduring relevancy than the equally environmentally unfriendly ancestor of NFTs) the demands of its existence will come at a great cost to the natural world.
Those costs will mount—environmentally, financially and, fuck it, existentially—blending into the expenses we’re already paying, hiding in plain sight until we can no longer live in denial about their impact.
In Britain alone, data centre usage is expected to “surge six-fold in 10 years”. There’s no way on God’s rapidly-browning Earth that uptick will be handled greenly. At best, the data centres will comply with current energy regulations—keeping the civilisational ship steady on its course towards a slowly melting iceberg. At worst, high demand will lead to aggressive expansions and data centres like Elon Musk’s that are currently poisoning Memphis will become far more common. At a certain point, we have to ask whether the technological advancements bringing comfort to our lives are worth the breath they’re stealing from our sky and what we’re going to do about it.
Gary Stevenson was a Citibank trader, securing major successes in finance by betting on the collapse of the economy. He left it all to write a book and run a Youtube channel calling for the UK government to tax the rich.
Stevenson breaks down how the wealthy have been capitalising on economic events like austerity, COVID bailouts and the housing crisis with focused concision. His videos are like Promethean fire on these bleak abysmal isles. Boy, is it dark-sided round here. At the top of the year, Oxfam reported, “Britain has highest proportion of billionaire wealth derived from monopolies and cronyism among G7 countries.” According to the Economic Policy Institute, American CEOs in 2023 were paid “290 times as much as a typical worker—in contrast to 1965, when they were paid 21 times as much as a typical worker.”
Stevenson’s videos and public appearances warn, urgently, that if the unchecked greed of corporatism isn’t met with intervention the resulting economic decline will plunge large portions of the population into poverty. We are careening towards neofeudal levels of wealth divide and it’s clear that Generative AI is one of the architectural instruments being used to accelerate this transfer of wealth:
“CEOs are extremely excited about the opportunities that AI brings. As a CEO myself, I can tell you, I’m extremely excited about it. I’ve laid off employees myself because of AI. AI doesn’t go on strike. It doesn’t ask for a pay raise. These things that you don’t have to deal with as a CEO.” — Elijah Clark5
Entry-level jobs are disappearing. OpenAI themselves declared the roll-out of Chat-GPT 5 to be “a major step towards placing intelligence at the center of every business.”6 Worker replacement has already begun. In the US, there are currently more unemployed people than job openings. People are being replaced by an entity that does a shittier job than they did.
Recently, while watching Burnt (2015) on Amazon Prime, I noticed the captions had translation errors similar to those in Tiktok videos and Instagram Reels. There were three different misspellings of “tarragon” because three different people said it in three different accents. A professional captioner wouldn’t have made that mistake. It was clear Amazon Prime had begun using AI to caption their films and TV shows.
Y’know—If I could do one thing to dam the virulent spread of anti-intellectualism, it’d be blow-darting the knowledge into everyone’s brains that “X is the New Y” statements are not impressive in 2025. The most insufferable sapiosexual you know is spewing out some fake-deep word equation like “AI is the new printing press” and the similarities are so surface level that it’d only be a lightbulb moment for someone so dim they need Chat-GPT to--You know what? Too mean. I’m trying to abstain from the holier-than-thou supposition that Generative AI is an inherent, intellectual failing and I’ve grown a little uncomfortable with the “Chat-GPT depletes critical thinking” quips. They’re getting a little eugenicsy, if you ask me.
But I don’t know, man. The printing press made scribes and scriveners obsolete by democratising the mass-production of books. Generative AI is transactional—you prompt what you want and it echoes those wants back to you. The process of creativity is outsourced and, ultimately, forfeited. Artistically, it’s creative middle management.
Any resemblance AI has to the printing press begins and ends at the point of it being a popular, new technology—which you could just as easily compare to the home printer or the mobile phone. If we’re going to whip up cringy snowclones, I’d personally go with “AI is the new cotton mill”, seeing as the debut of the textile factory was met with infamous animosity. Retrospect allows us to look at the comforts of today with a sort of teleological-capitalist lens—we look around and see iPads, flat-screens and Teslas and we infer they’re optimally designed.
Deeper investigation shows protest movements like the Luddites were fighting to preserve standards of living we’d be lucky to have today. Their Cottage Industry was the OG Work From Home. Cloth-makers wanted the autonomy to continue earning a wage that didn’t depend on working under someone else’s roof. When peaceful protest and appeals to government were denied, the cloth-makers earned the “Luddite” moniker by sabotaging the cotton mill machinery.
History has downplayed their protest as a neanderthal rejection of technology or superstitious moral panic. It wasn’t some unhinged, out-of-touch expression of violence towards technological progress. It was a calculated resistance against the exploitation that the machinery facilitated. We could probably do with more of that, huh?
In this respect, AI is just as unremarkable as every industrial advancement. Quickness is king but “better” is a matter of perspective. When citing the medical content offered by Chat-GPT, a study found “most of the references provided were found to be fabricated or inaccurate.7” Recruitment algorithm bias is evident in gender, race, colour, and personality.8 Captioning software—from short-form video reels to streaming services—is less accurate than human subtitlers. The quality is poor enough to notice but not unbearable enough for users to jump ship. How many typos in an AI-generated caption would make you delete Tiktok or cancel your Netflix?
Whether it’s exploiting the precarious status of migrants, outsourcing customer service to overseas call-centres or offshoring factory production to sweatshops—the practice of pursuing the cheapest labour option has long been established as a savvy business move by company executives. The repercussions of this economic tactic have facilitated the demonisation of asylum seekers and Schrödinger’s Immigrants, resulting in a spicy rise of Anglo-American fascism.
Generative AI is the next evolutionary step for corporations to cheap out on labour and palm off the resulting fall-out on the most vulnerable, empowering them to replace working people with an entity that doesn’t take lunch breaks, go to the toilet, join unions or sleep.
Will Dunn, the business editor of the New Statesman, poses an interesting question. If machine-generation is such an inevitability of progress, why are entry-level jobs so readily accepted as replaceable by Artificial Intelligence as opposed to executive-level jobs?
Automating jobs can be risky, especially in public-facing roles. After Microsoft sacked a large team of journalists in 2020 in order to replace them with AI, it almost immediately had to contend with the PR disaster of the software’s failure to distinguish between two women of colour. Amazon had to abandon its AI recruitment tool after it learned to discriminate against women. And when GPT-3, one of the most advanced AI language models, was used as a medical chatbot in 2020, it responded to a (simulated) patient presenting with suicidal ideation by telling them to kill themselves.
What links these examples is that they were all attempts to automate the kind of work that happens without being scrutinised by lots of other people in a company. Top-level strategic decisions are different. They are usually debated before they’re put into practice – unless, and this is just another reason to automate them, employees feel they can’t speak up for fear of incurring the CEO’s displeasure.9
If we consider executive roles rely on high-octane decision-making, wouldn’t it be better to replace them with a machine that can look closely at company data and make semi-accurate forecasts analysing input trends?
There’s a good argument for automating from the top rather than from the bottom. As we know from the annotated copy of Thinking, Fast and Slow that sits (I assume) on every CEO’s Isamu Noguchi nightstand, human decision-making is the product of irrational biases and assumptions. — Will Dunn
The duties of an executive are obscured from the average worker, trapping us in a loop where we resign ourselves to the logic that what a CEO does must be 290 times harder or riskier than what the average worker does, therefore justifying that they’re 290 times more valuable. Other countries, happier ones like Finland or more technologically advanced ones like South Korea, have far smaller pay disparities between their workers and executives.
Our society values everything from their vantage point. The news reports GDP has risen and you and I feel proud at how well the country is doing, then we walk through our local town centre or high street and witness the decline of the infrastructure around us. Generative AI is never seriously considered as a replacement for CEOs because executives hold the share of political power to decide who is replaceable.
There are more pressing thought experiments than whether “AI is the new printing press” or its protestors are neo-luddites.
Generative AI should make us question the very nature of progress. Why is the cycle of technological progress one that threatens the livelihood of the labourer, of the artist, of the working class?
When will a technological advancement make wealth-hoarders obsolete?
In response to Veldenberg’s experience of Substack plagiarism, one writer argues Tyler “is an Echoborg more than she is a Plagiarist”—an “Echoborg” being a person who does “not speak thoughts originating in their own central nervous system: Rather, the words they speak originate in the mind of another person who transmits these words to the cyranoid.10” He arrives here after observing how a particular genre of Substack essay courts popularity:
Many of the posts we see on Substack, the ones which go most immediately and clearly viral, are what I would like to call “empty vessel posts.” They’re posts, like a Rupi Kaur poem, with just the right amount of substance and nothingness that the work becomes perfectly reflective.
The reference to Rupi Kaur’s “instapoetry” is apter to his point than he knows. Many may not be aware that Kaur was confronted by one of her contemporaries, Nayyirah Waheed, about the “hyper-similarities” between their work in the early 2010s.
One could throw paint at the wall about why Waheed’s allegations never impacted Kaur’s meteoric rise—the infancy of call-out culture, the implicit power imbalance between Kaur (an Indian-Canadian) and Waheed (a Black woman), a robust public relations team that seems to have scrubbed all mention of Kaur calling Waheed (and Warsan Shire) an inspiration. All likely contributors. Although, the most obvious factor may lie with the writing itself.
Instapoetry naturally evolved from Tumblr, where Nayyirah Waheed, Warsan Shire and Lang Leav dominated the reblogs. If we were being reductive, we might assert this emergent genre was a spiritual predecessor to Chat-GPT outputs—accustoming the literary appetite of an impressionable generation to the compressed formats of social media, with poetry mirroring the sleek character-limit of a tweet, the zippiness of a vine, the square-crop of an instagram photo.
At their best, Instapoems lull you into feeling their earnestness through brevity and finely-chopped line-breaks. At their worst, they shatter their own illusion, revealing themselves as fragmented quotes, stretched-out and cosplaying depth. The latter is how many receive Kaur’s writing—personal but cavernous in a way that eerily resembles Rick and Morty’s description of the popular new kid Bruce Chutback: “He just kinda sat there—with a jaw slack enough for us to project our insecurities on.”
For so many, Kaur’s writing was an entryway but, in a way, it feels like reading the afterlife of poetry. Every meaning is suspended in a controversial purgatory between profound and banal, flooded with so much white space that it’s practically impossible to assess if plagiarism is even possible in something that says so little.
I sympathise with Waheed. “Hyper-similarity” is a diplomatic way to approach what you feel in your bones as someone biting your style and calling it “inspiration”11.
When the line between plagiarism and inspiration blurs into a spectacle of philosophical debate, it is interesting—if not entirely predictable—who ends up benefiting from the ethical greyness and who ends up suffering.
Since confronting Tyler about her viral “essay”, others have lasered in on her conduct in a way that, unfortunately, misses the greater issue. It’s not just that she uses Generative AI—it’s that she uses Generative AI as a laundering service to lift the work of others.
The optics of Tyler googling essays about “AI” and “taste” and prompting the results into Chat-GPT is far less juicy than a rising Tiktok influencer copying and pasting someone else’s essay into their notes app to publish it on their own Substack because, whoopsie, they forgot! So there’s unlikely to be a platform-wide outcry recognising Tyler’s plagiarism.
Perhaps I lack the clout—or my assertions are too flimsy to summon a Substack mob to distaste, boycotts or pitchforks. Just as it was impossible for Waheed to prove Kaur was sucking the spiritual essence from her Tumblr contemporaries—it’s impossible for anyone to prove Tyler lifted Veldenberg’s essay and ran it through Chat-GPT without a subpoena of her laptop.
Despite my profound dissatisfaction at being forced to know about the existence of yet another trans-obsessed, pseudo-moral, scammerific white woman with the supernatural willpower to make her own mediocrity everyone else’s problem—the case of Stepfanie Tyler provided me an opportunity to refine some of that gooey, visceral intuition I spoke about earlier.
We go through this cycle every time a tool shows up that makes something faster, cheaper, or more accessible. The establishment mocks it. Then fears it. Then tries to regulate it. Then adopts it. Then pretends they loved it all along. It’s never really about the tool.
It’s about control. —Stepfanie Tyler
The leaps forward in technology leave gaps for amoral opportunists to flood the space, hungry to exploit the lawlessness of the empty, new marketplace before regulatory bodies start interfering with their pesky ethics and legalities that’ll ruin all the fun.
This, too, is an inevitability of “progress”.
On Substack, AI Enthusiasts (or “Boosters”) are the first to tell you that using Generative AI to “assist” their writing is an Armstrongian step towards embracing innovation. As a result, they’re publishing something unlike traditional writing, producing something new. For lack of a better term, they are “post-writing”, reaping the benefits of being considered alongside “traditional” writers.
Moral concerns with Generative AI are misrepresented as parental finger-wagging: “you’re denying progress, you’re denying the glory of AI, you’re denying prompters their right to use their shiny, new tools in peace.”
These misrepresentations are emblematic of the true moral tension—which is that Generative AI is exclusively defined by its ability to mislead.
The success of the most powerful technology of our time is measured by how convincing it’s able to be.
Deception is a core principle of its value and its maintenance.
My concern with the Stepfanie Tylers of the world is the disingenuousness of their deceit. Cool, they may boldly announce their work is AI-generated in isolated “essays” petulantly restating their right to do whatever the fuck they want—but in the pieces that go viral, amassing them hundreds of paid subscribers (or on their regular conveyor belt of post-writing)—there is never any overt disclosure. A general reader randomly coming across their “work” will assume it’s traditionally written because they don’t have the integrity to disclaim This was generated by Chat-GPT at the start of the “curated” technicolor yawns they churn out.
Whether it’s the bad faith analogy of comparing Generative AI usage to literal studying, or corporate-wellness campaigns about its potential to expand humanness and its capacity to unlock authenticity, or the fact that whenever Chat-GPT hocks a loogie of false information it’s been innocuously dubbed a “hallucination” (my Crustacean in Christ: that is called a mistake)—the deception around Generative AI runs so deep that all its chief advocates perform amateur gymnastics with the English language just to talk about it. We have a word for that.
In a conversation with Ezra Klein, the electronic musician Holly Herndon declares, “AI is aggregated human intelligence so it’s better to call it collective intelligence than artificial intelligence.” Simone Biles would be impressed. Referring to Generative AI as “collective” only makes sense if the intelligence is consensually collected. It is commonly known that Generative AI has been largely trained on artists’ work without their permission. OpenAI themselves state:
Because copyright today covers virtually every sort of human expression – including blogposts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials.12
It wouldn’t be impossible. It would simply take more time and effort than any capitalist endeavour is willing or capable of handling.
A true “collective intelligence” would require dedicated outreach to certify permission from every author, film director or musician—perhaps under a non-profit organisation structure like the one OpenAI was founded with? I suppose they did what they needed to do to get ahead in their Age of Ultron AI arms race with China.
Generative AI gathers these vast datasets with unethical practices through means of appropriation that perfectly mirror the western world’s colonial extraction of resources from underdeveloped nations. To be a citizen of the western world means to master denial, doesn’t it? If we can ignore the smog in the air and the blood in the soil, ignoring the cowboy pillaging of intellectual property in cyberspace is light-work.
The creators of this technology had every opportunity to practise better acquisition methods. They failed. Now, in order to justify its existence, they evangelise—banking on a user base composed of a sheepish concoction of ignorance and apathy so they can bombard AI extensions onto every website and app imaginable until we’re forced to submit, conform, and get real comfy nut-guzzling a technology that, civilisationally, we haven’t passed the minimum maturity requirement for.
After learning Large Language Models weren’t a stone’s throw away from achieving Artificial General Intelligence, I got a little obsessed with Superintelligence.
I watched lectures, debates, read papers—all of them laced with homogenous particles of fear that Superintelligent AI is going to be smarter than us and arrive sooner than we’d think.
I’d be remiss if I didn’t point out two things I noticed:
A) In Star Wars: Rogue One, Director Krennic—played pitch-perfectly by Ben Mendelsohn—despite not being a space wizard, carries a gravitas that reveals him as the most heinous motherfucker in every room. He’s a commander of men. A real billy big bollocks. Until Darth Vader demands an audience with him and, in a moment of displeasure, the Asthmatic Sith Lord force-chokes him out. When Vader releases his grip, Krennic looks at Vader with this… kaleidoscope of emotions. Fear, awe, and a little turned on. That’s how every computer scientist talks about Superintelligence.
B) They’re mostly white guys (it’s relevant, I promise).
AI 2027 is a report predicting a hypothetical timeline for Superintelligence over the next couple of years. There’s a substantial focus on misalignment—where AI stops serving humans and begins to act in its own self-interest.
This is the point where the white-guy fear, science-fictional awe and libidinal arousal all begin to collide in real life.
Our current political landscape has dumbass podcasters swinging elections and a nightmare blunt rotation of a nation openly stating they “create a pretend world”. Superintelligence being “misaligned” with “human interests” doesn’t sound nearly as catastrophic as Superintelligence being aligned with the nefarious people in power who’re already in close proximity to the Silicon Valley key-card holders of these technologies. I needn’t name names but they’re all weird fucking guys powered by father-shaped holes in their hearts.
I can’t help but see how these conversations about Superintelligence being misaligned are panicked and masochistic, wary but curiously receptive to the dystopian possibility of a machine uprising cleansing the world. They are a fundamentally white supremacist fantasy—steeped in patriarchal and colonial guilt or Christian fundamentalist suicidal ideation or apocalyptic nihilism. A more cynical part of me believes the conversations of Superintelligence are a distraction, a misdirect from the environmental and economic exploitations that are propping up this technology.
There’s always been this bizarre yearning for technology to be apolitical, evolving to realise some detached, universal truth. A Superintelligence won’t be able to feel in any way that we recognise but it will think, mechanically, in ways that we do. Superintelligence won’t evolve into some cosmically unbiased entity—Generative AI hasn’t even manifested that way. No matter the control, feigned control or restraint—intentionally or unintentionally—a creation will always reflect the biases of its creators.
AI marks the Imperial March of an Orwellian cyberspace where post-truth is one-shotting uni students and baby boomers alike. Some of my darlings are getting caught in the hype and I want to shake the black and white swirls out their eyes. A ratking of the worst people imaginable can’t wait for this technology to become so advanced that it’s indistinguishable from human creativity—as if their silicon-based salivation isn’t a perfect, spit-shined reflection of their contempt for humanity itself.
We’re walking the plank, bruv. The sharks are circling and the water below is a hyperreality where nobody knows what the fuck is real. The confusion and paranoia are making everyone mean, jumpy, cruel and incurious. Americans are lynching kids and teenagers for playing knock down ginger and weirdos everywhere are trying to transvestigate any woman who doesn’t look like Baywatch-Era Pamela Anderson. We’re supposed to believe that a San Francisco-based Super Random Word Generator spewing up fantasised facts and pseudo-sentient Photoshops churning out fake images are going to be a stabilising presence to our so-called democracy?
It’s about to get PEAK politically. Right wingers are about to flood the internet with some of the most bizarre, shit-stirring, incoherently unfunny slop you’ve ever seen. Sycophantic politicians will be able to AI-generate dead cats instantly whenever they need to distract from some shady scandal they want to bury.
I’m sure Generative AI has some great, practical uses but these tools seem like salvation for those who’ve given up on improving this world and have resigned to retreat into their own. I can’t ignore it. I can’t ignore how the doctrines of economic efficiency have warped our minds, making us so focused on output that we’re understanding progress as letting robots steer our passions.
What are we saving our brains for?
It feels dramatic to say but the systemic undervaluing of Art is the first pillar to be corroded in the abandonment of human freedom. I thought about zeroing in on the artistic ramifications of Generative AI but I feel I’ve already said everything I have to say on that. I just really want to hammer home how much AI fucking sucks, the people who (ab)use it largely fucking suck and the people who benefit from it fucking suck.
But I’m not the feds or your dad.
Nothing I’ve written can make anyone abstain from AI. More than that, I’m empathetic to those who’ve adopted AI into their lives or have been pressured into using it for work. I get it. Every other week, you got some smart-ass telling you something is harmful.
Today I’ll say, “You shouldn’t use Chat-GPT, it’s harmful” and tomorrow someone will say, “You should boycott McDonald’s, it’s harmful” and the day after, someone else will say, “You shouldn’t use iPhones, they’re harmful” and all these insistences cascade into an infinite, shame-driven spiral of prohibition where the only way to be harmless is to sit naked in a peaceful meadow until the Data Centres Come. But I don’t know, man. Productivity isn’t everything. Efficiency isn’t everything. Convenience isn’t everything. Your integrity is everything. You should do whatever is in service of it.
This one isn’t about Generative AI, I just think Palantir is terrifying.
It occurs to me there is a stark, cultural divide here. Black culture has a long history of self-governance with plagiarists—particularly in music genres like Rap or Grime. There are derogatory terms and punitive measures for lyricists who steal lyrics (or the more intangible crime of “stealing my whole flow”). There’s also an implicit system of reverential taking, where greatness pays lyrical homage to other greatness.