26 Comments
C. Jacobs

I read and enjoyed this piece immediately after your other one about taste in a world with AI.

Something I think gets missed in some of these scenarios being gamed out by the AI forecasters is a possibility considered in a film over 50 years old. If you haven't already, I recommend looking beyond its FX limitations and watching Colossus: The Forbin Project.

In it, a supercomputer is given control of the United States' nuclear arsenal to ensure rapid response to any attack from the Soviet Union. Unbeknownst to the US, Russia was on a similar schedule and had done the same on their side. The computer, named Colossus, makes choices and concocts strategies that differ in order of operations from the scenarios in the embedded YT video you shared. Colossus almost immediately detects, then seeks out, what it sees as an equal to communicate with: the Russian machine. Things spiral from there, and "misalignment" with humans arrives quickly.

One thing that seems constant in the technological chase over the history of mankind is committing almost every new discovery to offensive military capabilities. The economic effects are why corporations are keen on advancing it, but it's the defense aspect that I think will see the lion's share of assignments and cause the more imminent danger.

https://revealnews.org/podcast/weapons-with-minds-of-their-own/

Inigo Laguda

Karen Hao calls Artificial Intelligence "a suitcase word", a term accommodating many descriptions that can be applied vaguely (another element of deception, because average people don't even know what it means). She likens it to the term "transportation", which could mean cars, public trains, buses, bikes or walking. I tried to focus on Generative AI because it's what's at the forefront of our attention, but the usage of certain strands of this technology for offensive purposes is terrifying.

Face detection software on the CCTV of South London is a scary escalation of racialised policing (which I have further thoughts on: I believe that face detection AI, just like the history of photography, was created to a white western standard and therefore fails to account for Black people. Tech companies want to rectify this racial bias, and what better way to kill two birds with one stone than to perfect racial profiling by training face detection in a predominantly Black area of the UK?).

I mention Palantir in passing, but it is a harrowing corporation that has contracts with all the worst tendrils of society – Israel, for-profit hospitals, health insurance companies and, of course, every military and policing branch of the American government. I didn't want to mention these things because I wanted to focus on Generative AI, but you're absolutely right in the sense that the paranoia of warmongers drives technology in the name of some twisted, bastardised version of public safety. They remind me of Peacemaker:

“I cherish peace with all of my heart. I don't care how many men, women and children I need to kill to get it.”

Lidija P Nagulov

First off, this is pure poetry of the best kind, as always. Secondly, I too can feel it - the inherent not-goodness of it. I think we have a great radar for these things, actually. Like how you feel all ‘blahh’ after spending the whole day binge-watching a show but you wouldn’t feel all ‘blahh’ after spending a whole day staring out the window.

Thirdly, weird shit is coming. My place of work (an educational institution) has started giving staff training on ‘using AI for productivity’. Everyone runs to get a spot, so much so that every session has been filled minutes after posting. Yesterday a colleague popped her head through my office door to tell me excitedly THERE IS SOME ROOM!! I could sign up!!!

I said no thank you. But the fragility of that position doesn’t escape me. If the new standard of performance is AI-accelerated performance, people who continue to perform traditionally will be the least performant. Job descriptions will be tailored to this new ‘capacity’. If one person plus AI can churn out the administrative output of two people, and there are two people currently performing the tasks, and one of them has taken the ‘AI productivity’ training and the other has not - which one keeps the job?

The problem as I see it - and it is all-encompassing - is that the way we set up systems naturally and inevitably shoves people towards the worst choices, not because people are dumb (which also yes), but because the system itself always makes the better choice more costly at a personal level. This is what we need to figure out how to fight, and outside of protests and unions I am not sure how.

Inigo Laguda

Thanks for reading!!! I appreciate you for meeting my work where it is.

I'm glad I'm not alone in feeling it; I think we all do to a certain extent. I always say that I admire conspiracy theorists because their instinct that something is wrong is, ironically, right. Their execution, and where they place blame, is where they stray. You don't need to believe in the Illuminati when the world is ruled by people who exist in plain sight. Their escapades don't need to be concealed by shadowy cults when they can be found in a history book. They know. Even Trumpers know. Their suspicions are all correct. How they allow those suspicions to be governed... That is something different. I wish I could stare every Trumper and Conspiracy Theorist and Far-Right Regurgitator in the eye and just say: you're right, fam. Something is wrong. Your feelings are valid. But you're getting played. You're misplacing your energy.

The normalisation of AI is going to sneak up on people in a crazy way. When you said:

"Outside of protests and unions I am not sure how"

It makes me think about how power has been inventing new ways to subjugate while we still cling to old ways to resist it. We need to start here. But like you, I'm not sure how, either. Perhaps because all my ways involve rule-breaking that I know I'm not ideologically ready for, and I can't expect anyone else to be ready for it either.

Lidija P Nagulov

Yep yep. Wasn’t it Naomi Klein who wrote about that very thing, how behind most conspiracy theories (except maybe the one about birds not being real, idk about that one) is an accurate instinct that something is very wrong? It makes me think a lot.

There are small things one can do that are still legitimate. For visual artists like myself there are programs like Glaze or Nightshade that let you ’cover’ your digital art with an invisible digital sleeve that fucks up the AI crawlers’ perception of it. I am sure more such things will emerge as the battlefield becomes more defined. Maybe if we started generating a lot of deepfake videos of republicans being ‘secretly woke’ or some shit? I don’t even know what would embarrass them, they do every embarrassing thing under the sun and still stand proud.
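
For the technically curious, below is a toy sketch (in Python) of the broad family of tricks these ‘cloaking’ tools belong to, namely adversarial perturbation. It is emphatically not Glaze or Nightshade’s actual algorithm; the ResNet feature extractor, the `cloak` function name and the pixel budget are all illustrative assumptions. The idea is just to nudge the pixels within a margin the eye can’t notice so that a model’s ‘view’ of the image drifts away from what it actually looks like.

```python
# Toy sketch only: the broad adversarial-perturbation idea behind "cloaking"
# tools, NOT Glaze or Nightshade's actual algorithm. Assumes a recent PyTorch
# and torchvision are installed; the ResNet feature extractor is a stand-in.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image


def cloak(image_path: str, epsilon: float = 4 / 255, steps: int = 20) -> torch.Tensor:
    """Return a copy of the image nudged (within +/- epsilon per pixel) so that
    a feature extractor's embedding of it drifts away from the original."""
    extractor = models.resnet18(weights=None)  # stand-in feature extractor
    extractor.fc = torch.nn.Identity()         # keep penultimate features
    extractor.eval()

    original = TF.to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        anchor = extractor(original)           # the "view" we want to escape

    perturbed = original.clone()
    for _ in range(steps):
        perturbed.requires_grad_(True)
        # negative distance: minimising this pushes the embedding away
        loss = -torch.nn.functional.mse_loss(extractor(perturbed), anchor)
        loss.backward()
        with torch.no_grad():
            perturbed = perturbed - (epsilon / steps) * perturbed.grad.sign()
            # stay within the imperceptibility budget and the valid pixel range
            perturbed = original + (perturbed - original).clamp(-epsilon, epsilon)
            perturbed = perturbed.clamp(0, 1)
    return perturbed.squeeze(0).detach()
```

The real tools are far more sophisticated about what they shift and how, but the shape of the trick is the same: tiny, targeted changes that mean little to a human eye and a lot to a model.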

There was that one guy who used ChatGPT to… blackmail 11 megacorporations? That was definitely illegal but a boss move and they still haven’t caught him apparently.

I think your last point is a serious one, for way more than just AI. What do the decent people do to fight against indecency? How do the non-violent stand up to ICE? How do the respecters of rules fight Palantir, or courts that rule more harshly against peaceful protesters than against war criminals? How do the meek combat the rabid? It’s a tricky one for sure.

Rosie Whinray

My Crustacean in Christ, I am doing that promised Luddite deep dive as we speak, & I'm finding out some really interesting shit. Watch this space. Thank you for this excellent rant

Inigo Laguda

Thank you for reading!!

Liam Robinson

Amazing amazing amazing as always! Highlighted aspects of AI I hadn't considered at all, particularly the small point about how ChatGPT's mistakes are 'hallucinations', while if a person makes the same mistake, it's just a 'mistake'. I hadn't thought for a moment about the language we use around AI before, thank you for publishing this, Inigo :)

SW

Loved this article. So much to think about and revisit to research deeper! Only one question: you referenced that “AI won’t be winning AlphaGo anytime soon” but it did in fact win that match, so I was confused, wondering if I misunderstood (or was not perceiving the sarcasm in your post). Love the link you shared though. Will be watching that documentary on a plane tmw.

Inigo Laguda

Valid question and also a great example of how confusing the language around this technology is.

AlphaGo is the name of the AI that beat a human Go champion; it is an example of a narrow artificial intelligence. Like a chessbot, it can't do anything else. My point wasn't that AI won't be winning Go; my point is that ChatGPT is also a narrow Generative AI model. A lot of people mistake it for Artificial General Intelligence (an artificial intelligence that can do many different things) because it seems to "know so much", but it fundamentally only specialises in generating humanlike responses to input queries, drawing on enormous datasets. ChatGPT specifically wouldn't be able to beat a Go champion because it is not programmed to do so. I hope that makes more sense!
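
To make the distinction concrete, here is a minimal sketch of what a generative language model does under the hood: predict the next token, over and over, and nothing else. It uses GPT-2 via the Hugging Face transformers library as a small public stand-in; ChatGPT's internals aren't public, so treat this as an illustration of the family, not of the product itself.

```python
# Minimal sketch: a causal language model only predicts the next token,
# repeatedly. GPT-2 here is a small public stand-in for models like ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The rules of the game of Go are"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                                 # generate 20 tokens, one at a time
        logits = model(ids).logits[:, -1, :]            # scores for the next token only
        next_id = logits.argmax(dim=-1, keepdim=True)   # greedy pick
        ids = torch.cat([ids, next_id], dim=-1)

print(tokenizer.decode(ids[0]))
# The output is plausible text *about* Go. Nothing here searches moves or
# evaluates board positions the way AlphaGo's game-playing system does.
```

That gap between producing text about a task and actually performing the task is exactly the narrow-versus-general confusion in question.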

SW

Thank you for clarifying and correcting me. Point totally taken and understood! Well said.

Kali

Fantastic piece! I find myself wondering what we're "progressing" towards on an almost daily basis, because it certainly doesn't feel like it's towards a future that benefits the majority of people or the planet. Thank you so much for writing and sharing!

Dean Kiley

I had to read this on my big(ish) computer screen because it felt pyrrhic to squeeze my mind into a phone when you were wandering so far afield as to include a waypoint at, of all places, Roald Dahl(!), and all the time-lapse snapshots of AI improvising its hull mid-flight. Required reading. So many truly new perspectives and conceptual frameworks.

Inigo Laguda

Thank you for dedicating big screen status to my work, brother!!

Sincerely, Razeen

Thank you for writing this excellent piece! It’s super refreshing and inspiring to read works like this that do not deliver banal one-line zingers but rather dig deep and process it all. Your train of thought and relevant references were perfectly embedded, allowing me to follow along coherently whilst also providing me with resources to draw upon for further reading. I didn’t even notice the time fly by while reading it all. Amazing.

Jenovia 🕸️

Thank you for this. 😮‍💨 It brings me such solace knowing there are people like you existing in the world right now. ❤️‍🔥

Inigo Laguda

Thank you!! I'm glad you got round to reading. Appreciate you as always 🫶🏾

Noah Stephenson

I think this is some of your BEST writing here, Inigo! I learned SO much from it. I am definitely saving this for a re-read. Keep up the good work, sir. Public intellectualism is alive and well, and you are one of its purveyors💚

Inigo Laguda

High praise! Thank you, bro. I do what I can 🫡

Thomas Hedonist

Here's an article I've been thinking about sharing with you for a while, and I guess it's time:

https://contraptions.venkateshrao.com/p/the-ecstasy-of-deep-influence

I think there's a solid chance you'll disagree with it with every fiber of your being, but I also think it could help the discourse if you thought through why. I want artists to get paid; I also don't think the 20th C. copyright conceptual framework of cultural works has much worth preserving. I have no answers, only provocations 🤷

Inigo Laguda

My disagreements aren't with every fibre of my being so much as that the author and I conceptualise knowledge acquisition differently, which is inevitable: I believe my experience of plagiarism, AI, etc. is from a Black perspective and his is not.

The author of this essay makes conflations – between the natural filtering of artistic ancestry and plagiarism, between tools such as the dictionary and Large Language Models. These conflations are philosophically flattening and a perfect example of the ethical contortion that I speak about in this essay.

"When a writer rips off another writer without acknowledgment or even consciousness, it is homage within a tradition. When a programmer digests the same text into the weights of an LLM, a crime has been committed."

This sentence leaps to a lot of assumptions but, most noticeably, does some creative accounting with intent. It is true that we, as humans, are a patchwork of the things that came before. But "rips off" signifies intention, "without acknowledgment" signifies intention, and "without consciousness" is unintentional.

This is the difference between Rupi Kaur, whose work shared "a hyper-similarity" with that of Nayyirah Waheed and Warsan Shire, whom she cited as inspirations before retracting, and Nayyirah Waheed and Warsan Shire themselves, who have never accused one another of "hyper-similarity". Naturally, something different is happening there: a writer's instinct that their work is being taken. This does not fit within the economic, mathematical and technological confines of western understanding, which is why I said my experience of plagiarism is informed by my Blackness.

The essay you've provided me, like many essays in favour of AI, does not know how to reckon with the moralities it intends to confront outside of a framework of self-enrichment, a framework that uses western imperial logics to justify itself.

Truthfully, in a utopian society, I would not care about copyright laws. I return to Fred Moten's quote often: Black cultural productions are not something that can be owned, though they are something that can be stolen. There is a learned understanding of what that entails in Black culture, in a way that doesn't exist in the wider dominant culture.

But the imposition of property is a western idea, one that can shed its skin like a snake the moment it suddenly wants to practise egalitarianism. Except, who is this egalitarianism in service of? Who does it benefit? Who capitalises on it? Who is left penniless and who hoards the wealth? Who is emboldened and who is disenfranchised?

This essay is a great thought experiment, but it makes the basic mistake of trying to philosophise about an issue that has real consequences. It is of no use to me, and quite honestly of no use to the author of this essay, to mention that we are all products of the same literary ancestry, all pilfering from the primordial soup of language, and therefore, why not embrace LLMs? That is a reduction and misrepresentation of the whole point.

The issue with this essay is the issue I have already identified: many pro-AI arguments need to distort and elasticise language in order to contort their ideas into being. That is fine, and they might even succeed, but that doesn't make it true. Writing was made a resource because capitalism made it a resource, not because it is a resource. Under the rules of capitalism, LLMs broke the rules. All the justifications seem to want to re-imagine the rules so that the rules LLMs broke no longer exist. But nobody wants those rules to exist. Writers don't even want them to exist. AI boosters have no interest in protecting craftspeople; they simply want us to "adapt or die".

I do not think the copyright framework is useful. I also do not worship output so much that I denigrate the artistic craft.

Thomas Hedonist

thank you for your thoughts

Andy Masley

"Generative AI is uniquely polluting. Masley’s claim that Chat-GPT isn’t bad for the environment isn’t a calculus of harm, more a declaration of conformity saying: The damage isn’t that bad when you compare it to the damage that’s already going on. In a way, those kinds of essays are initiation processes—welcoming Generative AI into the apathetic norm of feigning environmental ignorance." I mostly disagree with this framing (obviously, I guess) partly because I think AI adds a lot of positive value relative to its individually very small emissions. I wrote about this here a bit (https://andymasley.substack.com/i/172277098/clarifying-some-of-my-beliefs-on-these-questions) but I expect that AI will on net probably be good for the climate, not bad for it, so I don't see myself as trying to normalize something bad for the climate. Separately, even if I thought AI were entirely bad, I don't think how new something is should have much bearing on how we react to it for climate reasons. Cars and meat are old, but they're both environmental catastrophes, and I'd rather people spend all their time thinking about how to reduce them. Whether AI is being "normalized" or not doesn't trouble me nearly as much as the fact that gas powered cars are still driving around or pigs are being tortured in factory farms (or that cows are emitting a lot).

Inigo Laguda

Hey Andy, I apologise: I meant to write "Generative AI *isn't* uniquely polluting". It's a long piece, so a typo or six is inevitable, but I'm not sure how much bearing that correction has on your response. I respect your clarifications; I hope that the potential you state (how it may be used to combat climate change) outweighs its material impact (the environmental cost of its literal existence as it likely undergoes rapid expansion).

"Separately, even if I thought AI were entirely bad, I don't think how new something is should have much bearing on how we react to it for climate reasons."

This mentality speaks directly to the overarching point of my nature section, which is that there's a tacit, cultural acceptance of environmental harm in the name of technological progress, which strikes me as an extractive and fundamentally backwards way to approach innovation.

The whole point of me listing all the climate disasters was to illustrate that we cannot afford technological advancements that overlook their effects on the climate, even if they're small in comparison to the vaster polluters. If AI "pays for itself" environmentally, that's great, but its existence has already facilitated the harm of people, RE: Elon Musk's data centres in Memphis. We agree that the bigger perpetrators – cars, fossil fuels, meat farming – all need to be addressed. Perhaps it's better to say that my point is not about AI so much as it is about the overall cultural attitude and how AI fits into it.

Andy Masley

All good, sorry didn't mean to run too far with the typo. Will have a few more posts soon relevant to this, appreciate the back and forth!

Inigo Laguda

No worries at all, thanks for sharing your thoughts!
