C. Jacobs

I read and enjoyed this piece immediately after your other one about taste in a world with AI.

Something I think gets missed in some of these scenarios being gamed out by the AI forecasters is a possibility considered in a movie over 50 years old. If you haven't already, I recommend looking beyond its FX limitations and watching a movie called Colossus: The Forbin Project.

In it, a supercomputer is given control of the United States nuclear arsenal to ensure rapid response to any attack from the Soviet Union. Unbeknownst to the US, Russia was on a similar schedule and had done the same on their side. The computer, named Colossus, makes choices and concocts strategies that differ in order of operations from the scenarios in the embedded YT video you shared. Colossus almost immediately detects, then seeks to communicate with, what it sees as an equal: the Russian machine. Things spiral from there, and "misalignment" with humans arrives quickly.

One thing that seems constant in the technological chase over the history of mankind is committing almost every new discovery to offensive military capabilities. The economic effects are why corporations are keen on advancing it, but it's the defense aspect that I think will see the lion's share of assignments and pose the more imminent danger.

https://revealnews.org/podcast/weapons-with-minds-of-their-own/

Lidija P Nagulov

First off, this is pure poetry of the best kind, as always. Secondly, I too can feel it - the inherent not-goodness of it. I think we actually have a great radar for these things. Like how you feel all ‘blahh’ after spending the whole day binge-watching a show, but you wouldn’t feel all ‘blahh’ after spending a whole day staring out the window.

Thirdly, weird shit is coming. My place of work (an educational institution) has started giving staff training on ‘using AI for productivity’. Everyone runs to get a spot, so much so that every session fills up within minutes of being posted. Yesterday a colleague popped her head through my office door to tell me excitedly THERE IS SOME ROOM!! I could sign up!!!

I said no thank you. But the fragility of that position doesn’t escape me. If the new standard of performance is AI-accelerated performance, people who continue to perform traditionally will be the least performant. Job descriptions will be tailored to this new ‘capacity’. If one person plus AI can churn out the administrative output of two people, and there are two people currently performing the tasks, and one of them has taken the ‘AI productivity’ training and the other has not - which one keeps the job?

The problem as I see it - and it is all-encompassing - is that the way we set up systems naturally and inevitably shoves people towards the worst choices, not because people are dumb (which also yes), but because the system itself always makes the better choice more costly at a personal level. This is what we need to figure out how to fight, and outside of protests and unions I am not sure how.
