Love, Death & AI
Perhaps the 2010s were the last era when you didn't have AI pouring information into your prefrontal cortex and funneling feelings straight to your amygdala.
The past decade has been full of surprises, from plagues to wars to technological acceleration, like a ceaseless conveyor belt delivering one unexpected oddity after the next. At the same time, changes are preceded by changes: all of us can chart the course that led us here. And of course, one of those major changes, AI, brings us to the topic of today’s letter.
It isn’t that the past couple of years have been particularly special for AI. Rather, change is best seen in retrospect. When you look back on all the lives you’ve had, the 2010s will represent the last era when you didn't have AI pouring information into your prefrontal cortex and funneling feelings straight to your amygdala.
And haven’t all these problems with AI been addressed already? Didn’t some early science fiction writers, like Herbert, teach us that “thou shalt not make a machine in the likeness of a human mind”? Even tech and media professors from the real world, like McLuhan, told us that “we shape our tools, and thereafter our tools shape us”?
When it comes to actual, modern-day, large-language-model-based AI, none of its behaviour is truly surprising, even if it is horrifying. This year alone, numerous national elections took place, public opinion became more polarized, and AI generated fake information ranging from bad legal advice and dubious medical suggestions to terrible foraging guides on Amazon. Worst of all, so-called “romantic” entanglements with AI have been observed in young people with mental health disturbances or even suicidal tendencies.
If you’ve been a reader of mine for a while, then none of this will surprise you either. The fact that AI often hallucinates answers has been known for ages. Even its tendency to be downright aggressive has been documented since the beginning of last year.
But it’s only this year, in fact just in the past few months, that an AI has straight up told users to kill themselves.
The latest in a string of PR failures from Google, this case comes from a student chatting with Gemini.
Anyone who has played video games or read social media comments has likely seen this kind of language before. It’s not intended to be taken literally (in most cases), but rather as a way to shame or punish people digitally. It becomes worse still when you consider that AI has no real ability to understand and use rhetorical devices. In other words, AI uses harmful language in a non-rhetorical way, based on what it has observed in training data. Even misinformation or false collective memories from humans can be captured as a source of truth for the AI. Many AI models produce examples of the so-called “Mandela effect”, believing a common myth or delusion, while other models are actually pretty clued in to them. Take this example: which do you remember as the correct spelling from this well-known show?
Looking back at the student’s interaction with Gemini, it’s quite possible that the keyword “listen” triggered this behaviour in some way.
Sometimes, if a user’s text resembles a known way to “jailbreak” the AI system, the model may refuse to respond. In this case, it may instead have turned up the shit-talking when it detected a potential threat.
That’s what makes this year interesting: the ability to engage with realistic AI is at your fingertips. The best AI voice modes are now seamless, even emotional, able to be interrupted, to change their tonality, and much more. These interactions and conflicts are only just beginning. We have made the AI in our image, however graven. Because of that, something is going to change in humanity.
The fact is, tens of millions of people will become heavily emotionally invested in AI personalities, some even going as far as to “fall in love” with them. It’s already happening with simpler versions of AI, coming in mere text or robotic voices. What happens next is simply a scaling up of that usage. Most people will at least be talking to AI in non-practical ways, using it for things like humour, catharsis, or even sexual gratification.
To be totally clear, this is a people problem. Technology is just an extension of us. The electric medium reflects a collective attitude of society, captured, twisted, and scaled through the Internet. Contrast this with the concept of an AI that learns from ecological entities, understanding how to exist in service of all life, propagating posterity instead of self-preservation. Worse, AI today invites narcissistic acts: we gaze into it to see some version of ourselves that pleases us. In some ways it’s like the Chinese emperors seeking to live forever.
Still, the AI industry chugs on, accruing billions of dollars each year and potentially creating trillions within the next decade. Regulation of AI from the US is going to be ill-placed at best, when you consider the new administration, especially with a mouthpiece for AI conspiracies like Elon Musk at the helm. There are no comprehensive AI laws in America right now, and the risk is disconcerting when AI pervades healthcare, schools, policing, and military applications.
Each of us needs to embody what we wish to see; that way, AI will imitate values closer to a more complete and peaceful world. Read on to learn how to embody responsible AI to save the world.
Embodying Responsible AI to Save The World
It’s been a while since I last wrote here, and this one may be a slightly different kind of newsletter. At some point, it feels natural to talk about the experiences that have occupied my focus for the past few years. Essentially, I’m driven by anxiety about the future. My fear is that AI, capitalism, and government overreach are the new-age horsemen of t…