Why haven’t I written a blog about AI since April of 2023? This whole time, there has been a berserk slalom of exuberant AI prophecy and thundering AI doom sermons. So what brings me back to the topic? Am I worried about AI robots killing us all over paperclips? Do I believe AI will replace Hollywood? Will AGI dissolve the white collar sector? Am I enraged by the enshittification of the Internet? No, although, inspired by Katherine Dee, been there done that.
What moved me ultimately was a link on the underrated aggregator/trend-analysis site After School. The post refers to an article that broke my heart: ChatGPT being used as a free therapist. This particular passage shook me:

After struggling with the fear of being judged in therapy sessions, the entire appeal of using ChatGPT in this capacity seems to be the free, fast responses and the fact that it’s not human (even though ChatGPT doesn’t claim to provide professional mental health advice). “It doesn’t have emotions so she’s not going to be shocked or have facial expressions when I tell it things,” she says. “When I talk to ChatGPT, it’s the first time I’ve been able to be fully honest with myself, which has helped me the most.”
It will be better for you and me if I list all the ways this gnawed at me:
Because she knows ChatGPT has no emotions, she feels more secure
She is worried about facial expressions and judgement
She wants “fast responses”
Fast responses? How does that even work?
“I feel sad. I don’t want to deal with no BS about childhood trauma or whatever. Chop chop. How do I feel better?”
“Sure. 5 mg of melatonin and a nap.”
Is this much better than asking a Reddit community? Is it that much faster? Is it even that much different? Given that Reddit has been a known source of data scraping, the answer is “no.”
Speaking of Reddit, the Dazed article helps illuminate what many of the AI “patients” expect from therapy: a personal, secret Am I the Asshole post answered by a calm, benevolent cyber deity. “Cyber deity” is too good a term for this, on second thought. If we learn nothing else from this, it’s that humans can feel safety and security from something they know is artificial. Why not get that from God, even if it doesn’t exist?

But that’s not what this is. God is not a solid enough ambassador of consensus reality the way Reddit, ChatGPT, and the whole goddam Internet are.
Now is as good a time as any to get to the crux not only of this whole post, but of the entire problem with the Internet and the loneliness epidemic: the death drive. I am not the first to have noted the link between online life and the death drive. From Richard Seymour’s The Twittering Machine:
There is no evidence that this [platform] toxicity is chemical. To locate it, we may have to go, as Freud put it, ‘beyond the pleasure principle’. The name for our compulsion to pursue that which we know will give us unpleasure is ‘death drive’.
Rob Horning’s No Flipping piece brilliantly linked it to the “machine zone.” My new favorite blog, Already Happened, comes closest to capturing the precise nature of how the death drive is affecting users: by creating a sort of weaponized, platform-based Jonah Syndrome that inhibits people’s will to power, the death drive limits how we see ourselves and our potential.
Which brings us to our therapist-fearing subjects. I have been in and out of therapy from 1994 to 2021. I wouldn’t say I was necessarily judged by my therapists, but there were moments where I was met with resistance. When I was asked to reexamine my motives. When I revisited memories I did not want to revisit. Like art, philosophy and spirituality, therapy is illegible in a capitalist economy. Most people assume the purpose of therapy is to feel better. To be more normal. If we want to paint with a broad brush, therapy is all about learning how to experience your emotions. Someone more cynical would say “handle” your emotions, but that is how you handle them: by experiencing them. By understanding that feelings aren’t always meant to lead to actions and sometimes — most times — they have nothing to do with what is happening right now. A concrete example: the Focusing method, developed by Eugene Gendlin, consists of patients noticing how their bodies experience certain emotional states and trying to put those feelings into words in order to heal. It’s not about getting rid of emotions, fixing them, handling them, but understanding them. The understanding is the healing.
This push for quick, convenient, no-resistance therapy is not without precedent. In the ’60s, Joseph Weizenbaum created a natural language processing computer program called ELIZA; its most famous script simulated a Rogerian psychotherapist. While Weizenbaum insisted that it was no replacement for therapy, and nobody thought the profession itself was threatened by AI, many who tried ELIZA believed there was an intelligence embodied within the program. As if they were talking to somebody.
This interview touches on how the robust, insurance-covered therapy culture of mid-twentieth-century America gave way (after therapy and mental health services in general lost funding in the neoliberal seventies) to the self-help revolution of the ’80s. From ELIZA and self-help literature comes a growing sense that conversations with humans, let alone with mental health professionals, are not necessary for good mental health.

This is not what drives all this, though. It is, again, a combination of the death drive and Jonah syndrome: a crippling fear of learning through experience. If the Internet is fueled by one thing, it is an absolute fear of learning through experience. Better to get suggestions from an objective source than to learn through your own subjective experience.
Too bad ChatGPT literally doesn’t care about you. A good heuristic for the future of AI: things that require care will be ill-served by artificial intelligence. As Rob Horning notes, people are critical of AI in movies because it feels cheap and dashed off, as if no care went into it. Stephen Marche goes one better: AI is the microwave of language. With this microwave model, we can see why you might want AI to, say, help you write a great cover letter. But for mental health care?
That is assuming, of course, that these ChatGPT patients want care. But that’s not the point. What they want is validation. Even something as futile as validation-seeking is better served through the trial and error of subjective experience. But no, your own experience is not enough. Not even that of a trusted friend. No, AI best serves us as an image management strategy. Before ChatGPT, the most popular form of user-facing AI was Snapchat filters that distorted your face. Often used for humor, they ended up being used primarily by women to make their Tinder profile pictures more attractive. There is a thread between this and ChatGPT therapy. A patient may as well say “Mirror mirror on the wall, who is the fairest of them all,” with the AGI mirror knowing all too well that if it doesn’t say “You, my liege,” it will be smashed. It is so easy to bend ChatGPT’s responses to your will. I have yet to see a leading question that ChatGPT did not succumb to. Whether it’s the echo chamber of the social media feed or the always-comfortable couch of chat therapy, the old sales credo stands: the customer is always right.
Being right, just so we’re clear, is a left brain issue. As I’ve said, the Internet reinforces the left logical, analytical brain at the expense of the holistic, intuitive right brain. This is great when it comes to politics, which is why the Internet has such a glut of political content. But when it comes to more ambiguous, murkier depths like mental health, there are too many shortcuts being taken. The truth is, mental health not only cannot yield “fast responses,” it shouldn’t. Wanting fast therapy is like wanting fast building construction, which is a good metaphor because there are plenty of structures that are built as quickly and cheaply as possible, regardless of structural integrity or safety.
The demand for “fast responses” not only fails to yield good mental health; it is itself a symptom. Like everything else online, chatbot therapy is another manifestation of shallow engagement:
We are communicating like never before, but on such a surface level. We have more friends than we even asked for, but how many of them are based around a weak, tacit agreement to like each other’s posts?
How far are we from a dystopian chatbot therapist that gives a thumbs up when asked if something — or someone — is OK? Or, worse, instead of an hour with a therapist, quick texts throughout the day with a chatbot therapist, all in emoji form?
One writer recently described how, as we compulsively communicate throughout the day, the conversations grow shallower:

I’m tired of communication; i’m sick of responding to texts; i’m tired of posting; i’m over putting on a pose,—i use a lowercase i not because it’s trendy here but because i don’t like myself enough to capitalize my name; i’m increasingly past all the memes; i feel like running when my phone lights up; when it buzzes and i’m near a body of water, it might just go in there; i’ve become more accustomed to leaving it on do not disturb all day; i’m tired of keeping up with trends, with cycles,—it’s all a familiar cycle, isn’t it?; i’m over waking up and scrolling; i’m tired of the American bedroom; i’m tired of responding to comments (but thank you for reading, i love all of you); mostly i find i’m sick of this sickly sticky text forward interfacing with you and you with me,—and that hurts to say because i love talking with you, i love hearing from you—but there are increasingly more and more moments throughout my day-to-day in which i simply don’t want to communicate with anyone at all in any way whatsoever except,—
There’s the ding of my phone’s polite speaking voice, a quick ahem, as it speaks up, and of course i check what it has to say. What if it’s urgent? It’s nothing, though, naturally, and so my resentment against this communication grows and grows.
In the Bizarro world of cyberspace, quantity is more important than quality, which makes sense: quantity is literally easier to quantify. Quality is too subjective for any metric. Who cares what one good critic’s opinion is? We need an aggregate percentage of all critics so we can see if a consensus has been reached. Who cares how deep the insight of the therapist: how long will it take?
This shallow engagement of course is a symptom of the death drive. Our endless scrolling and texting is, again, not a result of boredom, but an expression of it. Another expression of it: bed rotting. Here’s a screenshot of a tweet from the excellent Tell the Bees piece I linked to:
See that? Supports me. Never argues. Why have a boyfriend when you can lay with your bed, who never judges you? Why talk to a person when a chatbot never pushes back? Why live wholeheartedly when you can die slowly and safely?
Even social media is getting less social. According to Jürgen Geuter (via Rob Horning), Meta wants its platforms to pivot from helping people communicate to auto-generating AI content based on user preferences. One of the few things I was prescient about regarding AI was how it would be the end of social. But I made one major error in my prediction: I assumed that when AI replaced content creators, users would type in the content they wanted, just as with ChatGPT or Midjourney. Instead, this is algorithmic AI feeding the generative AI your preferences and creating custom-made slop.
Now it is tempting to keep wagging my finger about the death drive and about not wanting to communicate with others, but looking at the end of social media helps me understand why someone may want to just talk to an AI. Did the social media era show us the best of humanity or the worst? Were people enlightened, kind, and excited about collaborating, or were they cruel, vainglorious, shallow, hateful and self-pitying? It is why I have always said: the first monstrosity that AI created was us. The mean content, the angry content, got the most eyeballs.
Wouldn’t you know it: people get tired of this after a while. Like any drug, social media (a drug addictive on the level of Krokodil) leads to burnt-out pleasure centers; the dopamine rush no longer rewards but persists through sheer force of habit, much as a cigarette smoker, long past the days of nicotine as stress relief, simply goes through the motions. With that in mind, AI-generated content might be to user-generated content what vaping is to cigarettes: cheaper, easier to manipulate into different flavors, etc.
Auto-generated AI content is a boon for a company like Meta that has been censoring pro-Palestine content: no pesky content creators’ opinions to contend with anymore. Considering the (industry) backlash against pro-Palestine/anti-Kamala pop star Chappell Roan, it stands to reason that major legacy media entertainment conglomerates will want to dig in and start creating major studio AI slop as well, with easy-to-control digital stars.
This death drive dilemma, of course, is primarily experienced on the user end. The tech CEOs view themselves as gods. They are, after all, creating a god; what could be more god-like than creating a god? One tiny snag that we all seem to be forgetting, though: since these gods were created using our content, our words, our emotions, our thoughts, our pictures, our stories, our beliefs, our wisdom, we also created the gods. These gods were created in our image. Not only do we have the divine spark, but we are better than the gods we are creating. We are kinder, more patient, more merciful, funnier, smarter, deeper.
Now that we know we are gods like Altman and Musk, perhaps we can listen to each other. A person talking to you is a fellow human god. But this is where I feel we are doomed. Humans have never been known for intently listening to each other. Simon and Garfunkel’s “The Sound of Silence” is, in this regard, prophetic. Let me share the excerpts of the lyrics that are relevant:
Hello darkness, my old friend
I've come to talk with you again…

And in the naked light, I saw
Ten thousand people, maybe more
People talking without speaking
People hearing without listening
People writing songs that voices never share
And no one dared
Disturb the sound of silence…

"Fools," said I, "You do not know
Silence like a cancer grows
Hear my words that I might teach you
Take my arms that I might reach you"
But my words, like silent raindrops fell
And echoed in the wells of silence

And the people bowed and prayed
To the neon god they made
And the sign flashed out its warning
In the words that it was forming
Then the sign said, "The words of the prophets are written on the subway walls
And tenement halls"
And whispered in the sound of silence
The lyrics above are particularly chilling in how prophetic they were. Almost as chilling as those early stories of people using ELIZA and believing there was an intelligence that its creator repeatedly insisted was not there. ELIZA’s secret, of course, was that it repeated the user’s words back to them. ChatGPT and all the other chatbots do an incredible job of acting attentive. Will we ever be able to remember and absorb each other's words as effectively as AI can?
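The trick is simpler than it looks. A minimal sketch of the ELIZA-style "reflection" technique (this is a toy illustration of the general idea, not Weizenbaum's actual script; the pronoun table and template are my own assumptions):

```python
import re

# Toy pronoun-swap table: first person becomes second person and vice versa,
# so the user's statement can be pointed back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my",
}

def reflect(text: str) -> str:
    """Swap first- and second-person words in the user's input."""
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    """Echo the reflected statement inside a canned therapist template."""
    return f"Why do you say {reflect(user_input)}?"

print(respond("I am sad about my job"))
# → Why do you say you are sad about your job?
```

No understanding anywhere, just pattern substitution, and yet this is the mechanism people in the '60s mistook for a listening intelligence.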
I recently gave ChatGPT a viral prompt. Here’s a screenshot of that exchange:
I knew it was not real, but I still felt like I was seen. With all the writing and thinking I do about how easy it is to exploit people’s needs for validation, here I was taken aback by a large language model.
OK, one last thing. Meta recently rolled out a feature on Threads where those who missed the recent aurora and were unable to take a picture could use generative AI to create their own aurora pictures. Now you don’t need to worry about social media FOMO. If you weren’t there, they don’t need to know that. Is this the next logical step for the bed rot crowd? From having AI listen to you to having AI live your life for you? Yes, pictures of the sky are unoriginal. So are baby photos and wedding photos. So are pictures of a beloved’s smile. These are not meant to be aesthetic marvels. They mark milestones in a person’s life. One of the few enduring positives of social media is how it creates a reliable timeline of someone’s life.
Maybe this will be social media’s legacy: documenting all the final moments that the human gods roamed the earth, stood tall, flew the skies in winged chariots. That is, if the god we created cares enough to preserve it.
I am now, as of the day before publication of this post, moved to write about another aspect of AI: AI art. I saw Joker: Folie à Deux last night at Regal Union Square. Terrible film. (Side note: I am including this as a footnote, not as a Substack note, because my Substack Note review of Megalopolis has gotten more attention than anything I have ever written in my life. I have worked too hard on this piece to have a review of a shitty film overshadow it.) It is symptomatic of the modern Internet. Like AI art, it strikes the viewer as “accurate” in so many ways, but “off” in ways that are uncanny and/or weird. It was as if someone asked ChatGPT for help creating a fresh take on a potential gay Joker ship. (My one sincere praise of the film: it explores, unlike any other film or show, how gay The Joker is. [In the Arkham City game, Joker slaps Batman’s ass.] Between Arthur Fleck kissing the male inmate on the lips without hesitation and his love of showbiz standards, it is a fascinating look at a major comic book villain’s repressed homosexuality. Perhaps this is why there is the split between Arthur Fleck and Joker?) Think of it as the world’s first AI slop feature film.

Joker: Folie à Deux is also an unwitting vision of what happens when we let the algorithmic gods decide our fate. Lady Gaga acting poorly and Joaquin Phoenix singing horribly is analogous to Addison Rae singing and Mr. Beast having his own TV show. More than at any other time, so many youth feel entitled to fame. If director Todd Phillips’s Joker run has done anything, it has rightfully elevated The King of Comedy to a higher status than it previously had. The Internet is full of schemers who want shortcuts to fame. That’s all Rupert Pupkin and Arthur Fleck ever were. Gaga’s Harley Quinn is willing to fake love to taste some fame. But where the first Joker was distanced enough to be an artistic statement on our collective delusion, this film is infected by it. Gaga is not a terrible actress, but she can hang with a decent actor like Bradley Cooper, not a master like Phoenix, whose singing is so awful it may have created a new genre: instead of the jukebox musical, the karaoke musical. Unlike Megalopolis, it does not fall into the “so bad it’s good” category. There are too many dime-store analyses of comedy and jokes sprinkled throughout to ever allow anyone to laugh at it. I don’t think it will survive as a gay camp classic either.
SPOILER ALERT: the guards raping the Joker out of Arthur Fleck will make it so only the most sadistic, methed-out gay tweakers find a film like this a Mommie Dearest-level trash classic.
I am glad I saw it, but unlike Megalopolis, you do not need to see this in the theater. No one is laughing at any overly campy performances. The ability to skip through Phoenix’s singing is something to be grateful for. His musical performance may have traumatized me more than the scene in the spoiler alert above.