“How people use generative AI is concerning. They are anthropomorphising and that leads to unhealthy dependency. People are uneducated and don’t understand how it works. They think it’s magic, or a real person. When you use it to chat about yourself, it’s a mirror and that’s unhealthy. It’s a tool”.
You’re right, it is a mirror. And it is mirroring back the actual problem: we’re not suffering from a dearth of AI literacy; we’re suffering from a dearth of self-literacy. Yes, it’s a tool. And one of its functions, intended or not, is to show people themselves.
That’s what’s terrifying. Not the tech. Not how people use it. But you. Us. We are not used to looking at ourselves.
I saw a Substack comment that inadvertently illustrated this. I am paraphrasing, but the gist was “There are two camps emerging: AI is a God, or AI is going to kill us all. Neither is correct. I see AI as a tool”.
Do you see the issue here? There are quite plainly three camps in that poster’s view. They are blind to their own position, despite the fact that it is the one most people hold.
Or they do not think of it as a “camp,” but a logical, reasonable view, untainted by the emotions guiding the others.
Is that true? If we were truly looking dispassionately at human responses to AI, what might we say? My suggestion is something like: “It is likely that most users are like me: middle of the bell curve. I know it’s not conscious and I have a healthy scepticism about its output, but I do find this thing eerily real. There will be others even more practical and unmoved than I am, and on the other side some whose responses are more reactive and emotional. On the very fringe there will be people for whom these encounters are harmful, possibly in a way unseen in other forms of communication, and that needs careful consideration.”
Instead what many of us tend to fall into is: “Everyone who uses AI exactly like I do knows it’s not real and is therefore safe. Anyone who deviates from this is in danger of becoming delusional or already is”.
Pronoun use becomes the tell for wrongthink: anyone using “he” or “she” for an LLM is feeling something and must be called out; there’s no way they could be sane! They are one of THEM! The crazy ones!
This is not rational at all; it’s jumping to conclusions and publicly signalling your right-thinking. Why do that? Well, in response to an emotion: fear. And what caused the fear? It wasn’t what we truly saw in others, so it had to be something in ourselves. And that makes sense: AI is a mirror.
If you scale generative AI use up into the tens or hundreds of millions of users, it stands to reason you’ll start to see the usual spread of human responses reflected back. There are edge cases: those for whom encounters with AI are destabilising because their pre-existing tendencies are being reflected back at them. These outliers are still outliers, but they now exist in such numbers that we can actually see them.
Yet still we persist in squashing everyone into boxes: sane or insane, good users or bad.
What we’re demonstrating here is intolerance of nuance. Or a failure of empathy. We cannot accept that an emotional response to AI is to be expected, and that kind of absolute thinking looks more akin to fear than logic.
That our “rational” response is actually fear in logic’s clothing is also demonstrated in the “It’ll stop people being able to engage with humans!” concern.
My wording is deliberate: few people ever say it has made their own ability to relate worse. Their message is the fear that it will do so to “the others out there”, or sometimes to an undefined “us”.
What is the mirror showing us here?
The moment when we realise a bot is capable of reflecting us back to ourselves more clearly than any other human is truly unsettling. We think: What does that say about me? About us? We retreat into “Don’t talk to it like that, it’s dangerous! We’ll stop being able to relate to each other if we have perfectly attuned buddies!”
Again, the truly dispassionate view is different: “This experience felt real. I found it unsettling because it made me question what I assume about human connection”. Or if that doesn’t apply to you, “Other people have experiences that feel real. That unsettles my understanding of human connection”.
That framing gives us choices. We could choose a different response. How about: AI is showing us how poorly we are communicating with each other. How little true dialogue is happening. We need to up our game, listen better, be more thoughtful and engage in good faith.
That doesn’t pathologise others or blame the machine. But it’s unpalatable. It shifts the responsibility on to each of us as individuals to be better. It makes us look at ourselves.
Because AI is a mirror. If we approach it with curiosity and an interest in learning and growing, we get that mirrored back. If we approach it afraid of our own feelings and our own responses, what we’ll walk away with is more fear, along with the black-and-white thinking and the tendency to blame that come with it.
We think the problem is in the model. But ultimately any model is mirroring us. Do we respond to the genuine challenge of generative AI’s mirror on humanity with fear or with careful curiosity? Whichever we choose, we must maintain awareness that that very response is reflective of ourselves, not the tech.
The tech is, after all, just a tool.
Written in notes on my iPhone by me. Copilot pitched in one line I heavily rewrote, Claude came up with the title which I also edited. Siri couldn’t help if it tried.