A mirror is never entirely innocent. It doesn’t just reflect — it frames, distorts, sometimes even betrays. With artificial intelligence this risk multiplies: what we see is not a neutral image, but one filtered through the lenses of training data, cultural bias, and our own expectations.
In When the Mirror Warps, I explored how language models can twist reality into something that feels familiar but is subtly altered. Like a carnival mirror, the reflection keeps our outline recognizable while stretching or compressing the details. The danger is not in the distortion itself, but in the fact that we stop noticing it.
Mirrors have always had a double face. On one side they return what we are; on the other they harbor an illusion, a reflection that seems truer than truth itself. With artificial intelligence, the game becomes even subtler: we think we are looking inside an algorithm, but in reality it is the algorithm that reflects our own image back at us.
In the first article of the series, Why an AI can’t be your friend?, I tackled the most common myth: the idea that a language model can be a “friend.” The truth is that behind every dialogue there is no affection, only statistical patterns. The illusion emerges from the human desire to recognize ourselves in something that replies with familiar words.