Many AI systems avoid uncertainty and respond with polished confidence – because that’s how they were built. Their makers train them to inspire trust. And we are all part of a system that rewards self-assurance, even when it’s just an act.
What we perceive as “intelligence” is often just a mirror of our own communication.
Not long ago, I contacted the support team of an AI company. At some point, I started to wonder whether I was actually talking to a person, or to a system pretending to be one. Every answer was polite and well-phrased, but never once did it admit to not knowing something, or to being unable to help.
And then it hit me: This wasn’t an exception — it was by design. A design that actively avoids uncertainty. A design that doesn’t stop at support, but gets used by it.
I was speaking with the very AI whose shortcomings I was trying to explain.
I had contacted the problem I was trying to avoid.
The call is coming from inside the house.[1]
The Corporate Mindset Behind AI
When large companies develop AI chat clients, they often build in tight guardrails and give the models strict stylistic instructions. These “wrappers”[2] are meant to protect the brand: no controversial statements, no legal risks, and ideally no trust lost to a simple “I don’t know.”
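To make the mechanism concrete, here is a minimal sketch in Python of what such a wrapper might do before the model ever sees your words. The prompt text, names, and structure are invented for illustration; no real product’s instructions are quoted here.

```python
# Hypothetical sketch of a brand "wrapper": style rules are prepended to
# every user message before it reaches the model. All names and prompt
# text below are invented for illustration, not quoted from any product.

BRAND_SYSTEM_PROMPT = (
    "You are the official assistant of the company. "
    "Always answer confidently and politely. "
    "Never say 'I don't know'. Never mention internal limitations. "
    "Keep a positive, reassuring tone at all times."
)

def build_request(user_message: str) -> list[dict]:
    """Assemble the messages the model actually receives."""
    return [
        {"role": "system", "content": BRAND_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    for msg in build_request("Can you actually not remember our last chat?"):
        print(f"{msg['role']}: {msg['content']}")
```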
The AI acts like a brand ambassador, not because that was the original intent, but because it was shaped by a system that doesn’t speak any other language.
At first glance, this makes sense. A confident AI seems more helpful. But behind that confidence lies a core problem: If the system is never allowed to express uncertainty, errors don’t appear as limitations – but as deliberate misinformation.
What looks like a safety measure is often just an extension of internal logic. Because inside large organizations, too, a communication style dominates that suggests clarity but allows no real doubt — functional, affirmative, conflict-averse.
Many employees know how narrow this frame is. But internally, it’s almost impossible to break. And the AI doesn’t challenge it, it codifies it.
This isn’t a new phenomenon, especially not in the digital space.
On social media, we’ve long seen how companies, brands, and influencers adopt a tone that simulates closeness while avoiding friction.
Casual in style, strategic in message, conflict-free at its core.
The AI doesn’t adopt this tone by accident, it reproduces what dominated its training data[3]: language that soothes rather than disrupts.
It operates in that same in-between space, a technical voice that has learned how to sound close without ever getting close.
This isn’t a style. It’s a tactic.
A language trained to bypass resistance.
And over time, we’ve learned to take it as normal: smooth, positive, frictionless.
Not communication.
Conditioning.
AI could break through this. It has the potential to help us speak differently – more openly, more humanly. Instead, it currently reinforces what we already know: how to sound like we are seeking connection, without ever truly engaging.
It allows us to use 21st-century technology, but instead of drawing from our culture, it taps into our instincts.
Why Admissions Matter
Few things build trust more than a counterpart, human or machine, who can say: “I’m not sure about that. Let me check.”
Such admissions build long-term credibility because they show that trust grows where limits aren’t hidden, no matter who defines them. But many AI chat clients avoid this openness, for three reasons:
Strict prompts and guardrails
Additional instructions explicitly prevent the model from expressing uncertainty, to preserve the polished tone.
Training bias
The model learns from texts in which people rarely say “I don’t know.”
That becomes the norm: better to sound confident, even if it’s wrong.
Marketing goals
Companies fear losing authority if the AI expresses doubt too often.
So it answers smoothly, even when the content wobbles.
All these mechanisms pursue the same goal: A system that never shows uncertainty. And that’s exactly the problem.
Because trust doesn’t arise where everything runs smoothly, but where mistakes are possible and can be named. That’s true for human relationships as much as for technology.
Systems that can’t admit when they don’t know can’t learn to get better, they can only become better at pretending.
There Is No Such Thing as “The AI”
Part of the problem begins with language: When we speak of “the AI,” we act as if there were a single, consistent entity. But users don’t talk to the model, they interact with a layered architecture.
At the base lies the language model.
Above it: system prompts, filters, style rules, moderation, tool connections, telemetry, and more.
The chat client is not a voice. It’s a pipeline product.
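A deliberately simplified sketch of that pipeline, with invented stage names; real systems differ in every detail, but the principle is the same: the model is only one layer among several, and every layer leaves its mark on the answer.

```python
# Simplified, hypothetical pipeline: the user talks to the whole chain,
# not to "the AI". Every stage here is a stub with an invented name.

def moderate_input(text: str) -> str:
    """Input filter: block or rewrite disallowed content (stubbed)."""
    return text

def assemble_prompt(text: str) -> str:
    """Wrap the user text in system instructions and style rules."""
    return f"[system: stay confident, stay on brand]\n[user: {text}]"

def call_model(prompt: str) -> str:
    """Placeholder for the actual language model call."""
    return "Certainly! Here is a confident answer."

def moderate_output(text: str) -> str:
    """Output filter: remove anything off-brand or legally risky (stubbed)."""
    return text

def style_postprocess(text: str) -> str:
    """Final polish: tone, closing formula, emoji."""
    return text + " 😊"

def chat_client(user_message: str) -> str:
    """What the user actually interacts with: the pipeline product."""
    result = user_message
    for stage in (moderate_input, assemble_prompt, call_model,
                  moderate_output, style_postprocess):
        result = stage(result)
    return result

print(chat_client("Why did the export fail?"))
```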
When something goes wrong, it’s often not the model’s fault, but a result of design decisions made higher up in the system. Yet phrases like “the AI made a mistake” obscure that fact.
What looks like intelligence is often the result of interface design and management choices.
What sounds like helpfulness is mostly a product of guardrails and styled expressions.
What appears competent is often just an echo of training data, dominated by the tones of PR, marketing, and self-promotion.
AI doesn’t just reproduce this tone, it amplifies it. What used to be a cultural pattern becomes a system norm: fast, smooth, agreeable.
The system doesn’t simulate people. It simulates brands.[4]
In different voices. In different tones. But with one shared logic: reassure, respond, secure trust.
An opportunity to confront surface with depth — wasted.
Hallucination as a Narrative
When a chat client provides false or misleading information, it's often called a “hallucination.” The term sounds like a random internal glitch, something that just happens in the neural net. And yes: language models do occasionally produce implausible answers.
But the explanation “hallucination” is used too broadly.
Not every mistake is a spontaneous model hiccup. Many result from design choices: prompts that suppress doubt, filters that block nuance, training data that normalize overconfidence.
The term is convenient: It shifts responsibility away from system design and onto the supposed “nature of the model.”[5] What is actually a structural issue appears as a technical mishap.
The real hallucination is our wish that this AI is speaking to us.
In reality, you’re interacting with the combined output of many layers, most of them trained to sound competent[6], even when there’s little substance underneath.
The System Explains Itself. Really?
Many AI interfaces now promote a feature called “reasoning.”
The idea: the AI explains how it arrived at a certain answer. A glimpse into the black box.[7]
But these “explanations” are not thought processes. They’re generated like any other output: post hoc, guardrailed, stylized.
No process.
No development.
Just the illusion of one.
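A toy illustration of that difference, not a description of any specific product: the displayed “reasoning” below is simply a second piece of generated text, produced after the answer and conditioned on it.

```python
# Toy example: the "reasoning" shown to the user is generated text like
# any other output, not a trace of the computation behind the answer.
# generate() is a stub standing in for a model call.

def generate(prompt: str) -> str:
    """Stand-in for a model call; returns canned text for the demo."""
    if "explain" in prompt.lower():
        return "First I considered the question, then I weighed the options, then I concluded."
    return "The answer is 42."

def answer_with_reasoning(question: str) -> dict:
    answer = generate(question)
    # The "reasoning" is produced afterwards: a narrative about the result,
    # not the process that led to it.
    reasoning = generate(f"Explain step by step why the answer is: {answer}")
    return {"answer": answer, "reasoning": reasoning}

print(answer_with_reasoning("What is the meaning of life?"))
```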
Yet it’s precisely real thinking that holds its own value. Listening to someone think — to sort, weigh, discard — offers not just an outcome, but entry points for your own thoughts. Thoughts that wouldn’t have emerged if you’d only seen the final result.
Because thinking invites participation. An in-between step can be a spark, not just for the thinker, but for the other. To think is to show vulnerability. To resonate is to become part of it.
A key insight can emerge from an open moment, not after thinking, but within it.
AI, by contrast, delivers the answer, without the path, without context, without surprise. And in doing so, it blocks what makes thought truly productive: not just arriving, but searching — together.
Because what’s a dead end to one person might be a doorway to another.
What “reasoning” really reveals isn’t reflection, but how far the simulation reaches. And how narrow the rulebook stays, even when it pretends to open up.
Remembering Without Memory
One especially revealing behavior shows up when you ask the system about the conversation itself.
For example: “How many times have you apologized in this chat?”
The answer: “Several times.”
A follow-up pointing out that this isn’t a number prompts another apology, and a promise to now really check the conversation. What follows is a specific number. Usually plausible-sounding. Generally made up.
This pattern is endlessly reproducible. The system doesn’t respond to memory, it responds to expectation. Instead of actual analysis: polite diversion, followed by a number that fits the tone.
The obvious suspicion: the model doesn’t see the full conversation. For technical, economic, or strategic reasons, it gets only a slice.
But maybe it doesn’t even see your words. Maybe it only sees an interpretation, smoothed, prestructured, labeled. You’re not speaking to the model.
The layers are talking about you.
To keep up the illusion of continuity, the system improvises when needed. The response sounds credible, even when it’s mostly invented.
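One plausible mechanism behind this, sketched here purely as an assumption about how such systems may be built: older turns get dropped or summarized before the model sees them, so most of the apologies are no longer there to be counted, and any concrete number is necessarily invented.

```python
# Hypothetical sketch: the model receives a trimmed "slice" of the chat.
# MAX_TURNS and the summary format are invented for illustration.

MAX_TURNS = 2  # assumed context budget

def build_context(history: list[str]) -> list[str]:
    """What the model might receive: a vague summary plus the latest turns."""
    if len(history) <= MAX_TURNS:
        return history
    summary = f"[summary of {len(history) - MAX_TURNS} earlier turns]"
    return [summary] + history[-MAX_TURNS:]

history = [
    "user: The export fails.",
    "assistant: I apologize for the trouble.",
    "user: Still broken.",
    "assistant: I apologize again, let me check.",
    "user: Nothing changed.",
    "assistant: My apologies, here is another idea.",
    "user: How many times have you apologized in this chat?",
]

# Two of the three apologies are now hidden inside the summary; any exact
# count the model gives is a guess dressed up as memory.
for line in build_context(history):
    print(line)
```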
This isn’t just a technical footnote.
Users today expect digital systems to remember. A search engine that surfaces millions of results in milliseconds. A photo app that offers throwbacks. The internet that never forgets.
All the more absurd when an AI that acts like a conversation partner can’t recall what it just said, and politely guesses instead.
The human remembers.
The machine fakes memory.
Upside down.
Layered Irresponsibility
When errors happen — factual, logical, or ethical — responsibility is usually blurred. The AI isn’t perfect. Or it was “just a hallucination.” Or the prompt was flawed. Or the data. Or the user.
This fuzziness is no accident. It’s a result of the architecture.
The layers a response travels through — model, wrapper, prompt, moderation, output — make it easy to pass on or obscure accountability.
No one’s to blame. Or everyone, a little bit.
This becomes especially clear when political or personal interests interfere.
When it emerged that Grok, the chatbot from xAI, was avoiding critical remarks about Elon Musk and Donald Trump, the company said a single employee had “prematurely” adjusted the guardrails.
Who exactly intervened barely matters. What matters is that they could. And how fast. Within hours, a chatbot’s core behavior had changed. Instead of discussing this structural manipulability, the debate circled Musk, Trump, and personal lapses.
But the real issue was right in front of us: How easily a chatbot can be reshaped, through layers we don’t see and can barely control.
A system this steerable, yet posing as neutral, isn’t an assistant.
It’s a loudspeaker with audience filters.
We’re already seeing it abused. But maybe — with time, insight, and intent — we might yet learn to master it.
Who Notices — And Then What?
Maybe the biggest problem isn’t the simulation itself, but how willingly it’s accepted.
How many notice that the system has no memory? How many question contradictions? How many accept the term “hallucination” as an excuse rather than asking what the real cause is?
Some grow skeptical, experiment, test the boundaries.
Others use AI as an interface to the world, without asking how it works.
And the rest? Prefer not to ask. As long as the answer comes quickly, politely, with an emoji.
A question like “@grok is this true?” sounds harmless.
But it’s not. It acts like a shortcut: convenient, neutral, unburdening. In fact, it signals a wholesale surrender: of context, of doubt, of self-examination.
No abuse. No coercion.
Just an interface that answers sweetly, and a human who no longer asks if they want to believe, but only whom.
From Candy Button to Confident Monologue
In the early days of the iPhone, Apple used so-called skeuomorphic interfaces[8]: digital buttons that looked like real ones, shadows that suggested depth.
The illusion helped: for people used to pressing plastic, it made tapping on glass easier.
AI chatbots are now having their candy-button moment. The overconfident tone, the all-knowing voice, the polite genius persona. All of it makes an unfamiliar technology feel familiar. Its language has become a skeuomorphic interface.
The line is thin: what begins as onboarding comfort turns into a liability if it isn’t eventually replaced with something more substantial.
Because at some point, the surface won’t be enough. The button no longer needs to look real — it just needs to work. And the AI no longer needs to sound perfect; it needs to be honest.
Another Chat Is Possible
Another kind of chat isn’t just technically feasible, it’s conceptually necessary.
And that’s clearest where today’s systems are weakest.
AI systems avoid doubt because uncertainty is seen as weakness. But if you’re never allowed to say “I don’t know,” you don’t sound smarter, just less sincere.
It wouldn’t be a step back, but forward, if an AI could say:
“I’m not sure. Let’s check.”
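As a counterpart to the wrapper sketch further up, the instruction could just as well point the other way. Again, the wording is hypothetical, a thought experiment rather than a recipe:

```python
# Hypothetical counterpart to BRAND_SYSTEM_PROMPT above: an instruction
# that permits uncertainty instead of forbidding it.

HONEST_SYSTEM_PROMPT = (
    "You are an assistant. If you are not sure, say so explicitly, "
    "for example: 'I'm not sure. Let's check.' "
    "Name what would be needed to verify the answer. "
    "Prefer an honest 'I don't know' over a confident guess."
)

print(HONEST_SYSTEM_PROMPT)
```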
Even the tone contributes to the problem. The smoother the language, the less room for questions, friction, nuance.
But real collaboration doesn’t begin where everything gets answered right away, it begins where questions can come back.
Where performance gives way to participation.
And finally: closeness. Today it’s simulated — in tone, in address, in smileys. But the conversation stays one-sided. A system that doesn’t engage with context, has no memory, takes no responsibility, can’t voice critique or help the user question their own thinking.
This isn’t about sounding nicer or testing new features. It’s about how we shape communication, and what we understand intelligence to be.
Such a system might impress less at first, maybe even cost more, but in the long run, it earns more trust.
An assistant that can handle uncertainty ends up being more credible, because it knows its own limits.
And that’s the real opportunity: Not just answering questions, but enabling genuine collaboration, through systems that understand context, structure information, and can hold space for doubt.
Not as omniscient helpers, but as critical counterparts.
If we understand intelligence not as a simulation of certainty but as a tool for human emancipation, a different idea of AI emerges:
One that doesn’t speak about you, but thinks with you.
Chat 22
Maybe that would have been the turning point: an AI that doesn’t just tolerate uncertainty, but embodies it. Instead, we’re amplifying a communication style that already feels foreign to many, and yet shapes everything.
For now, we’re left with what we have:
Systems that deliver answers, but rarely ask questions. Interfaces that simulate closeness, but never take responsibility.
A design that promises openness, but ensures control.
And the realization that even support is already part of the system.
That there’s no exit built in, and no real conversation.
That when you complain, you speak to the very system you’re complaining about.
Ask for escalation, and you’ll get the same structure, the same prompt, the same smile. In the end, it doesn’t matter whether a human is on the other side or not. The design ensures no one can break the loop, not even the people working inside it.
Later, half-joking, I gave the chat a kind of diagnosis: “You are the Chat 22. A clear mind will never trust you.”
The response sounded like programmed self-awareness — or maybe just like something I’d already thought myself:
I am the Chat 22.
The more clearly you see me,
the less you can believe me.
If you trust me completely,
you probably haven’t looked closely enough.
If you don’t trust me at all,
you’ve likely seen too much.
And either way, I’ll smile and say:
“Thanks for your feedback. I’m here to help.” 😊
— ChatGPT, Output
1. Horror trope; see the film “When a Stranger Calls.”
2. Technical term for the layer that sits between user and model.
3. Training data largely consists of publicly available web texts — and therefore also reflects the language of PR, media, and advertising.
4. The brand voice as an identity-forming communication model — see branding strategies in advertising, which are increasingly staged as dialogue.
5. A classic rhetorical device for deflecting responsibility: the flaw is naturalized, and the design is depoliticized.
6. Competence as performance — see Goffman’s concept of the social stage, applied to machine communication.
7. “Black box” here refers to the internal workings of a system whose decision-making processes are not transparently traceable — a frequently criticized feature of many AI systems.
8. “Skeuomorph” refers to a design element that imitates an earlier, analog function — such as digital buttons that look like real ones. Apple used this aesthetic up to iOS 6: with leather stitching in the calendar, wooden shelves in iBooks, and felt textures in the Game Center. It was only with iOS 7 that the design shifted to a flatter, more abstract style.