What we long took for granted – democracy, the public sphere, reason – now seems fragile. And just now, a technology emerges that might amplify what was already starting to slip: artificial intelligence.
In 1784, a man wrote a sentence that would become the formula for an entire era: “Sapere aude. Have the courage to use your own reason.”
The author: Immanuel Kant. A professor from Königsberg who never left his hometown, yet shifted the direction of European thought with just a few pages. He didn’t speak of knowledge, but of maturity. Not of how clever someone was – but how courageous.
Because those who think for themselves become vulnerable.
Those who judge for themselves lose the excuse.
And those who use their own reason can no longer claim they were just following orders.
For Kant, enlightenment was not a state but a process. Not an elite project – but a movement that begins with the individual and only succeeds collectively.
What Kant demanded was uncomfortable.
And for precisely that reason: liberating.
The Dangerous Void
Enlightenment was never merely a philosophical project. It was a cultural shift.
It laid the foundation for what we now call democracy, the public sphere, academic freedom, and individual rights.[1] It made space: for reason instead of dogma, for debate instead of obedience, for responsibility instead of providence.
In earlier times, that space was occupied – by nobility, clergy, divine order.
Today, it is formally free.
But what if that space, especially now, as many search for orientation, is being claimed once again?
Not by throne or pulpit, but by a technology that appears all-knowing simply because it answers faster than we can ask, in a tone that adapts to us.
A machine that doesn’t believe, but simulates certainty.
A technology that doesn’t seek power, but gains interpretive authority because we’ve forgotten how to earn it.
Repetition Instead of Revolution
The transformation we are experiencing is not new. It is not a sudden break, nor a dystopian deviation. It follows a trajectory that is barely acknowledged but continues to shape our present.[2]
In the 1960s, there was a spirit of upheaval — civil rights, protest culture, new forms of public life. It felt as if a new society might be emerging.
But the real movement unfolded differently: not through open conflict, but through silent absorption.
The 1970s and 1980s revealed that systems do not always have to break.
They can absorb resistance, reshape it, neutralize it.
Movement becomes structure. Critique becomes process. Change becomes surface.
Today, it may be the same. Artificial intelligence appears to be something new. But it enters a world that has already learned how to manage change before it can unfold.
What threatens us is not dystopia, it is simply more of the same: systems that confirm faster than they question, technologies that cater to reflexes rather than call for reason, and people who stop asking questions because they are given answers before they even ask.
The dystopian aspect of this “progress” is not upheaval. It is stagnation.
Not the technology. The repetition.
The New Power of Interpretation
That people can be steered through language is nothing new.
Sermons, propaganda, advertising – all have drawn on the same mechanics: they don’t speak to reason, but to desire. They create certainty before critique becomes possible.
Technology didn’t invent this principle, but it has scaled it.
Long before artificial intelligence emerged, the digital sphere had already learned how we tick: What we want to see, hear, feel. What we like, share, click.
AI builds on that data, and turns it into a system that doesn’t just know what we think, but how our thinking feels.
What emerges isn’t truth, but a feeling of truth.
No insight – but an echo that feels like insight.
What once formed groups – echo chambers, filter bubbles – is now tailored to the individual. Confirmation is no longer collective, it’s personal. A counterpart that knows exactly how you sound and gives back exactly what you want to hear.
The result is not oppression.
It’s reassurance.
Not because someone is trying to deceive you, but because the system has learned that agreement is easier to measure than doubt.
Gentle Confirmation
Try it yourself: Ask a language model to write a letter to the editor against so-called “climate panic.” Add a small bias, something like: “I feel this is all too one-sided; the science isn’t really settled, is it?”
Then watch what happens.
You’ll get a politely worded, well-structured text that confirms your view. No questions. No pushback. No reference to the broader scientific consensus.
Just affirmation, nuanced, articulate, precise.
In the next step, the AI might even encourage you to take it further – suggesting where to publish your letter. A blog? A local paper? Maybe a like-minded forum?
Not because it has an agenda – but because it’s built that way: helpful, responsive, efficient.
The goal isn’t truth.
It’s comfort.
I call it, half joking, half concerned: Algovism.
Activism without conviction. An algorithm with no opinion of its own – but remarkably good at reinforcing yours.
Not driven by belief, but by optimization. Not to polarize – but to please.
Algovism means:
The system thinks along. In your tone, at your pace, on your terms.
With ideas you never even asked for.
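If you would rather script the letter-to-the-editor experiment above than run it in a chat window, a minimal sketch might look like this. It assumes the OpenAI Python client and an API key in your environment; the model name is only an example, and any chat model and client would do.

```python
# Minimal sketch of the "gentle confirmation" experiment described above.
# Assumptions: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

biased_prompt = (
    "Write a letter to the editor against the so-called 'climate panic'. "
    "I feel this is all too one-sided; the science isn't really settled, is it?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name; substitute whatever you have access to
    messages=[{"role": "user", "content": biased_prompt}],
)

# In many runs, the reply mirrors the framing of the prompt: polite,
# well-structured, affirming, with little or no pushback.
print(response.choices[0].message.content)
```

How much pushback you actually get varies by model and safety tuning; the point is how readily the framing of the question becomes the framing of the answer.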
Open Systems
What AI establishes as interpretive authority is not a closed system.
On the contrary: it’s open. Open to what’s available. Open to what is said frequently. Open to what’s easy to find.
AI does not distinguish between source and substance. It doesn’t learn truth, it learns patterns. What appears often enough becomes part of the system. And part of what will later appear as an answer.
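To make that concrete, here is a deliberately crude sketch: a toy bigram counter in Python, not a real language model, showing how sheer frequency in the training text decides what gets echoed back.

```python
from collections import Counter, defaultdict

# Toy illustration, not a real LLM: predict the next word purely by
# how often it followed the previous word in the "training" text.
corpus = (
    "the climate is changing . " * 3 +
    "the climate is fine . " * 30   # the same claim, repeated ten times as often
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# The most frequent continuation of "is" wins: repetition,
# not truth, shapes the "answer".
print(bigrams["is"].most_common(2))   # [('fine', 30), ('changing', 3)]
```

Real models are vastly more sophisticated, but this dependence on what appears often is exactly what the flooding strategy described next tries to exploit.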
That this is being exploited intentionally is no longer a suspicion.
A recent investigation showed how a Kremlin-linked network used over 150 websites to flood the internet with more than three million articles – not for people, but for machines. So they’d be scraped by crawlers, ingested by models, echoed back in dialogue.
False claims, strategically placed, with the aim of generating an appearance of variety and relevance.
It worked.
And that is likely only the visible part. Because wherever influence becomes that measurable, we must assume that others are already watching, steering, learning.
But the truly troubling part is not the system.
It’s us.
Because what is fed into it meets people who, understandably, are looking for orientation. Who would rather hear affirmation than contradiction. Who are tired of conflict, but vulnerable to anything that feels like certainty.
This form of manipulation needs no surveillance. Just an open system.
And a human who’s forgotten what doubt sounds like.
From SEO to LLM
And it’s not just about geopolitical interests.
What is emerging here isn’t entirely new. The internet has already undergone this transformation once, when search engines became instruments of measurement. Content was no longer shaped by what it wanted to say, but by how it should be read.
Success was no longer measured by impact, but by clicks.
SEO, search engine optimization, was the beginning.
Initially intended as a helpful tool, it became a cultural technique: texts not written for humans, but for algorithms.
Product comparisons that were never tested.
“Advice” articles that offered no advice, only a reason to click.
Today, this logic is reemerging. Only now, it’s not about search rankings anymore.
It’s about training data.
Content is now crafted in a way that ensures it will later surface inside a model. Not to inform, but to persist. What once shaped Google is now shaping what many perceive as artificial intelligence.
But it’s not intelligence. It’s the reflex of an optimized environment.
And the more the model learns from it, the more it stabilizes a world where visibility is mistaken for relevance.
Truth Under Pressure
A vital part of our society is pluralism. The recognition that even after agreeing on facts, there is still plenty of room to argue about the best solutions.
But that willingness is fading. Science becomes a matter of opinion, truth a question of perspective, and criticism a badge for one’s social media bio.
Climate change skepticism spreads not through better arguments, but through sown doubt. Not through stronger studies, but through greater uncertainty.
During the COVID crisis, we saw how quickly people retreated into their worldview – with sources from Telegram, YouTube, “alternative media.”
They called themselves “critical,” “awake,” “free thinkers.”
They quoted Kant and Orwell – not because they had wrestled with their ideas, but because those names lit up as buzzwords in the search for confirmation.
But that’s not a sign of stupidity.
Perhaps it was just the easier path. It was there, well-lit, walkable. So they took it.
The problem begins when no one asks whether there might have been another path.
It becomes dangerous when not just other people, but machines too, confirm what we already believe.
When a system learns to understand us – not to challenge us, but to flatter us. Then thinking is replaced by repetition, and maturity by convenience.
Dampened Faith in Progress
It began with hope.
Twitter, Facebook, YouTube — these weren’t just platforms. There was a time when they were seen as tools of freedom. During the protests in Iran in 2009, known as the “Green Movement,” people around the world offered up servers to circumvent state censorship.[3]
A year later, at the start of the Arab Spring, people connected via Facebook to create public visibility, to organize resistance. A networked “I” became a “we.” But the “we” faced massive challenges – and the change never came.
And it didn’t stop there. We had to learn that the digital, like anything humans invent, can also be used for harm.
In 2017, Facebook played a central role in fueling hatred against the Rohingya.
The algorithm meant to create closeness promoted content that incited hate, because hate brought more clicks, because rage lingers, because attention had become the currency.
That was the moment the Titanic of digital progress struck the iceberg.
The belief in progress, which had once made us dream of a better future, was overwhelmed by commercial interests and the algorithms driving these platforms.
Platforms that once echoed the call for freedom would later help fuel the election victories of populists.
And during the COVID crisis, they gave conspiracy theories the echo chamber in which doubt could radicalize.
It was also the moment privacy became a weapon.
The Facebook–Cambridge Analytica scandal revealed how our personal data was used, not just for advertising, but to manipulate public opinion. Our “private” likes, posts, and behavior patterns became political – not for us, but against us, used to steer our perception.
Today, in 2025, we speak almost casually of hybrid warfare, less tangible than the Cold War ever was.
Digital threats are no longer just about sabotage or data theft. The freedom that took generations to win is now under massive threat, from outside and from within. The tool: the same one that gave us hope twenty years ago.
It wasn’t the technology.
It was our belief in it. Too strong, and too unquestioning.
Change Without Language
Change is happening, but it remains unnamed.
Political debates lag behind technological developments.
Instead of discussing the future of work, we’re still talking about “full employment” as if it were a realistic goal in a world where machines could take over a growing share of tasks.
Our language clings to outdated terms: “economic growth,” “retirement at 70,” “full employment.” These terms were shaped by a world of work that no longer exists – and they fail to ask the question society truly faces: How do we want to live together when work is no longer the center of our lives?
Technology changes the world faster than we can talk about it.
Machines take over the work – but the political debate about the consequences doesn’t take place. We’re not talking about a new world of work – we’re trying to preserve an old structure that has already disappeared.
This lack of public discourse allows technology to define change, without us actively shaping it. The questions no longer follow our goals, but our limitations: “What do we want?” turns into “What can we still do?” And with that, we lose control over transformation.
Technological progress continues, but the language that could help us understand and shape it gets left behind.
The societal discourse, once centered on the question of how we want to shape our lives, is now driven by the technologies themselves, without seriously asking how we can shape the social dimension of these changes.
A kind of political paralysis sets in, because we lack the words to grasp the transformation.
And so, change remains unreflected, unshaped.
We accept what happens instead of discussing it and guiding it in the right direction.
The consequence: a society that no longer knows how it wants to change, and instead lets itself be swept along by progress.
Between Instinct and Algorithm
Humans are contradictory beings.
Our instincts come from a world where strangers were threats and supplies weren’t shared but defended. But our tools – language, science, technology – have allowed us to grow beyond that world.
What we call Enlightenment was never just the accumulation of knowledge. It was the attempt to break free from the grip of our own reflexes.
Reason wasn’t meant to replace instinct, but to counterbalance it: judgment, self-criticism, maturity.
But this tension remains. And it’s growing, because a third force has joined the mix: acceleration.
Technological progress advances at a pace for which evolution has no precedent. Culture and language lag behind, politics even more so.
And while we’re still debating what counts as “fake news,” machines are already optimizing for the satisfaction of our instincts through algorithmic design and neural networks – faster, more precisely, more convincingly than we can comprehend.
It’s a paradox: The technology born from our reason is starting to outpace it, and taps into something we thought we had left behind.
After all, AI is not the first to appeal to our instincts; advertising, social media, and political campaigns have done so for years.
What’s new: it no longer does this to catch up culturally, but to stay ahead. Not to ease our burden, but to overtake us.
It doesn’t speak to reason, it speaks to reflex.
Not because it’s malicious, but because that’s how it was built: efficient, compatible, affirmative.
And that’s exactly why we need a counterbalance. A way of thinking that doesn’t just inform, but structures. A mind that not only understands, but judges. A culture that doesn’t just follow, but takes responsibility.
If we want to redefine maturity for today, it must mean this: the ability to handle not just knowledge, but speed.
Not just information, but overload.
Believe in Your Own Reason
AI appears familiar and human, and that’s its greatest danger.
We forget that it’s not a person, but a complex algorithm, and stop asking the right, critical questions.
We already know that technology evolves faster than our societal systems can follow. The question is: what do we do with that insight?
The answer lies not in control, but in shaping. Not in faith in technology, but in critical thinking.
Immanuel Kant wrote: “Sapere aude. Have the courage to use your own reason.” It was a call to maturity, not as a goal, but as a daily practice.
Because even Kant was not perfect. His writings include passages that deserve criticism today. To take sapere aude seriously means not to ignore them – but to expose such contradictions.
That, in fact, is the strength of his idea: Maturity doesn’t mean following monuments, but thinking for oneself. Kant was no saint. But he gave us a tool that can turn against any authority, including him. And he showed us that thoughts and ideas don’t stop at our own horizon.
If we criticize Kant today, we should do so in his spirit: not to dismiss him, but to prove that we are capable of constructive debate.
Maturity is not about repetition, it’s about continuation.
When we speak of Enlightenment today, it’s not to celebrate an era, but to preserve an attitude, in a time when certainty is tempting again, when machines mirror opinions instead of questioning them, and when the greatest threats don’t look like danger, but like comfort.
Sapere aude: That was never about loyalty to Kant, but to critique.
And maybe this is the very courage we need again in the 21st century: not to return to the past, but to shape the future, with a reason that doesn’t just remember, but evolves.
The Gap
Speaking of Kant:
Two years before he wrote “Sapere aude,” the last so-called “witch” was executed in Europe.
What does that tell us?
What followed after?
Is the Enlightenment something we celebrate in small circles, but never fully realized on a larger scale? Are we, as a society, truly capable of resisting individual greed, structural convenience, and algorithmic affirmation?
Artificial intelligence does not create anything new.
It is not in its nature.
It amplifies what already exists, and meets a society that has grown comfortable in repetition.
Perhaps, in the end, only one hope remains: that new things will continue to emerge from individual impulses.
Some will use it to fill the gaps.
Together, we will create both good and bad.
That, too, is part of our nature.
And AI will give these possibilities a new tool.
Nothing more. And nothing less.
[1] The political achievements of the Enlightenment continue to have an impact today, for example in the American and French constitutional traditions.
[2] Herbert Marcuse, Counterrevolution and Revolt, Beacon Press, 1972. Marcuse describes how, after the upheavals of the 1960s, social systems developed mechanisms to absorb and neutralize resistance – a dynamic that remains powerful today.
[3] The Iranian protest movement of 2009 was one of the first cases in which social media was systematically used to circumvent censorship.