Right now, many people talk about artificial intelligence as if a leap in innovation were imminent. The hope: AI will solve the things we find too complex, too expensive, or too slow.
Max Reisböck was a trained bodywork technician at BMW.
What he needed wasn’t a big deal. Just a car with more space for his family.
But there wasn’t one.
Not in the catalog.
Not on the roadmap.
Not in the system.
So he took a sedan, cut open the roof, extended the body, and added a tailgate.
In his free time. In a garage.
That’s how the first 3 Series Touring was born.
No assignment. No team. No department.
Just a person with a need, and the willingness to do something that wasn’t officially intended.
At first, the company was skeptical. Then impressed.
Today, the Touring is a given.
Maybe that’s how real progress begins:
Not with strategy, but with a problem. Not with target markets, but with reality.
And not with grand promises, but with the courage to stop leaving what’s wrong the way it is.
Progress or Just Efficiency?
Max Reisböck was a quiet system-breaker. Not because he wanted to criticize the system, but because it had no place for his problem.
He didn’t break rules, he acted outside their logic.
He did what was needed, without asking whether it was allowed.
And that’s rare today.
Because in the corporate mindset, progress rarely starts with a need. It starts with budgets, KPIs, scaling. Not from necessity, but from process. What counts is being compatible, predictable, efficient.
In this logic, innovation often becomes efficiency with a new name. What’s sold as “the future” is usually just an accelerated rerun of the familiar. And the stronger this logic becomes, the better a technology like AI seems to fit.
To illustrate this mindset, let’s look at what I call the touchscreen fetish:
a technology that became a formula, not for its function, but for its symbolism. When smartphones took off, many companies treated the touchscreen as the digital hammer and every problem became a nail.
Touchscreens were installed wherever possible. Often exactly where they made no sense at all.
In fast-food chains, they’re a symbol of efficiency, and they do help streamline processes: fewer staff, fewer wrong orders, fewer unsold items.
The touchscreen becomes a lever to convert the entire system to “on demand.” What’s marketed as acceleration mainly slows down one thing: the experience.
First, we queue at the terminal. Then we wait for pickup.
Exactly where things were once meant to be quick, the process becomes a test of patience, and that’s without even touching on hygiene.
In cars, touchscreens replaced knobs you could use by feel with flat glass that demands full attention.
It used to be a standard driving school question:
How far does a car travel in one second at 130 km/h when you glance at the radio?
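The answer, for the record, is simple arithmetic: 130 km/h is 130,000 meters in 3,600 seconds, roughly 36 meters per second. One glance, one second, about 36 meters driven blind.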
Today, that second is standard. Packaged in glass, design, animation.
Its name: touchscreen.
Ironically, in exactly those areas where companies once led, from in-car voice interfaces to streamlined ordering, they abandoned optimization without resistance, choosing instead to serve the screen.
Progress wasn’t designed. It was imitated.
In pixels, not in principles.
In surface, not in orientation.
Maybe AI fits so well because it does something the system has learned to prefer:
It recognizes what’s already there. It sorts, condenses, optimizes. It doesn’t ask why, and is all the more readily deployed because of it.
But what does that really mean? What is the nature of this technology that so effortlessly fits into the logic of the corporate mindset?
Maybe understanding begins where data ends, and patterns begin.
The Limits of Machine Thinking
Artificial intelligence can now recognize things that long eluded human perception. It detects patterns too complex, too subtle, or too deeply buried to find manually.
That’s impressive, and often useful. Some even call it “intelligent.”
But at its core, it remains: pattern recognition. Always based on what already exists. Always bound to what has happened before.
What it sees was there.
What it doesn’t know stays invisible.
AI doesn’t generate new questions. It doesn’t propose hypotheses that lie outside the dataset. It doesn’t suggest problems for which there are no answers yet.
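To make that concrete, here is a toy sketch in Python, my own illustration rather than a claim about any particular model: a system fitted to past observations answers fluently inside its data, and just as fluently, but wrongly, outside it.

```python
import numpy as np

# "What has happened before": observations of a process on [0, 6].
x_known = np.linspace(0, 6, 200)
y_known = np.sin(x_known)

# Pure pattern recognition: fit a curve to the known region.
coeffs = np.polyfit(x_known, y_known, deg=5)

# A question from outside the data. The model still answers --
# not because it knows, but because it can only extend its patterns.
x_new = 12.0
print(f"truth at x = 12: {np.sin(x_new):+.2f}")              # about -0.54
print(f"model answer:    {np.polyval(coeffs, x_new):+.2f}")  # far off
```

The point isn’t polynomials. Any system trained only on what was behaves this way at the edge of its data: it doesn’t fall silent, it extrapolates.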
And maybe that’s the boundary, hard to grasp because it doesn’t clash with speed, but silently accompanies it.
Over years, decades, maybe longer.
A movement that looks like progress, but ends in a very long dead end. Because as long as humans still dream, they feed the system with what it doesn’t yet know.
But if we stop asking new questions, because the machine gives us so many answers, then progress, too, will eventually stop.
Not visibly. Not suddenly.
But quietly, as the gradual disappearance of the new.
And that’s why its progress, without us, is finite. Not because it fails. But because we might forget what lies beyond what already exists.
That’s not a flaw. It’s its nature.
The Human Gap
AI recognizes what was. It finds patterns, organizes them, optimizes them.
But it doesn’t know why.
It can’t distinguish between what is merely efficient and what truly needs to change. It doesn’t ask whether it’s easing a symptom or addressing a cause.
Because it doesn’t want anything.
Humans can. Or at least: they could.
Not because they have more data, but because they have a sense for what’s missing.
For what is absent, even when it can’t be measured.
For what feels wrong, even when it can’t be explained.
For what is needed, even when it has no name yet.
Humans know lack¹. Not just as a deficit, but as a spark. They suffer, they yearn, they imagine how things could be different. And sometimes that’s enough: Not knowing how, but wanting something that doesn’t yet exist.
What we experience as a mistake, contradiction, irritation, rupture, is just noise for the machine. But for humans, it can be the beginning of something new.
Not every doubt leads to insight.
But without doubt, there is no insight.
Maybe that’s the real gap: The machine calculates, the human dreams.
But if we forget that dreaming is a skill, even dreams will one day seem like errors.
What we call progress has rarely been a single idea. It’s what happens when many people take their own gaps seriously, and together build a new reality from them.
Not calculated.
Not probable.
But willed.
The Origin of the New
Benjamin Franklin didn’t invent the lightning rod to found a startup².
He did it because people were dying, and no one understood why.
What he created was more than a tool. It was a shift in worldview:
Lightning was no longer divine wrath, but a phenomenon that could be understood and redirected.
That’s how real progress begins: Not by working on a solution, but by understanding the problem.
But that’s exactly where institutional logic rarely lingers, a way of thinking that extends far beyond large companies.
It wants impact without disruption, and answers without ambiguity.
And so the new is often treated like the old: plannable, efficient, compatible.
Design Thinking once promised to teach us how to think differently. A method to reframe problems. But what’s left of it when it meets a system that already knows the outcome before the thinking begins?
Hype³ with an empty promise.
Today it’s a format: efficient, compatible, shallow.
You hire coaches, supply Post-its, Sharpies, sometimes even Lego⁴.
Two days of “thinking differently.” With colleagues you normally only see in the cafeteria.
It feels good. Like reinventing the company.
A nice way to end the week, and on Monday, you’re back in the same meetings with the same slides and the same expectations.
Design Thinking was an invitation to enter the problem space. In big companies, it’s purchased without understanding its point.
Because the corporate mindset doesn’t want to think.
It wants to calculate.
It only understands ideas if they are compatible. It only understands change if it’s already been clarified beforehand.
Responsibility in Our Time
There’s no shortage of ideas. And the desire to change things isn’t rare either.
But who actually decides?
In many companies, everyone wants a say, but hardly anyone wants to be accountable. Decisions are prepared, managed, pre-sorted, until what’s left are fields of yes-or-no questions⁵.
No open options. Hardly any real alternatives.
Responsibility sounds great, as long as no one has to carry it. Especially not when it gets uncomfortable: decisions without certainty, with long-term impact but short-term risk.
Today, most corporate leaders don’t decide direction.
They decide pace. Wording. Timing.
In KPIs, reports, and markets that respond instantly, but rarely listen.
CEOs are usually not owners. They’re employees. Evaluated. Replaceable.
Their job is often not to disrupt, but to stay compatible.
To mediate: between investors and staff, between innovation and risk, between speed and substance.
Before neoliberalism’s rise, the financial sector served the real economy. Today, it moves multiples of it. Capital was once fuel. Today, it sets the pace, and measures success in quarters, not decades.
And that pace reaches further than we think. It shapes decisions long before they’re made. Responsibility is no longer denied, it’s structurally avoided.
And that’s the danger — especially with AI.
Anyone making decisions on a quarterly clock will be tempted to treat AI not as long-term infrastructure, but as a short-term lever. A tool for efficiency.
A savings program in shiny packaging⁶.
But artificial intelligence is no add-on. It’s a fundamental decision. A system that cuts deep into processes.
Whoever thinks operationally here risks strategic dependency:
on vendors with opaque incentives, on models whose inner logic is not only obscure, but also potentially subject to external manipulation⁷.
On platforms that present themselves as helpers, but quietly replace infrastructure.
What we need isn’t just tech deployment, but tech policy.
A plan that goes beyond the next release. An idea of what AI should mean for our economy in five or ten years.
Not as a showroom project, but as strategic architecture.
AI as Transformation
AI doesn’t have to replace us. It can strengthen us.
Not by making us more efficient. But by thinking together.
Creating. Expanding.
Co-creativity, not as a method, but as a mindset.
Because meaning doesn’t emerge from calculation. It arises where people ask questions, allow doubt, start over.
If that stops, only repetition remains.
At the same time, AI can relieve us. Not through convenience, but through simplification. It can help clear away what weighs down work: Documentation. Reporting duties. System maintenance. Everything that’s mandatory, but never inspiring.
Not what people were hired for, but what consumes their time.
If these layers thin out, space opens up.
For meaning. For orientation.
For an economy that not only scales but asks: To what end?
Our Hope, Our Promise
Max Reisböck didn’t want to launch an innovation project.
He just wanted a car that didn’t exist.
So he built it. In his free time, in a garage. And then brought it to BMW.
His direct supervisor immediately recognized what had been created, and set everything in motion.
On the second day, the CEO came to the workshop in person.
He saw the car. Reacted emotionally. Shook Reisböck’s hand.
That gesture expressed more than approval. It was resonance. A moment when the system — for a brief second — was open to something it hadn’t anticipated.
Today, Reisböck might have a digital assistant.
He could generate sketches, calculate materials, simulate designs. His scope would be broader than ever.
But maybe today, he’d have no one to talk to.
No department responsible for what has no name yet.
Maybe his job would’ve already been rationalized away.
And maybe no one would be left to say: “Well done.”
AI will bring change. No doubt about that.
But whether it moves us forward depends not on how powerful it is, but on how we use it.
Not just for optimization. But for discovery.
Not just for efficiency. But for possibility.
If we truly mean what we write in mission statements — responsibility, creativity, empowered employees — then AI shouldn’t demand less of it.
But enable more of it.
Maybe even tomorrow, real progress won’t begin with a dataset. But with a person who sees something that’s missing. And a system curious enough to listen.
Not systems that know everything.
But systems ready to learn something new.
Notes

¹ Lack as a driver: It is not the availability of resources but the gap within the existing that generates innovation; cf. Ernst Bloch, The Principle of Hope.
² Franklin published his theory of lightning in 1750. The first public demonstration of a lightning rod was in 1752. His experiments combined science, protection, and politics.
³ Design Thinking as a method thrives on an open problem space, but is often misunderstood as a tool for validating pre-made decisions.
⁴ “Serious Play”: originally conceived as a creativity technique, today often a symbol of simulated innovation readiness.
⁵ In complex systems, decision-making processes tend to become binary, for the sake of controllability, not clarity.
⁶ Artificial intelligence is often marketed as innovation, but implemented as cost reduction.
⁷ “Data poisoning” refers to deliberate attempts to manipulate the training or usage data of AI models in order to influence their behavior. In the case of language models, this can involve the mass publication of strategically formulated texts designed to infiltrate the system and shape future outputs.