The Decisions We Cannot Make
On capitalism, AI, and the systems that shape us more than individuals ever could
Most discussions about AI and the future of work assume one thing:
that human leadership is stable, functional, and worth preserving.
We worry about replacing workers.
We rarely question the people making the decisions.
But what if the real issue isn’t AI taking jobs?
What if the real bottleneck is human decision-making at the top?
This is not a proposal.
Not a manifesto.
Just a thought experiment — an exploration of clarity without comfort.
Let’s walk through it.
The Limits of Human Leadership
Humans struggle with long-term thinking.
Not because we are stupid, but because we are biologically short-horizon creatures.
We respond to:
- immediate rewards
- social status
- quarterly pressure
- fear of loss
- group dynamics
- personal survival
Capitalism amplifies all of this.
The system rewards the CEO who cuts corners today, not the one who protects the ecosystem for the next thirty years.
The shareholder demands “growth,” not resilience.
The board wants a predictable narrative, not structural honesty.
Under these conditions, clarity is punished and comfort is rewarded.
So when someone says:
Let’s build companies on clarity over comfort.
I agree — philosophically.
But structurally?
It’s almost impossible inside the logic of profit-first systems.
And this is where the thought experiment begins.
What If We Replaced Leadership Instead of Labor?
Most people imagine AI replacing workers.
But what if the direction ran upward instead?
What if we automated the decision-making layer — the one most distorted by incentives?
Imagine:
- distributed AI nodes
- each representing a region or community
- citizens feed their priorities, needs, and constraints into the local node
- the nodes coordinate
- and generate policies or decisions optimized for human well-being, not shareholder return
A network of “decision AIs” instead of a hierarchy of executives.
Not a superintelligence.
Not a technocratic overlord.
More like a civic processing layer: a coordination system that evaluates trade-offs without ego, fear, or personal incentives.
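To make the thought experiment concrete, here is a deliberately naive toy sketch of such a layer. Every name in it is hypothetical, and it reduces "well-being" to a simple tally of citizen-stated priorities; it illustrates the shape of the idea (regional nodes aggregate inputs, then coordinate), not a real or safe design.

```python
# Toy sketch of the "civic processing layer" thought experiment.
# All names are hypothetical. Real coordination would need far more
# than a vote tally: constraints, trade-offs, safeguards, legitimacy.

from collections import Counter

class RegionNode:
    """One node per region: collects the priorities citizens feed in."""
    def __init__(self, region):
        self.region = region
        self.priorities = Counter()

    def submit(self, citizen_priorities):
        # Each citizen submits a list of needs, e.g. ["housing", "transit"].
        self.priorities.update(citizen_priorities)

def coordinate(nodes):
    """Merge all regional tallies and rank shared priorities.
    The 'optimization target' here is breadth of stated human need,
    not any single stakeholder's return."""
    combined = Counter()
    for node in nodes:
        combined.update(node.priorities)
    return [item for item, _ in combined.most_common()]

# Two regions feed in their citizens' priorities.
north = RegionNode("north")
north.submit(["housing", "transit", "housing"])
south = RegionNode("south")
south.submit(["healthcare", "housing"])

ranking = coordinate([north, south])
print(ranking)  # "housing" ranks first: three citizens named it
```

The point of the sketch is what it lacks: no quarterly pressure, no status games, no personal survival instinct anywhere in the loop. That absence, not the trivial tallying logic, is what the thought experiment is probing.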
Would it work?
Not in this system — not without transforming the economic rules underneath.
Could it even be built safely?
Only with democratic constraints and transparency we currently don’t have.
Would existing power structures ever allow it?
No.
Not voluntarily.
But as a thought experiment, it exposes the real issue:
The problem is not technology displacing labor.
The problem is how we make decisions,
and who those decisions are optimized for.
Why the Thought Experiment Matters
We don’t do this to dream of technocratic utopias.
We do it to reveal the invisible truth:
Our current system isn’t failing because individuals are bad.
It’s failing because no individual can navigate incentives designed to distort clarity.
AI won’t fix that.
But AI can help us see it.
By imagining different architectures of decision-making,
we recognize how deeply our lives are shaped by a system that rewards short-term comfort over long-term clarity.
This isn’t a call for machine governance.
It’s a reminder that human governance — as it stands — is structurally constrained.
There are decisions we simply cannot make inside the rules we currently live under.
That recognition is the starting point.
Not for handing control to machines,
but for reclaiming control over the systems that shape us more than any individual leader ever could.
A Final Note
This piece is not a solution.
It’s an opening.
A way to see beyond the familiar narratives:
“AI replaces workers,”
“AI destroys jobs,”
“AI threatens humanity.”
Sometimes the more interesting question is the one hidden underneath:
What if the problem is not the intelligence we build —
but the structures we’re trapped inside?
Clarity over comfort does not magically change those structures.
But clarity is where change begins.
And sometimes a thought experiment
is the only safe way to touch the truth.