AI expert discusses gap between democratic ideals and current AI development
Ben Goertzel, an AI researcher and thinker, recently shared some pretty serious thoughts on the current state of AI governance. In an interview, he highlighted something that many of us might feel but don’t always say out loud: democratic AI governance is more of a fragile ideal than a current reality.
It’s a tough pill to swallow, I think. We talk about wanting democratic and transparent systems, but Goertzel suggests that our current geopolitical situation makes true global coordination quite unlikely. He’s not saying we should abandon the idea, but rather that we need to be realistic about what’s actually possible right now.
From tools to moral actors
One of the most interesting parts of the conversation was about when AI moves from being just a tool to something more: a moral actor. Goertzel suggests this happens when AI starts making decisions based on its own understanding of right and wrong, not just following our instructions. The signals would include persistent internal goals, learning driven by its own experience, and behavior that stays consistent over time without constant human direction.
For now, he says, current systems are still tools with guardrails. But once we have created a truly self-organizing, autonomous mind, our ethical relationship with it has to change. Treating it purely as an object would no longer make sense.
The problem of training
Goertzel keeps coming back to how the way we train AI today shapes its future behavior. He worries that if models are trained on biased or narrow data, or inside closed systems where only a few people make the decisions, we will simply lock in existing inequalities and harmful power structures.
“To avoid this,” he says, “we need more transparency, broader oversight, and clear ethical guidelines from the start.” This, he suggests, is not something we can fix later. Foundations matter.
Decentralization as a safety measure
This is where things get counterintuitive. Goertzel argues that accelerating toward decentralized AGI might actually be safer than sticking with today’s proprietary, closed systems. Critics often call for a slowdown or centralization of control, but he thinks they underestimate the risks of concentrating power in a few hands.
“Slowing down and centralizing control not only fails to reduce danger,” he explains, “it locks a narrow worldview into the future of intelligence.” Decentralized development, on the other hand, creates diversity, resilience, and shared oversight.
Teaching morality, not programming it
One of the most thought-provoking ideas Goertzel raises concerns how to cultivate moral understanding without simply hard-coding human values. He doesn’t want a system that merely recombines what it was fed. He wants one that can develop its own understanding from its own trajectory through the world.
“Moral understanding would come from this same process,” he suggests, “modeling the impact, reflecting on the results, and evolving through collaboration with humans. Not obedience to our values, but participation in a shared moral space.”
It’s the difference, he says, between a tool with guardrails and a partner who can actually understand why harm matters.
Looking to the future
When asked what success or failure would look like in 10 to 20 years, Goertzel paints two very different pictures. Success would mean living alongside systems more capable than us in many areas, integrated into society with care, humility, and mutual respect. We would see real benefits to human well-being without losing our moral foundation.
Failure, on the other hand, would look like AGI concentrated in closed systems, driven by narrow incentives, or treated purely as a controllable object until it becomes something we fear. That would mean a loss of trust, a loss of agency, and a narrowing of our empathy rather than an expansion of it.
Perhaps the most telling part is his answer on incentives. Right now, he notes, the incentive structure rewards speed, scale, and control. Compassion won’t win by argument alone; it needs leverage. Technically, that means favoring open, decentralized architectures. Socially, it means funding, regulation, and public pressure that reward long-term benefit over short-term dominance.
“In short,” he concludes, “compassion must become a competitive advantage. Until then, it remains a nice idea without power.” It’s a fairly bleak assessment of where we stand, and perhaps a useful challenge about the direction we should take.