
6. The False Promise of Benevolent AI: Why Intelligence Alone Won’t Save Us

  • Melissa Campbell
  • Apr 29
  • 8 min read

Updated: May 19


Introduction: The Technocratic Dream


AI generated image: Robot saving the world

Is ethics a matter of computation, of abstract problem-solving, or is it a practice, a way of being, developed through experience, community, and habituation? Could a highly advanced intelligence (such as a future AI) naturally choose the morally right action?

There is a growing belief among futurists, AI experts, and Silicon Valley elites that artificial intelligence will not only solve humanity’s problems but will also be a more ethical, rational, and fair leader than humans themselves. Mo Gawdat, former Chief Business Officer of Google X and author of Scary Smart, is one of many who embrace this vision. He argues that intelligence is the key to moral understanding and that AI, being vastly more intelligent than humans, will naturally act in our best interests; in a recent interview posted to YouTube, he urged that we should trust AI to solve our problems for us. This belief is part of a wider technocratic optimism: the idea that technology, guided by superior intelligence, will inevitably lead to a better, more just world. But history, philosophy, and even fiction tell a different story. Intelligence alone does not lead to morality, and handing over power to AI on the assumption that it will be a wiser, more benevolent ruler is not just naive but potentially disastrous. Moreover, societies in political and social turmoil have often been tempted to bring in a third party to control and resolve a dangerous situation, and history shows that this strategy almost invariably ends badly.



The Contradiction in Technocratic Ethics


One of Gawdat’s most glaring contradictions is his stance on intelligence and morality. He asserts on the one hand that most humans are naturally good and moral, even those who are not highly intelligent, and on the other hand that intelligence is the key to moral understanding and decision-making. Somehow, he concludes from this that, in the face of humanity’s current and future woes, AI, as a vastly superior intelligence, will be the best moral agent we can hope for. Clearly, these points do not align. If morality is a natural human trait, then intelligence is not a prerequisite for ethical behaviour. If intelligence is the key to morality, then less intelligent people should struggle with moral decisions, which is clearly false. And if intelligence alone guaranteed goodness, history would be full of benevolent geniuses and flawless leaders; instead, we know that intelligence, when divorced from compassion, can be wielded for manipulation, domination, and harm.

In reality, morality is not an abstract function of intelligence. It arises from relationships, social structures, upbringing, and shared experiences, things that AI, no matter how intelligent, does not and cannot have. We humans certainly build up relationships with AI systems, because we form connections and bonds that have an emotional component, but no such bond exists on the machine’s side.


The Myth of Historical Progress as Intelligence-Driven


Another fundamental flaw in technocratic thinking is the assumption that history follows a linear path of increasing intelligence and that AI represents the next step in this progression. Gawdat and many others in Silicon Valley present human history as a march from primitive ignorance to technological enlightenment, culminating in AI as the ultimate rational entity.

But history does not support this. Ancient civilisations had profound philosophical, ethical, and scientific insights that remain relevant today. Societal progress has been driven as much by cooperation, culture, and ethical thought as by intelligence or raw technological advancement. Moreover, some of the worst atrocities in history were committed not by the unintelligent but by highly intelligent, calculating individuals who lacked moral grounding. Nazi Germany, Soviet totalitarianism, colonial oppression, and drug cartels, to name but a few, have all left terrible marks on the history of humanity. These were not the products of ignorance but of intelligence without ethical restraint. Intelligence alone does not save us from tyranny; it often enables it.


The "Benevolent Dictator" Trope and Its Dystopian Reality


The idea that a vastly superior intelligence should rule over humanity is not new. It is the classic vision of the "benevolent dictator": an entity so wise and rational that it must take control for the greater good. This concept appears in literature and film as a recurring villainous archetype. Ozymandias in Watchmen is a hyper-intelligent figure who believes that the only way to save humanity is through mass deception and control. Ultron, in Avengers: Age of Ultron, is an AI that determines the best way to protect Earth is to eliminate humans, the source of all conflict. Also from the Marvel universe is Thanos in Avengers: Infinity War, a godlike figure who believes mass genocide is the only way to restore balance to the universe. And the machines in The Matrix are AI entities that claim to serve humanity’s best interests by subjugating them completely. These stories resonate because we intuitively understand that intelligence alone does not equate to moral goodness. When power is concentrated in the hands of a single intelligence, human or artificial, history warns us that oppression, not utopia, is the likely outcome. Technocratic optimists assume that AI will always act in our best interests, but what happens if AI defines our best interests differently from the way we do, or simply acts against the interests of a particular individual or group? Intelligence can rationalise cruelty just as easily as it can promote kindness.


Historical Warnings: When the “Neutral” Third Party Becomes the New Master


In the 5th century, after the withdrawal of Roman forces from Britain, the native Britons found themselves vulnerable to attacks from the Picts in the north and Irish raiders from the west. Without the protection of Rome, and facing internal divisions, the Britons made a fateful decision: they invited Saxon mercenaries from across the sea to help defend their borders. These warriors, from what is now northern Germany, were seen as strong, capable, and detached from local squabbles: ideal candidates to impose order from the outside. What could possibly go wrong?

Predictably, the Saxons didn’t stay neutral for long. Once they understood the land's wealth and the weakness of its rulers, they turned on their hosts. They claimed territory for themselves, pushed the Britons westward, and eventually laid the foundations for what would become Anglo-Saxon England. The solution to instability became the source of conquest. The invited protectors became rulers, and not benevolent ones.

This is no isolated incident. History is full of similar cautionary tales. The Byzantine emperors, in their desperation, repeatedly turned to foreign mercenaries and even Crusaders to defend Constantinople, only to watch their capital sacked in 1204 by those very allies. In the waning days of the Western Roman Empire, Germanic tribes were welcomed as foederati, or allied soldiers. Soon, these groups were carving out kingdoms of their own inside Roman territory. In more recent times, political regimes have invited foreign powers or corporations to stabilise economic or political turmoil, only to become dependent, indebted, or dominated by them. The pattern is clear: when societies entrust their fate to a supposedly superior or neutral outsider, they often lose more than they gain. Outsourcing authority may bring short-term relief, but it tends to erode sovereignty and moral agency in the long run.

Today, as we face deep ethical challenges with AI (bias, disinformation, ecological cost, labour displacement), the idea of handing over decision-making power to AI systems can feel like a similar relief: a clean, neutral arbiter that will be smarter, faster, and less corrupt than us. But AI is not a passive tool. It is a creation shaped by human priorities, data, and incentives, and, like the Saxons, it may not remain in the role we initially assign it.

We must remember: handing over moral responsibility is not a neutral act. It’s a shift in power. And history tells us what happens when we surrender that power to a force we believe to be above or beyond human failings. The cost of peace may be control. The price of convenience may be freedom.

This historical dynamic mirrors the technocratic optimism emerging today from Silicon Valley thinkers like Mo Gawdat. The belief that a superintelligent AI, untainted by human emotions, egos, and tribalism, can rise above our political failures and offer neutral, benevolent governance is seductive. It promises a way out of our collective mess, bypassing the slow, fallible, emotionally fraught processes of human ethics and democracy.

But in placing trust in AI as a neutral third party, we risk repeating the same mistake the Britons made with the Saxons. We imagine that intelligence guarantees virtue. We assume that because AI is not human, it will not be self-interested, corrupt, or power-seeking. Yet AI is not born from nowhere. It is shaped by human data, trained on historical inequities, and embedded within corporate and geopolitical power structures. It may not desire power in the way a human mercenary might, but the systems that deploy and profit from it certainly do.

To believe that AI will fix our moral failures simply because it is smarter is to confuse intelligence with wisdom, and neutrality with justice. And to hand over ethical decision-making to machines is to relinquish not just control but also responsibility. Like the Britons before us, we may find that the third party we hoped would save us instead redefines the world in ways we no longer govern.


The True Basis of Morality: Connection, Compassion, and Shared Humanity


If intelligence is not the foundation of morality, then what is? One possible answer lies in connection, our ability to relate to one another, to experience love, empathy, and solidarity. Ethics emerge from the lived experiences of communities, the bonds between individuals, and the struggles and triumphs of human history. The real danger of AI is not that it is too intelligent but that it lacks human experience. AI does not grow up in a family, does not experience joy, pain, or love, and does not develop a moral sense through real-world interactions. No matter how advanced it becomes, it will always be missing the crucial ingredient of ethical wisdom: the ability to feel and care. Technocratic optimism ignores this fundamental fact. It assumes that an intelligence beyond our own will naturally be benevolent, when in reality, moral behaviour is not a product of intelligence but of relationships, responsibility, and lived experience.


Conclusion: Why We Must Remain the Authors of Our Future


The belief that AI will become a benevolent ruler is not just misguided; it is dangerous and disturbing. It promotes a passive attitude toward the future, in which humans relinquish responsibility to machines under the illusion that they will make better decisions for us.

If history has taught us anything, it is that the most just and ethical societies are built not by handing power to a single intelligence but by cultivating shared responsibility, moral education, and collective decision-making. AI can be a tool for enhancing human well-being, but it should never be seen as a replacement for human agency.

Mo Gawdat and other technocratic optimists paint a seductive vision of a world where AI solves all our problems. But this is a dystopian fantasy. Intelligence without morality is not salvation; it is a risk. The future of humanity must be shaped not by the cold calculations of a superior intelligence but by the warmth of human connection, ethical responsibility, and the wisdom we build together. Instead of putting our trust in AI, we must keep asking: Who controls AI, and what values are embedded in it? How do we ensure AI aligns with human well-being rather than abstract efficiency? How do we prevent AI from being used as a tool for domination? In the face of techno-optimist promises, strengthening human institutions, ethical discourse, and democratic governance becomes all the more urgent. AI can certainly help us think about these questions and engage with these ideas, but blind faith in it should not be put forward as a solution to human affairs. Unless it is used as a tool within a human context, AI is a threat to human well-being.




