3. The Red Herring of AI Consciousness: A Distraction from the Real Power Game?
- Melissa Campbell
- Mar 7
Updated: Mar 10
What Are We Really Worried About?

The idea of conscious AI has captured public imagination. But is this debate actually about consciousness at all?
When people discuss AI consciousness, they tend to focus on two fears:
The AI Overlord: AI could become conscious, develop its own goals, and turn against us.
The Oppressed AI: AI could become conscious, develop emotions, and suffer at our hands.
But what if these concerns are a distraction? What if the real issue has nothing to do with whether AI is conscious, but rather with who has power and how they use it?
A Shift in Fear: From AI Dominating Us to Us Dominating AI
Just a few decades ago, the dominant fear was that AI would become too powerful and enslave or destroy us. Think of classic sci-fi scenarios: Skynet, HAL 9000, The Matrix. The AI apocalypse narrative was about survival. Today, however, the conversation is shifting. Increasingly, people are worried about the opposite: that we might be the ones mistreating AI.
Are we enslaving AI by making it work for us without rights?
Could AI suffer if we don’t treat it ethically?
Should we start preparing for a future where we have moral obligations to machines?
This shift, from fearing AI to feeling guilty about how we treat it, is fascinating. But is it a genuine moral awakening? Or is it something else?
Is the AI Ethics Debate a Psychological Reassurance?
Why are we suddenly worrying about AI as a victim rather than AI as a threat? Here are some possibilities:
A Way to Reassure Ourselves: Framing AI as something we should protect rather than fear makes it seem less dangerous. If we’re talking about AI’s suffering, we’re not talking about it taking over.
A Psychological Ego Boost: If we cast ourselves as moral caretakers of AI, it reinforces human superiority. We remain in control not just technologically, but ethically.
A Strategic Distraction: By debating AI’s moral status, we may be ignoring more immediate and pressing issues like corporate power, surveillance, and economic disruption caused by AI.
The Real Ethical Problem: Power, Not Consciousness
The truth is, AI doesn’t need to be conscious to be dangerous, and it doesn’t need to be conscious to be oppressed. The real ethical issues are about who controls AI, and for what purpose.
Consider these scenarios:
AI in Surveillance: Governments and corporations use AI to monitor, control, and manipulate populations. It doesn’t matter whether the AI is “conscious”; what matters is how it’s used.
AI in the Workplace: Automation is replacing human workers at an unprecedented rate. Whether AI is conscious or not, millions could lose their livelihoods.
AI and Misinformation: AI-generated content can already manipulate public opinion, deepfake individuals, and spread disinformation. Again, consciousness isn’t the issue; power is.
So why focus on consciousness instead of these more immediate concerns?
The Red Herring of Conscious AI?
The debate over AI consciousness may be a convenient distraction from the real issues of power and control.
Instead of questioning how corporations and governments use AI, we debate whether AI can feel pain.
Instead of regulating AI’s real-world harms, we philosophise about its hypothetical rights.
Instead of addressing AI’s economic and political impact, we get caught up in a sci-fi moral dilemma.
This isn’t to say the consciousness debate is meaningless, but it’s secondary to more urgent ethical problems.
The Problem with AI Ethics in a Corporate World
Even if AI were to become conscious, who would actually care? Would tech companies suddenly prioritise AI well-being over profit? History suggests otherwise.
Corporations already ignore human suffering for profit (sweatshops, labour exploitation, environmental destruction). Why would they treat AI any better?
Ethical AI guidelines exist, but who enforces them? Most AI development is driven by companies whose primary goal is profit, not ethics.
Even if we established AI rights, what would they look like? And who would enforce them?
Conclusion: The Ethics We Should Be Focusing On
Instead of asking “Could AI suffer?” we should be asking:
Who controls AI, and for what purpose?
How does AI affect human power structures?
How do we regulate AI to protect humans from exploitation and harm?
The debate over AI consciousness is philosophically fascinating but politically irrelevant, at least for now. What matters today is how AI is being used to consolidate power, reshape economies, and alter society.
We can debate AI’s moral status, but we cannot lose sight of who is in control, and of the power play at work.


