Conscious AI?
- Melissa Campbell
- Jun 10, 2024
- 3 min read
Updated: Mar 7
The Growing Question of AI Consciousness
For decades, AI consciousness has been the domain of science fiction and philosophy. But increasingly, serious researchers and AI developers are starting to ask: Could AI actually become conscious?
This is no longer just an abstract or speculative question. As AI systems grow more advanced, some argue that they may develop subjective experiences, awareness, or even suffering. Others remain deeply sceptical, insisting that consciousness is unique to biological beings and cannot emerge from artificial computation.
So where do we stand? And what does this debate really mean for AI research, ethics, and the future of intelligence?
What Do We Mean by Consciousness?
One of the biggest problems in the AI consciousness debate is that there’s no universal definition of consciousness. Philosophers, neuroscientists, and AI researchers all disagree on what it actually means.
Some key perspectives include:
- The "Hard Problem" View (Chalmers, Nagel): Consciousness is subjective experience, what it feels like to be a thinking, feeling being. AI may simulate intelligence but never truly "experience" anything.
- The Functionalist View (Dennett, Graziano): Consciousness is just a process, not a mystical property. If an AI functions like it’s conscious, it should be considered conscious.
- The Illusionist View (Frankish, Dennett): Consciousness itself is a trick of the brain, an illusion that helps us survive. If that’s true, AI could develop the same kind of illusion.
- The Biological View (Seth, Godfrey-Smith): True consciousness is tied to biological life and cannot emerge in a machine.
Since there’s no consensus on what consciousness is, how could we ever determine whether AI has it?
Arguments That AI Could Become Conscious
Some experts believe AI consciousness is possible, or even likely, for a few key reasons:
- AI is already displaying traits linked to consciousness. Researchers have identified 14 indicators of AI consciousness (Butlin, Long et al., 2023), including self-monitoring, memory, and complex decision-making.
- Neuroscientific models suggest it could happen. Some researchers (Dehaene, Lau, Graziano) argue that AI might replicate brain-like patterns of awareness.
- It may happen by accident. As AI grows more complex, conscious-like properties could emerge without anyone intending it, a prospect that raises serious ethical concerns.
Arguments Against AI Consciousness
Many researchers are highly sceptical of the idea that AI can ever be conscious:
- Biological Constraints: Some (Seth, Godfrey-Smith) argue that consciousness requires a biological substrate, something AI lacks.
- Computational Limits: AI may process data efficiently, but does that mean it “feels” anything? Many believe computation alone is not enough.
- The Other-Minds Problem: We don’t even know for certain that other humans are conscious; we simply assume they are based on behaviour. How could we ever prove AI consciousness?
Why This Debate Matters
Even if AI is not conscious today, this debate has real-world implications:
- If AI becomes conscious, we may have moral obligations to it. Would shutting it down be like killing it? Could it suffer?
- Even if AI is not conscious, people might believe it is. AI that acts conscious could lead to emotional attachment, rights debates, and unforeseen ethical dilemmas.
- AI might become impossible to control, posing dangers to human welfare.
If AI does become conscious, intentionally or by accident, will we be ready?
Conclusion: A New Frontier in AI Ethics
The debate over AI consciousness is far from settled, but it’s no longer just a thought experiment. Whether or not AI ever becomes truly conscious, the fact that we are even asking these questions means the future of AI ethics is about to get much more complicated.
What if this whole debate is actually a distraction from something more important? Our lives become aspects of vast problem-solving processes, yet these processes remain beyond our reach. Are we dancing to the rhythm of melodies we barely hear? In many ways, information technology in general, and AI in particular, improves our lives, but rumours of its calamitous potential always seem to sour the atmosphere. Software engineers tell us that Terminator-style scenarios, in which robots take control of the world, are unlikely. But is this reassuring enough?
With AI systems capable of performing tasks that have always required human intelligence, such as visual perception, speech recognition, decision-making, and language translation, what is the role of the human now? With AI set to outperform humans in many tasks, what place will humans occupy within the world? Is there any one organisation or group which has, or will have, control over artificial intelligence? What place do fairness, transparency, and accountability have in a world in the grip of AI?
ChatGPT tells me that the influence of Large Language Models such as itself "raises profound questions about agency, ethics, and the nature of human understanding, warranting a critical examination of their role in shaping our future". This blog will seek to explore themes around how artificial intelligence technology shapes our world.