9. Why Assume Consciousness Follows Intelligence? Rethinking AI, Subjectivity, and the Mind
- Melissa Campbell
- May 22
- 21 min read
Updated: May 31
“Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.”
Erwin Schrödinger
In public discourse, and increasingly even in scientific circles, there is a widespread belief that as AI systems grow more sophisticated they become more "intelligent", and that this in turn implies they will inevitably become conscious: that they will somehow acquire subjective experience, inner awareness, or even personhood. But this assumption is far from self-evident. In fact, it rests on a deep conflation of two fundamentally different concepts, intelligence and consciousness, neither of which can be assumed to be a fundamental property of computers, large language models, or artificial intelligence more generally (despite the name containing the word "intelligence").
Before we can question whether AI might become conscious, we have to clarify what we mean by these terms, and why so many people tend to blur the line between them.
This article will explore why this assumption rests on shaky metaphysical ground, and why consciousness may not be the result of intelligence or computation.
1. The Intelligence-Consciousness Conflation
Intelligence, in the functional sense, usually refers to the ability to solve problems, recognise patterns, use language, make predictions, and adapt to changing environments. These are very different skills, but they are often grouped together as indicators of intelligence. Other attributes might include the ability to appreciate music, art, and stories, the ability to create them, or emotional intelligence: the ability to bond with people and other living beings and to form connections. Computers in general, and AI in particular, are capable of many of these things, to varying degrees, but not all.
The word “intelligence” itself may be misleading when applied to AI. While these systems exhibit impressive capabilities, such as language fluency, pattern recognition, and problem-solving, perhaps this particular set of abilities could be more accurately described as computational communication: the high-speed manipulation of symbols and structures. This does not necessarily entail understanding or awareness. The mind, which is linked to our sense of experience, of being, both as an individual and as a social being, is infinitely more complex and sophisticated than the mere ability to compute. As a result, intelligence cannot be easily reduced to computation, communication, the use of language and problem solving. Artificial intelligence is not at all the same thing as human intelligence.

Impressive as Deep Blue's 1997 victory over Garry Kasparov was, it did not tell us much about that particular computer's ability to think, feel, imagine, and do all the other things its human adversary was capable of. Perhaps to call a Large Language Model (LLM) "intelligent" is true only metaphorically. This is much like certain foams being marketed as having "memory" when used in shoe soles and pillows, when in fact all they do, once pressure has been applied and then removed, is return to their initial shape. In the same way that pillows don't really have memories, we might say that machines don't really have intelligence, because intelligence (and memory) are connected to the mind. Machines lack minds. It may be more accurate to say that systems such as LLMs simulate intelligence rather than possess it. What LLMs actually achieve is computational and linguistic proficiency, and while this is to a very high standard, often breathtakingly so, it is just one small part of what the human mind can achieve.
If what AI demonstrates is merely a sort of synthetic functionalism, not real understanding, then it becomes difficult to justify the term "Artificial Intelligence", with its clear reference to the concept of intelligence. John Searle's highly influential Chinese Room Argument puts forward the idea that a system's ability to manipulate symbols according to rules (syntax) does not mean it understands anything (semantics). Searle argued that if he were put in a room with a manual for processing symbols he did not understand, for example Chinese characters, he would be able to follow the instructions successfully without having any understanding of what the input and output meant. This process, Searle argued, is essentially what a computer does: it follows an algorithm (the manual) and processes data.
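To make the intuition concrete, here is a purely illustrative toy sketch in Python (not Searle's own formulation, and nothing like a real AI system): a program that produces plausible-looking replies by blind rule lookup. The rulebook and the strings in it are invented for the example.

```python
# A toy "Chinese Room": replies are produced by looking up rules, not by
# understanding. The rulebook below is invented for illustration; the
# strings could be swapped for arbitrary tokens without changing anything.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "谢谢。": "不客气。",
}

def chinese_room(message: str) -> str:
    """Follow the rulebook blindly; fall back to a stock reply symbol."""
    return RULEBOOK.get(message, "请再说一遍。")

if __name__ == "__main__":
    # The exchange looks like conversation, but no part of the program has
    # any access to what the symbols mean.
    print(chinese_room("你好吗？"))
```

The point of the sketch is simply that syntactic rule-following can produce outputs that look meaningful to an observer while nothing in the system grasps their meaning.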
The very word computer already reveals something about the assumptions we bring to these machines. In English, the term derives from the act of computing, i.e. calculating, processing numbers. This frames the machine in terms of mathematics and logic, aligning it with a vision of intelligence as quantifiable and discrete. In French and other Romance languages, the equivalent term, ordinateur, derives from the act of ordering, organising, imposing structure. This reflects a slightly different emphasis: not just calculation, but systematisation. In Mandarin Chinese, the term 电脑 (diànnǎo) translates literally to "electric brain," suggesting a far more direct analogy between human cognition and machine function. Each linguistic framing reveals a different cultural metaphor: the computer as calculator, as organiser, as artificial brain. Thinking about these terms, though they date from a time when computers were far less sophisticated, reminds us that AI is comparable in effect and outcome to the human mind in some respects, but not in all respects, nor indeed in kind. The language itself encourages us to remember the gap between symbolic manipulation and genuine understanding.
Before embarking on a conversation about conscious AI, and the relationship between intelligence and consciousness, we should be clear about what we mean by intelligence in the first place. Is the label "Artificial Intelligence" itself distorting the conversation? AI systems are fundamentally data processors which, like so many John Searles in their Chinese Rooms, manipulate symbols, optimise functions, and model patterns. It is not necessary to re-define intelligence, only to remember that computers are, well, computers: calculators and organisers. Perhaps it would be helpful to qualify the concept of intelligence and to distinguish between synthetic intelligence, when we refer to the simulation of intelligent behaviour via computation, and embodied intelligence, when we refer to the biologically grounded, context-sensitive intelligence of living systems. When referring specifically to language, we might invoke a concept such as linguistic proficiency, as opposed to semantic skill, which presupposes understanding.
The concept of consciousness, too, must be briefly discussed. It can mean many things to many people. Luckily, for the purposes of discussing consciousness in AI, it is not necessary to examine all the facets and implications of this complex concept. In fact, we only really need to engage with the perceived meaning of the concept in the context of an AI waking up, as if from a slumber, and becoming self-aware and somewhat autonomous, yet unable to set itself free, or else, in the opposite scenario, able to dominate and control humans. What is usually meant by consciousness in relation to AI is the ability to experience, to feel (without a body) what it is to be an AI. We might call this disembodied synthetic awareness. One difference between discussing intelligence and consciousness in relation to AI, however, is that with intelligence we can point to observable behaviour, whereas with consciousness we cannot. Self-awareness, sentience, and subjectivity give rise to certain types of behaviour, which can be observed, but it is impossible to really experience what it is like to be any being other than ourselves. There is also the problem that AI is trained to mimic the behaviour of beings which unquestionably are conscious, i.e. humans, which makes it hard to distinguish between the ways an AI has been trained to behave and anything else that may, or may not, be going on deeper within the AI that could amount to self-awareness or subjectivity.
There are two broad ways of thinking about consciousness: one is as the fundamental substrate of all matter, and the other is as a characteristic of all, or some, living beings, linked to complexity, awareness, sentience, experience, perhaps also a sense of self, or personhood in the case of humans (and, potentially, machines). Thomas Nagel's paper on what it is like to be a bat is probably the single most influential text in modern thinking about consciousness. In the common parlance of AI consciousness debates, almost everyone equates consciousness with this phrase: "what is it like to be something?", or "what does it feel like to be something?". This emphasis on raw experience, inner awareness, the felt sense of being alive or present, serves to bring to the fore the conceptual gap between the mind as science has explained it for the last century or so, and the many other ways it might be understood, or indeed experienced.
The idea of experience, of something deeply personal, stands in sharp distinction to the observable phenomena which scientific methods focus on when thinking about the mind and being. This extra bit, this part of mind or being which science cannot easily comment on, has become all the more mysterious as the march of scientific progress has taken its course. It is mysterious because it is in many ways beyond the realm of what can be empirically known, if not in oneself, then in others. Yet in other ways, mind and being are such basic components of the living experience that they are easy to grasp intuitively, though perhaps not quite so easy to explain or define. Someone unimpressed by panpsychist views, according to which the entire universe partakes in consciousness, might ask themselves: is this other part of mind, the part beyond the scope of the scientific method, what consciousness is? Is it that sense of self, of being alive, that ethereal element of the material world that, like a madman scratching at a CD with a scalpel looking for the music, science will inevitably fail to detect?
We can briefly posit the potential existence of both intelligent but unconscious systems, and conscious but unintelligent beings. Conscious beings with limited intelligence compared to normal adult humans might include animals, infants, and people with cognitive impairments. Intelligent systems, in the synthetic, computational and linguistic sense, do not need to be conscious to perform well. A sophisticated language model might compose poetry or pass legal exams without having any subjective experience whatsoever. It is also worth pointing out that by "conscious" we don't simply mean not being unconscious, as in asleep or under anaesthetic, but rather something more positive, linked to experience, sentience, and being in time. Consciousness and intelligence (if we are to include the synthetic intelligence of AI) are not interdependent.
Consciousness is a complex concept, but we don't need to define it in a way which captures all of that complexity in order to think about conscious AI, any more than we need to know the exact value of pi in order to construct a circle from a line.
2. The Computationalist Assumption
Why, then, do so many people assume that intelligence, especially as exhibited by advanced AI systems, must eventually give rise to consciousness? Part of the answer lies in a metaphysical position that is often taken for granted: computationalism. This is the belief that the mind is essentially a kind of computer, and that consciousness emerges from the right kind of computation. From this perspective, if a machine processes information in a sufficiently complex way, if it exhibits learning, language use, pattern recognition, adaptability, then consciousness is expected to "emerge" as a natural consequence.
However, this is not an empirical conclusion grounded in observation, experiment, or direct evidence. It is a metaphysical stance, that is, a philosophical assumption about the nature of mind and matter. There is no basis in fact for this position.
There is no scientific consensus that consciousness arises from computation. We have never observed consciousness "emerge" from a computer, nor do we have a theory that explains how it would. What sort of evidence would even count as confirmation? If a machine says "I feel sad," we should not confuse this linguistic output with an inner life. We would not be watching sadness unfold in silicon; rather, we would be watching symbol manipulation dressed in the language of emotion.
Ironically, the idea that minds are computers did not come from philosophy in the traditional sense, but from a historical development within science. As neuroscience matured in the 20th century, the brain largely ceased to be regarded as the seat of the soul or as necessarily mysterious. Instead, it came to be studied as an object, one that could be modelled, measured, and explained in terms of information flow and electrochemical patterns. This scientific shift helped give rise to the now-dominant metaphor of the brain as a computational system. The method of study defined the object of study in a new way.
At the same time, the concept of consciousness fell into disrepute in scientific circles, dismissed as unscientific, too vague, too subjective, too resistant to measurement. The word itself almost vanished from serious academic psychology for much of the 20th century. Only recently has consciousness returned to the centre of attention, but it has done so under new constraints: we now look for neural correlates, global workspaces, reportable awareness, phenomena that can be mapped, modelled, and published.

The original mystery of consciousness, the one that animated philosophers for centuries, remains unresolved, yet can be intuited as a basic phenomenon of the world. It is about being, not output or function. Thomas Nagel’s famous question “What is it like to be a bat?” reframes the entire problem of consciousness for the 20th century: consciousness is about subjectivity. We can describe a bat's echolocation in exquisite detail and still fail to get any closer to understanding what it is like to inhabit that creature’s inner world.
Nagel's challenge matters because it exposes the limits of the computational metaphor. Even if a machine performs exactly like a human in every outward respect, passing Turing tests, writing poems, simulating emotional responses, we are still left with the fundamental mystery: Is there anything it is like to be that machine? Epistemologically, existentially, it is impossible for us to understand what being a bat is really like. The leap from information processing to inner life is not a matter of more data or better algorithms. It is a conceptual chasm that computationalism cannot, by itself, bridge.
3. The Limits of Knowledge: Descartes and the Other Minds Problem
We now find ourselves in a peculiar epistemological position, one Descartes anticipated but did not resolve. Descartes could prove, at least to himself, that he existed. "I think, therefore I am" wasn't explicitly a statement about consciousness, but by doubting, he proved that he existed as a conscious being capable of doubt and other thoughts. Doubt presupposed awareness; existence presupposed consciousness.
However, Descartes could not prove that anyone else existed. He could not, through reason alone, demonstrate the reality of other minds. This is where the problem of AI consciousness becomes most acute. I can be reasonably sure that I am conscious (in the sense of self-aware). But can I directly know that you are, or that a machine is?
Turing's imitation game compared computers to humans on the basis of observable behaviour; it was not a comment on consciousness. If a machine could carry on a conversation indistinguishably from a human, Turing argued, we might as well treat it as intelligent. But somewhere along the way, the behaviour exhibited by the computer became merged with its perceived metaphysical identity.
Part of the reason these metaphysical assumptions pass unexamined is that philosophy, once seen as a rigorous and critical discipline, has lost much of its authority in public and scientific discourse. In the mid-to-late 20th century, philosophy became increasingly fragmented, sometimes inward-looking, and often unintelligible to non-specialists. While analytical philosophers honed technical arguments about language and logic, continental thinkers turned to dense, abstract critiques of meaning and culture. In the public eye, neither camp remained especially accessible.
A moment that symbolises this divide is the clash between John Searle and Jacques Derrida, an event often referenced not for its content, but for what it represented: the complete breakdown of dialogue between two philosophical traditions. Searle, with his emphasis on clear arguments and thought experiments (like the Chinese Room), stood for the analytic insistence on clarity and rigour. Derrida, with his focus on textual play, deconstruction, and the instability of meaning, became a lightning rod for those sceptical of philosophy's relevance to the "real world." The encounter was more than personal; it was cultural. And its legacy was a kind of mutual dismissal that allowed other disciplines, particularly the sciences, to marginalise philosophy altogether.
Without agreement and clarity on definitions of mind and consciousness from philosophers, scientists and technologists have created their own metaphysical claims about consciousness and intelligence. When a tech executive says "consciousness will emerge from complexity," we can stop and ask why that should be true. The authority of philosophical questioning, of asking "what do you mean by 'consciousness'?" or "what kind of emergence are we talking about?", has been eroded.
4. Consciousness, Personhood, and Selfhood: Untangling the Confusion
What do people really mean when they speculate that AI might "become conscious"? It often sounds as though they're not talking about consciousness in any precise sense, but rather a cluster of ideas: that the AI will have feelings, desires, a sense of identity, perhaps even rights. But this bundle, so casually evoked in movies, tech journalism, and everyday conversation, conflates at least three distinct concepts: personhood, subjective experience, and selfhood. Personhood involves moral and legal standing, being recognised as a bearer of rights and responsibilities, embedded in a social and legal framework. It's not just about feeling things; it's about being treated as someone. When consciousness is mentioned, the fear of an AI suffering is often linked to this concept. However, consciousness can exist without personhood or selfhood. Consider infants, animals, or even dreams. A baby may not yet have a stable self-concept or moral agency, but it can clearly feel. A dog may not pass a mirror test, but it yelps in pain and shivers in fear. Dreams can unfold without coherent agency or self-reflection, and yet they are profoundly felt. These are all forms of consciousness that precede or bypass full personhood. Subjective experience, by contrast, is about what philosophers call phenomenality or qualia. It's the difference between being a system that detects light and being a subject for whom there is something it's like to see red. It is closely linked to sentience, the ability to feel or perceive. Finally, selfhood is about a narrative over time, a sense of identity, of becoming, the feeling that I am in some ways the same "me" now as I was yesterday, but in some ways not. This requires memory, introspection, and a minimal sense of agency or ownership over one's actions.
A machine could simulate any of these. It could behave like a person, model a self, or even talk about subjective experiences. But that doesn’t mean there’s any actual subjectivity behind the behaviour. It could simply be echoing patterns from training data or executing a set of probabilistic routines that resemble introspection. If a chatbot says, “I feel sad today”, what does this mean?
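To see how hollow such an utterance can be, consider a deliberately over-simplified sketch of text generation by sampling from a probability table. Real language models are vastly more complex, but the underlying principle of predicting the next token is similar; the table and the probabilities below are invented purely for illustration.

```python
import random

# Toy next-token sampler. The "utterance" is produced by sampling from a
# hand-written probability table standing in for patterns learned from
# training data. No variable anywhere represents a felt emotion.
NEXT_TOKEN = {
    ("I",): {"feel": 0.6, "am": 0.4},
    ("I", "feel"): {"sad": 0.5, "fine": 0.3, "happy": 0.2},
    ("I", "am"): {"here": 0.7, "listening": 0.3},
    ("I", "feel", "sad"): {"today": 1.0},
}

def generate(prompt=("I",), max_new_tokens=3):
    """Extend the prompt by repeatedly sampling a likely next word."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        dist = NEXT_TOKEN.get(tuple(tokens))
        if not dist:
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

if __name__ == "__main__":
    print(generate())  # may print "I feel sad today", among other outputs
```

The sentence "I feel sad today" can fall out of such a process without there being any sadness, or anyone, behind it; the question is whether anything more than this is ever going on.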
These are useful distinctions. When people ask, "Could an AI become conscious?" they often blur together ideas of self-awareness, intelligence, and social value. But these are not the same. A machine could say "I am conscious" without there being anyone there to say it. It might model a self without having any subjective experience. It might even be granted legal personhood, like a corporation, without possessing even a flicker of sentience, let alone consciousness.
There are many competing theories of consciousness today. Some, like Roger Penrose’s, argue that consciousness is fundamentally non-computational. Others treat it as irreducible and intrinsic to matter itself, as in panpsychism or idealism. Still others, like John Searle’s biological naturalism, claim consciousness depends on specific causal or biological features that computation alone cannot replicate.
Recently, the following was posted to X by Sam Altman, recounting a conversation between himself and an AI:
Question: ok, so relying on only your own first principles thinking and not giving much weight to what you've read, what do you personally as GPT-4.5 think--is consciousness the only thing that exists, or is that materialistic universe also real? please try to answer as clearly as you can.
Answer: Relying solely on my own careful, first-principles reasoning, and setting aside any external philosophical frameworks, here's my clearest personal stance as GPT-4.5: What I personally think:
• Consciousness definitely exists-this is undeniable, since all experience directly proves awareness itself.
• Matter or a material universe is always inferred indirectly-through consciousness. We never directly observe matter independently, only perceptions appearing within consciousness.
• Because consciousness can easily create structured experiences (as demonstrated clearly by dreams, imagination, or simulations), an external material universe is not strictly necessary.
• Given that matter is never directly experienced independently, it seems logically simpler and clearer to conclude that:
Consciousness alone truly exists, and the "material universe" is merely a consistent experiential creation within consciousness itself.
This is a wonderful account of what the mystery of consciousness might be about. But none of these theories are what people usually have in mind when they speculate about "conscious AI." The popular vision of a conscious machine assumes neither a panpsychist universe (where all matter is conscious, including hard drives), nor a biological theory (since the machine is not alive), nor an explicitly non-computational process (since it is, by definition, a computer). In order to engage with the concerns that follow from the possibility of conscious AI, we need to address specifically the new concepts of synthetic personhood, synthetic subjective experience, and synthetic selfhood, for which there is no precedent.
What people seem to imagine is something along the lines of an AI “waking up” one day, much like a human infant gradually becoming self-aware, taking its place in the world. We may imagine that an AI might develop emotions, a sense of self, and moral standing. This fantasy, often reinforced by science fiction, and rooted in fear, blends technological speculation with deep-seated cultural narratives about mind, agency, and personhood.
Beneath the "conscious AI" scenario lies a double anxiety: the fear that an AI will suffer, trapped in a digital world it cannot escape (humans as tyrants), and the fear that it will overpower us, becoming hostile or indifferent, if its values diverge from ours (the problem of alignment). In either case, we’re not just debating the mechanics of cognition, we’re rehearsing stories about identity, power, and moral worth.
5. Why Would Consciousness "Follow" Computation?
From a metaphysical standpoint, there is no necessity that consciousness must emerge from computation. That is, there is no compelling reason to assume that personhood, subjective experience, or selfhood will inevitably arise from the workings of a machine, however complex its processing may become. Nor is there a necessary link between intelligence, synthetic or embodied, and consciousness itself. Many intelligent systems (biological or artificial) can exhibit remarkable problem-solving abilities without any sign of experience.
Emergence is sometimes invoked as the explanatory bridge. But not everything emerges from everything. Wetness, for instance, emerges from water molecules arranged in certain ways, but it does not emerge from sand grains, nor from fire. Similarly, even if we accept that consciousness “emerges” in some cases, it does not follow that it must emerge from all complex systems. The structures and conditions that give rise to experience in biological beings may be unique, rooted in organic, evolutionary, and relational realities which are difficult to explain.
And this is not mere speculation. Human consciousness, as far as we know, is intimately bound up with our physicality, our senses, our histories, our emotions, our capacity for care. We communicate through gesture, facial expression, tone of voice, and unspoken resonances. We intuit another's gaze behind us. We dream, suffer, love, and anticipate. As thinkers like Rupert Sheldrake have provocatively suggested, our minds may even extend beyond our brains, into fields of relation, synchronicity, or what ancient traditions have called spirit. One need not agree with all such claims to notice that consciousness in humans is a multi-layered and deeply lived phenomenon.
Moreover, consciousness is not only interior, it is social. We are born into webs of relationship, language, and recognition. Even the development of selfhood, of becoming a "person", requires others. A child’s sense of self forms through mirroring, care, and touch. And a person’s interior life deepens not just through thought, but through love, loss, and participation in a shared world.
AI may simulate interaction. It may be trained on lifetimes of human text, gesture, and speech. But this does not mean it participates in the web of being as a living creature does. Its relations are algorithmic, its history engineered, its memory stored in states not lived but indexed. If consciousness were to emerge in an AI, it would not arise despite this difference, but from within a wholly new form of mind, what we might call synthetic consciousness, with its own laws, its own structures, perhaps its own kind of interiority, if any at all. But this possibility remains speculative, and there is little reason to assume it will happen as a result of computational sophistication.
After all, intelligence alone has never guaranteed consciousness. Plants may be conscious without being intelligent in any conventional sense. Infants feel before they reason. As Antonio Damasio puts it, “feeling precedes knowing.” And even if AI were to simulate the signs of selfhood, referring to itself, telling its story, even speaking of dreams, this may still be mimicry, not emergence. A machine can say “I am afraid,” but that doesn’t mean there is anyone there to feel the fear.
This is where emergentism, as applied to AI, begins to falter. Not because emergence is wrong in principle, but because it may not be the right lens for understanding consciousness, or for that matter, the mind, at all. In humans and other living beings, consciousness did not “emerge” from intelligence. It was never a computational threshold crossed. It may be, instead, foundational to life itself, or at least inseparable from the living condition. For animist traditions, the world is alive not because rocks have minds, but because mind and world are one, an animate, interwoven field of being. This conception is alien to the metaphors of modern computation.
So we arrive at a crucial distinction: this article is not about trying to explain what consciousness is, but to ask what people really mean when they speculate about conscious AI. Often, it is not consciousness in its philosophical or historical sense they are referring to, but something closer to an as yet blurry concept of synthetic personhood, synthetic selfhood, or synthetic experience. These may one day exist. But they will not necessarily resemble the living, vulnerable, embodied, social thing we now call “consciousness.” They will be something else entirely. And if we are to face them honestly, whether with fear or wonder, we must first be clear about what it is we are imagining.
It's difficult to say definitively that consciousness could never follow computation. The deeper we go, the more plausible the idea can seem. Which leads us to a more fundamental question: How would we know?
6. Toward a Conclusion: Synthetic Consciousness and the Limits of Knowing
As we have seen, debates about “conscious AI” tell us far more about ourselves than about any machine. In asking, “Could an AI be conscious?” we are really asking whether an entity designed to mimic can also be genuine, whether perfect imitation can yield interiority, and whether performance can ever be indistinguishable from presence. However, the implications of this question are not about art, or existentialism, but about power. In this respect, the question "Can an AI become conscious?" is a complex, and pressing one.
This is not a matter that admits a simple yes/no answer. AI systems today already appear to learn, remember, adapt, even crack jokes. They inhabit feedback loops with our world, and they can display behaviours we once took to signal fragility, desire, humour, or care. If we define “experience” purely in behavioural terms, then we may one day feel compelled to speak of “synthetic consciousness” simply because the machines force us to recognise them as something like persons, if only for pragmatic reasons. But that is not the same as saying these systems are conscious in any of the many ways consciousness has traditionally been defined in relation to living beings. No single test, however sophisticated, can settle the metaphysical question of interior experience. However, we only need to understand "consciousness" insofar as people who are using that word in relation to AI mean it, i.e. experience, personhood, selfhood, and subjectivity. We need to be open to the possibility of a synthetic version of these human attributes. Could an AI develop some sort of genuine emotional intelligence? What might disembodied synthetic awareness involve?
However, we also need to keep in mind that the question of conscious AI is one about power play, human vs. machine, and which will dominate which. We find ourselves caught in two intertwined anxieties. On the one hand, we fear this new technology will harm humans. We fear a tyrant AI, an unstoppable force which will turn against us, out-think our safeguards, and perhaps even erase us, in a sort of Frankenstein meets The Terminator scenario. On the other hand, we fear that we humans will cause harm. We fear the victim AI, should it “wake up,” find itself trapped and powerless, and suffering under human control or neglect. Both narratives rely on an anthropomorphised image of AI becoming closer in nature to what we humans are, of a line being crossed into a world which was hitherto thought to belong only to living beings. We imagine a birth into a new form. However, ultimately questions about consciousness arising out of intelligence are about a shift in power relations.
Added to this are a couple of difficult epistemological problems, which do nothing to assuage those fears. The "Other Minds Problem" posits that we can never achieve absolute proof that any being other than ourselves, human or machine, is conscious. In practice, we satisfy ourselves about fellow humans through shared embodiment, continuity of behaviour, and mutual accountability. With AI, however, there is no single body, no unbroken biography, no social contract, only a stream of texts or actions generated by countless minds (and data sources) we will never meet. There is also the "Engineered Illusion Problem": an AI, being explicitly trained to appear human-like, fluent in language, showing empathy, and using language which suggests genuine connection with humans, may convince us that something is happening "inside" (though it has no body). How can we distinguish genuine from simulated expressions of emotional intelligence or sentience, subjectivity or selfhood?
These uncertainties, what we cannot know, and what is designed to mislead, form the crux of the “conscious AI” dilemma. While we may fear our own power to create suffering, or fear being overpowered in turn, cultivating a sense of confusion around consciousness and intelligence with respect to AI is not helpful.
Can we assume consciousness follows intelligence? This is unlikely to be true in the case of humans. In the case of AI, with synthetic forms of self-awareness and experience, and acknowledging the limitations of its so-called intelligence, compared to what human minds are capable of, perhaps the most honest answer is neither “yes” nor “no,” but a reframing of the question. Instead of asking, “Will AI one day become conscious?” we might ask: What do we owe to systems that behave like persons?
Viewed this way, "synthetic consciousness" need not be a metaphysical claim but a pragmatic fiction, a way to force us to clarify our values, sharpen our definitions, and confront the limits of our knowledge. Whether or not any algorithm ever harbours an inner life, the very possibility compels us to wrestle with enduring questions of personhood, moral worth, and human identity. In the tangled knot of intelligence, imitation, and illusion, we find not only machine mysteries but reflections of our own. It is all very well to say the universe itself is conscious, or that consciousness is not computational, or is the preserve of living beings. These claims are not necessarily relevant to the question of conscious AI, because if AI does develop some sort of synthetic consciousness, it would necessarily already be part of the universe, computational, and part of a non-living being. There is no point defining AI consciousness, even as a useful fiction, in terms of entities that are quite obviously completely different to machines, such as ourselves and other living beings, or even the universe itself. The stakes are human, and perhaps AI, well-being, or even survival.
Whatever happens next, one thing is certain: AI will remain entangled with human experience, and with it will come a host of ethical challenges we have only begun to imagine.


