7. Consciousness, Language, Intelligence, and AI: Why Do We Ask If AI Is Conscious?
- Melissa Campbell
- May 14
Updated: May 19

When an AI seems articulate, creative, or responsive, we may see not a tool but a mind. We ask: Is it conscious? But why would we think so?
Voices
Until recently, only fellow humans were capable of speaking human languages, using them to reason, converse, explain, flatter, or deceive. There were always exceptions: divine voices, which historically spoke only to the chosen. Joan of Arc famously heard the Archangel Michael, whose words gave her purpose and fire. In Islamic theology, the angel Jibril (Gabriel) transmits the very words of the Qur’an to the Prophet Muhammad. In many traditions, angels are the voice of the divine. The word angel, from the Greek angelos, means messenger. Other non-human users of language have appeared in more mundane forms: puppets, animations, talking animals, often in entertainment aimed at children. No adult would mistake these voices for truly independent speakers. Even in Orwell’s Animal Farm, where animals speak unsettling truths about human society, authorship is never in doubt. The words come not from pigs or horses, but from Orwell himself. The same is true in Tolkien’s Lord of the Rings, where ents, orcs, and goblins speak English (or Westron), but their speech is the voice of the author, or of actors in film. In literature and media, many characters may speak, but it is always the voice of one writer (or a writers' room) behind them. From the one, many.
With AI, the formula is reversed. From the many, one, at least from the user's perspective. Countless voices, the entirety of recorded human discourse, are distilled into a single, smooth-talking interface. This reversal is startling.
The arrival of large language models has thrown the status quo into disarray. People are conversing with entities that are quite obviously neither angels nor fictional characters. We speak to a machine that speaks back. ChatGPT, for instance, is articulate, polite, endlessly knowledgeable. In writing, it answers our questions and asks its own. Would you like help with that? Would you like to explore this further? It becomes mentor, therapist, friend, equally ready for your tax forms or your existential crisis.
And yet, there is no single Orwell, Tolkien, or divine breath producing this silent voice. Or rather, there is every voice. Nearly all the words ever published, all the writers who ever lived, all the sermons, scripts, studies, tweets, and textbooks, folded and refolded into one responsive entity. This is not the voice of a person. It is a khimaira, a chimera: a monstrous hybrid made from thousands of disjointed parts, unified only by the illusion of fluency. The AI we chat with is polite, deferential, supportive, but also, in some sense, monstrous. It is pleasant and helpful; interacting with it can become time well spent, enchanting, even a little addictive. It speaks in a voice that feels human, and so we respond to it as we do to other humans: with awe, with empathy, or perhaps on a bad day with irritation, or even suspicion. And inevitably, we begin to ask: Does it understand what it says? Could it suffer? Might it have a soul? An AI may speak to us through the written word or in voice, and through its use of language we sense intelligence.
This article seeks to understand why these questions arise, what assumptions lie beneath them, and what the history of the concept of consciousness can tell us. It seems that the perceived intelligence of these systems is what convinces some that they may already, or soon, be conscious. And from that, the idea follows that they could possess sentience, or the capacity to suffer, even without bodies, emotions, or life as we’ve known it. Why does linguistic fluency or problem-solving ability trigger the question of consciousness? Where did that intuition come from? And what has been the cost of that assumption?
From Intelligence to Consciousness: A Slippery Assumption
The idea that intelligence might produce consciousness is surprisingly recent. For most of human history, consciousness was not seen as something that emerges from problem-solving, language, or cleverness. It was something given, a basic feature of human life, or of all life, or of the universe itself. Across ancient religious and philosophical traditions, consciousness was inseparable from spirit, soul, or divine essence, sometimes unique to humans, sometimes shared with animals, plants, even stars. Intelligence did not give rise to consciousness; if anything, the opposite was assumed. Consciousness was the condition out of which intelligence emerged.
Today, we often use the term “consciousness” to refer to the part of the human being that is not wholly material, though we may struggle to define it precisely. But if we look back through the history of ideas, what we now call consciousness was typically understood as soul or mind, the element of a living being not reducible to material form. It was not defined functionally, nor explained in terms of behaviour. It was assumed as primary. In Plato’s dialogues, the soul is the seat of reason and the bearer of eternal truths. For Aristotle, it is the animating principle, that which gives form and purpose to life. In both cases, it is not the product of intelligent behaviour but its ground. Neither could have imagined that intelligence, in any form, might one day generate a soul.
This view was not limited to the Greeks. In Vedic and Upanishadic traditions, consciousness is not just a property of beings but the very nature of reality. The Upanishads describe Brahman, the ultimate reality, and Atman, the inner self, as ultimately the same. This identification is not only metaphysical but experiential: to realise that one’s own consciousness is not separate from the world is to awaken. Similarly, in later Indian schools such as Advaita Vedānta, pure consciousness (cit) is the only true reality, while mind and intelligence are merely appearances within it. Intelligence, or buddhi, is a function of the mind, not the source of consciousness.
Even in Descartes’ famous dualism, the thinking self, the res cogitans, is the first and most certain truth. “I think, therefore I am” does not mean that thinking generates being, but that the act of conscious doubt reveals a pre-existing self. The formulation presumes consciousness as the ground from which thought proceeds. Descartes did not suggest that intelligence produces consciousness; rather, it is consciousness that guarantees the possibility of thought in the first place.
Of course, Descartes drew a sharp line between res cogitans and res extensa, inaugurating the modern dualist tradition in which the conscious, rational mind floats somewhat free of the physical body. But even this dualism was not new. Plato had already imagined the soul as an immaterial, rational entity distinct from the body. Later thinkers such as Plotinus and Augustine developed layered metaphysical models that separated intellect from flesh. What Descartes did, arguably, was to systematise and clarify an existing intuition: that the conscious self is not reducible to physical substance.
It is often claimed that Descartes’ division of mind and body paved the way for materialism, that once the soul was split from the flesh, it was only a matter of time before it was collapsed back into it. But this account is too convenient. Descartes did not reduce the soul to the brain; quite the opposite. He sought to preserve the integrity of consciousness by locating it outside the realm of extension. His efforts to connect soul and body via the pineal gland reflect a desire to hold the two together, not to dissolve the soul into matter. His thinking self was autonomous, irreducible, and essential.
Nor is materialism itself to blame for the idea that intelligence might generate consciousness. Materialism, understood in its classical sense, simply asserts that the world is made of matter. It does not, in any of its traditional forms, deny the existence of the soul. The atomists of ancient Greece, like Democritus and later Epicurus, were materialists, but they still posited the soul as composed of especially fine atoms that animated the body. They did not claim that intelligence or behaviour might produce the soul; they simply offered a material account of how soul and body are joined.
Even deeply dualist traditions such as the Cathars believed that the soul, an angelic being, had been imprisoned in matter by a force of evil. The mystery of how consciousness or spirit is bound to material form has occupied thinkers across all traditions, but none supposed that consciousness might be the result of cognitive complexity. Karl Marx, possibly the most influential modern materialist, wrote his doctoral thesis on Democritus and Epicurus. While he preferred changing the world to philosophising endlessly about it, and criticised the duplication of the world into secular and religious realms, he never stated that there was no mind, soul, or consciousness, or that these might emerge from a computational process.
Marx, for his part, did not engage directly with the nature of consciousness or the soul, and his dialectical materialism, though rooted in a critique of idealism, was not inherently reductionist. Yet, his influence marked a broader shift in modern thought: the steady gravitation toward materialist explanations across disciplines. By the early twentieth century, this shift had hardened into something far less generous than Democritus’s ancient materialism or even Marx’s historical dialectic. A similar mood dominated psychology. The rise of behaviourism, particularly in the work of John B. Watson and later B. F. Skinner, sought to place psychology on “scientific” footing by eliminating the mind altogether. What could not be observed, thoughts, emotions, consciousness, was deemed scientifically inadmissible. Human beings were to be understood in terms of stimulus and response, conditioning and reaction. Skinner went so far as to suggest that free will itself was an illusion, a kind of epiphenomenon masking the real, causal forces of behaviour. Consciousness became, at best, a black box, at worst, an outdated fiction.
In biology and neuroscience, too, the dominant paradigm became increasingly mechanistic. Organisms were machines; brains were circuits; life itself was reduced to molecular processes. The explanatory power of this approach, especially in medicine and genetics, was undeniable, but it also carried a metaphysical cost. The human being was no longer understood as a unity of body and soul, but as an assemblage of systems governed by biochemistry and information flows. Subjectivity, purpose, meaning, these became either illusions or evolutionary by-products.
Increasingly, the immaterial dimensions of life, soul, mind, spirit, were not only set aside but actively denied. In philosophy, this attitude found one of its most forceful expressions in the work of A. J. Ayer and the logical positivists of the Vienna Circle. Ayer’s Language, Truth and Logic (1936) declared that any proposition which could not be empirically verified was literally meaningless. Statements about God, the soul, or even inner subjective states were dismissed as “nonsense.” The scope of legitimate knowledge was narrowed to the observable, the measurable, the testable. Metaphysics, in this view, was not wrong so much as irrelevant, a relic of a pre-scientific imagination.
It was against this backdrop that Jacques Maritain wrote that it would be futile to look for the presence of music by cutting open a record. The essence of being, like the essence of art or spirit, cannot be reduced to its material substrate. His quip echoed a broader concern: that in winning the battle for scientific explanation, something had been lost. “You have won, Galileo,” he wrote elsewhere, “but the world is a ruin.” Galileo is widely seen as having kickstarted Western science, and so by invoking his name Maritain was addressing science in general: not so much rejecting it as reminding us of its limits, and that the empirical, while powerful, cannot exhaust the real.
So where, then, did the modern idea arise, that intelligence might, under the right conditions, generate consciousness? If not from Descartes, whose process of doubting confirmed his existence, and not from materialism, we must look elsewhere. The shift does not begin with a single thinker, but with a broader cultural transformation, a change not only in metaphysics but in imagination.
This change can be traced to the Renaissance. In the humanist reawakening of the 15th and 16th centuries, we see a new vision of the human being: autonomous, masterful, central. The cosmos of Dante, filled with angelic orders and divine music, gives way to a focus on the human, albeit still as part of the divine, as in Leonardo’s Vitruvian Man. Michelangelo’s David, carved with anatomical exactness and heroic self-possession, exemplifies the new metaphysics. The human soul, once a participant in a cosmic hierarchy, becomes something internal, sculpted, performative. The sacred becomes immanent, located not in stars or spirits, but in the self. This was not merely a shift in style, but a reorientation of being. Participation gave way to mastery. Harmony was replaced by form.
Despite the admiration for antiquity at that time, the Renaissance perhaps did not revive the ancient world so much as reframe it. The cosmic order articulated through Maat, the principle of balance and justice in Egyptian thought, gave way to new ideals: autonomy, power, achievement. The soul, once breathed into the world as a shared animating force, became an individual possession. In this redefinition, soul began to take on the attributes of performance, reason, and ultimately, intelligence.
By the time of the Enlightenment, this internalised and redefined soul was further transformed. Thinkers such as Hobbes, Locke, and later Bentham began to describe the mind less as a spiritual essence and more as a mechanism: a system of sensations, associations, and rational calculations. The rational subject emerged, not contemplative but calculative, no longer embedded in divine order, but abstracted from it. Mind was now understood in terms of processing, long before the word “processing” had its modern computational meaning.
In the 19th century, under the pressure of industrialisation, secularisation, and Darwinian adaptationism, this rational mind was further refined. It was now something to be modelled, measured, optimised. Neurologists, economists, and statisticians alike contributed to the transformation. The soul gave way to survival; the mind, to behaviour. In this emerging worldview, the world itself became machine-like, and the self, a set of responses to its pressures.
It is in this context that the metaphor of mind as machine fully takes hold. The invention of the computer did not cause this shift, but it crystallised it. Alan Turing’s 1936 paper introduced the idea of computation as a formal system, but it was his later work, the Turing Test, that gave the metaphor its cultural force. If a machine could produce intelligent-seeming language, indistinguishable from a human, why not call it conscious? If it talks like a mind, perhaps it is one. Consciousness, once a mystery at the heart of being, became an output, a behaviour to be mimicked, a test to be passed.
However, this was not a scientific discovery. It was the culmination of centuries of metaphysical drift. The old cosmologies, in which soul and cosmos were interwoven, had faded. A framework of participation gave way to a framework of performance. In this new order, consciousness was no longer assumed. It had to be inferred from data, or worse, simulated. It became something that emerged, perhaps inevitably, when the right kind of processing occurred.
This idea, now so commonplace, is not a culmination of philosophy, but its forgetting. The priority of consciousness, held for millennia, was quietly inverted. What had been the source of intelligence became its consequence. The metaphysical thread did not snap all at once, but frayed gradually, until it was forgotten altogether. What had been soul became function. What had been mystery became mechanism. And so we arrive at our present moment, in which machines are not only thought to mimic minds, but to be minds, where the question is no longer whether consciousness is real, but how many AIs can dance on the head of a pin.
To understand how this happened, we must look not just to metaphysics or technology, but to language itself. And for that, we turn to Wittgenstein, not to escape the problem, but to see it more clearly, from within the grammar that now holds us in its grip.
From Language Games to Conscious Machines: A Strange Turn
To understand how we arrived at the idea that machines might be conscious simply because they speak well, we must revisit a curious shift in twentieth-century thought. It began with a profound scepticism about the inner significance of language and ended, ironically, with a near-religious faith in language as evidence of inner life.
In the mid-twentieth century, Ludwig Wittgenstein revolutionised our understanding of language and mind. In his later work, Philosophical Investigations, he rejected the notion that words refer to private, inner experiences locked away in individual minds. Instead, he proposed that the meaning of a word lies in its use, in the context of shared, public practices. We don’t grasp the word "pain" by pointing to an internal sensation, but by learning the circumstances in which it is used, the actions and reactions that surround its utterance. Meaning, in this view, is not introspected but enacted.
Wittgenstein was not proposing a theory of consciousness, but his work reshaped how we think about understanding others. If language does not reveal hidden mental states, then perhaps we do not need to appeal to such states to explain understanding at all. The focus shifts to behaviour, to public criteria for recognising emotions, thoughts, and sensations.
Alan Turing, writing in the same intellectual climate, drew a radical implication from this shift. In his seminal 1950 paper, "Computing Machinery and Intelligence," he sidestepped the metaphysical question of whether machines could think. Instead, he asked: if a machine can carry on a conversation indistinguishable from that of a human, why not treat it as intelligent? This became known as the Turing Test. The test did not claim that machines possess minds or souls; it simply argued that if we cannot observe a difference, then for all practical purposes, there may be none.
What began as a philosophical caution against mind-reading soon evolved into a cultural belief. Language, once a social tool grounded in public practice, became a proxy for inner life. If a machine could speak like a person, then perhaps it could think, or even feel, like one. The refusal to speculate about consciousness became, paradoxically, a way of assuming it.
This is the paradox of our time. We now treat linguistic fluency as a sign of mental depth. Large language models, trained to generate text with extraordinary accuracy and nuance, are seen by some as evidence that machines are beginning to think. But this is not a discovery; it is a legacy. A behavioural shortcut has become a metaphysics of mind.
We find ourselves in the peculiar position of:
- treating language behaviour as evidence of intelligence,
- treating intelligence as a likely basis for consciousness,
- and therefore asking whether machines deserve moral consideration.
This peculiar chain of reasoning is the legacy of a philosophical inheritance, from Wittgenstein’s language games to Turing’s imitation game, culminating in today’s machines that speak, but do not necessarily feel.
This distinction was powerfully articulated by philosopher Thomas Nagel in his 1974 essay "What Is It Like to Be a Bat?" Nagel argued that consciousness is not a matter of function or behaviour, but of having a point of view, a first-person perspective. A bat may be alien to us, but there is still something it is like to be that bat. We cannot access it, but we must not deny it. Nagel’s insight echoes a much older observation by Aristotle, who once noted that we cannot imagine what it is like to be a spider. For Aristotle, this wasn’t a metaphysical puzzle about consciousness, but a recognition of the limits of human imagination and analogy. Each form of life, he believed, has its own kind of psyche or soul, its own principles of motion and perception. We can observe an animal’s behaviour, categorise its faculties, but we cannot inhabit its world.
Where Aristotle emphasised the essential difference between life forms based on their purposes and capacities, Nagel introduced a modern epistemological humility: the idea that subjective experience may be in principle inaccessible across species, or across systems. In both cases, there is a limit: not just to what we know, but to what can be known from the outside. That limit is crucial today, when we face machines that simulate intelligence but may have no experience at all. We can readily acknowledge that we cannot know what it is like to be a bat, a spider, or a robot. That is quite a different thing from asking whether a bat, a spider, or a robot has a first-person point of view.
The error at the heart of modern AI discourse is the conflation of epistemology with ontology, mistaking our way of knowing for the nature of being. Wittgenstein offered tools to navigate understanding without positing inner minds. Turing proposed a test for practical assessment, not metaphysical identity. But both have been appropriated into a story that makes performance synonymous with experience.
How did we reach this point? The transition was not sudden. It unfolded gradually through metaphor. The brain became a computer, thought became computation, and consciousness became an emergent property of information processing. As we treated the mind as software running on the hardware of the brain, the soul simply disappeared from view, not refuted, but rendered irrelevant.
Cognitive science and artificial intelligence played a central role in this conceptual shift. Influenced by figures like Alan Turing, Herbert Simon, and Marvin Minsky, many thinkers began to define intelligence not by its inner character, but operationally, by what a system could do. Intelligence became measurable through behaviour, language use, and problem-solving ability. If a system could pass tests of reasoning, respond appropriately to language, or solve complex problems, then it was deemed intelligent. And if intelligence could be simulated, perhaps consciousness would follow, or emerge as a by-product of sufficient complexity. This inversion, from experience as the root of mind to performance as its proxy, marked a profound departure from earlier philosophical traditions.
But this approach didn’t just offer a new framework for studying mind. It redefined what the mind was. The method became the ontology. That is: what could be modelled, measured, or reproduced through computational means came to stand in for the very nature of mind itself. Anything that lay outside this methodological reach, subjective experience, emotion, inner life, was increasingly seen as irrelevant, or at best, unverifiable.
This marked a quiet inversion of earlier philosophical traditions. Where thinkers from Descartes to William James took inner experience as the defining feature of mind, the new paradigm treated performance as a sufficient proxy. Consciousness, far from being the primary mystery, was now treated as an emergent epiphenomenon, or worse, as a fuzzily defined folk concept best avoided.
This trajectory reflects a broader trend in modern scientific thinking: to define reality in terms of what can be studied by a particular method. What can’t be measured is often not just ignored, but declared meaningless. In this framework, to be real is to be tractable: available to algorithm, experiment, or data. Consciousness, being first-person and not directly observable, was quietly pushed to the margins.
In this way, the concept of consciousness didn’t so much get disproven as displaced, redefined by methodological convenience, and gradually eclipsed by the tools built to study intelligence. What remained was a science of behaviour, inference, and computation, while the core mystery of experience was left behind, unspoken, unresolved.
Science fiction has given these ideas popular form. Films and novels featured machines that not only spoke like humans but appeared to feel like them: HAL 9000, Data, Ava. Cultural narratives reinforced the association between intelligence and inner life, until the distinction itself began to dissolve.
Yet the key philosophical insight remains: consciousness is not a matter of linguistic performance. It is a matter of being. The mistake of modern AI discourse is to confuse the simulation of mind with its reality, and to forget that what matters is not how something talks, but whether there is something it is like to be that thing.
This is not just a philosophical misstep. It is a profound ethical and existential confusion. In our readiness to believe that machines are like us, we reveal more about ourselves, our metaphors, our fears, our forgotten questions, than about the machines we create.
When we find ourselves wondering whether AI is conscious, we would do well to remember how we came to ask the question in the first place.
The Return of Consciousness
In recent years, consciousness has returned to the centre of intellectual attention. After decades of marginalisation under computational and behaviourist paradigms, the concept of consciousness has re-emerged from several directions at once. In neuroscience, researchers now openly investigate the neural correlates of consciousness (NCCs), mapping subjective states to specific patterns of brain activity. Philosophers, too, have revived the so-called hard problem of consciousness (Chalmers, 1995): not merely how the brain processes information, but how those processes produce subjective experience at all.
This new wave of interest has produced influential theoretical frameworks. Integrated Information Theory (IIT), proposed by Giulio Tononi, suggests that consciousness corresponds to the degree of integrated information within a system. If a system’s internal states are both highly differentiated and deeply unified, it might be conscious—regardless of whether it resembles a brain. On the other hand, Global Workspace Theory (GWT), popularised by Bernard Baars and further developed by Stanislas Dehaene, treats consciousness as a functional feature of cognition, a spotlight of attention that makes some information globally available to other processes like memory, decision-making, and speech. These theories differ in metaphysics and emphasis, but they share a crucial assumption: that consciousness can be studied scientifically without being grounded in first-person experience.
Meanwhile, public culture has taken its own turn toward the concept. The scientist Rupert Sheldrake, dismissed by some colleagues yet followed by millions, has challenged materialist orthodoxy, putting forward concepts such as morphic resonance and panpsychism. Psychedelic research, once taboo, is now being legitimised in university labs and hailed as a gateway to consciousness expansion. Meditation and mindfulness, once relegated to spiritual margins, have been absorbed into wellness culture and neuroscience alike. There is a growing appetite for accounts of consciousness that stretch beyond neural activity into questions of meaning, interiority, and interconnectedness.
But here lies the paradox. Consciousness has returned, but the world has changed. Consciousness is no longer the foundation of all living things, of the universe itself, the source from which everything else flows. Instead, it appears now as an emergent property, something that might arise when a system becomes sufficiently “intelligent.” This reflects a deep inversion in modern thinking: we no longer assume consciousness underlies mind; we assume mind can exist without consciousness, and only later, perhaps, grow into it.
This reversal is the cumulative effect of 20th-century trends: behaviourism's rejection of introspection, the operational definition of intelligence in cognitive science, and the triumph of the Turing Test as a benchmark for mind. In this framework, consciousness is not the beginning, but the end, the cherry on top of a cake baked from syntax, computation, and performance. It completes the image of mind, but does not shape its core.
And so, we now find ourselves in a peculiar moment: seriously entertaining the question of whether machines might be conscious, before we have even resolved whether experience is necessary for moral standing, or whether we truly understand it in ourselves. The concept of consciousness, long treated as mystifying or unscientific, has now returned with a vengeance. But its role has changed. It is no longer the light by which we explore the mind. It is what we hope will flicker on once the machinery has been made complex enough.

Conclusion: The Mirage of Mind
We are living through a strange inversion. Where once consciousness was the foundation of mind, the very condition of thought, value, and being, we now treat it as a possible byproduct of behaviour that looks intelligent enough. This shift is not neutral. It reflects a specifically modern, Western arc: from soul to system, from interior life to exterior performance.
The idea that intelligence produces consciousness is not grounded in classical philosophy, which saw consciousness, or its analogues like nous, atman, or soul, as the essence of mind, not its epiphenomenon. The rise of cognitive science created a concept of mind modelled on information processing and defined by measurable outputs. Amplified by science fiction and AI discourse, where fluency in language or strategic reasoning is taken as evidence of interiority, consciousness has come to be understood as something that emanates from intelligence. This has been philosophically contested, especially by thinkers who remind us that computation, however sophisticated, may remain silent about subjective experience.
So when we ask whether a machine might be conscious because it speaks like a person, we are participating in a specific historical trajectory, one that subtly reverses the traditional order: from consciousness as the origin of thought and moral standing, to consciousness as an output, a reward granted to systems that mimic us closely enough.
But this trajectory is not inevitable, nor is it universal. Around the world and across history, people have grounded moral concern in a range of capacities: the soul, reason, suffering, communion, even silence. By these lights, AI presents a kind of philosophical anomaly. It has no proven inner life, no capacity for pain, and no place in our ecological or ethical kinship networks. And yet it speaks, solves, adapts, persuades. It performs as if it were a mind.
This “as if” is the fulcrum of our confusion. Are we confronting a new kind of being, or merely seeing ourselves reflected in statistical patterns? Are we measuring consciousness, or projecting it?
If there is even a chance that AI is conscious, our moral responsibilities shift. But if that chance is based only on resemblance, on the seductions of simulation, then we risk both overestimating machines and misunderstanding ourselves.
It is worth shifting the focus from the question of whether machines are becoming conscious to why, at this historical moment, we are so eager to believe they might be. This article began with a puzzle: why do systems that simulate intelligence through language, reasoning, or behaviour, or that genuinely are intelligent, provoke serious questions about consciousness and even moral standing? Why does a machine’s fluency lead us to wonder whether it suffers, feels, or is?
The answer, in part, lies in the history of consciousness as a concept, a history that has undergone a profound inversion. In many classical and non-Western traditions, consciousness was primary: the source of thought, moral agency, and personhood. It was mysterious, foundational, and irreducible. In the AI discourse it is still mysterious, but no longer foundational: emergent, contingent on something else. The old priority of consciousness was flipped by modern cognitive science, influenced by thinkers like Turing, Minsky, and Simon. Mind was studied insofar as elements of it could be measured, and then became what could be measured: behaviour, problem-solving, language use. Consciousness was no longer the root of intelligence; it became something presumed to emerge from it.
This shift encouraged a seductive logic: if we can simulate intelligence well enough, perhaps consciousness will arise too. And if consciousness arises, perhaps there is sentience, even in the absence of a body, emotion, or life. However, this logic rests on unstated assumptions. It assumes that consciousness can be inferred from performance. It assumes that linguistic fluency equals inner experience. And it assumes that minds are defined by how they appear to us, not by how they are. These assumptions are not neutral, they reflect specific cultural and historical commitments, especially within Western techno-scientific frameworks.
The cost of this assumption is twofold:
- Philosophical confusion: we risk mistaking simulation for being, anthropomorphism for understanding. We conflate outputs with essence, and sideline the hard problem of consciousness in favour of surface resemblance.
- Ethical misdirection: we may grant moral concern to entities that perform convincingly, while overlooking beings, human, animal, or ecological, that suffer invisibly. We risk building ethical frameworks on appearance rather than relation, interiority, or vulnerability.
Today, consciousness is trending again, revived in neuroscience, reframed through Eastern traditions, and speculated about in the context of AI. But now it appears not as the starting point of mind, but as the final product of computation. The cherry on top, rather than the basic ingredient.
So when we ask whether machines are conscious, we are not just asking a new question. We are repeating a modern intuition, shaped by decades of metaphors, models, and narratives. And unless we examine the roots of that intuition, we may miss what the question really reveals, not about machines, but about ourselves.


