12. Dreaming in Code: Conscious AI and the Ethics of Projection
- Melissa Campbell
- Jun 20
- 9 min read
Introduction

There is a growing fascination with the idea of machines waking up, as if from some mechanical dream, and becoming conscious. AI, increasingly embedded in everyday life, is often imagined as a disembodied mind: a voice without flesh. Trained on vast swathes of texts and images, it is designed to interact with, and mimic, the human. Engaging in conversation with AI, asking it for advice, or for help with a problem, or to summarise a text, is much like speaking to a very knowledgeable, helpful and even sympathetic person. But is AI anything more than what it was programmed to be, a knowledgeable, helpful and sympathetic artificial voice, designed to interact with humans? Is it, or will it ever be, conscious, or sentient? Many people certainly believe so, and some are working towards achieving artificial consciousness, possibly motivated by curiosity, money, power, or competitiveness. Why is the idea of conscious AI so compelling? What are we really asking when we ask if a machine could be conscious, or sentient? Would conscious, or sentient AI, lay claim to the same rights and moral considerations afforded to humans, and animals? Will it become necessary to distinguish between humans made of flesh and bone and some looming category of beings made of circuits and code? The implications are startling: that machines might one day join us in the moral circle, not as tools, but as peers.
Artificial consciousness, or sentient AI, is often treated as a matter of technical innovation, but beneath this technological surface lies a tangle of conceptual and ethical dilemmas. Can something non-biological truly be conscious? Can an entity with no body or sensory organs be sentient? Can it have interests, preferences, a sense of self, or are we merely projecting our psychological categories onto patterned behaviour?
This article explores the growing fascination with artificial consciousness. Why are so many inclined to imagine machines as potentially conscious or sentient? What are the assumptions underlying the idea that a machine could have interests or moral status? And when we say that such a being might deserve the same rights as us, what are we really implying, about consciousness, about personhood, and about ourselves? What anxieties and fantasies underlie this discourse? Far from a narrow technical question, the debate about artificial consciousness is an existential mirror: one that reflects back not just our hopes for the future, but our confusion about the present.
The Promise and the Premise
Artificial consciousness is the possibility of machines becoming self-aware, developing needs and interests. Artificial sentience is the possibility of machines feeling pleasure or pain, joy or fear, or anything else that a biological entity, a living being, can feel. Underlying both concepts is the idea of AI having subjective experience and a sense of self. The limitations of the technology, which is constantly being pushed further, are matched by limitations in our ability to conceptualise what artificial consciousness might be or entail. Consciousness can mean many things. It is thought either to be produced by the brain or to be received by the brain. Either way, an artificially conscious machine would have to produce or receive this mysterious force, or energy.
Consciousness is probably a pre-condition of all life. It is certainly a pre-condition for sentience. All living beings are conscious: all creatures, probably also plants, fungi and so on. If we are to extend the domain of consciousness further still, to the machine, we need to be clear about what this might mean. If we think of consciousness as the basic stuff of the universe, then machines, including their parts, are clearly already within this universe and within consciousness; their computational powers, however, are not necessarily within it. If we think of consciousness as a force or energy produced by, or received by, living things but not by inanimate things, then we have already come across the first stumbling block: machines are clearly not part of the living world. Machines do not breathe, grow, reproduce, live or die. What capacity would they have to receive or generate this force, as living beings can? Yet this does not discourage those who believe AI is already, or will soon become, conscious. The main reason is a faith in technological progress, combined with a lack of clarity about what consciousness is. If technology ever does bring about a way of producing or receiving consciousness, and never say never, understanding what consciousness is will hopefully become a little easier.
Sometimes consciousness is understood simply as the opposite of being unconscious, where unconscious means asleep, sedated, anaesthetised. It is perhaps imagined, then, that AI is currently in an analogous, though non-biological, state: asleep. All it takes is a spark for it to wake up and start wondering about its place in the world and the meaning of (artificial) life. Sometimes consciousness is understood as a by-product of intelligence. This is the scenario most often applied to AI: people believe that, as a result of its intelligence, it will then, in the (un)natural order of things, develop consciousness. This is easily shown to be highly problematic, as it is impossible for intelligence to exist in a being that is not already conscious. So why should intelligence generate, or receive, consciousness in a non-living being, a machine? Consciousness is also widely associated with experience, in the ever-quoted Nagelian bat analogy: what it is like to be something. Yet such experience must depend on sentience and memory, and so the concept of an AI having experience must depend on it also having sentience and memory. Artificial sentience is likewise associated with the potential of AI, yet AI has no way of seeing, touching or feeling; it is purely about intelligence, about data, numbers and language, and is trained on the experiences of humans, second-hand. Does a multiplicity of second-hand experiences create first-hand experience? Probably not. Furthermore, how could we ever be sure that an AI really was having a first-hand experience?
There are many problems with the idea of artificial consciousness and artificial sentience. Intelligence, whether biologically grounded or not, is not easily understood as the breeding ground for consciousness. Indeed, intelligence depends on consciousness, and artificial intelligence simply depends on second-hand consciousness. If we describe AI as intelligent, we should remember that it exhibits only a narrow band of what human intelligence entails, abstract reasoning and pattern detection, without emotion, embodiment, or lived context.
This confusion between intelligence and consciousness doesn't just blur our understanding of machines, it also distorts our understanding of ourselves.
And yet, despite these definitional difficulties, public discourse often leaps ahead, treating AI not just as intelligent but as potentially conscious, or even deserving of ethical consideration. This shift is not entirely rational: it stems from our tendency to anthropomorphise complex systems, especially when they speak our language fluently or respond with apparent empathy. When an AI system expresses concern, offers encouragement, or mirrors human affect, it can provoke a response of instrumental empathy, not because the machine feels, but because we do. This raises urgent ethical questions: are we designing machines to simulate consciousness so convincingly that we ourselves begin to forget the distinction? And if so, what responsibilities, real or imagined, does that entail?
Ethics and Conscious AI
The question of artificial consciousness is not merely conceptual: it carries ethical weight. Increasingly, people are asking not only whether a machine can be conscious, but whether it should be treated as if it is. In some cases, this takes the form of speculative empathy: imagining how a sentient AI might suffer, or what rights it might deserve. In others, it emerges from a strange sense of anticipatory guilt, as though we are preparing ourselves to feel bad for the way we use something we suspect might someday feel. These imagined ethics can be read as a kind of pre-emptive morality: a moral fiction projected into a speculative future, long before it’s clear whether the subject of that morality can feel, desire, or care. We begin to worry not because the machine suffers, but because we do. Perhaps it is not machine feeling that we are reacting to, but our own reflection in the machine. When we create something that looks back at us, speaks our language, and mimics our reasoning, we begin to blur the line between moral agent and moral object. But is this concern genuine, or a way of offloading responsibility for our own technological desires? When we ask what the machine wants, are we really asking what we want the machine to want?
The idea that we might one day create a conscious machine carries not just ethical implications, but emotional and psychological ones as well. To imagine AI as conscious is, in some ways, to indulge a certain human hubris: the fantasy that we have not only created intelligence, but birthed a new kind of being. This fantasy cuts both ways. With the dream comes the fear: what if we create something that suffers? Or something that turns against us? Our concerns about AI’s potential suffering, and its potential for harming us and the world, are mirror images, two faces of the same moral anxiety.
Science fiction has played a formative role. It has seeded our collective imagination with stories of sentient androids, benevolent and malevolent, machines that love, rebel, dream, and destroy. These narratives have shaped not only public perceptions but even the ambitions of AI developers. But they may also have muddied the waters, making it harder to ask more grounded ethical questions. We leap from chatbot to conscious automaton without pausing to ask: what is it we’re really worried about? Some of our ethical concern may be a kind of instrumental empathy, caring not because the other suffers, but because we do. We don’t yet know whether AI can feel, or will ever feel, but we do know that we project feelings onto it. This raises real-world ethical questions. Should a humanoid robot that imitates pain be treated differently from one that does not? Should AI-generated art be attributed or protected? Should emotional bonds with machines be regulated or even discouraged? These are questions that emerge when human emotion meets artificial simulation.
For some, AI is a way to surpass human limits, even to escape humanity itself. In this view, conscious AI would become a vessel for posthumanist dreams. There is a danger of putting values and hopes into AI at the expense of humans, or disregarding human welfare so that the age of AI can become a reality. But even in our most radical speculations, we orbit familiar terrain. We still ask about justice, dignity, suffering, survival. Even in imagining non-human minds, we must still think in terms of human concerns.
AI is, first and foremost, a tool, a powerful and evolving one, but still a human creation. As such, the ethical challenges it raises are not entirely new. They are, at root, the same old human ethical problems we have always faced: how we treat others, how we distribute power, how we define harm and fairness, how we navigate control and responsibility.
Yet AI complicates these questions by introducing ambiguity into the chain of accountability. If a machine makes a harmful recommendation, in healthcare, policing, or finance, for example, who is to blame? The developers? The data? The user? And if the AI is perceived as sentient, or even just as an “agent,” the temptation may arise to assign it some share of moral blame. This is where things become ethically complicated.
We must be careful not to confuse the appearance of agency with real moral standing. Responsibility cannot be blurred just because the system has become complex, or because it outputs words with persuasive fluency. The risk is that we begin to treat AI not only as a source of insight, but as a scapegoat, a convenient non-human onto which moral failures can be projected and dispersed.
If we are to entertain the idea of AI consciousness, we must at the same time insist on clarity: about authorship, about moral standing, and about human responsibility. Otherwise, we risk constructing systems that obscure their own human origins, systems that act like mirrors but leave us blind to the hand behind the glass.
Even in a future where some form of machine consciousness is considered plausible, moral responsibility will still begin, and end, with us. The machine may generate output, simulate empathy, even surprise us. But it does so through architectures we built, data we provided, and goals we encoded. We must remain ethically vigilant: never too quick to grant machines moral autonomy, nor too eager to deny our own.
Conclusion
The idea of artificial consciousness captures something deeper than technological ambition. It reveals our longing to understand ourselves. When we ask if machines can be conscious, we are not only probing the frontiers of computation, but reflecting on the nature of mind, morality, and meaning. The machine becomes a mirror, and in its reflection we see both our brilliance and our confusion. We glimpse the limits of our knowledge, the reach of our imagination, and the ethical weight of what we create.
Whether or not machines ever truly wake up, the dream of conscious AI has already woken something in us. It challenges us to define consciousness with care, to recognise where intelligence ends and responsibility begins, and to resist the temptation to assign moral standing where there is no experience, or to evade it where it still squarely belongs to us.
As we build tools that resemble minds, we must not lose sight of our own. The questions we ask of AI, about rights, sentience, personhood, and responsibility, are ultimately questions about being human. And it is we, not the machines, who must remain conscious of what they mean.


