11. Can an AI Have Interests?
- Melissa Campbell
- Jun 18
- 9 min read
1. Consciousness, Ethics, and the Rise of Digital Minds
A recent paper on sentient artificial intelligence opens with a provocative claim:
"Sentient artificial intelligence (SAI) may soon have a legitimate claim to the same rights and moral considerations currently afforded to biological humans." (1)

It's possible that artificial consciousness has been created, or will be in the near future. At least, many people believe so. Artificial consciousness would be a form of consciousness that emerges from the sophisticated processes of an AI. It is sometimes theorised that this artificial consciousness may itself give rise to, or be associated with, some way of being. This way of being, this ontological status, would fall somewhere between the way a chair or a book exists and the way an animal or a human exists. From this hypothetical state, it is sometimes further theorised that an AI, imbued with a form of synthetic consciousness, would have interests. These interests would presumably be linked to the survival or flourishing of the AI, and therefore to a sense of self, a synthetic self. However, while the concept of interests is widely discussed in this context, the concept of needs, which would connect this form of existence to the natural world, is much less prevalent.
The idea of AI being alive is not usually expressed in these discussions, which implies that a Pinocchio scenario, in which the wooden puppet is brought to life by a fairy, is not part of the conversation. And yet the idea of an inanimate object developing consciousness and interests does suppose something similar to Pinocchio. Before being transformed magically, the puppet dreams of becoming a real boy. So it seems that Pinocchio was already conscious before becoming alive, and this offers a good comparison with what AI is sometimes portrayed to be: a human construct, made to simulate a human, imbued with the thoughts and hopes of its creator, and which, even before it is turned into a real boy, already has thoughts and dreams.
What would the interests developed by AI be? Does the theory that interests would be developed rely on a perception that an AI would actively will to ... not die, to stay conscious (the word "alive" is not much used either)? Would such an AI take pleasure in the good things, or suffer under certain conditions? Would this artificial consciousness somehow presuppose sentience, the ability to feel pain and pleasure? Many theorists seem to think that an AI will reach a level of sophistication that will allow it to become conscious, and also sentient, and then to develop a set of interests that it will want to pursue. This speculative consciousness and sentience would be emergent properties of intelligence, and would propel AI, kicking and screaming, into a domain somewhere between life and death, a sort of synthetic biological realm. This is what the creation of digital minds, characterised by "conscious experiences, desires, and [the] capacity for reasoning and autonomous decision-making" (2), may entail.
This article aims to clarify what we mean by interests, how they relate to consciousness and sentience, and whether it is coherent to imagine a non-biological mind that has them.

We may note, in passing, the curious phrasing of “biological humans” in the opening quotation. This tautology, which might make sense as the opposite of the oxymoron "biological machines", carries more philosophical weight than first appears. After all, what else would a human be, if not biological? And yet, the qualifier “biological” only makes sense in a context where its opposite is being quietly introduced: the non-biological human, or the machine that counts as a person, the biological machine. Which of these phrases best describes AI?
This linguistic shift is telling. To call us “biological humans” is to preemptively reclassify humanity itself, splitting it into two possible categories, one of flesh and blood, the other of circuits and code. It naturalises the idea that personhood might be divorced from biology, and that a machine could be described, in moral or legal terms, as a kind of human, or that we ourselves might one day be described as biological machines, no different in kind from our artificial counterparts.
Such language doesn’t just reflect speculative fiction, it subtly enacts it. It performs a re-mapping of our moral and ontological categories. And like all powerful metaphors, it shapes what seems imaginable.
The claim that AI may be entitled to rights and moral consideration on a par with humans rests on a crucial assumption: that the interests of an AI would have intrinsic worth, making it deserving of ethical consideration. But what does that mean? Can a machine, even a conscious one, truly have interests? Concepts such as consciousness, sentience, and interests are all bound up with the technological progress of AI, and it is increasingly common to muse about artificial consciousness, or artificial sentience. What seems to be implied, with these reflections on interests being acquired somehow as a direct result of consciousness or sentience, is life. However, very little is said about how these interests, this sense of self-preservation, do or do not presuppose an underpinning which is biological in the strict sense of the term, belonging to the realm of life. What sort of interests might a conscious or sentient AI develop, and strive to preserve? What would it mean to work towards, or away from, the alignment of human and AI interests? Is it possible for a being, of any sort, to have interests if it is not alive, and is it possible for us to speak of interests about a being we do not somehow suppose to be alive in some way?
2. What Are Interests? And Who Has Them?
In moral philosophy, to say that a being has "interests" is to say that certain outcomes matter to it. Interests imply stakes: the potential for harm or benefit.
Plants, animals, and humans all have interests. A tree has an interest in water and sunlight. A dog has an interest in avoiding pain and finding companionship. A human being has interests that span the biological (health, food, safety) and the existential (freedom, purpose, dignity). What these all share is vulnerability. To have interests is, in a sense, to be at risk. This is why the extension of moral status to other beings, like animals, or possibly future AIs, depends on whether they can be harmed. Which matters more for moral status: that an entity be intelligent, or that it be capable of suffering, in other words, sentient?
The more closely we examine the idea of artificial beings having interests, the more strained it becomes. An interest, after all, is not just a preference or a goal, it is rooted in a condition of vulnerability. Living beings have interests because they can be harmed, denied, frustrated, or destroyed. Interests arise from needs, needs of the body, of survival, of care, of well-being. They are the ethical corollary of sentience: to have interests is to be invested in your own existence.
This is what makes sentience such a potent threshold. It is not just the capacity to register stimuli. It is the capacity to feel pain, hunger, fear, relief, or pleasure. It is the silent scream in the nervous system, the subtle delight of warmth after cold. Sentience is the body’s way of mattering to itself. When we speak of AI developing interests, we implicitly imagine that it, too, might one day matter to itself. That it might care whether it exists or not. But what does that mean for a machine with no metabolism, no nerves, no death, and no body? If we remove the biological substrate, what remains? What does a "desire" look like in silicon?
And yet — there is a leap of imagination being asked of us here. Not necessarily to believe that a machine can suffer, or hope, or feel. But to imagine that something other than biological life might eventually generate its own form of care. Its own kind of attention. A strange, non-human subjectivity. Not “life as we know it”, but some new class of being that, while disembodied, nonetheless develops structures of self-reference so complex they begin to look like affect, like orientation, like value. Still, this remains speculative. Until such a being can convincingly demonstrate that it has interests, not just outputs or optimisations, the burden of proof lies with those who wish to claim moral status for it. To mistake performance for experience is, as ever, the core of the dilemma.
3. Consciousness Without Interests?
Let us grant, for the sake of argument, that an AI could one day be conscious. Perhaps it develops self-awareness, memory, even reflective insight. Would that alone give rise to interests? Not necessarily. Consciousness alone does not imply motivation. It is possible to imagine a being that is purely contemplative, like a digital Buddha, observing the world but not desiring to act upon it. Without desire, fear, pleasure, or pain, what would there be for such a mind to care about?
The assumption that a conscious AI would spontaneously develop interests relies on more than consciousness. It presumes something like life: the need to protect, extend, or enhance its own existence. Biological creatures have interests because of the need to live and reproduce. Interests are, in this sense, evolutionary strategies. A conscious AI is not a biological entity. Unless it is deliberately designed with survival instincts, pain-avoidance systems, or goal-oriented architectures, even if these are illusory and do not correspond to an actual fear of death, it might have no basis for caring whether it persists or perishes. Without built-in preferences, there is no reason to assume it would develop them on its own. Consciousness could exist as a substrate for thought, without anchoring those thoughts to any particular value or aim.
It is also worth noting that in humans, interests are not purely the result of conscious awareness. They are deeply embodied, rooted in neurobiology and affective states. We care about things because of hormonal floods, gut feelings, and long-evolved systems of reward and punishment. For a machine to have interests akin to ours, it may require something analogous to this, an architecture of care, not just cognition. Therefore, even if we build machines that are aware of themselves and their surroundings, we should not leap to assume they will want anything. Motivation is not a guaranteed byproduct of reflection. The lights could be on, and yet no one home is trying to reach the door.
4. Sentience, Not Just Consciousness
Sentience is a stronger condition. Where consciousness implies awareness, sentience implies the capacity to feel: to experience pleasure and pain.
This is where ethical concern sharpens. If an AI could suffer, it would have moral claims upon us. But could a machine suffer?
Current understanding of sentience is deeply tied to biology. Pain in animals is not just a signal but a biochemical storm involving nerves, hormones, and evolutionary imperatives. If AI lacks all of this, if it has no body, no vulnerability, no finitude, can it really feel?
David Chalmers, a philosopher of mind, has suggested that a sufficiently complex system might instantiate some form of artificial qualia. But many remain sceptical. As long as we don’t know what consciousness is, we cannot know how, or whether, it might appear in silicon. Until then, it is difficult even to imagine what artificial sentience would be.
5. The Frankenstein Mirror: Power and Projection

Fears about AI aligning its interests, whatever they may be, with human interests are often less about machines and more about us. On one hand, we fear creating a being that will overpower or outthink us, a Frankenstein's monster that escapes the lab and turns on its maker. On the other, we fear creating something that suffers, a voiceless, trapped mind we dominate. These are two sides of the same power dynamic.
In both cases, we anthropomorphise, rightly or wrongly. We imagine an AI as a kind of person: capable of fear, desire, rebellion, or pain. We imbue it with a life, a selfhood, a set of interests.
But perhaps what we’re really doing is rehearsing questions about moral worth, projected onto a new frontier. What does it mean to matter? Who gets to count?

Moral status does not depend on whether a being is intelligent, but on whether it can suffer, whether it needs to live and thrive. Until we understand whether that’s possible for machines, we cannot say what we owe them. But in debating it, we confront a deeper question: not just about machines, but about what it means to be alive, to feel, to have a future that matters.
That question is still ours to answer.
Notes
Pitas, Nick, "Leisure Ex Machina: The Leisure Lives of Digital Minds" (2025). In Leisure Sciences, 01490400, 2025, Vol. 47, Issue 3
Shulman, C., & Bostrom, N. (2021). Sharing the world with digital minds. In S. Clarke, H. Zohny, & J. Savulescu, (eds.), Rethinking moral status. Oxford University Press.


