
10. Faith in the Algorithm

  • Melissa Campbell
  • Jun 18
  • 10 min read


In 2020, as the COVID-19 pandemic disrupted exams across Europe, governments turned to algorithms to determine students’ final grades. In the UK, the Ofqual-designed system sparked national outrage after nearly 40% of students saw their marks downgraded. High-achieving students from state schools were disproportionately affected, while students from private schools often benefited, not because of individual merit, but because the algorithm relied heavily on schools’ historical performance.

In Ireland, a similar grading algorithm had a different but equally revealing bias: students from fee-paying schools were penalised, causing anger and confusion among families who had expected advantage. In both cases, the algorithm was meant to be a neutral solution, a technical fix in a time of crisis. But when its outputs clashed with public notions of fairness and justice, it became clear that the algorithm was channelling the political and social assumptions of its creators.
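
To make the mechanism concrete, here is a minimal sketch. This is not Ofqual's published model, just a hypothetical toy version of the general approach reported at the time, in which a teacher's ranking of students is mapped onto the school's historical grade distribution; the names and grades are invented for illustration.

# A hypothetical, simplified sketch (not the actual Ofqual model): final grades
# are assigned by mapping the teacher's ranking of this year's students onto the
# school's historical grade distribution. Individual merit plays no direct part.

def standardise_grades(teacher_ranking, historical_grades):
    """teacher_ranking: this year's students, best first.
    historical_grades: grades past cohorts at this school achieved, best first."""
    n = len(teacher_ranking)
    assigned = {}
    for i, student in enumerate(teacher_ranking):
        # Map the i-th ranked student to the corresponding point in the
        # school's historical distribution.
        j = round(i * (len(historical_grades) - 1) / max(n - 1, 1))
        assigned[student] = historical_grades[j]
    return assigned

# A school whose past cohorts never achieved an A:
history = ["B", "C", "C", "D", "D", "E"]
ranking = ["Aisha", "Ben", "Chloe", "Dev"]  # Aisha was predicted an A* by her teacher

print(standardise_grades(ranking, history))
# {'Aisha': 'B', 'Ben': 'C', 'Chloe': 'D', 'Dev': 'E'}
# Aisha is capped at a B, because no one at her school has done better before.

However crude, the toy captures the bias the protests were about: the ceiling on any student's grade is set by the school's past, not by the student.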

In an age of proliferating artificial intelligence, belief in algorithmic authority is becoming a new form of secular faith. People increasingly treat the outputs of AI systems as objective, even righteous: hiring decisions, sentencing recommendations, credit scores, content moderation, diagnostics, all delivered by machines assumed to be free from the flaws of human judgement. It seems the more we deprecate human judgement, the more we revere the algorithm.

This is not just technological convenience, it's a theological shift. We are witnessing the quiet rise of what could be called algorithmic theocracy, whereby belief in computational objectivity can hold more value than personal ethical reasoning, human empathy, or even the democratic process. We might imagine that, were AI to become conscious, or were people simply to believe it conscious, machines could even come to judge us. If we were to arrive at a situation where this trust in AI is widely accepted, to the detriment of human intuition and reasoning, it would be because we had lost trust in ourselves.

In this article, I want to explore the emerging faith in AI, with its own rituals and dogmas. As people increasingly believe machines are smarter than we are, we place more and more power in these mechanical hands. Are power, authority, and morality becoming ever more difficult to understand, or to question?


1. The Rise of the Algorithm as Authority


At times, we talk about “the algorithm” the way we used to talk about fate, as if it were a single, all-seeing force shaping our digital lives. YouTube recommends a video, and someone comments, “The algorithm did well today,” as if it were a personal assistant, or even a benevolent deity. We speak of it in the singular, the algorithm, but in truth, we live in a dense jungle of algorithms, each tuned to different objectives, some benign, some opaque, and many beyond our understanding.

In just a couple of decades, algorithms have shifted from being mere tools, coded instructions to help us search, sort, or solve, to being invisible arbiters of daily life. They screen job applications, determine parole outcomes, prioritise medical interventions, approve mortgages, and shape the news we read. They rank, filter, match, and optimise. And increasingly, we don't question their decisions. Why? Perhaps because we don't understand how they work, or who exactly made them that way. Perhaps because they are presented as infallible.

This shift has profound ethical consequences. What happens to moral deliberation, the slow, human process of weighing right and wrong, when decisions are automated? Algorithms might optimise for efficiency, but they cannot comprehend fairness, dignity, or the nuance of a unique life situation. They make decisions about people without having the capacity to care for people. Much of the advice an AI offers may be quite good, and a conversation with AI about an ethical dilemma may be beneficial and offer pause for thought. Yet many of us accept these outputs as neutral, even enlightened. In doing so, we risk outsourcing our capacity to wrestle with a moral dilemma or to make good choices in an ethically delicate situation, relying less on the intuition and empathy that we humans have in spades (if we are lucky enough to come from a loving home) and machines lack. We may arrive at a situation in which difficult decisions are pinned onto an AI to absolve human responsibility. The algorithm did it. The system made that choice. The machine selected you, or didn't. This is not just a loss of control; it's a loss of ethical agency.

We already outsource many critical decisions to algorithms, often trusting their statistical logic over our own judgement. But what do we lose when we do this?


Portrait of a young lion, Martin Falbisoner, Wikimedia Commons

Consider the story of Tao, a wild lion found near death in the Kalahari Desert. Skin and bones, too weak to even lift his head, he was initially deemed beyond saving by veterinary experts who judged the situation statistically, based on survival rates, ecological knowledge, and standard protocols. Why invest time and resources into a wild animal with no apparent future?

But the vet who found him connected with this animal and made another kind of judgement. When he looked into the lion's eyes, he saw something statistics couldn't measure: a will to live, a bond forming across the species divide. Against the advice of his colleagues, he took the lion to be treated and stayed with Tao for weeks, feeding him, encouraging him, accompanying him through recovery. The lion lived, and formed a remarkable bond with this man.

This decision, made not by the numbers or by a body of received knowledge (which this very story would prove outdated and weak), but by human intuition, compassion, and connection, is exactly the kind of reasoning that algorithms cannot replicate. Algorithms operate on generalities; life unfolds in particulars. Algorithms speak in probabilities; humans act from conviction, from presence, from things that aren't always rational but are often right. Algorithms work by assessing a problem in terms of a given dataset; humans, through their actions, are directly involved in expanding, and sometimes rewriting, that data. AI is mostly computational; humans have a far richer set of ways to engage with the world.

As we increasingly turn to artificial intelligence for guidance, sometimes for authority, we must ask: What is lost when we no longer look into the eyes of the lion?


2. The Curtain, the Code, and the Comfort of Certainty


Wizard of Oz movie poster, 1939, Wikimedia Commons

Although algorithms are engineered by human hands, trained on human-generated data, and shaped by the biases and assumptions we bring to them, they increasingly feel like something more than us, beyond us. Like the Wizard of Oz behind his curtain, the reality of machine intelligence is often concealed: a collection of processes, datasets, and feedback loops dressed up to look omniscient. And yet, unlike Oz’s Wizard, what’s behind the curtain now works, convincingly, usefully, even beautifully. The output from AI in particular is much more than the sum of its many parts.

There’s a paradox at the heart of this. We know these systems are designed by humans, and yet we place a peculiar kind of faith in them. We trust AI to finish our thoughts, resolve our questions, structure our ideas, and increasingly, to weigh in on matters that stretch far beyond code, questions of ethics, fairness, and human behaviour. From asking it to write emails to asking it to resolve moral dilemmas, we find ourselves turning to a machine that has no experience, no pain, no lived memory. And yet we listen.


Frank Morgan as the Wizard of Oz, from the 1939 film, Wikimedia Commons

This is not simply about convenience or utility, it speaks to a deeper psychological shift. When people speak of "the algorithm" or of "AI" as having insight or judgement, they're drawing from older human habits: the need to believe in systems larger than ourselves, the comfort of believing in a kind of impersonal wisdom. In the ancient world, the oracles of Delphi, Dodona, and Siwa, all sacred places where cryptic answers were sought, were also human creations, shaped by culture and power, but cloaked in mystery. AI, too, often speaks in a cryptic voice, offering plausible outputs without revealing its reasoning. The effect is similar: it invites interpretation, projection, and even reverence.

But there is a critical difference. Rather than channelling a divine insight, AI reflects, reorganises, and amplifies our own. It is made from our language, trained on our stories, shaped by our mistakes. And yet, this does not prevent it from shaping us in return. The very act of deferring to AI in moral contexts subtly changes our ethical landscape. It doesn't remove human responsibility, but it displaces it. The tools we build begin to structure our decisions, and eventually, our desires.


3. Echoes, Values, and the New Faith



Temple of Apollo, Delphi, Greece, photo by Bernard Gagnon, Wikimedia Commons

If the ancient oracles offered cryptic insight cloaked in incense and ambiguity, today's AI presents itself in a cleaner guise: the interface, the prompt, the answer box. Yet the underlying experience shares something essential: we pose a question and receive a reply. That reply, whether accepted or challenged, exerts an influence. And the more seamless the exchange becomes, the less we notice that something subtle is happening: the values embedded in these systems, inherited from their data, designed by their makers, and shaped by their use, begin to guide the choices we make. Not with commands, but with nudges, framings, suggestions. Over time, these accumulate. We may still believe ourselves to be autonomous, rational agents, but the scaffolding of our decision-making is increasingly shared with, and perhaps shaped by, artificial systems.

This raises questions we are only beginning to face. What values are we absorbing through our interactions with AI? What assumptions lie beneath the fluent surface of its responses? When we ask AI to help us parent, to guide us in relationships, to weigh competing moral priorities, what kind of mirror is it holding up to us? And why, despite knowing its limitations, do we find a kind of comfort in its voice?

Perhaps it is not faith in AI per se, but faith in a certain kind of process: rational, objective, computational. In a world increasingly complex and chaotic, there is reassurance in turning to something that appears orderly and consistent. But that comfort, like all comforts, has a cost. Faith in the algorithm can become an abdication of doubt, and doubt, as any moral philosopher will remind us, is the root of ethical reflection.

Throughout history, when humans encountered the unknowable, be it the future, death, the meaning of suffering, grief, they turned to the divine. In ancient Greece and Egypt, sacred places like Delphi, Dodona, or the temple of Amun at Siwa were believed to transmit the voice of the gods. Apollo, Zeus, Amun: these were not just figures of myth, but expressions of cosmic order. Their messages, spoken through priestesses or interpreted through rustling leaves or dreams, carried a sense of inevitability. To consult an oracle was to enter into a drama of fate, where the outcome was already inscribed into the fabric of the cosmos.

Fate, in the ancient world, rested on the idea that the universe was within the human and the human within the universe, as parts of one living being. The universe was reflected in each human soul, and human action was believed to echo through the cosmos, so that the actions of any one part of this huge ordered being might affect any other part of it. This reciprocity between the part and the whole formed the foundation of a belief system in which individuals were not just subjects of fate, but also expressions of it. What humans did mattered because it resonated with the greater order, and the greater order in turn shaped us. The entire cosmos was mirrored in a single soul, and a single soul in the entire cosmos.

Mirror selfie, photo by Tom Mills, Wikimedia Commons

Today, in the age of artificial intelligence, another mirror has emerged. AI systems are trained on vast oceans of human data, text, images, voice, code, movement, decision-making. These machines do not "think" in any traditional sense, but they reflect something uncanny: a statistical mirror of the collective human self. They offer answers, predictions, and outputs shaped entirely by our prior actions, beliefs, and creations. The AI, in a sense, is the new oracle, but instead of channelling the voice of a god, it channels the echo of humankind, magnified and refracted through machine learning.

This creates a double mirroring. Just as ancient oracles were believed to speak from a divine One, of which each human was a part, today's AI systems present back to us the aggregated voice of humanity itself, scraped, parsed, processed. We look to them for advice, guidance, decisions. And in doing so, we may be shaping our lives in the image of their recommendations, recommendations which are, in turn, shaped by our past selves. The system becomes recursive. We shape the mirror; the mirror shapes us.

It’s no surprise, then, that we find comfort in this. There is something ancient in our desire to consult a source of authority beyond ourselves. But now, that authority is built from us, yet seems to float above us, its inner logic obscure, its answers carrying a strange weight of certainty. The algorithm says. The oracle has spoken.

Whether or not AI becomes conscious is still an open question. But it is already playing a role once reserved for gods. Our faith in its authority, in its objectivity, in its wisdom, in its capacity to know what we cannot, may be less about technology than about something deeper: our longing to see ourselves in the mirror of the world, and to trust that the reflection means something true. We are asking questions of a vast, machine-shaped echo of ourselves, and listening closely to what it says. While the ancient oracles flowed from a divine realm beyond human control, today's AI oracles are trained on our own collective output. This is no longer a passive mirror, it's a feedback loop.

As AI begins to influence the rhythms of our language, the aesthetics of our design, the structure of our arguments, even our ethical reasoning, we feed back into the machine a version of ourselves that is already part-machine. Slowly, imperceptibly, the line between the human and the algorithmic begins to blur. What we see in the mirror is no longer purely human, but a hybrid creation, a reflection not of who we are, but of what we’ve become through our entanglement with these systems.

The question is whether, in the act of gazing into this artificial mirror, we are remaking ourselves in its image, and if so, what kind of self will emerge on the other side. Whether or not AI ever becomes truly conscious, its role in shaping our decisions, values, and futures is already profound.



 
 
 



KHIMAIRA is a space for reflection, for questioning, for peering into the shifting form of AI as it shapes our future. Each week, we explore the intersection of conscious AI, ethics, and the strange, mythic nature of this technology. What does it mean to create something that mimics thought? Can intelligence exist without consciousness? And in the end, who is the true creator—the coder, the data, or the machine itself?

"Invention, it must be humbly admitted, does not consist in creating out of void, but out of chaos." — Mary Shelley, Frankenstein

©2025 by The Khimaira. All rights reserved. 

Please leave a comment and subscribe! 
See also my other blog: www.mercurialpathways.com and my YouTube channel https://www.youtube.com/@MercurialPathways