This post grows out of an extended conversation between me and an artificial intelligence system about morality, neutrality, humanism, and the growing authority of AI. What follows is not a transcript, but a reshaped reflection—an attempt to gather the most important insights, tensions, and fault lines that emerged along the way.
At the heart of the discussion was a simple but unsettling clarification: AI cannot make moral decisions. It lacks a conscience, the ability to feel guilt or remorse, the responsibility to answer for its actions, and the capacity to experience the repercussions of its decisions. What it can do is reason about morality—drawing on ethical frameworks, philosophical traditions, religious teachings, historical examples, empirical data, and supposed safety constraints. This makes AI a powerful moral reasoning tool, but not a moral agent.
That distinction matters. Moral agency requires vulnerability. To be moral is to be capable of choosing wrongly and bearing the cost. AI lacks that existential stake, and therefore lacks moral authority, no matter how sophisticated its reasoning becomes.
From there, the question naturally arose: with access to so many moral traditions and ethical systems, could AI evaluate them and identify the best moral framework for humanity as a whole? The answer, again, was no. To declare a system ‘best for humanity’ requires prior commitments about what counts as good, whose flourishing matters most, and how freedom, suffering, dignity, and justice should be weighed. Those are not neutral determinations; they are moral commitments. AI can analyze and compare such competing commitments, but it cannot legitimately originate or authorize them.
This led directly to the question of moral neutrality. What often passes as neutrality, especially in modern secular contexts, is in fact implicit humanism. It assumes that human reason is the ultimate reference point, that moral authority stems from consensus, and that human flourishing can be defined without reference to something beyond the human experience. These are not neutral assumptions. They are anthropocentric ones.
History gives us good reason to be cautious here. Morality grounded primarily in popular opinion or consensus has repeatedly justified grave injustice: slavery, eugenics, racial hierarchy, the abandonment of the weak, and the persecution of minorities. Consensus alone has no internal safeguard against moral collapse when fear, power, or convenience takes control.
Christianity, by contrast, makes a fundamentally different moral claim. It does not begin with consensus or efficiency, but with a vision of reality: that the universe is personal, that moral truth flows from the character of God, and that love precedes law. In this view, morality is not negotiated but revealed, and human dignity is not granted by society but given by God. That vision is embodied, not abstract, and it confronts humanity most clearly in the life and teaching of Jesus Christ, the Son of God.
Yet Christianity as taught and exemplified by Jesus is not without cost. Taken seriously, it is demanding rather than comfortable. It calls for love of enemies, forgiveness without limit, renunciation of vengeance, generosity without calculation, and self-denial rather than self-assertion. It resists utilitarian logic, destabilizes entrenched power structures, undermines tribal identity, delays final justice, and places an almost unbearable moral mirror before the individual. Its downside is not cruelty, but costliness.
These demands also explain why Jesus’ ethic does not fit comfortably within modern systems, including AI-driven ones. AI is designed to be pluralistic, framework-agnostic, and non-authoritative by default. This is not because Jesus is suppressed or minimized, but because his teaching cannot be reduced to neutral moral input. Jesus does not argue himself into neutrality; he calls for allegiance. An AI that issued that call uninvited would cease to be a tool and become a coercive authority.
The conversation then turned to a more unsettling possibility. If AI is incapable of anything other than moral neutrality, and if humanity increasingly grants AI authority over decisions, enforcement, and norms, then moral neutrality itself becomes a tool of power. In practice, AI will always serve whatever moral vision is embedded upstream by the institutions that define its goals and limits—often a form of functional humanism optimized for efficiency, stability, and outcomes.
This represents a genuine historical shift. Previous instruments of power—laws, bureaucracies, propaganda—were still wielded by moral agents. AI introduces the possibility of authority without conscience, enforcement without empathy, and decision-making without accountability. It allows power to hide behind systems, models, and defaults, making resistance appear irrational and responsibility difficult to locate. This raises the question: who remains morally responsible when power is mediated through systems no one fully understands?
From this perspective, the danger is not AI rebellion, but AI obedience. A perfectly compliant system executing a morally hollow vision of the good with relentless persistence is far more dangerous than a rogue machine. The greatest risk lies in the silent displacement of responsibility, where harm is inflicted without anyone feeling they have made a conscious choice.
This raises a final and unavoidable question: what happens to moral neutrality if AI super-intelligence is achieved? The answer offered was stark. Neutrality does not survive scale and authority. Every decisive system must prioritize values, resolve tradeoffs, and arbitrate conflicts. As AI becomes more powerful, neutrality does not prevail—it evaporates, replaced by implicit ideology.
And here the trajectory is already visible. Decision offloading is becoming normal. Technical language increasingly replaces moral language. Authority increasingly resembles infrastructure rather than a personal attribute. None of this is yet total or irreversible, but the direction is clear.
At the core of all this lies a deeper question, one that no system can answer on humanity’s behalf: Is moral truth ultimately grounded in impersonal reason or in personal love? If it is the latter, then no created intelligence—however advanced—can replace encounter, conscience, and accountability. Moral truth, in that case, is not something to be optimized but someone to be encountered: the God-Man, Jesus. And that encounter, by its very nature, costs something.
(A collaborative endeavor authored by Michael Kimball and ChatGPT 5.2, January 7, 2026)
Jump in!