Are Chatbots Like War?
Using Just War Criteria as an Ethical Framework for AI Use
The Church is not opposed in principle to artificial intelligence. After all, the Catholic magisterium is not anti-technology. John Paul II explains that “science and technology are a wonderful product of a God-given human creativity.” Benedict XVI teaches that technology expresses human beings’ desire to overcome material limitations and respond to God’s command to till and keep the land (cf. Genesis 2:15). Building upon his predecessors, Francis writes in Laudato Si’ that “technology has remedied countless evils which used to harm and limit human beings. How can we not feel gratitude and appreciation for this progress, especially in the fields of medicine, engineering and communications?” (no. 102).
At the same time, the Church also warns us about our society’s idolatry of progress, our simplistic belief that every increase of technological power is necessarily a good, and the many ways in which we easily become blind to our own limitations and the gravity of the challenges that confront us (cf. Laudato Si’, no. 105). The truth is that “immense technological development has not been accompanied by a development in human responsibility, values and conscience” and that, accordingly, “we stand naked and exposed in the face of our ever-increasing power, lacking the wherewithal to control it. We have certain superficial mechanisms, but we cannot claim to have a sound ethics, a culture and spirituality genuinely capable of setting limits and teaching clear-minded self-restraint” (Laudato Si’, no. 105).
Thus, the advent of artificial intelligence puts us at a special crossroads that requires careful ethical analysis and deliberation so that we may develop and use AI in ways that “reflect justice, solidarity, and a genuine reverence for life.” Here is the problem, though: technological products are not neutral. Laudato Si’ insists that they “create a framework which ends up conditioning lifestyles and shaping social possibilities along the lines dictated by the interests of certain powerful groups. Decisions which may seem purely instrumental are in reality decisions about the kind of society we want to build” (no. 107). Accordingly, we cannot start our ethical deliberations from scratch, as if AI were one simple tool among many. It is not. As I have explained at length elsewhere, AI poses a unique anthropological challenge that we need to be alert to.
Antiqua et Nova warns us that, while mimicking human intelligence and speech, AI is incapable of moral discernment and authentic relationships (no. 32). Furthermore, AI “lacks the richness of corporeality, relationality, and the openness of the human heart to truth and goodness,” which means that it is dangerous to confuse the algorithmic outputs of this amazing technology with human intelligence and understanding, no matter how complex, efficient, or helpful they may be (no. 34). In the end, as the theologians involved in the AI Research Group of the Centre for Digital Culture of the Dicastery for Culture and Education have argued convincingly, “personhood and intelligence are categories that are not reducible to mechanically replicable behavioral performances, for they involve capacities for subjective, experiential, compassionate engagement with other persons and with reality itself” (11). In fact, the lurking danger of the current moment is a flattening of all intelligence to the number of tasks one can perform. While AI is a product of human intelligence, its ability to simulate thought and speech (together with the design choices that AI labs make to maximize user engagement) constantly tempts us to personalize it and lose sight of the fact that personhood and intelligence can never be reduced to the capacity for outward behavior. When we accept such a reduction, we end up embracing a functionalist perspective that applies to people the output-focused standards we use to judge machines. On the one hand, such a mentality makes us lose sight of the “sharing of minds and hearts that we most greatly treasure in personal relationships and, ultimately, in our share in the life of God” (11). On the other hand, it reinforces the mentality of the throwaway culture that looks at those whose abilities are limited or impaired in any way (the unborn, the unconscious, and the elderly, for example) as lesser members of the human community who can be discarded (Antiqua et Nova, no. 34).
The epochal change brought about by AI “requires responsibility and discernment to ensure that AI is developed and utilized for the common good, building bridges of dialogue and fostering fraternity, and ensuring it serves the interests of humanity as a whole.”
Catholic scholars have taken up the task of developing frameworks for ethical discernment related to AI (including here at Catholic Moral Theology). Some have argued that engaging generative AI through chatbots is always evil, while others have drawn on Catholic social teaching to discern what AI uses and designs might be moral. I want to contribute to this ongoing conversation by proposing that we use just war criteria as a tool to discern the morality of AI use.
Let me start with a few caveats.
First, AI is a very broad category that encompasses many quite different applications. Machine learning has been a staple of our technological age for more than a decade now, and various forms of artificial intelligence have been deployed to make possible many of the apps and services we use daily. Second, even when we refer more specifically to generative AI, its uses are so broad and diverse that it is quite difficult to take a general stance on them. Third, AI is not like war in that it does not always involve the killing of human beings (except when it does, of course). Fourth, as recent news shows, the just war tradition is a complex and contested one. These caveats notwithstanding, the just war framework is helpful for investigating how to engage with AI carefully and avoid a “going with the flow” mentality.
Using just war criteria in an analogous way, I would argue that the Gospel demands a presumption against generative AI and chatbots, encourages us to pray for freedom from their bondage, and asks all people to work for their avoidance (cf. CCC, nos. 2307-2308). Given the conditions of fallen humanity, using generative AI might be justified at times, but Christians should engage with it only as a last resort, for just causes, with the right intentions, and if doing so does not produce graver evils and disorders (cf. CCC, no. 2309). The analogy with the just war framework also reminds us that there are ethical demands we must respect when we decide to use AI (jus ad AI), while we are using AI (jus in AI), and after we have used it (jus post AI). Finally, thinking of AI through the lenses of the just war tradition makes us recognize that some will want to renounce it altogether and become conscientious objectors to bear witness to the gravity of its moral risk (cf. CCC, no. 2306) and that, accordingly, governments and institutions (tenure committees, for example) will need to protect their rights of conscience (cf. CCC, no. 2311).
Taken together, these criteria help us discriminate between cases in which AI use is always forbidden (simulating personhood to exploit and monetize people’s needs for relationships, deception and misinformation, autonomous weapons systems, outsourcing intellectual work in educational settings, sex robots, deepfakes, and more) and others in which, for the sake of pursuing authentic human flourishing and the common good while protecting inviolable human dignity, its use might be ethical, provided that human abilities alone could not attain goods that are essential to integral human development (some applications in science and medicine, for example). The criteria help us tackle difficult areas where careful discernment is needed (for example, when is it moral to substitute human labor with AI? What tasks are ethical to outsource to AI, given the deskilling that accompanies all outsourcing?), while constantly reminding us that the material and spiritual conditions of our age make human exploitation the most likely outcome when it comes to technology (let’s learn our lessons from the ongoing discovery of the harmful effects of screen-based childhood, smartphones, and social media!).
Every time I read or hear that AI could serve the good if used in the right way, I think of the fact that Adam and Eve could have obeyed God in the garden (cf. Genesis 3). They did not, though, which leaves us in a wounded and fragile condition where, without the ongoing aid of grace, concupiscence, deception, injustice, inequality, envy, greed, and pride constantly push us to use AI in a disordered way that harms human dignity, agency, and relationships (cf. CCC, no. 2317). Just as Christians never engage in war with efficiency, productivity, or the pursuit of power as the determining factors for discernment, so too we should not let such considerations determine whether to use AI. Instead, everything we do must be the outworking of Christian discipleship, a calling that summons us to walk the narrow path not just of using AI for the good but, most importantly, of walking towards holiness and the kingdom. In the end, what good will it be to gain the world thanks to the power of AI if we lose our souls?

