What Does a Positive Ethical Vision for AI Look Like?
A Catholic Computer Scientist Chimes In
Editor’s Note: This post is part of a series of responses to Reclaiming Human Agency in the Age of Artificial Intelligence curated by Alessandro Rovati, the Associate Editor of the Journal of Moral Theology.
As a computer scientist, my vocation is to engineer digital technologies. As a Catholic, I desire to do so in a way that genuinely promotes human flourishing and the common good. The recent rapid advances in AI technologies have certainly raised a number of valid social concerns, yet the Catholic Church is not anti-technology. Thus, my fellow Catholic computer scientists and I are hungry for a clear and robust positive ethical vision for AI. Much of the conversation around AI ethics thus far, in both the secular and theological realms, has clearly articulated AI’s harms. This is important work; however, the positive visions for AI that I have encountered tend to be underdeveloped in comparison to the critiques. In contrast, the volume Reclaiming Human Agency in the Age of Artificial Intelligence (which I will refer to henceforth as “the volume”) provides a fresh perspective and clear guidance on both the harms to avoid and the goods to foster in AI development, all through the lens of human agency, which proves an effective frame for considering these questions.
As a computer scientist, I have contributed to scholarship on ethical technology design, including creating a Catholic Social Teaching-based framework for technology design and developing a design method inspired by virtue ethics. The goal of these works is to redesign interfaces, especially for AI and social media technologies, in ways that encourage virtuous behavior: which design features encourage or hinder the practice of particular virtues? The volume puts forth a clear articulation of when an AI system upholds or violates human agency, one that resonates strongly with my research. Agency, while not a virtue itself, is what allows us the freedom to choose to act virtuously. Thus, designing to foster agency is a precursor to designing to encourage virtue. The volume’s frameworks for determining whether an AI system upholds agency can readily be used by any developer wanting to build an ethical AI system.
Additionally, the volume provides the clearest moral guidance I have encountered (and I have read a lot of technology ethics literature!) on three sticky ethical design questions: when nudging is unethical, when we should be concerned about deskilling, and when advertising is harmful.
Nudging, as defined in the volume, is when a design strongly encourages someone to perform a particular action (even if they are not completely coerced to do so). One example mentioned in the volume is a mapping program that routes the user past particular restaurants around lunchtime to encourage them to stop and eat there. We can be nudged toward good behaviors too: my research asks how we can nudge users toward virtue. Thus, the ethics of nudging is ambiguous. The previous literature I had read on the ethics of nudging focused primarily on whether the nudge has a good or bad intent. While intent is important, that criterion felt insufficient: is it ethical to coerce someone into doing a good thing? In contrast, the volume puts forth a more robust framework for evaluating a nudge through the lens of agency. This framework takes into account not only the ends sought by the nudge but also the modality of nudging and the nudge’s relational context. It is the clearest ethical framework I have encountered for developers to think through whether their nudges are ethical.
Deskilling is the phenomenon of losing our ability to perform tasks when those tasks are replaced by technology. For example, frequent usage of a calculator may impede one’s ability to perform mental math. Most technologies deskill us in some way, and it seems concerning that technology could erode our natural human capacities. At the same time, Catholic Social Teaching tells us that technology has a powerful capacity to bring about human flourishing. Given its breadth of capabilities, AI has unprecedented potential to deskill us. In deciding what technologies to build and how to build them, technology developers must navigate the tension between technology’s capacity for deskilling and its potential to help forge a better society. The key question, highlighted in the volume, is what marks the difference between good and bad deskilling. Previous literature on the ethics of deskilling by the philosopher Shannon Vallor highlights moral deskilling as the primary concern. While I agree that moral skills are critically important for humans to preserve, we cannot neglect our practical skills! The volume’s framework for determining when to be concerned about deskilling is the most comprehensive I have seen, categorizing tasks into three different levels of caution for outsourcing to AI based on the nature of the task.
Advertising enables many online platforms to be offered as free services. A consequence of online advertising is that, in service of an unchecked capitalistic mindset, many online platforms build in nudges to keep us hooked on the platforms for as long as possible, so as to show us as many advertisements as possible. This leads to addictive behaviors, consumeristic mindsets, and even manipulation into buying products that one did not previously intend to buy. At the same time, keeping these platforms free increases accessibility, and sometimes targeted advertising can helpfully lead us to products that we genuinely want or need. The volume gives the clearest ethical guidelines on advertising I have seen, articulating when advertising genuinely serves users by helping them discover their needs and wants, versus when it manipulates them by creating those needs and wants.
In several of my scholarly works, my collaborators and I have highlighted subsidiarity as a particularly salient principle for ethical technology design. Because of this, I greatly appreciated the mention of subsidiarity in the positive vision for AI design and distribution. According to the volume, an AI system that abides by subsidiarity will be more decentralized across a number of different dimensions, including decentralizing the models, the data, the computational capacity, and even the talent developing the AI systems.
I have two minor criticisms of the volume. The first is that AI systems take a diversity of forms, from large language model chatbots to decision-making algorithms. At times, when the term “AI” was used, it was unclear what type of AI system was being referred to. Second, the volume poses universal basic income (UBI) as part of a positive vision for AI. While the pitfalls of UBI were briefly discussed, namely that UBI cannot become merely a handout that undermines human meaning and purpose, I wish these criticisms had been explored in more depth. UBI is often criticized as a band-aid rather than an actual solution to the social problems caused by AI, and I was surprised that this perspective was glossed over.
Overall, I found that the volume provides one of the clearest articulations of AI ethics I have encountered, both in naming the harms of AI and in casting a positive vision for it. I intend to draw from it in my research and teaching going forward, and I believe it will be a powerful resource for anyone in the tech industry who wants to engineer AI that advances, rather than hinders, the common good.
Editor’s Note: For more conversations with the Journal of Moral Theology’s authors and responses to their essays, check out HERE.