From San Francisco to Kigali: Silicon Valley's Ambitions in Medicine and the Loss of Human Agency in Health Care
Editor’s Note: This post is part of a series of responses to Reclaiming Human Agency in the Age of Artificial Intelligence, curated by Alessandro Rovati, the Associate Editor of the Journal of Moral Theology.
The dam is starting to break. Recently we learned that AI programs will apparently be permitted to prescribe medicine. For now, it will be limited to re-authorizations of very common drugs for mental health, but there is no principled reason it will stay here. We also recently learned that one of the world’s leading AI companies, Anthropic (maker of Claude), signed a three-year MOU with the Government of Rwanda committing to deploy AI toward eliminating cervical cancer, reducing malaria, and lowering maternal mortality. Again, for now. There is no principled reason it will stay there.
There are clear goods associated with these moves, especially Anthropic’s move in Rwanda. Indeed, there are Catholic organizations in that country working toward very similar goals.
But, importantly, Reclaiming Human Agency (RCA) equips us to go deeper and ask, “What vision of the good underlies the deployment of these tools?” Having good goals is not the same as having a coherent vision of the good. The missing teleology in Anthropic’s plan, a missing sense of what the human person and health care are for, is not only a concern; it also reveals a Catholic-sized hole in Silicon Valley that cries out to be filled. Happily, as I learned at a recent convening at Anthropic with some of their leadership, they are very interested in engaging with Catholics and others on these and related questions.
RCA points out that much of the AI industry sells “freedom from.” Freedom from inefficacy, drudgery, and the mundane. But our tradition insists on “freedom for.” Freedom for love of God and neighbor, virtue, and excellence. RCA’s global point is that AI’s promise to free us risks undermining the very practices that form human beings into the agents they are meant to be.
I found RCA’s worry about “deskilling” in Chapter 6 to be of particular relevance. The worry is not just that we will be deskilled in a technical sense, though that is certainly part of the picture, but that moral capacities will atrophy as well, especially those developed through practices requiring judgment. This is the deepest and most worrisome form of deskilling, and it is of particular concern in health care. RCA notes that, in clinical settings, health risks are increasingly flagged by algorithms, and physicians must decide whether and how to act on them. The key question is then raised: is AI a mere tool supporting ends that God and human beings have decided upon, or is AI shaping the ends themselves?
Anthropic’s commitments to Rwanda are not abstract, but are measurable, national targets. They imagine that their health care tools can be used safely and independently by teachers, health workers, public servants, and other regular folks. This meets a very serious need, for many communities in Rwanda have difficulty accessing both information and regular relationships with clinicians.
These are very, very good goals. Goals which, again, are supported by local Catholic groups working in the country. But the bioethical issues here abound. Some obvious, classic topics that come to mind map onto issues in AI ethics more broadly: accuracy, safety, informed consent, data security and privacy, and so on. But the ideas in RCA help us dig deeper. Here are three concerns I have about these moves, concerns that are related to and indeed build on each other.
1. Directive language in clinical AI. In clinical medical ethics, we often worry about “directive” language: that is, when a provider steers a patient toward a particular outcome (sometimes subtly, sometimes not) rather than presenting options neutrally. A classic example is in end-of-life counseling; a physician who says something like “most patients in your situation choose comfort care” is technically presenting options, but practically she is also directing an outcome. This can be problematic for any number of reasons, but most often because it reflects a particular bias of the physician. When Claude is employed in health care contexts, the way it phrases options, sequences them, frames probabilities, or describes outcomes will inevitably have directive weight. There is no way to ensure a neutral AI presentation in this context. Every design choice embeds a value judgment about what a good outcome looks like.
2. The unlimited options problem. Or the problem of Burger King medicine. The logic of consumer AI in health care tends toward what one might call “radical optionality”: the idea that a good AI health tool gives patients more choices, more information, more access. But this actually undermines the concept of a profession: a practice with internal goods, standards of excellence, and goods that cannot be achieved by just any means. The American Medical Association’s recent and repeated affirmation that physician-assisted suicide is incompatible with the healer’s role is exactly this: medicine has certain constitutive commitments that cannot be traded away for patient preference. A physician is not a vending machine. A medical clinic is not Burger King, where the customer “has it their way.” Will Claude limit patient options? If not, this risks undermining the very nature of medicine itself as a profession. If Claude will limit options, then the question becomes: on what basis will those options be limited?
3. Which vision of the good is medicine based on? It certainly would make things easier if there were a neutral, purely rational ground from which we could answer this question. But there is not; the answers all come from some particular vision of the good. Again, Anthropic’s goals in Rwanda are very good. But having good goals is not the same as having a coherent vision of the good. And this difference matters enormously for how one pursues one’s goals. A purely consequentialist/utilitarian framework insists that we achieve these outcomes by whatever means produces the best aggregate result. But such a framework has well-known implications: it can justify coercive population health interventions, triage systems that deprioritize the less productive, the systematic devaluation of patients whose conditions are expensive to treat, and many, many more very bad things. But then the question arises: if one rejects a consequentialist/utilitarian analysis, then some means to one’s good ends must be ruled out. Which vision of the good will be used to do so?
RCA’s turn toward Catholic social teaching’s principle of subsidiarity is directly applicable here. Anthropic has said that the Rwanda partnership prioritizes local autonomy over how new technologies are introduced. This is promising language, but subsidiarity in the Catholic sense means more than local capacity-building. It means that the people most immediately affected should have genuine decision-making authority over how AI shapes clinical encounters in their particular context.
In Rwanda, it is not only the case that the Catholic Church runs a significant share of the health care infrastructure; Roman Catholicism is also the country’s largest religion, claimed by four in ten Rwandans. Catholic institutions have long experience navigating the tension between technical medicine and the human, relational, spiritual, and communal dimensions of healing. They have long worked from the foundational view that not every means to a good end should be pursued.
RCA helps us understand that a Catholic vision in Rwanda would ensure Claude is used only in ways that preserve both health care itself and, relatedly, the authentic agency of those who practice it. In short, it must preserve the notion that health care is a human vocation, a calling from God. Goods external to the practice of health care, including a consequentialist focus on efficiency, must not threaten the goods internal to the practice of health care.
Anthropic’s origin story is one of putting its internal values and ethics ahead of external goods like efficiency, market capture, and financial expediency. This mattered, most recently, in its courageous stand against the Department of War when it came to autonomous weapons and mass surveillance. And, again, Anthropic seems quite interested in dialogue with Catholics and others who want to suggest additional ways in which a foundational commitment to a vision of the good means doing things differently. I have hope that its increased influence will create an opening for ideas like those presented in RCA to gain increased traction at this crucial historical moment.
Editor’s Note: For more conversations with the Journal of Moral Theology’s authors and responses to their essays, check out HERE.