Encyclical Preview: What Leo XIV Teaches About AI
The Pope's Reflections on the Dangers of Artificial Intelligence
While we are still awaiting Pope Leo XIV’s first social encyclical, he has already started to reflect on the tensions between AI’s growing influence on daily life and the Christian understanding of the person. The Pope thinks that AI “offers great opportunities, but it is also fraught with danger,” because it “raises serious concerns about its possible repercussions on humanity’s openness to truth and beauty, and capacity for wonder and contemplation.”
In his “Message for the 60th World Day of Social Communications,” Pope Leo offers a clear-eyed and stark judgment on the current moment:
By simulating human voices and faces, wisdom and knowledge, consciousness and responsibility, empathy and friendship, the systems known as artificial intelligence not only interfere with information ecosystems, but also encroach upon the deepest level of communication, that of human relationships. The challenge, therefore, is not technological, but anthropological. Safeguarding faces and voices ultimately means safeguarding ourselves. Embracing the opportunities offered by digital technology and artificial intelligence with courage, determination and discernment does not mean turning a blind eye to critical issues, complexities and risks.
In the message, Pope Leo spells out the risks of AI in great detail:
“Algorithms designed to maximize engagement on social media… reward quick emotions and penalize more time-consuming human responses such as the effort required to understand and reflect… [Thus], these algorithms reduce our ability to listen and think critically, and increase social polarization.”
By relying in a naive and unquestioning way on AI and treating it “as an omniscient ‘friend,’ a source of all knowledge, an archive of every memory, an ‘oracle’ of all advice… [we] erode our ability to think analytically and creatively, to understand meaning and distinguish between syntax and semantics.” While AI simply uses complex algorithms that analyze data and then produce well-formed sentences, people mistake its output for meaningful judgments that are the fruit of intelligent, conscious, and moral deliberation. In the long run, the Pope warns us, “choosing to evade the effort of thinking for ourselves and settling for artificial statistical compilations threatens to diminish our cognitive, emotional and communication skills.”
A third area of concern pertains to the difficulty that generative AI introduces in distinguishing between what is real and what is simulated. The digital space is now inundated with videos, images, and “persons” that are not real but created by automated agents. This is a problem in and of itself, but it is made even graver by the fact that such simulated realities influence public debates and individual choices. “Chatbots based on large language models (LLMs),” Leo warns us, “are proving to be surprisingly effective at covert persuasion through continuous optimization of personalized interaction. The dialogic, adaptive, mimetic structure of these language models is capable of imitating human feelings and thus simulating a relationship.” The result, he concludes, is that “they can become hidden architects of our emotional states and so invade and occupy our sphere of intimacy.”
It is hard to escape the judgment that, ultimately, the AI labs and the myriad companies adopting their technology are exploiting and monetizing people’s psychological vulnerabilities to maximize interaction and nudge users toward more of their features and more of their content. According to the Pope, more and more we will be tempted to “substitute relationships with others for AI systems that catalog our thoughts, creating a world of mirrors around us, where everything is made ‘in our image and likeness.’” As a society, we have started to reckon with what happens when screens and social media take an outsized place in people’s lives, imaginations, and habits. We should take the wisdom gained from these hard lessons and apply it to the challenges posed by AI, so as to avoid falling into fabricated parallel realities that usurp our faces and voices.
Finally, Pope Leo alerts us to the problem of bias. “AI models,” he explains, “are shaped by the worldview of those who build them and can, in turn, impose these ways of thinking by reproducing the stereotypes and prejudices present in the data they draw on.” Since such commitments and perspectives remain covert and implicit, they nudge users in ways that are surreptitious and concerning.
All of this leads us to the problem of using AI in educational settings. Learning to read, consider, study, discuss, and write about important texts and ideas is an essential component of the intellectual and moral formation at the heart of education. This is especially true for Catholic institutions that wish to embody the Church’s vision of formation as the creation of an environment where students “freely associate with their teachers in a common love of knowledge” that steers them towards “searching for, discovering, and communicating truth” (Ex Corde Ecclesiae, no. 1). The Church teaches that the classroom should be a place where both students and teachers grow in their ability “to wonder, to understand, to contemplate, to make personal judgments, and to develop a religious, moral, and social sense” (Gaudium et Spes, no. 59). None of this is possible if we outsource to AI the work necessary to develop our intellectual and moral abilities.
Paul Scherz and Brian Patrick Green call this process deskilling: “the person never acquires or fails to maintain the habits and skills necessary to act well because many activities are taken over by machine.” Of all the problems spelled out above, this is the most pressing in the context of education. Reading, writing, conversing, arguing, thinking, creating, evaluating, and disagreeing (just to name a few of the tasks that people may now outsource to AI) are not simply technical skills. They have moral salience and touch on constitutive human elements. In fact, these abilities are important for the development of virtue such that deskilling in this area easily leads to what Scherz and Green call “de-virtuing,” a fundamental impairment of human development and moral growth. Pope Leo is very aware of this issue: “Just as all the muscles in the body die if we do not use them, if we do not move them, the brain needs to be used, so our intelligence, your intelligence, needs to be exercised a little so as not to lose this ability.” He even explicitly told students to refrain from using AI to do their homework and urged priests to keep preparing their own homilies!
Considering all the problematic features of AI, I think that we should severely limit it (if not outright ban it) in educational settings, so as to cultivate the intellectual, moral, and social skills that human beings need in order to develop and flourish. These skills, in turn, may allow people eventually to use AI in ethical ways that serve the common good, protect human dignity, and encourage authentic and integral development. Without spaces that cultivate our humanity and allow it to grow in virtue, it is hard to imagine a future where AI is used for the good rather than simply to accelerate our societal ills. Niall Ferguson has suggested that, while living in today’s world is akin to operating a starship, education must still function as a cloister where the time and space to develop our intellectual and moral virtues are carved out.
Catholic institutions, the Pope tells us, are primed to create such a space to “teach young people to use these tools with their own intelligence, ensuring that they open themselves to the search for truth, a spiritual and fraternal life, broadening their dreams and the horizons of their decision making.” Without such a humanistic formation, he continues, we will grow blind to “the logic behind economics [of AI], [and the] embedded biases and forms of power that shape our perception of reality. Within digital environments — structured to persuade — interaction is optimized to the point of rendering a real encounter superfluous; the otherness of persons in the flesh is neutralized, and relationships are reduced to functional responses.” In contrast, Pope Leo urges us to “return to the reasons of the heart, to the centrality of good relationships and to the ability to get closer to others, without excluding anyone.”