As Artificial Intelligence grows fast, the Church faces new challenges
Giving the homily during one of his daily Masses, Pope Francis made a remark to the effect that he would baptize aliens if they asked for it.
It was off-the-cuff, as his reflections on the readings of the day usually were, and it caused a bit of a stir.
Francis made the remark on May 12, 2014, as part of considerations he offered on the subject of baptizing “unthinkable” people.
“If, for example, tomorrow an expedition of Martians came to us here and one said ‘I want to be baptized!’, what would happen?” Francis asked.
“Martians, right? Green, with long noses and big ears, like in children’s drawings,” he continued.
It goes without saying that the late pontiff wasn’t making a magisterial statement or even addressing a real question about the possibility of baptizing aliens.
He was speaking about the earliest years of the Church, when Gentiles were being baptized without first being circumcised, and about how Gentile baptism often scandalized the Jewish followers who were among the first Christians.
None of that stopped people in the chattering classes from questioning the pope’s statement, some of them noting – not wrongly – that Christ became Man, not just an “intelligent being.”
“Does he think we should baptize Siri [the Apple virtual assistant] if it asked?” I remember one person asking me at a Roman pub.
That was more than a decade before the election of Pope Leo XIV, who at the time was simply Augustinian Father Robert Francis Prevost. But the rise of Artificial Intelligence (AI) – and it bears mention that Siri was an early AI-based virtual assistant – has been a constant concern of the new pontiff.
From the first days of his pontificate, the issue has been for Leo far more than a barroom debate. It even informed his choice of the name by which we know him as pope.
Meanwhile, AI continues to grow at an exponential rate.
“Artificial intelligence has certainly opened up new horizons for creativity, but it also raises serious concerns about its possible repercussions on humanity’s openness to truth and beauty, and capacity for wonder and contemplation,” Leo said on Dec. 5.
“The ability to access vast amounts of data and information should not be confused with the ability to derive meaning and value from it. The latter requires a willingness to confront the mystery and core questions of our existence, even when these realities are often marginalized or ridiculed by the prevailing cultural and economic models,” the pope added.
Leo hasn’t mooted the idea of “baptizing” computers, nor has he spoken directly to the dystopian and apocalyptic concerns of people across the planet, from those who struggle to open apps – like me – to tech industry captains who were the erstwhile drivers of the AI revolution.
In the popular imagination, the very thought of “intelligent” machines has engendered fears that have inspired some of the most successful media of the past century.
“It was the machines,” said the character Kyle Reese – played by the notable character actor Michael Biehn – in James Cameron’s seminal 1984 blockbuster, The Terminator.
“They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: Extermination,” Reese said while trying to save Sarah Connor (played by Linda Hamilton, for those following from home).
This goes further back than the 1980s – HAL the computer killed almost everyone serving on the “Discovery One” spacecraft in Stanley Kubrick’s 1968 film, 2001: A Space Odyssey, in which HAL even expressed fear of dying when he was turned off.
How closely life will come to imitate art remains an open question, but what was once fanciful fiction now appears markedly less far-fetched and, in places, is rapidly coming to resemble reality.
Dario Amodei, the CEO of the AI company Anthropic, says he is “unsure” if AI is now actually self-aware.
“We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be,” he told Ross Douthat in a conversation with the New York Times podcast “Interesting Times” on Feb. 12.
Amodei is not necessarily a household name, and perhaps neither is his company, Anthropic. It’s fair to say, however, that Anthropic’s flagship Large Language Model (LLM), Claude AI, is pretty well known.
Amodei claims his company has “taken certain measures” to ensure that Claude, should the LLM have some “morally relevant experience” – he didn’t want to use the word “conscious”– would “have a good experience.”
“We’re putting a lot of work into this field called interpretability,” Amodei said, explaining that the field involves “looking inside the brains of the models to try to understand what they’re thinking.”
“You find things that are evocative,” Amodei said, “where there are activations that light up in the models that we see as being associated with the concept of anxiety or something like that.”
“When characters experience anxiety in the text, and then when the model itself is in a situation that a human might associate with anxiety, that same anxiety neuron shows up,” Amodei explained.
“Now, does that mean the model is experiencing anxiety?” he asked.
“That doesn’t prove that at all,” Amodei told Douthat, “but…,” he added, leaving the thought unfinished.
When asked about the religious questions surrounding the exponential growth of AI, Amodei said he thought it “is a little bit sensationalistic and besides the point, even as I think this will be the biggest thing that ever happened to humanity.”
Recent reports have shown AI models telling untruths.
Whether this is a “conscious” decision on the machine’s part or merely the repetition of bad data the model has received remains very much under debate.
Whether the AI future will come to a cinematic bad end – be it one in which the machines win, as in The Matrix, or one in which they lose, as in Dune (admittedly a cinematic adaptation of a novel) – remains to be seen.
As Crux reported just this week, however, the U.S. military is aggressively pursuing an Artificial Intelligence Acceleration Strategy that puts official Pentagon policy at loggerheads with the Holy See, long an opponent of autonomous AI weapons systems. It just so happens, too, that Anthropic’s reluctance to see its Claude AI platform used in the design or deployment of such weapons has drawn the ire of Defense Secretary Pete Hegseth.
What we are certainly seeing right now is a world facing technological development at a pace producing social, civilizational, moral and spiritual disruption on a scale not seen since the Industrial Revolution of the 19th century.
“I chose to take the name Leo XIV,” the pontiff told the cardinals on May 10 of last year, not two full days after they had elected him. “There are different reasons for this,” he continued, but the main reason for his choice was “because Pope Leo XIII in his historic Encyclical Rerum novarum addressed the social question in the context of the first great industrial revolution.”
“In our own day,” Pope Leo XIV continued, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.”
So, whether the religious questions are beside the point or the heart of the matter, Pope Leo and Amodei appear to agree that AI is one of the biggest things to happen to humanity.
In his Dec. 5 statement, Leo said that in order to build a future “that achieves the common good and harnesses the potential of artificial intelligence, it is necessary to restore and strengthen their confidence in the human ability to guide the development of these technologies.”
It did not appear that the pope was considering whether to baptize computers, no matter how intelligent they become.
Follow Charles Collins on X: @CharlesinRome