![](https://i0.wp.com/www.catholicnewsagency.com/storage/image/artificialintelligence010325jpg.jpg?w=696&ssl=1)
Credit: LookerStudio/Shutterstock
Vatican City, Feb 14, 2025 / 15:15 pm (CNA).
Pope Francis’ adviser on artificial intelligence (AI) warned of the risks of the new technology, saying that its unregulated use could result in the creation of bioweapons as well as an increase in income inequality.
Father Paolo Benanti discussed ethical and human rights challenges surrounding the use of AI at a Thursday event jointly organized by the Australian embassies to the Holy See and to Italy.
The event took place in Rome following the conclusion of the Feb. 10–11 Artificial Intelligence Action Summit in Paris. This week, more than 60 countries — including the Vatican, Italy, Australia, and China — signed an international pact in France pledging to develop AI in a way that is ethical, open, transparent, and safe. The U.S. did not sign the final version of the agreement.
Benanti, a member of the Vatican’s Pontifical Academy for Life and moral theology professor at the Pontifical Gregorian University, was joined on Thursday by four other panelists to exchange perspectives on the various impacts of AI on global politics, the economy, law, and social interactions among people in the wake of an “AI revolution.”
The other speakers at the Feb. 13 event were: Diego Ciulli, head of government affairs and public policy for Google in Italy; Professor Edward Santow, a member of the Australian government’s Artificial Intelligence Expert Group; Professor Luigi Ruggerone, director of business and innovation research for the Intesa Sanpaolo Innovation Center; and Rosalba Pacelli, a postdoctoral researcher at the National Institute of Nuclear Physics in Padua, Italy.
During the 90-minute discussion, all five speakers raised ethical concerns about whether people could relinquish their responsibility to promote and defend human rights to data-driven solutions generated by machines and algorithmic tools.
According to Benanti, open-source AI models “without any controls” are “the biggest problem now” as they have the potential to enable users to develop harmful technologies, including bioweapons, that threaten humanity.
Several panelists warned that the technology could exacerbate the gap between the rich and poor.
“AI has a risk to generate more inequalities than more opportunities in society,” Google’s Ciulli said. “AI as a technology has more to do with the impact it has on generating wealth and opportunities.”
The Vatican adviser agreed, adding that AI should be harnessed as a “global resource” that could “empower people” but pointed to the reality that the majority of the world’s population does not have access to this software.
Building upon Ciulli’s comments, Ruggerone, speaking as an economist, expressed his concerns about AI’s potential impact on the distribution of wealth and income and on the labor market.
“In the last 70 years, 99.9% of those who receive the wages and income have not seen their wages increase by 1.5% or 2% a year … actually much, much less,” he explained.
“Even if productivity of labor, thanks to artificial intelligence, increases, nobody guarantees us that wages would increase. Actually quite the opposite,” he added.
For Pacelli, a deep learning expert, a collaborative approach is key to regulating the AI revolution. This revolution differs fundamentally from past industrial revolutions, she noted, because machines are no longer built to maximize production but to “interact with the user” according to specific data selection processes.
“A bad data selection process can, for example, inject racial bias in diagnostic tools,” she said. “Obviously this is harmful and dangerous for [those who are] already marginalized and so must be taken into account.”
Referencing the Vatican’s document Antiqua et Nova, which outlines the Church’s position on the relationship between AI and human intelligence, Santow said: “It is only the human, not the machine, that is in dialogue with principles such as truth, justice, and peace.”
“Lawyers like me are very concerned about when a machine is being used to make a profound and important legally significant decision,” the former Australian Human Rights commissioner said. “Liability … must always attach to the humans who put those machines in the world.”