
Catholic church, tech firms join call for transparency in AI

By Patrick S. Roberts
AI is invading homes through "smart" devices, with China investing heavily in the technology. Photo by Stephen Shaver/UPI

March 31 (UPI) -- The Catholic Church joined with technology companies last month to release the "Rome Call for AI Ethics," in what it hopes will lend meaning, if not a governance framework, to the use of artificial intelligence.

Skeptics might quote Joseph Stalin's alleged quip during World War II peace negotiations, "How many divisions does the pope have?" The Catholic Church can't enact regulations in any state except the Vatican. But the Rome Call offers a glimpse of what governing and regulating AI could mean on a much larger scale.


Worries about AI are everywhere. Dangers are not obvious. And hype can be hysterical. Is the threat from an AI superintelligence that will challenge humanity? Can AI produce weaponry so fast and so autonomous that nuclear deterrence will be destabilized and make war more likely?

The Rome Call attempts to elucidate the danger that algorithms or their creators could make judgments with severe consequences for human dignity and rights without those judgments ever being subject to scrutiny. The initiative follows a Catholic tradition of highlighting human dignity in the face of threats from industrial society (e.g., Rerum Novarum, Centesimus Annus and Laudato si').


Much of the ethical language of the call is drawn from the Universal Declaration of Human Rights, which has the advantage of speaking across religious and ethical traditions to common values. However, Pope Francis may find the more Catholic language of human dignity to be a valuable resource if he writes an encyclical on governing and regulating AI. Dignity is not the same as a human right that can be challenged in court, but it does command respect for humans and imply that they be kept in the loop for consequential decisions.

For example, if an AI algorithm judged you to be a risk and barred you from boarding a plane, but offered no explanation because its decision process was impenetrable, the decision would seem unfair. With humans out of the loop, the decision would by definition be inhumane.

Humans can also make decisions that may seem arbitrary but -- to some, anyway -- still be justified as fair because they result from human judgment. When an airport security guard pulls someone aside and subjects them to further questioning, the guard does not normally say why the person was flagged as a risk -- and yet to many that seems more humane than an AI algorithm making a similar decision. In the United States, no-fly lists and other exclusions are usually subject to appeal.


In the city of Hangzhou, China, a new phone app collects the results of a coronavirus screening and reportedly tracks movement data to judge whether a person may enter the subway system or whether he or she poses a health risk to others and should be denied entry. The system appears to make AI-driven decisions in the name of public health and "digitally empowered city management," but without any opportunity for appeal. A future full of these seemingly arbitrary decisions seems dystopian.

Additionally, making sure that "everyone can benefit" from AI by making its discoveries widely available will be important. This is perhaps where the church can be most effective.

The pope endorsed the call in a letter, and he has been concerned about the effects of AI on society for more than a year.

"His major concerns were, will it be available to everyone, or is it going to further bifurcate the haves and the have-not's?" said IBM Executive Vice President John Kelly III, who was one of the Rome Call's signatories. The principles in the call could form the starting point for a new papal encyclical, which becomes Catholic doctrine.


Making the benefits of AI available to all will mean establishing a governance framework so that discoveries are shared with the world and across nations. Software regulations could promote transparency by placing restrictions on algorithms so decision-making rules would have to be made visible, alterable or testable.

And while most AI innovations have come from software (e.g. new algorithms), some of the most promising uses of AI incorporate new and expensive materials -- the iPhone is one example. The next generation of AI could bring new materials and new machines that may stoke a desire for hardware regulation, or at least for sharing the benefits with poorer nations.

As AI challenges notions of what is human in the future, Catholic social thought can offer a language of dignity that speaks beyond the church to a broader human audience -- and even an audience of tech companies.

Patrick S. Roberts is a political scientist at the nonprofit, nonpartisan RAND Corp. He has served as an adviser in the State Department's Bureau of International Security and Nonproliferation.
