Pope Francis Offers ‘Rome Call For AI Ethics’ To Step-Up AI Wokefulness, Which Is A Wake-Up Call For AI Self-Driving Cars Too – Forbes
The Pope is concerned that AI might be used in ways that undercut humanity rather than AI serving as a tool to enable and embellish humanity.
Per the Vatican, a newly released document entitled “Rome Call For AI Ethics” spells out what direction AI ought to go as a technology that preferably is aimed for the betterment of society instead of the detriment or the possible demise of society.
Sometimes, technologists are apt to unleash innovations without having explicitly considered the ramifications of what they have let loose, and thus there is an arising slew of calls for AI ethics considerations in the rapid race toward developing and fielding AI systems, especially ones based on Machine Learning (ML) and Deep Learning (DL).
Here’s a key quote from the Papal issued document:
“Now more than ever, we must guarantee an outlook in which AI is developed with a focus not on technology, but rather for the good of humanity and of the environment, of our common and shared home and of its human inhabitants, who are inextricably connected.”
The Catholic Church is asking anyone that is in the business of making AI systems to be mindful of what the AI is going to do, including considering both intended and potentially unintended consequences.
Some AI developers turn a blind eye to the possibility of unintended adverse consequences of their creations.
Be aware of these social-dynamic aspects involved:
· In the view of some AI engineers and scientists, as long as the AI system at least does what they hoped it would do, anything else that might inadvertently emerge is not their responsibility, they believe, and so they attempt to wash their hands of any such adversities.
· There are also many developing AI systems who aren’t thinking at all about the potential harms of what they are producing (those are the ones that tend to lack mindfulness on this matter, oftentimes being naively unaware).
· In some cases, AI creators are heads-down in the tech and not cognizant of ethical considerations that could arise, being preoccupied with the technology and/or not versed in grappling with how to surface potentially harmful consequences.
· In other cases, they are transfixed by the goodness of their AI system, becoming overly consumed by a sense of being part of a presumed noble cause (see my discussion on noble cause corruption in AI at this link here), and refuse to look at the downsides or reflexively believe that any pitfalls are well worth the presumed upside.
· Or, some are so focused on beating the clock and being the first to achieve a particular AI advancement that they figure they’ll deal with any ethical fallout after the fact rather than during the throes of getting their new machine out the door soonest (this, though, is the classic Trojan horse: putting something onto the street that is merely biding its time until things go awry).
It Is An AI Wokefulness Pledge
The Catholic Church has issued its call for AI ethics as a pledge document.
Anyone that is involved in crafting AI systems is being politely requested to sign the pledge.
Indeed, out-the-gate there have been some signees already, notably including IBM and Microsoft.
What does the pledge ask the signers to do?
Well, I’ll get into some of the details momentarily; meanwhile, here are the three core precepts or objectives of the overall pledge, as stated in the document:
“It must include every human being, discriminating against no one; it must have the good of humankind and the good of every human being at its heart; finally, it must be mindful of the complex reality of our ecosystem and be characterised by the way in which it cares for and protects the planet (our “common and shared home”) with a highly sustainable approach, which also includes the use of artificial intelligence in ensuring sustainable food systems in the future.”
Thus, in shorthand:
1) AI shall not discriminate
2) AI shall be good in its intent
3) AI shall care about sustainability
One aspect to keep in mind is that AI is not somehow determining its own future, even though talk of AI systems sometimes makes it sound as though they are already autonomous and deciding their own fate.
Not so, at least not now, nor in the near future.
In other words, we humans are the ones that are devising these AI systems.
Therefore, humans are responsible for what those AI systems do.
I mention this because it is an easy escape hatch to have AI designers and developers pretend that the AI did something untoward and it wasn’t somehow the fault of the AI builders that the system went awry.
You are likely familiar with going to, say, the DMV to get your driver’s license renewed, only to find that the computer system is down, at which point the agent at the DMV shrugs their shoulders and laments that it’s just one of those things.
To be clear, it isn’t just one of those things.
If the computer system is down, it’s due to the humans that set up the computer and apparently failed to properly put in place the needed backup and other provisions to ensure that the system is available when needed.
Don’t fall into the mental trap of accepting the notion that systems, including AI systems, have minds of their own and that if they blow a gasket it is just one of those things.
The real truth is that the humans who devised and put in place the computer system are the ones at whose door the buck ought to stop, since they are the people who didn’t exercise the due care they should have.
In the case of the call by Pope Francis for attending to vital AI ethics considerations, you might assume that such a rightful covenant would be straightforward and beyond criticism (since, in a manner of speaking, it is a “mom and apple pie” kind of declaration that would seem inarguable), yet there are some that have already lobbed disparagement.
First, some critics point out that there isn’t any binding aspect to the pledge.
If a company signs on the dotted line, there isn’t any specific penalty imposed for violating the principles of the pledge. Thus, presumably, you can sign up without any fear of reprisal.
With no teeth, the pledge is seen by some as hollow.
Second, the fact that the Catholic Church has issued this particular call for AI ethics is disturbing to some since the issuer presumably is impinging religion into a topic that for some is not a religious matter at all.
Will other religions then issue similar calls for AI ethics, and if so, which one will prevail, and what will we do with a fragmented and disparate set of AI ethics proclamations?
Third, some assert that the pledge is bland and overly generic.
Presumably, without being more down-to-earth, the document doesn’t provide sufficient real-world directives that could be implemented in any practical way.
Plus, those AI developers that want to weasel out could claim that they misunderstood the general provisions or merely interpreted them in a manner differently than perhaps originally intended.
Okay, given the condemnation barrage, should we toss out this new call for “algor-ethics” for AI?
As an aside, algor-ethics is the name being given to algorithmic ethics concerns, and though maybe clever, I believe it doesn’t roll off the tongue well enough and won’t likely take hold as a common moniker for these matters. Time will tell.
Back to the question at hand, should we care about this call for AI ethics or not?
Yes, we should.
Despite the aspect that the pledge admittedly doesn’t impose any financial penalties for failing to abide by the principles, there is another angle that you need to consider.
Firms that sign up are potentially going to be held accountable by the public at large.
If they run afoul of the pledge and suppose the Church then points this out, the bad publicity alone could hamper and damage the signee, leading to a loss of business and a loss of industry reputation.
You could say that there is an implied or hidden cost to violating the pledge.
On the matter of whether religion is being infused where it doesn’t belong, please realize that’s a whole other matter of sizable discussion and debate, but do also recognize that there are now numerous AI ethics frameworks and calls for AI ethics from a wide variety of quarters.
In that sense, it isn’t as though this is the first such calling.
And, if you look closely at the pledge, there doesn’t seem to be anything in it that pertains to any religious doctrine per se, meaning that you could not readily differentiate it from similar pledges crafted entirely absent (presumably) of any religious undertones.
Finally, in terms of the vagueness of the pledge, yes, you could easily drive a Mack truck through the numerous loopholes, and those that want to be weasels have a solid chance at being, well, weaselly.
My guess is that we’ll have some follow-ups by others in the AI field that will supplement the pledge with specific indications that can help in plugging up the gaps and omissions.
Perhaps too, anyone that does take the weasel route will get called out by the world at large, and false claims of allegedly misunderstanding the pledge will get denounced as obvious ploys to avoid abiding by the reasonably articulated and quite apparent AI ethical provisions proclaimed.
Speaking of making sure that everyone gets the drift on these AI ethics provisions, contemplate the myriad of ways that AI is being applied.
Here’s an interesting area of applied AI that falls within the AI ethics realm: Should AI-based true self-driving cars be devised and fielded in a manner as guided by these calls for AI ethics?
I say yes, unequivocally so.
Indeed, I call upon all automakers and self-driving tech makers to sign the pledge.
In any case, let’s unpack the matter and see how the pledge applies to true self-driving cars.
The Levels Of Self-Driving Cars
It is important to clarify what I mean when referring to AI-based true self-driving cars.
True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).
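The level distinctions above can be summarized as a simple lookup. This is just an illustrative sketch of the SAE-style taxonomy as described here; the dictionary and function names are my own, not any real automotive API:

```python
# Sketch of the driving-automation levels discussed above.
# Levels 0-3 require a human driver (Levels 2-3 being "semi-autonomous"
# with ADAS add-ons); Levels 4-5 are true self-driving.
SAE_LEVELS = {
    0: ("No Automation", True),
    1: ("Driver Assistance", True),
    2: ("Partial Automation", True),       # semi-autonomous (ADAS)
    3: ("Conditional Automation", True),   # semi-autonomous (ADAS)
    4: ("High Automation", False),         # true self-driving, within limits
    5: ("Full Automation", False),         # true self-driving, anywhere
}

def human_driver_required(level: int) -> bool:
    """Return True if a human must co-share the driving task at this level."""
    _name, required = SAE_LEVELS[level]
    return required
```

The point of the flag is the one made below: at Level 2 or Level 3 the human remains the responsible driving party, no matter how much automation is present.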
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And AI Ethics Considerations
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
At first thought, you might assume that there doesn’t seem to be any need for considering AI ethics issues.
A driverless car is just a car that perchance drives you around, doing so without a human driver.
No big deal, and certainly no ethical considerations about how the AI driving system will do its task, so you might presume.
Sorry to say that anyone under the belief that there aren’t AI ethics issues involved needs to be knocked on the head or be doused with a bucket of cold water (I am not advocating violence, please know that those are mere metaphorical characterizations).
Let’s briefly take a look at each of the six core principles outlined in the Rome Call For AI Ethics document.
Seeking to keep things succinct herein, I offer links to my other postings that provide greater details on these weighty topics:
AI Ethics Principle #1: “Transparency: in principle, AI systems must be explainable”
You get into a true self-driving car and it refuses to take you where you’ve told it to go.
Today, most AI driving systems are being devised without offering any explanation for their behavior.
It could be that the AI system is unwilling to drive because there is a tornado underway, or maybe due to your making a request that is undrivable (no passable roads nearby), etc.
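To make the transparency principle concrete, here is a minimal sketch of a driving system that surfaces a human-readable reason when it declines a trip, rather than refusing silently. The function and reason strings are hypothetical illustrations, not any real vendor’s API:

```python
def plan_trip(destination_reachable: bool, severe_weather: bool):
    """Return (accepted, explanation) rather than silently refusing.

    A transparent (explainable) AI driving system would tell the rider
    why it won't drive, per the scenarios above: severe weather, or an
    undrivable request with no passable roads nearby.
    """
    if severe_weather:
        return False, "Declined: severe weather (e.g., a tornado) is underway."
    if not destination_reachable:
        return False, "Declined: no passable roads reach that destination."
    return True, "Trip accepted."
```

The design choice is simply that every refusal path carries an explanation, so the rider in the scenario above is never left guessing.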
AI Ethics Principle #2: “Inclusion: the needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop”
Some are concerned that AI self-driving cars will only be available for the rich, and that the rest of the populace will not glean the benefits of driverless cars (see the link here).
Which shall it be, mobility-for-all or only mobility-for-the-few?
AI Ethics Principle #3: “Responsibility: those who design and deploy the use of AI must proceed with responsibility and transparency”
How will the AI determine the course of action when faced with choosing between potentially ramming into a child jaywalking across the street versus slamming the driverless car into a tree and likely harming the passengers?
This dilemma, known variously as the Trolley Problem (see my link here), has many clamoring for the automakers and self-driving tech firms to be transparent about how their AI systems will make these life-or-death choices.
AI Ethics Principle #4: “Impartiality: do not create or act according to bias, thus safeguarding fairness and human dignity”
Suppose a self-driving car reacts to pedestrians based on their racial characteristics (see discussion at this link)?
Or, suppose a fleet of self-driving cars all “learn” to avoid certain neighborhoods and won’t drive in those locations, which would deny those residents ready access to driverless cars.
Biases due to Machine Learning and Deep Learning systems are a real concern.
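One way to surface the neighborhood-avoidance concern described above is a simple disparity audit over ride logs. This is a hypothetical sketch under assumed data structures, not an established fairness metric or any fleet operator’s actual tooling:

```python
def service_rate_disparity(requests_by_area, completed_by_area):
    """Compute the ride-completion rate per area and the max/min ratio.

    A large ratio flags that some areas are served far less than
    others -- a hint that the fleet's ML may have "learned" to avoid
    them. Inputs are illustrative: dicts mapping area name to counts.
    """
    rates = {
        area: completed_by_area.get(area, 0) / requests_by_area[area]
        for area in requests_by_area
    }
    worst, best = min(rates.values()), max(rates.values())
    ratio = best / worst if worst > 0 else float("inf")
    return rates, ratio
```

An audit like this doesn’t explain *why* a disparity exists, but it gives a concrete signal that the impartiality principle is being violated in practice.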
AI Ethics Principle #5: “Reliability: AI systems must be able to work reliably”
You are riding in a self-driving car, and all of a sudden it pulls over to the side of the road and comes to a stop.
You might not know why.
Could be that the AI reached the bounds of its scope (referred to as its Operational Design Domain, ODD, see the link here).
Or, perhaps the AI system faltered, had a glitch, etc.
AI Ethics Principle #6: “Security and privacy: AI systems must work securely and respect the privacy of users.”
You get into a self-driving car after partying at the bars.
Turns out that the driverless car has cameras pointed inward, in addition to externally pointed cameras.
The basis for the inward-facing cameras is to catch riders that might spray graffiti or damage the interior of the vehicle.
In any case, those cameras aimed at you are capturing video of you the entire time that you are riding in the driverless vehicle, including your rambling remarks since you are in a drunken stupor.
Who owns that video and what can they do with it?
Does the owner of the self-driving car need to provide you with the video, and is there anything preventing them from posting the video online?
There are lots of privacy issues to be dealt with (see analysis at this link here).
From a security perspective, there are tons of possibilities for someone cracking into the AI driving system (see discussion here about backdoor security holes in ML/DL).
Imagine an evildoer that might hack the AI and be able to take over the driving system, or at least tell the driving system to take you to a particular location, wherein kidnappers are waiting for your arrival.
I realize that there are many doomsday scenarios about security breaches in a self-driving car system or its cloud-based fleet-wide system, and it is incumbent upon the automakers and self-driving tech makers to put in place the needed systems security protections and precautions.
If you compare this call for AI ethics to the many others in circulation, you’ll find that they have much in common.
The criticism that we are ending up with a tsunami of these AI ethics calls is somewhat apt, though one supposes that it can’t hurt to cover all the bases.
That being said, it is starting to become a bit confounding, and AI makers are going to potentially use the excuse that since there isn’t one single accepted standard, they will wait until such a document exists.
It seems like a viable excuse, but I don’t buy it.
Anyone who says they are going to wait until the worldwide, all-hands, Grand Poobah-accepted version gets approved is really saying they are willing to postpone becoming grounded in any AI ethics at all, and their wait might be akin to waiting for Godot.
I urge you to pick any AI ethics guideline that has meat to it and is proffered by a reputable source, and get going with your internal AI ethics implementation.
Sooner rather than later.