AI Ethics: Where to Start – InformationWeek
Even as so many enterprises are still struggling to move their artificial intelligence and machine learning pilots into production, there’s another challenge on the horizon. How can organizations ensure that their algorithms are acting in a responsible and ethical fashion?
For those that don’t figure this out, the consequences range from embarrassment at best to running afoul of the law at worst.
Those looking for guidelines and help with this question got some answers from Frank Buytendijk, vice president and analyst at research and consulting firm Gartner, during the session “Artificial Intelligence and Ethics: What You Need To Do Today” at the Gartner IT Symposium this month.
It used to be that artificial intelligence was touted as the solution that would ultimately eliminate bias and ethics problems. That’s because it would remove human bias in hiring, in housing, in medical treatment, and in any number of other situations where human-introduced bias had caused problems.
But, garbage in, garbage out, as they say. If you train your algorithms on the same biased data that humans produced, you will get the same biased results. Some tech giants learned this the hard way with their inaugural efforts. For instance, in 2016 Microsoft’s Twitter bot, Tay, was quickly “trained” via input from Twitter trolls to tweet racist responses to messages.
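The garbage-in, garbage-out problem can be seen in a toy sketch. Nothing here is from the article: the groups, numbers, and the naive "model" are invented for illustration, but the mechanism is the real one — a model fit to historically biased decisions simply reproduces them.

```python
from collections import defaultdict

# Invented biased hiring history: (group, hired). Group "A" was favored.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

# "Training": estimate the hire rate per group from the biased history.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

hire_rate = {g: h / n for g, (h, n) in counts.items()}

def predict(group):
    # The naive model recommends hiring whenever the learned group
    # rate exceeds 0.5 -- it has learned the historical bias, nothing else.
    return hire_rate[group] > 0.5
```

Removing the human from the loop changed nothing: the bias now lives in the training data instead of the interviewer.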
If Microsoft is struggling with this, how well will an enterprise outside of the tech business do with this kind of problem, especially if that enterprise is still struggling to get from pilot to production with its AI models?
Buytendijk offered some guidelines for organizations to follow, culled from the best practices of a number of groups working on the problem, from governments to industry organizations. The following are among the most common guidelines you should strive to fulfill to achieve ethical AI. It should be:
- Human-centric and socially beneficial
- Explainable and transparent
- Secure and safe
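“Explainable and transparent” in practice means being able to say why a system made a given decision. One minimal way to get there is a model whose output decomposes into per-feature contributions. The feature names and weights below are invented for illustration, not a real scoring system:

```python
# Hypothetical linear scoring model: each contribution is weight * value,
# so every decision can be explained term by term.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 3.0})
# The "why" dict shows which features pushed the score up or down.
```

A black-box model can be far more accurate, but trading away this kind of term-by-term explanation is exactly the dilemma Buytendijk wants teams to sit with.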
“Your role is to create that discussion with your teams. The intuitive approach is to operationalize it — don’t do this, don’t do that,” Buytendijk said. “The problem with that is that it leads to a checklist mentality. But ethics, by nature, is a pluralistic topic. There are always unintended consequences that you did not foresee.”
Instead, he recommends operationalizing in terms of dilemmas: underlying questions that you can pose to your teams.
“Dilemmas are good because they make you stop and think,” he said. “They make you appreciate the complexity of a certain matter.”
Buytendijk offered the following recommendations for organizations embarking on an AI ethics program:
- Make sure you have an AI ethicist.
- Define guidelines for developers, highlighting your responsibilities.
- Plan significant time for discussion on how to operationalize these guidelines. (Guidelines are meant to be high level so that they apply generally.)
- Ask yourself how you want your AI-enabled systems to behave. Your training choices drive their behavior.
- At the same time, prepare for your AI-enabled systems to behave differently in different regions.
It may sound complicated and difficult, but Buytendijk had some good news for enterprises feeling daunted by the challenge. He said Gartner believes that AI ethics-as-a-service offerings will emerge. Such services will connect you to libraries of rules for particular geographic regions, reflecting the local regulations necessary for compliance. Organizations will be able to plug their models into a matrix of rules based on jurisdiction, industry, or other parameters.
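The rule-matrix idea can be sketched as a lookup keyed on jurisdiction and industry. The keys, rule names, and defaults below are invented placeholders, not the API of any actual ethics-as-a-service offering:

```python
# Hypothetical rule matrix keyed by (jurisdiction, industry).
# Entries are illustrative placeholders, not real regulations.
RULES = {
    ("EU", "finance"): {"requires_explanation": True, "human_review": True},
    ("EU", "retail"):  {"requires_explanation": True, "human_review": False},
    ("US", "finance"): {"requires_explanation": False, "human_review": True},
}

def applicable_rules(jurisdiction, industry):
    # Fall back to the strictest defaults when no entry matches,
    # so an unknown market is treated conservatively.
    return RULES.get((jurisdiction, industry),
                     {"requires_explanation": True, "human_review": True})
```

The design choice worth noting is the fallback: when a deployment region isn’t covered, defaulting to the strictest rules fails safe rather than silently under-complying.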
Jessica Davis has spent a career covering the intersection of business and technology at titles including IDG’s Infoworld, Ziff Davis Enterprise’s eWeek and Channel Insider, and Penton Technology’s MSPmentor. She’s passionate about the practical use of business intelligence, …