Engineering AI To Be Ethical By Design – SemiEngineering
I joined Arm because of its amazing people and world-class technology. But while I’m constantly excited by the possibilities of what we can achieve, as Arm’s General Counsel I must also consider the potential harm our designs might cause if they don’t perform in the way we expect, or are put to a use we did not intend.
That dilemma comes to the forefront when I think about artificial intelligence (AI), and it’s why, two years ago, I formed a working group on AI ethics at Arm. It’s also why we have now produced a guiding Arm AI Trust Manifesto to shape our thinking and practices around AI design for the foreseeable future. And it’s why we chose to launch it at Web Summit 2019: to build industry-wide support for the principles of the Arm AI Trust Manifesto as a first step in defining and standardizing practical ways to operationalize ethics standards.
Our role in ethical AI
Before getting into the details of the Arm AI Trust Manifesto, let me first talk about why Arm cares about ethics and the role we can play in bringing about a world built on trusted AI devices.
“We’re calling for a vigorous industry-wide effort to take responsibility for a new set of ethical design system principles.”
Arm technology is already enabling AI processing in billions of advanced products, including the latest mobile devices. But while this engineering is fundamental to the AI revolution, we do not exert direct control over all critical elements of AI systems. So, as trust in AI must be global, we need to join with others to achieve a strong and sustainable framework.
We recognize that time is limited so we’re now calling for a vigorous industry-wide effort to take responsibility for a new set of ethical design system principles. These principles must be debated, agreed and adhered to as a foundational building block on which all AI systems can be built.
Technology will always move faster than regulation. Therefore, industry must work closely with regulators, universities and society to define the right baseline standards: comprehensive enough to meet our agreed ethical objectives, yet not so onerous that good AI entering the market is held up unnecessarily by fear. This will require creating a universal ethical framework to avoid the regulatory fragmentation that might impede the global adoption of trustworthy AI.
First, to understand the engineering basis for ethical decision making, we need to describe the root of an ethical AI device.
The building blocks for ethical AI devices
People—philosophers, politicians, religious leaders, members of society—have argued over ethics for thousands of years. We’ve sought to engineer ethics into human society through debate and, once agreement is reached, we have codified the ‘rules’ in written or verbalized ways. This is both similar and entirely dissimilar to what we are attempting to do with machines.
By comparison, our vision of an ethical AI machine is a device programmed to always make decisions perceived as fair by most right-minded people, with those decisions reached objectively from data that is free of detectable bias.
However, human and machine ethics differ in other important ways. First, machine rules must be universal, while human ethical agreements tend to be local or regional. Second, society will naturally tolerate some errors in human decision making, even against defined rules, but will not accept errors from a machine built to be ‘ethical.’ So, questions of absolute accountability (in legal terms, liability) must always be answered.
This higher standard for machines was borne out when we worked with analyst firm Forrester to survey 50 global autonomous driving experts in 2018. They told us that carmakers expect they’ll have to prove that future self-driving vehicles are at least 10x better than humans in performance and AI decision-making before the public will accept them as mainstream devices.
So, the challenge ahead is clear. We have to get to a position of near-zero casualties when it comes to machine-made decision making. It means we have to build the most robust technology framework ever conceived, covering all aspects of AI design and delivery, including how engineers are taught to think as well as how they code and build. Starting the debate on exactly what that has to look like is the precise objective of the Arm AI Trust Manifesto.
Our guiding objective: AI must be ethical by design
Arm would like to see the technology sector come together to create an ethical framework to ensure that AI is developed in a fair and responsible way. Without such a framework, there is a risk that regulation will become onerous and fragmented and will not allow AI to succeed.
We believe that ethics should be incorporated into the key design principles for AI products, services and components. However, at present there is no defining set of ethics to follow and so we call for the formation of an industry-wide working group to define and standardize a set of ethics that can be adopted by anyone deploying AI technologies.
Since ethics are so critical to AI, it is essential that anyone working in the field has a solid foundation in the issues. We call on all universities and colleges that teach AI to include mandatory courses on issues relevant to ethics in AI at undergraduate and graduate level. Further, we believe that all businesses developing AI technologies should ensure that their staff complete mandatory professional training in the field of AI ethics.
Ethical principles of trust in AI systems
There are many issues that must be addressed in the development of an ethical framework for AI that enhances trust. As a starting point, Arm proposes the following principles in the Arm AI Trust Manifesto:
- We believe all AI systems should employ state of the art security.
- Every effort should be made to eliminate discriminatory bias in designing and developing AI decision systems.
- We believe AI should be capable of explaining itself as much as possible: we urge further effort to develop technological approaches to help AI systems record and explain their results.
- Users of AI systems have a right to know who is responsible for the consequences of AI decision making.
- Human safety must be the primary consideration in the design of any AI system.
- We will support efforts to retrain people from all backgrounds to develop the skills needed for an AI world.
The above is an abridged overview of the Arm AI Trust Manifesto’s guiding principles. Read the full Manifesto here.
What happens next?
We will now seek to bring technology partners together to build a coalition of parties who can influence AI ethics thinking and the engineering needed to create more ethically robust devices. We already partner, or have relationships, with the broadest cross section of influencers we need to reach, both inside industry and in public bodies.
Great work is already being done to advance AI ethics thinking, but we think the task now is to bring influencers together in practical ways. For example, I personally would like to see us build prototype devices we think are ethical, explain why we think they pass the test, and then try to break them. In effect, this would be a new form of ethics hacking to test the security, design ethos, data sets and interrogability limits of an AI device. This won’t happen immediately, but this sort of leap can’t wait too long.
This is similar to the Digital Security by Design project we’re involved in on the security side, where we’re currently designing a new test board to run a prototype architecture built to be inherently more robust against cyber attacks. It’s a partnership with the UK Government, several UK-based universities and major industry partners including Microsoft and Google.
This level of cooperation is exactly what we need now to start laying a solid foundation for AI machines that are born ethical.