Vermont calls for AI ‘code of ethics’ – StateScoop
Ryan Johnston Jan 17, 2020 | STATESCOOP
Members of a first-of-its-kind Vermont state task force on artificial intelligence say regulating the technology itself would have unintended consequences, but that they also see promise in creating a “code of ethics” that could drive responsible use of AI within the state and position Vermont as a leader nationally.
The 14-member task force released its recommendations Wednesday after a series of monthly meetings that began in September 2018. The group, including representatives from government, academia, industry and civil liberties groups, studied current and future applications of AI, as well as how to ensure ethical testing and use without inhibiting innovation.
While other states have launched AI task forces of their own, the Vermont group ultimately concluded that immediate action — in the form of a permanent commission — would benefit not only its state, but the country as a whole.
“I’m a huge believer in the ‘brave little state’ [of Vermont], and I think we should be a leader,” said John Cohn, a Vermonter and IBM Fellow in the company’s Internet of Things lab. “What I mean is that we should be a leader among other states. There’s no federation of states talking about this.”
Cohn told StateScoop that trying to legislate the algorithmic component of artificial intelligence, or how companies are able to perform research and development on their own products, could limit innovation and business growth in the state as an unintended consequence. Rather than legislating AI itself, he said, lawmakers should place regulations around where the technology can be applied, whether it be in public safety products, autonomous vehicles or other emerging technologies.
“It isn’t like there’s a line of AI code that makes it somehow regulated, it’s what you do with it,” Cohn said.
The report identified several sectors that could benefit from increased AI adoption and development, including precision agriculture, public safety and public health. To guard against infringements on civil liberties and losses in employment as the technology develops, however, the task force would need to be made permanent. The idea, according to task force member Eugene Santos Jr., a Dartmouth College engineering professor, is to provide an independent agency that government officials and the public could go to with ideas, questions and concerns about the technology.
“AI is crosscutting,” Santos said. “The last thing Vermont agencies want is that one comes up with an [AI] policy, another comes up with an [AI] policy and you just find that, ‘Oh, they’re in conflict,’ and there’s nothing uniform about anything.”
In the report, the task force laid out a draft “Code of Ethics” that could serve as a guideline for both legislators and businesses in the regulation and development of AI. Modeled after the European Union’s guidelines, the code says AI should be built with fundamental respect for human dignity, individual freedom, democracy, equality and citizens’ rights, including the rights to vote and to protest. The proposed code, which also includes requirements for AI such as human oversight and transparency, would be a working document maintained by the state’s permanent commission if it were implemented, according to the report.
Vermont currently has no legislation in place to regulate artificial intelligence, which is used in virtually all autonomous vehicles and facial recognition programs. Despite not agreeing on a concrete definition of artificial intelligence, the task force recommended that the state offer small business grants and competitions to foster growth around the business of AI within Vermont, and that it create outreach programs to promote AI education in schools.