We need laws about AI, not self-regulation – The Australian Financial Review
“Laws that apply in the real world should apply in the digital world, and we need to enforce laws more rigorously to make that happen,” said Edward Santow, the Human Rights Commissioner. “An ethics framework can help to make good choices, but is different to the law which sets baselines for proper conduct.”
People affected by decisions influenced by AI “should be able to understand the basis of the decision and be able to challenge decisions that they believe to be wrong or unlawful,” the commission said, in a contribution to the global debate on the ethics and governance of AI which is being closely watched by foreign governments.
AI is already being used broadly across Australia, to fight bushfires, navigate aircraft, predict policing trouble spots, target advertising on social media, triage customer complaints, select staff members, determine who qualifies for government services, and assess creditworthiness. AI is being used to make inferences, predictions, recommendations and decisions. Under the government’s consumer data right, which will make more data available across industries, AI’s influence will grow.
When deploying AI for commercial or government services, the Australian Human Rights Commission said there should be appropriate human oversight of, and intervention in, the technology; transparency to provide customers with meaningful explanations for AI-influenced decisions; and clear parameters for liability when things go wrong.
The commission is concerned about AI being used for exploitative and discriminatory marketing practices, intrusive surveillance in the workplace, and assessments of creditworthiness in banking or predictive analysis in insurance which could affect rights “if decision making is affected by algorithmic bias”.
In the discussion paper – submissions are due on March 10 – the commission calls for new legislation to introduce a general rule that someone who “deploys an AI-informed decision-making system is legally liable for the use of the system [and] ongoing monitoring of those systems when they are in operation”.
It calls for the establishment of an “AI Safety Commissioner”, an independent statutory office to guide regulators, policy makers and the public in developing and using AI in Australia. The commission also sets out a range of proposals to create “Accessible AI” for people with disability.
The paper, which follows a white paper last year and comes ahead of a final report in 2020, says special attention should be given when AI is used in areas where the risk of violating human rights is particularly high, such as in social security and facial recognition in policing.
It is calling for a statutory cause of action for “serious invasion of privacy”. The government said last week, in its response to the digital platforms inquiry, it would commence a detailed review of the Privacy Act.
The Human Rights Commission’s analysis comes the month after the government released a national “artificial intelligence road map” that identified skills shortages for AI engineers in Australia amid massive global investment in the emerging technology, and Data61 released an ethics framework. That framework has been cast broadly, making practical application of its principles difficult.
The government should adopt a “human-rights-by-design” approach to decision making and develop a “human rights impact assessment” tool for AI-based decision making, the commission said. It also calls for a regulatory sandbox to test AI decision systems for compliance with human rights.
The government should conduct cost-benefit analysis of its own use of AI and outline the processes by which it “decides to adopt a decision-making system that uses AI”, the commission said.
The government’s road map cites figures from economic consultancy AlphaBeta estimating digital technologies, including AI, to be worth $315 billion to the Australian economy by 2028, while PwC in Britain found this year that AI could be worth $22.2 trillion to the global economy by 2030.