‘Black Box’ Issues Can Be Addressed by Personal Responsibility Requirements – Regulation Asia
A new academic paper develops a regulatory roadmap for understanding and addressing the increasing role of AI in finance, focusing on human involvement to address ‘black box’ issues.
Regulatory approaches that focus on personal responsibility can help to address the risks posed by the increasing role of AI (artificial intelligence) in finance and the related ‘black box’ issues, according to a new academic paper from the CFTE (Centre for Finance, Technology and Entrepreneurship).
The paper maps the various use cases of AI in finance, highlighting why AI has developed so rapidly in finance and is set to continue to do so. It also highlights the range of potential issues – such as data risks, cybersecurity, systemic risk, and ethics – which may arise as a result of the growth of AI in finance.
The paper pays special attention to the regulatory challenges of AI in the context of financial services, and the tools available to address them. Three specific regulatory challenges are identified:
- AI increases information asymmetries regarding the true capabilities, functions and limits of algorithms, as third-party vendors or in-house AI developers typically understand the algorithms far better than the financial institutions that acquire and use them, or the supervisors overseeing those institutions
- AI enhances data dependencies, whereby the operation and effects of algorithms can be affected by day-to-day changes in the data sources they rely upon
- AI enhances interdependency, whereby different AI systems can interact with unexpected consequences, enhancing or diminishing effectiveness, impact and explainability in finance
“These issues are often summarized as the ‘black box’ problem: no one understands how some AI operates or why it has done what it has done, rendering accountability impossible,” the paper says. “Even if regulatory authorities possessed unlimited resources and expertise – which they clearly do not – regulating the impact of AI by traditional means is challenging.”
The paper argues for strengthening the internal governance of financial institutions through external regulatory approaches that impose personal responsibility requirements for AI systems, based on existing post-financial crisis frameworks of managerial responsibility.
The full paper is available here.
The CFTE paper was authored by Dirk A. Zetzsche, Douglas W. Arner, Ross P. Buckley, and Brian Tang.