Responsible AI With Model Risk Management – Forbes

The desire among financial institutions to better mitigate risk gained renewed prominence as a result of the financial crisis of 2008. Subsequent regulatory and governance requirements fostered interest in risk modeling and sophisticated forecasting based on artificial intelligence (AI) to improve outcomes.

It now seems common to have AI-driven models supporting decision making related to capital adequacy, liquidity, pricing, exposure and more. Model risk management (MRM) also emerged as a practice, one that is ideally suited to the application of explainability to enable transparency, support governance and facilitate compliance.

Financial institutions have access to more datasets and computing resources than ever before, so they are increasingly adopting modern AI systems, including machine learning (ML) and deep learning (DL), to exploit these assets and capabilities. Yet these same institutions must also contend with the inherent risks posed by the resulting models themselves. Leveraging hindsight and insight to simulate foresight creates unavoidable risk that must be mitigated, especially when using modern AI-based systems that are a generation ahead of traditional rules-based ones.

The scope, complexity and evolution of AI systems have grown beyond human understanding. ML identifies data relationships and automatically generates predictive algorithms that exploit them. Since these patterns remain hidden from view, users are left with “black box models” that deliver meaningful and beneficial outcomes that seem correct but that no one can fully explain. In other words, AI creates scenarios where unknown problems with models go unfixed and unknowable problems remain unfixable. This naturally raises the question of how an ML model can be validated at all. MRM with explainability is a necessary part of the answer.

For businesses relying on financial modeling to support decision making, MRM helps mitigate diverse risks. An effective governance framework will minimally include validation, inventory management, monitoring and reporting for models. Compliance remains complex, since model owners, developers and stakeholders must fully describe models and clearly explain their outcomes (forecasts, scenarios, etc.). This usually requires creating a well-defined inventory of all models and assigning each a risk rating based on “severity of failure,” “likelihood of failure” and other relevant attributes. Models with high risk ratings are subject to more stringent monitoring, elaborate tracking and detailed reporting facilitated by MRM tools.
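As a rough sketch of that inventory-and-rating idea, the structure below maps each model's “severity of failure” and “likelihood of failure” to a monitoring tier. The rating scheme, tier names and model names are illustrative assumptions, not a regulatory standard or any particular MRM tool:

```python
from dataclasses import dataclass
from enum import IntEnum


class RiskRating(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class ModelRecord:
    """One entry in a model inventory (fields are illustrative)."""
    name: str
    owner: str
    severity_of_failure: RiskRating
    likelihood_of_failure: RiskRating

    @property
    def risk_rating(self) -> RiskRating:
        # Conservative aggregate: take the worse of the two dimensions.
        return max(self.severity_of_failure, self.likelihood_of_failure)


def monitoring_tier(record: ModelRecord) -> str:
    # Higher-rated models get stricter monitoring and reporting.
    return {
        RiskRating.LOW: "annual review",
        RiskRating.MEDIUM: "quarterly review",
        RiskRating.HIGH: "continuous monitoring",
    }[record.risk_rating]


inventory = [
    ModelRecord("capital_adequacy_v2", "treasury", RiskRating.HIGH, RiskRating.MEDIUM),
    ModelRecord("pricing_elasticity", "retail", RiskRating.LOW, RiskRating.LOW),
]

# Report models from highest risk to lowest.
for record in sorted(inventory, key=lambda r: r.risk_rating, reverse=True):
    print(f"{record.name}: {monitoring_tier(record)}")
```

Taking the maximum of the two dimensions is one deliberately conservative choice; real MRM frameworks typically use richer scoring matrices.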

When financial models exist within AI systems and data relationships are identified by ML, adding explainability enables human understanding and reveals what would otherwise stay hidden. Transparency into data relationships and modeling algorithms can also help to overcome other challenges and avoid other problems.

Incorporating explainability into MRM mitigates more than financial risk by promoting fairness, avoiding bias, finding interdependencies, highlighting errors, facilitating validation and improving documentation. Of course, this begins with adding explainability to AI systems and their supporting ML processes. Explainable AI (XAI) systems running ML models are evolving the emerging discipline of MRM, so complex models become more understandable to regulators and institutions alike.
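One common family of explainability techniques is model-agnostic: it probes a black box from the outside rather than inspecting its internals. As a minimal sketch of that idea (not any particular XAI product's method), the code below measures permutation importance: how much a model's predictions shift when one input feature is shuffled. The toy model and feature names are hypothetical:

```python
import random


# Toy "black box" model: a credit-score-style function whose internals
# we pretend not to know. Features and coefficients are made up.
def black_box(features):
    income, debt_ratio, tenure = features
    return 2.0 * income - 3.0 * debt_ratio + 0.1 * tenure


def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Average prediction shift when one feature column is shuffled:
    a simple, model-agnostic explainability measure."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]

    def mean_shift(preds):
        # Mean absolute deviation from the unperturbed predictions.
        return sum(abs(p - b) for p, b in zip(preds, baseline)) / len(rows)

    importances = []
    for j in range(len(rows[0])):
        total = 0.0
        for _ in range(n_repeats):
            column = [r[j] for r in rows]
            rng.shuffle(column)
            perturbed = [r[:j] + (column[i],) + r[j + 1:]
                         for i, r in enumerate(rows)]
            total += mean_shift([model(r) for r in perturbed])
        importances.append(total / n_repeats)
    return importances


rows = [(1.0, 0.2, 5.0), (2.0, 0.5, 3.0), (0.5, 0.9, 10.0), (1.5, 0.1, 1.0)]
scores = permutation_importance(black_box, rows)
for name, score in zip(["income", "debt_ratio", "tenure"], scores):
    # On this toy data, tenure's small coefficient makes it least important.
    print(f"{name}: {score:.2f}")
```

Production XAI tooling uses far more sophisticated methods (Shapley values, surrogate models and so on), but the principle is the same: attribute a black-box model's behavior to its inputs so humans can review it.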

Gaining transparency into models offers many tangible benefits, including providing insight to persons reviewing models (including regulators), ensuring compliance with relevant regulations and convincing stakeholders of fair and ethical practices. A recent article by Bloomberg, describing allegedly biased outcomes after credit applications by spouses, suggests the hazards of “black box algorithms” that lack ample transparency. Explainability adds transparency and creates options for using more powerful ML models that generate accurate and precise results to improve the financial performance of a business without adding risk. In this way, explainability can help businesses avoid controversies like the one Bloomberg reported.
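One simple first-pass check for the kind of disparity the Bloomberg article describes is to compare approval rates across groups of applicants. The sketch below is only illustrative (the data is made up, and a real bias audit involves far more than a single statistic):

```python
def approval_rate(decisions):
    """Fraction of applications approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)


def demographic_parity_gap(decisions_a, decisions_b):
    # Absolute difference in approval rates between two groups.
    # A large gap flags a model for closer review; it does not by
    # itself prove the model is biased.
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))


group_a = [1, 1, 0, 1, 1]  # hypothetical: 80% approved
group_b = [1, 0, 0, 1, 0]  # hypothetical: 40% approved
print(f"parity gap: {demographic_parity_gap(group_a, group_b):.2f}")
```

Checks like this are exactly where explainability pays off: once a gap is flagged, transparent models let reviewers trace which inputs drove the disparate outcomes.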

Clear trends have emerged. The benefits of MRM practices are too powerful to ignore, so businesses beyond the financial services industry are adopting enterprise MRM for models supporting broad applications, including diverse strategic and tactical decision making. Similarly, businesses are increasingly exploring and adding explainability to AI systems as machine learning, deep learning and other advancements continue to spread across all industries.
