Validating AI & Machine Learning Models — Lessons Learned from the Banking Industry
Sectors that handle sensitive data are well acquainted with compliance standards that require transparency. If you apply for a loan, the lender is required to disclose the reasoning behind your approval or denial. Other regulations, such as the EU's GDPR, require companies to disclose how they will use your data. The finance industry has unique needs around this type of compliance because financial data is highly sensitive. With transparency regulations in place, finance companies are increasingly focused on using transparent AI models. Dr. Greg Michaelson of DataRobot outlines how finance companies are navigating this tricky territory in his ODSC East 2019 talk “Validating AI & Machine Learning Models — Lessons Learned from the Banking Industry.”
[Related Article: Generative Adversarial Networks for Finance]
Background on AI and ML
Experts disagree on the true definition of AI. Michaelson cites several points in history, including the invention of the calculator, at which we have moved the goalposts for what counts as AI. He defines it as any computer system that completes a task that normally requires human intelligence.
One example comes from a cell phone company: routing your call to the right person by taking in data, i.e., your answers to a series of automated questions, is a computer performing a task that once required human intelligence.
It’s this definition that drives these lessons. Michaelson outlines not only the background on the financial sector’s particular relationship with risk and validation, but also seven lessons he has learned from the banking industry’s foray into AI — in particular, that financial institutions must validate their AI and machine learning models.
What is Model Risk?
Using models in banking is risky. Model risk occurs for two reasons:
- A model may have fundamental errors and produce inaccurate outputs when viewed against its design objective.
- A model may be used incorrectly or inappropriately, or there may be misunderstandings about its limitations.
Everything that follows in the finance sector is a response to these two sources of model risk. Every financial institution is required to have a model risk management plan in place, and ensuring compliance costs institutions millions.
It’s a matter of both finances and ethics. Poorly implemented models affect real people, causing issues with everything from credit approvals to bank accounts. A lot can go wrong, and when stretched across the lifecycle of building a solution, there are many questions to answer.
He says, “What if a loan is 90 days past due? Is the loan defaulted? Maybe. They could have lost their job and might start paying again, and then everyone is happy.” This simple example already requires plenty of accurate training data while accounting for nuanced answers (a minimal sketch of one such target definition follows the list below). There’s a lot that can go wrong and many questions to ask. For example:
- Is the data good?
- Where did you get it?
- Do you have accurate representations?
- Was the model implemented appropriately?
- What are your test methodologies?
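To make the nuance concrete, here is a minimal sketch, not from the talk, of how a team might encode one such default definition as a training label in pandas. The column names, the 90-day threshold, and the “resumed payments” exception are all illustrative assumptions that a real institution would need to document and defend.

```python
import pandas as pd

# Hypothetical loan performance snapshot; column names are illustrative only.
loans = pd.DataFrame({
    "loan_id": [101, 102, 103],
    "days_past_due": [0, 95, 120],
    "resumed_payments": [False, True, False],  # did the borrower start paying again?
})

def label_default(row, dpd_threshold=90):
    """One possible (simplified) rule: 90+ days past due counts as default
    unless the borrower has since resumed payments."""
    if row["days_past_due"] >= dpd_threshold and not row["resumed_payments"]:
        return 1
    return 0

loans["default_flag"] = loans.apply(label_default, axis=1)
print(loans[["loan_id", "default_flag"]])
```

Even this toy rule shows why validators ask where the data came from and whether the label truly reflects the business definition of default.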
Model risk is present in every part of your solution. While these risks exist across industries, the sensitive nature of finance and the scrutiny of regulators increase the pressure to mitigate them.
The government doesn’t tell banks how to approach this type of compliance, only that a plan must be in place. The ensuing plan is complex and requires a lot of work through two lines of defense with checks and balances.
- The first line: builds models and provides documentation
- The second line: independently validates the first line’s models and documentation
Seven Lessons from the Banking Industry
With these necessary checks in place and the increasing pressure to get models right, Michaelson has noticed seven different lessons from the practical use of AI in the finance industry.
#1 The Need for Default Best Practices
When you work with large teams of data scientists, not all of them are great. People do things they shouldn’t, so having a way to enforce best practices reduces implementation risk. An example: how do you partition your data? Using best practices helps reduce the risk, but more importantly, you must enforce them.
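As one illustration of what a default best practice could look like, here is a minimal sketch of an out-of-time partition, a common convention for credit models in which the model is always evaluated on loans originated after the training window. The column names, dates, and cutoff are assumptions for illustration; the talk does not prescribe this specific scheme.

```python
import pandas as pd

# Illustrative out-of-time partition: train on older originations, test on newer ones.
loans = pd.DataFrame({
    "loan_id": range(6),
    "origination_date": pd.to_datetime(
        ["2016-03-01", "2016-11-15", "2017-06-30",
         "2018-02-10", "2018-07-04", "2019-01-20"]),
    "default_flag": [0, 1, 0, 0, 1, 0],
})

cutoff = pd.Timestamp("2018-01-01")
train = loans[loans["origination_date"] < cutoff]     # fit the model here
holdout = loans[loans["origination_date"] >= cutoff]  # validate it here

print(f"train: {len(train)} rows, holdout: {len(holdout)} rows")
```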
#2 Standardization Reduces the Labor of Documentation
Creating all your documentation from scratch each and every time is ludicrous. Standardizing practices means repeatability. DataRobot’s documentation can be created at the click of a button because its best practices are standardized, so the documentation is consistent. This standardization streamlines the entire approach and allows results at scale.
#3 Process Reduces Variability
Validation gets a lot of bad press. It’s required for financial models, and introducing standardization into the way institutions develop models makes it much easier. Introduce education surrounding the model and enforce the best practices from the first lesson.
#4 Validating AI and Machine Learning Models with a Risk-Based System is Necessary
You can’t apply the same level of scrutiny to every single model, because not all models carry equal risk. High-risk models must receive the highest level of scrutiny; low-risk models don’t need the same amount of validation. This kind of discretion streamlines the validation process and provides context.
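A toy illustration of what risk-based discretion could look like in practice: a simple lookup that maps a model’s risk tier to the validation steps a hypothetical policy requires. The tiers and checks below are assumptions for illustration, not a regulatory standard.

```python
# Hypothetical policy: heavier validation for higher-risk models.
VALIDATION_REQUIREMENTS = {
    "high":   ["independent re-implementation", "out-of-time backtest",
               "sensitivity analysis", "annual full revalidation"],
    "medium": ["out-of-time backtest", "annual review of performance drift"],
    "low":    ["documentation review", "periodic monitoring of inputs"],
}

def validation_plan(model_name: str, risk_tier: str) -> list[str]:
    """Return the checks this illustrative policy would require for a given tier."""
    return [f"{model_name}: {step}" for step in VALIDATION_REQUIREMENTS[risk_tier]]

for step in validation_plan("credit-approval-model", "high"):
    print(step)
```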
#5 Automation Makes Benchmarking Easy
Michaelson cheerfully admits that this is a DataRobot plug, but the fundamental lesson stands. Automating benchmarking has the potential to save an organization millions of dollars and a great deal of human labor. Financial institutions should aim for this kind of automation.
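As a rough sketch of what automated benchmarking can look like (using scikit-learn on synthetic data rather than DataRobot’s actual product), the idea is to run every candidate model through the same folds and the same metric, so comparisons are reproducible and cheap to repeat.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a credit dataset; in practice this would be the
# institution's own (sensitive) data.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9], random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Benchmark every candidate with the same folds and metric for a fair comparison.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```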
#6 Transparency Means Different Things to Different People
Transparency is a movable concept. Understanding how your organization defines transparency and how regulators define it allows you to have a more productive conversation about it. Michaelson believes we need to be careful about what transparency means, i.e., not prioritizing transparency over accurate results.
#7 Automation Reduces Implementation Risk
Models implemented in thousands of lines of hand-written code are more prone to error. The more automation you introduce, the fewer chances there are for error, because you reduce the human risk factor. This is the final lesson for financial institutions looking to scale, and it pays off when validating AI and machine learning models.
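One hedged example of how automation cuts hand-written glue code: expressing preprocessing and modeling as a single scikit-learn pipeline, so the exact same transformations run in training, validation, and production. The feature names and sample data below are hypothetical.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny synthetic stand-in for application data; the feature names are illustrative.
data = pd.DataFrame({
    "income": [55000, 72000, None, 43000],
    "debt_to_income": [0.31, 0.18, 0.44, 0.52],
    "employment_status": ["employed", "self-employed", "employed", "unemployed"],
    "default_flag": [0, 0, 1, 1],
})

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), ["income", "debt_to_income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["employment_status"]),
])

# One object captures preprocessing and modeling, so the same logic is reused
# verbatim downstream instead of being re-coded by hand.
model = Pipeline([("preprocess", preprocess),
                  ("classifier", LogisticRegression(max_iter=1000))])
model.fit(data.drop(columns="default_flag"), data["default_flag"])
print(model.predict(data.drop(columns="default_flag")))
```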
[Related Article: Big Fields Hiring Data Scientists for 2020]
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.