Making Explainability Work in Practice
Complex ‘black box’ models are becoming increasingly prevalent in industries that involve high-stakes decisions, such as finance, healthcare, and insurance. As machine learning algorithms take a more prominent role in our daily lives, explaining their decisions will only grow in importance.
By now, plenty has been written about why we may want to explain the decisions and behavior of machine learning models. Model debugging, bias discovery, and increased social trust and acceptance are some of the important and often-cited reasons.
What is machine learning explainability?
Before we go into further detail, let’s define explainability. I should acknowledge that there is no consensus on what machine learning explainability is, but the definition I tend to stick to is that explainability (XAI) refers to:
“Methods and models that make the behaviour and predictions of a machine learning model understandable to humans.”
Source: https://christophm.github.io/interpretable-ml-book/terminology.html
Types of explainability: What can be explained?
What type of explanations can we provide? We can differentiate between:
- Intrinsic and post-hoc methods: Intrinsic methods restrict the complexity of the model and/or the features before training; post-hoc methods apply an explainability technique after the model has been trained.
- Model-specific and model-agnostic methods: Model-specific methods explain the behavior and decisions of a single type of algorithm, whereas model-agnostic methods work with any type of model.
- Local and global methods: Local methods explain the model’s decision for an individual instance, whereas global methods explain the overall model behavior. (A short code sketch illustrating some of these distinctions follows below.)
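To make these distinctions concrete, here is a minimal sketch (my own illustration, not from the figure) that contrasts an intrinsic explanation, reading the coefficients of a linear model, with a post-hoc, model-agnostic, global one, permutation importance. The dataset and model choices are placeholders.

```python
# Intrinsic vs. post-hoc, model-agnostic, global explanations (scikit-learn only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Intrinsic: a linear model is explainable directly through its coefficients.
linear = LogisticRegression(max_iter=5000).fit(X_train, y_train)
top_coefs = sorted(zip(X.columns, linear.coef_[0]), key=lambda t: abs(t[1]), reverse=True)[:5]
print("Intrinsic explanation (largest coefficients):", top_coefs)

# Post-hoc, model-agnostic, global: permutation importance works for any fitted model.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
top_perm = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)[:5]
print("Post-hoc global explanation (permutation importance):", top_perm)
```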
Figure 1 provides a succinct taxonomy of the most common model-agnostic methods. There are various algorithms to explain the data and/or the model, some of them visual, and we have the aforementioned distinction between global and local methods. Don’t be overwhelmed if some of the methods are new to you. The purpose of this flowchart is not to introduce you to a new XAI method but to help you understand how they all fit together. The taxonomy can also help facilitate conversations between the various stakeholders involved in a use case. Sadly, though, the chart alone is not sufficient.
Figure 1. Taxonomy of explainability approaches
Explainability in the ML development cycle
We cannot talk about XAI in practice without discussing where XAI fits in the overall machine learning development cycle. Let’s say that we have a clearly defined business problem (I know, that is wishful thinking in many scenarios). Even less believably, the input data is clearly identified and even provided to us.
The first important step is to gather the XAI requirements of all stakeholders — if explainability is relevant for the use case, of course. The set of stakeholders will differ with the use case. For example, let’s assume we are using machine learning to help decide whether we should grant a loan to a client. In this case, relevant stakeholders could be:
- the loan officer or relationship manager,
- the client themselves,
- the technical team building and deploying the model,
- the business stakeholders who accept the risk of the model.
Depending on the broader organizational setup, we could also have:
- the model validation team, compliance, the privacy office, legal, and audit,
- in some countries, external regulators who may have their own explainability requirements.
And how can we gather the requirements of all these stakeholders? Commonly used approaches include structured and less-structured interviews, surveys, and workshops.
Keep in mind that it is very likely that different stakeholders will have different XAI requirements, and thus we will end up utilizing multiple approaches.
The next step is to train our machine learning algorithm of choice (taking into account the XAI requirements, if relevant).
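As one hedged illustration of how such requirements can shape this step, the sketch below assumes the stakeholders asked for an intrinsically explainable model and therefore restricts model complexity: a shallow decision tree whose rules can be printed and read end to end. The dataset and the depth limit are illustrative assumptions, not a recommendation.

```python
# Training with an intrinsic-explainability requirement: restrict model complexity.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth limit trades some accuracy for a rule set a domain expert can read in full.
interpretable_model = DecisionTreeClassifier(max_depth=3, random_state=0)
interpretable_model.fit(X_train, y_train)

print(f"Held-out accuracy: {interpretable_model.score(X_test, y_test):.3f}")
print(export_text(interpretable_model, feature_names=list(X.columns)))
```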
The third step consists of applying the chosen approach or set of approaches. I cannot stress enough that different XAI techniques help you answer different questions, and sadly there is not a single one that works across all scenarios (despite the popularity and nice properties of Shapley values).
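As an example, here is a minimal sketch of this step using Shapley values via the third-party shap package on a tree-based classifier; the model and data are stand-ins for your own use case rather than anything prescribed by the article.

```python
# Applying one post-hoc technique (SHAP) to get a global and a local explanation.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

explainer = shap.Explainer(model, X_train)   # dispatches to a suitable explainer for the model
shap_values = explainer(X_test)

shap.plots.bar(shap_values)                  # global view: mean |SHAP value| per feature
shap.plots.waterfall(shap_values[0])         # local view: contributions for a single instance
```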
And let’s be optimistic (or unrealistic?) and assume the model will be deployed in production. Once the model is deployed and we monitor its predictions and explanations, we should also track how users interact with the explanations and record any feedback they provide.
Many of you may know that model monitoring and maintenance is a whole can of worms on its own. Adding XAI to it does not make it any easier. While it is (relatively) easy to compare the model’s predictions against what really happened, it is much more difficult to validate the explanations or provide a quantitative measure of their accuracy, validity, and fidelity.
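One modest thing we can do, short of truly validating explanations, is to track how stable they are over time. The sketch below is an assumption of mine, not an established method: it aggregates per-batch SHAP values and flags features whose share of overall importance shifts markedly between a reference batch and the latest scoring batch, with an arbitrary threshold.

```python
# A crude explanation-drift check: compare aggregated feature importances across batches.
import numpy as np

def mean_abs_shap(shap_values):
    """Aggregate a batch of SHAP values (an Explanation object) into one importance per feature."""
    return np.abs(shap_values.values).mean(axis=0)

def explanation_drift(reference, current, threshold=0.25):
    """Flag features whose share of total importance changed by more than `threshold` (relative)."""
    ref = reference / reference.sum()
    cur = current / current.sum()
    relative_change = np.abs(cur - ref) / np.maximum(ref, 1e-12)
    return relative_change > threshold

# Usage sketch: `shap_values_ref` and `shap_values_new` would come from the explainer in the
# previous snippet, computed on a reference batch and on the latest production batch.
# drifted = explanation_drift(mean_abs_shap(shap_values_ref), mean_abs_shap(shap_values_new))
```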
Figure 2. XAI in the ML development cycle
Conclusion
XAI is definitely a very exciting and important (and a little bit hyped) field. While researchers are still developing new approaches, applying various XAI methods in practice comes with its own challenges. One thing to keep in mind: The earlier in the project you start thinking about XAI, the easier it will be to incorporate it. However, when it comes to XAI,
there is no one-size-fits-all approach: it is a process rather than a single product.