Model Interpretation: What and How?

ODSC - Open Data Science
4 min read · Sep 4, 2019


As modern machine learning methods become more ubiquitous, increasing attention is being paid to understanding how these models work: model interpretation rather than just model use. Typically, these questions come in two flavors:

  1. In general, which variables are important for this model and which are less influential?
  2. For a specific prediction, what factors contributed most heavily to the model’s conclusion?

General Model Interpretation and Understanding

In question 1, we are trying to get a general understanding of the mechanisms behind the model. For example, suppose we have an algorithm to predict the value of a house, which looks at a dozen or so factors. An “explanation” might be something along the lines of:

[Related Article: Not Always a Black Box: Machine Learning Approaches For Model Explainability]

“The primary factors are the square footage of the house and the wealth/income of the neighborhood at large. The condition of the house is also important. Other factors such as the number of bathrooms, size of the lot, and whether it has a garage are somewhat important. The rest of the variables have a relatively minor impact.”

Often, people wish to make statements such as “variable X is more important than variable Y”. One caveat to statements like these is that one needs to consider both the magnitude and the frequency of the impact. For example, imagine a world where half the houses have 2-car garages and half have 1-car garages, and 0.1% of the houses have luxury swimming pools. All else being equal, a house with a 2-car garage is worth $20,000 more than its 1-car counterpart, but a luxury swimming pool adds $150,000 to the value of the house.

Which variable is more “important”? The swimming pool has a higher magnitude of impact, but a much lower frequency (more precisely, the variable corresponding to the swimming pool has lower entropy). There are any number of ways to combine the two factors into a single number, but you will always lose something significant in doing so.
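To make that tradeoff concrete, here is a rough back-of-the-envelope sketch in Python using the hypothetical numbers above, scoring each variable by its mean absolute contribution to price (just one of the many possible ways to collapse magnitude and frequency into a single number):

# Hypothetical numbers from the example above.
p_garage, garage_effect = 0.5, 20_000    # half the houses get +$20K vs. a 1-car garage
p_pool, pool_effect = 0.001, 150_000     # 0.1% of houses get +$150K for the pool

# Mean contribution of each variable across all houses.
mean_garage = p_garage * garage_effect   # $10,000
mean_pool = p_pool * pool_effect         # $150

# Mean absolute deviation from that mean, used here as an "importance" score.
imp_garage = p_garage * (garage_effect - mean_garage) + (1 - p_garage) * mean_garage
imp_pool = p_pool * (pool_effect - mean_pool) + (1 - p_pool) * mean_pool

print(f"garage: ${imp_garage:,.0f}  pool: ${imp_pool:,.0f}")
# garage: $10,000  pool: $300

By this particular score the garage variable comes out roughly 30 times more “important” than the pool, even though the pool has by far the larger per-house effect; a different aggregation would tell a different story.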

One way to get a feel for “how” a model is doing its reasoning is simply to see how the predictions change as you change the inputs. Tools such as Individual Conditional Expectation (ICE) plots examine precisely this. Here is an example of an ICE plot showing how the year a house was built affects the model’s assessment of its sale price.

Some care must be taken when evaluating ICE plots, as they may push the model into regions of feature space where there is little data, and the predictions there are unreliable. Still, they can be a useful tool for model understanding.
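As a rough sketch of how such a plot can be generated (assuming a fitted scikit-learn regressor called model and a feature DataFrame X containing a year_built column; both names are placeholders, not from the original post), scikit-learn’s PartialDependenceDisplay can draw ICE curves directly:

import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# One ICE curve per house: sweep year_built over its observed range while
# holding that house's other features fixed, and plot the predicted price.
PartialDependenceDisplay.from_estimator(
    model,                    # fitted regressor (placeholder name)
    X,                        # feature matrix (placeholder name)
    features=["year_built"],  # the column to vary
    kind="both",              # "individual" = ICE only; "both" also overlays the average (PDP)
)
plt.show()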

Explaining a Specific Prediction

In question 2, we are confronted with a prediction on a single example and want a “justification” for the model’s conclusion. Colloquially, we want to know “Why is that house so expensive?” (from the model’s point of view). The kinds of answers we are looking for are “It’s a 5,000 sq ft mansion!” or “It’s in downtown Manhattan!”.

Beyond a single reason, we might want something closer to what a real estate agent or professional appraiser would give. Typically, they start with a baseline estimate, such as “The average house price in the U.S. is $225,000.” From there, they highlight the aspects of this particular house that make it different from typical: “Your town is a bit more expensive than other towns, so that makes the house worth $50K more. This house is smaller than average for your town, which makes it worth $25K less. But it has a relatively large lot (compared to comparably sized homes in your town), which makes it worth $10K more. It’s slightly older, which makes it worth $7K less…” and so on.

[Related Article: ML Operationalization: From What and Why? to How and Who?]

As it happens, methods like SHAP, based on the Shapley value, can do almost exactly the same kind of analysis. XGBoost has integrated SHAP directly, making it possible to get these “prediction explanations” in just a few lines of code.
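As an illustrative sketch (assuming a trained xgboost.Booster named booster and a feature matrix X; both are placeholder names, not from the original post), XGBoost’s built-in SHAP support returns a per-feature contribution for each prediction plus a baseline term:

import xgboost as xgb

dmatrix = xgb.DMatrix(X)

# Shape: (n_examples, n_features + 1). Each row holds one SHAP contribution
# per feature, with the final "bias" column acting as the baseline prediction.
contribs = booster.predict(dmatrix, pred_contribs=True)

baseline = contribs[0, -1]            # e.g. "the average house costs $225K"
feature_contribs = contribs[0, :-1]   # e.g. "+$50K for the town, -$25K for the size, ..."

# The baseline plus the contributions adds back up to the model's raw prediction.
print(baseline + feature_contribs.sum(), booster.predict(dmatrix)[0])

The standalone shap package offers the same decomposition (plus plotting utilities) for many other model types as well.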

Learn more during my session at ODSC West 2019!

Connect with Brian here:

Linkedin: https://www.linkedin.com/in/brianlucena/

Personal Blog: http://numeristical.com/

Github: https://github.com/numeristical/introspective

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.
