Interpretability and the Rise of Shapley Values

ODSC - Open Data Science
3 min read · Sep 25, 2019


Interpretability is a hot topic in data science this year. Earlier this spring, I presented at ODSC East on the need for data scientists to use best practices like permutation-based importance, partial dependence, and prediction explanations. When I first put together this talk, a lot of it was fairly new to many data scientists. But a year later, I see these ideas in many blog posts and ODSC presentations, so what to do?

It’s time to retool my talk. Should I drop the dragons? I’m not sure about that, but I will put a little less emphasis on feature importance and partial dependence. Hopefully, at this point, everyone has moved away from using LIME as an explainer. This fall’s presentation will spend some extra time on the new dominant explainer methodology based on Shapley values.

Yes, Shapley. I have seen a lot of blog posts rehash Scott’s shap notebooks and stop there. My goal is to give a deeper, more nuanced discussion of using Shapley values. After all, did you know you need both hands to count all the different Shapley explainer approaches? While many of you are familiar with the shap package by Scott Lundberg, there are other packages for both R and Python that calculate Shapley values for explanations.

An explanation from the shap package
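If you want to get a feel for this before the session, here is a minimal sketch of how an explanation like the one above is typically produced with the shap package. The dataset, model, and plot choice below are my own placeholders, not the ones from the talk.

```python
# A minimal sketch of producing a SHAP explanation for a single prediction.
# The data and model are placeholder assumptions; California housing stands in
# only because it ships with scikit-learn.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Force plot for the first prediction (matplotlib=True renders it outside a notebook).
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :], matplotlib=True)
```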

An explanation from the IML package

So plan on swinging by my session to learn the fundamentals of interpretability and take a deeper dive into using Shapley values. I will cover different approaches for calculating Shapley values, the tradeoffs between these methods, and practical considerations. I will even have Dwayne Johnson stop by to help explain Shapley values (well, actually just a screenshot of him, but it is helpful).

I will also show how explanations vary across explainers on several datasets. Here are results on a benchmark dataset, Boston Housing, for the top explanation produced by each method. You can see that some methods slightly favor RM over LSTAT. We will talk about these results, and some more extreme examples, in the talk.

Top Explanation on the Boston Housing Dataset
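As a rough illustration of how a comparison like this is made, here is a hedged sketch that reports which feature each of two Shapley explainer approaches ranks highest for a single prediction. It assumes a fitted model `model` and a pandas DataFrame `X` (for example, from the snippet earlier); the Boston Housing loader itself has since been removed from recent scikit-learn releases, so any tabular regression data will do.

```python
# A sketch of comparing the top attributed feature across two Shapley explainer
# approaches. Assumes `model` and the DataFrame `X` from the snippet above.
import numpy as np
import shap

x_row = X.iloc[:1]                  # the single instance to explain
background = shap.sample(X, 100)    # background data for the model-agnostic explainer

# Exact tree-based Shapley values vs. the model-agnostic KernelExplainer approximation.
tree_vals = shap.TreeExplainer(model).shap_values(x_row)[0]
kernel_vals = shap.KernelExplainer(model.predict, background).shap_values(x_row)[0]

for name, vals in [("TreeExplainer", tree_vals), ("KernelExplainer", kernel_vals)]:
    top_feature = X.columns[np.argmax(np.abs(vals))]
    print(f"{name}: top feature = {top_feature}")
```

Repeating the same comparison over many rows gives a quick sense of how often the explainers disagree on the top feature, which is the kind of variation the figure above summarizes.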

If you’d like to learn more about Shapley values and how they can be used to increase transparency in machine learning approaches, this is the right session for you.

Editor’s note: Be sure to check out Rajiv’s talk at ODSC West this October 29 to November 1, “Deciphering the Black Box: Latest Tools and Techniques for Interpretability.”

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.

