
Explainable AI: From Prediction To Understanding

ODSC - Open Data Science
5 min read · Mar 20, 2019


It’s not enough to make predictions. Sometimes, you need to generate a deep understanding. Just because you model something doesn’t mean you really know how it works. In classical machine learning, the algorithm spits out predictions, but in some cases, this isn’t good enough. Dr. George Cevora explains why the black box of AI may not always be appropriate and how to go from prediction to understanding.

[Related article: The Importance of Explainable AI]

Why We Need To Understand The Data

So why do you need explainable AI? Cevora outlines two primary reasons companies need explainability:

  • Human Readability: When you’re making decisions for a company, your director or CEO isn’t interested in the data itself. Instead, they want the reasoning behind the interpretation and, specifically, what to do about it. Explainable AI surfaces the reasoning behind individual decisions, which both increases transparency and supports better business understanding (see the sketch after this list).
  • Justifiability: In Europe, hiring and firing decisions are often driven by large data sets, but employees have the right to a clear justification for any decision that involves them. If you don’t know how the machine came to its conclusion, you cannot satisfy this fundamental right and…
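
To make the first point concrete, here is a minimal sketch of one common way to go from a prediction to a reason: fit a model, then ask which inputs drive its output using permutation importance. This is not Cevora’s method; it assumes scikit-learn is available and uses synthetic data with hypothetical feature names purely for illustration.

```python
# Minimal sketch: turning a black-box model's predictions into a ranked
# explanation of which inputs mattered. Assumes scikit-learn is installed;
# the data and feature names are synthetic stand-ins, not from the article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical HR-style features, e.g. for a hiring/retention decision.
feature_names = ["tenure_years", "performance_score", "absences", "training_hours"]

X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's test score drops -- a model-agnostic view of what drives decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>20}: "
          f"{result.importances_mean[idx]:.3f} ± {result.importances_std[idx]:.3f}")
```

A ranked list of importances like this is the kind of “reasoning behind certain decisions” a director can act on, even when the underlying model itself remains a black box.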

