AI Black Box Horror Stories — When Transparency was Needed More Than Ever

ODSC - Open Data Science
6 min read · Oct 28, 2019

Arguably, one of the biggest debates in data science in 2019 is the need for AI explainability. The ability to interpret machine learning models is turning out to be a defining factor in the acceptance of statistical models for driving business decisions. Enterprise stakeholders are demanding transparency into how and why these algorithms make specific predictions, and a firm understanding of any inherent bias in machine learning keeps rising to the top of the list of requirements for data science teams. As a result, many top vendors in the big data ecosystem are launching new tools that take a stab at opening the AI “black box.”
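To make that concrete, here is a minimal sketch of what “opening the black box” can look like in practice. It uses the open-source shap library, which is my choice of illustration rather than one of the vendor tools alluded to above, to attribute a tree model’s predictions to its input features; the dataset and model are arbitrary stand-ins.

```python
# A minimal sketch of post-hoc model explanation with SHAP.
# Assumptions: shap, xgboost, and scikit-learn are installed;
# the dataset and model are illustrative placeholders.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Train a simple gradient-boosted model on a public dataset.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes SHAP values: per-feature contributions
# to each individual prediction, relative to a baseline.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summarize which features drive the model's predictions overall.
shap.summary_plot(shap_values, X)
```

The point of a workflow like this is that each prediction comes with a feature-by-feature accounting, which is exactly the kind of evidence stakeholders are asking for when they demand transparency.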

[Related Article: Opening The Black Box — Interpretability In Deep Learning]

Some organizations have taken the plunge into AI even with the realization that their algorithms’ decisions can’t be explained. One case in point is Man Group (one of the world’s largest hedge funds, with $96 billion under management), which was initially wary of the technology’s lack of interpretability but was ultimately persuaded by the excellent returns from its algorithm-centric funds. Not all AI adoption strategies, however, culminate in rose-colored returns.

In this article, I will make the case for the importance of explainable AI by examining five AI horror stories where transparency was needed more than ever.
