
Model Evaluation in the Land of Deep Learning

ODSC - Open Data Science
4 min read · Sep 9, 2019


Applications for machine learning and deep learning have become increasingly accessible. For example, Keras provides a high-level API on top of a TensorFlow backend that enables users to build neural networks without being fluent in TensorFlow. Despite the ease of building and testing models, deep learning has suffered from a lack of interpretability; deep learning models are considered black boxes by many users. In a talk at ODSC West 2018, Pramit Choudhary explained the importance of model evaluation and interpretability in deep learning and some cutting-edge techniques for addressing it.
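To see how low that barrier is, here is a minimal sketch of the kind of model Keras makes easy to build. The dataset, layer sizes, and training settings are illustrative assumptions for this article, not anything from the talk:

```python
import numpy as np
from tensorflow import keras

# Toy data (assumed for illustration): 500 samples, 20 features, binary labels.
X = np.random.rand(500, 20).astype("float32")
y = np.random.randint(0, 2, size=(500,))

# A small feed-forward network via the Keras Sequential API; no direct
# TensorFlow code is needed.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```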

[Related Article: Deep Learning for Speech Recognition]

Predictive accuracy is not the only concern regarding a model’s performance. In many cases, it is critical for data scientists to be able to understand why a model makes the predictions it does. Applications include describing model decisions to business executives, identifying blind spots to resist adversarial attacks, complying with data protection regulations, and providing justification for customer classifications. Pramit Choudhary explained that there are two levels of interpretation: global and local. Global interpretation is understanding the conditional interaction of features and target variables with respect to the entire dataset, while local interpretation involves understanding the same relationship, but with respect to a single prediction or data point.
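As one concrete example of global interpretation, the sketch below implements permutation feature importance from scratch: shuffle one feature at a time and measure how much accuracy drops. It reuses the `model`, `X`, and `y` from the earlier snippet; the technique shown is a standard one, not necessarily the specific method Choudhary demonstrated:

```python
import numpy as np

def permutation_importance(predict_fn, X, y, n_repeats=5, seed=0):
    """Global importance: mean drop in accuracy when one feature is shuffled."""
    rng = np.random.default_rng(seed)
    base_acc = np.mean(predict_fn(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # destroy feature j's link to the target
            drops.append(base_acc - np.mean(predict_fn(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances

# Wrap the Keras model from the earlier sketch as a hard-label predictor.
predict_fn = lambda X: (model.predict(X, verbose=0).ravel() > 0.5).astype(int)
print(permutation_importance(predict_fn, X, y))
```

A local method such as LIME takes the opposite view: it perturbs a single row of the data and fits a simple, interpretable surrogate to the black-box model’s predictions in that neighborhood, explaining one prediction rather than the whole dataset.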
