Humans Throughout the Loop: The Role Humans Play in the Entire ML Lifecycle

ODSC - Open Data Science
Jun 22, 2022


I recently attended ODSC East 2022 in Boston. As a data scientist with a particular interest in human-in-the-loop machine learning, I found myself drawn to the MLOps and AI Observability sessions. I had been a bit naive about MLOps: because it's frequently mentioned alongside AutoML, I had encountered descriptions that seemed to deemphasize the role of the human, whether that be the tasks of the data scientist or the manual interventions that address data quality and exception processing. The sessions I attended gave me the opposite sense, and on reflection, I needed to surround myself with exactly the quality of thought leadership I encountered at ODSC East. The emphasis was not only on continuous improvement but on how to collaborate efficiently across multiple teams. I'll be seeking more opportunities to learn from the MLOps community this year.

I should also acknowledge that I first heard the phrase humans-throughout-the-loop from my colleagues at Northeastern's Experiential AI program, who were in attendance at my ODSC East session. Having interviewed Usama Fayyad, their inaugural director, last year as part of CloudFactory's thought leadership series, it doesn't surprise me that they are trying to upend stereotypes about AI. I love what they are trying to communicate with this phrase. There isn't a clear consensus about what human-in-the-loop means. For me, it certainly includes both the manual labeling and annotation of supervised machine learning datasets and exception processing post-deployment. However, I agree with the Northeastern team that we should broaden it further.

So how might MLOps, AI Observability, and human-in-the-loop all work together throughout the ML lifecycle? In short, quality control and continuous improvement identify shortcomings in our models, our data, or both. Once identified, those flaws need attention, and an automated model refresh is not enough: human attention is required. If it's a flaw in the model, a modeler needs to revisit the entire modeling process. If it's the data, person-hours are needed to correct it. My compliments to Pachyderm's team for emphasizing that the model and the data evolve on parallel tracks and that each needs its own versioning.
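To make the parallel-tracks idea concrete, here is a minimal sketch of versioning the data and the model independently while recording which data version each model was trained on. The function names and the JSON lineage file are my own illustration, not Pachyderm's API; a real pipeline would use a purpose-built tool.

```python
# Illustrative sketch only: version data and model artifacts on
# separate tracks, identified by content hashes rather than timestamps.
import hashlib
import json
from pathlib import Path

def content_version(path: str) -> str:
    # Hash the artifact's bytes so the version changes exactly when
    # the content does.
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()[:12]

def record_lineage(model_path: str, data_path: str,
                   registry: str = "lineage.json") -> dict:
    # Log the (model version, data version) pair so either track can
    # be audited or rolled back independently of the other.
    entry = {
        "model_version": content_version(model_path),
        "data_version": content_version(data_path),
    }
    history = (json.loads(Path(registry).read_text())
               if Path(registry).exists() else [])
    history.append(entry)
    Path(registry).write_text(json.dumps(history, indent=2))
    return entry
```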

The data scientists could periodically contribute a handful of hours to correcting the data manually. That might even be desirable, since it helps them with diagnostics. However, it's not scalable: without an external data processing strategy, either the internal team will get burnt out or the work won't get done. Critically, inconsistent labeling and annotation are so common with unstructured data that data quality might be the best place to investigate first. We see this often at CloudFactory. Many of our clients work with unstructured data, including text, images, and video. They need the data quality that only a managed, scalable workforce can provide; without that level of consistency and quality, the modeling never goes smoothly.
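One way to act on that advice is to measure labeling consistency before touching the model. The sketch below uses Cohen's kappa on a toy pair of annotators; the labels and the 0.6 threshold are illustrative assumptions, not a CloudFactory standard.

```python
# Illustrative sketch: quantify inter-annotator agreement so that
# inconsistent labeling surfaces as a data problem, not a model problem.
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators on the same six items.
annotator_a = ["cat", "dog", "dog", "cat", "bird", "dog"]
annotator_b = ["cat", "dog", "cat", "cat", "bird", "cat"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")

# A common rule of thumb: agreement below ~0.6 suggests the labeling
# guidelines need work before any retraining is worthwhile.
if kappa < 0.6:
    print("Low agreement: investigate data quality first.")
```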

Data Observability may be a new phrase to some data scientists. What is it all about? Observability is a concept from control theory that describes how well the state of a system can be inferred from its outputs. Software engineers have applied the concept for some time to monitor for issues in software systems. More recently, companies like Metaplane are doing the same for errors in data. The implication I found so powerful for human-in-the-loop is that some of the errors found in data will need manual intervention. A challenge faced by almost any data scientist who has worked in anomaly detection is that cases with a high risk score need to be verified, and verification is labor-intensive. Too often, the development of an anomaly detection model does not take this into account. Similarly, identifying errors in data should be accompanied by an error processing strategy.
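As a sketch of what an error processing strategy might look like, the code below pairs an off-the-shelf anomaly detector with a capacity-bounded human review queue. IsolationForest and the review capacity of 20 are stand-ins of my own choosing; the point is that the manual verification workload is planned for up front rather than discovered after deployment.

```python
# Illustrative sketch: route only the highest-risk cases to human
# reviewers so the verification workload stays bounded.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))     # stand-in for real feature data

detector = IsolationForest(random_state=0).fit(X)
risk = -detector.score_samples(X)  # higher value = more anomalous

REVIEW_CAPACITY = 20               # cases humans can verify per cycle
review_queue = np.argsort(risk)[-REVIEW_CAPACITY:][::-1]

print(f"Routing {len(review_queue)} of {len(X)} cases to human review")
# Reviewer verdicts then feed back into the next labeling/retraining cycle.
```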

Keith McCormick serves as CloudFactory's Chief Data Science Advisor. He's also an author, LinkedIn Learning contributor, university instructor, and conference speaker. Keith has been building predictive analytics models since the late 90s. More recently, his focus has shifted to helping organizations build and manage their data science teams.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
