ODSC West 2019 Keynote: Rachel Thomas on Algorithmic Bias
As algorithms are increasingly used in ways that affect daily life, the issue of algorithmic bias becomes ever more important. In her keynote address at ODSC West 2019, Rachel Thomas, co-founder of fast.ai and director of the University of San Francisco Center for Applied Data Ethics, addressed this topic in detail with several recent case studies and suggestions for how to prevent bias from affecting our own models.
During her talk, Thomas focused on three case studies that exhibited different types of bias. First, she addressed the Gender Shades study, which showed that commercial facial recognition software is significantly less accurate at recognizing dark-skinned women (65.3% accuracy) than light-skinned men (99.7% accuracy). Amazon’s facial recognition software was also inaccurate, misidentifying members of the US Congress as criminals. Even worse, as Thomas highlighted, those “misidentified disproportionately included people of color.” Thomas explained that two types of bias influenced these case studies: representation bias, caused by the overrepresentation of one group in training data, and evaluation bias, caused by benchmarking against already biased data sets.
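The core idea behind the Gender Shades finding is disaggregated evaluation: measuring a model's accuracy separately for each demographic subgroup instead of reporting one aggregate number that can hide large gaps. The snippet below is a minimal sketch of that kind of check on made-up data; the column names (`y_true`, `y_pred`, `group`) and the pandas-based approach are illustrative assumptions, not the methodology of the study itself.

```python
import pandas as pd

# Hypothetical evaluation results: true labels, model predictions,
# and a demographic group label for each example.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 0, 1, 0],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Disaggregated evaluation: accuracy computed separately per group.
per_group_accuracy = (
    results.assign(correct=results["y_true"] == results["y_pred"])
           .groupby("group")["correct"]
           .mean()
)

print(per_group_accuracy)                                   # accuracy per subgroup
print(per_group_accuracy.max() - per_group_accuracy.min())  # the accuracy gap
```

The same pattern applies to any metric: a single overall score of, say, 90% can coexist with one subgroup doing far worse, which is exactly what reporting only aggregate accuracy obscured in the systems Gender Shades examined.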
The second case study focused on COMPAS, a recidivism-risk model used to inform bail, parole, and sentencing decisions in court. A study of this model found that its false positive rate for black defendants was twice as high as for white defendants. And although a separate study indicated that the model was only about as accurate as a linear classifier using three variables, it is still in use in many states today. The type of bias influencing this case study is historical bias, which, as Thomas noted, is particularly difficult to prevent because it predates the initial collection of the data.
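Two of the checks implicit in this case study, comparing false positive rates across groups and benchmarking a complex proprietary model against a very simple alternative, are straightforward to express in code. The sketch below is a hedged illustration on synthetic data, not an analysis of COMPAS itself; the features, the group labels, and the scikit-learn logistic-regression baseline are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def false_positive_rate(y_true, y_pred):
    """FP / (FP + TN): how often actual negatives are flagged as positive."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

# Synthetic data standing in for defendant records: three simple features,
# an outcome label, and a group label used only for auditing.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))            # hypothetical features (e.g. age, prior counts, charge degree)
y = (X[:, 1] + rng.normal(size=n) > 0).astype(int)
group = rng.choice(["group_1", "group_2"], size=n)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0
)

# A deliberately simple baseline: logistic regression on three variables.
baseline = LogisticRegression().fit(X_tr, y_tr)
pred = baseline.predict(X_te)

print("baseline accuracy:", accuracy_score(y_te, pred))
for g in ("group_1", "group_2"):
    mask = (g_te == g)
    print(g, "false positive rate:", false_positive_rate(y_te[mask], pred[mask]))
```

The point of such a baseline is the comparison: if a proprietary model used in sentencing is no more accurate than a transparent three-variable classifier, its added complexity and opacity are hard to justify, and the per-group false positive rates make any disparity in who gets wrongly flagged explicit.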
The last case study Thomas addressed during her keynote centered on online ad delivery. Until recently, companies could narrowly target which audiences saw their ads for services and products. Excluding people from seeing ads for jobs or housing could, and did, have a real impact on their lives and future opportunities. Even when exclusion wasn’t their explicit intent, these companies shut specific populations out of seeing and engaging with their services.
Although the above case studies drove home how bias can become incorporated into machine learning models and how biased models can significantly impact individuals’ lives, Thomas saved the most important questions for the end of her talk: Why does algorithmic bias matter? How do we prevent bias from affecting our own models?
Algorithmic bias matters because machine learning models are powerful, and we tend to assume they aren’t subject to human weaknesses. Thomas went on to explain that algorithms can be implemented at scale, affecting far more people than a single person, or even a single company, ever could. Furthermore, machine learning is likely to amplify already existing human bias, making it much worse. Finally, because people assume that algorithms are objective and error-free, they are unlikely to question a model’s output or to create a process for appealing or reversing its decisions. So how do we mitigate the effects of algorithmic bias?
To manage the effects of bias in our own models, Thomas posed these essential questions in her slides:
- Should we even be doing this?
- What bias is already in the data?
- Can the code and data be audited?
- What are the error rates for sub-groups?
- What is the accuracy of a simple rule-based alternative?
- What processes are in place to handle appeals or mistakes?
- How diverse is the team that built it?
Hopefully, by being diligent, vigilant, and thoughtful, as Rachel Thomas reminds us to be, we can help mitigate the effects of bias on our society as AI becomes integrated into our daily life.