Weapons Of Math Destruction: The Power Of Oversight

ODSC - Open Data Science
May 31, 2019 · 6 min read


Weapons of Math Destruction: How algorithms have the power to alter our liberties and what we should be doing instead.

Big data is making decisions about your future behind the scenes, and it’s likely you don’t even know it. If you’ve ever applied for a job and had to take some kind of exam beforehand, if you’ve ever had a run-in with the law, or if your career performance took an unexpected turn based on some mysterious scoring system, you might have encountered what Cathy O’Neil, data scientist and author of several books on big data, calls a “weapon of math destruction.”

Algorithms Replicate Our Existing System

We like to think of data as impartial, but the truth is that our data is merely an impression of the systems we already have in place. Without considering how our own assumptions shape the way that data is collected and used, we merely reinforce those existing systems.

That’s bad news, because big data promised us a fairer shake. Our systems are only as good as their data, so training sets deployed without consideration for personal liberties don’t fix anything. They merely automate the institutions already in place.

O’Neil’s book Weapons of Math Destruction outlines how big data has the potential to be one of the biggest threats to our liberties we’ve seen in a long time. She describes three key characteristics of these systems: they are widespread (W), mysterious (M), and destructive (D). Those traits define what our interactions with big data will look like if we don’t change the way we create and deploy these algorithms.

The Myth of Impartial Algorithms

Subjectivity in algorithms isn’t always intentional. An algorithm is merely historical data and pattern recognition, combined with a model that predicts the future. Someone still has to define what success means.

Think of it like a recipe. “The historical data when I cook meals for my family is my memories,” says O’Neil, “what worked in the past, and the ingredients in my kitchen.” But she’s already lied to you without intending to, because she doesn’t use all the ingredients in her kitchen. She decides which ingredients are relevant and projects her agenda onto her family meal.

Deciding on the success of the meal is also her prerogative. For her son, success means “Did I get Nutella?” But because she’s in charge, she gets to decide that a successful meal means “Did my kids eat vegetables?” It’s subjective opinion projected onto the algorithm.
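To make the point concrete, here is a minimal Python sketch of the recipe analogy. The meals and their scores are invented for illustration, not taken from O’Neil’s talk; the point is only that the same historical data, ranked under two different definitions of success, produces two different “best” answers.

```python
# Toy illustration: the same data, two definitions of "success".
# All meals and scores below are made up for the example.
meals = [
    {"name": "crepes with Nutella", "kid_joy": 0.9, "vegetables": 0.0},
    {"name": "stir-fried vegetables", "kid_joy": 0.4, "vegetables": 0.9},
    {"name": "veggie lasagna", "kid_joy": 0.7, "vegetables": 0.6},
]

# The son's definition of success: did I get Nutella (maximize joy)?
def son_success(meal):
    return meal["kid_joy"]

# The parent's definition of success: did my kids eat vegetables?
def parent_success(meal):
    return meal["vegetables"]

print(max(meals, key=son_success)["name"])     # crepes with Nutella
print(max(meals, key=parent_success)["name"])  # stir-fried vegetables
```

Neither ranking is wrong on its own terms; each simply encodes whoever got to define success.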

A critical piece of that story is power. Who has the power to choose the definition of success? “As data scientists, that’s something we get to choose without thinking about it,” she says. She loves algorithms and even builds algorithms to audit other algorithms, but she does worry about the use of some of the algorithms she sees.
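As a rough idea of what an algorithm that audits another algorithm might check, here is a minimal sketch of one common fairness test, the disparate impact ratio (the “80% rule”). This is not O’Neil’s actual auditing methodology, and the decision data below is hypothetical.

```python
# Minimal disparate impact check over (group, approved?) decision records.
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest selection rate divided by the highest; below 0.8 is a red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, was the applicant approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratio(decisions))  # 0.5, worth investigating
```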

The Problem With Blind Trust In Algorithms

The same methods go into the algorithms we deploy at scale, but instead of choosing what children eat, these algorithms have far-reaching consequences: they’re used to grant or deny opportunities, and the people on the receiving end often aren’t even aware these critical junctures exist. She outlines some examples.

Widespread

Kyle Behm applied for a minimum-wage job and was required to fill out a personality test online. He failed, and, more unusually, he found out that he had failed. Worse, he recognized questions from a test that had been used to screen him for bipolar disorder in a hospital, which would violate his rights under the Americans with Disabilities Act. Six other stores in the Atlanta area used the same test, and he failed all of them for the same reason. The class-action lawsuit is still in the courts.

Mysterious

Sarah Wysocki was fired from the Washington, DC public school system based on a Teacher Value-Added Model score that couldn’t be explained. She believed her score was artificially low because the previous year’s teacher had cheated, but she couldn’t appeal the decision because “algorithms are fair.” When someone finally obtained the data through the Freedom of Information Act, the scores turned out to be almost random numbers. In a separate but similar case, teachers won a lawsuit because the judge determined that due process had been violated. The system doesn’t find bad teachers; it chooses nearly at random, causing a mass exodus to the suburbs and private schools.

Destructive

Predictive policing algorithms are designed to direct police work toward better results, but we don’t have the kind of crime data that would make that possible. Instead, the data “directs police,” predicting what police are likely to do rather than predicting the crime itself. The targeted communities are more likely to see arrests, which further entrenches the data used to predict future police actions. The same types of algorithms are also deployed to predict future recidivism, with disastrously racist results.
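A toy simulation makes the feedback loop easier to see. The neighborhoods, rates, and counts below are invented, and this is not a real policing model; the point is only that when patrols follow recorded arrests, and arrests are only recorded where patrols go, an initial skew in the data compounds even though the true crime rates are identical.

```python
# Toy feedback-loop simulation with invented numbers.
import random

random.seed(0)
true_crime_rate = {"neighborhood_A": 0.3, "neighborhood_B": 0.3}  # identical
arrest_counts = {"neighborhood_A": 5, "neighborhood_B": 1}        # skewed history

for week in range(20):
    # "Predictive" allocation: patrol wherever past arrests were recorded.
    patrolled = max(arrest_counts, key=arrest_counts.get)
    # Crime only becomes an arrest where officers are present to see it.
    if random.random() < true_crime_rate[patrolled]:
        arrest_counts[patrolled] += 1

print(arrest_counts)  # neighborhood_A keeps pulling further ahead
```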

The problem with these algorithms is that they’re often created for one specific function but end up deployed in a different field. The data scientists responsible for the personality test created it specifically for a business with a special exemption, but that algorithm ended up in the hands of a sales team, which sold it to companies that didn’t bother to create an algorithm directly related to their own operations. That’s a problem.

“When we blindly trust the algorithms we’re creating, and when other people trust the algorithms we create behind the authority of mathematics, we cease to attempt to evolve our own cultures. We think we’ve solved a problem which hasn’t been solved,” O’Neil says.

Moving Beyond Blind Trust: Ethical Big Data

O’Neil wants all data scientists to become ethicists: promise to do no harm, and consider whether someone’s legal, human, or constitutional rights would be violated by the algorithm you’re building. We also need transparency.

Open data is great, but most people can’t read the algorithms themselves. Instead, O’Neil would like to see something closer to what goes into deciding your FICO credit score: people should be able to see the data going in, see how their score would have changed with different data, and be able to appeal data that’s wrong or algorithms that are unfair.
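As a sketch of what that kind of transparency could look like in practice, here is a toy scoring function where the inputs are visible and an applicant can ask how the score would change with different data. The features and weights are invented for illustration; this is not FICO’s formula.

```python
# Toy transparent score: visible inputs plus a "what if" counterfactual.
WEIGHTS = {"on_time_payments": 4.0, "utilization": -2.5, "accounts_age_years": 1.5}

def score(record):
    """Linear toy score over the applicant's visible inputs."""
    return sum(WEIGHTS[k] * record[k] for k in WEIGHTS)

def what_if(record, **changes):
    """Recompute the score after the applicant corrects or changes inputs."""
    return score({**record, **changes})

applicant = {"on_time_payments": 20, "utilization": 9.0, "accounts_age_years": 4}
print(score(applicant))                     # 63.5, with every input in view
print(what_if(applicant, utilization=3.0))  # 78.5 if utilization were lower
```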

She also believes we need something similar to the FDA. Algorithms are put out at mass scale with no oversight into whether they’re harming us, and she envisions a governing body designed to regulate the systems of scoring we’re increasingly seeing in every aspect of our lives.

You must also have conversations about how these algorithms affect not just the company but every stakeholder involved. Building the grid of who your algorithms affect is a conversation you must have within your organization.

Become Your Own Ethicist

Data has the potential to create its own reality. In the case of recidivism, when people have fewer opportunities because of the predictive algorithm, they are more likely to re-offend. The algorithm didn’t just predict the future; it caused it. That type of dystopian future is one O’Neil doesn’t want to see.

Industries don’t want regulation, and plausible deniability rules much of how we approach big data, but it’s time to be optimistic yet skeptical about algorithms. We need to monitor them and ensure they aren’t causing real harm. We need to figure out what safe algorithms look like, and it needs to happen soon.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.

Written by ODSC - Open Data Science

Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience.
