Responsible AI Is Not an Option. Here’s How You Can Achieve It
If you’re attending ODSC West, you’re acutely aware that artificial intelligence (AI) is now widely used to inform and shape business strategies and services. You’re probably also aware that the decisions made by AI algorithms can appear callous and even careless. But it doesn’t have to be that way; responsible use of AI is within reach of every data science organization. I’m delighted to be presenting my talk “Responsible AI Is Not an Option” at ODSC West, sharing a perspective gained over more than two decades of delivering industry-focused machine learning/AI innovation in a highly regulated environment.
With great power comes great responsibility
Given AI’s powerful impact on business and society, Responsible AI is a standard for accountability: it requires a company’s Board of Directors to approve and support a company-wide model governance framework that ensures AI implementations are safe, trustworthy, and unbiased.
Building Responsible AI involves identifying key business risks in adopting AI and proactively mitigating them. In this blog I’ll briefly describe a path toward achieving Responsible AI:
- Build a diverse team
- Establish a robust foundation
- Respect the power of data
- Ensure explainability
- Establish Ethical AI guardrails
- Make AI innovation adoption efficient
- Establish proper AI governance
- Evangelize responsibility
1. Build a diverse team
Building an analytics team for AI projects means knowing how to hire the right people. There’s no template or magic formula for getting it right. It’s an iterative process, full of hard questions that must be asked early and often to produce a successful outcome.
First and foremost, the data analytics team should represent a diverse set of perspectives and experiences, and be able to appropriately balance the company’s current level of analytics sophistication with its AI aspirations. From there, you can determine the right size and capabilities of the team based on organization-specific needs and objectives. It is extremely important not to overstretch the capability or capacity of the team; it’s far better to have a few successful Responsible AI projects than a larger number of projects that fail because your talent was spread too thin.
2. Establish a robust foundation
In an age of cloud services and open source, there are still no “fast and easy” shortcuts to proper model development. AI models produced with the proper data and scientific rigor are robust and capable of thriving in tough, rapidly changing environments. Responsible AI requires a robust development methodology that includes:
- Proper use of historical training and testing data
- Well-defined metrics for acceptable performance
- Careful model architecture selection
- Processes for model stability testing, simulation, and governance
Perhaps most importantly, all of the above factors must be adhered to by the entire data science organization, and explainability must be non-negotiable.
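To make these requirements concrete, here is a minimal sketch in Python with scikit-learn: a held-out test set, a pre-agreed acceptance metric, and a hard gate before the model advances. The synthetic dataset and the 0.75 AUC threshold are illustrative assumptions only, not anyone’s production standard.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for properly sourced historical data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

# Hold out test data up front so performance estimates stay honest.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A well-defined, pre-agreed acceptance metric acts as a hard gate.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
MIN_ACCEPTABLE_AUC = 0.75  # hypothetical governance threshold
assert auc >= MIN_ACCEPTABLE_AUC, f"AUC {auc:.3f} is below the standard"
print(f"test AUC {auc:.3f} meets the acceptance standard")
```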
3. Respect the power of data
A growing number of consumers want to be empowered with a constant stream of individualized information to help them make better financial decisions. With control of their own data imminent (as part of the Open Banking movement), we are seeing consumers increasingly provide consent for specific, prescribed, and constrained uses of their transaction data. Banks’ ability to obtain and manage specific customer consents will directly impact their ability to create that “next layer” of transformational data.
AI and machine learning (ML) technologies will be critical in delivering personalized experiences. But to be truly transformative, new data-driven features must be highly accurate, safe, unbiased, and offer personalized insights. Those that don’t will get a lukewarm consumer reception at best, weakening consumer trust and jeopardizing future data access.
4. Ensure explainability
Model explainability is crucial. In fact, I hold a belief that’s unorthodox in the data science world: explainability first, predictive power second, a notion that is more important than ever. Explainable AI should make it easy for humans to answer important questions, including:
- Was the model built properly?
- What are the risks of using the model?
- What features or behaviors drive the model?
- Is the model ethical?
- When does the model degrade?
The last question illustrates the related concept of Humble AI. Here, data scientists determine the situations in which a model’s performance is suitable, and the situations in which it won’t work because of a low density of historical data. Either of these factors makes the model unsafe or unethical to use for such customers.
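As an illustration of the Humble AI idea, the sketch below abstains from scoring inputs that fall in low-density regions of (synthetic) training data. The k-nearest-neighbor density test, the 99th-percentile cutoff, and every name here are illustrative assumptions, not a description of any production system.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))  # stand-in for historical training data

# Mean distance to nearby training points is a rough density measure.
nn = NearestNeighbors(n_neighbors=10).fit(X_train)
train_dist, _ = nn.kneighbors(X_train)
# Column 0 is each point's distance to itself; drop it when calibrating.
cutoff = np.percentile(train_dist[:, 1:].mean(axis=1), 99)

def score_or_abstain(x, model_score):
    """Score only where historical data gives the model support."""
    dist, _ = nn.kneighbors(x.reshape(1, -1))
    if dist.mean() > cutoff:
        return None  # abstain: too little historical data to score safely
    return model_score(x)

x_new = rng.normal(size=5)
print(score_or_abstain(x_new, model_score=lambda x: float(x.sum())))
```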
5. Establish ethical AI guardrails
One of the most common misperceptions I hear about bias is, “If I don’t use age, gender or race, or similar factors in my model, it’s not biased.” Unfortunately, that’s not true.
ML learns relationships in data to fit a particular objective function or goal. It will often form proxies for avoided inputs, and these proxies can carry bias. From a data scientist’s point of view, ethical AI is achieved by taking precautions to expose what the underlying ML model has learned and whether it could impute bias.
Ethical models must be tested and any discrimination removed. Interpretable ML architectures allow extraction of the non-linear relationships that typically hide in the inner workings of most ML models. These non-linear relationships need to be identified and tested separately, because they are learned automatically during training, from data that is all too often implicitly full of societal biases.
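To illustrate one such precaution, the sketch below trains a model with the protected attribute deliberately excluded, then tests whether a correlated feature lets scores diverge across groups anyway. The synthetic data, the demographic-parity-style gap, and the tolerance are all hypothetical choices, not a prescribed test suite.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                   # protected attribute (held out)
zipcode_income = rng.normal(loc=group, size=n)  # a potential proxy feature
other = rng.normal(size=n)
y = (zipcode_income + other + rng.normal(size=n) > 0).astype(int)

# The protected attribute is NOT among the model's inputs.
X = np.column_stack([zipcode_income, other])
model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Demographic-parity-style check: difference in mean score across groups.
gap = abs(scores[group == 1].mean() - scores[group == 0].mean())
MAX_GAP = 0.05  # hypothetical tolerance set by governance
print(f"score gap across groups: {gap:.3f}")
if gap > MAX_GAP:
    print("potential proxy bias: investigate features correlated with group")
```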
6. Make AI innovation adoption efficient
Efficient AI simply means building it right the first time. To be efficient, ML models must be built according to a company-wide development standard that mandates the use of:
- Shared code repositories
- Approved model architectures
- Sanctioned variables
- Approved AI intellectual property (IP) components
- Established bias testing
- Stability standards for active models
AI models are extremely complex, so it is important to use standard, approved technology and IP components. Production models are not a research playground; new technology must first go through extensive research cycles and be approved under the organization’s formal model standards. That process encompasses the approval of code, IP, and algorithm usage once an exhaustive study demonstrates safe use.
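A standard like this can also be enforced mechanically at build time. The sketch below, with entirely hypothetical architecture and variable names, validates a model specification against an approved registry before the model can proceed.

```python
# Hypothetical registries; in practice these would be governed artifacts.
APPROVED_ARCHITECTURES = {"logistic_regression", "scorecard", "interpretable_nn"}
SANCTIONED_VARIABLES = {"txn_amount", "txn_velocity", "merchant_category"}

def validate_model_spec(spec: dict) -> list:
    """Return a list of violations of the development standard."""
    violations = []
    if spec["architecture"] not in APPROVED_ARCHITECTURES:
        violations.append(f"unapproved architecture: {spec['architecture']}")
    for var in spec["variables"]:
        if var not in SANCTIONED_VARIABLES:
            violations.append(f"unsanctioned variable: {var}")
    if not spec.get("bias_test_passed", False):
        violations.append("bias testing not recorded")
    return violations

spec = {"architecture": "scorecard",
        "variables": ["txn_amount", "customer_age"],
        "bias_test_passed": True}
print(validate_model_spec(spec))  # -> ['unsanctioned variable: customer_age']
```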
7. Establish proper AI governance
Boards of Directors must understand and enforce AI governance based on four classic tenets of corporate governance: accountability, fairness, transparency, and responsibility.
- Accountability is achieved only when each decision made during the model development process is recorded in a way that cannot be altered or destroyed (a minimal sketch of one approach follows this list).
- Fairness requires that neither the model, nor the data it consumes, be biased.
- Transparency is necessary to adapt analytic models to rapidly changing environments without introducing bias.
- Responsibility is a heavy mantle to bear, but our societal climate underscores the need for companies to use AI technology with deep sensitivity to its impact.
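For the accountability tenet in particular, one lightweight way to make development decisions tamper-evident is a hash chain over the decision log, the same primitive behind blockchain-style audit ledgers. The sketch below is a standalone illustration with hypothetical field names, not a description of any production governance system.

```python
import hashlib, json, time

def record_decision(ledger, decision):
    """Append a decision; each entry hashes the previous entry's hash,
    so altering any earlier record breaks the chain detectably."""
    entry = {"decision": decision,
             "prev_hash": ledger[-1]["hash"] if ledger else "genesis",
             "timestamp": time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def verify(ledger):
    """Recompute every hash; return False if any entry was altered."""
    prev = "genesis"
    for e in ledger:
        body = {k: e[k] for k in ("decision", "prev_hash", "timestamp")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev_hash"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = []
record_decision(ledger, {"step": "variable selection",
                         "approved_by": "model governance board"})
record_decision(ledger, {"step": "bias testing", "result": "passed"})
print(verify(ledger))  # True; flips to False if any record is edited
```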
Boards that fail to embrace their responsibility to deliver safe and unbiased AI will be battered by regulation, a cornucopia of litigation, and powerful AI advocacy groups. Governance, not best intentions, is what keeps companies honest.
8. Evangelize responsibility
The way to succeed with AI is by evangelizing Responsible AI throughout your organization, and the AI evangelists on your team perform an important role here. These scientists can expertly simplify and communicate complex data science solutions for each audience, whether it’s hardcore analytic skeptics, internal stakeholders, customers, or partners. Without these customer-facing experts, machine learning and AI either remain science fiction concepts or, worse yet, are clumsily applied, which limits adoption and recognition of the technology’s benefits.
That’s it for my preview. See you at ODSC West, November 1–3 in San Francisco!
About Dr. Scott Zoldi
Scott Zoldi is Chief Analytics Officer at FICO, responsible for the analytic development of FICO’s product and technology solutions. While at FICO, Scott has authored more than 120 analytic patents, with 76 granted and 47 pending. He received his Ph.D. in theoretical and computational physics from Duke University.