Organizational Approaches to Enabling AI Governance

ODSC - Open Data Science
4 min read · Oct 7, 2022


Editor’s note: Ilana Golbin is a speaker for ODSC West 2022 this November. Be sure to check out her talk, “Emerging Approaches to AI Governance: Tech-Led vs Policy-Led,” there!

Consumers and practitioners alike have been calling for accountability and responsibility across AI system development and usage. High-profile AI failures, coupled with focused regulatory inquiry and the potential for legislative action, make adopting responsible AI practices a necessity for businesses. One capability widely floated as essential to enabling this responsibility is “governance.” Governance is not new to most organizations, and it can be a loaded term that means different things to different people; still, governance specific to AI systems and the data they use is likely to look both familiar and new to many organizations.

Some organizations with robust risk management functions already follow a “three lines of defense” model for delineating governance roles and responsibilities: first-line teams that develop systems and drive business as usual, second-line compliance teams that oversee standards and methodologies, and third-line teams that check the standards and methodologies are applied consistently and effectively, with no gaps. But this structure still leaves much to be desired as concrete instruction for organizations and interested practitioners alike.

Other organizations, many of which may be small and focused on specific offerings, may have no governance beyond what is legally mandated in the spaces where they operate. This does not necessarily mean these companies are poorly governed, simply that a formal structure may not have been needed until recently. Adopting practices beyond those legally mandated could require tradeoffs with other core, necessary parts of the business, including innovation and customization to customer needs.

How do organizations go about building meaningful governance? Broadly, there are two pathways. The first starts from the top (policy-led): establish principles to set the direction of the governance program, gain alignment, and then develop progressively more specific guidance in the form of policies, followed by standards and operating procedures. These capabilities tend to build on existing policies and structures for privacy, data governance, and software development quality control. In a sense, this approach defines the organizational structures first and builds consensus afterward.

Others may start from the practice itself (practice-led): the teams driving AI development and selecting use cases to build or buy align on the practices they want to see, then move those practices out (to other teams) and up (to more central organizational management). These practices are likely to follow the model development lifecycle (Figure 1) and rely on tactical activities like testing practices, checkpoints, and review requirements. This approach may leverage technical capabilities to automate specific tests (e.g., bias tests), as sketched below.

Figure 1: 9-Step Model Development Lifecycle (PwC, “Responsible AI: Maturing from Theory to Practice” whitepaper, 2021)
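To make the automation of specific tests concrete, here is a minimal sketch of how a practice-led checkpoint might gate a model on a bias test. The demographic parity metric, the 0.10 tolerance, and all names here are illustrative assumptions, not a prescribed standard or PwC’s methodology.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rates across sensitive groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Hypothetical checkpoint: block promotion if the gap exceeds the
# tolerance set in the team's governance standard (assumed value).
THRESHOLD = 0.10

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # binary predictions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])   # sensitive attribute

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > THRESHOLD:
    raise SystemExit("Checkpoint failed: bias test exceeds tolerance; route to review.")
```

In practice, a check like this would run automatically at an agreed checkpoint in the lifecycle, with failures routed to whatever review process the team has aligned on.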

Neither approach is perfect, and neither is a complete solution in itself. The challenge for the policy-led approach is that policies and standards may be so general that a given team struggles to make them specific enough to adopt without additional work. The practice-led approach may not reflect broader organizational values and ideals, and its practices may be viewed as over-optimized for a specific team and difficult to generalize.

In either case, companies have unique structures that may require a mix of federated and centralized capabilities, depending on how they operate today. Building on existing oversight structures, at any level, can reduce the change-management burden. Engaging a mix of stakeholders from different areas of the business helps identify practices that effectively mitigate potential risks while balancing the basic needs to operate and compete.

About the Author/ODSC West 2022 Speaker on AI Governance:

Ilana Golbin is a Director in PwC Labs (Emerging Tech & AI), where she serves as one of the leads for Artificial Intelligence. Ilana specializes in applying machine learning and simulation modeling to address client needs across sectors regarding strategic deployment of new services, operational efficiencies, geospatial analytics, explainability, and bias. Ilana is a Certified Ethical Emerging Technologist, is listed as one of 100 “Brilliant Women in AI Ethics” in 2020, and was recently recognized in Forbes as one of 15 leaders advancing Ethical AI. Since 2018, she has led PwC’s efforts globally in the development of cutting-edge approaches to building and deploying Responsible AI.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
