Responsible AI 2020: Expectations for the Year Ahead
In 2020, enabling responsible application of AI technologies is one of the field's foremost challenges as it transitions from research to practice. Increasingly, researchers and practitioners from disparate disciplines are highlighting the ethical and legal challenges posed by the use of AI in many current and future real-world applications. Additionally, leaders in academia, government, and industry are calling on technology creators to ensure that AI is used only in ways that benefit humanity and to build responsibility into the foundations of the technology. Overcoming these challenges is key to ensuring a future in which AI is widely accepted and trusted.
In this article, we'll look at the principles, best practices, and open-source tools centered on the responsible development and deployment of AI-driven systems in 2020.
Common Factors for Establishing Responsible AI
Advances in AI technology bring inherent challenges around trust and accountability. To address these effectively, organizations should understand the challenges and risks of AI and take them fully into account in design and deployment. Here are five key factors to consider when designing and deploying responsible AI systems:
- Governance — responsible AI rests on end-to-end enterprise governance. At a high level, governance for AI should enable an organization to answer important questions about the decision-making process of AI applications: identifying accountability; determining how AI aligns with business strategy; deciding which business processes could be modified to improve results; putting controls in place to track performance and locate problems; and verifying that results are consistent and reproducible.
- Ethics and regulation — the primary goal is to help organizations develop AI that is ethical and compliant with relevant regulations.
- Explainability — provide a vehicle for AI-driven decisions to be interpretable and easily explained to those affected by them (see the sketch after this list).
- Security — help organizations develop AI systems that are safe to use.
- Bias — address issues of bias and fairness so that organizations can build AI systems that mitigate unwanted bias and reach decisions whose fairness can be clearly communicated; a minimal fairness check is also sketched after this list.
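To make the explainability factor concrete, here is a minimal sketch using the open-source shap package to attribute a scikit-learn model's predictions to its input features. The data and the model are hypothetical, chosen only for illustration.

```python
# Minimal explainability sketch (hypothetical data and model).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # 500 cases, 4 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy binary decision

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features, giving a
# per-case answer to "which inputs drove this decision, and how much?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])     # per-feature contributions
```

Summaries of these per-feature contributions can then be shared with the people a decision affects, which is the heart of the explainability requirement above.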
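Likewise, a first-pass bias check can be as simple as comparing a model's selection rates across demographic groups. The sketch below computes a demographic-parity gap with plain NumPy; group and approved are hypothetical stand-ins for a protected attribute and a model's binary decisions.

```python
# Minimal fairness check: demographic parity (hypothetical data).
import numpy as np

group = np.array([0, 0, 0, 1, 1, 1, 1, 0])     # protected attribute (two groups)
approved = np.array([1, 1, 0, 1, 0, 0, 0, 1])  # model's binary decisions

rate_0 = approved[group == 0].mean()           # selection rate, group 0
rate_1 = approved[group == 1].mean()           # selection rate, group 1

# A gap of 0 means equal selection rates; a large gap flags decisions
# that may need mitigation before deployment.
print(f"selection rates: {rate_0:.2f} vs {rate_1:.2f}; gap: {abs(rate_0 - rate_1):.2f}")
```

Open-source toolkits such as Microsoft's Fairlearn and IBM's AIF360 package this metric and many others, along with mitigation algorithms, for production use.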
Industry Leaders Taking Charge
2020 will see a continued march of large industry players forging ahead with strategic plans for responsible AI. Microsoft, for example, has publicized its approach to responsible AI: six ethical principles to guide the development and use of AI with human beings at center stage, namely fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability. The company has also developed guidelines for responsible bots: principles for building conversational bots that create confidence in a company's products and services. Microsoft's Office of Responsible AI is tasked with putting these principles into practice.
In addition, Google’s public statement on its responsible AI practices indicates the company is addressing new questions about the best way to build fairness, interpretability, privacy, and security into AI systems.
Some industry titans are ready for increased scrutiny of how AI solutions are developed. Elon Musk, for instance, is calling for regulation of organizations developing advanced AI, including his own companies. The Tesla and SpaceX head tweeted on Feb. 17, 2020: "All orgs developing advanced AI should be regulated, including Tesla."
Limiting AI Applications
Efforts toward responsible AI also mean thorough consideration of how certain applications of AI are deployed. As one prominent example, there has been much public debate centered on the use of facial recognition software, which is powered by deep learning (specifically, convolutional neural networks). In 2020, we're seeing various levels of government, law enforcement agencies, and universities limit the use of facial recognition out of concern that it could introduce economic, racial, and gender bias.
For example, this concern has prompted proposed federal legislation such as the Facial Recognition Technology Warrant Act of 2019 (S.2878). If it becomes law, the act would require federal officials to obtain a warrant before using facial recognition technology to track a specific person's public movements for more than 72 hours.
In addition, California's AB 1215, the Body Camera Accountability Act, was signed into law by Governor Gavin Newsom in late 2019; it temporarily bars California law enforcement from adding facial recognition and other biometric surveillance technology to officer-worn body cameras.
Also, in February 2020, UCLA opted not to implement facial recognition technology on its campus after backlash from students.
Responsible AI Tools
Some leading-edge organizations are working on new tools to facilitate responsible AI efforts:
- AI Global offers the Responsible AI Portal, an authoritative repository combining reports, standards, models, government policies, open data sets, and open-source software to help navigate the AI landscape and connect directly with the experts who develop these tools.
- Element AI produces a timely podcast series, "The AI Element," that explores the biggest issues and toughest questions around trust and adoption of AI.
- PwC's Responsible AI Toolkit is a suite of customizable frameworks, tools, and processes designed to help harness the power of AI in an ethical and responsible manner, from strategy through execution. The Toolkit enables organizations to build high-quality, transparent, explainable, and ethical AI applications that generate trust and inspire confidence among employees, customers, business partners, and other stakeholders.
Conclusion
Given the accelerating adoption of AI across a broad swath of industries, responsible AI is becoming a critical business strategy. Essentially, if AI isn't responsible, it isn't truly intelligent. Stakeholders, including board members, customers, stockholders, and regulators, will have many questions about an organization's use of AI and data, from how it's developed to how it's governed. Moving forward, you not only need to be ready to provide the answers; you must also demonstrate ongoing governance and regulatory compliance.