Best Ethical AI Research for 2021

ODSC - Open Data Science
Oct 26, 2021

Ethical AI is a topic on many people's minds these days. AI that adheres to well-defined ethical guidelines regarding fundamental values, including unbiased decision-making, individual rights, privacy, non-discrimination, and non-manipulation, is a motivating factor for adopting this important technology. Ethical AI places central importance on principled considerations in determining the legitimate and illegitimate uses of AI. Organizations that employ ethical AI principles will have clearly stated policies and well-defined review processes to ensure those guidelines are followed.

The benefits of the ethical use of AI run deep. Ethical AI deployments can improve efficiency for organizations, yield energy-efficient products, mitigate harmful environmental impacts, and advance public safety and health. Unethical AI, however, can lead to nefarious results such as disinformation, deception, bias, abuse, harassment, and political instability.

Ethical AI is not limited to what is permitted by law. Legal restrictions pertaining to the use of AI set a minimum threshold of acceptability, while ethical AI establishes policies that go beyond legal requirements to ensure respect for fundamental human values. For instance, AI algorithms designed to manipulate people into addictive or self-destructive behavior (e.g., working to addict children to video games) may be legal, but they do not adhere to ethical AI.

Recently, we're seeing that existing laws and regulations are often insufficient to ensure the ethical use of AI. It's therefore important for organizations that use AI, as well as developers and providers of AI solutions and technology, to take proactive steps toward practicing ethical AI. This responsibility must be backed by specific policies that are actively enforced.

In this article, I provide a round-up of recent research papers dealing with ethical AI. The papers offer a useful view of what academia is doing across a variety of topics in this area. Ethical AI is a very hot topic right now and is on the minds of many enterprise stakeholders. Enjoy!

The State of AI Ethics Report (January 2021)

This paper constitutes the 3rd edition of the Montreal AI Ethics Institute’s report “The State of AI Ethics.” The report captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field’s ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more.

Actionable Approaches to Promote Ethical AI in Libraries

The widespread use of AI in many domains has revealed numerous ethical issues, from data and design to deployment. In response, countless broad principles and guidelines for ethical AI have been published, followed by specific approaches for how to encourage ethical outcomes of AI. Meanwhile, library and information services are also seeing an increase in the use of AI-powered and machine learning-powered information systems, but no practical guidance currently exists for libraries to plan for, evaluate, or audit the ethics of intended or deployed AI. This paper therefore reports on several promising approaches for promoting ethical AI that can be adapted from other contexts to AI-powered information services and applied at different stages of the software lifecycle.


Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers

ML and AI researchers play an important role in the ethics and governance of AI, including taking action against what they perceive to be unethical uses of AI. Nevertheless, this influential group's attitudes are not well understood, which undermines our ability to discern consensus or disagreement among AI/ML researchers. To examine these researchers' views, this paper reviews the results of a survey of those who published in the top AI/ML conferences. The results were compared with those from a 2016 survey of AI/ML researchers and a 2018 survey of the US public. Being closer to the technology itself, AI/ML researchers are well placed to highlight new risks and develop technical solutions, so this novel attempt to measure their attitudes has broad relevance. The findings should help to improve how researchers, private sector executives, and policymakers think about regulations, governance frameworks, guiding principles, and national and international governance strategies for AI.

Measuring Ethics in AI with AI: A Methodology and Dataset Construction

Recently, the use of sound measures and metrics in AI has become a subject of interest for academia, government, and industry. Efforts toward measuring different phenomena have gained traction in the AI community, as illustrated by the publication of several influential field reports and policy documents. These metrics are designed to help decision-makers stay informed about the fast-moving and far-reaching impact of key advances in AI in general and machine learning in particular. This paper proposes to use such newfound capabilities of AI technologies to augment our AI measuring capabilities. This is done by training a model to classify publications related to ethical issues and concerns. The methodology uses an expert, manually curated dataset as the training set and then evaluates a large set of research papers. Finally, the implications of AI metrics are highlighted, in particular their contribution toward developing trustworthy and fair AI-based tools and technologies.
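To make the described methodology more concrete, here is a minimal sketch of that kind of pipeline: a classifier trained on a small, expert-labeled set of abstracts and then applied to a larger unlabeled corpus. The scikit-learn model, the toy abstracts, and the labels below are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal, illustrative sketch (not the paper's actual pipeline): train a text
# classifier on a small expert-labeled set of abstracts, then apply it to a larger
# unlabeled corpus. The toy data, labels, and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Expert-curated training set: label 1 = addresses AI ethics, 0 = does not
curated_abstracts = [
    "We audit demographic bias in facial recognition systems.",
    "We propose a faster optimizer for training deep convolutional networks.",
    "This work examines fairness constraints in automated hiring decisions.",
    "A new benchmark for low-latency object detection on embedded hardware.",
]
curated_labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier
classifier = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
classifier.fit(curated_abstracts, curated_labels)

# Evaluate a larger set of (hypothetical) research paper abstracts
new_abstracts = [
    "Privacy risks of membership inference attacks on language models.",
    "Scaling laws for transformer models under fixed compute budgets.",
]
predictions = classifier.predict(new_abstracts)
print(predictions)  # flags which abstracts the model considers ethics-related
```

In practice, the curated training set would be far larger and the unlabeled corpus would be the large body of research papers the authors evaluate.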

AI-Ethics by Design. Evaluating Public Perception on the Importance of Ethical Design Principles of AI

Despite the immense societal importance of ethically designing AI, little research exists on public perceptions of ethical AI principles. This is even more striking considering that ethical AI development aims to be human-centric and to benefit society as a whole. This paper investigates how ethical principles (explainability, fairness, security, accountability, accuracy, privacy, machine autonomy) are weighted in comparison to each other. This is especially important, since simultaneously satisfying all ethical principles is not only costly but sometimes even impossible, as developers must make specific trade-off decisions.

Ethics as a service: a pragmatic operationalisation of AI Ethics

As the range of potential uses for AI, in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realization that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response, there has been a proliferation of principle-based ethics codes, guidelines, and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. This paper explores why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed Ethics as a Service.

AI and Ethics — Operationalizing Responsible AI

In the last few years, AI has continued to demonstrate its positive impact on society, though sometimes with ethically questionable consequences. Building and maintaining public trust in AI has been identified as key to successful and sustainable innovation. This paper discusses the challenges of operationalizing ethical AI principles and presents an integrated view that covers high-level ethical AI principles, the general notion of trust/trustworthiness, and product/process support in the context of responsible AI, which helps improve both trust in and the trustworthiness of AI for a wider set of stakeholders.

Time for AI (Ethics) Maturity Model Is Now

There appears to be common agreement that ethical concerns are of high importance when it comes to systems equipped with some sort of AI. Demands for ethical AI are declared from all directions. In response, public bodies, governments, and universities have rushed in recent years to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies are also publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from the software development perspective. The software engineering paradigm has introduced maturity model thinking, which provides a roadmap for companies to improve their performance on selected viewpoints known as key capabilities. The paper voices a call to action for the development of a maturity model for AI software, and discusses whether the focus of such a model should be on AI ethics specifically or, more broadly, on the quality of an AI system.

Signs for Ethical AI: A Route Towards Transparency

AI has recently risen to the point where it has a direct impact on the daily life of billions of people, the result of its application to sectors like finance, health, digital entertainment, transportation, security, and advertising. Today, AI fuels some of the most significant economic and research institutions in the world, and the impact of AI in the near future seems difficult to predict or even bound. In contrast to all this power, society remains mostly ignorant of the capabilities, requirements, and standard practices of AI today. Society is becoming aware of the dangers that come with that ignorance and is rightfully asking for solutions. To address this need, and to improve on current practices of interaction between people and AI systems, this paper proposes a transparency scheme to be implemented on any AI system open to the public. The scheme is based on two main pillars: Data Privacy and AI Transparency. The first recognizes the relevance of data for AI and is supported by GDPR, the most important legislation on the topic. The second considers aspects of AI transparency yet to be regulated: AI capacity, purpose, and source.
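As a rough illustration of what such a scheme covers, the sketch below represents a public-facing disclosure as a simple data structure organized around the two pillars. The class name, fields, and example values are hypothetical; the paper defines the scheme conceptually rather than as code.

```python
# Illustrative sketch only: one way a public-facing disclosure built on the two
# pillars (Data Privacy and AI Transparency) might be represented in code. The
# field names and rendering are assumptions, not the paper's specification.
from dataclasses import dataclass


@dataclass
class TransparencySign:
    # Pillar 1: Data Privacy (anchored in GDPR)
    collects_personal_data: bool
    gdpr_lawful_basis: str  # e.g. "consent" or "legitimate interest"
    # Pillar 2: AI Transparency (capacity, purpose, and source)
    capacity: str  # what the system can and cannot do
    purpose: str   # why the system is being used
    source: str    # who built and operates the model, and on what data

    def render(self) -> str:
        """Produce a short, human-readable disclosure for end users."""
        privacy = (
            f"personal data collected (lawful basis: {self.gdpr_lawful_basis})"
            if self.collects_personal_data
            else "no personal data collected"
        )
        return (
            f"Data Privacy: {privacy}\n"
            f"Capacity: {self.capacity}\n"
            f"Purpose: {self.purpose}\n"
            f"Source: {self.source}"
        )


# Example: a disclosure for a hypothetical movie-recommendation system
sign = TransparencySign(
    collects_personal_data=True,
    gdpr_lawful_basis="consent",
    capacity="Ranks titles from the catalog; does not infer sensitive traits",
    purpose="Personalize the home screen",
    source="Operated by the provider; trained on anonymized viewing history",
)
print(sign.render())
```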

How to Learn More about Ethical AI

Our upcoming event, ODSC West 2021, taking place this November 16th-18th in San Francisco, will feature a plethora of talks, workshops, and training sessions on ethical AI and ethical AI research. You can register now for 30% off all ticket types before the discount drops to 20% next week. Some highlighted sessions on responsible and ethical AI include:

  • Artificial Intelligence for Conservation and Sustainability: From the Local to the Global: Dave Thau, PhD | Data and Technology Global Lead Scientist | WWF
  • Responsible AI: From Principles to Practice: Tempest Van Schaik, PhD | Senior Machine Learning Biomedical Engineer | Microsoft CSE
  • Using AI to Overcome Bias & Make Hiring More Equitable: Ashutosh Garg, PhD | CEO and Founder | Eightfold.ai

Sessions on Machine Learning:

  • Towards More Energy-Efficient Neural Networks? Use Your Brain!: Olaf de Leeuw | Data Scientist | Dataworkz
  • Practical MLOps: Automation Journey: Evgenii Vinogradov, PhD | Head of DHW Development | YooMoney
  • Applications of Modern Survival Modeling with Python: Brian Kent, PhD | Data Scientist | Founder, The Crosstab Kite
  • Using Change Detection Algorithms for Detecting Anomalous Behavior in Large Systems: Veena Mendiratta, PhD | Adjunct Faculty, Network Reliability and Analytics Researcher | Northwestern University

Sessions on MLOps:

  • Tuning Hyperparameters with Reproducible Experiments: Milecia McGregor | Senior Software Engineer | Iterative
  • MLOps… From Model to Production: Filipa Peleja, PhD | Lead Data Scientist | Levi Strauss & Co
  • Operationalization of Models Developed and Deployed in Heterogeneous Platforms: Sourav Mazumder | Data Scientist, Thought Leader, AI & ML Operationalization Leader | IBM
  • Develop and Deploy a Machine Learning Pipeline in 45 Minutes with Ploomber: Eduardo Blancas | Data Scientist | Fidelity Investment

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform.
