ODSC West 2022 Keynote: Kay Firth-Butterfield on Responsible AI in Government and Business
As artificial intelligence grows in scale and encompasses more and more of our lives, what will it mean for both the technology and people worldwide as each nation looks to manage the ramifications of AI with its own regulatory framework? The EU, the USA, and the nations of the Asia-Pacific may each view the technology through a slightly different lens, depending on their level of advancement and cultural sensitivities.
Kay Firth-Butterfield, Head of AI & Machine Learning and member of the Executive Committee at the World Economic Forum, and a foremost expert in the emerging field of AI governance, spoke on this issue at length during ODSC West 2022. Throughout the keynote, she traced a line from the origins of responsible AI to its future possibilities.
She takes us back to 2014. Many may not have noticed, but quite a few things happened that year that are still being felt in the world of AI. In February, DeepMind was sold to Google, with the condition that an ethics advisory board be put in place to provide ethical oversight of their work. In May, leading scientists including the late Stephen Hawking, Max Tegmark, and Stuart Russell wrote an insightful article about AI in The Times of London, calling it either the best thing humanity will ever do or the very last, given the potential ramifications of the technology and the problems that were coming to light.
Finally, in 2014 Nick Bostrom released his book, Superintelligence: Paths, Dangers, Strategies. All of these events got the ball rolling, and in that same year, Kay Firth-Butterfield was appointed Chief AI Ethics Officer for a startup. She explained that the title was her own creation, as no one else held it at the time and such a position was new to the field. No one really knew what responsible AI was or how ethics could come into play for the technology. A year later, the IEEE began its work developing standards around responsible artificial intelligence, which culminated in 2017 at a meeting where 99 principles of responsible AI were put together.
What is responsible AI? Kay frames the definition around the following principles: bias, fairness, security and robustness, privacy, explainability and transparency, consideration of human agency, accountability, and the lawful creation of AI. For each of these principles, she explains what is meant, since each can be taken subjectively, and provides examples such as the AI Bill of Rights, released a few months ago, and controversial uses of AI.
From this point, she touches on both soft and hard law applications. In one example, she points to the use of algorithms in the hiring lifecycle; if biases aren't kept in check in the programming of the AI, then employers can find themselves running afoul of civil rights protections. Even though hard law applications work, Kay admits that the soft approach tends to be faster, more dynamic, and more adaptable when it comes to applying the principles of responsible AI.
This takes her to lethal autonomous weapons, better known to the public as Skynet. In pointing to the need for a unifying body of work and agreement on responsible AI, she explains that many international groups, NGOs, governing bodies, and others oppose the technology going in that direction. Yet they all have different views, principles, and approaches that compete with one another.
One can see this in how different nations weigh each principle and value in the laws governing AI. The EU and individual states within the USA treat privacy as an individual right, while China views privacy protection as tied more to business than to individual people. Together these governing bodies represent the largest markets for the technology and so will have the greatest influence. This matters because the first two are taking a human rights approach and the latter is not, which will shape how any international framework might look.
With that said, Kay Firth-Butterfield surveys different nations and their progress in creating their own laws governing the use of AI, from the EU, to the USA's AI Bill of Rights, and finally India. Each takes a different approach, some with a softer touch than others. But why is all of this so important? It's a great question asked during the keynote, and Kay has a straight answer. Engineers working on AI "are at the cutting edge" of a technology that everyone in the data science world, policy leaders, and just about every industry knows will have a long-lasting and sustained impact on the whole of humanity. It's on the level of the invention of the light bulb or the combustion engine.
Though it's still relatively new, the effects are already being felt, and we can only guess at the changes to our society as the technology continues to rapidly scale. This is why responsible AI is so important and has become a hot topic within the data science community. In conclusion, Kay Firth-Butterfield believes that the fragmented approach, with each major market coming up with its own methods for responsible AI, will continue for the foreseeable future.
If you found this keynote interesting, then you shouldn’t miss the next ODSC conference, ODSC East! Tickets are now 70% off for a limited time, so don’t delay!
Originally posted on OpenDataScience.com
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.