A Recap of Our Interview with Kay Firth-Butterfield on AI Governance
As newer fields emerge within data science, and while the research can still be hard to grasp, sometimes it’s best to talk to the experts and pioneers of the field. Recently, we spoke with Kay Firth-Butterfield, Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum, ahead of her upcoming ODSC West 2022 talk on AI governance. You can watch the full Lightning Interview here, and read a transcript of the first few questions with Kay Firth-Butterfield below.
Q: What is your definition of AI Governance?
Kay Firth-Butterfield: When I think about AI governance, I sort of think it’s the same thing as responsible AI. What we’re thinking about doing is making sure that the design, development, and use of artificial intelligence tools are done in the best possible ways, so that they have the best possible outcomes and the least possible negative outcomes.
What we have found since I started doing this all these years ago is that there are some applications that can have very negative outcomes and so then we think about governance. So there’s the responsible AI or the AI governance that goes on within the companies — or should go on within the companies — that are creating the products, and then on the other side there’s what can government do or what should government do.
Government has, in my mind, two methods of governance. The first is what I call soft governance mechanisms, and then obviously there’s hard governance, which is regulation: passing laws and enforcing laws.
One example of what I mean by soft governance is the US AI Bill of Rights. That’s soft governance because it has no teeth. There’s no enforcement procedure behind it, but it’s the U.S. government setting the stage and saying, “citizens, businesses, the world, this is how we think about responsible AI.”
Another example is something that we did when I first moved to the Forum: we collaborated with the UK government to work on the procurement of artificial intelligence. As you know, a lot of money flows through governments to procure artificial intelligence, so if you’re a company, you will have to elevate your responsible AI to the point of being able to sell to governments.
If governments set responsible AI targets through the procurement process, then the government is actually doing two things. One, it’s elevating anybody who might sell to them, but it’s also saying to the general population and actors within their state, “this is our tolerance level, and if we were to go on and regulate, this is where it would be.” That’s actually been really successful, and not just in the UK; many countries around the world have taken what we started with the UK and adjusted it for their own purposes.
Q: What red or green flags should you look for when discussing AI governance with a company during an interview?
Kay Firth-Butterfield: Different companies do it in different ways. Some of them have a big responsible AI piece that sits in the middle; if you have worries, you ask it questions, and as they hear about things that you’re doing, they ask questions of you. Many companies will have whole lists of things to consider when starting a project so that you don’t fall foul of their responsible AI practices.
Other companies put their responsible AI work into, for example, government affairs, or don’t have it tied to the people who are actually creating the tools. That would be a red flag for me, because if you really want to do this properly, you are going to want a conduit between you and whoever’s in the responsible AI practice.
Another thing that you might look at, depending on your appetite for income, is that some companies pay bonuses based on how fast you do the work or how fast the product goes out of the door. That’s a bit of a red flag for me, because if you’re being bonused to produce something by X time, have you got time in that runway to actually go and talk to the responsible AI people and say, “whoops, you know, we might have a problem here”? You need a company where the bonus structure and the responsible AI structure actually meet, rather than being disjointed in this way.
More on Kay Firth-Butterfield’s session on AI Governance at ODSC West:
AI is ever more ubiquitous in our lives, but not all countries are created equal in their access to or use of AI. Likewise, not all countries and businesses adhere to the same regulatory frameworks or opinions on governance. Yet all companies would benefit from knowing where they stand so that investment in technology is not ultimately wasted. At the same time, access to AI is being used as a geopolitical tool. What lessons can we draw and adopt now, and how might this thinking mature into the future?
Originally posted on OpenDataScience.com
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.