Toby Walsh on AI and Ethics
I get a call from the media almost every day to help them understand some new development in Artificial Intelligence (AI). And most often of all, they want to talk about “AI and Ethics.” Some time ago, amid all these calls from the media, I had a moment of revelation. I was taking calls from journalists about Google’s demo of Duplex. I’m sure you’ve heard it. If not, take a listen here.
I was very impressed. A computer can have a simple conversation, book a haircut or a restaurant, and umm and ah just like a human. Many journalists were understandably concerned about computers pretending to be humans. There are plenty of Hollywood movies to suggest this could end badly.
This is when I had my moment of revelation. Most of this is just old-fashioned bad behaviour.
Knocking on someone’s door and pretending to be someone you’re not is bad behaviour. Any company that employed people to do just that would be behaving badly. So having a computer metaphorically knock on your door and pretend to be someone else is also bad behaviour.
A colleague working on ethics at Google told me that management was advised to start the demo with a disclaimer warning the caller that it was a computer. But management didn’t listen. They were more concerned about impressing the audience. That itself should be concerning. But more fundamentally, Google’s Duplex is designed to be deceptive. Why umm and ah like a human unless you want to fool people?
As a second example of AI and ethics, take Facebook and Cambridge Analytica. Much of the media attention has been on Facebook helping Cambridge Analytica to steal private information. And stealing private information is, of course, bad behaviour. But there’s another, less discussed side to the Cambridge Analytica story: this stolen information was then used to manipulate how people vote.
In fact, Facebook had two dedicated employees working full time in the Cambridge Analytica offices in Tucson, Arizona helping Cambridge Analytica buy adverts to manipulate the vote. Cambridge Analytica was one of Facebook’s best customers during the Presidential election. You can watch a BBC documentary where this was revealed on camera here.
It’s hard to understand then why Facebook’s executives sounded so surprised when they testified before Congress about what happened. They were a very active player in manipulating the vote. And manipulating voters has been bad behaviour for thousands of years, ever since the ancient Greeks. We don’t need any new ethics to decide this.
Technology can let us behave badly faster, more cheaply, and more easily. But most of it is just good old-fashioned bad behaviour. Much of the discussion of AI and ethics is a distraction from holding to account the people and corporations who are using technology to behave badly. We need to hold the technocrats to account, to ensure they uphold the values of our society.
If you want to hear more about how AI can go wrong, and how we can fix this, please attend my upcoming talk at ODSC APAC, “AI and Ethics.”
About the author/ODSC speaker: Toby Walsh is a Laureate Fellow and Scientia Professor of AI at the University of New South Wales and Data61. He was named by The Australian newspaper as one of the “rock stars” of Australia’s digital revolution. He is a Fellow of the Australian Academy of Science and recipient of the NSW Premier’s Prize for Excellence in Engineering and ICT. He appears regularly on TV and radio, and has authored two books on AI for a general audience, the most recent entitled “2062: The World that AI Made”. His books are available in Arabic, Chinese, English, German, Korean, Polish, Romanian, Russian, Turkish, and Vietnamese. “2062” has just been published in India by Speaking Tiger Books.