Reportingonsuicide.cisco.com: Interview with Team Member Dr. Annie Ying

ODSC - Open Data Science
8 min read · Aug 26, 2020


Last year, I established Cisco’s Data Science and AI for Good initiative as a channel for Cisconians to give back pro bono, using their professional expertise to help nonprofits make the world a better place through data and analytics. Almost a year later, in collaboration with Save.org, Reportingonsuicide.org, and The Erika Legacy Foundation, we’re excited to release https://reportingonsuicide.cisco.com/.

Of all the causes in the world, you may wonder why we chose suicide reporting. On the surface, reporting, blogging, posting on social media, or talking about death by suicide may not seem like a life-saving or life-endangering task. Yet HOW an individual’s death by suicide is reported and discussed is a contributing factor in whether that death will be followed by further tragedy. So much so that the World Health Organization (WHO) established media adoption of Reporting on Suicide Guidelines as one of seven priority areas for suicide prevention. The WHO pamphlet for media reporting on suicide is publicly available here.

These life-saving guidelines, available on reportingonsuicide.org, outline how each of us can take a proactive role in suicide prevention by changing the way we write and speak about the subject. Looking at the bigger picture, when Dr. Annie Ying and I began this effort, we realized that we could benefit all other suicide prevention efforts by helping to de-stigmatize mental illness through language. Once we understood how the language we ourselves had unknowingly used surrounding suicide contributed to the stigmatization of mental illness, it became clear that democratizing the suicide reporting guidelines isn’t just about the media; it’s about raising awareness and understanding of this global health problem.

While none of the brilliant minds behind this effort did so for the recognition (it was a labor of love that consumed many of their nights and weekends), they exemplify how each of us can make a difference. On that note, it’s my honor to interview my partner in good, Dr. Annie Ying, the technical leader behind the effort.

J: With all the demands on your time, including a newborn baby, what prompted you to take on this cause?

A: This is a really important topic globally, and hearing what you went through after Erika, a close friend of yours, died by suicide made it clear what an important opportunity this is to change people’s lives. The fact that Erika lived in Vancouver, where my family and the Cisco AI Lab are, felt like an additional calling for us to get involved.

J: Data science and AI have the potential to do immense harm as well as good. Racist algorithms, for example, are getting much-needed attention right now. What’re your thoughts on the topic and how we can ensure data science doesn’t worsen people’s lives?

A: This is a topic that keeps me up at night. Techniques like e-mail monitoring and surveilling the population without consent can have horrifying unintended consequences. I’m relieved that in the couple of years since the Facebook Cambridge Analytica data scandal made the news, recent protests have finally brought the ethical use of data to the forefront of public discussion, and I’m happy to see the increase in awareness. In that sense, I’m hopeful.

I think it’s important for citizens to have literacy surrounding AI. An AI-literate population can hold companies and governments more accountable for how these technologies are used. Because these technologies are developed by a handful of companies, those companies have a huge say in where we’re going. For example, Amazon recently stated that it won’t provide an image recognition service to the police. That’s a level of power that companies have and citizens don’t. Without AI Literacy, we’re vulnerable. That’s one reason why I organized a meetup group two years ago in Vancouver called “Data Science for Social Good.” I started by giving a talk on demystifying AI to help others develop AI Literacy, and as the group has grown, I’ve continued to do so to support my local community. As a society, I believe we would all benefit from AI Literacy, not just the experts.

J: Why is AI Literacy so important?

A: For-profit companies have to keep their shareholders in mind when they make decisions. As a scientist, I’m very curious, and scientists in academia are driven by publishing papers. So on one side you have capitalism and on the other scientific curiosity, but neither necessarily aligns with society’s interests. As a citizen, having AI Literacy is the first step in understanding what’s going on with these technologies so you can voice your concerns. Just like with politics, if we don’t understand the issues, it’s very hard to make our interests heard. With literacy, we can understand the issues and fight for our rights. That’s the basis of democracy: counting on people being literate so that they’re well informed and can speak out and vote accordingly. When people are uninformed, democracy doesn’t function as well.

J: What do you see as utopian and dystopian outcomes of AI?

A: I see a utopian outcome as one where AI works with humans, and that goes back to AI Literacy, so fear doesn’t prevent AI from helping people. I recently watched a documentary on the industrial revolution that showed how much fear people had about machines taking their jobs, but if you look at that period, more jobs were created than were lost. I think it’s a similar case with AI. Whether we’re afraid or not, the world is moving that way, so it’s in everyone’s best interest to know how to work with these new technologies.

J: What do you recommend to someone who wants to use AI and data science for good like we’re doing for suicide prevention?

A: I’m very pleased about the collaboration between the technologists and psychologists on reportingonsuicide.cisco.com. As scientists, we worked with domain experts to help the users, journalists, achieve their goal of minimizing the contagion effect (additional deaths) that can result from suicide reporting. The tool acts in a supporting role, helping journalists find guidelines they may have missed, but doesn’t try to play the role of a journalist. The decision of how to author a piece that aligns with the guidelines is still the journalist’s. I’m also very pleased that we developed this tool along with, and based on, the guidelines developed by the suicide reporting experts, as opposed to coming up with guidelines ourselves. At the end of the day, we have a tool that is backed by the wisdom of research experts and plays a supportive role for journalists, bloggers, and anyone else who wants to save lives by changing the way we write and speak about suicide.
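As a purely illustrative sketch (the actual implementation of reportingonsuicide.cisco.com is not described here), a tool in this kind of supporting role might flag discouraged phrasing and suggest guideline-aligned alternatives, while leaving every editorial decision to the journalist. The phrase pairs below reflect common public guidance (e.g., preferring “died by suicide” over “committed suicide”), but the function name and data structure are hypothetical:

```python
# Hypothetical sketch of a guideline-flagging helper. It highlights
# discouraged phrasing and suggests alternatives, but never rewrites
# the text itself -- the journalist stays in control.
# The phrase pairs are illustrative examples, not the tool's actual rules.
DISCOURAGED = {
    "committed suicide": "died by suicide",
    "successful suicide": "died by suicide",
    "failed attempt": "suicide attempt",
}

def flag_phrases(text: str) -> list:
    """Return guideline flags found in `text`, with suggested wording."""
    flags = []
    lowered = text.lower()
    for phrase, suggestion in DISCOURAGED.items():
        start = lowered.find(phrase)
        while start != -1:
            flags.append({"phrase": phrase, "at": start, "suggest": suggestion})
            start = lowered.find(phrase, start + 1)
    return flags

for f in flag_phrases("The article said he committed suicide last week."):
    print(f"Consider replacing '{f['phrase']}' with '{f['suggest']}'.")
```

The key design choice echoed in the interview: the helper only surfaces recommendations and their rationale; it never makes changes for the user.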

J: How important is it that the reportingonsuicide.cisco.com tool transparently supports the users through education and recommendations instead of making changes for the user?

A: Maybe this is my personal opinion, but that’s the type of technology I prefer. I don’t even have devices that can listen in my home, because even if you understand how the technology works, you don’t know what’s being done with your data. There are other people who say, “I’m not doing anything embarrassing or criminal, so I don’t mind a device capturing my conversations.” But I think about it in terms of privacy, just like we close the door when we go to the bathroom. I prefer not to give my data to companies if they’re not transparent about how they’re using it now and could use it in the future.

J: In the movie Anon, Amanda Seyfried’s character says, “It’s not that I have something to hide. I have nothing I want you to see.” Is that your perspective as well?

A: Yes, that’s a great way to put it. I see that a lot of other people don’t value their privacy as much as I do. But it’s a trade-off between the cost of our privacy and the benefit we get in return, right? It’s not like I’m completely off the grid. A year ago, I used cash whenever possible because credit cards collect data on your purchasing habits, but since COVID-19 I’ve started using credit cards more often because they’re easier to keep clean than cash. Again, it’s a trade-off: I’m giving up some privacy in return for being exposed to fewer germs. And that’s why I keep coming back to AI Literacy. A utopian future would include people having more transparency and control over their data. It’s important to understand the trade-offs we’re making. It’s not just about making better technology; at the end of the day, is technology making a big difference in people’s quality of life? For me, quality of life includes being able to make an informed decision about how much privacy you give away in return for certain benefits, and being able to opt out based on your preferences. It’s nice to see GDPR in Europe and how mindful they are on that front.

J: How would you like to see GDPR evolve in the future?

A: GDPR is very focused on the data today. In the future, it would be great to see it focus on technology as a whole. It’s not just about the data that’s collected about you because that data can be embedded into machine learning models; it’s about visibility into how models use our data.

J: What advice do you have for someone who wants to get started doing AI and data science for good?

A: In Vancouver, I’ve been involved with an organization called Data Science for Good that organizes datathons anyone can participate in. There are many local and national groups like this that people can volunteer with. You can also call a foundation you’re interested in helping, but keep in mind that there’s a lot of pre-work in setting up the problem statement before data scientists can be productive; I find that pre-work is one of the hardest parts of data science. If you’re just starting out, it may be easiest to reach out to organizations or people who are already doing data science and AI for good. But it’s also a personal choice. I don’t have a specific cause that matters most to me, and some people do. For example, a friend of mine specifically wanted to work on the problem of overdose. I had met someone in a coworking space who worked for an overdose-prevention nonprofit, introduced him to my friend, and she started volunteering for that organization. So it really depends on the individual and whether you have a cause that’s important to you.

Editor’s note: Jennifer is a speaker for ODSC Europe 2020 this September. Check out her talk, “Data Science for Suicide Prevention,” then!

About the author: Jennifer Redmon joined Cisco in 2009 and serves as its Chief Data Evangelist. Her organization enables an insight-driven culture through globally scaled data products, services, and community enablement. In response to the shortage of data and analytical talent in the marketplace, her team has upskilled over 3,000 employees to date in the areas of data science, artificial intelligence, data storytelling, and data engineering. By hosting virtual and physical events, including AI/data science competitions and symposiums, as well as always-on collaboration platforms, her organization interconnects and fosters a thriving federated community of practitioners who drive innovation across functions and geographies. Jennifer holds an international MBA from Duke University with a concentration in Strategy and a Bachelor’s in Economics and Art History from UC Davis.
