The Implications of Emotional AI in the Legal World

ODSC - Open Data Science
5 min read · Sep 16, 2022


Artificial intelligence (AI) has been shaking up the tech world for a while now, and people keep exploring what more it can do. Emotional AI, also known as affective computing or artificial emotional intelligence, is one example. It allows machines to detect people’s feelings.

Law enforcement agencies are starting to use AI to detect and prevent crimes. What might the legal implications be if they use the technology to analyze how people feel?

The Legal Ramifications Would Vary by Use

Some emotion-detecting AI applications are relatively harmless. In 2018, news broke of a system used in call centers that detected tired representatives or irate customers by analyzing their voices. It then provided workers with tips or motivation. It might encourage them to take a break or use a gentler tone.
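
To make the idea concrete, here is a minimal sketch of the kind of rule layer such a tool might place on top of a voice-analysis model. The VoiceReading scores, the coaching_tip function, and every threshold below are hypothetical illustrations for this article, not details of any real call-center product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VoiceReading:
    """Scores a hypothetical voice-analysis model might emit for a short audio window."""
    fatigue: float     # 0.0 (alert) to 1.0 (exhausted): the representative's estimated tiredness
    irritation: float  # 0.0 (calm) to 1.0 (irate): the customer's estimated frustration

def coaching_tip(reading: VoiceReading) -> Optional[str]:
    """Map model scores to the kind of nudge described above.

    The thresholds and wording are illustrative assumptions only.
    """
    if reading.fatigue > 0.8:
        return "You've been on calls for a while; consider taking a short break."
    if reading.irritation > 0.7:
        return "The caller sounds frustrated; try a slower pace and a gentler tone."
    return None  # no intervention suggested

# Example: a tired representative handling a calm caller
print(coaching_tip(VoiceReading(fatigue=0.85, irritation=0.2)))
```

Even in this toy version, the hard part is the model that produces the scores, not the rules on top of it; the sketch only shows where the "tips or motivation" would hook in.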

More recently, a company began offering a system that uses emotional AI and other advanced technologies to evaluate customers’ engagement levels during sales calls. A salesperson could notice when a prospect seems excited or bored at different stages of a pitch and adjust accordingly.

Those examples don’t cover all the possibilities, though. Some law enforcement agencies apply AI as a crime-fighting measure. If the algorithms make mistakes when identifying someone as a suspect, the results could be life-changing, and not in a positive way.

The often-cited black box problem associated with AI refers to how we usually don’t know how algorithms reach conclusions. A person can’t work backward to see each factor that went into the eventual decision made by the machine.

That’s significant when using the technology to help decide whether someone committed a crime. If the AI incorrectly influences the people who make judicial decisions, a wrongly accused person could go to prison. In contrast, the harm is arguably minimal if emotional AI makes a mistake during a sales call.

Current Uses for AI in Crime Detection

Despite the potential issues, researchers are still interested in how emotional AI might lead to a safer future. A team at Incheon National University introduced a system that uses the 5G network to detect how people feel based on wireless signals and their body movements. The researchers noted that such an application could protect the public.

They gave an example of an emotionally unstable individual operating a car. They explained that the AI could pick up on that situation and send signals to nearby 5G towers so pedestrians are warned via their smart devices to avoid the area.

Similar applications may also prevent property theft or damage, such as if emotional AI were part of cameras that monitor streets. A smart camera might first detect that a person appears to be casing a neighborhood or business district. It could then pick up on emotions that suggest the person is dangerous, and finally draw on historical data to estimate the likelihood of theft in that location. For example, considering that around 1,200 catalytic converter thefts occur every month, the AI may determine that a suspicious individual lingering around neighborhood vehicles is likely trying to steal a catalytic converter and could alert local law enforcement accordingly.
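
As a rough illustration of how those steps might be wired together, here is a small sketch of a decision layer combining the three cues above. Everything in it, including the SceneSignals fields, the weights, and the alert threshold, is an assumption made for this example rather than a description of any deployed system.

```python
from typing import NamedTuple

class SceneSignals(NamedTuple):
    """Hypothetical per-person outputs from upstream camera models."""
    loitering_minutes: float    # how long the person has lingered near parked vehicles
    agitation_score: float      # 0.0 to 1.0, an assumed estimate of the person's emotional state
    monthly_thefts_nearby: int  # historical catalytic converter thefts recorded for the area

def theft_risk(signals: SceneSignals) -> float:
    """Blend the three cues into a rough 0-to-1 risk score.

    The weights and scaling are invented for illustration; a real system
    would need calibrated models and a way to audit each factor.
    """
    loiter = min(signals.loitering_minutes / 10.0, 1.0)        # saturate at 10 minutes
    history = min(signals.monthly_thefts_nearby / 100.0, 1.0)  # saturate at 100 local thefts per month
    return 0.4 * loiter + 0.4 * signals.agitation_score + 0.2 * history

def should_alert(signals: SceneSignals, threshold: float = 0.7) -> bool:
    """Decide whether to notify local law enforcement."""
    return theft_risk(signals) >= threshold

# Example: someone lingering for eight minutes with a high agitation score in a theft hot spot
print(should_alert(SceneSignals(loitering_minutes=8, agitation_score=0.9, monthly_thefts_nearby=60)))
```

The hand-picked weights are exactly the kind of opaque judgment call the black box discussion above warns about: a person flagged by such an alert has no obvious way to see how much the "emotion" estimate contributed to the decision.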

Researchers at Monash University in Malaysia are working on AI algorithms that detect weapons like knives and handguns. The algorithms go further, though, and recognize when people are using those items aggressively. Such instances trigger real-time alerts to the area’s police and ambulance services. There’s no emotional AI component in this system yet, but it’s easy to see how the technology might progress to that point.

Room for Improvement

Algorithms that recognize people’s faces are already common, and there are valid reasons for using them, such as identity verification and touchless access control. Some tech companies have tried adding features so these systems can also detect emotions. Microsoft was one brand that took that route.

However, a recent blog post from Natasha Crampton, Microsoft’s chief responsible AI officer, confirmed the company is retiring the features in its Azure Face tool that detect people’s emotions. Crampton said experts within and outside the company noted a lack of scientific consensus on what constitutes an emotion. They also took issue with how efforts to infer such information often produce broad generalizations.

Scientists’ concerns are not new. In 2019, researchers examined more than 1,000 published studies on emotions and facial movements and argued that any technology that tries to read feelings from how the face moves will fall short. They said a smile or scowl can convey more than one feeling at once, and that its meaning differs with the situation and with someone’s culture.

More recently, another team came to a similar conclusion: facial expressions alone are not enough to determine how someone is feeling, and context plays a tremendous role. They gave the example of a cropped photo of a red-faced man, mouth open in an apparent scream. The immediately visible clues suggest he’s angry, but the unaltered image shows he’s overjoyed that his sports team just scored a goal.

Humans don’t always correctly interpret emotions or the reasons for them, either. A woman might worry her spouse seems angry because of an argument they had the night before, when in fact she is mislabeling a different emotion as anger and misreading the reason for it.

Algorithms do get better over time. A 2006 study found that the algorithms of that era were 100 times more accurate than those from 1995. Even so, emotional AI is still too risky to use for crime detection or prevention.

Emotional AI Influencing Legal Outcomes Is Concerning

People who investigate and solve crimes rely on hard evidence as much as possible. There are trusted, widely used ways to analyze fingerprints, blood, and other physical clues. Even so, many individuals go to prison for crimes they didn’t commit.

That could become an even more prevalent issue since researchers assert that tech can’t accurately detect emotions or always explain its conclusions. That inaccuracy and ambiguity make emotional AI highly inappropriate for crime-solving at this time.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
