August 1, 2022

Eliminating bias in AI systems may require a “human-in-the-loop” solution.

Artificial intelligence is a staple of digital transformation in business, with 86% of companies viewing AI as a “mainstream technology” within their organizations.

But the vast majority of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. That bias has been shown to stem from the same biases, implicit or otherwise, that affect human decision-making.

AI systems face a particular challenge in recognizing human emotions – in part because humans do, too.

For Gloria Washington, an IEEE Member, the risk of bias in intelligent systems underscores the need for more empathetic AI.  

Why We Need Empathic Computing

“The promise of AI is that it’ll help to make better business decisions, improve patient outcomes in healthcare, automate repetitive or mundane tasks for humans to free up their time to work on more challenging problems – and so much more,” said Washington.

Unfortunately, Washington says, we have already seen instances where biased AI systems have caused harm to humans.

Similarly, AI systems that deploy biased algorithms can further marginalize individuals or communities based on the physical, psychological or behavioral characteristics used in identity or emotion recognition.

For example, an automated system that helps a major corporation screen candidates for interviews could have bias built into it. If the underlying algorithm were trained on the corporation’s hiring decisions from the past decade, and those decisions were made largely by men exhibiting affinity bias, the system would place women at a disadvantage in the screening process, perpetuating a gender gap in hiring.
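To make that concrete, here is a minimal sketch, using entirely synthetic data and a deliberately naive scoring rule (nothing here reflects any real corporation’s system), of how a screening model trained on biased hiring labels reproduces that bias:

```python
# Hypothetical sketch: a screening "model" trained on biased historical
# hiring decisions reproduces the bias. All data below is synthetic.
import random

random.seed(0)

# Simulate a decade of hiring decisions made with affinity bias:
# candidates are equally skilled on average, but men are hired far
# more often at the same skill level.
history = []
for _ in range(10_000):
    gender = random.choice(["M", "F"])
    skill = random.random()                              # 0..1
    hire_prob = skill * (0.9 if gender == "M" else 0.4)  # the biased decision
    history.append((gender, skill, random.random() < hire_prob))

# A naive screening score: the historical hire rate of similar
# past candidates (here, grouped only by gender).
def hire_rate(gender):
    outcomes = [hired for g, _, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(f"learned score for men:   {hire_rate('M'):.2f}")   # ~0.45
print(f"learned score for women: {hire_rate('F'):.2f}")   # ~0.20
# The candidates were equally skilled, yet women score far lower:
# the affinity bias in the training labels becomes the model's "knowledge".
```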

As AI systems continue to proliferate, addressing bias in intelligent systems directly is critical.

The Opportunities Before Us

Washington believes that one of the best ways to address AI bias is to incorporate a human-in-the-loop approach: humans verify predictions and offer context on the potential impacts of a particular prediction on a community or on individuals.

“An empathetic system is one that learns from various experts, deployed as part of a human-in-the-loop approach, to improve the models underlying a system,” said Washington. “And an expert is someone who, through lived experiences or a deep understanding of real world situations and impacts, is able to leverage multimodal data – pictures, text and metrics – to show what good predictions look like to help better inform an AI tool.”
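In practice, a human-in-the-loop gate might route any prediction that is low-confidence or high-impact to an expert before the system acts on it. The sketch below is illustrative only; the thresholds and field names are assumptions, not details of any deployed system:

```python
# Hypothetical human-in-the-loop gate: predictions that are
# low-confidence or high-impact go to a human expert instead of
# being executed automatically.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's own confidence, 0..1
    impact: str        # "low" = routine task, "high" = affects a person

CONFIDENCE_FLOOR = 0.90  # illustrative threshold

def route(pred: Prediction) -> str:
    """Decide whether a prediction may execute or needs human review."""
    if pred.impact == "high" or pred.confidence < CONFIDENCE_FLOOR:
        # The expert's verdict can also be fed back to improve the model,
        # which is the learning loop Washington describes.
        return "human_review"
    return "auto_execute"

print(route(Prediction("approve_loan", 0.97, "high")))  # human_review
print(route(Prediction("sort_email", 0.95, "low")))     # auto_execute
print(route(Prediction("sort_email", 0.60, "low")))     # human_review
```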

One area of study aims to increase empathy in healthcare by examining the language that providers use in their notes and electronic health records for signs of bias. The system can then coach providers toward more inclusive language.
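A toy version of such a coach, assuming a simple pattern-matching approach (a production system would rely on trained language models), might look like the following; the suggested replacements follow common person-first-language guidance:

```python
# Minimal sketch of a note-screening coach. The phrase list and
# suggestions are illustrative examples of person-first language,
# not the contents of any real system.
import re

STIGMATIZING = {
    r"\bdrug abuser\b": "person with a substance use disorder",
    r"\bnon-?compliant\b": "declined / was unable to follow the plan",
    r"\bfrequent flyer\b": "patient with frequent visits",
}

def coach(note: str) -> list[str]:
    """Return inclusive-language suggestions for a clinical note."""
    tips = []
    for pattern, suggestion in STIGMATIZING.items():
        for match in re.finditer(pattern, note, flags=re.IGNORECASE):
            tips.append(f"Consider replacing '{match.group(0)}' "
                        f"with '{suggestion}'.")
    return tips

note = "Patient is a drug abuser and has been non-compliant with meds."
for tip in coach(note):
    print(tip)
```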

Appropriately planning for all scenarios and potential outcomes is a challenge, according to Washington. However, she believes it is possible to bring experts from various fields together to codify the information needed to better understand affect, empathy, biometrics, and human behavior.

What Are Affective Biometrics?

Questions of empathy weigh heavily on the field of affective biometrics, which is sometimes referred to as emotional computing.

Affect is often shown in the human body’s physical reaction to emotion.

“We typically associate this with facial expressions, but affect, within the context of biometrics, also encompasses body language, physical gestures, and even the human autonomic system through measurable data focusing on things like heart rate, blood pressure and even sweating,” Washington said.

But affect is inherently cultural. What reads as a happy reaction in one culture may be interpreted as agitation in another.

Washington says that affective biometrics can analyze emotions, such as frustration or anger, but they should be used alongside contextual information to identify how a person may behave based on learned responses to a particular situation or scenario.
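As an illustration of how context can flip an interpretation, here is a deliberately simplified sketch. It is not Washington’s actual system; every signal, threshold and label below is an assumption:

```python
# Illustrative sketch: fusing a facial-expression cue with an
# autonomic signal (heart rate), then letting context shift the
# interpretation of the same physical reaction.
def infer_affect(expression: str, heart_rate: int, context: str) -> str:
    """Combine modalities with context to label a likely affective state."""
    aroused = heart_rate > 100  # elevated autonomic response (assumed cutoff)
    if expression == "smile" and aroused:
        # The same raised heart rate reads differently by situation:
        return "excited" if context == "celebration" else "nervous"
    if expression == "frown" and aroused:
        return "frustrated"
    return "calm"

print(infer_affect("smile", 115, "celebration"))    # excited
print(infer_affect("smile", 115, "job_interview"))  # nervous
```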

“Context is important for understanding affect,” said Washington. “Codifying context can be difficult – it can be challenging to codify the ‘when, where, and how’ a human will use a tool. However, by bringing experts from various disciplines into the fold, we can help to mitigate the harmful effects of bias in systems.”

According to Washington, we could also change the way that AI tools are leveraged.

“AI tools can certainly automate mundane tasks for us,” said Washington. “However, we can choose to leave more important tasks, particularly those that have powerful real world impacts on people, to humans. Additionally, where AI is leveraged, there can be procedural steps built into systems where, prior to the execution of a task, digital tools will communicate in some way the impact of an action on the group or individual in question.”
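A minimal sketch of that last idea, assuming a simple callback design (the function names here are hypothetical), might look like this:

```python
# Hypothetical pre-execution gate: the tool states the impact of an
# action and runs it only after a human approves.
from typing import Callable

def guarded_execute(action: Callable[[], None],
                    impact_statement: str,
                    approve: Callable[[str], bool]) -> None:
    """Describe an action's impact, then run it only on human approval."""
    if approve(impact_statement):
        action()
    else:
        print("Action withheld pending further review.")

guarded_execute(
    action=lambda: print("Application rejected."),
    impact_statement="This will deny one applicant access to housing.",
    approve=lambda msg: input(f"{msg} Proceed? [y/N] ").lower() == "y",
)
```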

At the end of the day, AI, like so many other technologies, is a tool to be wielded by humans.

And, as is often the case, to address human problems, such as bias, the best solutions are ones involving humans.

Learn more about Washington’s work at the Affective Biometrics Lab at Howard University.

Learn More: 

Cultural norms and affect sit at the heart of ethically designed artificial intelligence systems. If you’d like to learn more about how IEEE members are addressing these questions, read Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, or get involved with one of the working groups addressing these questions for future technologists.
