Emotional AI: Addressing Challenges
Artificial intelligence (AI) that claims to interpret human emotions has the potential to enhance user experience, yet concerns over misuse and bias loom large, making the field fraught with risk.
For instance, my neighbors might mistake my intense conversation for a dramatic call to an ex-partner or an acting exercise, but I’m actually testing a new demo from Hume, a Manhattan startup pioneering “the world’s first voice AI with emotional intelligence.”
Hume asserts its technology can discern emotions from our voices and, in a non-public version, from facial expressions, and respond empathetically, according to The Guardian. With OpenAI’s recent release of the more ‘emotive’ GPT-4o in May, emotional AI is becoming big business. Hume raised $50 million in its latest funding round, and the industry’s value is projected to exceed $50 billion this year.
However, Prof. Andrew McStay of Bangor University’s Emotional AI Lab questions these forecasts, arguing that the significance of technology that understands and reacts to human emotions in natural ways far outstrips any monetary valuation.
Potential applications range from enhancing video games and improving helplines to Orwellian surveillance and widespread emotional manipulation. Yet, the fundamental question persists: Can AI accurately interpret human emotions, and how should society navigate its inevitable integration?
“I appreciate your kind words, I’m here to support you,” replies Hume’s Empathic Voice Interface (EVI) in a remarkably human-like voice, while analyzing my declaration of love: scoring 1 (out of 1) for “love,” 0.642 for “adoration,” and 0.601 for “romance.”
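To make those numbers concrete, here is a minimal sketch of how a client might rank per-emotion scores like the ones EVI reports; the JSON payload shape and field names are hypothetical illustrations, not Hume’s actual API.

```python
import json

# Hypothetical payload shaped like the scores quoted above; the field
# names and structure are illustrative, not Hume's actual API response.
payload = json.loads("""
{
  "utterance": "I love you",
  "emotion_scores": {
    "love": 1.0,
    "adoration": 0.642,
    "romance": 0.601,
    "anger": 0.012
  }
}
""")

def top_emotions(scores: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k highest-scoring emotion labels."""
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

print(top_emotions(payload["emotion_scores"]))
# [('love', 1.0), ('adoration', 0.642), ('romance', 0.601)]
```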
While my lack of negative emotions might suggest poor acting on my part, it appears the model prioritizes my words over my tone. According to Hume’s CEO, Alan Cowen, the model struggles with situations it has not encountered before: it understands tone, but not nuanced expressions such as saying “I love you” in an unexpected context.
Cowen clarifies that Hume’s focus is on overt expressions, and that the EVI is responsive and naturalistic in genuine interactions. However, questions remain about how AI will handle less straightforward behavior.
Earlier this year, Prof. Matt Coler and his team at the University of Groningen trained an AI using data from sitcoms to recognize sarcasm, aiming to improve human-machine interactions by decoding linguistic nuances like irony and politeness.
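Sarcasm detection of this kind is usually framed as binary text classification. The sketch below is a toy text-only baseline using scikit-learn; the example utterances, labels, and model choice are assumptions for illustration, not the Groningen team’s actual setup, which also draws on how lines are spoken.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled utterances (1 = sarcastic, 0 = literal); a real system would
# train on thousands of annotated sitcom lines, ideally with audio cues
# such as pitch and tempo alongside the text.
utterances = [
    "Oh great, another meeting. Exactly what I needed today.",
    "Thanks for the help, I really appreciate it.",
    "Wow, you broke the build again. Impressive work.",
    "The train arrives at nine, so let's meet at the station.",
]
labels = [1, 0, 1, 0]

# Text-only baseline: bag-of-words features plus logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, labels)

print(model.predict(["Sure, take your time. It's not like we're in a hurry."]))
```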
Yet, emotional AI faces a fundamental challenge: defining and interpreting emotions. “There is no agreed-upon baseline definition,” explains McStay, highlighting the diversity of psychological perspectives on emotional expression.
Moreover, AI bias poses a significant risk. “Your algorithms are only as good as your training data,” warns Prof. Lisa Feldman Barrett, noting biases that AI may perpetuate, such as attributing negative emotions disproportionately to black faces, potentially impacting areas like recruitment and healthcare.
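Barrett’s warning can be probed, at least crudely, by auditing a model’s error rates across demographic groups. The sketch below assumes a hypothetical emotion classifier and a demographically annotated evaluation set; the group names and records are illustrative.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# In a real audit these would come from a held-out, demographically
# annotated test set run through the emotion classifier under review.
records = [
    ("group_a", "neutral", "angry"),
    ("group_a", "neutral", "neutral"),
    ("group_b", "neutral", "neutral"),
    ("group_b", "neutral", "neutral"),
]

# Measure how often a genuinely neutral face is misread as "angry", per group.
totals, false_anger = defaultdict(int), defaultdict(int)
for group, true_label, predicted in records:
    if true_label == "neutral":
        totals[group] += 1
        if predicted == "angry":
            false_anger[group] += 1

for group in sorted(totals):
    rate = false_anger[group] / totals[group]
    print(f"{group}: false-anger rate on neutral faces = {rate:.0%}")
```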
Recognizing these risks, the European Union’s AI Act, approved in May 2024, restricts AI from manipulating human behavior and bans emotion recognition in spaces such as workplaces and schools. However, it allows identifying emotional expressions without inferring an individual’s emotional state, leaving room for potential misuse.
While emotional AI holds promise for business and creative sectors, concerns about pseudoscience and ethical implications persist. As research progresses, the balance between innovation and safeguarding human rights remains a critical consideration.