Emotional AI is rapidly becoming a big business, with startups like Hume raising millions of dollars in funding and the industry’s value predicted to reach over $50 billion this year. But can AI really understand and respond to human emotions? And if so, how should we handle it?

While some applications of emotional AI, such as better video games and less frustrating helplines, seem promising, others, like Orwell-worthy surveillance and mass emotional manipulation, are downright concerning. As experts like Prof Andrew McStay and Lisa Feldman Barrett point out, emotions are complex and multifaceted, and AI systems that claim to understand them often rely on flawed assumptions and biased data. Moreover, the lack of a clear definition of what emotions are and how they are expressed makes it difficult to design AI systems that can accurately detect and respond to them. And then there’s the issue of bias, with some emotional AIs disproportionately attributing negative emotions to certain groups of people.

As we move forward with emotional AI, it’s essential that we prioritize ethical considerations and ensure that these technologies are designed and used in ways that benefit humanity, not harm it.

Emotional Intelligence in AI
“Emotion is such a fundamental dimension of human life that if you could understand, gauge and react to emotion in natural ways, that has implications that will far exceed $50bn.”