What Is Emotion AI and Why Should You Care?

Recently I had the opportunity to attend the inaugural Emotion AI Conference, organized by Seth Grimes, a leading analyst and business consultant in the areas of natural language processing (NLP), text analytics, sentiment analysis, and their business applications. (Seth also organized the first Text Analytics Summit 15 years ago, which I also had the privilege to attend.) The conference was attended by about 70 people (including presenters and panelists) from industry and academia in the US, Canada, and Europe.

Given the conference topic, what is emotion AI, why is it relevant, and what do you need to know about it? Read on to find out, but first, some background.

Emotions are a cornerstone of human intelligence and decision making

We humans are highly emotional beings, and emotions impact everything we do, even if we are not, for the most part, aware of it. They guide our attention; they impact what and how we learn and remember, and how we perceive ourselves and others; and ultimately they shape how we grow as individuals and who we become. As Yann LeCun, Chief AI Scientist at Facebook, said: "It is impossible to have intelligence without emotions" (quoted in Daniel McDuff's presentation).

Emotions are highly personal, yet also social. Emotional responses in others, in general and in reaction to our own actions, are some of the first things we learn as infants. They are also the reason why we love storytelling, and why stories are so effective for learning, for influencing and inspiring others, and for instigating action and change. In a well-constructed story, the plot (its narrative arc) is closely intertwined with the characters' emotional evolution (the emotional arc), forming the double helix of narrative + emotion. And what are history, politics, and news if not collections of stories, true or otherwise, and the emotions therein?

We continue to be governed by emotions in subtle and not-so-subtle ways throughout our lives. As conference presenter Diana Lucaci put it: "People say what they think and act on how they feel."

It is not surprising, then, that neuroscience research and emotional design have for years been staples in marketing and advertising and in product, service, and website design. Emotions nudge you, the user, to click on a link and buy a product, or to follow the emotional breadcrumbs through a website, just as its designer intended.

Emotion artificial intelligence seeks to understand, replicate, and simulate human emotions

In the field of artificial intelligence (AI), researchers and practitioners have likewise spent years looking into ways to mimic and explore human emotions. The field was born when MIT Media Lab professor Rosalind Picard published an article entitled "Affective Computing." It gave rise to a new discipline by the same name, from which emotion artificial intelligence spun off. The goal of emotion AI? To understand, replicate, and simulate human emotions in and by machines.

Affective computing and emotion AI incorporate many technologies and application areas. The Affective Computing group at the MIT Media Lab, for example, "aims to bridge the gap between human emotions and computational technology." Its projects range from "finding new ways to forecast and prevent depression before there are any clear outward signs of it; to inventing ways to help people with special needs who face communication, motivation, and emotion regulation challenges; to enabling robots and computers to receive natural emotional feedback and improve human experiences." And this is just scratching the surface; there are many more projects, applications, and use cases on the group's website.

Sentiment analysis goes multi-modal

One of the areas of emotion AI is sentiment analysis, a field that has existed since at least the early 2000s. Sentiment analysis is usually conducted on textual data, be it emails, chats, social media posts, or survey responses. It uses NLP, computational linguistics, and text analytics to infer the positive or negative attitude (aka "orientation") of the text's writer: do they say good or bad things about your brand and your products or services?
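
To make that concrete, here is a deliberately simple sketch of the lexicon-based end of sentiment analysis. The word lists and scoring rule are invented for illustration (production systems use large curated lexicons or trained models), but the idea of inferring orientation from text is the same:

```python
# Minimal lexicon-based sentiment sketch. The word lists and weights are
# illustrative only; real systems use large curated lexicons or trained models.
POSITIVE = {"good", "great", "love", "excellent", "fast", "helpful"}
NEGATIVE = {"bad", "terrible", "hate", "slow", "broken", "useless"}
NEGATORS = {"not", "never", "no"}

def polarity(text):
    """Return a score in [-1, 1]: above 0 is positive, below 0 is negative."""
    tokens = text.lower().split()
    score, hits = 0, 0
    for i, tok in enumerate(tokens):
        value = (tok in POSITIVE) - (tok in NEGATIVE)
        if value:
            # Flip polarity if the previous token negates it ("not good").
            if i > 0 and tokens[i - 1] in NEGATORS:
                value = -value
            score += value
            hits += 1
    return score / hits if hits else 0.0

print(polarity("The support team was helpful and fast"))    # positive
print(polarity("The app is not good and it is often slow"))  # negative
```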

The obvious applications of sentiment analysis have been brand/reputation management (especially on social media), recommender systems, content-based filtering, semantic search, understanding user/consumer opinions and needs to inform product design, triaging customer complaints, and so on.

Several of the conference presentations were devoted to this topic, which, despite all the recent progress in NLP and related fields, is still hard, not least because there is little agreement among researchers on even what constitutes basic human emotions and how many of them there are, said Bing Liu, Professor of Computer Science at the University of Illinois at Chicago.

Emotions are also notoriously hard to identify and code (label), since they are ambiguous, shifting, overlapping, and adjacent. For example, one can feel anger, sadness, and disgust at the same time, and emotions are not always easy to pin down. Yet clear, unambiguous labels are important: AI, or at least the 70% of it that is known as supervised learning, depends on data that has been tagged ("annotated" or "labeled") by humans. That is how machines learn. (Hence "supervised.")
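
As a hedged illustration of what that dependence on human-labeled data looks like in code, here is a toy classifier trained on a few hand-labeled examples; the texts, labels, and scikit-learn setup are mine, not from the conference, and the labels are exactly the kind of subjective judgment calls described above:

```python
# Toy supervised emotion classifier: the hand-labeled examples below stand in
# for the large human-annotated corpora such models actually require.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't believe they lost my order again",
    "This is the best service I have ever had",
    "I'm worried the refund will never arrive",
    "Thank you so much, that fixed everything",
]
labels = ["anger", "joy", "fear", "joy"]  # human annotations (often disputed)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)  # the machine learns only what the labels teach it

print(model.predict(["they lost my package and nobody answers"]))
```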

Then there is the complexity of how emotions are conveyed, explained Professor Liu. In speech, emotions are communicated through a broad range of linguistic and paralinguistic cues, such as intonation, facial expressions, body movements, gestures and posture, and bio-physical signals (sweating, skin flushing, etc.). In writing, they are signaled by punctuation, capitalization, emoticons, and other creative devices, for example word lengthening ("soooo slow" or "so sloooow"). And that is in addition to word choice and grammar!
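
Some of those written cues are easy to operationalize. The sketch below, with patterns I chose purely for illustration, shows how word lengthening, all-caps "shouting," punctuation, and emoticons might be extracted as features before any model sees the text:

```python
import re

def emotion_cues(text):
    """Count simple surface cues that often signal emotion in writing."""
    return {
        # "soooo" / "sloooow": any letter repeated three or more times
        "elongated_words": len(re.findall(r"\b\w*(\w)\1{2,}\w*\b", text)),
        # Shouting: whole words in capitals (two or more letters)
        "all_caps_words": len(re.findall(r"\b[A-Z]{2,}\b", text)),
        "exclamations": text.count("!"),
        "question_marks": text.count("?"),
        # A few common emoticons; real emoticon lexicons are much larger
        "emoticons": len(re.findall(r"[:;]-?[()DP]", text)),
    }

print(emotion_cues("This is soooo slow!!! WHY is it still loading :("))
```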

There are also cultural differences in how people convey emotions. To complicate things further, there is a phenomenon known as the "cognitive gap": what people say and how they truly feel do not always match, for multiple reasons. They may be trying to be polite or to avoid hurting other people's feelings, or they may be trying to keep their emotions to themselves.

Professor Liu said that context and multi-modal data may be helpful to resolve many such ambiguities. And in fact, with the advancement of biometrics and wearables, the field has expanded to analyzing emotions from sensor data, which includes heart rate, temperature, brain waves, blood flow, and muscle bio-signals, as well as voice, facial expressions, images, and video.
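
To give a sense of what "multi-modal" can mean in practice, here is a minimal late-fusion sketch that stacks a text-derived polarity score with crudely normalized wearable signals into one feature vector. The feature choices and normalizations are assumptions made for illustration, not anything presented at the conference:

```python
import numpy as np

def hr_arousal(heart_rate_bpm, resting_bpm=65.0):
    """Map heart rate to a crude 0..1 arousal proxy relative to a resting rate."""
    return float(np.clip((heart_rate_bpm - resting_bpm) / 60.0, 0.0, 1.0))

def fuse(text_polarity, heart_rate_bpm, skin_temp_c):
    """Late fusion: stack per-modality features into one vector for a classifier."""
    return np.array([
        text_polarity,                # from an NLP model, in [-1, 1]
        hr_arousal(heart_rate_bpm),   # from a wearable
        (skin_temp_c - 33.0) / 4.0,   # crude normalization of skin temperature
    ])

features = fuse(text_polarity=-0.6, heart_rate_bpm=92, skin_temp_c=34.2)
print(features)  # this vector would feed a classifier trained on labeled episodes
```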

This trend of leveraging sensors will continue, predicted Pamela Pavliscak, a strategist and "tech emotionographer" who also teaches at Pratt Institute, with the possible exception of facial recognition technologies (FRTs) and haptic/touch data. FRTs have recently come under fire because of privacy concerns. (Some of my previous notes on this topic can be found here.) And touch data is a "no-go" for an obvious reason: the COVID-19 pandemic.

Can wearables be used to identify emotions?

A meta-analysis by Przemysław Kazienko of Wroclaw University of Science and Technology focused on wearables and tried to answer the following question: "Can they be used to identify emotions in everyday life?" If we could do that, we could, for example, improve the health, well-being, and clinical outcomes of patients suffering from diseases that have mood-altering effects. (The example he used was kidney dysfunction.)

We could also use wearables for stress control, mental health, and autism support. One device developed at the MIT Media Lab monitors a person's heartbeat to detect whether they are experiencing negative emotions such as stress, pain, or frustration, and releases a scent to help the wearer cope.
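
I do not know that device's actual algorithm, but the general shape of such a wearable's logic might look something like the sketch below: watch a rolling window of heart-rate readings and trigger a coping intervention when the rate stays well above the wearer's baseline. Everything here, from the threshold to the window size, is an assumption for illustration:

```python
from collections import deque

class StressMonitor:
    """Flag sustained heart-rate elevation above a personal baseline."""

    def __init__(self, baseline_bpm, threshold_bpm=15.0, window=30):
        self.baseline = baseline_bpm
        self.threshold = threshold_bpm
        self.samples = deque(maxlen=window)  # last `window` readings

    def add_reading(self, bpm):
        """Return True once sustained elevation suggests stress or negative affect."""
        self.samples.append(bpm)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet
        mean_bpm = sum(self.samples) / len(self.samples)
        return mean_bpm - self.baseline > self.threshold

monitor = StressMonitor(baseline_bpm=64)
for bpm in [70, 72, 80] + [85] * 30:
    if monitor.add_reading(bpm):
        print("sustained elevation detected: release scent / prompt a coping action")
        break
```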

And of course, we could use emotions detected from wearables for "good old" personalization and product/service improvement: from online content and product recommendations to virtual assistants and gaming experiences. (Although I am concerned about the privacy implications of several of the use cases Professor Kazienko mentioned.)

Emotion data from wearable devices can also be used to prevent car accidents (when the driver gets drowsy, for example), to track students' attention and hence academic success, and to improve social interactions, among several other things, he said.

Ms. Pavliscak illustrated the latter with a project by Lauren Lee McCarthy and Kyle McDonald, "a Google Hangout video chat app that uses audio, facial expression, and linguistic analysis to optimize conversations." The app analyzes what users say and whether they use common vocabulary and sentence structures, which we humans tend to do as a conversation unfolds. (This is known as "linguistic style matching.") For each of the chat participants, the app displays a quick visualization and pop-up notifications, for example: "Stop talking about yourself so much" or "What are you hiding? Clare is speaking much more honestly." It can even automute a participant "when the conversation gets out of balance."
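
For a rough idea of what linguistic style matching can be computed from, here is a simplified sketch that compares two speakers' function-word usage rates. The real measure is computed per word category and averaged, and the app's actual method is surely more sophisticated, so treat this as an assumption-laden toy:

```python
# Simplified linguistic style matching: compare function-word usage rates.
# Real LSM scores are computed per word category (pronouns, articles, ...)
# and averaged; this collapses everything into one rate for brevity.
FUNCTION_WORDS = {"i", "you", "we", "the", "a", "and", "but", "so",
                  "of", "in", "that", "it"}

def usage_rate(utterances):
    """Share of tokens that are function words, across all utterances."""
    tokens = " ".join(utterances).lower().split()
    if not tokens:
        return 0.0
    return sum(t in FUNCTION_WORDS for t in tokens) / len(tokens)

def style_match(speaker_a, speaker_b):
    """1.0 = identical function-word rates; lower values = diverging styles."""
    a, b = usage_rate(speaker_a), usage_rate(speaker_b)
    return 1.0 - abs(a - b) / (a + b + 1e-9)

print(style_match(["I think we should ship it"],
                  ["And I agree that we should"]))
```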

Image: accessed May 12, 2020.

There were many more fascinating applications of emotional data and emotion analysis in Ms. Pavliscak's keynote, "Design for an Emotionally Intelligent Future." The conference recordings are available on the conference site for $200, with a discounted price of $100 for additional viewers from your organization, and I highly recommend that you watch them.

Going back to the question, "Can wearables be used to identify emotions in everyday life?", Professor Kazienko's conclusion is: "Emotion recognition with wearables is the future of personalized affective computing." However, he continued, we need more field studies, more and better data, active collaboration between researchers, a common model for classifying emotions, data and code sharing, and several other things to improve recognition quality and reproducibility.

Using emotions to make AI sound (and act like a) human

As more of our experiences and daily lives are mediated by artificial intelligence, we increasingly want AI to look, "feel," sound, and (re)act like a human. Perhaps because humans are in general less frightening than machines. (Or are they?) Or maybe because our imagination is constrained by our own physiological characteristics, similar to the Sapir-Whorf hypothesis, which postulates that the structure of a language affects how its speakers perceive the world. (Add emotions to the mix, and it becomes quite complex very quickly.)

Or maybe because we humans like to remodel the world in our image and to our liking. Have you seen Boston Dynamics' Atlas robot? In all his mechanical glory, he is sort of alright, no? But some of the company's other creations, on the other hand, are rather unsettling…

Now think about how you would like Atlas to sound if he could talk. Obviously, he can't (yet), but we increasingly interact with chatbots and virtual assistants. And if we could imbue machines with empathy, they could provide a better consumer and brand experience, and in doing so build familiarity, stronger bonds, and deeper trust, said Greg Hedges, Chief Experience Officer at a consultancy that creates "emotionally-intelligent voice experiences" for brands such as Nike, Starbucks, Tiffany, Tide, and Sesame Street.

From chatbots to "emotional chatting machines"?

Since potentially significant business benefits can be gained from emotionally aware AI, it is perhaps not surprising that "emotionally intelligent" chatbots are a hot research area. Professor Liu, mentioned above, is working "to create chatbots that can perceive and express emotions and learn continually during conversations." He calls them "Emotional Chatting Machines."

"Emotional intelligence is a vital aspect of human intelligence," he said; moreover, research shows that emotion in dialogue systems can enhance user satisfaction. Chatbots that express empathy decrease user frustration and stress. They lead to fewer breakdowns in dialogues. And they also inspire people to cooperate rather than rage about "stupid machines," he continued.

"Systems that mimic human style are more natural to interact with," said Daniel McDuff, Principal 91制片厂er at Microsoft. His team is building embodied agents, among other things. (See the last section for details on his other work.) We humans constantly adapt to each other, and this adaptation creates social cohesion. When people interact with a virtual reality agent over time, they similarly try to adapt to its style, he said. And so virtual agents and chatbots will need to adapt to humans.

Emotional chatbots in business could mean more satisfied customers and citizens, and perhaps lower costs, as chatbots take care of most inquiries and free human operators for less common or more complex interactions, especially in these times of crisis. More than two months after the WHO declared the novel coronavirus disease a global pandemic, telephone lines at banks, insurance companies, major retailers, and government offices remain jammed. IBM, which offered its Watson Assistant to help governments deploy chatbots, saw a surge in traffic to its chatbot platform. Google promptly launched its own offering to "[q]uickly build and implement a customized Contact Center AI virtual agent to respond to questions your customers have due to COVID-19 over chat, voice, and social channels."

Not yet…

But beware vendor promises. Chatbots in general require extensive training and/or scripting, and so the costs may not be significantly lower, depending on organization size, the number, type, and complexity of use cases, and other parameters.

"Emotionally aware" chatbots add another layer of complexity and challenges to chatbot technologies. Here are a few mentioned by Professor Liu:

  • Emotion-labeled data that such emotionally aware chatbots and VAs rely on is hard to obtain at the scale needed to train the machines.
  • Annotations (i.e. labels), where available, are subjective, and classification may be inaccurate. And with AI, just like with other data-dependent applications, it is "garbage in, garbage out."
  • An emotionally enabled bot must also balance understanding the emotions of the speaker it is conversing with against generating its own emotional and linguistic responses in return. This is really hard to do because there is a clear dependency between the two processes, and because they need to happen almost simultaneously. (See the sketch after this list.)
  • All this becomes even harder in what is known as the "open domain," unconstrained by bot use cases, application areas, industry, etc.
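
Here is the sketch promised above: a deliberately scripted, rule-based toy that first detects the user's emotion and then conditions the reply on it. Every keyword and template is invented for illustration, and this is much closer to the deployed rule-based bots Professor Liu describes below than to a learned "emotional chatting machine":

```python
import re

# Toy emotion-aware reply selection: detect the user's emotion, then condition
# the response on it. Keywords and templates are invented for illustration.
EMOTION_KEYWORDS = {
    "anger":   {"furious", "unacceptable", "ridiculous", "angry"},
    "sadness": {"disappointed", "upset", "sad"},
    "fear":    {"worried", "afraid", "scared"},
}

TEMPLATES = {
    "anger":   "I'm sorry this has been so frustrating. Let me escalate it right away.",
    "sadness": "I'm sorry to hear that. Let's see what we can do to make it right.",
    "fear":    "I understand the concern. Here is exactly what happens next.",
    "neutral": "Thanks for reaching out. Here is what I found.",
}

def detect_emotion(utterance):
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def reply(utterance):
    # Step 1 (understanding) feeds directly into step 2 (generation).
    return TEMPLATES[detect_emotion(utterance)]

print(reply("This is ridiculous, I have been waiting for two weeks"))
```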

Professor Liu's takeaway on the state of emotionally aware chatbots: "Chatting with emotions is vital for dialogue systems," but there are massive quality issues, and because of that, the technology is not ready for prime time.

He said there are examples of deployed chatbots with emotions, but such bots are mainly rule-based and scripted, so there is no real intelligence there. These bots are also domain-constrained: they are typically used in narrow cases such as customer service or as emotional companions. To create more human-like "emotional chatting machines," he said, we will need to get better at multi-modal emotion detection and generation (using the sensor data mentioned earlier).

Emotion-inspired machine learning

The most intriguing session at the conference was the keynote by Daniel McDuff, Principal 91制片厂er at Microsoft, who asked: "How can machine learning leverage emotions to learn and explore?" He spoke about "visceral machines," the idea that machine learning/AI systems should have emotional mechanisms similar to those of humans, or at least be able to model emotions, for the reason mentioned above: we want technology to interact with us humans as if it were human.

But there is another reason. Emotions, said Mr. McDuff, help us understand and explore the world. They are fundamental to understanding what it means to take risks, to achieve positive outcomes, and so on. As we interact with the world, he continued, we receive positive or negative responses that guide our further actions. What if we could incorporate emotions into machine learning systems to help inform them (and to improve their performance)?

One of the examples he showed involved using drivers' heartbeats to guide machine learning. (Heart rate is one of the ways we express and experience emotions. Others, at the biological level, include changes in body temperature; pupil and blood vessel dilation/constriction; changes in blood flow, breathing, and brain waves; and increased or decreased production of saliva, hormones, and digestive enzymes.) Mr. McDuff's team used drivers' emotional responses, as expressed through their heartbeats, in training a neural network model to drive a car (in addition to other data, I presume). The result? The model was able to drive a vehicle longer than the state-of-the-art model without emotions.
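
My hedged reading of the general idea, not Mr. McDuff's actual implementation, is something like reward shaping: penalize the learning agent for states that a model trained on real drivers' heart-rate data predicts would be viscerally alarming. A sketch, with a hypothetical placeholder standing in for that trained model:

```python
# Sketch: shape a reinforcement-learning reward with a predicted "visceral"
# response. predict_arousal() is a hypothetical placeholder standing in for a
# model trained on real drivers' heart-rate data.

def predict_arousal(state):
    """Placeholder 0..1 estimate of how alarming the driving state would feel."""
    return min(1.0, state["speed"] / 40.0 + 1.0 / max(state["distance_to_obstacle"], 1.0))

def shaped_reward(task_reward, state, visceral_weight=0.5):
    """Penalize states predicted to make a human driver's heart race."""
    return task_reward - visceral_weight * predict_arousal(state)

# The agent earns less reward for progress made while tailgating an obstacle.
print(shaped_reward(task_reward=1.0, state={"speed": 30.0, "distance_to_obstacle": 2.0}))
print(shaped_reward(task_reward=1.0, state={"speed": 30.0, "distance_to_obstacle": 50.0}))
```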

Image: Daniel McDuff, "Emotion Inspired (Machine) Learning," keynote at The Emotion AI Conference, May 5, 2020.

His team also looked at applying emotions (or rather, human physiological response data) to teach machines to avoid car crashes, in this particular case using human drivers' facial expressions. It seems that this worked too. Their next project looks to "combine various emotional response signals, risk-aversity and curiosity to nudge machines to explore more and do so in a safe way."

Our take

In Russian there is a proverb which, loosely translated, says that when we meet a person, we judge them by their dress, and when they leave, we judge them by their intelligence. (It is roughly equivalent in English to "You only get one chance to make a first impression" and "You can't judge a book by its cover.") Interestingly enough, and this is perhaps one of the rare occasions when ancient folk wisdom does not ring true, people do not usually remember what you wore or said or did, but they do remember how you made them feel: in other words, the emotions you stirred within them.

Emotions impact all aspects of our intelligence and behavior, at the individual and group level. In aggregate, they determine the behavior of markets, social cohesion, and the health of local and global economies and nations, and they are among the richest signals we have about ourselves, too.

So, can we take emotions, this fundamental quality that makes us human, decode them, and re-code them into machines? Well, look at language and speech technologies, or computer vision: significant successes have been achieved in the past decade as a result of big data, cheap compute power, powerful hardware, rapidly improving algorithms, and collaboration around open-source software.

However, if the history of AI is any indication, we may never be able to create truly emotionally aware machines. We just do not understand emotions, or intelligence, well enough. What started 60 years ago with an ambitious goal to simulate human intelligence, a goal the field's creators thought was within reach, evolved into something more narrow and brittle: applied AI (or narrow AI), as the field is now known.

Does this mean that emotion AI is too hard, and that we should just give up? No. Despite their more limited scope, many narrow AI applications generate significant business benefits and human value, and emotion AI has the potential to do that, too, as The Emotion AI Conference illustrated, as long as we are aware of the challenges the field is facing and keep our expectations rational. But that would be so unlike us humans: to be rational, that is.

Next steps

To learn more about emotion AI, check out our related research, or give us a call to learn more about this field and how to get started with AI and machine learning in the context of your organization.
