Emotion recognition technology in the financial sector – Curse or blessing?

Emotion detection and recognition is a hot topic in the tech industry. It could enable companies to react to their customers' emotional states, for example by hindering or fostering impulse purchases, changing the tone of voice in customer service, or identifying product functions that are extremely frustrating to use. For instance, virtual assistants like Siri could detect when people are screaming furiously and respond more gently – if that does not fan the flames. In general, the emotion detection and recognition market is huge and growing rapidly: in 2012 it was estimated to be worth $12 billion, and some expect it to reach $90 billion by 2024.

How does emotion recognition technology work?

Based on the analysis of voice and facial expressions in videos, audio, or images, machine-learning algorithms try to predict a person's current emotional state. These days, this is often done with supervised deep-learning algorithms (mostly convolutional neural networks) that are trained beforehand on large sets of manually labeled data. The labeling is done by human raters who assess which emotion they perceive as most prevalent in a given image or piece of audio. The analysis is often limited to the so-called “basic emotions” (happiness, sadness, fear, anger, surprise, and disgust), which are believed to be universal and identifiable by all humans independent of their culture.
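To make this pipeline concrete, here is a minimal sketch of such a supervised classifier in PyTorch. It assumes 48×48 grayscale face crops (roughly the format of the public FER2013 dataset) labeled by human raters with one of the six basic emotions; the architecture and training step are illustrative assumptions, not any vendor's actual model.

```python
# Minimal sketch of a supervised emotion classifier: a small CNN over
# 48x48 grayscale face crops, trained on manually labeled data. Purely
# illustrative; real systems add face detection, alignment, and far
# deeper architectures.
import torch
import torch.nn as nn

BASIC_EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

class EmotionCNN(nn.Module):
    def __init__(self, num_classes=len(BASIC_EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EmotionCNN()
# One training step on a dummy batch standing in for rater-labeled images.
images = torch.randn(8, 1, 48, 48)                     # batch of face crops
labels = torch.randint(0, len(BASIC_EMOTIONS), (8,))   # human-rater labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```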

How is emotion recognition technology used in the financial sector? 

Personal finances are an emotional topic for many people. Studies have shown that a person's emotional state has a significant influence on their ability to make wise financial decisions. This makes the technology interesting for banks and financial institutions that want to build services around their customers' needs and feelings. One of the first movers in this domain was the Swiss bank UBS, which partnered with an emotion recognition software company in 2016 to assess customers' wealth management preferences in a pilot study. However, the software was never adopted, despite the enthusiastic statement of UBS' chief investment officer, who dreamt of identifying his customers' “subliminal desires”. Rosbank, a Russian bank whose majority shareholder is Societe Generale, decided to use emotion recognition software in its call centers to calculate a “customer satisfaction index” in real time. This is supposed to help operators identify the most critical issues, but it can also be used as a KPI for call center employees (a hypothetical sketch of such an index follows below). Moreover, WeSee AI offers emotion detection and recognition software to detect insurance fraud. The company promises to assess the validity of claims “more significantly and accurately than ever before” by automatically evaluating people's emotions. Overall, companies in the financial sector seem to like the idea of using emotion recognition technology. But how reliable is the technology at present? In the following, we will assess its maturity from a research perspective.
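Before turning to that question, the sketch below illustrates how a real-time “customer satisfaction index” of the kind Rosbank describes might be computed from per-utterance emotion scores. The scoring scheme, weights, and function names are hypothetical assumptions for illustration, not Rosbank's actual method.

```python
# Hypothetical sketch of a real-time customer satisfaction index: emotion
# probabilities for each utterance (assumed to come from a recognition
# model) are folded into an exponential moving average over the call, so
# recent utterances dominate what an operator dashboard would show.
POSITIVE = {"happiness", "surprise"}
NEGATIVE = {"sadness", "fear", "anger", "disgust"}

def utterance_score(emotion_probs):
    """Map one utterance's emotion probabilities to a score in [-1, 1]."""
    return (sum(emotion_probs.get(e, 0.0) for e in POSITIVE)
            - sum(emotion_probs.get(e, 0.0) for e in NEGATIVE))

def satisfaction_index(utterances, alpha=0.3):
    """Exponentially weighted index over a call, updated per utterance."""
    index = 0.0
    for probs in utterances:
        index = alpha * utterance_score(probs) + (1 - alpha) * index
    return index

# Example: a call that starts angry and calms down ends mildly positive.
call = [{"anger": 0.8, "happiness": 0.1},
        {"anger": 0.4, "happiness": 0.3},
        {"happiness": 0.7}]
print(round(satisfaction_index(call), 2))  # 0.09
```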

How mature is emotion recognition technology?

The scientific basis for emotion recognition technology is weak. The latest report by the AI Now Institute at New York University therefore argues that the technology should be banned from use in decisions that affect people's lives. We will discuss two major reasons the authors give in their report.

Displaying and feeling are not the same

Current psychological research concludes that displayed emotions do not necessarily reveal a person's actual inner emotional state. Hence, it is misleading to rely on software that analyzes only a fraction of the signals that need to be considered to assess a person's mood (including simply asking how she or he feels). A recent paper published by the Association for Psychological Science showed that facial expressions alone, for example, are a very weak indicator of someone's real feelings. Financial products and services built on such assumptions at best add noise to the analysis and at worst disadvantage people, or at least amount to negligent advice. Furthermore, facial expressions and tone of voice are for the most part under voluntary control. That could lead to absurd behavior when people interact with emotion-sensitive software: people could scream at call-center software just to be forwarded to a real person. This may seem far-fetched, but technology has always had behavior-changing effects on society: an ongoing study with currently 66,000 participants found that people check their phones on average 35 times a day to see (among other things) whether somebody texted them. Just imagine people running to their mailbox 35 times a day, seven days a week.

Illegally scraped and biased data

Finally, the data sets needed to train emotion recognition algorithms are often created by scraping websites without the informed consent of the people pictured in the harvested images or videos. This practice appears to be applied by both companies and research institutions. Not only does this constitute a violation of privacy rights, it can also skew the composition of training data sets and lead to wrong conclusions: a study found systematic racial biases in two well-known and widely used emotion recognition systems (Face++ and Microsoft's Face API). Software that systematically attributes more negative emotions to people of certain ethnicities could, for example, steer them toward overly conservative financial products with significantly lower returns, thereby further entrenching systemic racism.
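As a rough illustration of how such bias can be surfaced, the sketch below compares the average negative-emotion score a model assigns across demographic groups on otherwise comparable images. The numbers and function names are stand-ins; the cited study performed this kind of comparison with real scores from Face++ and Microsoft's Face API.

```python
# Hedged sketch of a bias audit: group the "anger" scores a model assigns
# to matched images by self-identified group and compare the group means.
# A persistent gap on comparable images is a red flag for systematic bias.
from collections import defaultdict
from statistics import mean

def audit_anger_gap(records):
    """records: iterable of (group, anger_score) pairs from one model."""
    by_group = defaultdict(list)
    for group, score in records:
        by_group[group].append(score)
    means = {g: mean(scores) for g, scores in by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

# Illustrative numbers only, not results from any real system.
records = [("group_a", 0.12), ("group_a", 0.18),
           ("group_b", 0.31), ("group_b", 0.27)]
means, gap = audit_anger_gap(records)
print(means, f"gap={gap:.2f}")
```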

Final thoughts

Facial recognition is often a necessary precursor to emotion recognition software. It is therefore encouraging that the tech-savvy city of San Francisco recently banned the use of facial recognition software by its agencies, and that a bipartisan bill to regulate the commercial use of facial data is currently being discussed in the US Congress. To conclude, emotion recognition software is still far from applicable in most business settings. Especially in finance, an industry with a strong and direct influence on people's well-being, companies should be careful not to draw wrong conclusions or overestimate the technology's potential. Researchers have to stay ahead of the industry to ensure transparency and to be able to act as technological and ethical evaluators.
