Why brand strategists should be wary of sentiment analysis
Marketers are increasingly using quantitative text analysis to analyse what consumers are saying about their brands. But they should be aware of the pitfalls of this approach.
In a world where 2.5 quintillion bytes of data are generated every single day, it is easier than ever for marketers to collect information on their brand that would have been unimaginable a mere decade ago. How much are consumers talking about us? What are they saying? With the vast data at our disposal, these questions should now be, in theory, straightforward to answer.
Yet making sense of all this data can be tricky. To analyse it, data scientists and social media listening firms tend to resort to sentiment analysis. Otherwise known as opinion mining, sentiment analysis is the process of determining the emotional tone and the opinion expressed in an online mention. And it can be incredibly useful: shifts in sentiment on social media have been shown to correlate with shifts in the stock market.
Yet there are a number of reasons to be wary of sentiment analysis. First, it is important to understand how sentiment is measured: you might be surprised at how crude the method really is. Put simply, sentiment analysis uses dictionaries to classify keywords as ‘positive’ or ‘negative’. If a tweet, for example, contains more negative words than positive ones, it is classified as negative. Yet words labelled negative in general-purpose dictionaries don’t always travel well into other contexts. Researchers have shown, for example, that applying standard sentiment dictionaries to financial services text can lead to severe misclassification. After all, an insurer that talks about ‘risk’, or a law firm that is mentioned in conversations about ‘damages’, would hardly consider these mentions negative.
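To make the mechanics concrete, here is a minimal sketch of dictionary-based scoring in Python. The word lists are tiny and hypothetical (real tools use dictionaries with thousands of entries), but the counting logic is the same, and it shows how domain language like ‘risk’ gets misread:

```python
# Hypothetical word lists for illustration; real sentiment dictionaries
# contain thousands of entries.
POSITIVE = {"great", "love", "win", "excellent", "happy"}
NEGATIVE = {"risk", "damages", "bad", "terrible", "loss"}

def dictionary_sentiment(text: str) -> str:
    """Classify text by counting positive vs negative dictionary hits."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(dictionary_sentiment("great insurer, excellent service"))
# An insurer describing its products trips the generic dictionary:
print(dictionary_sentiment("this insurer covers flood risk and storm damages"))
```

The second call comes back ‘negative’ purely because ‘risk’ and ‘damages’ sit in the negative list, even though the sentence is a neutral product description.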
Second, sentiment analysis tells you nothing about attribution. Your brand might be mentioned in a negative comment, but do people blame your brand specifically? How a brand should react to a message with negative sentiment depends crucially on whether it is held responsible for the situation. At Abensour and Partners, one of our partners’ PhD research focuses in part on developing techniques to understand attribution (and therefore blame) in politicians’ rhetoric, rather than relying on traditional sentiment analysis, which can lead to serious measurement error.
And finally, there is the age-old problem that machines have always struggled with, but that is ubiquitous in human communication (particularly online): sarcasm. In sarcastic text, people express negative sentiments using positive words. This lets sarcasm slip past sentiment analysis models unless they are specifically designed to account for it.
Are there any solutions to the pitfalls of sentiment analysis? One of the most promising avenues is supervised learning, where hand coding from human readers eventually ‘teaches’ the algorithm to recognise more nuanced use of language (for example sarcasm or blame). But ultimately nothing can replace taking time out to actually take a subjective look at your data, interpret it yourself, and ensure that the automated analysis you receive confirms your intuition. Taking back control from the algorithm is not only cathartic, it can lead to better decisions in a world where sentiment analysis is still very much an exciting, but imperfect, work in progress.