According to IBM research, US analyst and data jobs will grow 15% by 2020, to an impressive 2.35 million positions. It seems that the more data we generate, the more people we need to make sense of it. What’s wrong with this picture? With more technology than ever – better computers, more software options, smarter machines – why do we still need more and more people?

It’s no secret that understanding consumers today still relies on human effort; you still need experts to design surveys and focus groups, or to train AI machines to search for patterns in data. And like any other industry that relies on manual labor, data analytics is slow, delivering, at best, a shallow understanding of consumers in an untimely manner.

With the business world moving at a much faster pace these days, it’s more critical than ever for consumer brands to be able to respond to changes quickly and gain deep consumer insights in near real time. But according to a recent McKinsey survey on organizational agility, the ability to react quickly to change and move toward value-creating and value-protecting opportunities remains elusive for most.

Why is it so slow and shallow?

To truly understand consumers, companies need to go beyond the standard surveys, focus groups, brand health tracking tools, etc., and zero in on the “messier” side of data: all of the unstructured data coming in the form of unprompted text and voice opinions about consumers’ experiences. This data comes from every brand encounter with consumers – emails, calls, surveys, website feedback, store data, chats – and it is also available online on eCommerce sites, review sites, social media, and more.

While this amount of data may seem daunting, if your goal is to obtain a granular understanding of your market – ideally with a 360-degree view – leveraging people to mine this data is going to be too slow. By the time a team finishes analyzing it (if it ever does), the volume of new data generated during the analysis period would overwhelm any expert. According to our customers, manual analysis can be done at a pace of up to 100 opinions a day per person. Even if you have an AI, training it can take months at best – during which new topics, competitors, and features will appear – and, well, you’re behind again.

How automation should bring depth and speed

The ideal situation would be to automate data capture, analysis, and presentation – that is, to use an AI that can automatically convert messy, unstructured, and unprompted qualitative data into quantitative intelligence.

But to tackle this, the AI would need several core competencies:

  1. Learn how to classify data the way a market research expert would
  2. Learn to recognize variations, nuance, abbreviations, slang, etc.
  3. Deep analysis of sentiment


Learn how to classify data the way a market research expert would

Most AI text mining technologies rely on people to teach the AI and manage its input and output (hence the growing number of analysts predicted by IBM, as mentioned above). Typically, market researchers identify a core of 8-12 topics that seemingly impact purchase and repurchase more than others – e.g., price, service, quality. But consumers are far more complex than that, and if you’re not looking at ALL the data, plenty of significant data is left out of the feedback loop. In fact, based on our clients’ experience, understanding product-level issues can require 40 or more product experience aspects to fully learn how to evolve existing products and identify new ones.

To do this, you’d need technology that can automatically decipher all the topics your consumers are talking about and serve them back to you without human prep or bias.
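
As a rough illustration of what “letting the data propose the topics” can look like, here is a minimal sketch using scikit-learn’s LDA topic model. The sample reviews and topic count are invented for the example, and this is not Revuze’s actual technology – just one common way to discover topics without pre-defining them.

```python
# A minimal sketch of unsupervised topic discovery, assuming a plain list of
# raw review strings. The reviews and the number of topics are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "Battery life is great but the charger feels cheap",
    "Customer service was slow to respond to my refund request",
    "Love the screen quality, colors are vivid and sharp",
    "Shipping took two weeks and the box arrived damaged",
]

# Turn free-text opinions into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(reviews)

# Let the model propose topics instead of pre-defining 8-12 of them.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Print the top words per discovered topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```

On a real corpus of thousands of opinions, the discovered topics tend to go well beyond the usual price/service/quality shortlist – which is exactly the point.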


Learn to recognize variations, nuance, abbreviations, slang, etc.

Another challenge in training AI text mining is recognizing new or different ways of talking about something. Millennials and newer generations keep inventing new ways to express themselves. A product can be “cool,” “bad,” “good,” “great,” “solid,” or “dope” – how do we keep up?

A technology that mitigates this specific point would have to learn to recognize new ways of saying “good” or “bad,” as well as new discussion topics worthy of brand attention.
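
One common way to approximate this is to expand a small seed lexicon with word embeddings, so that words living nearby in vector space surface as candidate synonyms and slang. The sketch below is a hedged illustration using gensim’s pretrained GloVe vectors (downloaded on first use); the seed words are examples, and a production system would need continuous retraining on fresh consumer language to catch the next “dope.”

```python
# A minimal sketch of expanding a sentiment lexicon with word embeddings,
# assuming gensim's downloader and a small pretrained GloVe model.
# The seed words and the neighbour count are illustrative.
import gensim.downloader as api

# Pretrained word vectors (trained on Wikipedia + Gigaword).
vectors = api.load("glove-wiki-gigaword-50")

seed_positive = ["good", "great", "cool"]

# Nearest neighbours in embedding space: candidates for new variants of
# "good" that a static keyword list would miss.
for word, score in vectors.most_similar(positive=seed_positive, topn=10):
    print(f"{word}: {score:.2f}")
```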


Deep analysis of sentiment

Once you have topic classification and all its variations covered, you need to classify each reference as positive or negative sentiment in order to quantify the data. Since sentiment can be expressed in many ways (direct, indirect, cynicism, etc.), automation needs to correctly identify the sentiment of each topic reference and also keep up with any new forms of expression as they appear.
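
To make the idea concrete, here is a minimal sketch of per-topic sentiment scoring using Hugging Face’s off-the-shelf sentiment pipeline. The topic keywords and example opinions are invented for the illustration, and sarcasm or cynicism would require richer modelling than this simple keyword-plus-classifier approach handles.

```python
# A minimal sketch of assigning a sentiment label per topic reference,
# assuming the default Hugging Face sentiment-analysis pipeline.
# Topic keywords and opinions below are illustrative only.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

topic_keywords = {
    "battery": ["battery", "charge"],
    "service": ["service", "support", "refund"],
}

opinions = [
    "The battery barely lasts half a day",
    "Support sorted out my refund in minutes, brilliant",
]

for text in opinions:
    label = sentiment(text)[0]["label"]  # POSITIVE / NEGATIVE
    topics = [t for t, kws in topic_keywords.items()
              if any(k in text.lower() for k in kws)]
    print(f"{topics or ['uncategorised']} -> {label}: {text}")
```

Counting these labels per topic is what turns free-text opinions into the quantitative view described above.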


Conclusion

Market research experts and almost all AIs today are incapable of keeping up with today’s business intelligence needs. The reliance on humans is slow and inaccurate, and it can’t adapt quickly enough. Revuze, however, is revolutionary. It is the first available automation in the ongoing battle to keep up with the onslaught of consumer opinions, offering an innovative technology that turns those opinions, in any format, into quantitative data that anyone can analyze – at will, anytime.

But what about all those humans – will they be left behind? Quite the contrary! Imagine if our intelligence-focused employees didn’t have to worry about input/output methodologies and could instead actually spend their time on intelligence. Imagine if they had time for interpreting, analyzing, and predicting. How much more agile would your company be if you could speed up and increase the accuracy of your intelligence?
