According to recent IBM research, US analyst and data-science jobs will grow 15% by 2020, to a whopping 2.35M positions! It seems that the more data there is, the more people we need to handle it, especially market data. Isn’t something wrong with this picture? With more technology than ever – better computers, more software options, smart machines – we still need more and more people? What do we need in terms of technology to handle market feedback more efficiently?
Why is it so complex?
Market research is about processing lots of data. The data is also largely unorganized (“unstructured” is the common term). It comes from basically every brand encounter with consumers – emails, calls, surveys, website feedback – and it is also available in the public domain online on eCommerce sites, review sites, social media, etc.
That is a lot of data, often in different languages and in many different formats! To tackle it you need several core competencies:
- Deep understanding of languages
- Ability to relate data to a topic
- Learn to recognize latest ways to talk about a topic
- Understanding of sentiment
Deep understanding of languages
As brands go global, so do their consumers. Reviews and feedback can arrive in any number of languages and markets, and deciphering this feedback requires command of the languages in the markets where the brand sells. The larger the brand, the more markets it typically opens, and in turn the more languages it needs to support.
So if we wanted a technology that helps us mitigate this specific point, it would have to be one that can easily scale to multiple languages and data formats.
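To make “scaling to data formats” concrete, here is a minimal Python sketch that maps differently-shaped feedback payloads onto one common record. Everything here is invented for illustration – the `FeedbackRecord` shape and field names like `"body"`, `"comment"` and `"lang"` are assumptions, not any vendor’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    text: str
    language: str   # ISO 639-1 code, e.g. "en", "de"
    source: str     # e.g. "email", "survey", "review"

def normalize(raw: dict, source: str) -> FeedbackRecord:
    """Map a source-specific payload onto one common record shape.

    The field names checked here are hypothetical; each real feed
    would have its own schema and its own mapping.
    """
    text = raw.get("body") or raw.get("comment") or ""
    language = raw.get("lang", "en")  # fall back to a default when untagged
    return FeedbackRecord(text=text.strip(), language=language, source=source)

records = [
    normalize({"body": "Great battery life!", "lang": "en"}, "review"),
    normalize({"comment": "Akku hält sehr lange."}, "survey"),  # German, no lang tag
]
```

Once every source lands in one record shape, adding a new language or channel means writing one more mapping, not rebuilding the pipeline.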
Ability to relate data to a topic
As humans, we can’t process large amounts of data. If a brand gets 50,000 feedback data points a month about a product (600,000 a year – which is not outrageous), we wouldn’t expect a person to review these data points, memorize them and summarize them for peers. It’s just too much. We need the help of technology. But what type?
Most intelligent text-processing technologies out there rely on people (hence the growing number of analysts) to define these groups of topics – typically a core of 8-12 common-practice topics such as Price, Service and Quality. But consumers are not limited to these topics, which means a lot of data is left out of the feedback loop.
Ideally, we need technology here that can automatically decipher the topics your consumers are talking about and serve them back to you without human prep or bias.
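As a deliberately naive illustration of topic discovery without a human-defined list, here is a Python sketch that surfaces the most-discussed terms in a batch of feedback as candidate topics. Real systems use far richer models; the stopword list, helper name and sample feedback are all made up for this example:

```python
import re
from collections import Counter

# Tiny illustrative stopword list – a real one would be much larger
STOPWORDS = {"the", "a", "is", "it", "and", "to", "was", "after", "fast"}

def discover_topics(feedback: list[str], top_n: int = 3) -> list[str]:
    """Return the most frequent non-stopword terms as candidate topics.

    No predefined topic list: whatever consumers talk about most
    bubbles up on its own.
    """
    counts = Counter()
    for text in feedback:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

feedback = [
    "the battery died after a day",
    "battery life is great",
    "shipping was slow and the battery arrived damaged",
    "slow shipping, fast battery drain",
]
topics = discover_topics(feedback)  # "battery" dominates, "shipping" and "slow" follow
```

Even this crude counting would surface “battery” and “shipping” as themes no analyst had to anticipate – the point the paragraph above is making.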
Learn to recognize latest ways to talk about a topic
Another issue with human-defined topic recognition is keeping up with new ways to talk about something. Millennials and newer generations keep inventing new ways to express themselves. A product can be “cool”, “good”, “great”, “solid” or “dope” – how do we keep up? One way is to keep relying on humans to learn the new phrases, implement them into systems and track the new topics. That is time-consuming, we may miss market feedback or opportunities in the meantime, and it requires us to keep piling up analysts…
If we wanted a technology that helps us mitigate this specific point, it would have to be one that can learn to recognize new ways of saying “good” or “bad”, as well as new discussion topics worthy of brand attention.
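One way such learning can work is distributional similarity: if an unknown word keeps appearing in the same slots as known positive words (“the camera is ___”), it is probably a new way of saying “good”. Here is a crude Python sketch of that idea – the seed lexicon, regex contexts and function name are assumptions for illustration only:

```python
import re

# Seed lexicon of known positive words (illustrative, tiny)
KNOWN_POSITIVE = {"good", "great", "cool", "solid"}

def propose_new_terms(feedback: list[str]) -> set[str]:
    """Propose unknown words that occupy the same slot as known
    positive words – a crude distributional-similarity heuristic.

    A real system would score candidates over many contexts before
    promoting them into the lexicon.
    """
    # Slots like "is ___", "so ___", "really ___"
    pattern = re.compile(r"\b(?:is|so|really)\s+([a-z]+)\b")
    slot_words: set[str] = set()
    for text in feedback:
        slot_words.update(pattern.findall(text.lower()))
    return slot_words - KNOWN_POSITIVE  # candidates for lexicon growth

feedback = [
    "this phone is dope",
    "the camera is great",
    "battery life is solid",
]
candidates = propose_new_terms(feedback)  # {'dope'}
```

Because the lexicon grows from the data itself, “dope” gets picked up the first time consumers start using it, not after an analyst notices it.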
Understanding of sentiment
Similar to the previous point, sentiment can be expressed in many ways, formats and languages – and sometimes feedback lacks sentiment altogether. To correctly identify and keep up with feedback, you need a flexible way to pick up new forms of sentiment as they appear (not in retrospect), and to recognize when no sentiment is included at all.
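The “no sentiment” case is worth spelling out, because many tools force every data point into positive or negative. This minimal lexicon-based Python sketch (word lists and labels invented for illustration) gives no-sentiment feedback its own explicit outcome:

```python
# Tiny illustrative lexicons – real ones would be learned and much larger
POSITIVE = {"great", "good", "love", "dope", "solid"}
NEGATIVE = {"bad", "awful", "broken", "slow", "hate"}

def classify(text: str) -> str:
    """Lexicon-based sentiment with an explicit 'none' outcome
    for feedback that carries no sentiment at all."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos == 0 and neg == 0:
        return "none"        # factual feedback, e.g. a feature question
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "mixed"

examples = [
    classify("the camera is great"),    # positive
    classify("shipping was slow"),      # negative
    classify("does it come in blue"),   # none – no sentiment to score
]
```

Treating “none” as a first-class answer keeps neutral questions and factual remarks from polluting the positive/negative tallies.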
Current market research technologies rely on humans, and thus are slow to set up, miss a lot of things and are slow to adapt. Revuze is an innovative technology vendor that addresses just this with a self-learning, fast-setup, low-touch solution that typically delivers 5-8X the data coverage of anything else – and it does it without humans…