Market Research in a world where 90% of the data was created recently
According to a recent report by IBM’s Marketing Cloud, 90% of the world’s data was created in the last two years! Isn’t it amazing? The world’s data grew 10X in 730 days, at a rate of 2.5 quintillion bytes a day. There are so many implications: where do you store all this data, how do you manage it, and is there such a thing as too much data? What does it mean for the future? Will we see 10X data growth again in the next two years?
How do you shift your market research initiatives to handle such data volumes?
Using old market research in a new world?
Market research used to be about data sampling, focus groups, and surveys: in general, peeking through the market peephole and estimating overall market behavior and preferences from a small group of individuals.
As data became more prevalent, businesses started to use systems that could search it, but keep in mind these systems were built to search perhaps 1/20 of the data that exists today.
So basically, in the “old world” you either used brute manual effort or systems that could handle some data, but most likely far less data than exists today.
How to process lots of data?
When processing lots of data, the data is typically unorganized (“unstructured” is the commonly used term). It can come from every brand encounter with consumers: emails, calls, surveys, website feedback. It is also available in the public domain online, on eCommerce sites, review sites, social media, etc.
This is a lot of data, often in different languages and in many different formats! To tackle it you need several core competencies:
- A deep understanding of languages
- The ability to relate data to a topic
- The ability to learn the latest ways consumers talk about a topic
- An understanding of sentiment
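To make the last competency concrete, here is a minimal sketch of rule-based sentiment tagging. The word lists and the scoring rule are invented for illustration only; a real system would learn such cues automatically, across languages and formats.

```python
# Illustrative rule-based sentiment tagger. The word lists below are
# made-up examples, not a real sentiment lexicon.
POSITIVE = {"love", "great", "amazing", "excellent"}
NEGATIVE = {"died", "terrible", "slow", "disappointed"}

def sentiment(text: str) -> str:
    # Normalize: lowercase and strip basic punctuation before splitting.
    words = set(text.lower().replace(".", "").replace(",", "").split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(sentiment("Great camera, I love the screen"))      # positive
print(sentiment("Phone died on me, very disappointed"))  # negative
```

A fixed word list like this breaks down quickly in practice, which is exactly why autonomous, self-learning approaches matter at today’s data volumes.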
What is typically lacking in “old world” systems?
Mainly autonomous decision making. When you need to go through lots and lots of data, you can’t rely on humans. Hoping humans will set up a computer system to analyze and handle any type of data is unrealistic: there are so many variations in the way people express themselves around a brand, product, or feature that you can’t expect one person, or even a team, to figure them all out. Add languages, different data formats, and the many ways consumers express sentiment on top of that, and the complexity just grows and grows.
Ideally, a technology that helps us handle unlimited data would easily scale to multiple languages and data formats, automatically decipher the topics your consumers are talking about, automatically recognize sentiment, and sum it all up for you.
Why is this difficult?
Let’s pick an example. Say we’re a smartphone brand and want to analyze what consumers are saying about our latest phone’s battery life. We can try to scan online reviews and search for variations of the word “battery”, but what happens when consumers use phrases such as “doesn’t last long enough” or “phone died on me in the middle of the work day”?
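The gap is easy to demonstrate. In this sketch, the reviews and the phrase list are fabricated examples: a literal keyword match finds only one battery mention, while even a small set of consumer paraphrases catches the rest. In practice, a self-learning system would have to discover those phrases itself rather than rely on a hand-made list.

```python
# Made-up sample reviews for illustration.
reviews = [
    "The battery easily lasts two days.",
    "Doesn't last long enough between charges.",
    "Phone died on me in the middle of the work day.",
    "Great camera, love the screen.",
]

# Old-world approach: literal keyword match on "battery".
keyword_hits = [r for r in reviews if "battery" in r.lower()]

# Expanding the search with paraphrases consumers actually use
# (a hand-made list here; real systems must learn these automatically).
battery_phrases = ["battery", "doesn't last", "died on me", "between charges"]
phrase_hits = [r for r in reviews if any(p in r.lower() for p in battery_phrases)]

print(len(keyword_hits))  # 1 — misses two battery complaints
print(len(phrase_hits))   # 3 — catches the paraphrased ones too
```

The keyword search silently drops two of the three battery complaints, which is exactly the coverage problem described above.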
Current market research technologies rely on humans, and are thus slow to set up, slow to adapt, and prone to missing a lot. In a world that generates more and more data each year, and where the data grows so quickly, you can’t rely on humans or manual labor to figure things out.
The good news is that there is now enough data to get answers to your questions; all you need is to analyze it. No more need for feedback groups, surveys, etc.
The sad news is that most tools out there were not built for this task. Revuze is an innovative technology vendor that addresses exactly this, with a self-learning, fast-setup, low-touch solution that typically delivers 5-8X the data coverage of anything else, and it does it without humans.