The buzz phrases sound so compelling.
Big data. Predictive analytics. Machine learning.
If you aren’t building at least one of those into your business, you’re already behind, right?
Wrong.
This isn’t to say that the concepts aren’t important. Nor that marketers should ignore them.
But the fact is that in practice, most, if not all, of them are genuinely hard to execute. And in trying to execute on them, key resources (time, dollars, and people, to name a few) can be drawn away from higher-ROI opportunities.
These misses often trace back to those same key resources – unrealistic expectations for how long a project will take, not enough dollars allocated, and perhaps not having the right people assigned to the task. There are plenty of other reasons too – wrong strategy, bad execution, and so on.
And one topic that unifies all of the ideas above is data. Typically a lot of it. You need data to make more informed and robust decisions. And as a business grows, there's just more of it.
The problem is that most marketers don't have great data. There's no such thing as perfect data. Data integrity is a pain point for pretty much everyone. Which isn't to say you shouldn't keep working on improving it.
But capturing, aggregating, and then processing data is a lot harder than most of us would expect in 2017.
We all want data to help inform our business decisions. Okay, maybe not everyone wants it, but the idea of using data is probably something most everyone would agree is a good thing.
Getting decent data, however, even when it's not “Big,” is shockingly difficult. Data integrity is paramount if you're going to use it to make better decisions. While bad info doesn't always mean bad decisions – sure, everyone gets lucky here and there – it dramatically increases the chances of making them.
Even in the world of digital, where everything is supposed to be tracked, tagged, pixeled, and so on, anyone who has run any semblance of a campaign has seen that there are always discrepancies when comparing any two systems.
10%-15% variances are the norm.
Think about that – that's a good-sized difference. And when you consider that tests are sometimes called at 90% confidence, or on differences of less than 10%, that measurement gap becomes a real problem. (I'm oversimplifying a bit, but only by a small amount.)
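To make that concrete, here's a rough sketch with made-up numbers (not from any real campaign) showing how a typical cross-system gap can be bigger than the lift you're celebrating:

```python
# Hypothetical numbers to illustrate the scale problem (not real campaign data).
ad_platform_conversions = 1_000   # what the ad platform reports
analytics_conversions = 870       # what your analytics tool reports for the same campaign

discrepancy = abs(ad_platform_conversions - analytics_conversions) / ad_platform_conversions
print(f"Cross-system discrepancy: {discrepancy:.0%}")   # ~13%

# Suppose your A/B test "winner" shows an 8% lift over control.
measured_lift = 0.08
print(f"Measured lift: {measured_lift:.0%}")

# The measurement gap between your two systems is bigger than the effect you're calling.
print(f"Discrepancy exceeds lift: {discrepancy > measured_lift}")
```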
Explanations for these variances range from different methodologies, to customers deleting cookies (I read one study that said up to 30% regularly delete their cookies – which I find crazy high, but still), to timing differences (I love it when one system is based on eastern time and another is western time. And I still can’t remember what GMT is…).
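Just to show how innocent that timing piece looks, here's a hypothetical timestamp and the same conversion landing on different days depending on which timezone a system reports in (Python 3.9+ with the standard zoneinfo module):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A single conversion event, recorded in UTC (hypothetical timestamp).
event_utc = datetime(2017, 3, 2, 3, 30, tzinfo=ZoneInfo("UTC"))

# The same instant, as each reporting system would bucket it by date.
for label, tz in [("UTC/GMT", "UTC"),
                  ("Eastern", "America/New_York"),
                  ("Pacific", "America/Los_Angeles")]:
    local = event_utc.astimezone(ZoneInfo(tz))
    print(f"{label:8} -> {local.date()}")

# The UTC system reports March 2, while the Eastern and Pacific systems report
# March 1, so daily totals disagree even though everyone counted the same event.
```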
At the same time, it can be easy to think that with big data comes better data. When you have more data, then averages and trends should appear more clearly, right? That’s part of the Law of Large Numbers.
But again, this all presumes data integrity.
Which rarely happens.
And so issues are exacerbated. Not resolved.
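Here's a quick, made-up simulation of that point: if your tracking silently drops, say, 30% of conversions, the Law of Large Numbers just makes you more confident in the wrong number.

```python
import random

random.seed(42)

TRUE_CONVERSION_RATE = 0.05   # what customers actually do (hypothetical)
TRACKING_LOSS = 0.30          # say 30% of conversions never get recorded
                              # (deleted cookies, blocked pixels, etc.)

def observed_rate(n_visitors):
    """Simulate the conversion rate your reporting system would show."""
    recorded = 0
    for _ in range(n_visitors):
        converted = random.random() < TRUE_CONVERSION_RATE
        tracked = random.random() > TRACKING_LOSS
        if converted and tracked:
            recorded += 1
    return recorded / n_visitors

for n in (1_000, 50_000, 1_000_000):
    print(f"{n:>9,} visitors -> observed rate {observed_rate(n):.4f} "
          f"(true rate {TRUE_CONVERSION_RATE})")

# The observed rate converges nicely: to about 0.035, not 0.05.
# More data doesn't fix the bias; it just makes you surer of the wrong answer.
```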
Let me make a comparison from my crew days back in college.
Our coach constantly emphasized that if we couldn’t row well at a lower rating (strokes per minute), that we would be in worse shape at a higher rating. We discovered pretty quickly that he was right. If we weren’t in sync at a low rating, we were a thrashing mess at a higher one.
In fact, we often moved the boat slower at a messy higher rating than we did rowing cleanly at a lower one.
That same analogy applies to data. Trying to reconcile and make sense of messy, larger datasets is a total nightmare. It usually leads to massive amounts of time spent figuring out what's actually happening. And at worst, to bad decisions. Oddly enough, those bad decisions are often worse than if you had focused on simpler, higher-level results.
To be clear, I'm not saying to ignore the data. And I'm not saying to ignore opportunities to leverage bigger data.
What I am saying is that it's much more important to make sure your core info is in place first. If you don't have the simple reports, the basics, the fundamentals, then focus there before going after Big Data.
Not to mention that when you hear about companies doing all these things and getting featured in supposed case studies on vendor sites, it can feel like you're behind.
But take this in:
Beachbody grew to over $1 billion in revenues without a true CRM.
Dollar Shave Club, which was acquired for $1 billion, used MailChimp as one of its ESPs.
Sure, each of them probably left some revenue and margin opportunities on the table, but they're also proof that you don't always need the biggest and best tech to succeed.
My suggestion, before trying to figure out how to get Big Data to work in your organization, is to see whether your current data is decent. That's a low bar, but start there. Then ask how much your company is actually looking at and using that data. Are you leveraging analytics and insights, not just reports?
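If it helps, this is the level of "decent" I mean – a rough sketch assuming a simple orders table with hypothetical column names (order_id, email, channel, revenue, order_date), nothing fancier:

```python
import pandas as pd

def basic_data_health_check(orders: pd.DataFrame) -> None:
    """Very basic sanity checks on an orders table (hypothetical schema)."""
    # 1. Duplicates: the same order counted twice inflates everything downstream.
    dupes = orders["order_id"].duplicated().sum()
    print(f"Duplicate order_ids: {dupes}")

    # 2. Missing keys: orders you can't tie back to a customer or a channel.
    for col in ("email", "channel", "revenue"):
        missing = orders[col].isna().mean()
        print(f"Missing {col}: {missing:.1%}")

    # 3. Obvious nonsense: non-positive revenue, dates in the future.
    print(f"Non-positive revenue rows: {(orders['revenue'] <= 0).sum()}")
    print(f"Future-dated orders: {(orders['order_date'] > pd.Timestamp.now()).sum()}")

# Usage (hypothetical file):
# orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
# basic_data_health_check(orders)
```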
Get the basics working and the teams and processes dialed in first before going after the big and sexy tech.
From personal experience, I know it's no fun when you don't have a CRM. It's no fun when outsiders criticize you for what you can't do. But I'd also say that when those critics made their comments about Beachbody, there was also a lot of envy of the size of the business.
And frankly, I felt the same way when I rowed crew. Sure, our stroke rating wasn’t as high as some other boats, and it might not have looked like we were going that fast. We didn’t win every race, but we moved our boat well enough to win a few (let’s be realistic, getting a few wins for the MIT crew team was a challenge when the Ivy League teams were recruiting). And that was better than some of our competitors who thought they were doing better with a higher stroke rating but never came out on top.
We all have to be comfortable with messiness in our lives. And in our businesses.
But if we can avoid some of those messes and get real value from the simpler areas of each, isn't that preferable to never making progress?