2020 Predictions: Customer data management practices
When it comes to customer data, much has changed in the past decade. With 2020 quickly approaching, learn about the predictions that will set the stage for the coming decade and what you should do to future-proof your business.
When we look back on this decade, we’ll likely see it as the major turning point for how people interact with each other, their devices, and their favorite brands and products. The 2010s was the decade in which we fully meshed our daily lives with the digital world; from the moment most people wake up to when they go to sleep, many parts of their lives are managed via the internet. In fact, US consumers own an average of six internet-connected devices and spend an average of 6 hours and 42 minutes online per day. Our dependence on the internet has fundamentally changed how brands conduct business and engage with consumers, as well as how these brands look at customer data. But this change didn’t happen overnight. With a new decade almost upon us, there’s never been a better time to consider how our relationship with data will continue to evolve, and to use that understanding to future-proof organizational goals and plans for 2020 and beyond.
In this blog, we’ll review how our relationship with data has changed as both consumers and as consumer-facing companies in the past decade, and cover four major trends we predict will set the tone for how we create and consume data in the new decade.
The 2010s: The end of big data
When mobile exploded onto the scene a little over a decade ago, it introduced an entirely new realm for customer engagement. Suddenly, customers could access apps and websites on the go, and receive messages wherever they went. Companies could access all of that data to better understand their customers and create better experiences. The initial school of thought around the business application of customer data was focused on “big data.” This largely meant that brands were looking at all the data points generated by large sections or entire user bases as an indicator of how and when to engage with customers.
The idea behind big data was one of scale—if companies had access to more data, then they would be able to run analyses that produced statistically significant insights that could be used to drive business, marketing, product, and engineering initiatives. But, as companies quickly realized, the big data approach wasn’t enough for customers that expected companies to tailor every part of every engagement to their unique preferences, attributes, and previous interactions. And while brands were playing catch up with their customers’ engagement data, their customers were receiving emails, push notifications, and other messages that didn’t make sense to them—and even seeing increased issues with responsiveness thanks to apps bloated with third-party code and jerry-rigged web systems.
The shift from big data to what I’ll call “small data,” or the amalgamation of individual customers’ data, was greatly influenced by the rise of mobile. Mobile added a degree of urgency and layers of new data points that could be used to tailor messaging to customers more effectively, driving better outcomes. This shift led us to where we are now.
2019: More, more, more
To say we are in an age of “more” is putting it lightly. In 2019, companies have access to:
In 2011, there were 2.267 billion internet users. By March of this year, that number had reached 4.346 billion. While not every one of those users may be in your target audience, there’s no denying that companies’ reach is greater than ever before.
If we consider just mobile, this decade has produced an astounding uptick in usage. In July 2009, only 65,000 apps were available on the App Store. In 2019, there are 3.262 million, 811,000 of which are gaming apps, and daily media consumption via the mobile internet has grown 504% since 2011.
And that’s just mobile. Between wearables, tablets, connected TVs and OTT devices, voice assistants, personal computers, and social media, the possibilities for engagement are near-boundless.
With all of these new and pre-existing channels and devices, the number of digital interactions has skyrocketed, and, as a result, these interactions have produced immense amounts of engagement data over the course of the past ten years. mParticle alone has processed over five billion (yes, with a B) engagement data points in just the last two years.
Long gone are the days of having a singular, monolithic system that does most things mostly okay. The increase in the number of channels and devices has instead led technology vendors to hone their offerings around one or a handful of specialties. This is both good and bad, of course. The benefit is that organizations now have tools that can do specific things or handle data from specific channels better than ever before, but the drawback is that many of these systems operate as silos, making it much more difficult to unify data and get a singular view of your customers. Additionally, many organizations’ tech stacks have grown bloated with tools adopted as workarounds for data siloing.
More (and greater) expectations
Over the past five years, we’ve seen more changes in how we look at and act on customer data than in the previous 30. In comparison, the kind of messaging consumers were receiving back at the beginning of the decade is orders of magnitude behind the relevant, responsive, personalized customer experiences that are possible today—from location-based ads to push messaging that’s dynamically adjusted based on your engagement with a brand’s app or website. This level of personalization is now expected by customers.
Today’s customer expects highly personalized experiences tailored to their unique preferences, attributes, and previous interactions with brands. And, as brands across every industry put more resources and time into creating great, personalized experiences for customers, expectations will surely only continue to rise.
So, what’s on the horizon for 2020?
Major predictions for 2020 and beyond
Data quality will be more important
With all of the data being generated as customers engage with brands across devices and channels, one of the most important things that brands can invest time and resources into is ensuring that the data they are collecting is accurate and actionable. Duplicative and unstandardized data can lead to mis-personalized customer experiences and data pipeline bloat. Data quality will only become more critical in the coming decade.
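To make the problem concrete, here is a minimal sketch of the kind of checks a data-quality layer performs before events reach downstream tools. The naming convention, field names, and dedupe key are illustrative assumptions, not mParticle’s actual rules:

```python
import re

# Assumed naming standard for illustration: lowercase snake_case, e.g. "add_to_cart".
EVENT_NAME_PATTERN = re.compile(r"^[a-z]+(_[a-z]+)*$")

def validate_event(event: dict) -> list:
    """Return a list of data-quality violations for a single event."""
    violations = []
    name = event.get("name", "")
    if not EVENT_NAME_PATTERN.match(name):
        violations.append(f"non-standard event name: {name!r}")
    if "timestamp" not in event:
        violations.append("missing timestamp")
    return violations

def dedupe(events: list) -> list:
    """Drop duplicate events sharing the same (user, name, timestamp) key."""
    seen, unique = set(), []
    for event in events:
        key = (event.get("user_id"), event.get("name"), event.get("timestamp"))
        if key not in seen:
            seen.add(key)
            unique.append(event)
    return unique
```

Even a simple gate like this catches the two failure modes called out above: duplicative events that bloat the pipeline, and unstandardized names ("AddToCart" vs. "add_to_cart") that fragment analysis downstream.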
One of the most important applications of data quality is identity resolution. As more devices, channels, people, and tools come into the mix, it only becomes more difficult to create an actionable catalog of valid, clean customer data to guide the product roadmap and create personalized experiences that resonate.
Key business expectations of data and analytics teams. Source: Gartner
Hyper-personalization continues to be top of mind
That brings us to our next point: the continued importance of hyper-personalization across every channel over the course of the next decade.
As more devices and channels are introduced over the next few years, companies will have even more ways to reach customers and every one of those interactions will need to be personalized in real time. To keep up with customer expectations, brands will need to be able to create cohesive messaging across every channel based on customers’ histories, purchases, clicks, etc.
With more channels and devices, and no standardization to be seen, collecting and matching data to specific users will become even more complex. Without a solution that can account for the variance in identities and make matches via any set of identifiers, brands lose the vaunted “single view of the customer” and are unable to provide the connected customer experience that modern consumers have now come to expect.
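The phrase “matches via any set of identifiers” can be sketched as follows. This is a simplified illustration, assuming each profile carries a dictionary of known identifiers; real identity-resolution systems use configurable strategies rather than first-match wins:

```python
from typing import Optional

def find_profile(event_ids: dict, profiles: list) -> Optional[dict]:
    """Return the first profile sharing at least one identifier with the event.

    An incoming event may carry any subset of identifiers (email, device ID,
    customer ID, ...); a match on any one of them links the event to a profile.
    """
    for profile in profiles:
        for key, value in event_ids.items():
            if value is not None and profile["identities"].get(key) == value:
                return profile
    return None  # no overlap on any identifier: treat as a new/anonymous user
```

Because the match can happen on any identifier, an event from a brand-new device still resolves to the right person as long as it shares, say, an email address with an existing profile.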
Data will need to be democratized
Product, engineering, and marketing all need to be able to rely on a centralized source of clean, valid data attributed to specific users to understand how to tailor experiences. Data is no longer the realm of one technical team or department within modern organizations; it’s now a team sport.
To operate as agilely and efficiently as possible in 2020, organizations need to adopt a culture around data where everyone has a shared understanding of what data is being collected, what it means, how it is used, and how to access it to make decisions across the business.
In the coming years, we’ll see a marked shift in how every area of business is conducted as data becomes the foundation for all activity and decision-making. No longer should marketers base marketing initiatives on limited information and suppositions, and no longer should technical teams bear the brunt of manually accessing and processing data requests for other teams. Instead, by democratizing access to customer data across teams, companies in 2020 will see more effective and cohesive initiatives across the business tied together by data.
Use of customer data is rapidly changing and expanding. Source: Gartner
Increased scrutiny of data collection and usage
In response to the drastic increase in data being collected by brands, this decade has also seen growing concern about how and what data is being collected about private citizens, manifesting in increasingly stringent data regulations like the VPPA, GDPR, and the upcoming CCPA. In 2020 and beyond, we can expect to see additional policies and regulations put into place around the collection, use, and storage of personally identifying, demographic, and other engagement data. Complying with existing and yet-to-be-instated regulations will be a matter of legal and financial importance, as well as critical to maintaining customer trust. To address these regulations, companies will need advanced identity resolution and data quality practices that can rank identities by importance when matching and merging profiles. This is especially important when considering anonymous and known states: users routinely move between the two, almost every company treats that transition differently, and the right solution needs to be flexible and extensible enough to meet brands’ specific compliance needs.
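Ranking identities and merging an anonymous profile into a known one might look like the following sketch. The priority order here is an assumption for illustration; in practice it would be configurable per brand:

```python
# Assumed priority order: a customer ID is more trustworthy than an email,
# which is more trustworthy than a device ID, and so on.
IDENTITY_PRIORITY = ["customer_id", "email", "device_id", "anonymous_id"]

def merge_profiles(known: dict, anonymous: dict) -> dict:
    """Fold an anonymous profile into a known one after, e.g., a login links them.

    When both profiles carry the same identifier type, the known profile's
    value wins; engagement histories are concatenated so no events are lost.
    """
    merged = {"identities": {}, "events": known["events"] + anonymous["events"]}
    for key in IDENTITY_PRIORITY:
        value = known["identities"].get(key) or anonymous["identities"].get(key)
        if value is not None:
            merged["identities"][key] = value
    return merged
```

The point of the priority list is exactly the ranking described above: when two profiles disagree, the merge is decided by the more trustworthy identifier rather than by whichever record arrived last.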
The CCPA effect: Privacy regulations introduced by state. Source: Gartner
Microsoft demonstrated the potential impact of CCPA subject rights when it launched its global privacy self-service portal with the advent of the GDPR. In the first year, it received 18 million requests, of which 6.7 million (37%) came from the U.S. The CCPA will only serve to educate more consumers about their rights. (Gartner)
The right solution should be designed to help brands manage their highly identifying customer data, including consent decisions. Personally identifying data will need to be collected and stored in a logically or physically separated identity space where it can be transformed, shared, and even deleted to maintain data control and transparency, if companies want to remain compliant without compromising their business goals.
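A separated identity space can be sketched as a small vault that holds PII keyed by a pseudonymous ID, so engagement data can flow through analytics tools carrying only that key. This is an illustrative structure, not any vendor’s actual design:

```python
import uuid

class IdentityVault:
    """Stores PII apart from behavioral data; only the pseudonymous key is shared.

    Engagement events downstream reference the pseudonymous ID, so honoring a
    deletion request (e.g. a GDPR/CCPA erasure) removes the PII while leaving
    the remaining event stream effectively anonymous.
    """

    def __init__(self):
        self._pii = {}  # pseudonymous_id -> PII record

    def register(self, pii: dict) -> str:
        """Store a PII record and hand back the key safe to share downstream."""
        pseudonymous_id = str(uuid.uuid4())
        self._pii[pseudonymous_id] = pii
        return pseudonymous_id

    def lookup(self, pseudonymous_id: str) -> dict:
        return self._pii.get(pseudonymous_id, {})

    def erase(self, pseudonymous_id: str) -> bool:
        """Honor a deletion request; returns False if nothing was stored."""
        return self._pii.pop(pseudonymous_id, None) is not None
```

The design choice is the one the paragraph above describes: because the separation is built in, deletion and consent changes are single operations against the vault rather than a hunt through every downstream system.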
What should you do to prepare for 2020? In short, get your house in order when it comes to your data practices. Prioritize establishing a chain of total quality management for your data and create a culture around using this data to drive business initiatives. With democratized access to quality data and a secure customer data framework, hyper-personalization across every channel and device your customers may use, whether or not they are logged in, becomes possible without risking running afoul of privacy regulations.
Companies that don’t adapt and prioritize data planning, validation, and quality enforcement along with advanced identity resolution will likely find it increasingly difficult to improve efficiency and return on investments across product, marketing, and engineering, let alone create the kind of customer experiences that attract and retain customers.
With this decade ending in just a few weeks, it’s time to future-proof your business. After all, hindsight is 2020.