AI and Data Privacy: How to ensure your AI programs are safe, responsible, and effective
AI has the potential to transform marketers' business impact. But if you don't have a compliant data foundation, leveraging ML predictions for personalization can lead to breaches of customer trust. This blog post provides guidance on how you can use data and AI to strengthen customer relationships rather than disrupt them.
Evaluating an Enterprise CDP? Consider these five critical requirements.
When it comes to comparing different CDP vendors, there are specific requirements that enterprise brands should consider carefully. By ensuring that your CDP partner delivers on these fronts, you can maximize the ROI of your CDP investment as your business continues to grow.
How we reduced our S3 spend by 65% with block-level compression
As our customer base and platform offerings expanded over recent years, so did the cost of storing our clients’ data. We implemented a block-level compression solution that reduced our S3 spend by 65% without impacting the customer experience or client data.
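The post above doesn't spell out the implementation, but the core idea of block-level compression can be sketched in a few lines: split an object into fixed-size blocks and compress each one independently, so any single block can later be fetched and decompressed without reading the whole object. This is a minimal illustration using Python's standard `zlib` module; the 64 KiB block size is an arbitrary choice for the example, not the size used in production.

```python
import zlib

BLOCK_SIZE = 64 * 1024  # illustrative block size, not the production value

def compress_blocks(data: bytes) -> list[bytes]:
    """Split data into fixed-size blocks and compress each independently,
    so any single block can be fetched and decompressed on its own."""
    return [
        zlib.compress(data[i : i + BLOCK_SIZE])
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def decompress_block(blocks: list[bytes], index: int) -> bytes:
    """Decompress one block without touching the rest of the object."""
    return zlib.decompress(blocks[index])

# Repetitive data (like event logs) compresses well.
payload = b"event-log-line\n" * 100_000
blocks = compress_blocks(payload)
restored = b"".join(zlib.decompress(b) for b in blocks)
```

Compressing blocks independently trades a little compression ratio for random access: a consumer that needs one record range only pays to download and decompress the blocks covering it.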
BigQuery vs. Redshift: Which cloud data warehouse is right for you?
The data warehouse is the source of truth for your business's data, so choosing the right solution is critical. This article explains how BigQuery and Redshift compare on factors such as performance, security, and cost so that you can select the right warehouse for your needs.
Kinesis vs. Kafka: Comparing performance, features, and cost
In this article, we compare two leading streaming solutions, Kinesis and Kafka. We focus on how they match up in performance, deployment time, fault tolerance, monitoring, and cost, so that you can identify the right solution for your streaming needs.
What the heck is reverse ETL?
Reverse ETL is a process in which data is delivered from a data warehouse to the business applications where non-technical teams can put it to use. By piping data from a data warehouse to downstream business systems, reverse ETL tools fill the gap between data storage and activation.
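The pattern described above can be sketched in a few lines of Python: read modeled rows out of the warehouse, then push each one into the operational tool where business teams work. Here an in-memory SQLite table stands in for the warehouse, and `sync_to_crm` is a hypothetical placeholder for a real destination API call.

```python
import sqlite3

# Stand-in "warehouse": an in-memory SQLite table of modeled user profiles.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE profiles (email TEXT, lifetime_value REAL)")
warehouse.executemany(
    "INSERT INTO profiles VALUES (?, ?)",
    [("ana@example.com", 420.0), ("bo@example.com", 35.5)],
)

synced = []

def sync_to_crm(record: dict) -> None:
    """Hypothetical stand-in for a downstream tool's API (e.g. a CRM)."""
    synced.append(record)

# The "reverse" step: query the warehouse and forward each row downstream,
# where a non-technical team can act on it.
for email, ltv in warehouse.execute(
    "SELECT email, lifetime_value FROM profiles WHERE lifetime_value > 100"
):
    sync_to_crm({"email": email, "lifetime_value": ltv})
```

In practice the destination is a SaaS API rather than a Python list, but the shape is the same: warehouse query in, operational records out.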
Snowflake vs. Redshift: Which Data Warehouse Is Better for You?
Learn how popular data warehouse providers Snowflake and Redshift compare in maintenance requirements, pricing, structure, and security so that you can understand which solution is right for your team.
Snowflake vs. BigQuery: What are the key differences?
Learn more about the differences between two popular data warehouse solutions, Snowflake and Google BigQuery, and understand how to identify which is right for your team.
How we improved performance and scalability by migrating to Apache Pulsar
We recently made a significant investment in the scalability and performance of our platform by adopting Apache Pulsar as the streaming engine that powers our core features. Thanks to the time and effort we spent on this project, our mission-critical services now rest on a more flexible, scalable, and reliable data pipeline.
New ways to understand in-app behavior with Apple iOS 16
With the latest updates to iOS and Xcode, Apple has introduced changes to its operating system and developer environment that give engineers and product teams creative new ways to uncover user behavior.
How does Azure work? An explanation of Microsoft’s cloud platform
Learn more about cloud platform Microsoft Azure and how it fits into your data infrastructure.
How does Snowflake work? A simple explanation of the popular data warehouse
Learn more about what Snowflake is and how it fits into your data stack.
Enhancements to mParticle’s developer tools make it easier to collect data and ensure quality at the source
mParticle makes it easy for engineers to accurately collect customer data by translating data schemas into production-ready code.
How we reduced Standard Audience calculation time by 80%
mParticle’s Audiences feature allows customers to define user segments and forward them directly to downstream tools for activation. Thanks to our engineering team’s recent project to optimize one of our audience products, mParticle customers will be able to engage high-value customers with even greater efficiency.
The engineer’s guide to working with marketers
While developers don’t readily admit it, working with marketers can sometimes be a pain. But when engineers and marketers collaborate effectively on data, amazing things can happen. We’ve assembled this guide to provide engineers with a roadmap for effectively working with their colleagues in marketing and making friends out of frenemies.
Harveer Singh leads Western Union’s digital transformation with data
Chief Data Architect Harveer Singh creates the data and tech roadmap that will help the iconic company emerge as a leader in fintech.
Developer Deep Dive: mParticle Sample Apps
Recently, a cross-functional squad of engineers, PMs, and designers at mParticle assembled to produce a labor of love: sample applications. These sample apps help developers implement our SDK in Web, iOS, and Android environments and understand the value of mParticle. Here’s the nuts-and-bolts story behind what they built, the technical choices they made while building these apps, and what they learned along the way.
Implement a CDP with ease using mParticle's sample applications
Developers rarely look forward to integrating third-party systems into their projects. The learning curve to understand vendor platforms is time-consuming and diverts attention away from more interesting product initiatives. Our sample applications address this problem by helping developers understand how mParticle works on various platforms and providing production-quality, copy/paste-ready code to implement our CDP with ease.
How we cut AWS costs by 80% while usage increased 20%
How do you replace a tire while driving on the highway? This is what it felt like to re-architect the engine behind one of our most heavily used and relied upon products, the mParticle Audience Manager. Here's how we optimized this critical piece of our architecture and positioned it to play a key role in the next phase of our growth, all while customer adoption and usage steadily increased.
Data quality vital signs: Five methods for evaluating the health of your data
It’s simple: Bad data quality leads to bad business outcomes. What’s not so simple is knowing whether the data at your disposal is truly accurate and reliable. This article highlights metrics and processes you can use to quickly evaluate the health of your data, no matter where your company falls on the data maturity curve.
How to choose the right foundation for your data stack
If you’re relying on downstream activation tools to combine data events into profiles, don’t. You’ll end up with fragmented and redundant datasets across systems. Enriching each data point before it is forwarded downstream will prevent this problem, but not all customer data infrastructure solutions deliver this capability.
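The enrich-before-forwarding pattern mentioned above can be illustrated with a small sketch. All names here (`profiles`, `enrich`, `forward`) are hypothetical: the point is that the event is joined with profile attributes once, upstream, so every destination receives the same complete record instead of assembling profiles itself.

```python
# Hypothetical profile store keyed by user ID.
profiles = {"u1": {"plan": "pro", "lifetime_value": 420.0}}

def enrich(event: dict) -> dict:
    """Merge profile attributes into the raw event before forwarding."""
    profile = profiles.get(event["user_id"], {})
    return {**event, **profile}

def forward(event: dict, destinations: list) -> None:
    enriched = enrich(event)   # enrich once, upstream
    for send in destinations:
        send(enriched)         # every destination gets the identical record

received = []
forward({"user_id": "u1", "event_name": "login"}, [received.append])
```

Because enrichment happens before fan-out, the downstream tools can stay simple consumers, and no two systems end up with conflicting versions of the same profile.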
Clear costs: How we used data aggregation to understand our Cost of Goods Sold
Understanding how costs are allocated across individual customers and services is an important metric for us to track. However, the major cloud providers do not readily provide this information, so our data engineering team had to get creative to obtain it. This case study describes how we built a custom library that combines data housed in disparate sources to acquire the insights we needed.
Smartype Hubs: Keeping developers in sync with your Data Plan
Implementing tracking code based on an outdated version of your organization's data plan can result in time-consuming debugging, dirty data pipelines, and misguided decisions. mParticle's Smartype Hubs helps your engineering team avoid these problems by importing the latest version of your Data Plan into your codebase using GitHub Actions.
A simpler way to implement and maintain video analytics code
Video analytics are essential to maximizing the impact and value of video content. For technical teams, however, capturing this data can often be more challenging than collecting other user events. In this article, we’ll show how mParticle’s Media SDK simplifies this process for engineering teams, and provides data stakeholders with actionable user insights.
Prevent data quality issues with these six habits of highly effective data
Maintaining data quality across an organization can feel like a daunting task, especially when your data comes from a myriad of devices and sources. While there is no one magic solution, adopting these six habits will put your organization on the path to consistently reaping the benefits of high quality data.
How to implement an mParticle data plan in an eCommerce app
This sample application allows you to see mParticle data events and attributes displayed in an eCommerce UI as you perform them, and experiment with implementing an mParticle data plan yourself.
What does good data validation look like?
Data engineers should add data validation processes in various stages throughout ETL pipelines to ensure that data remains accurate and consistent throughout its lifecycle. This article outlines strategies and best practices for doing this effectively.
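As a concrete illustration of the kind of check described above, here is a minimal validation function for an event record. The field names and the timestamp sanity range are assumptions for the example; the article itself covers where in the pipeline checks like these belong.

```python
def validate_event(event: dict) -> list[str]:
    """Return a list of validation errors for one event record (empty = valid)."""
    errors = []
    # Schema check: required fields must be present.
    for field in ("user_id", "event_name", "timestamp"):
        if field not in event:
            errors.append(f"missing field: {field}")
    # Type and range checks on fields that are present.
    if not isinstance(event.get("user_id", ""), str):
        errors.append("user_id must be a string")
    ts = event.get("timestamp")
    if ts is not None and not (0 < ts < 4_102_444_800):  # sanity range (pre-2100)
        errors.append("timestamp out of range")
    return errors

good = {"user_id": "u1", "event_name": "purchase", "timestamp": 1_700_000_000}
bad = {"event_name": "purchase", "timestamp": -5}
```

Running such a function at each ETL stage, and routing failing records to a quarantine table rather than dropping them silently, keeps bad data out of downstream systems while preserving it for debugging.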
Should you be buying or building your data pipelines?
With demand for data increasing across the business, data engineers are inundated with requests for new data pipelines. With few cycles to spare, engineers are often forced to decide between implementing third-party solutions and building custom pipelines in-house. This article discusses when it makes sense to buy, and when it makes sense to build.
Three threats to customer data quality (and how to avoid them)
In this video, Jodi Bernardini, a Senior Solutions Consultant at mParticle, lays out three major threats standing in the way of customer data quality, and offers advice on how organizations can address them.
Ask an mParticle Solutions Consultant: What is data quality?
In this video, Andy Wong, a senior leader on mParticle’s Solutions Consulting team, discusses what data quality means, why it is important to prioritize, and the benefits of creating a centralized data planning team to oversee data quality.
When to use a data lake vs data warehouse
Giving teams access to high-quality data is important for business success. The way this data is stored affects cost, scalability, data availability, and more. This article breaks down the differences between data lakes and data warehouses, and provides tips on how to decide which to use for data storage.
How Reverb optimized their data workflows at scale and gave users the rockstar treatment
With mParticle at the heart of their data stack, the world’s largest online music marketplace said goodbye to burdensome ETL pipelines, slashed their data maintenance workload, and unlocked new opportunities to build data-driven features into their product.
How to assemble a cross-functional data quality team
Smash your data silos and improve data quality across your organization by assembling a cross-functional team to own data planning.
What is data integrity and why does it matter for customer data?
Integrity is a good quality. Just as you want the people around you to have integrity, you want the data on which you base strategic decisions to have integrity as well. That sounds good, but what does it mean for data to have integrity, and why is this so important? In this post, we’ll explore this broad and nuanced concept, define what it means in the context of customer data, and learn a strategy to ensure your customer data maintains high integrity throughout its lifecycle.