Building a feature and shipping it out to your users is a great feeling for any product or engineering team. But the initial release is only the beginning of the product life cycle.
Great features evolve over time in response to customer engagement, product direction, and technological capabilities.
To make sure that you’re developing your product in the right direction, it’s important to have a system in place that lets you understand how customers engage with each feature, track north star metrics, and experiment with product iterations.
This use case walks through how you can use Split and mParticle to design A/B tests and ensure that your features are delivering more value over time.
Step 1: Track performance and behavioral data with mParticle
mParticle’s native SDKs can be implemented in client-side applications across mobile, web, OTT, and more to track a range of customer data, and mParticle APIs can additionally ingest data from server-side environments. You can monitor data as it’s collected with Live Stream, and create a Data Plan that establishes all of the data points being collected across channels.
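As a rough sketch, client-side event collection follows a log-and-batch pattern. The stand-in tracker below is hypothetical (it is not the mParticle SDK; the event names, types, and attributes are illustrative), but it mirrors the shape of the behavioral data a native SDK collects and uploads.

```javascript
// Hypothetical stand-in for a client-side event tracker, illustrating the
// kind of payloads an SDK like mParticle's collects. All names are illustrative.
class EventTracker {
  constructor() {
    this.queue = []; // events waiting to be flushed to the collection API
  }

  // Record an event with a name, a type, and free-form attributes.
  logEvent(name, type, attributes = {}) {
    this.queue.push({
      name,
      type, // e.g. "custom_event", "screen_view", "commerce_event"
      attributes,
      timestamp: Date.now(),
    });
  }

  // Return the queued events and clear the queue. A real SDK would
  // batch-upload these to its collection endpoint instead.
  flush() {
    const batch = this.queue;
    this.queue = [];
    return batch;
  }
}

const tracker = new EventTracker();
tracker.logEvent('checkout_completed', 'commerce_event', { total: 42.5 });
tracker.logEvent('search_results_viewed', 'screen_view', { query: 'hotels' });
console.log(tracker.flush().length); // → 2
```

Batching like this is what makes a tool such as Live Stream useful: you can watch each flushed batch arrive and confirm the data points match your Data Plan.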
Step 2: Forward data from mParticle to Split
mParticle’s server-side integrations make it possible to connect any data collected in your mParticle workspace to the downstream SaaS tools of your choice. For this use case, you can activate mParticle’s Split Event integration to send performance and behavioral data to your Split instance in real time without any custom development. Supported event types include Commerce Events, Custom Events, Session Starts, Session Ends, and Screen Views. If there are any data points you don’t want to send to Split, you can exclude them from being forwarded with Data Filtering.
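The forwarding step can be pictured as a filter over the event stream. In mParticle itself, Data Filtering is configured in the UI rather than in code, so the sketch below is purely illustrative: the exclusion list and event shapes are hypothetical, and the type names simply echo the supported categories listed above.

```javascript
// Sketch of server-side forwarding with data filtering, mirroring the flow of
// the Split Event integration. Exclusion list and event shapes are hypothetical;
// in mParticle, filtering is configured in the workspace UI, not in code.
const SUPPORTED_TYPES = new Set([
  'commerce_event',
  'custom_event',
  'session_start',
  'session_end',
  'screen_view',
]);
const EXCLUDED_EVENTS = new Set(['internal_debug_ping']); // data points held back

// Keep only supported event types that are not explicitly excluded.
function filterForForwarding(events) {
  return events.filter(
    (e) => SUPPORTED_TYPES.has(e.type) && !EXCLUDED_EVENTS.has(e.name)
  );
}

const incoming = [
  { name: 'checkout_completed', type: 'commerce_event' },
  { name: 'internal_debug_ping', type: 'custom_event' }, // excluded
  { name: 'app_opened', type: 'session_start' },
];
console.log(filterForForwarding(incoming).map((e) => e.name));
// → [ 'checkout_completed', 'app_opened' ]
```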
Step 3: Target features in Split
Once data is arriving in Split, Split’s attribution and exclusion capabilities allow you to measure the influence of a specific feature on engagement. To design your A/B test, identify the feature you would like to experiment with and assign key metrics for the goals you would like to achieve. Example metrics might be average page load time or bookings per user.
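To make the metric side concrete, here is a hypothetical computation of one of those example metrics, average page load time, broken down by treatment. This is not Split’s attribution engine, just a minimal sketch of the aggregation it performs over the forwarded events; the field names are assumptions.

```javascript
// Hypothetical sketch: average page load time per treatment ("on" vs. "off"),
// the kind of per-variant key metric an experiment would report.
// Event fields (treatment, loadTimeMs) are illustrative assumptions.
function avgLoadTimeByTreatment(events) {
  const sums = {}; // treatment -> { total, count }
  for (const e of events) {
    const s = (sums[e.treatment] ??= { total: 0, count: 0 });
    s.total += e.loadTimeMs;
    s.count += 1;
  }
  const result = {};
  for (const [treatment, s] of Object.entries(sums)) {
    result[treatment] = s.total / s.count;
  }
  return result;
}

const pageViews = [
  { treatment: 'on', loadTimeMs: 180 },
  { treatment: 'on', loadTimeMs: 220 },
  { treatment: 'off', loadTimeMs: 300 },
];
console.log(avgLoadTimeByTreatment(pageViews)); // → { on: 200, off: 300 }
```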
Step 4: Iterate on your feature
Use Split feature flags to test feature iterations and track their impact on your key metrics. As user engagement data is ingested into mParticle and forwarded to Split, you’ll be able to monitor the impact of each new release. You can also establish guardrail metrics that reveal when a new feature is performing worse than the original.
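The guardrail idea can be sketched as a simple comparison against the control, with a tolerance for acceptable variation. In Split, guardrail metrics are configured per experiment rather than hand-coded; the function, thresholds, and metric values below are hypothetical.

```javascript
// Hypothetical guardrail check: flag a release when a key metric regresses
// past a tolerance relative to the control. Thresholds are illustrative.
function guardrailViolated(controlValue, treatmentValue, { higherIsBetter, tolerance }) {
  // tolerance is the allowed relative regression, e.g. 0.05 for 5%
  if (higherIsBetter) {
    return treatmentValue < controlValue * (1 - tolerance);
  }
  return treatmentValue > controlValue * (1 + tolerance);
}

// Bookings per user should not drop more than 5% in the new variant...
console.log(guardrailViolated(0.30, 0.26, { higherIsBetter: true, tolerance: 0.05 })); // → true
// ...while page load time should not rise more than 10%.
console.log(guardrailViolated(250, 260, { higherIsBetter: false, tolerance: 0.10 })); // → false
```

When a guardrail fires, the feature flag gives you an immediate remedy: ramp the new variant back down while you investigate.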