Want Better Marketing Activation? Prioritise Experimentation
Ted Sfikas, Field CTO, Amplitude, explains why marketers should prioritise experimentation to reduce wasted spend, improve ROI, and turn first-party data into actionable insights through disciplined test-and-learn approaches.
Marketers face a paradox: we can analyse more data than ever, but activating that data (for personalisation, advertising, or product changes) often leads to waste and uncertainty. There’s a clear opportunity in the data, yet efforts to seize it seem to backfire again and again.
That’s because as a first-party data set grows, so does the rate at which it changes. Customer data is never static, so marketers should continually test it for validity and fidelity.
To do that, marketers must treat experimentation as a necessary step before turning customer signals into large-scale activations. This disciplined “test-and-learn” approach mitigates risk, proves value, and lowers operational costs.
Why Experiment First? The Strategic Argument
Observational analytics will tell you what happened, and that matters. But it’s only part of the story: randomised, controlled experiments tell you what caused it.
Running smaller, well-designed tests before activating data into adtech or personalisation campaigns prevents two expensive mistakes: false attribution and premature scaling.
Bain’s research shows experimentation has helped businesses achieve marketing ROI increases of 20% or more and make better allocation decisions.
Their studies make it clear that experimentation should be thought of as a risk-management and discovery discipline. These experiments reveal what actually moves the needle before you scale and commit budget.
Frameworks & When to Use Them
Modern marketing teams can utilise a number of different frameworks for their experimentation practice. These are the most popular:
- A/B (split) tests:
This involves splitting traffic between two (or more) fixed variants and comparing outcomes on a chosen metric such as conversion or click-through rate. A simple example is an email campaign that tests two subject lines, sending each to half of the intended audience. This test is best for isolated changes (subject lines, landing page headlines, CTA text); a minimal significance-check sketch appears just after this list.
- Multivariate tests (MVT):
Extends A/B by testing multiple variables or combinations at once. For example, testing 3 headlines × 2 images yields 6 combinations, all run in parallel. MVT can accelerate discovery but needs large sample sizes (each combination gets fewer users). This test is good for comparing multiple elements and interactions, and is more relevant for digital properties that have high traffic.
- Holdout / control groups:
This is the simplest approach to measuring incrementality. The test withholds exposure from a randomly selected control group to estimate baseline behaviour, then measures the uplift in the exposed group. It gives marketers a straightforward campaign-level causal measurement.
The approach measures causal lift from a channel, and the “control” can be defined by geography, a user attribute, or any other value in a first-party audience. It is widely used to validate paid media campaigns suspected of performing differently at different times or under different conditions.
- Adaptive multi-armed bandit:
Fully reliant on modern automation, the multi-armed bandit approach serves several test variants as different experiences on the website or app, then dynamically shifts real-time traffic toward the better-performing variants.
Because allocation happens automatically, it is a proven way to scale testing without constant manual analysis; a bandit allocation sketch follows the framework-selection rule below.
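To make the A/B bullet above concrete, here is a minimal sketch of a two-proportion z-test on a subject-line split, written with only the Python standard library. The function name and the conversion counts are invented for illustration, and a real programme would also plan sample size before launch.
```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return p_b - p_a, z, p_value

# Hypothetical subject-line test: variant B converts at 5.4% vs. 4.8% for A.
lift, z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"absolute lift={lift:.4f}, z={z:.2f}, p={p:.3f}")
```
If the p-value clears the threshold you pre-registered for your primary KPI, the variant is a candidate to roll out; if not, keep the control.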
To understand when to use each framework, here’s a practical rule: use A/B tests, MVT, and multi-armed bandits for experience-level optimisation; use holdouts and incrementality tests for channel or spend decisions.
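And to make the adaptive bandit concrete, here is a minimal epsilon-greedy allocation sketch. It is one common bandit strategy among several (Thompson sampling is another), not a description of any particular vendor’s implementation, and the variant names and conversion rates are invented.
```python
import random

def epsilon_greedy_bandit(variants, rounds=10_000, epsilon=0.1):
    """Send most traffic to the best-performing variant so far,
    while still exploring the others with probability `epsilon`."""
    shown = {v: 0 for v in variants}
    converted = {v: 0 for v in variants}
    for _ in range(rounds):
        untried = [v for v in variants if shown[v] == 0]
        if untried:
            choice = untried[0]                           # try every arm once first
        elif random.random() < epsilon:
            choice = random.choice(list(variants))        # explore
        else:
            choice = max(variants, key=lambda v: converted[v] / shown[v])  # exploit
        shown[choice] += 1
        if random.random() < variants[choice]:            # simulated conversion
            converted[choice] += 1
    return {v: {"visitors": shown[v], "rate": converted[v] / shown[v]} for v in variants}

# Hypothetical true conversion rates for three landing-page creatives.
print(epsilon_greedy_bandit({"hero_a": 0.040, "hero_b": 0.055, "hero_c": 0.030}))
```
Run it a few times and you will see most traffic drift to the strongest creative while the weaker ones still receive enough exposure to be monitored.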
New Tactics for Reliable Experiments
Most marketers know to start each test with a clear hypothesis and primary KPI. But the tools they use to run experiments are often siloed and demand specialised labour. Those obstacles slow testing down, and slow testing means lost revenue.
At Amplitude, we instrument each event definition once and reuse it across analytics, experimentation, and session replay validation, so the whole organisation benefits from the same data.
This instrument-once, use-everywhere approach unlocks major productivity gains, for instance when teams layer multiple experiments together. A team might start with an A/B test to surface the best UX creative, then layer a holdout test on top to measure incrementality gains across multiple channels.
Speaking of UX, I also can’t say enough about testing consent. The EU requires a lawful basis for processing experimental data (GDPR/EDPB) with consent or legitimate interest documented. California and many other states demand transparent notices and opt-outs when data is used for experiments tied to targeting.
Best practice here is to minimise data collection and pseudonymise user IDs, and to test consent-banner variants to improve opt-in rates. Your customers have “banner fatigue.” Brands that find the right balance among banner aesthetics, legal language, and the subsequent journey flow will undoubtedly gain more authentic users to market to. That outcome should be a North Star goal for every business.
How Do We Convince People to Do This?
When arguing for experimentation to stakeholders, speak their language. Use their business terms and reference team-specific KPIs. Track these core outcomes:
- Conversion rate uplift (relative % change):
Classic for experience tests (A/B, MVT). Use a pre-registered primary KPI and the test’s confidence intervals before claiming significance.
- Incremental conversions / incremental revenue:
Incremental conversions are the difference in conversions between the test and control groups (the holdout method); incremental revenue can be measured the same way. This calculation gives you the causal uplift attributable to your activation; a worked sketch of the arithmetic appears after this list.
Growthonomics found several cases in which testing revealed that a large share of conversions from retargeting campaigns would have happened anyway. After reallocating budget to channels with stronger incremental returns, the advertiser improved ROI by double digits and reduced wasted spend.
- Return on ad spend (ROAS) and cost per incremental acquisition (CPIA):
Compute ROAS using only incremental revenue (not total attributed revenue) to avoid overestimating performance. Channels that consistently prove their incremental value don’t need to be retested as frequently, which saves the organisation time and money.
- LTV through retention or acquisition:
These tests show marketers which type of campaign to run for a given audience. It is common to find that a small short-term uplift from acquisition comes at the expense of existing high-LTV customers, and the resulting drop in retention outweighs the modest acquisition gains.
Testing finds the right balance. Tie experiments to downstream KPIs where possible (Amplitude’s approach emphasises linking experiment wins to product and revenue impact).
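As promised above, here is a minimal sketch of the incrementality arithmetic behind these KPIs: incremental conversions, incremental revenue, incremental ROAS, and cost per incremental acquisition. The function and every figure in the example are hypothetical.
```python
def incrementality_report(test_users, test_conversions, test_revenue,
                          control_users, control_conversions, control_revenue,
                          ad_spend):
    """Compute uplift metrics from a holdout (test vs. control) experiment."""
    test_rate = test_conversions / test_users
    control_rate = control_conversions / control_users
    # Conversions and revenue we would have seen anyway, scaled to the test group size.
    expected_conversions = control_rate * test_users
    expected_revenue = (control_revenue / control_users) * test_users
    incr_conversions = test_conversions - expected_conversions
    incr_revenue = test_revenue - expected_revenue
    return {
        "relative_uplift": (test_rate - control_rate) / control_rate,
        "incremental_conversions": incr_conversions,
        "incremental_revenue": incr_revenue,
        "incremental_roas": incr_revenue / ad_spend,              # not total attributed revenue
        "cost_per_incremental_acquisition": ad_spend / incr_conversions,
    }

# Hypothetical retargeting campaign with a 10% holdout.
print(incrementality_report(test_users=90_000, test_conversions=2_700, test_revenue=270_000,
                            control_users=10_000, control_conversions=260, control_revenue=26_000,
                            ad_spend=50_000))
```
Note how the control group’s rate is used to estimate what the exposed group would have done anyway; only the remainder counts toward ROAS.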
How Do I Get Started?
- Days 0–30:
Identify the top 4–6 high-spend or high-risk activations and prioritise them by impact, confidence, and effort. Score each factor from 1 to 10, rank the activations by combined score, and turn the top of the list into a solid launch plan; a small scoring sketch follows this plan.
- Days 31–60:
Launch incrementality holdout tests for two channels and A/B tests for two experience elements. This is where Amplitude really differentiates: analysis normally takes far too long, and overworked marketers get much-needed relief when insights can be turned into action with a right-click in a single interface.
- Days 61–90:
Analyse the resulting lift from each campaign, reallocate budget, and adopt winning variants using data-driven evidence. At Amplitude, we have built-in messaging and alerts to keep teams aligned at all times. We also have a world-class documentation section where audit trails and success stories are shared publicly.
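Here is the small scoring sketch referenced in Days 0–30, assuming a simple impact × confidence ÷ effort ranking; the exact weighting is a team choice, and the candidate activations listed are invented.
```python
# Hypothetical candidate activations, each scored 1-10 on impact, confidence, and effort.
candidates = [
    {"name": "retargeting holdout",     "impact": 8, "confidence": 7, "effort": 4},
    {"name": "email subject-line A/B",  "impact": 5, "confidence": 9, "effort": 2},
    {"name": "onboarding MVT",          "impact": 7, "confidence": 5, "effort": 8},
    {"name": "paid-social geo holdout", "impact": 9, "confidence": 6, "effort": 6},
]

def priority(c):
    # Higher impact and confidence raise the score; higher effort lowers it.
    return c["impact"] * c["confidence"] / c["effort"]

for c in sorted(candidates, key=priority, reverse=True):
    print(f'{c["name"]:26} score={priority(c):.1f}')
```
The ranked output becomes the launch order for the first 30 days.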
That Sounds Like A Lot Of Work! Ugh!
We know. It is.
Here’s the good news: Amplitude makes that complexity disappear.
With the introduction of AI Agents, our marketing clients now design their experimentation plans in minutes instead of days. The AI responds to marketing prompts and walks users through setting up a variety of tests that act on the insights being collected. It has revolutionised the experimentation practice!
What To Walk Away With
If your objective is to reduce wasted spend, increase ROAS, and build repeatable advantages, make experimentation the first activation step — not an afterthought. Experiments provide causal clarity, protect against privacy pitfalls, and increase budget efficiency.
Marketers (both B2B and B2C) who prioritise test-and-learn cultures will see higher conversions, lower costs, and more innovative growth opportunities. At Amplitude, we understand that, and we’ve opened the floodgates of productivity by combining smart AI foundations with this discipline.