Explained: Incrementality

TLDR: Attribution is simply the math that goes into distributing credit for conversions across the various marketing channels. The incrementality approach asks a different question: how many of those conversions would we have gotten organically anyway?

Earlier in March, London-based Machine Advertising announced its always-on incremental measurement solution, a plug-and-play AI platform. It is aimed at marketers focused on user acquisition, offering data insights that can lower the cost per acquisition and thereby optimise budgets.

It claims to deliver an 18 per cent lift in user acquisition and to take just 48 hours to produce results.

There is no dearth of marketing analytics solution providers that measure attribution; AdRoll, Adjust, Measured, Smartly.io are just a few. These solutions are part of the marketers' arsenal to understand where dollars spent count, whom to retarget and when the law of diminishing returns kicks in.

But data can be daunting. Attribution is simply the math that goes into distributing credit for conversions across the various marketing channels. Tools like Adobe Analytics or Google Analytics can show that there were, say, four interactions with owned, earned and paid media channels before a last-click conversion from paid search, so we distribute the credit for that conversion across all of those interactions.
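
To make that concrete, here is a minimal sketch of linear (equal-credit) attribution over a hypothetical four-touchpoint journey; the touchpoints and conversion value are illustrative assumptions, not output from any analytics tool.

```python
# Minimal sketch: linear (equal-credit) attribution across hypothetical touchpoints.
# The journey and conversion value below are illustrative, not real analytics data.

journey = [
    "owned: newsletter email",
    "earned: review blog post",
    "paid: display ad",
    "paid: search (last click)",
]
conversion_value = 120.0  # value of the conversion (illustrative order value)

# Linear attribution: every touchpoint gets an equal share of the credit.
credit_per_touchpoint = conversion_value / len(journey)

for touchpoint in journey:
    print(f"{touchpoint}: {credit_per_touchpoint:.2f}")
```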

The incrementality approach asks a different question: how many of those conversions would we have gotten organically anyway?

Marketing borrows the concept of incrementality from the method used to test drugs and vaccines: Randomised Controlled Trials (RCTs). Here, half the sample is administered the drug while the other half is given a placebo, and the difference in outcomes between the two halves is the drug's effect. In marketing, incrementality tests perform a similar function.

The lines between organic and paid conversions can be blurry, but incrementality works in a scientific manner. That means there are rules, and sometimes the rules are written in Latin: cum hoc ergo propter hoc, the fallacy of assuming that correlation implies causation. Say you run an ad campaign on Facebook for your popsicle brand and see an instant lift in sales. Most marketers would count that as a direct win. But how much of the credit goes to the fact that schools were out, it was a holiday weekend and there was a heatwave? All of these factors could easily have contributed to the rise in sales. They are confounding factors: variables that affect both the treatment and the outcome.

Incrementality testing establishes data-proven causality for campaigns and gives marketers a decisive advantage. It measures the increase in conversions that ads drive across desired outcomes: raising awareness, driving app installs, or direct conversions such as a payment or subscription. Lift is defined as the increase above native demand. By measuring the incremental lift that each marketing activity has on the target audience, marketers can decide which ads, channels and campaigns contribute to the bottom line, and which offer the highest ROI.

ROI has been an eternal concern, but in the wake of Apple's App Tracking Transparency framework and the death of the third-party cookie, it is a challenge that is more pertinent than ever. Without user-level data, the insights that attribution models can offer are limited. Further, marketing strategies have grown in complexity: multi-device, cross-channel communication multiplies the permutations of possible touchpoints and their corresponding impact.

How then do you determine the real value of your campaign? Incrementality measurement uses aggregated data and econometric models.

Traditionally, incrementality tests consisted of two groups: test and control. The test group is exposed to ads, while in the control group ads are withheld or replaced with ghost ads. The difference in sales between the two groups is the incremental lift. This approach often produced biased or inconclusive results.
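
As a minimal sketch of that calculation, assuming hypothetical conversion counts for equally sized test and control groups (the numbers are invented for illustration), the incremental lift works out as follows:

```python
# Minimal sketch: incremental lift from a hypothetical test/control experiment.
# All numbers are illustrative assumptions, not real campaign data.

test_users, test_conversions = 50_000, 1_200         # group exposed to ads
control_users, control_conversions = 50_000, 1_000   # ads withheld (or ghost ads)

test_rate = test_conversions / test_users
control_rate = control_conversions / control_users   # proxy for "native demand"

incremental_conversions = (test_rate - control_rate) * test_users
lift = (test_rate - control_rate) / control_rate

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Incremental lift: {lift:.1%}")  # 20.0% in this made-up example
```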

Today, incrementality testing via machine learning simulates various scenarios to isolate conversion data. Structural Causal Models (SCMs) consist of two parts: a graph, which visualises causal connections, and equations, which express the details of those connections. SCMs use a special kind of graph, called a Directed Acyclic Graph (DAG), in which all edges are directed and no cycles exist. The underlying framework is causal inference, which aims to answer causal questions rather than purely statistical ones.
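
As a minimal sketch, take the simple chain Z → X → Y used in the next paragraph and, purely as an assumption for illustration, read it as seasonality → ad spend → sales; the SCM is then one structural equation per node plus independent noise:

```python
# Minimal sketch of a Structural Causal Model over the DAG Z -> X -> Y.
# The variable roles (Z: seasonality, X: ad spend, Y: sales) and the equations
# are illustrative assumptions, not a model taken from any vendor.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One structural equation per node, each with its own independent noise term.
Z = rng.normal(0, 1, n)            # exogenous cause (e.g. seasonality)
X = 2.0 * Z + rng.normal(0, 1, n)  # X "listens to" Z
Y = 1.5 * X + rng.normal(0, 1, n)  # Y "listens to" X

# Observationally, every pair is correlated; the graph says which paths are causal.
print("corr(X, Y):", round(float(np.corrcoef(X, Y)[0, 1]), 2))
print("corr(Z, Y):", round(float(np.corrcoef(Z, Y)[0, 1]), 2))
```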

Using the do-operator, marketers can simulate countless scenarios to eliminate seasonal factors that may have contributed to a lift. The do-operator is a mathematical representation of a physical intervention. If we start with the model Z → X → Y, we can simulate an intervention on X by deleting all the incoming arrows into X and manually setting X to some value x_0.
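
Continuing the same hypothetical SCM, here is a sketch of do(X = x_0): the arrow from Z into X is severed by replacing X's equation with a constant, while Y's mechanism is left untouched.

```python
# Minimal sketch of the do-operator on the hypothetical SCM Z -> X -> Y above.
# do(X = x_0): delete the incoming arrow Z -> X and fix X to a constant.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x_0 = 3.0  # the value we force X (e.g. ad spend) to take

Z = rng.normal(0, 1, n)            # Z is still generated by its own mechanism
X = np.full(n, x_0)                # intervention: X no longer listens to Z
Y = 1.5 * X + rng.normal(0, 1, n)  # Y's mechanism is unchanged

# Expected outcome under the intervention, E[Y | do(X = x_0)], is about 1.5 * x_0 here.
print("mean of Y under do(X = 3.0):", round(float(Y.mean()), 2))
```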

Armed with retrospective data on marketing spend and sales, we can simulate what would happen if we were to increase marketing spend and assess whether the change in sales is worth it.
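
One way to sketch that, assuming the spend and sales series come from historical records and that a simple linear relationship is good enough for illustration, is to fit a regression and read off the projected change in sales for a proposed increase in spend:

```python
# Minimal sketch: project the sales impact of extra spend from retrospective data.
# The weekly spend/sales figures and the linear model are illustrative assumptions;
# a production model would also control for seasonality and other confounders.
import numpy as np

spend = np.array([10, 12, 15, 18, 20, 22, 25, 28], dtype=float)    # weekly spend (thousands)
sales = np.array([54, 60, 71, 80, 88, 93, 104, 113], dtype=float)  # weekly sales (thousands)

# Ordinary least squares fit: sales ~= slope * spend + intercept.
slope, intercept = np.polyfit(spend, sales, deg=1)

extra_spend = 5.0  # proposed weekly increase, in the same units
projected_extra_sales = slope * extra_spend
print(f"Estimated sales per unit of spend: {slope:.2f}")
print(f"Projected extra sales for +{extra_spend:.0f}k spend: {projected_extra_sales:.1f}k")
```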

Models that capture both correlation and causality identify the relationships between different entities, variables and concepts.

The incrementality approach does not eliminate the need for ROAS (return on ad spend) measurement. Attribution data informs incrementality and can make the model smarter over time.

The insights drawn from incremental models can feed marketing mix modeling (MMM) and vice versa. For backward-looking analysis to consistently inform forward-looking predictions, it is critical to ensure that human bias and channel silos don't infiltrate the data and skew the results. That means understanding the conditional dependencies across all the random variables involved so the model's coefficients can be identified. With consumer behaviour, market dynamics and communication channels evolving, marketers need to evolve the tools they use to stay ahead of the competition.
