71% of advertisers now rank incrementality as their #1 retail media KPI. Here's how to measure what advertising actually causes.
Here's the uncomfortable truth about advertising measurement: most of what platforms report as "conversions" would have happened anyway.
Incrementality testing asks the fundamental question: what additional value did advertising create beyond what would have occurred naturally? Google just lowered its testing threshold from $100,000 to $5,000 to make these tests accessible to more advertisers. AI is transforming how we measure true advertising impact.
The Measurement Problem
Traditional attribution models have fundamental flaws:
- They measure correlation, not causation. Someone clicked an ad and then bought. Did the ad cause the purchase?
- They miss touchpoints. Cross-device journeys, offline influence: attribution models see fragments.
- They're platform-biased. Google's attribution credits Google. Meta's credits Meta.
- They overcount retargeting. High "conversion rates" often reflect audience quality, not ad effectiveness.
Example: You spend $100,000 on Amazon Ads and generate $500,000 in attributed sales (5.0 ROAS). But what if $400,000 would have happened organically? Your true incremental ROAS might be 1.0.
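The arithmetic behind this example, as a quick sketch:

```python
def incremental_roas(ad_spend: float, attributed_sales: float,
                     baseline_sales: float) -> float:
    """True ROAS counts only sales that would not have happened organically."""
    incremental_sales = attributed_sales - baseline_sales
    return incremental_sales / ad_spend

# The example above: $100k spend, $500k attributed, $400k organic baseline.
print(incremental_roas(100_000, 500_000, 400_000))  # → 1.0
```

The platform reports 5.0 ROAS; subtracting the organic baseline tells a very different story.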
Research shows brands ignoring incrementality overinvest in channels by up to 25%, while precise measurement enables 20-30% ROAS improvements through reallocation.
How Incrementality Testing Works
- Test Group: Exposed to your advertising campaign.
- Control Group: Similar audience NOT exposed to advertising.
- Measurement: Compare outcomes between groups.
- Calculation: Incremental Lift = (Test Group Results - Control Group Results) / Control Group Results
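In practice the calculation compares conversion *rates* rather than raw counts, so unequal group sizes don't bias the result. A minimal sketch with hypothetical numbers:

```python
def incremental_lift(test_conv: int, test_size: int,
                     control_conv: int, control_size: int) -> float:
    """Incremental lift = (test rate - control rate) / control rate."""
    test_rate = test_conv / test_size
    control_rate = control_conv / control_size
    return (test_rate - control_rate) / control_rate

# Hypothetical test: 1,200 of 10,000 exposed users converted,
# vs. 1,000 of 10,000 held-out users → 20% incremental lift.
print(f"{incremental_lift(1_200, 10_000, 1_000, 10_000):.0%}")  # → 20%
```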
Testing Methodologies
- Conversion Lift Studies: Platform-native tests that divide audiences into test and control groups.
- Geo Lift Tests: Compare geographic regions where advertising runs versus regions where it doesn't.
- Ghost Ads / PSA Tests: Show control group a placebo ad instead of your actual ad.
- Holdout Groups: Exclude a percentage of your audience from all advertising.
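One common way to implement a holdout group is deterministic hashing, so the same user always lands in the same group across sessions and devices. A sketch (the salt and holdout percentage here are placeholders, not any platform's defaults):

```python
import hashlib

def assign_group(user_id: str, holdout_pct: float = 0.10,
                 salt: str = "q3-lift-test") -> str:
    """Hash the user ID into a uniform [0, 1] bucket; the bottom
    holdout_pct of buckets never see ads (the control group)."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "control" if bucket < holdout_pct else "test"
```

Changing the salt per experiment re-randomizes assignments, so one campaign's holdout doesn't contaminate the next test.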
AI's Role in Incrementality Testing
Bayesian Statistical Methods
Google's 2025 reduction of testing thresholds was enabled by Bayesian approaches. AI-powered Bayesian methods enable:
- Shorter test periods (minimum 7 days vs. weeks/months)
- Lower spend thresholds ($5,000 vs. $100,000)
- Feasibility ratings predicting test success probability
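A toy version of the idea: place Beta posteriors over each group's conversion rate and estimate the probability that the ads produced any lift at all. This illustrates the statistics only, not Google's implementation:

```python
import random

def bayesian_lift_probability(test_conv: int, test_size: int,
                              control_conv: int, control_size: int,
                              draws: int = 20_000, seed: int = 7) -> float:
    """Monte Carlo estimate of P(test rate > control rate) under
    independent Beta(1 + conversions, 1 + non-conversions) posteriors
    (uniform priors)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        t = rng.betavariate(1 + test_conv, 1 + test_size - test_conv)
        c = rng.betavariate(1 + control_conv, 1 + control_size - control_conv)
        wins += t > c
    return wins / draws
```

Because the output is a direct probability ("94% chance the ads worked") rather than a pass/fail p-value, tests can be called earlier and with less data, which is what makes the lower thresholds viable.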
Marketing Mix Modeling (MMM)
AI has revolutionized MMM. Google Meridian (January 2025) provides open-source MMM using Bayesian causal inference. AI-enhanced MMM:
- Processes more variables (channels, tactics, external factors)
- Updates continuously rather than quarterly
- Integrates with incrementality tests for calibration
- Forecasts future scenarios for budget planning
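One concrete ingredient these models capture is carryover: advertising this week keeps driving sales in later weeks. A minimal geometric-adstock sketch (MMM tools like Meridian fit richer, parameterized versions of this transform alongside saturation curves):

```python
def adstock(weekly_spend: list[float], decay: float = 0.5) -> list[float]:
    """Geometric adstock: each week's effective spend is this week's
    spend plus a decayed fraction of the accumulated prior effect."""
    carried, out = 0.0, []
    for spend in weekly_spend:
        carried = spend + decay * carried
        out.append(carried)
    return out

# A single $100 burst keeps exerting (decaying) influence for weeks.
print(adstock([100, 0, 0, 0], decay=0.5))  # → [100.0, 50.0, 25.0, 12.5]
```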
Automated Test Design
- Sample size calculation: AI determines minimum test duration and audience size.
- Group matching: Algorithms create comparable test and control groups.
- Anomaly detection: AI identifies when external factors might invalidate results.
- Real-time monitoring: Continuous analysis flags when tests are tracking toward inconclusive results.
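Sample size calculation, for instance, follows a standard two-proportion power formula. A sketch at the conventional 95% confidence / 80% power defaults:

```python
import math

def sample_size_per_group(baseline_rate: float, expected_lift: float,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate users needed per group to detect expected_lift over
    baseline_rate (defaults: 95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)
```

Note how the required sample grows as the expected lift shrinks; detecting a 10% lift takes roughly four times the audience of a 20% lift, which is why small-budget tests were historically infeasible.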
Platform-Native Testing Tools
Google Conversion Lift (2025 updates)
- Minimum spend reduced to $5,000 (from $100,000)
- Test periods as short as 7 days
- Available for Video, Discovery, Demand Gen without account rep involvement
Meta Conversion Lift
Measures incremental impact by dividing audiences into test (sees ads) and control (sees nothing or a PSA).
Amazon Marketing Cloud
Enables incrementality analysis using first-party purchase data.
Third-Party Incrementality Tools
- Northbeam: Full-funnel attribution with integrated incrementality testing.
- Measured: Dedicated incrementality measurement platform with geo testing and continuous measurement.
- Haus: Marketing science platform for launching experiments in minutes.
- Sellforte: Marketing Mix Modeling with integrated incrementality testing.
Implementation Framework
1. Establish Baseline (Weeks 1-2)
- Audit current measurement and identify high-spend channels
- Define success metrics and check testing eligibility
2. Design First Test (Weeks 3-4)
- Choose methodology, set test parameters (duration: 14+ days recommended)
- Document hypothesis
3. Execute and Monitor (Weeks 5-8)
- Launch test and control groups, monitor for issues
- Don't make campaign changes during the test period
4. Analyze and Apply (Week 9+)
- Calculate incremental lift and assess statistical confidence
- Compare to attribution, reallocate budget, plan next test
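The analysis step can be sketched as a lift calculation plus a simple two-proportion z-test for statistical confidence (numbers here are hypothetical):

```python
import math

def lift_with_significance(test_conv: int, test_size: int,
                           control_conv: int, control_size: int):
    """Return (incremental lift, z-score); |z| > 1.96 ≈ 95% confidence."""
    p_t = test_conv / test_size
    p_c = control_conv / control_size
    lift = (p_t - p_c) / p_c
    # Pooled standard error for the difference in proportions.
    p_pool = (test_conv + control_conv) / (test_size + control_size)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_size + 1 / control_size))
    return lift, (p_t - p_c) / se

lift, z = lift_with_significance(1_200, 10_000, 1_000, 10_000)
print(f"lift={lift:.0%}, z={z:.1f}")  # z above 1.96 → significant at 95%
```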
Best Practices
- Test your biggest bets first. High-spend channels deserve validation.
- Combine methodologies. Use MMM for continuous measurement; use lift tests to calibrate.
- Accept uncertainty. Incrementality provides better estimates, not perfect truth.
- Test periodically. What was incremental last quarter may not be this quarter.
- Integrate with planning. Feed insights into budget allocation, not just reports.
The Bottom Line
Incrementality measurement answers the question attribution can't: what did advertising actually cause?
The data is clear:
- 71% of advertisers rank incrementality as their #1 KPI
- 52% of brands and agencies use incrementality testing
- Brands ignoring incrementality overinvest in channels by up to 25%
- Precise measurement enables 20-30% ROAS improvements
AI has made incrementality accessible: Google lowered testing thresholds to $5,000, Bayesian methods enable faster and cheaper tests, and MMM platforms incorporate incrementality automatically.
The advertisers who thrive will be those who measure what advertising actually causes—not just what it claims credit for.
