
Oct 2, 2025

The Measurement Gap: Why 92% of Marketers Miss Half the ROI Story

[Illustration: two marketers comparing ad clicks, CAC, and ROAS against brand lift, word of mouth, and LTV, highlighting the 92% measurement gap.]

Picture this scenario. You're presenting quarterly results to leadership. Your dashboards show strong attribution numbers across channels. Conversion rates look solid. Click-through rates exceed benchmarks. Yet when the CFO asks, "Would these customers have purchased anyway?" the room goes quiet.

This uncomfortable silence reflects a broader crisis in marketing measurement. According to Supermetrics' 2025 Marketing Data Report, only 8% of marketers worldwide use incrementality testing to measure their marketing results. That leaves 92% of marketing teams without a reliable method to prove their true impact. The numbers tell a clear story: we've built sophisticated attribution systems that excel at showing correlation but struggle with the more important question of causation.

The stakes extend beyond awkward board meetings. Research from BCG and The State of Retail Media report reveals that 41% of marketers cannot measure ROI effectively, whilst 63% consider ROI their most important metric. This disconnect creates real consequences. Marketing budgets face scrutiny. Teams lose credibility. Strategic decisions get made without solid evidence. Most critically, budgets flow to channels that appear effective rather than those that genuinely drive incremental growth.

This article examines incrementality testing through the lens of concrete data and real-world implementations. You'll see specific case studies with actual metrics, understand the mathematical foundations of different approaches, and learn why companies that master this methodology achieve 20% to 40% improvements in spending efficiency.

The Attribution Problem: What Traditional Measurement Misses

Traditional attribution models operate on a fundamental assumption: if a customer saw your advertisement before purchasing, the advertisement caused the purchase. Looking at the data objectively, this assumption creates systematic measurement errors.

Consider the mechanics of last-click attribution. A potential customer searches for your brand name, clicks your paid search advertisement, and completes a purchase. Your analytics platform credits the paid search campaign with the full conversion value. Yet this person actively sought your brand; they likely would have found your website through organic results regardless of the paid advertisement.

The State of Retail Media report documents that 36% of consumer packaged goods brand marketers and agency professionals report difficulty proving investment incrementality. This challenge stems from attribution's inherent bias toward channels that intercept existing demand rather than create new demand.

The research identifies four primary measurement challenges:

Accuracy concerns dominate. Some 44% of marketers cite concerns about the accuracy or reliability of incrementality results as their top challenge. When your measurement method conflates correlation with causation, every channel appears more effective than reality.

Cross-channel attribution remains problematic. Some 43% struggle applying incrementality across different advertisement types, targeting methods, and retailers. Traditional attribution models excel at tracking individual customer journeys but fail to account for how channels work together or compete for the same conversions.

Limited tools create barriers. Some 41% cite limited tools or technologies as an obstacle. Whilst attribution platforms have become more sophisticated, they still rely on tracking individual touchpoints rather than measuring true incremental impact.

Lack of standardisation confuses teams. Some 37% identify lack of standardisation as a challenge. The research reveals that marketers aren't even aligned on what incrementality means; 48% define it as "advertisement-attributed conversions of new-to-brand customers" whilst another 48% define it as "serving advertisements where products aren't showing in organic results."

BCG's research adds context to these challenges. Only 25% of marketers measure incrementality in any form for most activities. The majority can forecast marketing effectiveness within specific channels, but only 9% can forecast incrementality across all channels. Traditional channels prove particularly difficult to measure accurately.

The consequences compound over time. When companies optimise marketing based on last-touch attribution, they fall into what BCG describes as a feedback loop: budgets shift toward channels with apparent high returns, upper-funnel investment decreases, brand awareness weakens, demand declines, revenue drops, and budgets get cut further. Like a poorly calibrated instrument, attribution bias doesn't just provide inaccurate readings; it actively misleads teams toward counterproductive decisions.

Understanding Incrementality Testing: The Mathematical Foundation

Incrementality testing provides a rigorous answer to marketing's fundamental question: how many sales happened because of this marketing activity rather than in spite of it? The methodology borrows from clinical trial design; like medical researchers testing drug efficacy, marketers create controlled conditions to isolate the true effect of their interventions.

The core principle operates simply. Divide your audience into two statistically similar groups. Expose one group (the test group) to your marketing activity. Withhold marketing from the other group (the control group). Measure the difference in outcomes between groups. That difference represents your incremental impact.

Think of it like engineering a bridge load test. You don't assume materials will perform as specified; you test them under controlled conditions. Similarly, incrementality testing doesn't assume your marketing drives the results your attribution platform reports; it measures actual impact under controlled conditions.

The mathematical elegance emerges in what this design controls for. External factors like seasonality, competitor actions, economic conditions, and organic brand growth affect both groups equally. The test isolates your marketing activity as the only variable. Any statistically significant difference in outcomes must stem from that marketing activity.

Research from Think with Google and various testing platforms reveals two primary methodologies:

User-based tests randomly assign individual users to test or control groups. These tests require smaller budgets and can run alongside other experiments. They provide campaign-level insights and can segment results by demographics like age and gender. The limitation lies in potential contamination; users in the control group might still encounter your marketing through other channels or devices.

Geography-based tests use geographic regions as test and control groups. Certain areas receive your marketing whilst similar areas don't. According to industry research, geo-testing represents the gold standard because it eliminates many biases inherent in user-level tracking. Recent technological advances have made geo-testing faster and more accessible. The challenge involves selecting truly comparable geographic markets and having sufficient budget scale.

The output from incrementality tests provides three critical metrics. First, incremental lift quantifies the percentage increase in desired outcomes attributable to your marketing. Second, incremental return on advertisement spend (iROAS) divides incremental revenue by media spend, showing the true return. Third, statistical confidence measures the reliability of your results.
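To make these three outputs concrete, here is a minimal Python sketch of the test-versus-control comparison described above. All of the aggregate numbers are invented for illustration, and a two-proportion z-test stands in for the statistical-confidence calculation; real measurement platforms use more sophisticated inference.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical aggregate results from a two-cell test (illustrative numbers only)
test_users, test_conversions, test_revenue = 100_000, 2_300, 115_000.0
ctrl_users, ctrl_conversions, ctrl_revenue = 100_000, 2_000, 100_000.0
media_spend = 5_000.0

# 1. Incremental lift: relative increase in conversion rate caused by the marketing
test_rate = test_conversions / test_users
ctrl_rate = ctrl_conversions / ctrl_users
incremental_lift = (test_rate - ctrl_rate) / ctrl_rate

# 2. Incremental ROAS: incremental revenue divided by media spend
incremental_revenue = test_revenue - ctrl_revenue * (test_users / ctrl_users)
iroas = incremental_revenue / media_spend

# 3. Statistical confidence: two-proportion z-test on the conversion rates
pooled = (test_conversions + ctrl_conversions) / (test_users + ctrl_users)
se = sqrt(pooled * (1 - pooled) * (1 / test_users + 1 / ctrl_users))
z = (test_rate - ctrl_rate) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Incremental lift: {incremental_lift:.1%}")
print(f"iROAS: {iroas:.2f}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```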

Forrester's research identifies four optimal scenarios for incrementality testing: launching new campaigns, channels, or tactics; making budget allocation decisions; entering new markets; and conducting ongoing optimisation. The methodology works best when you have clear hypotheses, sufficient audience size for statistical power, and the organisational commitment to respect results even when they challenge existing assumptions.

Real-World Results: What the Data Actually Shows

The abstract mathematics of incrementality testing might seem removed from daily marketing operations. The case studies from recent research demonstrate otherwise; companies implementing rigorous measurement uncover substantial performance gaps and opportunities.

E-Commerce: Lalo's Upper-Funnel Discovery

Lalo, a baby brand focused on modern parents, suspected their platform algorithms were targeting the same high-intent customers repeatedly rather than reaching new audiences. Their hypothesis held that optimising for upper-funnel events might unlock entirely new customer segments.

Working with measurement partner Haus and their agency Sharma Brands, Lalo designed a three-cell experiment on Meta. The test compared conversion campaigns optimised for purchases versus conversion campaigns optimised for add-to-cart events versus a holdout group receiving no marketing.

The results challenged conventional wisdom. Purchase-optimised campaigns drove 27.1% lift in new customer revenue, whilst add-to-cart optimised campaigns achieved 26.5% lift. The difference? Merely 0.6 percentage points. Looking deeper at returning customer sales revealed more: purchase-optimised campaigns generated 16% lift compared to just 9% lift for add-to-cart campaigns. This pattern suggested that conversion-optimised campaigns continued reaching existing customers despite audience exclusions, whereas add-to-cart campaigns more effectively focused on new prospects.

The incremental ROAS told the complete story. Add-to-cart campaigns delivered iROAS just £0.01 higher than purchase campaigns, essentially identical performance. Yet add-to-cart campaigns brought in proportionally more new customers, validating Lalo's hypothesis about algorithm behaviour.

Building on these insights, Lalo tested TikTok with a two-cell experiment comparing a traffic campaign optimised for landing page views against a holdout. The platform reported zero purchases from the campaign because it tracked only landing page views. Incrementality measurement revealed different mathematics: the campaign drove 12.4% lift in new customer revenue and 3.65% lift in returning customer revenue. The iROAS exceeded the target by 64%, demonstrating substantial value invisible to platform attribution.

These findings became part of Lalo's permanent strategy. Nicole Fisch, SVP of Marketing, stated: "These tests reinforced what we had suspected—that platform algorithms were keeping us in a narrow lane of existing and high-intent customers. By optimising for higher-funnel events, we unlocked an entirely new audience segment."

Retail Media: BrandAlley's Model Validation

BrandAlley, a UK-based online fashion e-commerce business, faced a different challenge: validating their Marketing Mix Model. The company launches over 1,000 digital campaigns annually, making optimal budget allocation critical. After implementing Sellforte's MMM solution, they needed confidence the model's estimates reflected reality.

The validation approach employed Meta's Conversion Lift Study, running across all Meta campaigns for four weeks. The study measured Meta's true incremental impact independent of attribution models. The results provided strong validation: Meta's incrementality test estimated ROI of 4.00 with a 90% confidence interval between 2.91 and 5.09. The MMM had estimated Meta ROI at 3.91, falling well within the test's confidence interval.

This alignment gave BrandAlley confidence to trust their MMM for strategic decisions. The validation demonstrated that their model accurately captured Meta's performance, lending credibility to the model's estimates for other channels where running continuous tests proved impractical.

Retail Media Networks: Gopuff's Partnership Approach

Gopuff, the instant delivery service, represents the 74% of marketers who don't conduct incrementality testing in-house. According to research from Funnel and Ravn, only 26% of in-house marketers perform testing internally. Gopuff addressed this capability gap by partnering with retail media technology provider Koddi.

The partnership produced a tool tracking incremental conversions, incremental revenue, and incremental ROAS. In a pilot programme, advertisers using this tool achieved 40% lift in incremental purchases per user. This substantial improvement came from shifting budget allocation based on actual incremental performance rather than attributed performance.

Performance Marketing: Diverse Implementation Examples

Research from Haus documents numerous shorter case studies revealing the breadth of incrementality applications:

Inkbox, which sells temporary tattoos, wanted to measure impact across all sales channels including direct-to-consumer, Amazon, and Walmart. Their incrementality testing revealed that Snap advertising boosted iROAS by 127% when accounting for sales across all channels rather than just their website.

FanDuel conducted a three-cell test to determine optimal YouTube investment levels. The test compared different spending levels to identify where returns diminished. Following the test results, FanDuel increased sportsbook activations by 11% through more precise budget allocation.

A beauty brand (unnamed in the research) tested Performance Max campaigns and discovered incremental ROAS of £6; for every £1 invested, they generated £6 in incremental revenue that wouldn't have occurred without the advertisements. This sixfold return provided clear justification for increased investment.

Conversely, a financial institution tested YouTube campaigns and found incremental ROAS of just £1.10. Whilst technically profitable at £0.10 return per £1 spent, this marginal performance prompted strategic reassessment of their YouTube approach and creative execution.

Enterprise Scale: The MedTech Opportunity

BCG's work with a leading MedTech company demonstrates how incrementality measurement scales to enterprise environments. The project built Marketing Mix Models as part of a four-legged measurement approach, enabled scenario planning for optimal allocation, and implemented advanced capabilities including incrementality testing methodologies.

The comprehensive measurement approach uncovered a $75 million profit opportunity, representing a 6% improvement, purely through reallocating existing budget to higher-performing channels and tactics. The company achieved this without increasing total marketing spend; better measurement simply revealed where investment created genuine incremental value versus where it intercepted demand that would have materialised regardless.

The numbers across these cases share a pattern. Incrementality testing consistently reveals gaps between attributed performance and actual incremental impact. These gaps typically benefit channels that intercept existing demand and penalise channels that build demand. Correcting for these gaps redirects substantial budgets toward more effective investments.

The Two Methodologies: Choosing Your Measurement Approach

Incrementality testing and Marketing Mix Modelling represent two distinct approaches to measuring marketing's true impact. Understanding when to deploy each methodology determines measurement success.

Marketing Mix Modelling: The Statistical Approach

MMM employs statistical regression to estimate relationships between marketing spend and business outcomes using historical data. Modern MMM techniques integrate causal inference frameworks, separately accounting for seasonality, external influences, and market trends. According to Supermetrics, 49% of marketers worldwide currently use MMM to measure marketing results, making it more prevalent than any other advanced measurement method.

BCG's research indicates that 47% of marketing leaders plan to invest in MMM next year, more than any other measurement method. Yet MMM faces constraints. Zach Bricker, Lead Solutions Engineer at Supermetrics, provides direct guidance: "If you're spending $50,000 annually on marketing, you have no business doing an MMM. None whatsoever. You don't have the granularity and volume of data required for the model to perform accurately."

MMM requires specific conditions. The methodology needs sufficient budget scale, typically at least $2 million in annual advertisement spend for meaningful analysis. Historical data must be extensive and detailed. The channel mix should be diverse with multiple marketing channels running simultaneously. Data richness matters; granular data across different channels produces better models than limited digital-only data. Resource investment proves substantial for both implementation and ongoing management.

The advantages justify these requirements. MMM assesses past performance across all channels simultaneously, controls for external factors that attribution ignores, estimates channel interactions and synergies, and provides strategic guidance for long-term budget allocation. The methodology excels at answering questions like "How should we split our annual budget across channels?" or "What's the optimal total marketing investment level?"
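For readers who want to see the mechanics, the sketch below fits a deliberately simplified MMM with ordinary least squares on synthetic weekly data. Production models add adstock (carryover), saturation curves, and richer controls; every channel, coefficient, and figure here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = 104  # two years of weekly observations

# Synthetic weekly spend per channel (illustrative only)
search = rng.uniform(5_000, 20_000, weeks)
social = rng.uniform(3_000, 15_000, weeks)
tv = rng.uniform(0, 50_000, weeks)
seasonality = 10_000 * np.sin(2 * np.pi * np.arange(weeks) / 52)

# "True" data-generating process the regression will try to recover
sales = (120_000 + 1.8 * search + 2.5 * social + 0.9 * tv
         + seasonality + rng.normal(0, 8_000, weeks))

# Ordinary least squares: sales ~ intercept + channel spends + seasonal control
X = np.column_stack([np.ones(weeks), search, social, tv, seasonality])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, b_search, b_social, b_tv, b_season = coefs

print(f"Estimated sales per unit of search spend: {b_search:.2f}")
print(f"Estimated sales per unit of social spend: {b_social:.2f}")
print(f"Estimated sales per unit of TV spend:     {b_tv:.2f}")
```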

Incrementality Testing: The Experimental Approach

Incrementality tests provide the cleanest read on true incremental impact through controlled experimentation. As Forrester's research notes, these tests work best for new channel launches, budget allocation decisions, market entries, and ongoing optimisation.

The methodology operates faster than MMM. Tests typically run for two to six weeks depending on purchase cycles and statistical power requirements. Results provide precise estimates for specific conditions rather than long-term strategic guidance. The approach proves particularly valuable when launching new tactics where historical data doesn't exist or when validating assumptions that models might miss.

Geography-based testing represents the current gold standard. This approach selects comparable geographic regions, implements marketing in test regions whilst withholding it from control regions, and measures performance differences. Industry experts emphasise that geo-testing eliminates many biases inherent in user-level tracking and has become significantly more accessible through recent technological advances.
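As a rough illustration of how a geo test's read-out can be computed, the sketch below applies a simple difference-in-differences comparison to invented regional sales figures. The region names and numbers are assumptions; real geo experiments rely on more careful market matching and statistical inference.

```python
# Hypothetical weekly sales for matched regions before and during a geo test.
# "Test" regions receive the campaign in the test period; "control" regions do not.
pre_period = {
    "test":    {"Manchester": 41_000, "Leeds": 38_500},
    "control": {"Sheffield": 40_200, "Bristol": 39_100},
}
test_period = {
    "test":    {"Manchester": 47_800, "Leeds": 44_900},
    "control": {"Sheffield": 41_000, "Bristol": 39_900},
}

def total(group: dict) -> float:
    return sum(group.values())

# Difference-in-differences: test-region growth minus control-region growth
test_growth = total(test_period["test"]) - total(pre_period["test"])
control_growth = total(test_period["control"]) - total(pre_period["control"])
incremental_sales = test_growth - control_growth

print(f"Test-region growth:    £{test_growth:,.0f}")
print(f"Control-region growth: £{control_growth:,.0f}")
print(f"Estimated incremental sales: £{incremental_sales:,.0f}")
```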

The limitations matter. Tests provide point-in-time measurements under specific conditions. A test of TikTok advertisements in the UK during September provides strong directional insights but may not apply universally to other markets or seasons. Tests require sufficient data volume for statistical significance; companies with low transaction volumes may struggle to achieve reliable results quickly. Finally, you cannot test everything simultaneously; tests must be prioritised and sequenced carefully.

The Complementary Approach

Research from BCG and various measurement platforms reveals a critical finding: the methodologies work better together than in isolation. Only 60% of marketers who use MMM or attribution models also conduct experiments. This gap represents missed opportunity.

The optimal approach uses MMM for continuous measurement and strategic allocation whilst deploying incrementality tests periodically to validate and calibrate the model. BrandAlley's case study demonstrates this practice perfectly; their MMM estimated Meta ROI at 3.91, their incrementality test measured 4.00, and the alignment validated both methodologies.

Tests can calibrate models in two ways. If test results differ substantially from model estimates, you can adjust model parameters to align with experimental evidence. Tests also help models understand channel effectiveness in new conditions where historical patterns may not apply. For example, testing a channel in a new market provides data points that improve model accuracy for that market.
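A minimal sketch of that calibration logic might look like the following. The iROAS figures and the decision rule are illustrative assumptions rather than numbers from the research; the idea is simply to check whether the model estimate falls inside the test's confidence interval and, if not, to scale the channel's modelled contribution toward the experimental evidence.

```python
# Illustrative numbers: an MMM estimate vs. an incrementality test read for one channel
mmm_iroas = 2.40
test_iroas = 4.00
test_ci_low, test_ci_high = 2.91, 5.09  # 90% confidence interval from the test

if test_ci_low <= mmm_iroas <= test_ci_high:
    print("MMM estimate sits inside the test's confidence interval: no adjustment needed.")
else:
    # One simple calibration: scale the channel's modelled contribution so the
    # implied iROAS matches the experimental point estimate.
    calibration_factor = test_iroas / mmm_iroas
    print(f"Scale the channel's modelled contribution by x{calibration_factor:.2f}")
```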

The research from Think with Google and BCG suggests a practical framework. Use MMM as your primary measurement system for ongoing decisions and annual planning. Run incrementality tests quarterly or semi-annually on your highest-spend channels to validate model estimates. Test new channels before scaling investment. When test results contradict model estimates significantly, investigate the discrepancy to understand whether conditions have changed or model assumptions need adjustment.

This combined approach addresses each methodology's limitations. MMM provides strategic direction but can miss tactical shifts or new channel performance. Testing provides tactical precision but cannot continuously measure all activities. Together, they create a measurement system that's both strategically sound and tactically precise.

Common Pitfalls: Where Measurement Initiatives Fail

The research materials document not just successful implementations but also systematic failures that undermine measurement initiatives. Understanding these failure modes helps avoid them.

The Platform Attribution Trap

Digital platforms provide attribution tools that systematically overstate their impact. Research into Meta's A/B testing tool reveals a critical flaw: divergent delivery. Rather than distributing advertisements evenly across test groups, Meta's algorithm optimises delivery by prioritising users more susceptible to the advertisement's messaging. This optimisation means reported results reflect both targeting effects and actual advertisement impact, artificially inflating perceived lift.

The same dynamics affect most platform-provided attribution. Platforms have incentives to demonstrate value, their tracking methodologies favour their own conversions, and they cannot measure users who would have converted without seeing any advertisements. The research from LBB's article on incrementality challenges emphasises this point: marketers find it increasingly difficult to trust results when major platforms introduce their own measurement tools.

The Data Infrastructure Barrier

BCG's research reveals that only 42% of companies report data quality sufficient for their business needs. Only 21% have a clear strategy to close data gaps. These shortcomings directly impact measurement capability.

The challenge manifests in several ways. Vendors undermine data access; marketers often struggle to extract data housed with agencies or platforms. A travel company's head of digital marketing described spending six months coordinating with IT, analytics platforms, and agencies just to obtain middle-of-funnel data. Cross-functional barriers create additional obstacles; different business units maintain separate data systems, vendors, and metrics. Internal politics and varying profitability definitions compound the problem.

The research indicates that companies strong in measurement are three times more likely than weaker counterparts to synthesise data from all marketing teams before making budgeting decisions. Data centralisation matters.

The Organisational Resistance Challenge

BCG's research across 100 senior marketing executives managing nearly $20 billion in annual budgets reveals that organisational barriers often exceed technical challenges. One CMO stated: "A key barrier to success for us was cultural inertia and an attachment to the way marketing had always been done. It takes discipline to move past preconceived ideas."

The resistance takes multiple forms. Long-tenured executives struggle with "lifer syndrome," the fear of change that comes from established practices. Finance teams lack confidence in marketing's methods until brought closer to the measurement process. Different functions define success through incompatible metrics, creating the "tyranny of random facts" where each manager cites data points from unique tools without shared understanding.

Breaking down these barriers requires intervention from the entire C-suite, with the CEO as ultimate advocate. One automotive manufacturer initiated cross-functional conversations involving business units and functional departments, using multi-unit data first to define profitability consistently, then to determine how it would be measured across the company. Following implementation, they established a cross-functional steering committee to ensure longevity of changes.

The Short-Term Thinking Problem

Measurement initiatives frequently start and stop according to executive turnover. Methods and tools change continuously, partners and vendors rotate, and talent recruitment suffers. Research shows that companies keeping the same methods and tools in place for three or more years are significantly more likely to develop standardised KPIs, integrate learnings into spending decisions, and codify best practices.

The automotive OEM's vice president of marketing noted: "It takes a year of everyone questioning a new model until we have enough history to believe in it. We usually needed 12 to 16 months to show a smooth trend that we could rally support behind." Real behaviour change typically requires two to three years. Companies that recognise this timeline and commit to sustained effort achieve superior results.

The Incomplete Testing Problem

Even organisations conducting incrementality tests often implement them incompletely. The methodology requires four steps: hypothesis-driven test design, rigorous setup and execution, statistically valid measurement using prealigned metrics, and systematic conversion of learnings into actions ("scale it or fail it").

Many teams execute the first three steps but fail on the fourth. Tests produce interesting insights that never translate into budget changes or strategic shifts. This failure wastes the investment in testing and demoralises teams who see their rigorous work ignored.

Building Your Incrementality Testing Programme

The research reveals a clear pathway from measurement aspiration to operational capability. Success requires both technical implementation and organisational change.

Establishing the Foundation

Start by securing cross-functional support. The research emphasises that measurement initiatives succeed or fail based primarily on people factors rather than technical factors. BCG's "10/20/70 rule" applies: dedicate 10% of effort to algorithms, 20% to data and technological backbone, and 70% to business and people transformation.

This means engaging finance early in defining metrics and understanding methodologies. Finance teams bring objectivity and sophistication about measurement that complements marketing's domain expertise. One CMO described initially tense discussions about metric definitions that ultimately strengthened their measurement approach: "I eventually brought finance in closer, to dig into our agency reporting. They were more objective and sophisticated about measurement, and they offered us great insights on how we define our metrics."

Build coalitions across IT for data infrastructure support, sales for funnel alignment and shared metrics, and product teams for customer journey understanding. The research shows that companies without clear cross-functional support struggle with data access, conflicting metrics, and inability to act on insights.

Selecting Your First Tests

Research from Forrester and various testing platforms suggests starting with channels where you suspect attribution might overstate impact. Brand search campaigns represent an ideal starting point; these campaigns intercept existing demand from customers already seeking your brand. Retargeting to existing customers provides another strong first test. Campaigns for well-known products or in markets where you have strong brand awareness similarly merit early testing.

Alternatively, test channels with unknown incrementality potential: new advertising platforms you're considering, untested audience segments, different creative formats or messaging approaches, or geographic markets you haven't entered. The first category helps identify where attribution overstates impact. The second category enables discovery of new growth opportunities.

The key lies in defining success clearly before testing. Specify whether you're measuring higher sales, more new customers, increased average order values, or improved long-term customer value. Establish the minimum lift that would justify continued or increased investment. Determine the statistical confidence level you require before making decisions.
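One way to ground those decisions is a standard power calculation before launching the test. The sketch below estimates the users needed per group to detect a given relative lift with a two-sided two-proportion z-test; the baseline conversion rate, minimum lift, significance level, and power target are all assumptions you would replace with your own figures.

```python
from statistics import NormalDist

def users_per_group(baseline_rate: float, min_lift: float,
                    alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per group to detect a relative lift in conversion
    rate with a two-sided two-proportion z-test (planning estimate only)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Example: 2% baseline conversion rate; only a 10%+ lift would justify more spend
print(users_per_group(baseline_rate=0.02, min_lift=0.10))
```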

Developing Testing Cadence

The research suggests a strategic approach to test scheduling. Don't test everything simultaneously; tests can interfere with each other and create confusion. Build a plan for the next 30 to 90 days prioritising highest-spending channels first, campaigns where you lack confidence, and initiatives where you must prove ROI to leadership.

Companies mature in incrementality testing often earmark specific budget percentages for experimentation. Common frameworks allocate 70% to tried and true tactics, 20% to tested and promising approaches, and 10% to new experiments. This structure ensures continuous learning whilst maintaining performance.

Frequency matters. The research indicates that strong measurement organisations run tests quarterly or semi-annually on major channels. This cadence provides regular calibration points without overwhelming teams or creating interference between tests.

Securing Resources and Capabilities

The research reveals that only 26% of in-house marketers conduct incrementality testing internally. The remaining 74% rely on external partners for methodology, execution, or both. This distribution suggests that partnering makes sense for most organisations, particularly those building capability.

Options include working with platform-provided testing tools (like Meta's Conversion Lift Study or Google's Conversion Lift), engaging measurement platforms specialised in incrementality testing, or partnering with agencies that provide testing expertise. The choice depends on your technical capabilities, budget availability, and control preferences.

Regardless of approach, invest in understanding the methodology. The digital marketing vice president from a financial services firm stated: "I don't want my team to have a background in media or digital. I'm looking for a test-and-learn mindset—people who question why we do things the way we do and focus on solving problems, instead of chasing the latest hot platform."

Tying Measurement to Incentives

Research shows that organisations linking compensation to marketing measurement generate meaningfully better outcomes. Three-quarters of marketers who submit performance metrics to annual strategic planning exercises believe they have well-functioning processes for understanding marketing return on investment. Only 22% of those not required to submit such metrics share that belief.

The mechanism operates through accountability. When measurement affects bonuses, promotions, or budget allocations, teams invest in measurement quality. They run better tests, integrate learnings more systematically, and maintain measurement systems consistently.

One financial services firm increased sales by over 30% by aligning sales incentives with marketing metrics using a new CRM system. The CMO noted: "We've become an organisation of closers, not hunters." Dashboards from the system provided visibility into individual sales performance, revealed weak performers, and recognised previously unnoticed high performers.

The Path Forward: What the Data Demands

The measurement crisis facing modern marketing stems not from lack of tools but from insufficient commitment to rigorous methodology. The research demonstrates that incrementality testing works; companies implementing it properly discover substantial opportunities, redirect budgets more effectively, and achieve measurable improvements in efficiency and growth.

The 92% of marketers not using incrementality testing lack something the 8% possess: confidence in their measurement data. The marketers who test know what actually works and can prove it to leadership. They don't guess which channels drive incremental value, defend budgets based on correlation, or make optimisation decisions without knowing their true business impact.

Looking at the data objectively, several conclusions emerge clearly. Attribution alone doesn't provide adequate measurement. The gap between attributed performance and incremental impact typically ranges from 20% to over 100%, depending on channel and audience. Marketers relying solely on attribution systematically misallocate budgets.

The combination of MMM and incrementality testing provides superior measurement. MMM offers strategic direction and continuous monitoring. Tests provide tactical validation and precise impact estimates. Together they create measurement systems that inform decisions at both strategic and tactical levels.

The organisational challenges exceed the technical challenges. Data infrastructure, statistical methodologies, and testing platforms continue improving. The harder problems involve securing cross-functional support, maintaining multi-year commitment, building cultures of experimentation, and tying measurement to consequences.

The organisations succeeding in measurement share common characteristics. They dedicate 70% of measurement initiative resources to people and organisational change. They maintain consistent methodologies for at least two to three years rather than changing approaches with executive turnover. They integrate measurement into strategic decision-making rather than treating it as a reporting exercise. They test hypotheses rigorously and respect results even when they challenge existing assumptions.

The companies featured in this research span industries, business models, and scales. Lalo operates in e-commerce with modern parents as customers. BrandAlley serves fashion-conscious UK consumers. Gopuff delivers products instantly. FanDuel operates in regulated gambling. The MedTech company works in healthcare. Despite these differences, incrementality testing provided value for all of them.

The question facing marketing leaders isn't whether incrementality testing provides value. The research establishes that clearly. The question is whether your organisation will join the 8% with measurement confidence or remain in the 92% making decisions without knowing marketing's true impact. Like a well-engineered system, success requires building proper foundations and maintaining them consistently. The data shows the path. The choice to follow it remains yours.

Frequently Asked Questions

What's the minimum budget required to conduct incrementality testing effectively?

The answer depends on your testing methodology and business model. For Marketing Mix Modelling, the research indicates you need at least $2 million in annual advertisement spend for meaningful analysis. Smaller budgets don't provide sufficient data volume or granularity for accurate models. For individual incrementality tests, particularly user-based tests on platforms like Meta or Google, you can start with budgets as low as $10,000 to $20,000. The key factor is transaction volume; you need enough conversions in both test and control groups to achieve statistical significance, typically at least 100 conversions per group.
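As a back-of-the-envelope illustration of that rule of thumb, the sketch below checks whether a hypothetical test budget would generate enough conversions. The budget and CPA figures are assumptions, not figures from the research.

```python
# Rough feasibility check using the ~100 conversions per group rule of thumb above
# (all inputs are illustrative assumptions).
test_budget = 15_000.0       # planned spend on the exposed group
cost_per_acquisition = 60.0  # blended CPA observed in recent campaigns

expected_test_conversions = test_budget / cost_per_acquisition
print(f"Expected test-group conversions: {expected_test_conversions:.0f}")
print("Likely enough for significance" if expected_test_conversions >= 100
      else "Likely underpowered: extend the duration or raise the budget")
```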

How long does it take to see results from an incrementality test?

Most incrementality tests run between two and six weeks depending on your purchase cycle and statistical power requirements. Products with short purchase cycles (like food delivery or consumer packaged goods) can generate results in two to three weeks. Higher-consideration purchases (like furniture or financial services) typically require four to six weeks. The research shows that BrandAlley's validation test ran for four weeks, whilst many of the Haus case studies completed in similar timeframes. If initial results don't reach statistical significance, you can extend the test duration or increase the budget to accelerate learning.

Should we test all our marketing channels simultaneously or one at a time?

Test channels sequentially rather than simultaneously. The research emphasises this point clearly; running multiple tests at once can create interference that compromises results. Start with your highest-spending channels where measurement improvements would have the greatest financial impact. The research suggests building a test plan for the next 30 to 90 days, prioritising channels where you have the most uncertainty about incremental impact or where leadership requires proof of value. Companies mature in incrementality testing often run tests quarterly on major channels.

How do we know if our Marketing Mix Model is accurate?

The BrandAlley case study demonstrates the validation approach. Run incrementality tests on your highest-spending channels and compare results to your MMM estimates. If test results fall within the model's confidence intervals, your model is working well. If test results differ substantially from model estimates (typically more than 20% to 30%), investigate the discrepancy. The difference might indicate that conditions have changed since your model was built, that your model needs recalibration, or that the tested channel behaves differently than your model assumes. The research indicates that strong measurement organisations validate their models with incrementality tests at least annually.

What do we do when incrementality test results contradict our attribution data?

Trust the incrementality test. The research documents numerous cases where attributed performance exceeded incremental performance substantially. Lalo's tests revealed that purchase-optimised and add-to-cart-optimised campaigns had nearly identical incremental ROAS (differing by just £0.01) despite different attributed performance. The financial institution's YouTube test showed incremental ROAS of £1.10 whilst attribution suggested much higher returns. Attribution shows correlation; incrementality testing measures causation. When they conflict, causation provides better guidance for budget allocation. Use attribution for day-to-day optimisation within channels but rely on incrementality for strategic budget decisions across channels.

How do we build internal support for incrementality testing when leadership is sceptical?

The research suggests three approaches. First, focus on business outcomes leadership cares about; frame incrementality testing as solving the attribution problem that makes budget discussions difficult and enables more effective spending of existing budgets. Second, start with a pilot test on a single channel where you suspect attribution overstates impact. Present the results alongside the profit opportunity from correcting allocation. Third, secure CEO support early. BCG's research shows that breaking down organisational barriers requires C-suite intervention with the CEO as ultimate advocate. One marketing leader noted: "The single biggest unlock for us was a new CMO who was progressive and dedicated to change." Sometimes cultural transformation requires leadership change.

What's the difference between incrementality testing and A/B testing?

Incrementality testing represents a specific type of A/B testing focused on measuring the causal impact of marketing presence versus absence. Standard A/B tests typically compare two active marketing variations (like two different advertisement creatives or landing page designs) to determine which performs better. Both variations receive marketing; you're optimising within a channel. Incrementality testing compares active marketing against no marketing (or significantly reduced marketing) to determine whether the marketing activity itself drives incremental results. You're measuring the value of the channel itself. The research emphasises that incrementality testing answers the fundamental question: "How many sales happened because of this marketing rather than in spite of it?"

References

The State of Retail Media Report - Skai and Path to Purchase Institute

Supermetrics 2025 Marketing Data Report

BCG Marketing Measurement Done Right - Tim Mank, Neal Rich, Carmen Bona, Nicolas de Bellefonds, Thomas Recchione - October 2019

BCG Elevate Your Marketing Measurement: The Four-Legged Approach - Ryan Mason, Alex Baxter, Neal Rich, Arjun Talwar, Nakul Puri - August 2024

Haus Case Study: Lalo - March 2025

Haus Case Study: BrandAlley - Sellforte Marketing Mix Modeling validation

Think with Google: Incrementality Testing - JD Ohlinger and Nik Nedyalkov - October 2023

Forrester: Incrementality Testing Boosts Marketing ROI - Tina Moffett and Benjamin Nagle - October 2023

LBB Marketing: Incrementality in Analytics - Clyde Correa, DAC Toronto - May 2025

Sellforte: What is Incrementality Testing? - Lauri Potka - September 2025


Camille Durand

I'm a marketing analytics expert and data scientist with a background in civil engineering. I specialize in helping businesses make data-driven decisions through statistical insights and mathematical modeling. I'm known for my minimalist approach and passion for clean, actionable analytics.
