Best Incrementality Testing Tools in 2026: In-depth Vendor Comparison

34 min read
May 7, 2026

Quick Summary: Best Incrementality Testing Tools in 2026

We evaluated four vendor-backed incrementality testing tools (Sellforte, Measured, Haus, Recast) and one open-source library (Meridian GeoX) based on 31 criteria in 7 capability areas. Criteria were derived from more than 700 discussions with marketers and marketing analytics professionals in 2025 and 2026. Here is how the five tools compare:

Best Incrementality Testing Tools in 2026: Scores by Category. Each cell shows the vendor's score out of the maximum for that category.
Category Sellforte Measured Haus Recast Meridian GeoX
1. Geo Test Analysis 4 / 6 5 / 6 5 / 6 4 / 6 2 / 6
2. A/B Test Analysis 4 / 4 4 / 4 0 / 4 0 / 4 0 / 4
3. Conversion Lift Test Analysis 5 / 5 0 / 5 0 / 5 0 / 5 0 / 5
4. Experiment Recs & Insights 3 / 5 3 / 5 5 / 5 3 / 5 2 / 5
5. Unified Experiment Library 3 / 3 1.5 / 3 1.5 / 3 1.5 / 3 0 / 3
6. MMM Integration 3 / 3 1.5 / 3 1.5 / 3 1 / 3 1 / 3
7. Enterprise-Grade Platform 5 / 5 2 / 5 2 / 5 1.5 / 5 0 / 5
Total score out of 31 27 17 15 11 5

Sellforte ranked highest in the comparison with 27 of 31 capabilities supported, followed by Measured (17 out of 31), Haus (15 out of 31), Recast (11 out of 31), and Meridian GeoX (5 out of 31). Sellforte's lead over the other platforms is explained by its full coverage of all three primary test types (geo tests, A/B tests, Conversion Lift tests), an experiment library unifying all tests in one place, deep integration with a Marketing Mix Model, and an enterprise-grade platform.

Sellforte is best for advertisers looking for a full-scale enterprise-grade incrementality testing platform that covers all incrementality test types and collects them into a unified experiment library that is integrated into an MMM. Sellforte is particularly strong in retail and ecommerce.

Measured is best for advertisers looking for a platform that is strong in geo tests and A/B tests, but for whom conversion lift test analysis or collecting all tests into a unified experiment library are not priorities.

Haus is best for advertisers looking for a dedicated geo testing platform, but who are not conducting conversion lift tests or owned media A/B tests, and for whom light MMM integration is sufficient.

Consider Recast's geo lift testing module if you are already a user of Recast's MMM platform and the advanced capabilities offered by the other platforms are not a priority.

Consider Meridian GeoX if you have an in-house MMM based on Google Meridian, and you are committed to expanding your measurement stack with open-source solutions and covering the gaps, such as the missing UI, with self-built solutions.

Introduction and Table of Contents

When we started writing this article, we decided to go far beyond the typical "10 tools" listicle that names vendors but evaluates them only superficially. Our goal was to create an objective, verifiable comparison of incrementality testing tools, grounded in the real needs and requirements that modern marketers and marketing analytics teams have in 2026. Here's what we're covering in this article:

  1. Quick Summary: Best Incrementality Testing Tools in 2026
  2. What is Incrementality Testing?
  3. How to Choose an Incrementality Testing tool: Evaluation Criteria
  4. In-depth Comparison of Incrementality Testing Tools and Vendors
    1. Sellforte
    2. Measured
    3. Haus
    4. Recast
    5. Meridian GeoX
  5. Frequently Asked Questions
  6. Limitations & Disclosures
  7. Further Reading

What is Incrementality Testing?

Incrementality testing is a method for measuring the true causal impact of advertising: the additional sales, conversions, or revenue that would not have happened without a specific marketing activity. 

Compared to Marketing Mix Modeling (MMM), which measures incrementality by analyzing historical time-series, incrementality testing is an active method: a marketing intervention is designed (such as stopping spend on a channel in one geo), then executed, and finally analyzed. The objective is to get a clean read of the true incrementality of a marketing activity.

While incrementality testing is considered the most accurate approach for estimating the true incremental sales impact of a channel, limitations apply: the estimate is only accurate (i) for the tested channel, (ii) at a specific point in time, and (iii) at the tested spend level. This means that incrementality testing does not provide continuous marketing measurement, nor does it estimate how Incremental ROAS changes as the spend level changes. In fact, the primary use case for incrementality testing is to calibrate an MMM, which provides continuous cross-channel measurement of Incremental ROAS (iROAS) and Marginal Incremental ROAS (miROAS).

Three test designs dominate modern incrementality programs:

  • Geo tests compare test and control geographies to isolate the incremental impact of a marketing activity.
  • A/B tests randomize at the user or audience level, making them best suited for owned media such as catalogs and email.
  • Conversion Lift tests are run inside ad platforms (Meta, Google, TikTok), comparing conversion rates between consumers who saw an ad and those who did not.

Each of these incrementality tests produces a point estimate of incremental ROAS (iROAS) with a confidence interval.
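
To make that output concrete, here is a minimal sketch of how a geo test readout could be computed, assuming a simple pre-period scaling regression as the synthetic control and a bootstrap for the confidence interval. Function and variable names are illustrative, not any vendor's implementation; production tools use richer models and handle autocorrelation more carefully.

```python
import numpy as np

def geo_test_readout(test_sales, control_sales, spend_delta,
                     pre_len, n_boot=2000, seed=0):
    """Estimate iROAS and a bootstrap 95% confidence interval for a geo test.

    test_sales / control_sales: NumPy arrays of daily sales covering the
    pre-period (first pre_len days) and the test period (remaining days).
    spend_delta: incremental media spend in the test geos during the test.
    """
    rng = np.random.default_rng(seed)

    # Synthetic control: scale the control series so it tracks the test
    # series over the pre-period (least-squares scaling factor).
    X, y = control_sales[:pre_len], test_sales[:pre_len]
    beta = np.dot(X, y) / np.dot(X, X)

    # Counterfactual: what test-geo sales would have been without the change.
    counterfactual = beta * control_sales[pre_len:]
    incremental = test_sales[pre_len:] - counterfactual
    iroas = incremental.sum() / spend_delta

    # Rough CI: bootstrap the daily incremental deltas. (Real tools account
    # for autocorrelation, e.g. with block bootstraps or placebo tests.)
    boots = [incremental[rng.integers(0, len(incremental), len(incremental))].sum() / spend_delta
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return iroas, (lo, hi)
```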

How to Choose an Incrementality Testing Tool: Evaluation Criteria

Each tool in this article is evaluated against criteria that we formed through extensive primary research:

  1. We analyzed more than 700 discussions with Sellforte customers and prospects, including marketers, marketing analytics leads, and data scientists working in advertising-heavy industries such as retail, ecommerce, DTC, travel & hospitality, and restaurants.
  2. We reviewed the documentation of more than ten enterprise RFPs spelling out the requirements for incrementality testing platforms.
  3. We interviewed our Customer Success and Data Science team members who get constant feedback and improvement ideas on Sellforte's own incrementality testing offering.
  4. We complemented this primary research with desk research and LLM-assisted investigation to catch gaps and pressure-test our assumptions against publicly available information.

The result: 31 evaluation criteria across 7 categories.

1. Geo Test Analysis
1.1 Analyzes geo tests with synthetic control method, providing iROAS and confidence interval. Uses synthetic control methodology to compare treatment vs. matched control geos, outputting incremental ROAS with statistical confidence intervals to quantify causal lift.
1.2 Self-serve UI for analyzing & reviewing geo test results. Marketers can upload data, run analyses, and review geo test results through a web interface without needing a data scientist or analyst to write code.
1.3 Estimates media counterfactual for lost/incremental spend. Models what media spend would have been in the absence of the test, so iROAS reflects actual incremental spend rather than nominal budget changes.
1.4 Configurable default post-test treatment / measurement window. User can set default treatment and measurement windows (e.g., test duration, post-test cooldown) that apply across tests, with the option to override per test.
1.5 Executes geo experiments on ad platform. Tool can launch and manage geo experiments directly on ad platforms (e.g., Meta, Google) via API, rather than only analyzing tests configured elsewhere.
1.6 Automatically detects geo tests from media and sales data. Identifies likely geo experiments from observed media and sales patterns automatically, without requiring users to manually flag test periods or geos.
2. A/B Test Analysis for Owned Media
2.1 Analyzes owned media A/B tests with synthetic control method, providing iROAS and confidence interval. Applies synthetic control methodology to user-level or audience A/B tests, producing incremental ROAS with confidence intervals.
2.2 Self-serve UI for analyzing & reviewing owned media A/B test results. Marketers can upload data, run analyses, and review A/B test results through a web interface without analyst support.
2.3 Estimates media counterfactual for A/B tests. Models counterfactual media spend so iROAS reflects true incremental investment, not just budget delta between cells.
2.4 Configurable default post-test treatment / measurement window for A/B tests. User can set default treatment and measurement windows that apply across A/B tests, with per-test overrides.
3. Conversion Lift Test Analysis
3.1 Ingests Conversion Lift test results, providing iROAS and confidence interval. Imports Conversion Lift test results from ad platforms (Meta, Google, etc.) and reports iROAS with confidence intervals.
3.2 Self-serve UI for analyzing Conversion Lift test results. Marketers can review Conversion Lift test results in a web interface without needing to pull raw data from each ad platform.
3.3 API connectors for automated conversion lift test ingestion. Pre-built API connectors automatically pull Conversion Lift results from ad platforms, eliminating manual exports.
3.4 Results comparable to ad platform data on campaign and ad set level. Normalizes Conversion Lift outputs so iROAS and lift can be compared at the campaign and ad set level across platforms on a like-for-like basis.
3.5 Daily snapshot of conversion lift test progress, including iROAS and confidence interval. Provides daily updated views of in-flight tests, including running iROAS and confidence interval estimates, so users can monitor progress before completion.
4. Experiment Recommendations & Insights
4.1 Platform recommends which channels to test. Suggests which channels are highest-priority to test next based on uncertainty in current measurement, spend levels, or expected learning value.
4.2 Platform recommends control & test groups. Recommends which geos, audiences, or users to assign to control vs. test based on similarity, balance, and statistical power.
4.3 Platform recommends test design (type, methodology) and predicts test success. Recommends the best test type (geo, A/B, conversion lift) and methodology for the question, and predicts power / probability of detecting a meaningful effect.
4.4 AI-generated plain-language readouts / executive summaries. Generates plain-language summaries of test results suitable for non-technical stakeholders and executive review.
4.5 Conversational AI for discussing experiments. Built-in AI assistant lets users ask natural-language questions about experiments, results, and learnings across the library.
5. Unified Experiment Library
5.1 Central library covers all experiments regardless of type. Single repository stores results from all experiments (geo, A/B, conversion lift) across channels, teams, and methodologies in one searchable place.
5.2 Filterable by country, channel, brand, campaign, team, date. Library supports filtering of experiments by attributes like country, channel, brand, campaign, team, and date range.
5.3 Role-based access & governance for the experiment library. Supports role-based access control and governance features so different users see appropriate experiments and have appropriate edit rights.
6. MMM Integration
6.1 Bayesian MMM that can be calibrated by the user. Includes a Bayesian Marketing Mix Model that users can calibrate with their own priors and assumptions, rather than a black-box model.
6.2 UI to connect experiment results to MMM. User interface for connecting experiment results into the MMM as calibration inputs, without requiring custom code.
6.3 Experiment-based priors comparable to attribution-based priors. Allows side-by-side comparison of priors derived from experiments vs. priors derived from attribution data.
7. Enterprise-Grade Platform
7.1 At least 10 public reference customers from $1B+ revenue brands. Has at least 10 publicly named reference customers among brands with $1B+ in annual revenue, demonstrating enterprise-scale adoption.
7.2 SOC 2, ISO 27001, or audited IT security by a third-party cyber security auditor. Holds SOC 2, ISO 27001, or equivalent third-party-audited security certification demonstrating mature security controls.
7.3 Data residency: US and EU options. Customer can choose whether their data is stored and processed in US or EU regions to meet data residency and regulatory requirements.
7.4 Dual-cloud option between AWS, GCP, and Azure. Customer can choose between AWS, GCP, and Azure as the underlying cloud infrastructure to align with their IT requirements.
7.5 Single sign-on (SSO) for enterprises. Supports enterprise authentication with single sign-on (SSO).
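
As an illustration of criterion 4.2 (recommending control and test groups), the sketch below pairs geos by pre-period sales correlation and randomizes treatment within pairs. It is a deliberately simplified, hypothetical approach; commercial tools weigh similarity, balance, and statistical power with more sophisticated methods.

```python
import numpy as np

def suggest_geo_split(sales_by_geo, n_pairs, seed=0):
    """Pair geos whose pre-period sales correlate most strongly, then
    randomize which side of each pair is treated.

    sales_by_geo: dict mapping geo name -> NumPy array of pre-period sales.
    Assumes at least 2 * n_pairs geos are available.
    """
    geos = list(sales_by_geo)
    corr = np.corrcoef(np.vstack([sales_by_geo[g] for g in geos]))

    # Greedily take the most correlated unused pair until we have n_pairs.
    candidates = sorted(((corr[i, j], i, j)
                         for i in range(len(geos))
                         for j in range(i + 1, len(geos))), reverse=True)
    pairs, used = [], set()
    for _, i, j in candidates:
        if len(pairs) == n_pairs:
            break
        if i not in used and j not in used:
            pairs.append((geos[i], geos[j]))
            used.update((i, j))

    # Coin-flip treatment within each matched pair to preserve balance.
    rng = np.random.default_rng(seed)
    test, control = [], []
    for a, b in pairs:
        if rng.random() < 0.5:
            a, b = b, a
        test.append(a)
        control.append(b)
    return test, control
```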

Scoring the incrementality testing tools

For each of the 31 criteria, we assigned a score:

  • Score of 1: The platform fully supports the capability, based on clear evidence such as product screenshots or technical documentation.
  • Score of 0.5: The platform partially supports the capability, or there is weak but inconclusive evidence for the existence of the capability on the platform.
  • Score of 0: The platform does not support the capability, or we could not find public evidence that it does. Per our conservative scoring rule, the absence of evidence resulted in a 0 rather than a benefit of the doubt.
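
As a tiny worked example of how these per-criterion scores roll up into the category and total scores shown in the tables:

```python
# Per-criterion scores (1 / 0.5 / 0) roll up into category subtotals and a
# grand total out of 31. Values here are Sellforte's Geo Test Analysis
# scores from the detailed table later in this article.
geo_test_scores = {"1.1": 1, "1.2": 1, "1.3": 1, "1.4": 1, "1.5": 0, "1.6": 0}
print(sum(geo_test_scores.values()))   # -> 4, the "4 / 6" in the summary table
```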

Scores were assigned based on the following sources of evidence, in order of priority:

  1. Public first-party vendor documentation: product pages, product demos, help center articles, technical documentation, product videos, webinar recordings, case studies.
  2. Vendor-disclosed information on third-party platforms: podcasts, webinars, conference presentations.
  3. Third-party review sites: G2, TrustRadius, and analyst reports where available.

The seven categories below define what we measured and why each matters.

Category 1. Geo Test Analysis

This category measures Geo testing capabilities. Geo testing is the workhorse of modern incrementality measurement: by comparing control and test geographies, marketers can isolate true causal lift in sales. Geo test analyses typically have two primary parts, as illustrated below with example screenshots.

1. KPI Summary, including iROAS, confidence intervals, and the test period:

Key KPIs in Sellforte Experiments

2. Charts for key KPIs, including comparisons between test and control for the target KPI and media spend:

Charts in Sellforte Experiments

To start the analysis, geo testing tools provide a configuration interface (example below).

Sellforte Experiments creation
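
As a sketch of what such a configuration typically captures, including the configurable default treatment and measurement windows from criterion 1.4 with per-test overrides, here is a hypothetical record shape (all field names are illustrative, not any vendor's data model):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class GeoTestDefaults:
    """Org-wide defaults (criterion 1.4) that individual tests may override."""
    treatment_days: int = 28    # default treatment-window length
    cooldown_days: int = 14     # default post-test cooldown before measurement closes

@dataclass
class GeoTestConfig:
    channel: str                # e.g. "Meta"
    test_geos: List[str]
    control_geos: List[str]
    start: date
    defaults: GeoTestDefaults = field(default_factory=GeoTestDefaults)
    treatment_days: Optional[int] = None   # per-test override; None = use default
    cooldown_days: Optional[int] = None

    def effective_treatment_days(self) -> int:
        # Fall back to the configurable default when no override is set.
        return self.treatment_days if self.treatment_days is not None else self.defaults.treatment_days
```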

Category 2. A/B Test Analysis for Owned Media

This category measures capabilities in audience-level A/B tests, which are another broadly used incrementality test type. They are particularly common among large ecommerce businesses for measuring owned media such as catalogs and email.

From an analysis and output perspective, A/B tests are very similar to geo tests, with the same expectations around iROAS estimation and confidence intervals. However, dedicated A/B test analysis is rare in incrementality testing tools, which is why this category meaningfully differentiates vendors.

Category 3. Conversion Lift Test Analysis

This category measures capabilities in ingesting and analyzing Conversion Lift tests, which are experiments run within a single ad platform (Meta, Google, or TikTok). Conversion Lift tests randomly withhold ads from a portion of the target audience, after which the platform compares conversion rates between the exposed and withheld groups.
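
To illustrate the arithmetic, the sketch below converts a lift-test readout into iROAS with a normal-approximation confidence interval, so the result is comparable with geo test readouts. The function and its inputs are illustrative assumptions, not any ad platform's API.

```python
import math

def lift_test_iroas(conv_exposed, n_exposed, conv_control, n_control,
                    value_per_conversion, spend, z=1.96):
    """Convert a Conversion Lift readout into iROAS with a rough 95% CI.

    conv_*: converters in the exposed / withheld groups; n_*: group sizes.
    value_per_conversion and spend must share the same currency.
    """
    cr_e = conv_exposed / n_exposed
    cr_c = conv_control / n_control

    # Incremental conversions, scaled to the exposed population.
    incremental = (cr_e - cr_c) * n_exposed
    iroas = incremental * value_per_conversion / spend

    # Two-proportion normal approximation for the rate difference.
    se = math.sqrt(cr_e * (1 - cr_e) / n_exposed + cr_c * (1 - cr_c) / n_control)
    half_width = z * se * n_exposed * value_per_conversion / spend
    return iroas, (iroas - half_width, iroas + half_width)

# Example: 1.2% vs 1.0% conversion at 500k exposed / 100k withheld,
# $60 average order value, $25k spend -> iROAS = 2.4.
print(lift_test_iroas(6000, 500_000, 1000, 100_000, 60, 25_000))
```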

Most marketing teams run conversion lift tests, but incrementality testing vendors take wildly different approaches to them. At one end of the spectrum, some vendors dismiss them entirely, citing platform bias concerns. These vendors have not built capabilities to integrate conversion lift tests into their platforms.

At the other end of the spectrum, some vendors natively ingest conversion lift tests into their experiment libraries, normalize the results, and feed them into MMM calibration. Best-in-class vendors even provide in-flight analysis so teams can monitor tests as they run rather than waiting for completion. Below is an example screenshot from a Meta conversion lift test analysis.

Conversion Lift test analysis in Sellforte

Category 4. Experiment Recommendations & Insights

This category measures capabilities for analyzing incrementality tests with modern AI approaches, as well as recommending tests and test designs to help marketers get more value from their measurement programs.

The best platforms recommend high-priority channels to test, design statistically powered control and treatment groups, predict the probability of detecting a meaningful effect, and translate technical readouts into executive-friendly narratives. Some platforms now include conversational AI interfaces that let users ask natural-language questions about past experiments and learnings across the library. The screenshots below illustrate an AI summary of a test, as well as a conversational chat interface for analyzing experiments.

Sellforte AI summary in incrementality tests

Sellforte AI for Experiments

Category 5. Unified Experiment Library

This category measures vendors' capabilities to unify all of an advertiser's experiments into a single library. This becomes critical for large organizations with mature incrementality testing programs, where dozens or even hundreds of tests are run each year across teams, regions, and channels.

A unified library stores every experiment, including geo tests, A/B tests, and conversion lift tests, in one searchable and filterable place. This prevents costly re-runs of tests already completed, makes organizational learning compound over time, and supports governance through role-based access controls. Below is an example screenshot of a unified experiment library covering Geo Tests, Conversion Lift tests and A/B tests.

Sellforte Experiments Library
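
A minimal sketch of why unification pays off: once every test type shares one record shape, filtering by country, channel, brand, or team is a one-liner. The schema below is hypothetical, not any vendor's data model.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Experiment:
    test_type: str      # "geo" | "ab" | "conversion_lift"
    channel: str        # e.g. "Meta"
    country: str
    brand: str
    team: str
    start: date
    iroas: float
    ci_low: float
    ci_high: float

def filter_library(library: List[Experiment], **criteria) -> List[Experiment]:
    """Return experiments whose attributes match every given filter,
    e.g. filter_library(library, country="DE", channel="Meta")."""
    return [e for e in library
            if all(getattr(e, key) == value for key, value in criteria.items())]
```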

Category 6. MMM Integration

This category measures vendors' capabilities to integrate experiment results into the calibration of a Marketing Mix Model. Experiments and MMM are complements rather than substitutes: experiments provide ground-truth point estimates for specific channels and time windows, while MMM provides the single source of truth for marketing ROI across all channels at any given time.

Vendor capabilities vary widely, from basic approaches where ROI priors are manually entered into a spreadsheet, to advanced platforms with native integration between the experiment library and a UI-based model configuration tool. 
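
As a sketch of the basic mechanic, the snippet below backs a Normal prior out of an experiment's iROAS estimate and confidence interval. Mapping the experiment CI one-to-one onto the prior width is an illustrative modeling assumption, not a specific vendor's approach.

```python
import math

def experiment_to_prior(iroas, ci_low, ci_high, z=1.96, widen=1.0):
    """Back a Normal prior on a channel's ROI out of an experiment readout.

    The prior mean is the iROAS point estimate; the standard deviation is
    recovered from the reported 95% CI. widen > 1 lets teams inflate the
    prior to reflect that a test measures one point in time at one spend
    level (a judgment call, not a rule).
    """
    sigma = widen * (ci_high - ci_low) / (2 * z)
    return {"dist": "normal", "mu": iroas, "sigma": sigma}

# Example: a geo test read iROAS = 3.2 with a 95% CI of [2.1, 4.3].
print(experiment_to_prior(3.2, 2.1, 4.3))   # -> mu = 3.2, sigma ~ 0.56
```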

Category 7. Enterprise-Grade Platform

This category measures vendors' capabilities to serve large organizations with mature IT policies. The dimensions scored include security certifications (SOC 2, ISO 27001), data residency options between US and EU regions, multi-cloud support across AWS, GCP, and Azure, and enterprise authentication via single sign-on.

While mid-sized brands might not yet express all of these requirements, they will ultimately grow to a size where such requirements become not just relevant but mandatory for procurement approval.

In-depth Comparison of Incrementality Testing Tools and Vendors

We focused this article on incrementality testing tools that meet all three of the following criteria:

  1. Product: Incrementality testing software product that is offered as a dedicated tool or as part of a broader measurement platform.
  2. Used by recognized advertisers: The tool is used in production by enterprise brands, with at least some publicly verifiable customer references.
  3. Recognized by the industry: The tool has visible market presence, including coverage in industry press, analyst reports, social media discussion, or conference talks.

For the first analysis batch, we evaluated four vendors matching these criteria: Sellforte, Measured, Haus, and Recast. In addition to the commercial vendors, we included Meridian GeoX, Google's open source incrementality testing tool. We are updating this article with additional vendors during 2026.

Here's the detailed in-depth comparison of incrementality testing tools:

Detailed scoring of incrementality testing tools across 31 criteria. Scored against criteria derived from 700+ enterprise advertiser discussions in 2025–2026.
Criterion Sellforte Measured Haus Recast Meridian GeoX
1. Geo Test Analysis
1.1 Analyzes geo tests with synthetic control method, providing iROAS and confidence interval 1 1 1 1 1
1.2 Has a self-serve UI for analyzing & reviewing geo test results 1 1 1 1 0
1.3 Estimates media counterfactual to construct an estimate of the lost/incremental spend 1 1 1 1 0
1.4 Configurable default post-test treatment / measurement window for geo tests 1 1 1 1 1
1.5 Executes geo experiments on ad platform 0 1 1 0 0
1.6 Automatically detects geo tests from media and sales data 0 0 0 0 0
2. A/B Test Analysis for Owned Media
2.1 Analyzes owned media A/B tests with synthetic control method, providing iROAS and confidence interval 1 1 0 0 0
2.2 Has a self-serve UI for analyzing & reviewing owned media A/B test results 1 1 0 0 0
2.3 Estimates media counterfactual to construct an estimate of the lost/incremental spend for A/B tests 1 1 0 0 0
2.4 Configurable default post-test treatment / measurement window for A/B tests 1 1 0 0 0
3. Conversion Lift Test Analysis
3.1 Ingests Conversion Lift test results, providing iROAS and confidence interval 1 0 0 0 0
3.2 Has a self-serve UI for analyzing Conversion Lift test results 1 0 0 0 0
3.3 Has API connectors for automated conversion lift test ingestion 1 0 0 0 0
3.4 Makes results comparable to ad platform data on campaign and ad set level 1 0 0 0 0
3.5 Daily snapshot of conversion lift test progress, including iROAS and confidence interval 1 0 0 0 0
4. Experiment Recommendations & Insights
4.1 The platform recommends which channels to test 1 1 1 1 1
4.2 The platform recommends control & test groups 0 1 1 1 1
4.3 The platform recommends test design (type, methodology) and predicts test success 0 1 1 1 0
4.4 Provides AI-generated plain-language readouts / executive summaries 1 0 1 0 0
4.5 Ability to discuss anything related to experiments with an AI 1 0 1 0 0
5. Unified Experiment Library
5.1 Central library covers all experiments of the company, regardless of experiment type 1 0.5 0.5 0.5 0
5.2 Possible to filter tests in the experiment library by country, channel, brand, campaign, team, date 1 1 1 1 0
5.3 Has role-based access & governance for the experiment library 1 0 0 0 0
6. MMM Integration
6.1 Bayesian MMM which can be calibrated by the user 1 1 1 1 1
6.2 UI tool where the user can connect experiment results to MMM 1 0.5 0.5 0 0
6.3 Experiment-based priors can be compared to attribution-based priors 1 0 0 0 0
7. Enterprise-Grade Platform
7.1 At least 10 public reference customers from $1B+ revenue brands 1 1 1 0.5 0
7.2 SOC 2, ISO 27001, or audited IT security by a third-party cyber security auditor 1 1 1 1 0
7.3 Data residency: geography option between US and EU 1 0 0 0 0
7.4 Multi-cloud: option between AWS, GCP and Azure 1 0 0 0 0
7.5 Supports single sign-on (SSO) for enterprises 1 0 0 0 0
Total 27 / 31 17 / 31 15 / 31 11 / 31 5 / 31

1. Sellforte (scoring 27 out of 31): Enterprise-grade Incrementality Testing and MMM Platform

Overview of Sellforte

Sellforte is a SaaS platform that unifies Marketing Mix Modeling, Incrementality Testing, and Attribution into a single, always-on Operating System for Retail and Ecommerce. Unlike most other measurement vendors, Sellforte measures incremental ROAS at the campaign and ad-set level, providing spend and bidding recommendations for tactical budget steering.

Incrementality testing is one major part of the Sellforte platform, alongside Marketing Mix Modeling and incrementality-corrected attribution.

Evaluation summary

Sellforte scores 27 out of 31 across the seven capability areas, the highest in this comparison. It is the only platform covering all test types, including geo tests, A/B tests, and Conversion Lift test analysis, making Sellforte a great fit for enterprises looking to unify all experiments into a single tool. Experiments are deeply integrated into the MMM through model configuration tools, and the platform includes native AI support. Sellforte's platform is battle-tested in enterprise implementations with $1B+ revenue companies like bonprix, Lidl, and C&A.

Evaluation of Sellforte as an incrementality testing tool. Each cell shows the vendor's score out of the maximum for that category.
Category Sellforte
1. Geo Test Analysis 4 / 6
2. A/B Test Analysis 4 / 4
3. Conversion Lift Test Analysis 5 / 5
4. Experiment Recs & Insights 3 / 5
5. Unified Experiment Library 3 / 3
6. MMM Integration 3 / 3
7. Enterprise-Grade Platform 5 / 5
Total score out of 31 27

Strengths

  • Full conversion lift coverage. Sellforte is the only platform in this comparison that fully supports Conversion Lift Test analysis. This includes API-based ingestion of the tests from ad platforms, normalization to make results comparable at the campaign and ad set level, and daily in-flight snapshots so teams can monitor tests as they run.
  • Complete MMM integration. Includes a user-calibratable Bayesian MMM with a UI-based workflow for connecting experiment results as calibration inputs, and supports side-by-side comparison of experiment-based priors against attribution-based priors.
  • Full A/B test analysis for owned media. Sellforte is one of the two companies in this evaluation offering A/B test analysis, alongside Measured. 
  • Unified experiment library. Stores all experiment types in a single searchable, filterable repository with role-based access controls and governance.
  • Enterprise-grade platform. Battle-tested in multi-country enterprise implementations, offering US and EU data residency options, multi-cloud deployment options across AWS, GCP, and Azure, and enterprise authentication support such as SSO.

Limitations

  • No native ad platform execution for geo tests. Sellforte analyzes geo tests but does not launch and manage them directly on ad platforms via API. This feature has not been prioritized at Sellforte so far, as multi-billion-dollar enterprises typically have dedicated support from ad platforms for geo test execution.
  • No automatic detection of geo tests from media and sales patterns. Users flag test periods and geographies manually.
  • Limited recommendations for test design. The platform does not yet recommend which geos, audiences, or users to assign to control versus test based on similarity, balance, and statistical power. This feature has not been prioritized so far, as enterprises are typically already leveraging open-source design packages. 

Best for

Sellforte is best for advertisers looking for a full-scale enterprise-grade incrementality testing platform that covers all incrementality test types and collects them into a unified experiment library that is integrated into an MMM. Sellforte is particularly strong in retail and ecommerce.

Comments about Sellforte evaluation

Sellforte's evaluation is particularly easy to verify, because the company is one of the few vendors with a public demo that requires no sign-up: https://demo.sellforte.com/. Sellforte's reference customers are also convenient to check, as the company has a practice of announcing new customers on its blog and includes customer logos on its website.

2. Measured (scoring 17 out of 31): MMM calibrated with Geo-tests and A/B tests

Overview of Measured

Measured is a marketing measurement and effectiveness platform combining automated incrementality experimentation with media mix modeling. Measured originally built its brand and product around incrementality testing, and only later positioned itself as an incrementality-calibrated MMM vendor.

Evaluation summary

Measured scores 17 out of 31, placing second in this comparison. While not covering Conversion Lift tests, Measured has broader coverage of incrementality test types than some other vendors in this evaluation.

Evaluation of Measured as an incrementality testing tool. Each cell shows the vendor's score out of the maximum for that category.
Category Measured
1. Geo Test Analysis 5 / 6
2. A/B Test Analysis 4 / 4
3. Conversion Lift Test Analysis 0 / 5
4. Experiment Recs & Insights 3 / 5
5. Unified Experiment Library 1.5 / 3
6. MMM Integration 1.5 / 3
7. Enterprise-Grade Platform 2 / 5
Total score out of 31 17

Strengths

Strong in geo testing. Measured scores 5 out of 6 on Geo Test Analysis, tied with Haus for the highest score in this category. Coverage includes geo test design and analysis, as well as execution of geo tests on ad platforms.

Covers A/B test analysis for owned media. Measured scores 4 out of 4, making Measured one of the two vendors in this article covering A/B tests, alongside Sellforte. 

Experiment design. Measured scores 3 out of 5 on Experiment Recommendations & Insights, with comprehensive features for getting recommendations on geo test design. 

Limitations

No Conversion Lift test analysis. Measured scores 0 out of 5 in this category. The platform does not ingest, analyze, or report on Conversion Lift tests run inside ad platforms. Teams that rely on Meta, Google, or TikTok lift tests as part of their measurement program will not get them into a unified view through Measured.

Partial Unified Experiment Library. Measured scores 1.5 out of 3. The library includes geo tests and A/B tests, but not conversion lift tests. This gap means that advertisers need other tools to maintain a full experiment library.

Partial MMM Integration. Measured scores 1.5 out of 3. It includes a user-calibratable Bayesian MMM, but it remains unclear from public evidence how mature Measured's UI-based calibration features are, so Measured scores only half a point on that criterion.

No AI-generated readouts or conversational AI. Unlike Haus and Sellforte, Measured does not generate AI plain-language readouts or executive summaries of test results, and does not offer a conversational AI interface for asking natural-language questions about experiments. Teams that want AI to help interpret and communicate results will find this a notable gap.

Partial enterprise platform capabilities. Measured scores 2 out of 5 on Enterprise-Grade Platform. It has reference customers and security certifications, but does not offer US/EU data residency options, multi-cloud deployment across AWS, GCP, and Azure, or SSO support. These are requirements that frequently appear in enterprise RFPs.

Best for

Measured is best for advertisers looking for a platform that is strong in geo tests and A/B tests, but for whom conversion lift test analysis and collecting all tests into a unified experiment library are not priorities. Multi-region organizations and enterprises should verify whether the lack of US/EU data residency options, multi-cloud deployment, and SSO is a blocker.

Comments about Measured evaluation

Evaluating the Measured platform proved more difficult than some of the other platforms: the company website offers few product screenshots, there is no public demo for testing features, and public help-center technical documentation is missing. Measured's data security documentation was available, though.

3. Haus (scoring 15 out of 31): Dedicated Geo Experimentation Platform

Overview of Haus

Haus is a marketing science company founded in 2021 by Zach Epstein. Haus's platform centers on geo-based incrementality experimentation. Recently, Haus has also added Marketing Mix Modeling to its product offering.

Evaluation summary

Haus scores 15 out of 31, placing third in this comparison. Haus's strength is geo experimentation, but it does not cover A/B test analysis for owned media or Conversion Lift test analysis, and its enterprise platform capabilities are partial.

Evaluation of Haus as an incrementality testing tool: Scores by Category. Each cell shows the vendor's score out of the maximum for that category.
Category Haus
1. Geo Test Analysis 5 / 6
2. A/B Test Analysis 0 / 4
3. Conversion Lift Test Analysis 0 / 5
4. Experiment Recs & Insights 5 / 5
5. Unified Experiment Library 1.5 / 3
6. MMM Integration 1.5 / 3
7. Enterprise-Grade Platform 2 / 5
Total score out of 31 15

Strengths

Strong geo test coverage with native ad platform execution. Haus scores 5 out of 6 on Geo Test Analysis. Haus's differentiator in geo testing is its ability to launch and manage geo experiments directly on ad platforms via API. 

Full coverage on Experiment Recommendations & Insights. Haus is the only platform in this comparison with a 5 out of 5 in this category. It recommends which channels to test and helps design control and test groups. It also generates AI plain-language readouts and executive summaries, and offers a conversational AI interface for asking natural-language questions about experiments.

Strong US references. Haus has focused its activities on the US market and has gathered a credible list of US references.

Limitations

No A/B test analysis for owned media. Haus scores 0 out of 4 in this category. Advertisers running owned-media A/B tests (catalog, email) will need a separate solution.

No Conversion Lift test analysis. Haus does not ingest, analyze, and report on Conversion Lift tests run inside ad platforms. Teams that rely on Meta, Google, or TikTok lift tests as part of their measurement program will not get them into a unified view through Haus.

Partial Unified Experiment Library. Haus scores 1.5 out of 3. The library includes geo tests that can be filtered by country, channel, brand, and so on, but marketing analytics teams will need a separate solution for a full experiment library that also includes A/B tests and conversion lift tests.

Partial MMM Integration. Haus scores 1.5 out of 3. Haus has a Bayesian MMM that can be calibrated by the user, but there is no public evidence of UI-based model configuration tools for the MMM that would also support side-by-side comparison of experiment-based priors against attribution-based priors.

Partial enterprise platform capabilities. Haus scores 2 out of 5 on Enterprise-Grade Platform. It has reference customers and security certifications, but does not offer US/EU data residency options, multi-cloud deployment across AWS, GCP, and Azure, or SSO support. 

Best for

Haus is best for mid-sized advertisers who are looking for a dedicated geo testing platform, but for whom a unified library that also collects and analyzes conversion lift tests and owned media A/B tests is not a priority, and for whom a light MMM solution is sufficient.

Comments about Haus evaluation

Evaluation of the Haus platform was challenging due to scarce public information about the company. Haus includes few product screenshots on its website, lacks a public demo where features can be confirmed, has no public support center with technical documentation, and its "trust center" covering IT security matters is confidential.

4. Recast (scoring 11 out of 31): GeoLift module supporting MMM

Overview of Recast

Recast is a marketing measurement company focused on Bayesian Marketing Mix Modeling. Recast has a separate geo-based incrementality testing module called GeoLift by Recast.

Evaluation summary

Recast scores 11 out of 31, placing fourth in this comparison. Recast is fundamentally a Bayesian Marketing Mix Modeling platform that has added a geo testing module to support MMM calibration, rather than a dedicated incrementality testing platform. Recast does not cover A/B test analysis for owned media or Conversion Lift test analysis, and thus cannot serve as a unified experiment library either.

Evaluation of Recast as an Incrementality testing tool: Scores by Category
Each cell shows the vendor's score out of the maximum for that category.
Category Recast
1. Geo Test Analysis 4 / 6
2. A/B Test Analysis 0 / 4
3. Conversion Lift Test Analysis 0 / 5
4. Experiment Recs & Insights 3 / 5
5. Unified Experiment Library 1.5 / 3
6. MMM Integration 1 / 3
7. Enterprise-Grade Platform 1.5 / 5
Total score out of 31 11

Strengths

Solid baseline geo test coverage. Recast scores 4 out of 6 on Geo Test Analysis, including all the basic geo test analysis features, but not execution of geo tests on advertising platforms. 

Geo test design. Recast has features for designing geo tests, including previewing expected test precision before launching the test.

Bayesian MMM. Recast's main product is its Bayesian MMM platform, which can be calibrated by adjusting parameters in a Google Sheet shared by the client and Recast.

Free trial. Recast offers a free trial for its geo testing tool, making it convenient for advertisers to test.

Limitations

No A/B test analysis for owned media. Recast scores 0 out of 4 in this category. Advertisers running A/B tests on catalog or email experiments will need a separate solution.

No Conversion Lift test analysis. Recast does not ingest, analyze, or report on Conversion Lift tests run inside ad platforms. Teams that rely on Meta, Google, or TikTok lift tests cannot bring them into a unified view through Recast. 

Light integration between MMM and incrementality tests. Recast scores 1 out of 3 on MMM Integration. Recast includes a Bayesian MMM that users can calibrate, but calibration happens in a separate Google Sheet outside the Recast product, without a UI-based workflow. As another gap, Recast does not enable comparison of experiment-based priors to attribution-based priors.

Partial Unified Experiment Library. Recast scores 1.5 out of 3. The Recast library includes geo tests, but Recast users will need a separate solution for the full experiment library which includes conversion lift tests and A/B tests.

Weakest enterprise platform coverage. Recast scores 1.5 out of 5 on Enterprise-Grade Platform, the lowest of the four commercial vendors in this comparison. While it holds SOC 2 certification, it has only a partial set of public reference customers among $1B+ revenue brands, and does not offer US/EU data residency options, multi-cloud deployment across AWS, GCP, and Azure, or SSO support. These gaps will be material in enterprise procurement processes.

Best for

Recast's geo testing product is best for existing Recast MMM users who want a lightweight geo testing module to calibrate their MMM, but for whom A/B testing, Conversion Lift integration, and an enterprise-grade experiment library are not priorities.

Comments about Recast evaluation

Recast provides technical documentation for its platform, which made this evaluation easier than for some of the other platforms. At the same time, Recast lacks a public demo that would have made testing Recast's geo test analysis features easier, and this evaluation more comprehensive.

5. Meridian GeoX (scoring 5 out of 31): Open-source Geo testing tool

Overview of Meridian GeoX

Unlike the other four assessed tools, Meridian GeoX is not a commercial vendor-supported platform: it is an open-source incrementality solution from Google, delivered as a code library that integrates with Meridian, Google's open-source Bayesian MMM.

Meridian GeoX was announced as forthcoming and is being introduced in stages, with broader availability expected later in 2026. It supports geo experiments only, and most of the platform-level capabilities measured by this rubric (self-serve UI, unified experiment library, enterprise security certifications, deployment flexibility) do not apply to an open-source library that customers self-host.

We include Meridian GeoX in this comparison for completeness, given its visibility and Google's role in the measurement ecosystem. Readers should interpret the score in light of the structural mismatch between an open-source toolkit and a rubric built for commercial platforms.

Evaluation summary

Meridian GeoX scores 5 out of 31, the lowest in this comparison. This score is expected, as GeoX is not positioned as a full-scale commercial platform for advertisers.

Evaluation of Meridian GeoX as an incrementality testing tool: Scores by Category. Each cell shows the vendor's score out of the maximum for that category.
Category Meridian GeoX
1. Geo Test Analysis 2 / 6
2. A/B Test Analysis 0 / 4
3. Conversion Lift Test Analysis 0 / 5
4. Experiment Recs & Insights 2 / 5
5. Unified Experiment Library 0 / 3
6. MMM Integration 1 / 3
7. Enterprise-Grade Platform 0 / 5
Total score out of 31 5

Strengths

Solid geo experimentation methodology. Meridian GeoX builds on Google's existing open-source repositories for matched-market geo experiments. The methodology has been used by Google teams for several years and is well-documented.

Channel and geo-pair recommendations. Meridian GeoX recommends which Meridian channels to run geo experiments on, and the underlying matched-markets repository recommends control and test geo pairings based on similarity and statistical power. This earns 2 out of 5 on Experiment Recommendations & Insights.

Native integration with Meridian MMM. GeoX results convert into priors that calibrate Meridian's Bayesian MMM, which is itself open source and user-calibratable. For teams already running Meridian in-house, this provides a coherent open-source measurement stack at no licensing cost.

Limitations

No self-serve UI. Meridian GeoX is delivered as Python code and Colab notebooks. Marketers cannot upload data, run analyses, or review results through a web interface. Analyst or data scientist support is required.

Geo-only. Meridian GeoX does not support A/B test analysis for owned media or Conversion Lift test analysis. Teams running multiple incrementality test types will need separate solutions for non-geo use cases.

No experiment library product. There is no central, filterable, role-based-access library for storing and governing experiments. Teams that want unified experiment management across geo, A/B, and Conversion Lift tests will need to build this layer themselves.

No vendor-supported enterprise platform features. Because Meridian GeoX is open-source software that customers self-host, security certifications (SOC 2, ISO 27001), data residency options, multi-cloud deployment, and SSO are the deployer's responsibility rather than vendor-provided. Meridian GeoX scores 0 out of 5 on Enterprise-Grade Platform for this reason.

Not yet generally available. Meridian GeoX is positioned as forthcoming, with Google indicating that broader availability and packaged deployment via Meridian Studio are still in progress. Some scoring reflects current open-source repositories rather than a fully released product.

Best for

Meridian GeoX is best for advertisers with in-house data science capacity who run Meridian as their MMM and want a first-party, open-source way to generate geo experiment priors for MMM calibration, with no licensing cost and full code transparency. Meridian GeoX is not intended as a substitute for a commercial incrementality testing platform: organizations that need a self-serve UI, multiple incrementality test types in a unified library, AI-assisted readouts, or vendor-supported enterprise features will find these requirements better met by the commercial platforms in this comparison.

Frequently Asked Questions (FAQ)

1. What is incrementality testing, and how does it differ from marketing mix modeling?

Incrementality testing measures the causal impact of marketing by comparing a treated group with a control group to see what would have happened without the ads. Marketing mix modeling (MMM) analyzes historical time-series data to estimate incremental effects across channels. The two approaches complement each other and are often used together.

2. What types of experiments are used in incrementality testing?

Modern programs rely on three designs: geo tests compare regions to estimate lift; A/B tests randomize at the user level to test owned media like catalogs; and conversion lift tests run inside platforms such as Meta, Google or TikTok. Each produces a point estimate with a confidence interval.

3. Why is a unified experiment library important for large advertisers?

Large organizations run dozens or hundreds of experiments across teams and regions. A unified library stores every geo test, A/B test and conversion lift test in a searchable, filterable place. This avoids duplicative testing and enables compounding learning over time.

4. How does experiment data integrate with a marketing mix model?

Experiments provide “ground truth” priors for specific channels and windows, while the MMM offers continuous cross-channel measurement. Advanced tools let users calibrate a Bayesian MMM using experiment results and compare experiment-based priors to attribution-based priors. Integration ensures consistent measurement across channels.

5. What should you look for when choosing an incrementality testing tool?

Key factors include coverage of different test types (geo tests, A/B tests, conversion lift tests), a unified experiment library, integration with MMM, experiment recommendations & insights, and enterprise-grade platform features.

6. What makes Sellforte’s platform stand out as an incrementality testing platform?

Sellforte uniquely covers geo tests, A/B tests, and conversion lift test analysis, and unifies results into a single library. Sellforte also integrates them into a user-calibratable Bayesian MMM. It offers enterprise-grade features such as EU/US data residency, multi-cloud deployment, and SSO support.

7. What are the typical limitations of incrementality testing tool vendors?

Most vendors provide only geo testing and ignore A/B tests and conversion lift tests. This gap also means that they are unable to provide a unified experiment library.

Limitations & Disclosures

Author affiliation. This comparison is published by Sellforte, one of the platforms evaluated. We have made every effort to score competitors fairly using the same evidence standards we applied to ourselves, and we deliberately included criteria that customers value but where Sellforte does not score perfectly. That said, readers should weigh our affiliation when interpreting the results, and we encourage cross-referencing with vendor documentation, customer references, and independent analyst coverage.

Scope. This comparison primarily focuses on recognized commercial vendors. MMM-only vendors without an experimentation layer are out of scope.

Snapshot in time. Vendor capabilities evolve quickly, particularly in AI-powered insights, ad platform integrations, and security certifications. This comparison reflects publicly available information as of May 2026. Features released after that date are not yet reflected.

Corrections welcome. We will revise this comparison as new information becomes available. If a vendor believes a score is inaccurate, we welcome corrections with supporting documentation. Please email sales@sellforte.com.

Recommendation for buyers. Use this comparison as a structured starting point for your own evaluation, not as a final answer. We strongly recommend conducting evaluation calls or demos with at least three of the top-scoring vendors to verify fit against your organization's specific requirements. Pricing, services model, regional support, and integrations with your data infrastructure are factors no rubric can fully capture.

Further Reading & Resources

Methodology

AI and Agents in Marketing Measurement

Use-cases on media spend optimization

Original Marketing Measurement Research by Sellforte Labs

Practical hands-on guides

Marketing Measurement Tools, software and vendors

MMM tools, software and vendors for Ecommerce

MMM tools, software and vendors more broadly

Review Sellforte SaaS Product Features

Authors

Lauri Potka

Lauri Potka is the Chief Operating Officer at Sellforte, with over 15 years of experience in Marketing Mix Modeling, marketing measurement, and media spend optimization. Before joining Sellforte, he worked as a management consultant at the Boston Consulting Group, advising some of the world’s largest advertisers on data-driven marketing optimization. Follow Lauri on LinkedIn, where he is one of the leading voices in MMM and marketing measurement.

Kacper Solarski

Kacper Solarski is a Lead Data Scientist at Sellforte, focused on developing Sellforte's Experiments product. Kacper is one of the most senior data scientists and developers at Sellforte, where he has implemented Marketing Mix Models and incrementality testing solutions for Sellforte customers while developing Sellforte's platform. Follow Kacper on LinkedIn.

Juha Nuutinen

Juha Nuutinen is the Chief Executive Officer and co-founder at Sellforte, with over 15 years of experience in optimizing marketing spend and promotional activity for the largest advertisers in the world. Before co-founding Sellforte, he worked as a management consultant at the Boston Consulting Group, specializing in promotion optimization. Follow Juha on LinkedIn, where he actively shares his views on marketing measurement.