Clarigital · Clarity in Digital Marketing
Programmatic Advertising · Guide 8

Programmatic Measurement · Viewability, Attribution & Fraud

Measuring programmatic advertising is fundamentally different from measuring search or social. There is no click-through rate worth optimising for display. Impressions served does not equal impressions seen. Conversions attributed to programmatic may not have been caused by programmatic. This guide covers the complete programmatic measurement framework — the metrics that matter, the ones that mislead, and the methodologies that actually tell you whether your programmatic investment is delivering business outcomes.

Programmatic Advertising · 5,200 words · Updated Apr 2026

The Programmatic Measurement Framework

Programmatic measurement answers three questions: Was the ad delivered? Was the ad seen? Did the ad contribute to a measurable outcome? Each question requires different measurement methodology and different data sources.

| Question | Metric | Data Source | Limitation |
| --- | --- | --- | --- |
| Was the ad delivered? | Impressions served, win rate, budget pacing | DSP and ad server data | Served ≠ seen; delivery data is self-reported by the DSP |
| Was the ad seen? | Viewability rate, video completion rate | Independent verification vendor (IAS, DoubleVerify) | Viewability measures pixels visible, not actual human attention |
| Did the ad contribute to outcomes? | Conversions, ROAS, brand lift, incremental reach | Attribution, brand lift studies, holdout tests | Attribution overstates impact; brand lift requires panel research |

Viewability: The IAB Standard

Viewability measures whether an ad had the opportunity to be seen by a real human. The IAB's official viewability standard, defined in the IAB Viewability Guidelines, is: for display ads, at least 50% of the ad's pixels must be visible on screen for at least 1 continuous second; for video ads, at least 50% of the player must be visible for at least 2 continuous seconds.
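The two thresholds can be expressed as a simple check. This is a sketch, not a vendor implementation; the function name and the fraction-based inputs are illustrative:

```python
def is_viewable(pixels_visible: float, continuous_seconds: float,
                ad_type: str = "display") -> bool:
    """Check an impression against the IAB/MRC viewability standard.

    display: >= 50% of pixels visible for >= 1 continuous second
    video:   >= 50% of the player visible for >= 2 continuous seconds
    pixels_visible is a fraction in [0, 1].
    """
    min_seconds = 2.0 if ad_type == "video" else 1.0
    return pixels_visible >= 0.5 and continuous_seconds >= min_seconds
```

Note that the same exposure can pass for display and fail for video: `is_viewable(0.6, 1.2)` is true, while `is_viewable(0.6, 1.2, "video")` is false because video requires two continuous seconds.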

The 50% threshold means that a half-visible ad below the fold counts as "viewable" even if a large portion of it is hidden. This is a measurement standard, not an attention standard — an ad can be technically viewable while being completely ignored by the user. The Media Rating Council (MRC) accredits viewability measurement vendors; MRC accreditation means the measurement methodology meets defined accuracy standards.

Average display viewability

~60%

Industry average display viewability rate — approximately 40% of display impressions are never actually seen

Video viewability

~70%

Video ads have higher average viewability than display, but a significant portion still goes unseen

CTV viewability

95%+

CTV viewability rates are near-perfect due to the full-screen, TV-delivered format

Viewability benchmarks vary by format and placement position: above-the-fold display placements typically achieve 60–75% viewability; below-the-fold placements may achieve 40–50%; native ads achieve higher viewability (70–80%) due to their integration into content flow. Paying a viewable CPM (vCPM) premium is generally justified for brand awareness campaigns where impression quality is the primary goal.
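The vCPM trade-off is easy to quantify: dividing the served CPM by the viewability rate gives the effective cost per 1,000 viewable impressions. A minimal sketch (the function name is illustrative):

```python
def effective_vcpm(cpm: float, viewability_rate: float) -> float:
    """Cost per 1,000 *viewable* impressions for a buy priced on served CPM."""
    if not 0 < viewability_rate <= 1:
        raise ValueError("viewability_rate must be in (0, 1]")
    return cpm / viewability_rate

# A $2.00 CPM placement at 50% viewability costs $4.00 per 1,000 viewable
# impressions -- worse value than a $3.00 CPM placement at 80% viewability,
# which costs $3.75 per 1,000 viewable impressions.
```

This is why a cheap below-the-fold buy can be more expensive than it looks: the lower the viewability rate, the more the effective viewable cost inflates.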

Invalid Traffic and Ad Fraud

Invalid traffic (IVT) is any ad impression that was not generated by a real human with genuine interest in the content. The IAB and MRC classify IVT into two categories: General Invalid Traffic (GIVT) — traffic from known data centres, bots, crawlers, and other identified non-human sources that can be filtered automatically; and Sophisticated Invalid Traffic (SIVT) — more complex fraud involving human-like bot behaviour, hijacked devices, or malicious ad stacking that requires more sophisticated detection.
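The "filtered automatically" part of GIVT can be sketched as list-based rules. The user-agent tokens and IP range below are placeholders; real filtering uses the maintained IAB/ABC International Spiders & Bots List and up-to-date datacentre IP databases:

```python
import ipaddress

# Illustrative lists only -- not a real bot or datacentre registry.
KNOWN_BOT_UA_TOKENS = ("bot", "crawler", "spider", "headless")
DATACENTRE_NETWORKS = [ipaddress.ip_network("192.0.2.0/24")]  # example range

def is_givt(user_agent: str, ip: str) -> bool:
    """Flag traffic matching known non-human signatures (GIVT)."""
    ua = user_agent.lower()
    if any(token in ua for token in KNOWN_BOT_UA_TOKENS):
        return True
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTRE_NETWORKS)
```

SIVT, by contrast, deliberately avoids these signatures (realistic user agents, residential IPs, human-like interaction patterns), which is why it requires behavioural detection rather than list lookups.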

The ANA's documented programmatic fraud research estimates that ad fraud costs the global advertising industry billions of dollars annually. Bot networks generate fake impressions at scale by automating browser interactions on programmatic inventory — inflating impression counts and CPM revenue for fraudulent publishers while delivering zero value to advertisers.

Ad fraud types relevant to programmatic buyers:

  • Bot traffic: Automated script-generated impressions on fraudulent or hijacked inventory.
  • Domain spoofing: Low-quality inventory falsely declared as premium publisher inventory in the bid request — an impression served on a fraudulent site claims to be on a premium news site to attract higher CPMs. Ads.txt and sellers.json verification counter this by requiring publishers to declare their authorised sellers.
  • Ad stacking: Multiple ads stacked on top of each other in a single placement — only the top ad is visible but all generate impression counts.
  • Click fraud: Automated clicking on ads to generate false click signals — more prevalent in CPC-billed campaigns than impression-billed programmatic.
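The ads.txt check mentioned under domain spoofing reduces to a parser plus a lookup: the buyer fetches the publisher domain's /ads.txt file and verifies that the (exchange, seller account ID) pair from the bid request appears in it. A sketch with illustrative names and an invented example domain:

```python
def parse_ads_txt(content: str) -> set:
    """Parse ads.txt lines into (ad system domain, seller account id) pairs."""
    entries = set()
    for line in content.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:  # domain, account id, relationship[, cert id]
            entries.add((fields[0].lower(), fields[1]))
    return entries

def seller_is_authorised(ads_txt: str, exchange: str, account_id: str) -> bool:
    """True if the bid request's seller is declared by the publisher."""
    return (exchange.lower(), account_id) in parse_ads_txt(ads_txt)
```

A bid request claiming premium inventory from a seller absent from the publisher's ads.txt file is either misdeclared or spoofed, and should be skipped.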

Attribution for Display Advertising

Display advertising attribution in programmatic works differently from search attribution because display ads rarely generate direct clicks. Display attribution uses post-view (view-through) attribution: if a user is served a display impression and then converts within a defined window (typically 1–7 days), the conversion is attributed to the display campaign.
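In code, the post-view rule is a simple window comparison, assuming impression and conversion logs can be joined on a user identifier. A minimal sketch (names illustrative):

```python
from datetime import datetime, timedelta

def attribute_post_view(impression_time: datetime,
                        conversion_time: datetime,
                        window_days: int = 7) -> bool:
    """Credit a conversion to the campaign if it follows an impression
    within the lookback window (post-view / view-through attribution)."""
    delta = conversion_time - impression_time
    return timedelta(0) <= delta <= timedelta(days=window_days)
```

The `timedelta(0) <=` guard matters: a conversion that happened before the impression must never be credited to it.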

This model is problematic without a control group. If a user would have converted naturally regardless of seeing the display ad, attributing that conversion to the display campaign overstates its impact. A user who visits a retailer's website, leaves without purchasing, sees a display retargeting ad later, and then purchases through a direct site visit will have that conversion credited to the display ad — even if they would have converted anyway.

For proper display attribution, see the attribution modelling guide — particularly the discussion of data-driven attribution and incrementality testing.

Post-View Attribution: The Inflation Problem

Post-view attribution is the standard programmatic measurement approach — and the one most prone to inflating programmatic's measured performance. The inflation problem arises because high-reach programmatic campaigns reach a large percentage of the population, meaning that many people who convert naturally (without any causal influence from the ads) will have been exposed to an impression and will therefore be credited as programmatic conversions in a post-view attribution model.

A thought experiment: if a programmatic campaign reaches 80% of a city's population, and 3% of that city's population makes a relevant purchase that week, then approximately 80% of all those purchases will be attributed to the programmatic campaign — even if the campaign had zero causal impact on any of them. This is why post-view conversion numbers in DSP reporting often look impressive while holdout tests of the same campaign show much lower (or zero) incremental impact.
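The arithmetic of the thought experiment, worked through for a city of one million people (the population figure is an assumption added for concreteness):

```python
# City of 1,000,000 people; campaign reaches 80%; 3% purchase this week.
population = 1_000_000
reach_rate = 0.80
purchase_rate = 0.03

purchasers = population * purchase_rate  # 30,000 purchases this week
# With zero causal impact and exposure independent of purchasing, the share
# of purchasers who happened to see an ad is simply the reach rate:
attributed = purchasers * reach_rate     # ~24,000 "post-view conversions"
print(f"{attributed / purchasers:.0%} of purchases credited to the campaign")
```

The model credits roughly 24,000 of 30,000 purchases to a campaign that, by construction, caused none of them. Reach alone manufactures the attribution.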

Incrementality and Holdout Testing

Incrementality testing is the gold standard of programmatic measurement. An incrementality test withholds the programmatic campaign from a randomly selected control group (typically 10–20% of the target audience) while serving the campaign normally to the treatment group. The difference in conversion rate between the exposed and control groups represents the incremental conversions caused by the campaign — the additional conversions that would not have occurred without advertising.

Incrementality testing is more methodologically robust than any attribution model because it does not rely on assumptions about which touchpoints caused conversions — it directly measures the causal effect of advertising through random assignment. The trade-off is cost: withholding ads from the control group means those impressions are not served, reducing the campaign's reach. For high-value campaigns where measurement accuracy is critical, this cost is worth paying.
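Once the randomised split exists, the lift calculation itself is straightforward. A sketch with illustrative names and sample numbers:

```python
def incremental_lift(treatment_conv: int, treatment_n: int,
                     control_conv: int, control_n: int) -> dict:
    """Incremental conversions and relative lift from a randomised holdout."""
    cr_treatment = treatment_conv / treatment_n
    cr_control = control_conv / control_n
    # Conversions caused by the ads: the rate difference scaled to the
    # treated population.
    incremental = (cr_treatment - cr_control) * treatment_n
    lift = (cr_treatment - cr_control) / cr_control if cr_control else float("inf")
    return {"incremental_conversions": incremental, "relative_lift": lift}

# Example: 900,000 treated users convert at 1.5%, a 100,000-user holdout
# converts at 1.2% -- only 2,700 of the 13,500 treated-group conversions
# are incremental, a 25% relative lift.
result = incremental_lift(13_500, 900_000, 1_200, 100_000)
```

Compare that 2,700 with what post-view attribution would report (all 13,500 exposed conversions) to see how large the inflation gap can be.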

Most major DSPs (DV360, The Trade Desk, Amazon DSP) offer built-in holdout testing tools. Third-party measurement providers (Measured, Rockerbox) specialise in incrementality testing across programmatic and other channels.

Brand Lift Measurement

Brand lift studies measure the direct impact of programmatic advertising on brand metrics — awareness, ad recall, message association, purchase intent — rather than on conversion metrics. They are conducted by serving surveys to both exposed (saw the ads) and control (did not see the ads) audiences and measuring the difference in brand metric responses.

Brand lift is the appropriate primary measurement for brand awareness and consideration campaigns where the goal is to move people along the purchase funnel rather than to drive immediate conversions. A programmatic campaign for a new product launch may have zero post-view conversions in the first week (because most people need multiple exposures and a consideration period before purchasing) but generate significant brand lift in awareness and purchase intent — which is the signal that the campaign is working.
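Brand lift reporting reduces to comparing response rates between the exposed and control survey groups. A minimal sketch with illustrative names (real studies add significance testing and audience matching on top of this):

```python
def brand_lift(exposed_yes: int, exposed_n: int,
               control_yes: int, control_n: int) -> tuple:
    """Absolute and relative lift in a brand metric (e.g. aided awareness)
    between respondents who saw the ads and a matched control group."""
    p_exposed = exposed_yes / exposed_n
    p_control = control_yes / control_n
    absolute = p_exposed - p_control          # percentage-point lift
    relative = absolute / p_control           # lift as a share of baseline
    return absolute, relative

# 420 of 1,000 exposed respondents are aware vs 350 of 1,000 in control:
# a 7-point absolute lift, a 20% relative lift.
```
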

Programmatic Reporting Metrics

| Metric | What It Measures | Good For | Watch Out For |
| --- | --- | --- | --- |
| Impressions served | Total ad impressions delivered by the DSP | Campaign delivery tracking | Includes non-viewable and invalid impressions |
| Viewability rate | % of impressions meeting the IAB viewability standard | Assessing impression quality | Viewable ≠ seen or noticed |
| Win rate | % of bid requests where the DSP won the auction | Diagnosing delivery issues; bid competitiveness | Low win rate may indicate underbidding or over-targeting |
| eCPM | Effective cost per thousand impressions | Cost benchmarking; supply path evaluation | Low eCPM with low viewability is not efficient |
| VCR (Video Completion Rate) | % of video impressions watched to completion | Video campaign effectiveness | 100% VCR on a non-skippable format is table stakes, not impressive |
| Post-view conversions | Conversions attributed to programmatic after an impression | Directional performance indicator | Substantially inflated without a holdout test comparison |
| Brand lift (awareness/recall) | Change in brand metrics between exposed/control groups | Brand campaign effectiveness | Requires panel research; sample-size requirements |
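Two of the table's metrics are pure arithmetic and worth pinning down precisely. Sketch definitions (names illustrative):

```python
def ecpm(spend: float, impressions: int) -> float:
    """Effective cost per thousand impressions served."""
    return spend / impressions * 1000

def win_rate(wins: int, bid_requests: int) -> float:
    """Share of bid requests where the DSP won the auction."""
    return wins / bid_requests

# $500 of spend across 250,000 impressions is a $2.00 eCPM;
# 30,000 wins on 200,000 bid requests is a 15% win rate.
```
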

The Core Measurement Challenges

Three structural challenges limit programmatic measurement accuracy and cannot be fully resolved with current industry infrastructure:

Cross-device fragmentation: A user who sees a programmatic ad on their desktop at work, considers the product on their mobile phone at lunch, and purchases on their home laptop may have three different device IDs. Without cross-device identity resolution, the display impression and the conversion may not be linkable, making the display campaign's contribution unmeasurable. Cross-device attribution is addressed through probabilistic and deterministic identity graphs, but neither approach is perfectly accurate.
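Deterministic identity resolution can be sketched as a union-find over device IDs that share a login key such as a hashed email: any two devices seen with the same key collapse into one person cluster. This illustrates the principle only, not any vendor's identity graph:

```python
def resolve_identities(logins: list) -> dict:
    """Map each device ID to a person-cluster index, merging devices
    that share any login key (e.g. a hashed email).

    logins: list of (device_id, login_key) observations.
    """
    parent = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    for device_id, login_key in logins:
        union(device_id, "login:" + login_key)

    clusters, roots = {}, {}
    for device_id in sorted({d for d, _ in logins}):
        clusters[device_id] = roots.setdefault(find(device_id), len(roots))
    return clusters
```

Probabilistic graphs replace the shared login key with statistical signals (IP, location, usage patterns), trading coverage for accuracy; the clustering step is conceptually the same.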

Walled garden measurement gaps: Programmatic display measurement cannot follow users into walled gardens (Facebook, Google, Amazon) — the platforms where a significant proportion of final purchases occur. A display campaign that builds awareness but drives conversions that happen through a Google search or an Amazon product page may show low measured conversion rates while actually driving substantial business impact through these unattributed paths.

Long attribution windows: Many product categories have long consideration cycles — automotive, insurance, B2B software. A display impression may influence a purchase decision 30–90 days later. Standard attribution windows of 1–7 days miss this long-cycle impact entirely.

Building a Measurement Framework

A practical programmatic measurement framework layers methods by reliability and cost:

  1. Always-on verification: IAS or DoubleVerify integrated with DSP for continuous viewability, brand safety, and IVT reporting on every campaign.
  2. Post-view attribution as directional indicator: Track post-view conversions in DSP and attribution platform, but treat as an upper bound on performance — the actual contribution is likely lower.
  3. Quarterly holdout tests: For always-on programmatic campaigns, run 10–20% holdout groups quarterly to calibrate the actual incremental contribution versus post-view attribution numbers.
  4. Annual brand lift study: For brand awareness campaigns, commission an annual brand lift study to track the cumulative effect of programmatic on brand metrics.
  5. Marketing Mix Modelling: For larger budgets, MMM provides the most complete picture of programmatic's contribution to business outcomes by modelling all marketing channels simultaneously against actual revenue data.

Sources & Further Reading

Source integrity

All frameworks, data, and examples in this guide draw from official documentation, peer-reviewed research, and documented practitioner case studies. We learn from primary sources and explain them in our own words.

Research · ANA — Ad Fraud Research

ANA's documented annual research on programmatic ad fraud and invalid traffic costs.

Official · IAB — Invalid Traffic Guidelines

IAB and MRC official invalid traffic detection standards and classification framework.

Official · MRC — Media Rating Council

MRC's official documentation on viewability standards and measurement accreditation.

Official · IAS — Measurement Resources

Integral Ad Science's documented measurement framework for viewability and ad fraud detection.
