
Navigating ad blindness in retail media: How to measure, manage, and make sure your ads are working

By:
Henry Senger


November 16, 2023

Ad blindness is when users become conditioned to ignore ads after prolonged exposure to “bad ads.” This is a self-inflicted wound for ad businesses: the longer users are exposed to bad ads on your site, the less likely they become to click on any ad.

A 2015 study by Google on ad blindness demonstrated that the quality of ads is a more significant driver of ad blindness than the quantity. The study indicated that fewer but higher-quality ads can mitigate ad blindness, while an abundance of low-quality ads exacerbates it. More concerning, users can take a long time to unlearn this behavior even after you start showing high-quality ads: the decreased satisfaction that breeds ad blindness leaves a lasting dent in long-term engagement.

This problem isn't limited to display and banner advertising — it affects all types of ads, including native ones like sponsored products. Users adapt and can become “blind” to anything labeled “sponsored”. Ad blindness has various causes: too many ads, ad fatigue, misleading content, or ads that aren't relevant to the user.

In retail media, if the ads are truly native and the balance between ads and organic content is well thought out, the primary cause of ad blindness is typically irrelevant or low-quality ads. For instance, repeatedly showing dairy product ads to lactose-intolerant individuals or laundry detergent ads to people who use a laundry service annoys users without converting them into buyers.

Recognizing the drawbacks of ad blindness to user engagement is just the beginning. The next step is to accurately measure ad blindness and then ensure you take steps to avoid it or solve it with the right tools to deliver relevant and meaningful ads.

How to measure ad blindness


To accurately measure the rate at which users skip or ignore ads, and to eliminate confounding factors, ad blindness is best measured in an A/A setting (two cohorts receiving identical treatment) rather than an A/B setting, where treatment A is compared to treatment B.

The ad blindness experiment setup:

  • A/A pre-period: For the experiment to be free of interference from the residual effects of prior experiments, we suggest shuffling user cohorts before the experiment begins.
  • A/B experiment period: Following the A/A pre-period, move on to the A/B experiment period. In this phase, users are divided into two cohorts, A and B. Users in cohort A continue to see ads as usual. Users in cohort B receive a different ad treatment, such as not being shown any ads or being shown higher-quality ads (i.e., weighting quality scores more heavily than bids).
  • A/A post-period: After the A/B experiment, conduct an A/A post-period. During this phase, you can measure and compare the ad engagement rates between users who were in cohort A (exposed to ads as usual) and users who were in cohort B (shown no ads or better ads) during the experiment period. The differences in engagement rates between the two cohorts reveal the impact of ad blindness, or user learning effects from exposure to lower-quality ads.
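The cohort split and pre-period shuffle above rely on deterministic, uniform user bucketing. A minimal sketch using salted hashing follows; the function and parameter names (`assign_cohort`, `experiment_salt`) are illustrative assumptions, not an actual product API:

```python
import hashlib

def assign_cohort(user_id: str, experiment_salt: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into cohort A or B.

    Hashing the user ID with an experiment-specific salt yields a stable,
    approximately uniform assignment. Changing the salt reshuffles cohorts,
    which is the "shuffle user cohorts" step of the A/A pre-period.
    """
    digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # uniform value in [0, 1)
    return "B" if bucket < treatment_share else "A"
```

Because the assignment depends only on the user ID and the salt, any server can compute it without shared state, and a new salt gives an independent re-randomization.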


User learning may take an extended period, and waiting for months for the post-period to measure learning effects might not be feasible. As an alternative, you can consider the following setup:

  • B/B experiments at fixed intervals: Start a new copy of the experiment with a new user cohort at fixed intervals (e.g., daily). Each new cohort is exposed to treatment B. This creates a B/B experiment setting where the key difference between cohorts is the duration each user cohort was exposed to treatment B. This measures the long-term learning effect.
  • Daily shuffling B' test: Instead of maintaining completely separate, non-overlapping cohorts, you can change the hash function used to assign users to experiment buckets. By modifying the hash function periodically, you re-randomize which users end up in the experiment. If the experiment's traffic share is small enough, the number of users repeatedly falling into the experiment is negligible.
  • Measurement of differences: Throughout the experiment period, compare the ad engagement differences between the B cohort and the B’ cohort. This analysis provides valuable insights into the size of the long-term learning effect, allowing you to understand how user behavior changes over time.
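For the measurement step, one generic way to test whether the long-running B cohort engages less than the freshly shuffled B' cohort is a two-proportion z-test on click-through rates. This is a standard statistical sketch under that assumption, not a methodology prescribed by the post:

```python
from math import sqrt

def ctr_z_score(clicks_b: int, views_b: int, clicks_bp: int, views_bp: int) -> float:
    """Two-proportion z-score for the CTR difference between the
    long-exposed B cohort and the freshly shuffled B' cohort.

    A persistently negative z-score for the older cohort suggests a
    learning (ad blindness) effect accumulating with exposure time.
    """
    p_b = clicks_b / views_b
    p_bp = clicks_bp / views_bp
    pooled = (clicks_b + clicks_bp) / (views_b + views_bp)
    se = sqrt(pooled * (1 - pooled) * (1 / views_b + 1 / views_bp))
    return (p_b - p_bp) / se
```

For example, 100 clicks on 10,000 views for B versus 130 clicks on 10,000 views for B' yields a z-score near -2, i.e., weak evidence that the older cohort has developed ad blindness.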

By following this experimental setup, you can effectively measure ad blindness and the impact of long-term learning on user interactions with ads.

How to solve (or avoid) ad blindness


Solving ad blindness requires the right tools, but it's a slow process. The best approach is to avoid serving bad ads altogether.

The way to solve, or better yet avoid, ad blindness is simple: produce better ads.

To produce better ads, focus on these two key aspects:

  • Content: Making the ads look better and more creative
  • Context: Delivering more relevant ads by using first-party user events

Most teams focus on content; in retail media, however, context may actually be the easier and more valuable part to address. Personalization through data-driven signals powered by machine learning can yield significant improvements.

Take, for example, a user purchasing soccer cleats and, later, socks. By recognizing this pattern through machine learning, you can intelligently suggest shin guards, which wouldn't be as relevant if you only looked at the sock purchase. Machine learning models can identify such connections, offering an in-store-like experience to users.
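The cleats-and-socks pattern can be illustrated with a toy item co-occurrence counter. This is a deliberately simple stand-in for the machine learning models the post describes, with hypothetical function names:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(purchase_histories):
    """Count how often each pair of items appears in the same user's history.

    Items frequently bought alongside a user's recent purchases become
    candidate ads for that user.
    """
    pair_counts = Counter()
    for history in purchase_histories:
        for a, b in combinations(sorted(set(history)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def recommend(pair_counts, basket, top_n=3):
    """Score catalog items by total co-occurrence with the user's basket."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a in basket and b not in basket:
            scores[b] += n
        elif b in basket and a not in basket:
            scores[a] += n
    return [item for item, _ in scores.most_common(top_n)]
```

With a few histories where cleats, socks, and shin guards co-occur, a basket of cleats and socks surfaces shin guards first, even though the sock purchase alone would not. Production systems would use learned embeddings or sequence models rather than raw counts, but the signal being exploited is the same.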

The effects of ad blindness


It's crucial to recognize that ad blindness affects all ad formats, extending to native content like retail media. The underlying causes are diverse, ranging from ad overload to irrelevance. A holistic approach is necessary to address this issue effectively by emphasizing both content and context. Focusing on personalized user experiences powered by machine learning offers a promising path forward.

If you want to further explore these strategies, reach out to us at Moloco.

Henry Senger

Senior Director of Engineering, Moloco
