Back in August of 2016, at the start of a heavy political advertising season, the Ad Operations team at Gimbal began receiving consistent complaints from one of our partners about questionable placements showing up in the performance reports for a campaign we were running for them.

As with all our clients, we had brand- and candidate-safe whitelists in place to ensure the right message reached the right audience.

Almost immediately after launching the campaign, our placement lists were dotted with websites featuring violence, fake news, profanity, dating, pornography, and other placements that simply didn’t fit the user base of the client.

Building and Maintaining Whitelists

My Ad Operations team and I worked furiously over the following hours and days to compile what we felt was the “perfect” whitelist, composed solely of top-tier, brand-safe apps and sites from the likes of Fox News, Huffington Post, Words With Friends, and more.

For the next few days, we worked to ensure that this brand-safe list was vetted by our partners and then applied to all their campaigns. We were confident that no leakage would occur and that we were golden.

We were wrong.

The day after we re-launched the campaigns with this whitelist, we were met with furious emails from our partner again. According to their third-party reports, we were still serving ads on the same sorts of questionable placements.

These sites were not on our hand-curated whitelist, so we were left scrambling to figure out how this could have happened.

We checked our internal placement reports and to our surprise, none of those placements appeared in our report; our whitelist was “working.”

Persistent Masking and Leakage

What the hell was going on? How could the results from our internal report be so vastly different from the third-party report?

While the overall percentage of impressions on undesirable sites had dropped significantly, any leakage was unacceptable. And we were determined to get to the bottom of the issue.

We went back and forth for a month with a partner that was rightfully pissed off, testing various theories on what was causing the leakage. We tested the following:

  • Matching the third-party impression counts for a specific day against our internal reports to find any placements with a similar number of impressions delivered, then blacklisting those placements to see whether they appeared in the third-party report the following day
  • Checking with SSPs to see whether any of the inventory in question had been labeled incorrectly
  • Verifying that placement names matched either the web domain or the mobile web URL, then blacklisting any that didn’t
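The first of those checks – matching impression counts between the two reports – is easy to automate. Here is a minimal sketch in Python; the report structure, the placement names, and the 10% tolerance are all illustrative assumptions, not our actual tooling:

```python
def suspect_placements(third_party_count, internal_report, tolerance=0.10):
    """Return internal placements whose delivered impressions fall within
    `tolerance` (as a fraction) of a third-party line item's impression count.

    internal_report: dict mapping placement name -> impressions delivered.
    """
    low = third_party_count * (1 - tolerance)
    high = third_party_count * (1 + tolerance)
    return sorted(
        placement
        for placement, impressions in internal_report.items()
        if low <= impressions <= high
    )

# Hypothetical internal report for one day
report = {
    "news-app-banner": 9800,
    "puzzle-game-interstitial": 15200,
    "weather-widget": 10100,
}

# A third-party report shows ~10,000 impressions on a suspect domain;
# these internal placements are the candidates to blacklist.
print(suspect_placements(10000, report))
```

Anything this flags is only a candidate – the blacklist-and-wait step, checking whether the line item vanishes from the next day’s third-party report, is what actually confirms or clears it.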

Accidental or Purposeful Deceit?

After much back and forth, testing, checking, and re-testing, we still didn’t have a definitive resolution. We were still seeing those placements coming through in the client’s third-party reports.

We began to think that this wasn’t happening accidentally, but rather intentionally.

What we discovered was that a large number of the impressions we were bidding on actually came from different domains than the ones represented by the SSPs.

This is not the fault of the SSPs themselves – but rather of who the SSPs work with.

Oftentimes, networks of publishers work with SSPs to extend their reach to buyers, but these networks can mask their inventory and pass it off as something more coveted – premium sources such as Fox News, Huffington Post, and Words With Friends – in an effort to charge more for it.

The Cost of Manual Intervention

Due to this “spoofing,” there was no way for us to definitively isolate which offenders to remove from our whitelist. It took five weeks of daily meetings between Operations, Account Managers, and Engineers across our company and the SSPs, plus a plethora of trial-and-error test campaigns, before we were able to weed out 99.9% of the undesired traffic and finalize the whitelist.

We went from seeing 20,000–30,000 “leaked” impressions a day across all our campaigns to seeing around 50–100 a day.

All it took was approximately 75 hours of time from three Ad Operations team members and two engineers to arrive at this theory and resolution. Not bad for nearly $600 per hour if you add up our salaries.

ads.txt by IAB

A Programmatic, Industry-Wide Solution

Several months later, the IAB announced ads.txt, an industry-wide initiative to address and weed out fraudulently represented inventory in the open exchange. It lets publishers place a crawlable text file on their site that tells the buyer (a DSP) which companies are verified to sell their ad space, listing each authorized exchange’s domain along with the Seller Account ID, Payment Type, and TAGID.

Our team has attended the IAB’s open working sessions to help define the criteria and methodology by which ads.txt will eventually be implemented.

In the example provided by the IAB, it would look like this:
#< SSP/Exchange Domain >, < SellerAccountID >, < PaymentsType >, < TAGID >
< exchange domain >, 12345, DIRECT, AEC242
< exchange domain >, 4536, DIRECT
< exchange domain >, 9675, RESELLER
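On the buy side, honoring ads.txt comes down to fetching the file from the publisher’s domain and checking that a bid request’s exchange/seller pair is declared in it. A rough parser sketch in Python, assuming only the basic comma-separated record format above (comment handling is simplified, the spec’s optional variables are ignored, and the domains here are placeholders):

```python
def parse_ads_txt(text):
    """Parse ads.txt content into (exchange_domain, seller_id, relationship,
    tag_id) tuples. Comment lines and blanks are skipped; TAGID is optional."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) < 3:
            continue  # malformed record
        domain, seller_id, relationship = fields[0].lower(), fields[1], fields[2].upper()
        tag_id = fields[3] if len(fields) > 3 else None
        records.append((domain, seller_id, relationship, tag_id))
    return records

def is_authorized(records, exchange_domain, seller_id):
    """True if the exchange/seller pair is declared in the publisher's ads.txt."""
    return any(
        d == exchange_domain.lower() and s == seller_id
        for d, s, _, _ in records
    )

sample = """\
# < SSP/Exchange Domain >, < SellerAccountID >, < PaymentsType >, < TAGID >
adexchange.example, 12345, DIRECT, AEC242
ssp.example, 4536, DIRECT
"""
records = parse_ads_txt(sample)
print(is_authorized(records, "adexchange.example", "12345"))  # True
print(is_authorized(records, "ssp.example", "9999"))          # False
```

In practice a DSP would run this check against crawled and cached copies of each publisher’s file rather than fetching it live at bid time, since bid responses have millisecond budgets.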

While this initiative is much needed and helps create more transparency, there are pending questions:

  • How long will it take for publishers and SSPs to adopt this?
  • How long will it take DSPs to adapt their bidding to it?
  • The IAB doesn’t have a solution for in-app traffic yet, and the majority of mobile traffic is in-app. If spoofing on the mobile web dries up, will we see an even higher influx of spoofed in-app traffic?

Like most things, the first iteration of a major initiative is often the most exciting because it’s a fresh idea, but it is by no means complete. Ultimately, the rate of adoption and the effectiveness of the solution will depend on open feedback from the advertising ecosystem. That feedback will help further legitimize the viability of automated advertising for brands around the world.