Brand Safety and Online Advertising
IN CLEAR FOCUS this week: brand safety and online advertising. Jonathan Marciano of online ad verification service CHEQ discusses the challenges of bringing full transparency to the digital advertising ecosystem. We discuss some of the unintended consequences of keyword blacklists that negatively impact publishers and consumers, and Jonathan explains how artificial intelligence is being used to improve brand safety solutions.
Adrian Tennant: You’re listening to IN CLEAR FOCUS: A unique perspective on the business of advertising. Produced weekly by Bigeye. Hello, I’m your host, Adrian Tennant, VP of Insights at Bigeye. An audience-focused, creative-driven, full-service advertising agency, we’re based in Orlando, Florida, but serve clients across the United States and beyond. Thank you for joining us. Today, we’re going to be talking about brand safety online. In July of 2017, consumer packaged goods giant Procter & Gamble announced that it had cut back its expenditure on digital advertising by $140 million due to concerns about where ads for its brands were appearing. In explaining its decision, P&G said that it had decided to restrict spending in digital forums where it felt its ads were not being placed according to P&G’s brand standards. Earlier that year, P&G had pulled its advertising from YouTube completely after discovering that ads bought programmatically – that is, via an automated system – had too often appeared next to offensive material, such as hate speech. The industry term for this is negative ad adjacency. In April of last year, 2019, Procter & Gamble’s Chief Brand Officer Marc Pritchard issued a call to entirely reinvent the digital media supply chain. In order to understand why brand safety is a concern, and why attempts to bring transparency to the current ecosystem have trouble keeping pace with its rapid growth, we need to recap the way digital advertising works today. At the top of the supply chain are the marketers for brands that want to reach prospective customers. They have marketing budgets, a portion of which goes to advertising. The brands work with advertising agencies like Bigeye, and we develop the creative – the ads themselves – and plan where the ads will be shown – the media. We then work with a number of third parties, purchasing the inventory or ad space on networks of websites. 
And these sites – which is where digital ads appear – are owned by publishers, who ultimately receive payment for the advertising that appears on their sites. At least, that’s how it is supposed to work. To talk about the complexity of the digital ecosystem and how brands and publishers can ensure their advertising avoids negative adjacency, I’m joined today by Jonathan Marciano, Director of Communications at CHEQ, an artificial intelligence-driven, ad verification service. In his role, Jonathan manages public relations, editorial content, marketing, and communications – and he’s the author of numerous landmark whitepapers that have been covered across the digital advertising industry and in media including The New York Times, Fast Company, AdAge, and CNBC, among others. Welcome to IN CLEAR FOCUS, Jonathan.
Jonathan Marciano: Hi Adrian. Thanks so much for having me.
Adrian Tennant: What is your definition of brand safety?
Jonathan Marciano: Yeah, so brand safety is basically the controls that companies in the digital advertising supply chain use to protect brands against negative impact to their reputation.
Adrian Tennant: Right. Where does ad verification sit within that?
Jonathan Marciano: So as you outlined in your great introduction, there have been a number of brand safety incidents. The most memorable was The Times of London, I think in 2017, which ran a front page revealing that some of the top brands appeared to be programmatically financing videos and stories about terrorism and ISIS. And this was a big wake-up call, I think, for the entire digital advertising space, where basically brands couldn’t understand and couldn’t defend why their ads were being served in such toxic environments. These incidents have continued to grow, and periodically brands have been called out for supporting everything from false information to terrorism, to being advertised against negative stories even about their own companies. And so what came in their place was a number of ad verification players, who were basically there to act as policemen, to prevent the brands from appearing against this negative content.
Adrian Tennant: Now I understand CHEQ published a report called The Brand Safety Effect in October of 2018, which was based on a study you guys undertook with BMW and Hulu. What can you tell us about the research design and the methodology that you employed, first of all?
Jonathan Marciano: Yeah. So this was trying to get to the bottom of whether any of this really matters. Does a consumer who sees a brand next to an ISIS video really care? Do they really associate the brand with that? And we saw the answer was, in fact, yes: they do notice, and they do recognize the brands, and they do have recall of the brand and the context in which sometimes horrific content is delivered. So we found in the research, which was with 2,000 consumers, that there was strong sentiment about the company the brands were keeping online. The brands were shown in adverts in basically unsafe, brand-unsafe content. So, for instance, the classic example was an airline ad next to an article about an airline forcibly removing a passenger; a soda ad in front of content about diabetes. And basically the chief insight was that many consumers viewed this as an intentional endorsement of this negative content. Some of the feedback we had was that it was manipulative, that it was disturbing, that brands appeared to be generating revenue through disaster. And I think the headline figure was that there was a 2.8 times reduction in consumers’ intent to associate with the brand. So really hinting at an effect on the bottom line.
Adrian Tennant: You know, we will often track purchase intent and of course the likelihood to recommend to family and friends. So I think what I’m hearing is that those kinds of perceptions were indeed very negatively impacted as a result of this negative ad adjacency.
Jonathan Marciano: That’s right.
Adrian Tennant: So could you tell me, how do brand safety platforms typically work?
Jonathan Marciano: So up until now, up until new players such as CHEQ, there has been a very crude and, I’d say, pretty unsophisticated solution to the problem, which is the idea of keyword blacklists, created in the name of brand safety. These are basically lists of words deemed too dangerous for advertisers to appear beside, and this particularly affects news content, so The New York Times or, globally, the online news industry. For anyone advertising against these sorts of sites, if an online news story contains words such as “sex,” or “terror,” or “ISIS,” or “killing,” then the concerned advertiser stays clear. They are basically prevented from serving ads against any of that content. And it basically de-monetizes the content for the publisher. The side effect is that the brands’ own reach to engaged consumers reading this type of content is diminished.
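The keyword-blacklist behavior Jonathan describes can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor’s actual implementation; the blacklist terms and example headlines are invented, drawn from the words mentioned in this conversation.

```python
# Hypothetical sketch of a keyword blacklist: any single match anywhere
# in the article text blocks the ad, regardless of context.
BLACKLIST = {"sex", "terror", "isis", "killing", "attack", "dead"}

def is_blocked(article_text: str) -> bool:
    """Return True if any blacklisted keyword appears in the article."""
    words = {w.strip(".,!?\"'").lower() for w in article_text.split()}
    return not BLACKLIST.isdisjoint(words)

# A harmless sports story is blocked because of the lone word "killing":
print(is_blocked("LeBron was killing it in last night's game"))       # True
print(is_blocked("Quarterly earnings beat analyst expectations"))     # False
```

The sports example shows the failure mode discussed below: the list has no notion of context, so benign uses of a word are blocked just as readily as genuinely unsafe ones.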
Adrian Tennant: Why is it that keyword blacklists came to dominate the technology for brand safety?
Jonathan Marciano: I think because it’s a fairly simple, I’d say a little bit of a lazy, solution. It was a way to ostensibly get around the problem, a bit of a band-aid. And in theory it’s not bad as long as the keyword blacklists stay manageable, when you’re talking about a few words like “killing” and “attack.” But what’s happened is that the number of keywords has increased to a crazy degree, to the point where there are now 3,000 keywords on blacklists. And brands are basically having to avoid the news completely, because there’s very, very little content they can appear next to. And that’s affecting the whole system, because it’s hurting brands’ reach, it’s hurting premium publishers, and it’s forcing advertisers to go to the lowest common denominator, to cheaper clicks, which are rife with fraud and bad associations. It’s basically hurting the ultimate purpose of advertising, which is to create leads or convert customers.
Adrian Tennant: Now, just last month, you published The Economic Cost of Keyword Blacklists for Online News Publishers, a report you undertook with the Merrick School of Business at the University of Baltimore. The study quantified the economic price paid by publishers for the incorrect blocking of safe content on premium news sites. What can you tell us about the research design and the methodology that you used for this study?
Jonathan Marciano: So we found, in the US alone, that $2.8 billion is lost by news sites every year – well, in 2019 – because of incorrect flagging of their most-read online content. This does assume that some blocking is justified. But basically we found that 80% of ads served to premium publishers are subject to keyword blacklists. I think the IAB actually came up with a figure close to 95%, which makes us even a bit more conservative in our findings. These blacklists were designed, as we said, on brand safety grounds by these ad verification providers, to prevent brands from appearing next to toxic news content. However, by analyzing the actual stories that would have been blocked by these blacklists across 20 premium news sites, including The New York Times, CNN, and The Guardian, we saw that around 40% of global premium media inventory is actually brand safe. But of that safe content, 57% is being blocked anyway, because these ad verification blacklists basically don’t understand that words such as “killed,” “dead,” “shoot,” and “injury” could be talking about an NBA player killing it in a game, or an injury in a football game. In particular, we also found that LGBTQ news publishers are seeing 73% of their inventory being denied through keywords such as “lesbian,” “bisexual,” and “same-sex.” And so based on the annual US spend of $12 billion on online news advertising, we calculated that at least $2.8 billion a year is lost due to incorrect flagging for brand safety, which, as you can imagine, for a hard-pressed industry that’s already struggling, is a big pain point.
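The scale of the headline figures can be sanity-checked with back-of-envelope arithmetic. This uses only the two numbers cited in the conversation; it is an illustrative check, not the report’s actual methodology.

```python
# Illustrative check of the figures cited above: what share of annual
# US online news ad spend does the estimated loss represent?
us_news_ad_spend = 12e9   # annual US online news advertising spend (cited)
estimated_loss = 2.8e9    # annual loss to incorrect brand-safety flagging (cited)

loss_share = estimated_loss / us_news_ad_spend
print(f"{loss_share:.1%} of US online news ad spend")  # 23.3% of US online news ad spend
```

In other words, the report’s estimate implies that nearly a quarter of the industry’s annual online advertising revenue is lost to mistaken flags.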
Adrian Tennant: One thing that stood out to me in the report was the way in which even really family-friendly content around the launch of the Disney+ service was flagged as unsafe. Can you talk to that?
Jonathan Marciano: Of course. Yeah. Every year Google publishes the most-searched trends of the year, and in 2019, unsurprisingly, there was a lot of anticipation around Disney+, which is, I think, about as brand-safe as you could imagine a story could be – a business which is solidly protective of its brand. But with stories about Disney+ proliferating, we found that many millions of impressions couldn’t be monetized. Basically, there were stories about Disney’s back catalog and what would be on the new platform, and it included things like Star Wars: Attack of the Clones, and simply that one word, “attack,” meant that brands were not able to advertise against this news. One of the Avengers movies is called Infinity War, and we remember when Infinity War came out, a lot of publishers were turning to us and saying that they were seeing a huge drop-off in revenue: all the reviews of the movie, all the tie-ins, all that buzz was basically unmonetizable, put in the same bucket as terrorism and ISIS.
Adrian Tennant: You wrote an opinion piece highlighting the ways that keyword blacklists have unintended yet very adverse effects on LGBTQ consumers and the marketers and publishers who serve the community. Can you talk a little bit more about what you learned?
Jonathan Marciano: Yes. In general, there’s a 57% blocking of safe online news content. But when we started applying our methodology to LGBTQ publications like The Advocate and Pink News, we found that there was a 77% blocking of their safe content. Now, there are stories which are, unarguably, not safe – stories of murder, say, and those are fine not to be monetized – but it was stories about a lesbian couple that were being blocked. It was stories about Killing Eve, because it was a TV series that mentioned a lesbian scenario. It was stories about same-sex couples. They were being blocked because of keywords such as “same-sex” or “lesbian.” The point I wanted to make in the article was that every year in Pride Month, we see every brand on LinkedIn and Twitter and Facebook and all the platforms changing their logos and appearing LGBTQ-friendly. But when it comes to programmatic advertising, they’re basically denying advertising to stories that the community needs to survive. And as a result, some publishers have already been forced to close because they’ve been denied advertising, and it’s a daily struggle. I think that’s unjust and that’s unfair.
Adrian Tennant: We’ve talked a lot about the problem. Talk to us about CHEQ and why your solution is different.
Jonathan Marciano: Thankfully, there have been advances since this decades-old keyword technology. And to be fair, some of the ad verification players who are not CHEQ have also started to realize that this situation is just no longer sustainable, and they’ve started exploring AI options too. But because CHEQ is a fairly young company, founded in 2016, we look at every problem anew. We didn’t have any of the previous keyword solutions, and I think no one coming fresh to this problem would ever say that keywords were a good solution. So CHEQ invested a lot of talent and energy and money in AI to solve the brand safety issue. In this situation, the AI completes sequences to understand the meaning of the text, and it understands the context of a story, similar to how we read and understand stories. Essentially, it builds up a full picture of what the story is and what it’s not, and this enables a more contextual approach. To give you an example, going back to some of the things brands don’t like appearing next to: a fast food restaurant chain wants to avoid appearing next to content about obesity, for instance. So we’ve trained the AI to define obesity as a category, but also trained it to understand sub-terms such as “heart disease” and “diabetes.” This helps to show what a piece of content is: it doesn’t just look at one specific keyword, but analyzes how many of a category’s sub-terms are present in a piece and the relationship between them. This allows the AI to understand if an obesity-related word is just incidental or if the story is actually about obesity. To put it another way: previously, a story that mentioned alcohol would be blocked, but this AI, through its training, understands whether it’s simply a mention of alcohol in a recipe, or something that is brand-unsafe, such as driving under the influence. So this is basically a way for brands to decide what is brand safe.
And the AI, based on their parameters, will be a far more human judge of brand safety than keywords.
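The category-and-sub-terms idea can be sketched very roughly as follows. CHEQ’s actual system is a trained AI model, not a term counter; the category vocabulary, threshold, and example headlines below are all invented for illustration.

```python
# Rough sketch of contextual scoring: instead of blocking on one keyword,
# require several related sub-terms from the same category to co-occur
# before treating a story as being "about" that topic.
# Category vocabulary and threshold are invented for illustration.
OBESITY_TERMS = {"obesity", "diabetes", "heart disease", "overweight", "bmi"}

def is_about_obesity(article_text: str, threshold: int = 2) -> bool:
    """Flag only if multiple related sub-terms appear, not a lone mention."""
    text = article_text.lower()
    hits = sum(1 for term in OBESITY_TERMS if term in text)
    return hits >= threshold

# One stray mention does not flag the story...
print(is_about_obesity("The new burger menu launches Friday, BMI notwithstanding"))       # False
# ...but several related terms together do:
print(is_about_obesity("Rising obesity rates are linked to diabetes and heart disease"))  # True
```

Even this toy version shows the design difference from a blacklist: the decision depends on how many related signals co-occur, so an incidental word no longer blocks an otherwise safe story.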
Adrian Tennant: Right. So, if I’m understanding you, this is artificial intelligence?
Jonathan Marciano: That’s right. It’s machine learning; it’s artificial intelligence. And with artificial intelligence, what’s important is the training and the data. So, for instance, to avoid the silly situation of Avengers being blocked or Disney being blocked, our AI has been trained on film scripts and reviews of TV shows. It knows what goes into a story about films and movies and singers, so it knows that the Avengers is not a real war, whereas it would understand, based on a story about an invasion in Iraq, for instance, that that is a real war. It pieces together the puzzle in milliseconds to decide whether something should be served against that content or not.
Adrian Tennant: Where does CHEQ sit in that digital supply chain?
Jonathan Marciano: We work mostly with brands, so we protect brands from advertising fraud, which is itself a $23 billion problem, and we also prevent their ads from appearing next to unsavory content. So mostly with brands – obviously, they’re the ones that have the budget – as well as with agencies. We can also integrate with publishers, but we have fewer publisher clients.
Adrian Tennant: Right. And we’re talking about a software-as-a-service model?
Jonathan Marciano: That’s right, it’s a SaaS model. It has a very impressive dashboard, so you’ll receive instantaneous data on why your ads weren’t served. We show you every URL that was blocked, and this basically shows how much you’re saving. It’s also very open and transparent: we’re not doing this behind closed doors, we’re showing you all the data. And clients have appreciated that. There’s not much openness and transparency in digital advertising, and they appreciate that they can mark our homework, and if there are any improvements that can be made, we can work on them.
Adrian Tennant: Well, as you know, Bigeye is a full-service agency. What are some of the conversations that you think we need to be having with our clients about just these issues?
Jonathan Marciano: It comes down to talking to clients about, first of all, making sure that you’re protected – ensuring that your brand’s ads aren’t served in bad places. We shouldn’t hide behind the idea that we simply don’t know where programmatic ads are being served or what they’re funding, because that’s not the case: we can know. There is a way to plan and execute your campaigns so that you are putting your brand and your revenue in the best position, because ultimately, not only are you serving ads against content that is undesirable, it’s also probably not the audience you want to reach. So it’s basically wasted money and wasted spending that could be going elsewhere.
Adrian Tennant: Jonathan, if listeners want to learn more about CHEQ, where can they find information?
Jonathan Marciano: Yeah, so probably the best place is our website. There you can find all the information, schedule a demo, and get a pilot.
Adrian Tennant: Right. And what books, articles, or other resources, would you recommend listeners that want to learn more about ad verification in general, or brand safety in particular?
Jonathan Marciano: So I always read Dr. Fou’s articles about fraud and brand safety. Augustine Fou is not very complimentary about verification players, but he’s always very sharp with his points, and they present a lot of challenges to the industry. So I like him a lot. On ad verification specifically, I mean, there isn’t that much. Digiday and AdAge report a lot on some of the challenges around blacklists, and they’re talking a lot more about the challenges that publishers face. So those would be some things to look out for.
Adrian Tennant: Perfect. You were very modest, you didn’t include your own articles in there, but I have to say everything that’s been written by Jonathan is also very on-point. Jonathan Marciano, Director of Communications at CHEQ, it’s been our pleasure to have you on IN CLEAR FOCUS today. Thank you very much.
Jonathan Marciano: Thanks, Adrian. It’s been a pleasure.
Adrian Tennant: My thanks to our guest, Jonathan Marciano, Director of Communications at CHEQ. You can find links to the resources we discussed on the IN CLEAR FOCUS page at bigeyeagency.com under “Insights” – just click on the button marked “Podcast.” Consider subscribing to the show on Apple Podcasts, Spotify, or your favorite podcast player – and please, rate and review the show. And, if you have an Amazon Echo device, you can use the IN CLEAR FOCUS skill to add the podcast to your Flash Briefing. Thank you for listening to IN CLEAR FOCUS produced by Bigeye. I’ve been your host, Adrian Tennant. Until next week, goodbye.