Duane Varan Podcast Transcript
Duane Varan joins host Brian Thomas on The Digital Executive Podcast.
Brian Thomas: Welcome to Coruzant Technologies, home of The Digital Executive podcast.
Do you work in emerging tech, or are you working on something innovative? Maybe you're an entrepreneur? Apply to be a guest at www.coruzant.com/brand.
Welcome to The Digital Executive. Today's guest is Dr. Duane Varan, CEO of both MediaPet and MediaScience.
He's the recipient of numerous awards, including the Australian Prime Minister's Award for University Teacher of the Year and the Advertising Research Foundation's Erwin Ephron Award for Lifetime Achievement. He also ranks among the top 10 researchers in the advertising discipline based on peer-reviewed publications in top-tier journals.
Dr. Varan continues to publish in academic journals and was ranked seventh worldwide in terms of the number of top-tier publications in the last decade-long review of the advertising research discipline. Well, good afternoon, Duane. Welcome to the show.
Duane Varan: Oh, thanks. Thanks, Brian.
Brian Thomas: You're very welcome, my friend. I appreciate it. You are currently in New York via Austin, Texas, which is cool. I'm in Kansas City, so we always like to highlight where folks are from, and I appreciate you jumping out of bed early to do a podcast. So, Duane, if you don't mind, we're gonna jump into your first question.
With decades of academic work, you were ranked seventh worldwide in top-tier advertising research publications, and later you led MediaScience. What was the pivotal shift for you when you moved from purely academic inquiry into commercial-scale research that influences global brands?
Duane Varan: Well, you know, that transition for me was somewhat accidental.
It wasn't intentional, and it happened in one very precise moment almost 18 years ago. I woke up one day and I got a call from Disney. We had built an academic research center, and we had some industry sponsors who would pay to see the results of our findings before they were published.
Disney was among them, and we had pioneered some new methods for studying audience behavior and, you know, different approaches to marketing research. And Disney called and said, we have been following what you're doing, and in four days, at our annual upfront conference, we're going to announce that we're launching a lab just like yours.
And you are literally the only person in the world who knows how to do this stuff, so you need to come and work for us. And I said, sorry, Disney, I love you guys, but I'm very happy being an academic. No way. Thank you, but no, I'm not interested. And they said, well, don't say no. What would it take? I said, what would it take?
Well, you know, if it was an independent business and I owned it, but you funded it, you paid for it. If I owned the IP, if I continued being an academic and living in Australia. I mean, the list went on and on. The good folks at Disney said, that's all fine. We only have one condition. What's that? You have to be exclusive to Disney.
And so for my first five years we were the Disney lab, the Disney Media and Ad Innovation Lab, and I continued being an academic and I ran that. Then, of course, we came out of exclusivity, and that's when MediaScience really began to grow. We pretty much took on every other TV network as a client.
At some point, the whole thing became too big for me to juggle both roles, so I eventually had to leave academia and just focus on building up MediaScience. That was the transition. It happened within some guardrails and it was a gradual process, but it did happen all of a sudden, some 18 years ago.
Brian Thomas: That's awesome. Love the backstory. You know, you were in academia and this thing happened, happenstance, right? Accidentally, as you mentioned. But having Disney call you and give you the offer, or the deal of a lifetime, is just amazing. That opened a lot of doors, obviously, and that's why we're here today.
So, Duane, jumping into your next question. MediaScience uses physiological measures, skin conductance, heart rate, facial expressions, to understand ad impact more deeply than traditional surveys. Can you walk us through a case where neurometric data revealed something surprising about an ad campaign that conventional measures missed?
What did the brand learn?
Duane Varan: Yeah, that's a great question. You know, the surprise for clients, at least, is usually that not a lot is happening. I mean, especially with ads, people are often watching in a very non-attentive kind of mode, and the creators of those ads think that people are gonna be incredibly excited when something happens.
And when you see the real audience reaction, it's pretty average. It's not as exceptional as people always assume it's gonna be. So that's the most common finding. But there are some occasions where we had what I would call really big surprises. The first one that I wanna tell you about wasn't really advertising.
We were testing content. It was the Trump-Hillary Clinton debate in 2016. This is a really great example of where the neurometric data is telling you things that people's stated responses are not. And ever since then, every time we've tested anything that involves Trump, this is the kind of thing that we see, where people have emotional responses that they're not even aware of, and they're very complex.
In the case of that debate, we had Democrats, Republicans, and undecided voters, three different groups of voters that we were studying. And when you looked at the Clinton supporters, the Democrats I should say, what was so shocking was how incredibly negative their biometric data, their body's data, was.
Every time Hillary would talk, it was off the charts. It was what we normally would see when an opponent is talking. By way of contrast, with Trump, what was interesting is how people basically acclimated to him. He would say really incredible, almost ridiculous things, and the first time he would say it, there would be a really strong reaction among everybody.
Then the second time it would be a little bit weaker, and then the third time, and eventually he's just saying things and you're almost seeing no reaction anymore. So that was really interesting. But the most interesting pattern that we saw was among undecided voters who were telling us that they were going to vote for Hillary.
But we could tell in their data that they were actually responding much more strongly and much more favorably to Trump throughout the debate. And the reason for this was an emotional-rational conflict. With Hillary, when they would ask her about what she would do in her first hundred days, she would give this very evasive answer, not wanting to alienate any potential voters.
But Trump was very definitive: in my first hundred days, I'm gonna do this, I'm gonna do that. And it didn't matter whether it was true or not. Rationally, people were processing whether what he was saying was true or false, but emotionally, that definitiveness really was resonating.
And there were moments where people were telling you one thing. The discussion about immigrants was a great example, where people would say, oh, it's terrible that he says that, but the neurometric data was telling us that that argument was really resonating with them. So you have this conflict between who people believe they are
and what they believe is right, and what their emotional response was, which was very strong. At that time, everybody was predicting a landslide election for Hillary. We were on stage many times saying, no, this is not going to be a landslide election; this is razor thin, because you have this emotional-rational conflict.
So that's, I think, the most interesting case that we've seen over the years. The only other thing that I would highlight, and I'll make this one a lot shorter, Brian, is some research we did in Shanghai, in China. What really struck me, and what struck the client, was just how incredibly different the Chinese audience was when they would watch ads.
The way that they reacted to ads was just completely different from anything we had ever seen. The narrative structures were very different, and this led us, and led the client, to understand that they couldn't just roll out their Western ads in China. They really needed to do significant research there to figure out what those ads should look like in the Chinese market.
Brian Thomas: Thank you. Really appreciate that, and those were great examples. I like how you use this neurometric data to test audiences. What I thought was interesting in that 2016 election example you shared was that emotional-rational conflict.
Obviously, what people were thinking or feeling versus what they were saying was totally different, and I like how measuring this data actually tells the true story of what's going on. So I appreciate that. Thank you. And Duane, your work involves measuring very intimate responses, emotional physiology, and combining that with AI and big data.
As these tools become more pervasive, what ethical considerations do you believe researchers and brands must be vigilant about, especially when it comes to consumer trust, privacy, and nudging behavior?
Duane Varan: Yeah, you know, I think you can look at that question from two different perspectives. You can look at it in terms of the kinds of ethical questions that will impact management, and you can look at it in terms of the kinds of ethical questions that will impact consumers. On the management side, we're seeing something happen that I think is a little bit of a blind spot for companies. I don't think people realize how dramatic the change in management culture is,
and I'm very concerned about this. You know, we, particularly in the US, have succeeded so well globally in large part because science has helped guide management decisions. We have been data-driven: when you are looking at a question, you do research and the research informs the decision.
Now, you don't know what the outcome of the research will be. You may be an executive who commissions a study. The study is done, the data comes back, and sometimes the answer is, this doesn't work. The C-suite will get that information, they'll have to evaluate it, they'll make a decision, and they will usually give some pretty hefty due regard to that data to guide their decision.
But what we are seeing with the rise of AI, and what we're seeing with the rise of big data, is a diminishing of the culture of science. In a lot of ways, AI in particular is the antithesis of science. Science is about transparency. It's about variables, about understanding how those variables work,
about isolating those variables. AI is really about magic. It's about very complex math that goes into finding patterns that you can't see. Nobody can understand it; you can't even get clarity around it from the AI system. And so what's happening is management is relying on decisions that nobody really understands, that have a lot of black magic in them.
And the problem, with big data and data analytics as well, is that you can get the analytics or the AI to give you the answer you want. So that culture of turning to data to guide the decision, the culture of science, is giving way to a culture of using the data to make the argument. And what that's going to lead to
is a lot of very bad decision making over time. There's a lot of data, but the data is not being analyzed in a truly scientific manner, and that's a huge risk that will have really big consequences, I think, in due course. Then, on the consumer side of things, the issues are massive.
There are so many issues, but the issue that I think has arrived here and now is the whole question of people being able to differentiate between whether what they're seeing is real or whether it is AI. We did a study back in July where we took nine ads for major brands,
like Apple and Doritos and Snickers, and we recreated those ads using AI, and we delivered impact that was exactly on par. This was using our new tool MediaPet, which is a video creation platform, really powerful, with a lot of control over precision. And then we also asked people about the production quality, and what we got to was about 98% of the real ad.
Today we are past that 100% mark. Today you can no longer differentiate between the real ad and the AI ad. We're at that stage now, and so the question this raises is: is it really fair for audiences to not know whether what they're seeing is real or AI? Or should we even hold back AI because we don't want that lack of authenticity?
We think the solution to that is labeling: whenever content is AI generated, there should be a disclaimer there. And we did a study in which we showed people AI ads, and we either informed them that the ad was AI generated or we didn't. We were expecting to see a major backlash, but in actual fact, there was no difference in terms of the ad impact.
And so we believe that labeling is a very responsible path. It's a win-win proposition, because it allows creatives to do more with AI; they don't have to worry anymore about deceiving an audience or about a backlash. They can do it, but just label it. And of course, consumers win because now they know whether what they're seeing is real or AI.
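As a purely illustrative aside: the labeled-versus-unlabeled comparison described above could, in its simplest form, be checked with a sketch like the one below. The group sizes, the 1-to-7 impact scale, and every number here are hypothetical placeholders, not MediaScience's actual design or data.

```python
# Illustrative sketch only: hypothetical respondents and scores, not the actual study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-respondent ad-impact scores (say, purchase intent on a 1-7 scale)
# for two randomly assigned groups who saw the same AI-generated ad
labeled = rng.normal(loc=4.6, scale=1.1, size=200)    # told the ad was AI generated
unlabeled = rng.normal(loc=4.6, scale=1.1, size=200)  # not told

# Welch's two-sample t-test: is there any detectable difference in ad impact?
t_stat, p_value = stats.ttest_ind(labeled, unlabeled, equal_var=False)
print(f"labeled mean = {labeled.mean():.2f}, unlabeled mean = {unlabeled.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because both hypothetical groups are drawn from the same distribution, the test will typically report no significant difference, mirroring the no-backlash pattern described in the interview.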
Brian Thomas: Thank you. I really appreciate you breaking that out, because traditionally in commercials there might be actors portraying something, right? That has to be labeled, and I think you make a good point about keeping ethics, transparency, and honesty while working with AI. These guardrails, right?
Labeling these AI-generated ads, I think, is the right thing. And you talked about science, how science helps guide management decisions and research informs the decision. However, you're seeing some challenges with AI, a bit of a departure from science there.
Because even if the data's there, if it's not being properly analyzed, then what's the use of the data? So I appreciate your insights. And Duane, the last question of the day, if you could briefly share: looking ahead five to 10 years, how do you see ad effectiveness measurement evolving? You know, things like real-time neurometrics, AI-driven creative generation, cross-device attention tracking.
How should creative teams adapt their process now to be future-ready?
Duane Varan: Yeah, that's a great question. Again, I'll give you an example on both sides of that fence, some stuff that's scary and some stuff that's really exciting. We sit on both sides; we do research on both sides of that equation.
So first, something that I think is the dominant trend today in terms of the industry adopting AI, and this is specifically the advertising industry. The advertising industry has very enthusiastically adopted these measures of attention that are AI derived. We call these synthetic measures of attention, right?
The way they're created is that an AI model is trained against some data, and instead of testing an ad to see what its relative level of attention might be, you just ingest the ad into the system and it generates a score for you. And this score is supposed to tell you what that ad's performance is like, and also how it performs in different environments.
So basically this is very exciting, because instead of spending tens of thousands of dollars doing research to figure out what levels of attention are, now, for a few hundred dollars, you can get this data. And you have data exchanges now where the placement of the ad is being optimized on the basis of these attention scores.
This is something that has caught on like wildfire. Most advertisers now are using these measures as part of their buy decision. However, none of these measures in the market, and I want to say that again, Brian, none of these measures in the market, to the best of my knowledge, have actually been properly and independently validated.
So people are trading on numbers without even really knowing whether these numbers are meaningful, whether they're real. So we did a major validation study. We collaborated with the Ehrenberg-Bass Institute, which is the leading marketing science center in the world, an academic research center.
We collaborated together, and what we did is we took 300 ads. We had humans, experts, academics, code them for their relative level of attention. We then had consumers rate them in terms of what they thought their likely level of attention was. And then we took the top five ads in terms of how much attention we think they induced, and the bottom five. So now we're talking about the top five and the bottom five out of 300 ads, and we asked: can these new measures accurately predict whether a particular ad was high or low,
in terms of being in the top five or the bottom five? That's a very, very fair task, right? We could have been much more nuanced about it; we really pushed it to the extreme. And I can tell you that the synthetic measures that we tested were absolutely horrific in terms of their predictability.
In fact, they predicted it accurately about 25% of the time, and the scores were actually negatively correlated, which means that the number one thing the vendors should do is reverse their scores. So you are talking here about pure garbage measurement. These measures are not meaningful whatsoever. And yet the entire market has embraced them and is trading on them.
So the number one lesson, I think, that comes out of this is that any new measure that comes into the market that we're going to use needs to be properly validated. And if it's not validated, it should not be used. Just because it's AI doesn't mean that it's going to actually work. This is a huge problem for us going forward into the future, because I think there will be many more measures like this coming into the market.
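To illustrate the extreme-groups validation logic described above, here is a minimal sketch that compares hypothetical synthetic attention scores against human-coded top-five and bottom-five labels. The ten ads, the scores, and the median-split classification rule are all assumptions made for illustration; they are not the actual data or method of the Ehrenberg-Bass study.

```python
# Illustrative sketch only: hypothetical scores, not the actual validation data.
import numpy as np
from scipy import stats

# Ground truth from human coding: 1 = top-five (high attention) ad, 0 = bottom-five (low attention) ad
human_label = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Hypothetical scores a synthetic attention model might return for the same ten ads
synthetic_score = np.array([0.31, 0.52, 0.28, 0.35, 0.30,
                            0.61, 0.44, 0.58, 0.49, 0.64])

# Simple median split: call an ad "high attention" if its synthetic score is above the median
predicted_high = (synthetic_score > np.median(synthetic_score)).astype(int)
accuracy = (predicted_high == human_label).mean()

# Correlation between scores and ground truth; a negative r means the measure points
# the wrong way, i.e. reversing the scores would predict better than using them as-is
r, p = stats.pearsonr(synthetic_score, human_label)

print(f"agreement with human coding: {accuracy:.0%}")
print(f"correlation with human coding: r = {r:.2f} (p = {p:.3f})")
```

With these placeholder numbers, the median split agrees with the human coding only a small fraction of the time and the correlation comes out negative, which is the reverse-the-scores situation described in the interview.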
So that's an example of one area today that is, I think, a very negative experience. Now, by way of contrast, there are also opportunities. Again, I was talking earlier about the work we had done taking ads, recreating them using AI, and demonstrating that they delivered the exact same ad impact.
Just this last week, we did the same thing. The new annual holiday ad for Coke came out. Great ad, and it was AI generated, actually. But we looked at it and we said, okay, can we recreate this on our MediaPet platform? It took Coca-Cola a team of about 100 people 30 days to create that ad.
That's really good for Coke, because normally that ad takes a full year to make. So they did it in 30 days, which is fantastic. The ad was great. We tested it; it performs better than Dr. Pepper, better than Pepsi, than Fanta, than Sprite. So it's a great ad. So we said, can we recreate it?
It took one video editor literally two hours to recreate that ad in MediaPet, and it delivered the same impact. And that impact, again, was higher than Pepsi, than Coke, than Dr. Pepper, than Fanta and Sprite, et cetera. So what that tells us is that today it is possible to create ads that will deliver on par with real ads.
And that could represent a real revolution in concept testing, because before we make an ad, even if we're gonna spend half a million dollars with real actors and all of that, which is great, we can still do that, but even then, let's test a lot of concepts first. In the past we couldn't really do that, because
the methods that were available to us, like using animatics or testing with text or pictures, are very non-predictive. They do not represent how the ad will eventually perform. But now, with less than a hundred dollars and just a few hours, we can actually recreate a really powerful ad.
So what we should be doing is testing many concepts, figuring out what the winning concept is, and then that's the concept that should go into production. And of course, for smaller businesses who won't spend the half a million dollars, this also means that instead of spending $10,000 on a really
crappy local ad, you can now spend very little, less than a hundred dollars, and get something that is national-ad production quality. So this revolution in creative testing, I think, is a really exciting example of the opportunity that we see with AI.
Brian Thomas: Thank you.
There's a lot to unpack there, obviously. People in the advertising industry are looking to move to more cost-effective tools and models using AI, but these synthetic measures, as you found after researching and testing them, turned out to be really a bunch of garbage, and people are still using them.
I think there's a double-edged sword with AI, we obviously know that, but you really shed some light on some things that need to be fixed. And I like that you highlighted your MediaPet platform and what it can do, especially that example with the Coca-Cola ad that came out for the holidays.
I think that was interesting, what your platform can do. So, thank you, Duane. It was such a pleasure having you on today, and I look forward to speaking with you real soon.
Duane Varan: Oh, thanks. That was tons of fun. Thanks, Brian.
Brian Thomas: Bye for now.