There are challenges in setting up rigorous advertising experiments. But they are outweighed by the opportunities to receive and learn from rich feedback.
Earlier this summer I had the privilege of being an Industry Judge at the IPA Effectiveness Awards 2024, which had a particular focus on discovering ‘New Frontiers’ in effectiveness. Since the last round of the awards was in 2022, this year presented an opportunity to recognise the longer-term contribution of marketing to brands emerging from the extraordinary disruption of the COVID-19 pandemic and its effects on the global economy.
As hard as it is to believe, it’s nearly five years since the first reported COVID cases started to make headlines. If you force your mind back to those surreal days, you may remember a time of immense disruption: businesses, institutions, families, and everyday people adapting, pivoting, experimenting, and trying new things for the first time. Some behaviours accelerated their adoption before reverting to their natural curve (for example, e-commerce), some fell aside as quickly as they emerged (House Party, anyone?) and some have endured for the long haul (Teams calls, urgh).
It was also a time of rapid experimentation for anyone working in brand communications. Testing new ideas, channels, production techniques, and ways of working. Exploring ‘New Frontiers’, if you will. Interestingly, however, this ethos of testing and experimentation in campaign activation was not reflected in the effectiveness methodologies used in most of the batch of papers that I read.
Reassuringly, Marketing Mix Modelling (MMM) – or econometrics – was consistently deployed, particularly amongst well-established brands with large media investments. Those brands with a bank of historical effectiveness data leveraged it extremely well in evidencing campaign outcomes.
However, MMM’s limitation lies in identifying and recommending new channels, and in demonstrating the impact of small-scale investments and media tests. In those instances, or in situations where brands are unable to put MMM in place, structured and controlled experiments offer a solution. Yet in many of the cases I came across that didn’t use MMM, the opportunity to measure using structured testing was largely missed, and consequently many of these cases struggled to isolate the effect of advertising. Simple experiments such as A/B testing and regional holdouts (in which ads or individual channels are withheld from part of the audience) would go some way to demonstrating the incremental impact of campaign activity. Structured tests and experiments not only fill a measurement gap when MMM is absent but can also enhance learning, providing genuinely new insights and direction, whereas in the papers MMM was used largely to validate creative strategy.
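To make the regional-holdout idea concrete, here is a minimal sketch of how incremental lift might be estimated once a campaign has run. All region names and sales figures below are invented for illustration; a real analysis would use the brand’s own sales data and a more rigorous statistical test.

```python
# Hypothetical sketch: estimating incremental lift from a regional holdout.
# All figures are invented for illustration only.
from statistics import mean, stdev

# Weekly sales during the campaign period
test_region = [102, 110, 98, 115, 108, 112]    # region exposed to ads
holdout_region = [95, 97, 93, 99, 96, 94]      # region where ads were withheld

lift = mean(test_region) - mean(holdout_region)
lift_pct = lift / mean(holdout_region) * 100

# A rough two-sample t statistic (equal group sizes) to gauge whether
# the difference is likely to be real rather than noise.
n = len(test_region)
pooled_se = ((stdev(test_region) ** 2 + stdev(holdout_region) ** 2) / n) ** 0.5
t_stat = lift / pooled_se

print(f"Estimated lift: {lift:.1f} units/week ({lift_pct:.1f}%), t ≈ {t_stat:.1f}")
```

Even a back-of-the-envelope calculation like this forces the key discipline the papers often lacked: a comparable unexposed group against which to isolate advertising’s effect.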
And learning to adapt, pivot, and change tack in media is crucial, even for the type of long-term brand strategies entered into the IPA Effectiveness Awards. Behaviours and sentiment can change over time. So, it was encouraging to see some papers that outlined how media strategies evolved to reflect their customers’ media consumption.
In my experience, there are two principal challenges to overcome to run an effective experiment. The first is having a really clear definition of the question we’re trying to answer, and ensuring we have the right data around the test to draw robust conclusions from it. For example, for a simple test of TV ads in one region, we need to be able to compare this area with a hold-out region with similar sales patterns, demographics, media costs, and competitor activity. That all requires gathering the right data before, during, and after the test to isolate the impact of media. This takes time to set up properly, running counter to the idea of gaining quick and easy learning to implement in the short term. It is a particular challenge for brands that are in their scale-up phase and in the early stages of testing paid media’s impact.
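Choosing that well-matched hold-out region can itself be done systematically. The sketch below ranks candidate regions by how closely their pre-campaign sales track the planned test region; all region names and data are invented, and in practice one would also match on demographics, media costs, and competitor activity, as noted above.

```python
# Hypothetical sketch: picking a matched control region by comparing
# pre-campaign sales patterns. All names and figures are invented.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sales series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Eight weeks of pre-campaign sales for the planned test region
test_region = [120, 125, 118, 130, 128, 122, 127, 131]

# Candidate hold-out regions
candidates = {
    "North": [98, 101, 96, 105, 104, 99, 103, 106],
    "South": [140, 120, 135, 115, 138, 118, 133, 121],
    "East":  [80, 83, 79, 87, 86, 81, 85, 88],
}

# Rank candidates by how closely their sales pattern tracks the test region
ranked = sorted(candidates,
                key=lambda r: pearson(test_region, candidates[r]),
                reverse=True)
print("Best-matched holdout:", ranked[0])
```

Note that absolute sales levels can differ between regions; what matters for a credible comparison is that the two regions move together in the pre-period.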
The second main challenge relates to the sunk costs associated with media and creative testing. Whether we’re testing a new channel, creative format, region, or media weighting, the production cost of creating new content is the same whether we’re experimenting with it or activating a heavyweight campaign. Of course, we’re not always testing new content and incurring additional costs, but the perceived risk and associated costs of a failed test often outweigh the return the test may yield.
However, one of the benefits of the emergence of new digital platforms and behaviours during a time of accelerated digital adoption, such as the COVID era, is the opportunity to conduct experiments on platforms with incredibly rich data feedback loops. This isn’t to advocate for more short-term, lower-funnel direct response activity. On the contrary, it would have been fascinating to see more brands with relatively modest media budgets de-risk their brand investments with small-scale tests, model their impact, and scale future investment to demonstrate the long-term contribution to the bottom line.
MMM remains the gold standard for measurement and for good reason. But we mustn’t become over reliant on one methodology for measuring effectiveness. Any brand’s measurement portfolio would richly benefit (with or without the presence of MMM) by including more incrementality testing and experimenting of new ideas, new channels, and new media behaviours as we answer the call to seek out New Frontiers in effectiveness.
Chetan Murthy is Executive Strategy Director, Havas Media Network, and an Industry Judge of the IPA Effectiveness Awards 2024.
Hear from 2024 Effectiveness Awards winners at the IPA Effectiveness Conference, 9 October
The opinions expressed here are those of the authors and were submitted in accordance with the IPA terms and conditions regarding the uploading and contribution of content to the IPA newsletters, IPA website, or other IPA media, and should not be interpreted as representing the opinion of the IPA.