The Long and the Short of It or The Wrong and the S*** of it? Les Binet explores whether our obsession with success, and our failure to explore failure, is detrimental to learning in our industry.
In his critique of our research, Harry Guild, Data Scientist at BBH London, makes a good and important point. As Harry rightly points out, the campaigns that we have analysed are effectively a biased sample. These are not run-of-the-mill campaigns. They’re all cases that have at some point been entered into the IPA Effectiveness Awards, and so one would expect them to be more effective than average.
We have always been keenly aware of this fact, and it was one of the first things we addressed when we started analysing the data, way back in 2006. We realised that, in order to identify the ingredients for effectiveness, we needed some data on ineffectiveness. In order to identify the best marketing strategies, we needed to find some that were sub-optimal.
But how to do that when your sample represents the cream of the crop? Well, the first thing we did was to look at campaigns that didn’t win effectiveness awards. Previous analyses of the IPA data had fallen into exactly the trap that Harry describes. They’d focussed solely on IPA winners and looked at what they had in common, and so ended up drawing some very dodgy conclusions. What we did was to look at the whole sample, including the campaigns that the judges had rejected as ineffective. And believe me, there are some clunkers in there that have never been published.
The second thing we did was to delve more deeply into the nature of effectiveness. Previous analyses had treated effectiveness as a monolithic thing – campaigns were either effective or not, and effectiveness was usually defined as winning an IPA award. We ignored the prizes and delved deeper into the business results. We realised that effectiveness had several dimensions and that there were degrees of effectiveness along each dimension. So, some campaigns were extremely effective at moving brand metrics, but totally ineffective when it came to short-term sales. Other campaigns were great at generating direct responses but had no effect on market share.
This more granular approach revealed a wealth of variation in the data, hidden below the surface. Some campaigns really were the best of the best, radically shifting all the performance metrics, and so delivering huge profits. Others were partial successes, moving some metrics but not others, with correspondingly smaller paybacks. And others looked pretty mediocre. By comparing variations in performance against variations in budget and strategy, we could perhaps get some clues about what worked and what didn’t.
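To make that kind of comparison a little more concrete, here is a minimal sketch in Python. The data, the column names (brand_uplift, direct_response, share_gain, esov, campaign_duration) and the use of a simple rank correlation are all illustrative assumptions of mine, not the actual IPA Databank variables or methodology:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical campaign-level data: several graded effectiveness
# dimensions (outputs) alongside budget/strategy variables (inputs).
# Column names and values are invented for illustration only.
campaigns = pd.DataFrame({
    "brand_uplift":      [0.8, 0.1, 0.5, 0.0, 0.9],   # brand metric shift
    "direct_response":   [0.2, 0.9, 0.4, 0.1, 0.7],   # short-term response
    "share_gain":        [0.6, 0.0, 0.3, 0.0, 0.8],   # market share growth
    "esov":              [5.0, -2.0, 1.0, -4.0, 8.0], # excess share of voice
    "campaign_duration": [36, 6, 12, 3, 48],          # months on air
})

outputs = ["brand_uplift", "direct_response", "share_gain"]
inputs = ["esov", "campaign_duration"]

# Compare variation in each effectiveness dimension against variation
# in each input; a rank correlation serves as a first, crude clue.
for out_col in outputs:
    for in_col in inputs:
        rho, p = spearmanr(campaigns[in_col], campaigns[out_col])
        print(f"{in_col:>17} vs {out_col:<15} rho={rho:+.2f} (p={p:.2f})")
```

With a handful of made-up rows this proves nothing, of course; the point is simply that scoring effectiveness along several graded dimensions, rather than as a single pass/fail verdict, gives this kind of comparison far more to work with.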
Of course, we knew that our approach had its limitations. Firstly, we were looking for correlations between marketing inputs and effectiveness outputs, and as Harry rightly points out, correlations can sometimes be misleading. So we’ve always worked hard to rule out spurious correlations, and wherever possible we’ve looked for corroboration from other research. And the results have been encouraging. Researchers who work with more complete datasets, such as Ehrenberg-Bass or Nielsen, have often reached similar conclusions to us.
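As a toy illustration of how a spurious correlation can arise, and of one way to control for it, here is a deliberately artificial sketch. The variables (brand_size, budget, sales_growth) and the residual-based partial correlation are my assumptions, not a description of our actual analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical confounder: bigger brands both spend more and grow more,
# so budget and growth correlate even with no causal link between them.
brand_size = rng.normal(size=n)
budget = brand_size + rng.normal(scale=0.5, size=n)
sales_growth = brand_size + rng.normal(scale=0.5, size=n)

def residuals(y, x):
    """Strip out the linear effect of x from y (least-squares fit)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw = np.corrcoef(budget, sales_growth)[0, 1]
partial = np.corrcoef(residuals(budget, brand_size),
                      residuals(sales_growth, brand_size))[0, 1]

print(f"raw correlation:            {raw:+.2f}")      # looks impressive
print(f"controlling for brand size: {partial:+.2f}")  # close to zero
```

Run it and the raw correlation comes out strongly positive while the partial correlation collapses towards zero – a reminder that a headline correlation is only a starting point, not a conclusion.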
Secondly, as we acknowledged in Marketing in the Era of Accountability, even with the inclusion of the cases that the IPA judges rejected, our sample would still be somewhat biased. All we could do was look at the differences between the good, the better and the best. That meant we would never have much to say about really bad advertising. But we might just come up with some useful rules that could help smart clients and agencies move from good to great.
At this point, a sporting analogy might help. Anyone who knows me will realise I don’t do sport, but it occurs to me that IPA entrants are a bit like professional footballers. They’re all good, and much better than the kids kicking a ball around in your local park. But some are much, much better than others. If you study professional footballers, you will see very few examples of incompetence. But by comparing the good with the better with the best, you just might learn a thing or two about how to improve your game.
Still, it would be good to understand more about failure, as Sarah Carter and I argued in this WARC article.
As Harry says, we need to look at the wrong and the sh** of it, not just the long and the short of it. That’s why Sarah and I have written a new book on the subject, to be published by the APG on 9th July. Entitled “How Not to Plan – 66 ways to screw it up”, it describes some of the muddled thinking and poor practice that we’ve encountered over the last thirty years and tries to draw some useful lessons from it all. It doesn’t claim to be a comprehensive analysis, but we hope that it will act as a slight corrective to our industry’s obsession with success.
Les Binet is Head of Effectiveness at adam&eveDDB.