Diversifying the discourse on advertising effectiveness

Proving the effectiveness of not-for-profit communications.

What lessons can the 2020 IPA Effectiveness Awards teach us about the future of advertising? In the latest of our chapters from Advertising Works 25 – The Definitive Guide to Effective Advertising, 2022 Convenor of Judges Harjot Singh sets out to prove the effectiveness of not-for-profit communications.

At its heart, effectiveness has always been about impact. The measurement and evaluation of effectiveness is the only credible way of discerning that impact and contextualising it across three key factors, evidenced by:

  • the audacity of the objectives
  • the bravery of the approach
  • the causality of the work

Did I make this up? Yes.

I call it 'the ABC of effectiveness'. It’s my way of landing the plane.

If this sounds simplistic, I’d rather we dwell on the existing discourse on effectiveness instead. I find it over-simplified and inadequate in representing the diversity of marketing challenges, particularly those faced by non-profits.

The current discourse on effectiveness is dominated by stories in which the tools, norms, resources and collective wisdom for evaluating, proving and contextualising effectiveness across 'the ABC of effectiveness' are disproportionately abundant: marketing effectiveness cases from categories with clear commercial motives and challenges.

Tried, tested, proven, accepted, cited, and held as a definitive standard; as if to imply that these methods, assumptions and considerations apply to every marketing effectiveness case and category in principle.

Simply put, the discourse on effectiveness as it relates to our industry is not diverse enough.

Not-for-profits in the IPA Effectiveness Awards

One way of understanding the under-representation of successful not-for-profit effectiveness cases – and the subsequent paucity of learning around the unique nuances of proving effectiveness for not-for-profit cases across the world – is to reflect on how our industry works: inadvertently placing certain kinds of stories in a position of strength to be seen, glorified and valorised more than others.

In 2020, less than 20% of all submissions to the IPA Effectiveness Awards were not-for-profit cases. Since 2004, not-for-profit cases have always hovered between 10% and 15% of total entries. Winners are even fewer. International winners are scarcer still.

In the last 22 years, only four not-for-profit cases have been awarded a Grand Prix, and only one of those was an international, multi-market case. They were the Health Education Authority (1998), Barnardo's (2002), the Metropolitan Police Service (2007) and FARC (2011 – the only international Grand Prix winner to date).

In culture, dominant identities establish their superiority from constant representation, repetitive affirmation and recognition of their ideal. As a result, they are seen, heard, shared and socialised more. They become and inform the dominant, desired and idealised narrative and standard.

What’s happened in the IPA Effectiveness Awards to date is quite similar. Not-for-profit stories just don’t have the same exposure, or recognition, and it’s time we fixed that.

As Maya Angelou said, 'when you know better, you do better'.

There is so little learning on methodologies, principles, techniques and approaches focused on proving effectiveness for not-for-profit cases vs. the rest – in the IPA and across the board. As a result, the industry remains disproportionately more educated, familiar and informed on marketing and on proving marketing effectiveness in its classical sense.

It is time to start creating a narrative that is focused on and sensitised to the unique nuances of the marketing challenges faced by organisations trying to stop child abuse (Truth Project), raise awareness of the stigma associated with mental health and suicide (CALM), save a nation’s healthcare system (NHS England), raise funds for research and medical science to save terminally ill children (SickKids) or prevent half a million children from dying because they don’t have access to drinking water (WaterAid).

I am by no means suggesting that it’s not challenging to sell more mayonnaise, cars, alcohol, fast food or groceries in a highly competitive, connected and fast-paced attention economy. Nor am I suggesting that it’s a competition – even though, in this case, it is one.

But what I am saying is that we need to consider and create a set of principles that can inspire and educate not-for-profit organisations to experience and share in the success of proving the effectiveness of their communications, just as the existing majority of for-profit stories have.

Since there isn’t enough of a repository of successful not-for-profit cases this year or in the past 20 years to base these principles on, I want to refrain from the expected approach of glorifying or vilifying any particular case.

We need a forward-looking conversation, because we cannot base it on past performance alone: we just don’t have enough of a sample size to afford the kind of strident views one might expect to read in a chapter like this. That’s why I want to base this point of view on the potential of what can and should make it through, rather than only on what actually made it through to the IPA Awards in 2020.

Unless more not-for-profit cases make it to the top and gain visibility and exposure in effectiveness award shows, we will not have enough learning to determine the range of ways in which not-for-profit cases can and should measure and prove effectiveness.

When I started researching what was available and what was in development, I was filled with optimism, and that is the sentiment I want to share with you from here on. I say this because non-profits are creating something very progressive.

A different playbook, if you will, for responding to the ABC of effectiveness: proving the audacity of the objectives, the bravery of the approach and the causality of the work to the impact.

I recently learned that the most compelling starting point in proving effectiveness for non-profits in particular is a clearly identified and articulated theory of change.

It is the most irrefutable foundation to build your effectiveness argument on. Simply put, a theory of change maps out the organisation’s path to impact.

In doing so, it specifies the causal relationships between the activities and the eventual outcomes. It specifies which of these outcomes are short-, medium- or long-term, at the outset.

Ultimately, this is the basis for communicating why it is believed that the campaign will deliver the impact that the organisation’s programme or intervention seeks to create.

Outputs vs Outcomes

Speaking of impact, it is particularly important that not-for-profit cases differentiate outputs from outcomes. Often this distinction is not as complex or nuanced for other, dare I say, more straightforward commercial cases.

When proving effectiveness for not-for-profits, it is important to clearly establish that the outcome is the change that occurred because of the work.

Outputs usually only demonstrate that a certain amount of activity has occurred – whether it’s the number of hours of training, calls made or products delivered.

In proving effectiveness for non-profits, it becomes critical to demonstrate how those outputs then lead to the change, i.e. the outcome(s).

It is not enough to conduct activities; there needs to be a clear rationale that links the execution of those activities with expected changes in the lives of beneficiaries – much like the SickKids submission endeavours to do.

One can’t really have a conversation about outcomes, impact or effectiveness without metrics. In the case of non-profits these metrics have to be very precisely and thoughtfully linked to the mission or purpose.

Yes, mission or purpose is important, and we’ve talked about it incessantly as an industry. In the case of non-profits, linking the metrics to the mission is critical – more so than it may be in other categories, where the key metrics can be credibly linked, argued and proven purely in relation to a marketing objective.

Further, for non-profits, proving effectiveness means measuring the success of the campaign or intervention in achieving their purpose.

This means that non-profits have to consider some very distinctive conditions and options in identifying and articulating their mission or purpose and linking their metrics to it.

Given the diversity of organisations in the non-profit sector, no single measure of effectiveness and no generic set of indicators will work for all of them, to the extent they do for cars and condiments.

Exploring a different set of strategies

Intuitively, the simplest strategy would be for a non-profit to narrowly and precisely define its purpose so that progress can be measured and attributed directly. While this approach works for non-profits that have a very straightforward and quantifiable mission, as seen in the NHS England submission, it doesn’t work for all and it doesn’t take into consideration the diversity that exists within the non-profit sector.

The other issue with this approach is that defining a purpose very narrowly can trivialise or oversimplify the narrative in a way that undermines the impact – treating the symptoms rather than the cause of a particular social problem, which is precisely what non-profits like CALM and the Truth Project seek to address with their work.

For them, proving effectiveness relies on being judged not just on key statistics as they relate to fewer deaths etc., but also on changes in public attitudes as expressed in popular culture and opinion surveys.

The second strategy for measuring and proving effectiveness by linking the metrics to purpose is to invest in and cite research to determine the extent to which the activities of the not-for-profit actually help to mitigate the problem(s) or promote the benefits that the mission involves. The Truth Project submission would have been even stronger and more compelling with this kind of context.

The CALM submission is a good example to cite here. CALM’s urgent and important mission is to compel the nation to recognise male suicide and its increase as a matter of national outrage. Its submission this year was evidenced by research that illuminated and contextualised the challenge, which wasn’t raising awareness of the issue – as this had been increasing – but rather getting the UK to truly engage with it.

Its research placed the charity’s purpose in a position of strength because it was focused on the cultural inability to talk about death, especially in our society; the degree to which suicide was still a taboo in communications across many if not most mental health charities; and the existing attitudes among British men, who were five times more likely to believe that depression is ‘not a real illness’, much more inclined to treat mental health issues disdainfully, twice as likely as women to call suicide an embarrassment, three times more likely to call it pathetic and four times more likely to call it immoral.

Whilst CALM had a quantifiable element in its approach and in its mission, which made linking metrics to the mission possible, this approach doesn’t work nearly as well for others.

Investing in research to substantiate why we believe the activities of the non-profit in question are genuinely meaningful and worthy of creating the social impact they’re after is even more important for them.

However, for some not-for-profits, narrowing the scope of the mission to a quantifiable goal isn’t an option and investing in research outcomes isn’t as feasible. In my experience, I find that this is especially true for non-profits that operate in the environment/conservation area. There was a paper that made it into the entries this year, but it did not advance.

And it made me think about why proving effectiveness might have been more difficult or perhaps less straightforward for them. In my research, I came upon an example that helped me understand this better.

A charity like the Nature Conservancy, for example, can potentially calculate changes in the Earth’s total biodiversity to substantiate their mission and link metrics to it in order to prove effectiveness, but the benefits of engaging in an approach as sophisticated and expensive as calculating our planet’s biodiversity wouldn’t justify the cost.

Come to think of it, the work of any conservation charity arguably has at best only a modest, if not imperceptible, effect on global biodiversity, which is affected on a far greater scale by other factors such as deforestation, climate change and the conversion of habitats.

So how can non-profits prove their effectiveness?

In researching this, I found that they have another option. They can develop micro-level goals that, if achieved, would imply success on a grander scale.

An effectiveness paper from such a non-profit should use a baseline of data established by existing scientific surveys to measure the success of its efforts across a series of micro-goals that will ultimately count towards creating a lasting, positive impact.

Every milestone matters: charting a path to success and impact. It’s about brand direction thinking vs. brand position thinking.

It should also be added that not-for-profits are in the service of positive social change. It is not as easy to clearly and squarely isolate the impact of communications efforts, and the value they add, from that of other social change strategies and policies being implemented at the same time.

Truth is, it takes creativity and perseverance to prove effectiveness irrespective of the category you operate in.

Not-for-profits have to apply creativity and perseverance differently – in the way they apply precision in defining their purpose to make it quantifiable, in the way they apply discretion in investing in research to show how their purpose and the specific pursuits that follow from it work, and in the way they apply foresight in identifying and developing concrete micro-goals that imply success on a larger scale.

As of today, we don’t have enough learning around measuring and proving effectiveness for non-profits to claim that there is a clearly defined right or wrong way to evaluate non-profit communication campaigns. These campaigns are often, if not always, unique and diverse, which makes the creation and adoption of standard evaluation guidelines difficult.

Different evaluation designs will have different interpretive boundaries. Designing and investing in evaluation approaches that take into consideration the unique realities of the non-profit sector and the diversity that exists within that sector must become a clear priority for us as an industry, so that we can maximise the opportunities for both learning and assessing impact in future.

Despite the unique challenges that non-profits are faced with, they can, and must, measure and prove their effectiveness, tracking their progress towards making their purpose real in a way that helps them play and earn a meaningful role in people’s lives. And we need to see more of them be recognised in the effectiveness competitions. Juries have a responsibility to discern and elevate the best for us all to learn from. But we need to create the learning first. Juries will only be as experienced and advanced as the learning that exists.

Ultimately, the discourse on effectiveness will only be as contemporary and representative as the diversity of learning and the diversity in the learning that exists.

We know better. Let’s do better.

This is an abridged version of Harjot Singh’s chapter from Advertising Works 25 – The Definitive Guide to Effective Advertising.
