Will Generative AI help tell causation from correlation?

New systems have the potential to improve effectiveness analysis, provided they are developed collaboratively.

Learning how best to combine technological capabilities, human judgement and rigorous econometric principles could open up new ways to understand and improve the impact of advertising investment.

In the dynamic world of media effectiveness analysis, a new player could potentially revolutionise econometrics or media mix modelling (MMM).

Large Language Models (LLMs), the powerful AI systems behind innovations like ChatGPT, are now being considered for one of the most challenging aspects of media analysis. But could these advanced systems truly overcome the hurdles that have long plagued AI-driven approaches to MMM?

Paul Cuckoo, Worldwide Head of Analytics at PHD Media and Technical Judge of the IPA Effectiveness Awards 2024

The much sought-after goal for anyone involved in media effectiveness modelling has been to develop techniques that distinguish reliably and quickly between correlation and true causation within a dataset.

This challenge is known as identification: establishing a causal relationship between, for example, media investment and commercial outcomes such as sales. It has long been the Achilles' heel of AI-driven approaches to MMM.

AI has struggled to demonstrate causality

Previous applications of AI in marketing effectiveness relied primarily on predictive AI, which excelled at forecasting outcomes from historical data but struggled with causal inference. The argument against AI in this field has been straightforward: no matter how sophisticated it is, a predictive AI cannot tell you whether one thing causes another, only whether the two are correlated.

A recent cautionary tale serves as a stark reminder of the limitations of predictive AI in media effectiveness analysis.

A major EMEA telecoms brand tested an AI-driven MMM provider to model its broadband business. The results were initially impressive, showing a significant contribution from media spend. However, upon closer scrutiny, it was revealed that the AI model had vastly overstated media's impact. When the model was rebuilt using traditional methods, media's actual contribution was found to be less than half of what the AI provider had reported. The discrepancy was largely due to the AI's failure to properly account for the causal impact of promotions.
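
To make the likely mechanism concrete, here is a minimal sketch of omitted-variable bias, the statistical pattern the case above describes. The data are synthetic and purely illustrative (they are not drawn from the telecoms example): promotions both lift sales directly and coincide with higher media spend, so a regression that leaves promotions out credits media with sales that promotions actually drove.

```python
# Illustrative sketch of omitted-variable bias in a media mix regression.
# Synthetic data: promotions drive sales directly AND tend to coincide with
# higher media spend, so a model that omits promotions credits media with
# sales that promotions actually caused.
import numpy as np

rng = np.random.default_rng(42)
n_weeks = 156  # three years of weekly data

promo = rng.binomial(1, 0.3, n_weeks)                   # promotion on/off by week
media = 50 + 30 * promo + rng.normal(0, 10, n_weeks)    # spend rises in promo weeks
sales = 1000 + 2.0 * media + 400 * promo + rng.normal(0, 50, n_weeks)
# True causal effect of media on sales is 2.0 units of sales per unit of spend.

def ols(y, *regressors):
    """Return OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

full = ols(sales, media, promo)   # model that controls for promotions
naive = ols(sales, media)         # model that omits promotions

print(f"Media coefficient with promotions included: {full[1]:.2f}")   # close to 2.0
print(f"Media coefficient with promotions omitted:  {naive[1]:.2f}")  # inflated
```

Controlling for the promotion variable recovers a media coefficient close to the true value; omitting it inflates the estimate several-fold, which is the same broad pattern as the overstated media contribution described above.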

This example underscores the critical importance of human oversight and domain expertise in applying AI to media effectiveness modelling. It also highlights the persistent challenges in achieving accurate identification through predictive AI alone.

Grounds for cautious optimism

However, with the advent of LLMs, which represent a leap forward into generative AI, we may be on the cusp of a potential paradigm shift. Unlike their predictive counterparts, these advanced AI systems are capable of processing and understanding vast amounts of information, demonstrating reasoning capabilities that were once thought to be the exclusive preserve of human experts. But can generative AI, in the form of LLMs, truly crack the identification problem in media effectiveness? While it's far too early to declare victory, there are reasons to be cautiously optimistic.

LLMs could potentially bring an unprecedented level of contextual understanding to the table. Unlike traditional predictive AI systems that operate within narrow confines, LLMs can draw upon a vast knowledge base spanning multiple disciplines. This breadth of knowledge might allow them to consider factors and relationships that are not immediately apparent to human analysts or to more narrowly focused AI systems.

Moreover, LLMs show promise in pattern recognition across diverse datasets. They might be able to identify subtle connections and trends that could escape even the most experienced human analysts. This ability could prove valuable in teasing out the complex web of factors that influence media effectiveness, potentially leading to more accurate identification of causal relationships.

Making use of complementary strengths

Looking ahead, the key to leveraging LLMs' potential may lie in combining their analytical capabilities with human judgment and rigorous econometric principles. The integration of LLMs into media effectiveness modelling could lead to a new era of "augmented identification."

In this speculative paradigm, generative AI systems might work alongside human experts, each complementing the other's strengths. LLMs could rapidly process vast amounts of data, suggest potential causal relationships, and generate hypotheses. Human analysts would then apply their domain knowledge, critical thinking, and ethical judgment to validate and refine these insights.
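
As a purely hypothetical sketch of what that division of labour could look like in practice (the function names below are placeholders, not any vendor's actual API), an LLM step might propose candidate drivers of sales, with a human analyst screening them before anything enters an econometric model:

```python
# Hypothetical "augmented identification" loop: a generative model proposes
# candidate causal drivers, and a human analyst screens them before any
# econometric modelling takes place. Function names are illustrative only.

def propose_candidate_drivers(category: str) -> list[str]:
    """Stand-in for an LLM call that returns hypothesised drivers of sales
    for the given category; in practice this would prompt a generative model
    with business and market context."""
    return ["tv_spend", "search_spend", "price_promotions",
            "competitor_launches", "weather_index"]

def analyst_screen(candidates: list[str], approved: set[str]) -> list[str]:
    """Human-in-the-loop step: keep only the drivers the analyst judges
    plausible, measurable and testable; the rest are rejected or refined."""
    return [c for c in candidates if c in approved]

candidates = propose_candidate_drivers("broadband")
validated = analyst_screen(
    candidates,
    approved={"tv_spend", "search_spend", "price_promotions"},
)
print("Variables carried forward into the econometric model:", validated)
```

The point of the sketch is the division of responsibilities rather than the code itself: hypothesis generation is cheap and broad, while validation remains a human, econometric exercise.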

The benefits of a collaborative approach

This collaborative approach, if successful, could yield significant benefits in marketing. More accurate identification of causal factors could lead to more efficient allocation of advertising budgets, potentially avoiding the pitfalls experienced with previous predictive AI models. It might also help marketers better understand the complex interplay of factors affecting campaign outcomes, leading to more targeted and effective media strategies.

As we consider the potential impact of generative AI on media effectiveness modelling, it's clear that the field could be entering an exciting new phase. While significant challenges remain, as evidenced by past missteps with predictive AI, the potential benefits of integrating LLMs are too intriguing to ignore. By cautiously exploring these advanced generative AI systems while maintaining a commitment to rigorous methodology and human oversight, we may be able to push the boundaries of what's possible in media effectiveness analysis.

Paul Cuckoo is Worldwide Head of Analytics at PHD Media and was a Technical Judge of the IPA Effectiveness Awards 2024.

The opinions expressed here are those of the authors and were submitted in accordance with the IPA terms and conditions regarding the uploading and contribution of content to the IPA newsletters, IPA website, or other IPA media, and should not be interpreted as representing the opinion of the IPA.
