Jane Christian, EVP Analytics at WPP Media UK and Chair of the Technical Judging Panel for the IPA Effectiveness Awards 2026, explains how the technical advice to Awards entrants has been updated: it is now shorter, clearer about what the technical judges are looking for, and more in line with where the industry is heading.
It’s IPA Effectiveness Awards year again (the deadline for entries is April 16), which is always an exciting time for the industry, with pens poised, ready to show how our strategies have driven real business growth. If you are an econometrician or a technical judge, however, thoughts about the technical appendix submitted as part of Awards entries are already popping into your head.
Although typically fewer than half of IPA Effectiveness Award-winning cases feature econometrics, it has for many years been one of the more robust ways for case authors to quantify the business effects driven by marketing strategies, of course alongside other techniques and evidence.
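For readers less familiar with the technique, here is a minimal, entirely hypothetical sketch of the kind of model involved: weekly sales regressed on media spend and price using ordinary least squares, with invented data throughout. It is an illustration only, not an extract from any Awards case.

```python
# Toy illustration of an econometric (OLS) model: weekly sales regressed
# on media spend and price. All data below are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_weeks = 104
media_spend = rng.uniform(0, 100, n_weeks)   # weekly media spend (£k)
price = rng.uniform(4.0, 6.0, n_weeks)       # average price (£)
sales = 500 + 2.5 * media_spend - 40 * price + rng.normal(0, 25, n_weeks)

X = sm.add_constant(np.column_stack([media_spend, price]))
model = sm.OLS(sales, X).fit()
print(model.summary())  # the coefficients quantify each driver's effect on sales
```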
For anyone who has to write a technical appendix, providing the Technical Judging Panel with more detail on the econometrics used in their case is a big ask, with a lot riding on it. At the same time, for the technical judges, the task of reading numerous lengthy econometric appendices, some of which run to more than 100 pages, will always prompt some gritting of teeth.
We have therefore updated the guidelines for the technical appendix for the 2026 Awards, aiming to clarify what the judges are looking for, to reduce the length of the document, and to bring it in line with where the industry is heading; guidelines specific to Bayesian models are now included. A number of practitioners from across the industry were consulted to arrive at what we hope is a pragmatic and useful guide for econometricians.
The biggest change we hope the updated guidelines will bring about is that the technical appendix stops being a massive data dump of model outputs and tests. Whilst some visibility of model outputs and tests is still useful, and required to an extent, more emphasis is now placed on convincing the technical judges of the soundness of your approach through explanation.
Here are a couple of extracts from the updated guidelines which capture this change:
“Your primary goal is to make it easy for the judges to quickly grasp your methodology and the decisions you made, thereby convincing them of your expertise, the soundness of your approach, and the reliability of the model outputs.”
“Avoid lengthy data dumps; these can obscure understanding, suggest obfuscation, and detract from the judges' ability to assess your expertise effectively. Instead, present your approach using simple, easy-to-read explanations, supported by a few key illustrative tables and charts.”
On top of this, modellers have previously worried about the need to show ‘perfection’ in every element of their modelling. Whilst the judges are looking to ensure a sound approach, they promise a level of pragmatism in return for openness from the authors:
“The judges know that modelling isn’t always the art of the perfect and will be pragmatic in their assessments. Transparency about model limitations, pragmatic choices and any compromises made is valued by judges. Authors should not attempt to hide model weaknesses but instead explain and justify them. This clarity will help foster a positive impression and aid evaluation.”
In an attempt to shorten submissions, the new guidelines now specifically suggest how to approach cases with a large number of econometric models. Previously, ‘just to be sure’, a modeller with, say, 20 models might have felt the need to churn out tables of stats for all 20. The updated guidelines limit the full evaluation to the critical models, with a high-level summary of key diagnostics for the others.
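As a purely illustrative example of what such a high-level summary might look like (the model names and statistics below are invented, not drawn from the guidelines), one row of key diagnostics per secondary model is far easier to digest than pages of full output:

```python
# Hypothetical sketch: one row of key diagnostics per secondary model,
# instead of full output tables for each. All figures are invented.
import pandas as pd

summary = pd.DataFrame(
    {
        "model": ["Brand A volume", "Brand B volume", "Retailer sales"],
        "r_squared": [0.87, 0.79, 0.83],      # goodness of fit
        "durbin_watson": [1.94, 2.11, 1.88],  # residual autocorrelation
        "mape_pct": [3.2, 4.5, 3.9],          # mean absolute % error
    }
)
print(summary.to_string(index=False))
```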
The other big change to the guidelines is that they now provide more direction for those using a Bayesian approach. This change reflects the growing industry adoption of Bayesian methods.
I was previously a judge in 2020, and in that year every econometric model referenced in a technical appendix used the Ordinary Least Squares (OLS) approach; the previous guidelines were very much built around OLS. In 2024, many of the econometric technical appendices featured Bayesian models. This article isn’t the place to go into the differences, but the point is that different criteria are needed to evaluate a Bayesian approach compared to OLS. We hope authors find the new guidelines a useful steer on what the judges would like to see.
The process of getting to the Bayesian guidelines was interesting in itself (potentially just for the econometricians among us!). A number of experts in the industry were consulted on how best to assess a Bayesian model, for example which tests are most appropriate for assessing a given attribute of the model. There was a lot of discussion, and many thanks to everyone who contributed. Unlike OLS, where there is very clear best practice on how to assess a model, for Bayesian models some elements are still being debated. Our Bayesian guidelines may therefore be refined further in future, but we hope their first inclusion is a useful and much-needed addition.
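To give a flavour of the kind of checks under discussion (a generic sketch, not the guidelines’ prescribed list), convergence diagnostics such as R-hat and effective sample size, plus posterior predictive checks, are commonly used to assess a Bayesian model. The example below uses ArviZ’s bundled demo dataset in place of a real marketing model:

```python
# Generic sketch of common Bayesian model checks; not the Awards'
# prescribed test list. Uses ArviZ's bundled example dataset rather
# than a real marketing-mix model.
import arviz as az

idata = az.load_arviz_data("centered_eight")

# Convergence: r_hat near 1.0 and healthy effective sample sizes
# suggest the sampler has mixed well.
print(az.summary(idata, var_names=["mu", "tau"]))

# Posterior predictive check: does data simulated from the fitted
# model resemble the data actually observed?
az.plot_ppc(idata)
```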
Although econometrics is still considered a great method for proving and quantifying business effects, the technical judges would also encourage authors to draw on other methods, where appropriate, either to validate their econometric models or to provide additional or alternative proof of effectiveness.
One such method that has grown in popularity is the controlled experiment. In its simplest form, this involves exposing only part of the audience to a piece of activity and measuring the business response versus those not exposed. These experiments are often run using geographic areas to separate the exposed group from the control. If set up well, they offer a clean and robust way to quantify business effects.
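As a toy illustration of how the resulting lift might be quantified (the figures are invented, and a real design would also need matched regions, a pre-period read and significance testing), a simple difference-in-differences looks like this:

```python
# Toy difference-in-differences on invented sales figures from a geo test.
# A real experiment would need matched regions, pre-period validation
# and significance testing, all omitted here.
pre_test, post_test = 1000.0, 1150.0        # exposed (test) regions
pre_control, post_control = 1000.0, 1040.0  # unexposed (control) regions

test_change = post_test - pre_test           # 150.0
control_change = post_control - pre_control  # 40.0: what would have happened anyway
incremental = test_change - control_change   # 110.0 units attributed to the activity
lift_pct = 100 * incremental / pre_test      # 11.0% uplift

print(f"Estimated incremental sales: {incremental:.0f} ({lift_pct:.1f}% uplift)")
```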
Authors can always put any technical or commercially sensitive detail about how their experiment was run – or, in fact, commercially sensitive information to do with any part of their case – in a technical appendix.
The complete updated guidelines are available and we strongly encourage all prospective entrants to review them thoroughly.
The Awards team is happy to provide more tailored advice to help you write your technical appendix if you have a question not covered by the guidelines. If you would like to participate in a webinar/Q&A session about writing a technical appendix for the 2026 Awards, please register your interest by contacting [email protected].
More information about the IPA Effectiveness Awards 2026
The opinions expressed here are those of the authors and were submitted in accordance with the IPA terms and conditions regarding the uploading and contribution of content to the IPA newsletters, IPA website, or other IPA media, and should not be interpreted as representing the opinion of the IPA.