Benchmarking Smenchmarking

29th August by Deniz Hassan

Lately I've been awash with data from organisations big and small - all of which want to know how they're doing. Inevitably, the conversation turns to benchmarking, especially around the time the results of one of the larger digital studies are released.

As a sector we've got loads to compare ourselves to - from the IFL's Indigo study, which focuses on 'Global Individual Giving', to the M+R Digital Benchmarks and the grandiosely entitled Wood for Tree 'State of the Sector' (not to be confused with the very similarly named Blackbaud 'Status of UK Fundraising')… and we mustn't forget the countless internal benchmarks which organisations pay lots of money to conduct.

But it's a conversation (and often a budget line) that I gently try to steer people away from relying on - whether it's digital benchmarking or fundraising benchmarking in general - and I'll explain why.

How useful are they?

They're a bit useful, but not massively so. None of the benchmark studies we've got attaches any sort of context. It's just numbers with subjective meaning… and in my slightly humble opinion, subjective meaning means no meaning. If you see what I mean.

Let me give you an example of one of the simpler things people like to benchmark against - email click-through rate. Many a conversation about email strategy starts with "what's a good click-through rate?" and "the benchmark shows that xx% is good".
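
Even that 'simple' metric hides choices. Here's a quick sketch (all numbers invented) of how the same email send can yield three different 'click-through rates' depending on the denominator - and that's before asking whether the benchmark counted unique or total clicks:

```python
# Illustrative only - invented numbers for one hypothetical email send.
sent = 100_000
delivered = 96_000
opened = 30_000
unique_clicks = 1_500

ctr_of_sent = unique_clicks / sent            # clicks over everything sent
ctr_of_delivered = unique_clicks / delivered  # clicks over what actually arrived
click_to_open = unique_clicks / opened        # clicks over opens (CTOR)

print(f"CTR (of sent):      {ctr_of_sent:.1%}")       # 1.5%
print(f"CTR (of delivered): {ctr_of_delivered:.1%}")  # 1.6%
print(f"Click-to-open rate: {click_to_open:.1%}")     # 5.0%
```

Three defensible answers to "what's our CTR?" from a single send. Unless the benchmark tells you which one it collected, you don't know what you're comparing against.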

This is not a great start to the conversation, as it immediately assumes that all organisations are the same. For instance, in one large study, within the international aid sector, you have the British Red Cross participating alongside Women for Women International UK. Is there really a useful comparison to be made between your data and a merged dataset that includes such diverse organisations?

And perhaps my organisation is focused on building a large pool of low-value donors (with a high conversion rate), while another concentrates on higher-value giving. If I don't know the separate strategies of all the participants, then something like 'average cash gift' is not helpful to compare against.

The wrong metrics

Studies often use quite baffling metrics that we would never use when creating our own strategies. For example, how many strategies have a KPI for "Investment in digital advertising divided by total online revenue" (ignoring the subjectivity of 'total online revenue')?

My favourite red herring is 'cost per donation'. It's meaningless, and only exists because (as I'll go into later) many participants can't actually measure their activity well enough to separate the investment and returns across acquisition and retention. Again, is there anything meaningful in comparing my cost per donation as, say, a small membership organisation in a growth phase vs a mature organisation with a much larger base of existing donors? CPD is not the same as CPA.
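
To make that concrete, here's a made-up sketch of two hypothetical organisations. The numbers are invented, but they show how a blended cost per donation mostly reflects where an organisation is in its lifecycle, not how well it's performing:

```python
# Illustrative only - two hypothetical organisations with invented numbers.
def cost_per_donation(spend, donations):
    return spend / donations

# Org A: growth phase, so most gifts come from expensive cold acquisition.
org_a_spend = 50_000 + 5_000     # acquisition spend + retention spend
org_a_donations = 2_000 + 1_000  # new gifts + repeat gifts

# Org B: mature, so most gifts come from a cheap-to-contact warm base.
org_b_spend = 10_000 + 15_000
org_b_donations = 500 + 20_000

print(f"Org A blended CPD: £{cost_per_donation(org_a_spend, org_a_donations):.2f}")  # £18.33
print(f"Org B blended CPD: £{cost_per_donation(org_b_spend, org_b_donations):.2f}")  # £1.22
# Neither figure tells you whether either organisation's acquisition (the
# actual CPA) is good value - the donor mix drowns out the signal.
```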

Similarly, return on ad spend (ROAS) is to be taken with large fistfuls of salt. Having seen multiple submissions, I can tell you that the word 'return' can mean a plethora of things. Is it the immediate return at point of acquisition? Is it a long-term modelled return? How have participants attributed spend and income across the channels? How much of the figure rests on assumptions versus real data? So many variables across so many organisations mean it's not going to be reliable to compare against.
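
A rough illustration, with invented figures, of how one identical campaign could legitimately produce three very different ROAS submissions:

```python
# Illustrative only - one hypothetical campaign, three defensible 'returns'.
spend = 20_000
immediate_income = 14_000  # cash banked at point of acquisition
modelled_ltv = 60_000      # modelled five-year value of the recruited donors
attributed_share = 0.6     # assumed share credited by a cross-channel model

roas_immediate = immediate_income / spend                  # 0.7
roas_modelled = modelled_ltv / spend                       # 3.0
roas_attributed = modelled_ltv * attributed_share / spend  # 1.8

print(f"Immediate ROAS:  {roas_immediate:.1f}")
print(f"Modelled ROAS:   {roas_modelled:.1f}")
print(f"Attributed ROAS: {roas_attributed:.1f}")
# Three participants could honestly submit 0.7, 3.0 or 1.8 for identical activity.
```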

Data quality

One of the biggest issues facing our sector is data quality and measurement - and it affects organisations big and small. I've had the privilege of working with a number of top-20 charities this year, and they all have measurement issues. For example, one internal benchmarking study had several different definitions of how income was measured. That really should be a simple one, so you get my point.

Similarly, a phrase such as 'Not all participants were able to provide data for every metric' does not fill me with confidence or joy. It might be roughly translated as 'we asked a bunch of organisations some questions and we might have got some answers which may or may not be useful or accurate. But here's some graphs'.

The best benchmarking study I've seen - WHICH I LOVED (yes, I can be upbeat, you see) - collected data consistently across all markets and all channels, with no variance, and stored it in one data warehouse running a consistent data model. ROAS is the same for every market and channel. As is CPA. As is the attribution. Super.

In many studies, participants are invited to submit their own data, which is a huge problem. Take the example of attribution - one organisation may use last-click attribution to fill out the form, while another might use campaign-level income allocation as determined by their custom CRM rules. It's the old apples and pears adage. Take paid search - it will always show a better return on last-click attribution than Meta, because search tends to capture the final click while Meta typically works higher up the funnel.
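
Here's a toy example (the journey and the CRM rule are hypothetical) of how the same £50 gift gets reported completely differently under two attribution rules:

```python
# Illustrative only - one £50 gift preceded by three touchpoints.
journey = ["meta", "email", "paid_search"]  # in order; the gift follows the last touch
gift = 50.0

# Rule 1: last click - the final touchpoint (paid search) gets all the credit.
last_click = {channel: 0.0 for channel in journey}
last_click[journey[-1]] = gift

# Rule 2: a hypothetical CRM rule - split the gift evenly across the campaign's channels.
even_split = {channel: round(gift / len(journey), 2) for channel in journey}

print("Last click:", last_click)  # {'meta': 0.0, 'email': 0.0, 'paid_search': 50.0}
print("Even split:", even_split)  # {'meta': 16.67, 'email': 16.67, 'paid_search': 16.67}
# Two participants report the same gift as 100% paid search, or as a third paid search.
```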

Using tools such as Google Analytics can also pose problems. In GA4, for example, you can set your own custom attribution settings. And that's before you even consider that GA will always favour (by Google's own admission) Google channels. So the same income might be attributed to email by one organisation and to paid channels by another.

The other big assumption these studies make is that people are skilled in extracting, transforming and cleaning their data. It's a big old job, and different people will put different levels of rigour into it… it's just another variable. Self-submission without any consistent rules or auditing isn't a reliable way of collecting data. And even if there were consistent rules, it would be impossible for most participants to follow them.
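
A small, contrived example of what that variable looks like in practice - two analysts cleaning identical raw gift data, both defensibly:

```python
# Illustrative only - the same raw gifts, cleaned by two different analysts.
raw_gifts = [25.0, 25.0, 100.0, -100.0, 5000.0]  # possible duplicate, a refund pair, a major gift

# Analyst A: nets off the refund pair, keeps everything else.
analyst_a = [25.0, 25.0, 5000.0]
# Analyst B: also drops the second £25 as a duplicate and the £5,000 as an outlier.
analyst_b = [25.0]

print(f"A's average gift: £{sum(analyst_a) / len(analyst_a):,.2f}")  # £1,683.33
print(f"B's average gift: £{sum(analyst_b) / len(analyst_b):,.2f}")  # £25.00
# Both cleans are defensible; the submitted 'average gift' differs enormously.
```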

So what's the future?

I do think benchmarking can work, though. But the sector needs to mature significantly around data first. Data literacy is still much lower than it should be, considering the amounts of money some organisations are investing in paid media channels. We're still getting our heads around what attribution really means, and we still see huge discrepancies between things like our fundraising systems and our management accounts. This is what we need to work on.

But for the time being I think a lot of it is simply navel-gazing. My CTR's bigger than your CTR, and all that…
