The hunt for the red herring

09 November 11

Tags: measurement, social media

Most of the metrics we need to measure social media already exist.

The general consensus among marketers is that social media has finally arrived as a mainstream marketing ‘channel’. The quote marks are deliberate. Reducing the fantastic diversity of social media approaches and platforms to just one results line in our campaign evaluation reports does not do them justice. It is certainly not just another email.

That diversity of strategic and creative opportunities is currently the hottest topic discussed between clients and agencies. How do we evaluate the effectiveness of social media? How can we know that social media works for our brands and our campaigns? And what does ‘works’ actually mean? As is usually the case with new digital platforms that have matured into viable marketing spaces, there is no lack of opinions. The ‘confusion’ (surely the collective noun for a group of planners) of gurus with fancy new ways of looking at this is growing. Everybody seems to have an evaluation framework or two to offer to the desperate market; everyone is wielding a cutting edge of a sort. Well, so do barbers. There are more (red) herrings in this sea than in Norwegian coastal waters.

There are two notable recent attempts to bring clarity. One was devised by the American consultancy Syncapse. It is based on six key metrics, none of which is unique to social media (not a bad thing in itself): Product Spending, Loyalty, Propensity to Recommend, Brand Affinity, Media Value and Acquisition Cost. Each is a measure that every marketer understands and is probably already measuring in other channels. The main finding of the Syncapse study is the average annual monetary value of a Facebook ‘fan’: $136.38. The problem? It is all based on the claimed behaviour of 4,000 US survey participants. And it was purely Facebook-focused.

The other attempt is the broader social media framework produced by the Internet Advertising Bureau (IAB). It has four ‘As’, each for one composite metric: Awareness (cost per impression), Appreciation (cost per engagement), Action (cost per lead) and Advocacy (cost per referral).

There are two problems with this approach. The first is the ‘industry standards’ fallacy: the belief that simply having a set of universally agreed metrics for a marketing channel is enough to create some sort of benchmarking equilibrium. This is flawed. Even in mature channels such as email there are no ‘response standards’. Clients, even when they ask for one (which the less experienced still do), are actually asking for something much more specific: what is the average response rate for an email like this, with a similar purpose, in my own, or a similar, industry?

The fact that, for example, British Gas has an average email opening rate of X, while Diesel jeans has an opening rate of 5X – and this goes for different categories and types of emails sent at different points in the customer journey to different audiences – doesn’t create a meaningful ‘email standard opening rate’. The second problem is that no set of benchmarks will help us if we can’t actually track and measure what goes into them. The frontend doesn’t work without a robust backend.

And that has been the biggest problem with evaluating social media until now. In our blind panic to make sense of it all, we have started believing that the answer lies in a shiny new evaluation framework, specific to social media and full of new, exotic KPIs – a stylish Bauhaus mansion on the hill, overlooking the Old Digital Town. The new WHAT.

The answer is much more prosaic. It is not so much about ‘what’, but ‘how’. The way to crack the effectiveness of social media for marketing purposes is to better connect all the tracking opportunities between different platforms and then to top up the missing bits with proxy data. It is not a job for an architect; it is a job for a plumber.

Most of the metrics we need to measure social media already exist – or, at least, their equivalents. A ‘like’ is an action of engagement in the same way that a click on a link in an email is, or giving opt-in permission. A ‘share’ is an action of brand affinity in a similar way that forwarding an email (or a viral) is. A comment, well... it is just a comment, as any brand that has a bulletin board, or an inbound customer service call centre, already knows. Yes, they have different names. But what powers the action, and the outcome, is quite comparable. Marketers already have most of the metrics they need.

And if we are really hard pressed to produce a framework of sorts, one of the best attempts so far, for me at least, is provided by the Measurement Camp, a ‘global evaluation open source movement’ (in their words). Despite their purist insistence that social media is all about ‘relationships’ and ‘language’ – which, they believe, makes it inherently difficult to measure – their framework is both very old-fashioned and very elegant: Behaviour, Feelings and Financials. It works for any channel. But this is where the magic of frameworks ends for me. Tracking users’ actions across different platforms (social and non-social) is a much bigger mystery.

Think about this: a user comes across your Facebook page and becomes your fan – how will you know, at that moment, whether you already have that user as a customer in your database? Can you answer the same question if the same user clicks on a link and arrives at your main website? Do you know what the overlap is between your Facebook fans and Twitter followers? Can you prove, in numbers, that users who are more likely to share or retweet your content are also better customers? And that they became better customers only after they engaged with you – no funny chicken-and-egg, correlation/causation business implied?

Very few brands can. To do that, the evaluation investment has to connect as many dots in the multi-platform journey as possible, long before we start thinking about fancy frameworks. There are three ways to do this.

1. Hard tracking

Digital channels are lauded for the amount of data they provide, to the point that trying to analyse all of it looks like trying to drink from a fire hose. Some things are still technically impossible to track, but more can be tracked than is usually thought. We can serve cookies through our Facebook apps; we can use Facebook Connect to identify actions originated on our other digital properties (e.g. the main website); we can gear our database to recognise and flag these things from our different tracking and reporting tools; we should work with our ad networks to use their tracking tools better. Gradually, a clearer picture will emerge. This clarity comes at a price, admittedly, but it is still cheaper than throwing money at it blindly and hoping that some of it will hit the jackpot.
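Once tracking puts a matchable identifier behind each platform, the ‘connect the dots’ work is plumbing, not architecture. A minimal sketch – with invented names and data, assuming an email address (or similar key) has somehow been captured per platform – of flagging which fans are already customers and measuring cross-platform overlap:

```python
# Hypothetical identifiers captured by our (assumed) tracking setup.
# In reality these would come from app permissions, sign-ins and the CRM.
fans = {"ana@example.com", "ben@example.com", "cara@example.com"}
followers = {"ben@example.com", "cara@example.com", "dan@example.com"}
customer_db = {"ana@example.com", "cara@example.com", "eve@example.com"}

overlap = fans & followers        # users present on both social platforms
known_fans = fans & customer_db   # fans we already hold as customers
fan_penetration = len(known_fans) / len(fans)

print(f"Fan/follower overlap: {len(overlap)}")
print(f"Fans already customers: {fan_penetration:.0%}")
```

The point is not the code but the prerequisite: none of these questions can be answered until some common key links the platforms together.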

2. Implied effects

Sadly, it still isn’t possible to track everything precisely. This is where good data planning skills come in handy. Robust statistical analysis can spot trends and halo effects that different channels excite in one another. It also helps in testing some of the hypotheses and answering some of the questions we had at the beginning: is our social media work engaging? What is our definition of that engagement? What are the key expressions of our work, results-wise, and do we need to change our KPI set? If we invest smartly, the actual tracking will lay more than solid ground for this phase, scattering evidence about users’ behaviour all through the system. The Dr Watsons of data planning can then follow and connect those clues into working solutions.
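One such piece of detective work can be sketched simply. Using invented per-customer figures (the numbers and variable names below are purely illustrative), a Pearson correlation tests the hypothesis that customers who share more also spend more – remembering that correlation alone cannot settle the chicken-and-egg question raised earlier:

```python
from math import sqrt

shares = [0, 1, 1, 2, 3, 5, 8]         # social actions per customer (invented)
spend = [10, 12, 15, 18, 25, 30, 42]   # annual spend per customer (invented)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(shares, spend)
print(f"engagement/spend correlation: r = {r:.2f}")
```

A strong r here only suggests a halo effect worth investigating; establishing direction is what the claimed-effects research is for.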

3. Claimed effects

Finally, we work with human beings. For the same reasons that we deploy usability testing - despite all the technical testing options we have - users can also tell us how they use our social media properties. Is that long dwell time really spent reading content or making a tuna sandwich in the kitchen? What sort of content do they tend to pass on as far as our brand is concerned? How do they use it on the go and how is that different from the leisure of the home or office? This qual/quant top-up, as usual, can create new hypotheses to test, track and imply.

If you are, by this point, persuaded and wish to throw a wad of evaluation money at me saying ‘There, how do you want to use it?’, I would split it like this: 70% hard tracking, 20-25% data planning and 5-10% qualitative. Different brands will, of course, use different mixes.

Just don’t spend it all on consultants selling you the new evaluation framework.

Lazar Dzamic
Planning Director
