
Results are stupid and I hate them

BY ROB DYER, CRM BUSINESS ANALYST 

Earlier this week, I was sent a link to a report on email open and click rates across multiple sectors from a widely respected source, courtesy of an ESP which sends millions of emails per month.

Interrogating the data made for an interesting, if exasperating, half-hour of my day. While this sort of information undoubtedly has its place, I can’t help but wonder if it does more harm than good.

The open rates looked disappointingly low to me – not one sector in the report could boast an open rate over 30%.

If I were in my first job, cutting my teeth in marketing, I’d have palpitations reading results like those.  I’d ask myself why I was bothering at all.

If only 1 in 4 people I work with listened or spoke back to me when I talked to them, I’d feel like I should go and work somewhere else, but I’d also probably talk to the non-responders less – so as to avoid awkward situations.

The point is, while benchmarking against others is a reasonable and perfectly valid exercise, where does that information actually get you?  What happens next?  What do you change in order to affect the results of the next campaign? 

If your last campaign was fractionally over the industry average, would you take the afternoon off, or have an office party?

No, of course not.

Would you fire your creative team on the spot if results came in lower than the aggregate for companies in the same sector, who may or may not be comparable to your organisation?

Again, no.  Presumably, anyway.

If your results aren’t what you hoped for, you probably need a test plan so you can work towards a goal. What does ‘success’ mean to you? Once you have established this, how do you get there?

If you’re looking for a 50% open rate, you’ll probably need to send the email only to contacts who regularly open and engage with your comms. Do you know who they are?
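As a rough illustration – the field names, data shape and threshold below are assumptions, not a recommendation – pulling out your regular openers can be as simple as filtering on recent open history:

```python
# A minimal sketch, not a prescription: keep only contacts who opened
# at least 3 of the last 6 campaigns. The field names and the threshold
# are illustrative assumptions.
contacts = [
    {"email": "a@example.com", "opens_last_6_campaigns": 5},
    {"email": "b@example.com", "opens_last_6_campaigns": 1},
    {"email": "c@example.com", "opens_last_6_campaigns": 4},
]

engaged = [c for c in contacts if c["opens_last_6_campaigns"] >= 3]

for contact in engaged:
    print(contact["email"])
```

The point isn’t the code; it’s that ‘engaged’ needs a concrete definition before you can target it.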

When was the last time you cleansed your mailing list?  We think about what we want as consumers all the time – it comes naturally to us.  If you’re selective about what you engage with as a consumer, why would you not be selective about who you try to engage with as a marketer, as well as how and with what?

Is there something about your email creative or your subject line that’s dragging your results down? Have you checked your heat-maps to see where contacts are and are not clicking? Just because you spent hours and hours writing your content doesn’t mean it was engaging for the contact.

Perhaps less is more, and sending your lapsing contacts less frequent mail might make them more likely to open it?  If they still don’t open anything, what strategy do you have for re-engagement?

Have you tried sending your contacts mail at different times of day – maybe they’re morning people... maybe they’re not?

What about the overall journey your contact or segment has been on so far, and what happens next for them? What did you learn from the campaign? Yes, the results are different to last time, but what did you actually learn? And how will that learning be used to direct your campaigning in the immediate future?

There are so many questions you can ask yourself when the results are in – how does this campaign compare month on month, year on year, and so on?
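As a sketch – the figures here are invented purely for illustration – a month-on-month comparison needn’t be more than the same metric worked out per campaign and set side by side:

```python
# A minimal sketch with made-up figures: open rate per campaign,
# compared month on month.
campaigns = {
    "2024-01": {"sent": 10_000, "opened": 2_300},
    "2024-02": {"sent": 9_800, "opened": 2_450},
}

for month, stats in campaigns.items():
    open_rate = stats["opened"] / stats["sent"] * 100
    print(f"{month}: open rate {open_rate:.1f}%")
```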

But what’s really important to you?  How many people clicked this or that, or how many conversions you saw on the back of the campaign?  Did your contacts take the action you wanted them to?

In summary, there is nothing wrong with forensically interrogating your results and comparing them to others. However, it probably doesn’t pay to get too bogged down in the differences you see with your peers.

How you react to the results you see, and what you do to change things, matters far more. How do you measure success, and how do you go about changing things if you don’t achieve it?

If you need any help deciding what to do next, Response One can help you – get in touch.