Written by Nathan Perrott on 20 Jan 2017
(12 min read)
It's 2017 and words like 'data', 'key performance indicators' and 'return on investment' are still at the top of your agenda.
But what do they actually mean? Getting focus and clarity on what we should be measuring when it comes to talent attraction has never been more important. It genuinely concerns me how many organisations are still wasting thousands of pounds on marketing activity without understanding what’s having an impact.
I'm currently reading 'Black Box Thinking’, a captivating best-selling book by Matthew Syed on the secrets of performance and the importance of learning from failure. Early on in the book there is a story that resonates with the above challenge.
The story focuses on how sometimes we need to look at the data we cannot see, and it speaks perfectly to the world of talent attraction metrics.
Syed tells the story of Abraham Wald - a Hungarian mathematician who played a pivotal role in the Second World War. Wald had emigrated to the US from his beloved Vienna on grounds of safety and ended up working for a group of 'brilliant mathematicians’. He was asked by the military to help them analyse damaged aircraft that returned from battle and provide recommendations on where to reinforce the aircraft's armour. The goal was to improve the probability of pilot survival and aircraft return, which was around 50/50 before any improvements were made.
The Air Force had taken the trouble to accurately examine returning aircraft and record the damage - so they, and Wald, had lots and lots of data to work with. To the Air Force command, the patterns seemed very clear. Many of the planes were riddled with gunfire all over the wings and fuselage. But they were not being hit in the cockpit or tail. The longer the reporting and data collection continued, the clearer the pattern became, as shown in the image below.
The answer seemed obvious to the Air Force command - they should reinforce the aircraft wings and fuselage with better armour to protect the plane, since that's where the majority of the damage appeared to be. But Wald disagreed with this plan. He realised that command had neglected some key data. They were only considering the planes that had returned from battle. They were not taking into account the planes that had been shot down, and therefore had not returned - planes which may well have been crippled by gunshots to the cockpit and tail. Their initial interpretation of the data was misleading. They had surmised that the aircraft could take hits in the wings and fuselage and safely return. In fact, the planes actually needed reinforcing in the cockpit and the tail, where pilots were most vulnerable and least likely to survive a hit. So that's what they did.
This analysis, insight and action was pivotal to the war effort. What Wald taught us is that you not only need to make sure you're looking at the right data, but you also need to take into account the data you cannot immediately see. It also highlights the need to keep questioning the conclusions you reach, even when they feel right based on the data in front of you.
How can we apply Wald’s thinking to talent attraction?
When it comes to talent attraction measurement, there are some clear parallels with the above story. Recruiters and talent attraction professionals need to look at the data they have with a similar mindset to Wald. Is the data you have at your disposal telling the full story? Is it accurate, and is it measuring the right things? Are you making critical decisions based on bad or incomplete data?
The answer lies in the famous V's of Big Data.
The fourth and fifth ‘V's’ of big data
We've all heard of the three V's of big data (originally coined by Gartner analyst Doug Laney, and popularised by IBM):
Volume: the amount of data we have to deal with - there's more and more being created every day that goes by as more things become measurable.
Velocity: the speed at which data now comes at us - often in real time, and frequently faster than we can analyse and act on it.
Variety: the different types of data we are dealing with (structured vs unstructured, ATS source codes vs cookie-based tracking, recruitment agency vs recruitment marketing activity).
But, based on Wald's approach, there are two more 'V's we need to be cognisant of:
Veracity: the integrity and reliability of the data that we have (good data in, good insights out).
Visibility: do we have all the right, meaningful data in order to make the best decisions?
Are your measurement methodologies giving you the best data?
Taking into account Wald's story, and the above five V's, let’s look at a few common scenarios where talent attraction teams need to question whether they’re looking at the right data:
Scenario 1: Asking candidates where they saw the vacancy
Our findings show that asking candidates where they saw the job vacancy is highly inaccurate (one client of ours discovered that 92% of applicants incorrectly identified the source where they learned about the vacancy). It's very likely that a candidate won't remember what piece of marketing they saw, or where and when they saw it. They will probably be applying for other roles at the same time, and you will be using many marketing channels - including those that influence the decision but don't drive the final conversion.
If you’re using this approach to assess channel performance, you’re probably making critical decisions based on bad data.
Scenario 2: Using ATS source codes to measure channel performance
ATS source codes provide a good starting point on your talent attraction ROI journey. They measure an applicant's linear journey from when they click 'apply' on a job board ad through to when they have either registered their interest on the ATS or have actually submitted their application (depending on the sophistication of the ATS).
However, this method doesn't reflect how job seekers actually behave today. If a user doesn't register or complete their application in a single browser session, the ATS source code stops tracking.
Tracking will typically stop when the user:
• navigates to another website (to do research on Glassdoor, LinkedIn or Facebook, for example);
• closes the browser session to continue later;
• picks up their research on another device at a later point in time.
Of course, these latter examples are exactly how candidates behave nowadays.
Scenario 3: Using cookie-based tracking with a 'last click wins' model to credit conversions
One of the most common methodologies for tracking and measuring recruitment marketing activity is to use cookie-based tracking (dropping a cookie on the user’s device to be able to track them anonymously around the web). This is also considered best practice in non-recruitment sectors, particularly in e-commerce. Most recruiters (and recruitment marketing agencies, to be honest) still rely on a 'last click wins’ methodology (crediting the channel that the user last interacted with and converted from).
This is better than the first two scenarios above, but like Wald’s assessment of the returning damaged aircraft, it ignores some key data. To get a ‘full view’ of your channel performance, you need to consider the following:
Path to conversion & attributed conversions: Are you looking at the number of different channels in the candidate's path to applying, what they looked at and when? By doing this you can identify the channels that have influence but aren't necessarily responsible for the conversions. Channels such as Glassdoor, LinkedIn and Facebook are hugely influential in the research stages of the candidate's journey. You need to understand where a candidate was first exposed to your advertising and branding, which channels and sites influenced them, and where they finally converted from. This can help you refine your messaging and content for the appropriate stage of the candidate journey. (For more on correct channel attribution you can read Ross Davies' blog.)
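The difference between 'last click wins' and a multi-touch model can be sketched in a few lines. The snippet below is purely illustrative - the channel names and candidate journeys are invented - but it shows how a simple linear attribution model surfaces research channels that last-click credits with nothing:

```python
from collections import Counter

# Invented candidate journeys: ordered channel touchpoints,
# ending with the channel the application finally came from.
paths = [
    ["LinkedIn", "Glassdoor", "Job board"],
    ["Facebook", "Careers site"],
    ["Job board"],
    ["Glassdoor", "LinkedIn", "Careers site"],
]

def last_click(paths):
    """Credit the whole conversion to the final touchpoint."""
    credit = Counter()
    for path in paths:
        credit[path[-1]] += 1.0
    return credit

def linear(paths):
    """Spread the conversion credit evenly across every touchpoint."""
    credit = Counter()
    for path in paths:
        for channel in path:
            credit[channel] += 1.0 / len(path)
    return credit

print(last_click(paths))  # research channels such as Glassdoor get zero credit
print(linear(paths))      # influence earlier in the journey becomes visible
```

Under last-click, Glassdoor earns nothing despite appearing in half the journeys; under the linear model its influence shows up in the numbers.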
Viewability: When it comes to banner advertising, especially with the rapid growth of programmatic marketing in talent attraction, it's imperative to understand how viewable the inventory you're paying for actually is. It's not uncommon for an organisation to buy, say, 1,000,000 banner impressions, only to find that just 20% of what was served could actually be seen by the user ('viewable' meaning the banner was on screen for more than a second, rather than stuck at the bottom of the page where nobody looks). All of this is measurable nowadays, and measuring it is how you make sure you get what you've paid for.
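The impact of viewability on what you actually pay is simple arithmetic. Using the article's example figures (1,000,000 impressions, 20% viewable) and an invented CPM purely for illustration:

```python
# Article's example: 1,000,000 impressions bought, only 20% viewable.
impressions_bought = 1_000_000
viewability_rate = 0.20  # share of served banners actually seen

viewable_impressions = int(impressions_bought * viewability_rate)

# The CPM below is a made-up figure to show the effect on real cost.
cpm_paid = 5.00  # hypothetical cost per 1,000 served impressions, in GBP
total_spend = impressions_bought / 1000 * cpm_paid
effective_cpm = total_spend / (viewable_impressions / 1000)

print(viewable_impressions)  # impressions a human could actually see
print(effective_cpm)         # the real cost per 1,000 viewable impressions
```

At 20% viewability, the effective cost per 1,000 *viewable* impressions is five times the headline CPM - which is exactly why this data matters in a media negotiation.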
Lag time: How long is it taking candidates to convert from the initial exposure to your brand? How does this vary channel to channel, job to job, location to location? Understanding this means that you can identify the channels that get you faster conversions when you need to react quickly, and what channels might be better for branding.
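Lag time is straightforward to compute once you can pair a candidate's first exposure with their application date. A minimal sketch, with invented dates and channels standing in for what your tracking platform would supply:

```python
from datetime import date
from statistics import median

# Hypothetical (first exposure, application) date pairs per channel --
# in practice these would come from your cross-channel tracking data.
journeys = {
    "Job board": [(date(2017, 1, 2), date(2017, 1, 4)),
                  (date(2017, 1, 3), date(2017, 1, 6))],
    "LinkedIn":  [(date(2017, 1, 1), date(2017, 1, 20)),
                  (date(2017, 1, 5), date(2017, 1, 19))],
}

# Median days from first exposure to application, per channel.
median_lag = {
    channel: median((applied - seen).days for seen, applied in pairs)
    for channel, pairs in journeys.items()
}

for channel, lag in median_lag.items():
    print(f"{channel}: median lag of {lag} days")
```

Slicing this by channel, job family or location then tells you which channels convert quickly when a role is urgent, and which are better suited to longer-burn branding activity.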
Cross-device tracking: We live in a multi-device world, where it's highly unlikely that we start and complete a task on one device - whether that's searching for holidays, researching products to buy or hunting for a new job. Great strides have been made by the likes of Google to better understand, track and measure users who switch devices mid-journey. Making sure you measure this can deliver a 10-20% uplift in the accountability of your recruitment marketing metrics.
This, of course, needs to be complemented by the more fundamental metrics, as highlighted below:
Views: Did your advertising get seen, but not necessarily acted upon? How many people viewed your content, and how long did they spend on the page?
Traffic: How much traffic was driven by your calls to action? Was it the right traffic? Did the users do what you intended? Social media is largely used at the consideration and research stage of the candidate journey, as opposed to the 'action' stage (applying). Therefore, is measuring applications and hires on a 'last click wins' methodology the right thing to do? Probably not - it requires a deeper understanding of the user journey and the role your social media content is playing in the decision cycle.
Applications: Is your activity delivering applications? If so, are they the right quality? Is there a way to optimise conversions? Conversion rate optimisation (CRO) was one of the key buzzwords in the world of marketing in 2016, and will only continue to be more important in 2017. To ensure good CRO, we need to focus on offers / hires.
Offers / Hires: Firstly, are you measuring offers / hires by channel source? Knowing the source of your applications and hires will help you understand which channels are worth investing in. But it doesn't stop there. Ask yourself: 'Can I get the same number of hires with fewer applications?' or 'Can I get more hires with the same number of applications?' Are you looking at your application-to-hire ratio, and comparing and contrasting it between channel types and media vendors? Are you using this data to negotiate or re-negotiate your channel investment?
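The application-to-hire comparison in the questions above boils down to one division per channel. A quick sketch with invented figures - substitute your own ATS numbers:

```python
# Hypothetical per-channel figures, purely for illustration.
channels = {
    "Job board":    {"applications": 400, "hires": 8},
    "LinkedIn":     {"applications": 150, "hires": 6},
    "Careers site": {"applications": 120, "hires": 6},
}

# Applications needed per hire: lower is more efficient.
app_to_hire = {
    name: c["applications"] / c["hires"]
    for name, c in channels.items()
}

for name, ratio in sorted(app_to_hire.items(), key=lambda kv: kv[1]):
    print(f"{name}: {ratio:.0f} applications per hire")
```

In this made-up example the job board delivers the most hires but needs 50 applications for each one, while the careers site needs only 20 - exactly the kind of contrast to bring to a channel investment negotiation.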
Questions, questions, questions
So, what problems are you having with tracking, reporting, and insight?
Do you have too much data or not enough?
And is the data the right data?
As ever, I’d love to hear about the challenges you’re having. Feel free to connect with me by dropping me a line on LinkedIn.