Data analytics, racism and recruiting: the hidden dangers and how to avoid them
Data analytics can help eliminate human bias in candidate screening – as long as you don’t fall at the first hurdle.
A recent article on Australia’s HRM website showcased a glaring example of racial bias in recruitment. It also highlighted more subtle problems, including the fact that ‘whitening’ a resume increases the likelihood of a callback.
It’s here that data analytics has a clear advantage. Done well, it can objectively predict crucial behaviours and outcomes for job applicants, including which candidates are most likely to be high-performing, low-risk staff ‘on the job’.
By using concrete data to screen candidates from the outset, organisations can save time and money, pinpoint the right people to interview – and sidestep human bias and guesswork.
It can also more objectively identify the leadership potential and specific development needs of existing staff.
Sounds perfect. But there’s a catch: a growing fear that biases will creep in anyway, because human bias can shape the algorithm itself.
And that’s the key: whoever creates the frameworks must know how to avoid bias. They must be specialists who can identify and rectify data distortions – and ensure discriminatory assumptions aren’t embedded from the very beginning.
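To make this concrete, here is a minimal sketch of one common check a specialist might run on screening outcomes: the ‘four-fifths rule’, a widely used benchmark for adverse impact. The group labels and numbers are invented for illustration only – real audits involve far more than a single ratio.

```python
# Illustrative adverse-impact check using the four-fifths rule.
# All applicant figures below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening step passed through."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest-selected group's rate.
    Values below 0.8 are a common red flag (the four-fifths rule)."""
    return group_rate / reference_rate

# Invented screening outcomes for two applicant groups
rate_a = selection_rate(selected=60, applicants=100)  # 0.60
rate_b = selection_rate(selected=30, applicants=100)  # 0.30

ratio = adverse_impact_ratio(rate_b, rate_a)  # 0.50
flagged = ratio < 0.8  # below the four-fifths threshold: investigate
print(f"Adverse impact ratio: {ratio:.2f}, flagged for review: {flagged}")
```

A flagged ratio doesn’t prove the algorithm is biased, but it tells a trained analyst exactly where to look next.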
Data-driven recruitment is an exciting new frontier in HR, with incredible opportunities for efficiency and smart talent acquisition.
But to be effective – maximising validity and predictive outcomes – it’s essential to work with experts who have proven experience in both data science and organisational psychology.
In the end, data analytics alone doesn’t eliminate bias. There is a science to getting it right – and that calls for specialist know-how, clarity and a trained eye for potential pitfalls.
Want to get data analytics right from the start? Talk to us: experts in data analytics + executive psychology.