AI in recruitment promises efficiency, speed, and fairness. But here’s the reality: if you teach a machine using flawed data, you’re not removing bias, you’re reinforcing it.
AI is not inherently neutral.
It learns from the past, and if the past is biased, the future will be too. There’s also the inherent bias, conscious or unconscious, of the people who develop the model: if biased assumptions are baked into the code, the machine will exhibit them too (sounds obvious, right?).
This isn’t just a theoretical issue. It’s a practical one affecting how candidates are hired, how companies build teams, and how fair (or unfair) recruitment processes become.
According to the World Economic Forum’s Future of Jobs Report, 85 million jobs may be displaced by automation by 2025, while 97 million new roles may emerge that are better adapted to the new division of labour between humans, machines, and algorithms.
Recruitment, it seems, will be at the centre of this shift.
But how do we ensure this shift is fair?
The Problem: The Illusion of “Unbiased AI”
AI is often pitched as a fairer alternative to human judgement. The logic is simple: algorithms don’t “see” race, gender, or background, just data. But dig deeper and the flaws become clear. AI models are trained on existing datasets, and those datasets are rarely free of bias; instead of “fixing” the problem, the model amplifies it.
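To make that mechanism concrete, here’s a deliberately minimal sketch in Python. Everything in it is fabricated for illustration (synthetic data, hypothetical feature names; no real recruitment tool or dataset is implied). It shows how a model trained on skewed historical decisions learns to reproduce the skew, even when the protected attribute is never an explicit input:

```python
# Minimal, fully synthetic sketch: a model trained on biased historical
# hiring decisions reproduces the bias via a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)             # genuine ability, same distribution for both groups
group = rng.integers(0, 2, size=n)     # two demographic groups, labelled 0 and 1

# Historical labels: past recruiters rewarded skill BUT also favoured
# group 0. This is the "flawed data" described above.
hired = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# The model never sees the group directly, only a correlated proxy
# (think: a society, hobby, or school that tracks group membership).
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

for g in (0, 1):
    print(f"group {g}: recommended at rate {preds[group == g].mean():.2f}")
```

Run it and the model recommends the historically favoured group at a visibly higher rate, even though skill was drawn from the same distribution for both. The proxy feature does the damage, which is why simply deleting the protected column doesn’t make a system neutral.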
- Amazon’s AI Recruitment Tool
Amazon’s AI recruitment tool was scrapped when it was found to penalise CVs that mentioned “women’s” (as in “women’s chess club”). The reason? The AI was trained on a 10-year dataset in which male candidates dominated technical roles. The AI simply repeated what it had learned.
- The Facial Recognition Problem
MIT researchers found that commercial facial recognition algorithms were up to 34% less accurate for darker-skinned women than for lighter-skinned men. In recruitment, where video interview AI tools are becoming popular, flaws like these could have life-changing impacts.
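The practical lesson generalises beyond facial recognition: a single headline accuracy figure can hide exactly this kind of gap. Here’s a minimal sketch of the disaggregated check (the function name and arrays are placeholders, not any vendor’s API):

```python
# Minimal sketch: report accuracy per demographic slice rather than
# one global number, so gaps between groups become visible.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy for each demographic slice in `groups`."""
    return {
        g: float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

# Hypothetical usage against a labelled evaluation set:
# report = accuracy_by_group(labels, model_outputs, demographic_slices)
# A wide gap between the best- and worst-served slice is the early
# warning that an aggregate accuracy figure would have hidden.
```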
There are many more examples of this in write-ups, in books, and across industries (medical research is another). But being so closely involved with Harnham as a recruitment brand, and managing large-scale recruitment ourselves at Rockborne, we see the impact firsthand. Data-driven recruitment is a growing trend, and there’s no doubt that AI can help. When it comes to bias, though, the research shows caution is needed.
A report from PwC found that 76% of UK business leaders are concerned about ethical issues arising from AI, particularly in hiring. Concerns about “AI fairness” are leading companies to invest in tools to monitor and audit the decisions of AI systems.
I spoke at an event recently and my key message was:
“You can’t remove bias from any form of data analysis or AI. Everyone has biases, and the best people to write these programs are those who recognise their own and can collaborate with others to balance them out. You will never eliminate all bias; many may disagree with that, but when you really delve into it, it’s not a realistic goal. Beyond this, AI can’t replace the ‘human-to-human’ interaction that is so important in any recruitment process, especially for those starting out in their careers. At Rockborne, we could easily automate our recruitment process, but we’re choosing not to for these reasons, and the one thing all our candidates consistently give positive feedback on is our human-centred approach to recruitment.”
The problem isn’t just the presence of bias; it’s the pace at which it scales. A human hiring manager makes mistakes one decision at a time. AI makes the same mistakes across thousands of applications in seconds.
Data tells a story, but if that data isn’t questioned, challenged, or curated, it risks becoming a bad story on repeat.
The Solution: Human-Driven AI
Some argue that the answer is to “train the AI better.” But it’s not that simple. At Rockborne, we believe the path to better hiring is through a human-first, tech-supported approach.
- Curate data intentionally
Our approach is to question and curate the data itself, not just the algorithm. That means looking at what’s included, but also at what’s missing.
- Human oversight at key stages
AI can sort and shortlist candidates in seconds, but oversight is essential. Humans review AI-driven decisions, flag patterns, and check whether critical human qualities (like potential) are being overlooked.
- Challenge the role of automation itself
Just because you can automate something doesn’t mean you should. While many businesses race to “go fully automated,” we choose a more nuanced approach. Some elements are better left in human hands.
- Accountability and auditability
Companies need transparency. Every stage of an AI-driven recruitment process should be trackable, and every decision open to review, for candidates, regulators, and hiring managers alike. (A sketch of how the oversight and audit points can look in practice follows this list.)
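To ground those two points, here’s a minimal sketch with assumed names and thresholds (the review band, file name, and decision labels are all illustrative, not a description of any production system). It defers borderline model scores to a human reviewer and writes every decision to an audit trail:

```python
# Minimal sketch: human-in-the-loop screening with an append-only audit log.
import json
import time

AUDIT_LOG = "screening_audit.jsonl"   # hypothetical audit-trail file
REVIEW_BAND = (0.4, 0.7)              # borderline scores go to a human

def screen(candidate_id: str, model_score: float) -> str:
    """Decide, defer to a human where the model is uncertain, and
    always leave a reviewable record of what happened and why."""
    if REVIEW_BAND[0] <= model_score <= REVIEW_BAND[1]:
        decision = "human_review"            # oversight at the key stage
    elif model_score > REVIEW_BAND[1]:
        decision = "shortlist"
    else:
        decision = "reject_pending_review"   # reviewable, never silent
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "timestamp": time.time(),
            "candidate_id": candidate_id,
            "model_score": model_score,
            "decision": decision,
        }) + "\n")
    return decision
```

The exact thresholds are the least important part; the design point is that no decision is both automatic and invisible.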
Where Do We Go From Here?
Ethical hiring is a business imperative. Regulators are already looking at how to control the use of AI in hiring. The EU’s AI Act, for instance, will soon require “high-risk” AI systems (like hiring tools) to demonstrate fairness, explainability, and accountability.
Companies building fairer systems now will avoid fines and attract better talent. Candidates, especially younger ones, are demanding fairness and transparency in hiring. Brands that fail to deliver will be called out publicly.
Rockborne’s advice to hiring managers? Don’t chase shiny tech for the sake of it. Build a system where humans and machines work together.
If you’re looking for support, Rockborne can help you future-proof your hiring strategy. From data training to deploying skilled talent, our Attract-Train-Deploy (ATD) model ensures your teams are equipped with the knowledge and skills they need to succeed.
So here’s the parting thought:
“How does your company balance technology and ethics in talent acquisition?”
It’s a question every leader should be asking.
Learn more about Rockborne’s data & AI training courses.