Most hiring teams think they are fair. They interview every candidate. They follow a process. But then you look at who actually got hired over the last two years, and they all look the same. Same type of background. Same kind of CV. And nobody can explain why.
That is AI bias in recruitment. It does not come from bad intentions. It comes from AI tools that learned the wrong lessons from the wrong data, and then repeated those lessons thousands of times without anyone noticing.
What AI Bias in Recruitment Actually Is
AI does not think. It looks for patterns. You show it thousands of past hiring decisions, and it learns what a good hire looks like based on those. The problem is that if those past decisions were biased, the AI copies that bias. It does not know better. It just does what the data taught it.
There are five main ways bias gets into AI recruitment tools.
- **Training data** is the biggest risk. If the AI was trained on biased hiring history, it will keep making the same biased decisions. Candidates who do not fit the old pattern get filtered out before anyone even reads their CV.
- **Label definitions** are about how you defined a good hire in the first place. If that definition was built on assumptions, the AI inherits them.
- **Feature selection** is about which signals the AI uses to score candidates. Education level, for example, says more about someone’s background than their ability to do the job.
- **Proxies** are sneaky. Even if you remove obvious things like age or gender, the AI can figure them out from postcode, university name, or gaps in employment history.
- **Masking** is when bias is hidden inside technical decisions that look neutral. It is the hardest to spot and the hardest to fix.
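To see why proxies are so sneaky, here is a minimal sketch with entirely synthetic data. The groups, postcodes, and correlation strength are invented for illustration; the point is only that a feature which looks neutral can reconstruct a protected attribute the model was never shown.

```python
import random

random.seed(0)

# Hypothetical synthetic data: 'group' is a protected attribute we
# deliberately exclude from scoring. 'postcode' looks neutral, but in
# this made-up population it correlates strongly with group membership.
candidates = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    if group == "A":
        postcode = "1000" if random.random() < 0.85 else "2000"
    else:
        postcode = "2000" if random.random() < 0.85 else "1000"
    candidates.append((group, postcode))

# A model never sees 'group', yet a trivial rule on postcode alone
# recovers it far better than the 50% you would expect by chance.
guesses = ["A" if pc == "1000" else "B" for _, pc in candidates]
accuracy = sum(g == guess for (g, _), guess in zip(candidates, guesses)) / len(candidates)
print(f"protected attribute recovered from postcode alone: {accuracy:.0%}")
```

Removing the protected column does nothing here: the information survives in the proxy, which is exactly how bias hides inside features that look neutral.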
Microsoft launched a chatbot called Tay in 2016. Within 24 hours it had learned to repeat harmful and offensive content from users. Nobody programmed it to do that. It just learned from what it was shown. The same thing happens in recruitment AI. Without proper oversight, the tool does not get better over time. It gets more confident in the wrong answers.
The Real Cost of AI Bias in Hiring Practices
AI bias in hiring is not just a fairness problem. It is a business problem. When your tools are filtering out good candidates before they even reach an interview, you are losing talent you never got to meet.
For recruitment agencies, this gets expensive fast. If your shortlists are too narrow, clients get fewer good options. The wrong person gets placed. They leave within three months. The client is unhappy. The agency loses the relationship.
There is also a legal side to this. The EU AI Act now classifies AI tools used in hiring as high-risk. That means agencies in the Netherlands, Belgium, and Germany need to be able to show how their tools make decisions, prove those decisions are fair, and audit them regularly. That is not coming. It is already here.
And before you think this is only an AI problem, manual notes have their own bias built in. When a recruiter writes up an interview from memory, they are not replaying the conversation. They are reconstructing it. And human memory is shaped by first impressions, not facts.
How Recruitment Agencies Can Reduce AI Bias in Hiring
If your agency runs ten or more interviews a week and still relies on manual notes, the bias problem is already there. You just cannot see it yet.
The answer is not to stop using AI. It is to use the right kind. AI that is built on proper research, tested frameworks, and clear standards will reduce bias. AI that is just a cheap notetaker layered on top of a general language model will make it worse. Here is what good looks like.
1. **Capture every conversation, not just video calls.** Bias does not only happen in a formal Zoom or Teams interview. It starts in the quick phone screen where nobody took notes. The in-person meeting where impressions formed before a single question was asked. The intake call with a client where the brief was discussed but never properly written down. If your AI tool only works in video meetings, it is missing most of where the problem actually starts. Interview transcription software for recruiters that works across phone, in-person, and online calls means every conversation gets the same treatment.
2. **Use the same evaluation criteria for every interview.** When every recruiter measures candidates differently, fair comparison is impossible. One uses five criteria. Another uses two. The same candidate gets different scores depending on who interviewed them. Consistent AI templates applied to every call create one standard across the whole team. Evidence-based recruitment software built on psychology research gives those templates a proper scientific foundation.
3. **Match the conversation to the CV and the job description.** A transcript alone is not enough. You can record every word a candidate says and still miss whether they are actually right for the role. The real insight comes when you combine what was said in the interview with the CV and the job description. That is when you can see whether the match is there, where the gaps are, and what still needs to be asked. Structured AI interview reports that cross-reference all three give hiring managers something they can actually act on, not just a wall of text.
4. **Send the data straight into your ATS.** When a recruiter types up notes from memory and enters them manually into the ATS, two things happen. Time gets wasted. And the data gets filtered through whatever mood or mental state the recruiter is in after a long day. ATS integration for recruiters removes that step entirely. The structured data from the conversation goes directly into the right fields. No copy and paste. No missing information.
5. **Pick tools built on real research, not just AI hype.** Most AI notetakers on the market are generic tools with a recruitment label stuck on them. They were not built for hiring. They have no scientific grounding. And they carry whatever biases exist in the large language models they run on. The difference is AI built on psychology and linguistics research, developed with universities, and audited on an ongoing basis. That is the standard regulators will expect under the EU AI Act. It is also the standard that actually reduces bias.
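The consistency idea behind point 2 can be sketched in a few lines. The criteria names and five-point scale below are hypothetical, not any vendor's actual model; the point is that one fixed template, enforced for every interview, is what makes scores comparable across recruiters.

```python
# Hypothetical shared template: every interview is scored on the same
# criteria, on the same 1-5 scale, no matter who ran the conversation.
CRITERIA = ["role_experience", "communication", "motivation", "availability"]

def score_interview(ratings: dict) -> float:
    """Average the ratings, refusing to score an incomplete evaluation."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"interview not scored on shared criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Two recruiters, one template: these numbers are directly comparable.
result = score_interview({"role_experience": 4, "communication": 3,
                          "motivation": 5, "availability": 4})
print(result)  # prints 4.0
```

The design choice that matters is the error on missing criteria: a template only removes inconsistency if partial, ad-hoc evaluations are rejected rather than silently averaged.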
| Variable | Manual notes | Structured AI capture |
|---|---|---|
| Information captured | ✗ What the recruiter remembers | ✓ Full transcript with timestamps |
| Bias risk | ✗ High: recency, contrast, familiarity | ✓ Reduced: structured, consistent criteria |
| CV and JD cross-reference | ✗ Not available | ✓ Match, gaps, and follow-up flagged automatically |
| Conversation types supported | ~ Video calls only, if recorded | ✓ Phone, live, Teams, Zoom — all captured |
| ATS data entry | ✗ 10 to 20 minutes manual entry | ✓ Automatic: fields populated directly |
| Report consistency | ✗ Varies by consultant style and memory | ✓ Standardised template every time |
| Audit trail | ✗ No reproducible record | ✓ Every interview traceable and reviewable |
| GDPR compliance | ~ Depends on individual practice | ✓ Consent, storage, and access controls built in |
The point is not to take humans out of hiring. A recruiter’s judgement, instinct, and ability to build a relationship with a candidate still matter enormously. The point is to give recruiters accurate information so their judgement is based on what actually happened, not what they think they remember. You can read more about how In2Dialog prevents bias in its AI models and see exactly how this works in practice.
There is a practical benefit too. Recruiters who are not spending 30 to 45 minutes writing up notes after every call have more time for the work that actually matters. The data on how much time recruiters spend on interview admin shows just how significant that time loss is across a full week.
“In2Dialog works to support and refine the human dimension of recruitment, delivering efficiency, consistency, and objectivity, but never reducing the process to a mere automated tick box exercise. Ultimately, our product delivers both better individual matches and an enhanced organisational approach for securing those matches.”
Diddo van Zand, Co-founder, In2Dialog

AI bias in recruitment is a real problem. But it is not unsolvable. The agencies that handle it best are not the ones that avoid AI. They are the ones that use it properly, check it regularly, and choose tools that were actually built for recruitment. That is what responsible AI recruitment software looks like.