While AI can automate tasks traditionally done by Human Resources (HR) departments, it cannot replace the human touch and interactions that are core to the HR function, including recruitment exercises.

A close friend who had been unwell was evaluated for a role using video interviewing software. The AI system, unfortunately, marked my friend down for a lack of enthusiasm during the interview.

Things would likely have been different with a human interviewer, who might have asked about my friend's well-being. Crucially, a human interviewer might conclude that a candidate who is unwell yet still makes such a valiant effort deserves a positive evaluation.

This is further substantiated by our recent study, jointly conducted by NUS Business School, the International Institute for Management Development (IMD) and The Hong Kong Polytechnic University, in which job applicants perceived the use of AI in the selection and recruitment process as untrustworthy. Our findings indicate that applicants view algorithmic decision-making in recruitment as less fair than human-assisted methods.

The study was conceived to examine the potential challenges of AI-enabled decision-making in the HR hiring process. We interviewed over 1,000 participants of different nationalities, comprising candidates who had experienced either successful or unsuccessful outcomes in an AI-enabled hiring process.

The participants took part in four scenario-based experiments: the first two studied how the use of algorithms affects job applicants' perception of fairness in the hiring process, while the remaining two sought to understand the reasons behind the lower fairness score.

Algorithms fail to recognise the uniqueness of candidates

We found that job applicants perceived a higher degree of fairness when a human was involved in the resume screening and hiring decision process than under a fully algorithmic system. This observation held even amongst the candidates in the study who had received a positive application outcome from an algorithm-driven recruitment process.

The disparity in perceived fairness is largely attributed to AI’s inability to identify the candidates’ unique characteristics compared to human recruiters, who are better equipped to evaluate qualitative information that makes each candidate distinctive. AI-enabled processes can overlook important qualities and potentially screen out good candidates. This challenges the popular notion that algorithms can provide fairer evaluations of candidates and eliminate human biases.

In addition, using algorithms to optimise HR recruitment involves potential legal and ethical risks. These may include privacy loss, lack of transparency, obfuscation of accountability and possible loss of human oversight.

Despite the advantages algorithms offer in improving efficiency and productivity in HR management, organisations should prioritise involving human recruiters in the hiring process whenever possible.

Moving forward, HR departments should exercise caution when adopting AI in their recruitment processes, as missteps here may expose organisations to brand and reputational risks.