As companies increasingly turn to artificial intelligence (AI) for their hiring processes, numerous well-qualified applicants are being overlooked.
Body-language analysis, voice evaluation, gamified assessments, and resume scanning are all tools in the AI recruitment arsenal. These technologies evaluate job seekers and determine their suitability for positions.
AI adoption in recruitment
A significant share of businesses are adopting these technologies. An IBM survey from late 2023 of over 8,500 IT professionals worldwide found that 42% were using AI screening to improve their recruitment and HR processes, while a further 40% were considering adopting it.
Many corporate leaders believed AI would eliminate bias from hiring. In some instances, however, the reality has been quite the opposite, and there are growing concerns that these tools may be inadvertently filtering out the most capable candidates.
The impact of AI on job seekers
Hilke Schellmann, a US-based author and assistant professor of journalism at New York University, argues that there is little proof these tools are free of bias or effective at identifying the most suitable candidates. In her view, the main threat posed by this software is not machines taking people's jobs but machines preventing skilled people from getting them.
In 2020, for example, Anthea Mairoudhiou, a UK-based make-up artist, was told to reapply for her role after being furloughed. Although her skills were rated highly, the AI tool HireVue scored her body language poorly and she lost her job. Following criticism, HireVue discontinued its facial-analysis feature in 2021. Schellmann notes that other workers have filed complaints against similar systems.
Systemic flaws in AI tools
Schellmann points out that job applicants rarely know whether an AI tool was the sole reason for their rejection, because the software typically gives them no feedback on how they were assessed. Even so, there are clear examples of systemic problems.
In one case, a rejected applicant resubmitted the same application with a younger birthdate and was offered an interview. In another, a resume scanner trained on the CVs of current employees favored applicants who listed "baseball" or "basketball" as hobbies, which were common among male employees, over those who listed "softball", a hobby more associated with female applicants.
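To make that mechanism concrete, here is a minimal, hypothetical sketch, not HireVue's or any other vendor's actual system: a simple classifier trained on a synthetic, skewed set of "current employee" CVs learns to reward irrelevant hobby keywords that happen to correlate with who was hired in the past. The data, the scikit-learn library, and the logistic-regression model are illustrative assumptions only.

```python
# Illustrative sketch only: shows how training on a skewed set of past hires
# can teach a screener to weight proxy keywords such as "baseball"/"softball".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: past hires (label 1) happen to list "baseball",
# while non-hires (label 0) happen to list "softball".
cvs = [
    "python sql baseball", "java sql baseball", "python aws baseball",
    "python sql softball", "java aws softball", "python aws softball",
]
hired = [1, 1, 1, 0, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the hobby words carry the strongest signal,
# even though they say nothing about job skills.
for word, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>10s} {weight:+.2f}")
```

Running this prints a positive weight for "baseball" and a negative one for "softball", while the genuine skill keywords, which appear on hired and rejected CVs alike, carry little weight. That is the proxy effect Schellmann describes, reproduced in miniature on made-up data.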
Because these tools screen candidates at scale, a single flawed system can harm far more applicants than any one biased human hiring manager ever could.
Schellmann emphasizes how hard it is to pinpoint the exact source of the harm, and suggests that companies have little incentive to address these issues because the tools save them money.
The future of ethical AI in hiring
Ensuring that AI is fair and unbiased is crucial, according to Sandra Wachter, professor of technology and regulation at the University of Oxford. She argues that ethical AI not only fulfills legal and ethical obligations but can also boost profitability by producing fairer, more merit-based decisions.
Schellmann advocates regulatory measures to rein in these problems, warning that without intervention the workplace of the future could become even more unequal.