In 2023, a new law regulating AI-enabled recruiting will go live in New York City, with more legislatures inevitably to follow. This comes nearly a decade after Amazon built its infamous AI recruiting tool, which the company ultimately scrapped after discovering it was biased against female candidates.
Emerging technologies are often left unchecked as industries take shape around them. Thanks to rapid innovation and sluggish regulation, first-to-market companies tend to ask the public for forgiveness rather than seek institutional permission. Nearly 20 years after its founding, Facebook (now Meta) is still largely self-regulated. Cryptocurrency debuted in 2009, and with a market cap now of $2.6 trillion, the debate over regulating it is only getting started. The World Wide Web ran completely unfettered for five years until Congress passed the Telecommunications Act in 1996.
Those tasked with developing legislation often don't understand the technology they are regulating, resulting in vague or out-of-touch statutes that fail to adequately protect users or promote progress. Unsurprisingly, the commercialization of artificial intelligence is following a similar path. But given AI's inherent capacity to evolve and learn at an exponential pace, how can regulators or AI practitioners ever keep up?
Ready or not, AI-hiring governance is here. Here are the four most important things to know as legislation surrounding this transformative technology continues to roll out.
1. Data is never neutral.
In the recruiting world, the stakes of leaving AI unchecked are high. When AI is deployed to screen, assess and select job candidates, the risk of creating or perpetuating biases against race, ethnicity, gender and disability is very real.
Trying to collect unbiased data during the recruiting process is like walking through a field of landmines. Conscious and unconscious determinations are made based on GPA, school reputation or word choice on a resume, leading to historically inequitable outcomes.
This is why the NYC law will require all automated employment decision tools to undergo a bias audit, in which an independent auditor determines the tool's impact on individuals across a number of demographic factors. While the particulars of the audit requirement are vague, AI-enabled hiring companies will likely be mandated to perform "disparate impact analyses" to determine whether any group is being adversely affected.
Practitioners of ethical AI know how to remediate biased data and still produce highly effective, predictive algorithms. They must visualize, study and clean the data until no meaningful adverse impact is found. Non-data scientists, however, will have trouble doing this on their own: the few robust tools that exist are mostly open-source libraries built for specialists. That's why it's critical to have experts in machine learning techniques rigorously scrub the data inputs before any algorithms are deployed.
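To make the remediation step concrete, one widely used pre-processing technique is reweighing, which assigns sample weights so that group membership and the hiring outcome become statistically independent in the training data. Below is a minimal sketch, assuming a pandas DataFrame with hypothetical `group` and `hired` columns; a real pipeline would pair this with the visualization and adverse-impact checks described above.

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str = "group",
            label_col: str = "hired") -> pd.Series:
    """Kamiran-Calders style reweighing: weight each (group, label) cell
    so that group and label become independent in the weighted data."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n
    # Weight = expected probability under independence / observed probability.
    return df.apply(
        lambda row: (p_group[row[group_col]] * p_label[row[label_col]])
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )
```

These weights can then be passed to most model-training APIs (e.g., a `sample_weight` argument) so the learned algorithm no longer rewards the historical correlation between group membership and hiring outcomes.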
2. Diverse and ample data sets are essential.
To avoid both biased outcomes and regulatory trouble, the data used to train AI must adequately represent all groups. This is especially important in hiring, as many professional working environments are majority white and/or male, especially in industries like tech, finance and media.
If accessing diverse, rich and ample data is not an option, experienced data scientists can synthetically generate additional, representative samples to ensure the entire data set has a one-to-one ratio among all genders, races, ages, etc., regardless of the percentage of the population they represent in the industry or workforce.
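As a rough sketch of what that balancing can look like, the Python snippet below oversamples each group to the size of the largest one, producing the one-to-one ratio described above. The `group` column is hypothetical, and simple resampling with replacement stands in for the more sophisticated synthetic-generation methods an experienced data scientist would actually use.

```python
import pandas as pd

def balance_groups(df: pd.DataFrame, group_col: str = "group",
                   random_state: int = 0) -> pd.DataFrame:
    """Oversample every demographic group to the size of the largest
    group, yielding a one-to-one ratio across groups."""
    target = df[group_col].value_counts().max()
    parts = [
        g.sample(n=target, replace=True, random_state=random_state)
        for _, g in df.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)
```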
3. AI should never exclude candidates.
Traditional recruiting approaches often rely on structured data, like resume information, and unstructured data, such as a "gut feeling," to filter candidates out of consideration. These data points are not particularly predictive of future performance and often carry the stickiest, most systemic biases.
However, some AI-enabled hiring tools go further, issuing recommendations that instruct a hiring decision-maker to eliminate candidates outright based on the AI's determination. When AI excludes candidates this way, problems are likely to arise.
Instead, these tools should provide additional data points to be used in conjunction with other information collected and evaluated in the hiring process. On AI’s best day, it should provide actionable, explainable and supplemental information on all candidates that allows employers to make the best, human-led determinations possible.
4. Test, test and test again to remove stubborn or buried biases.
Future regulation will require thorough, cataloged and maybe even ongoing testing for any AI designed to help make hiring determinations in the wild. This will likely mirror the four-fifths (4/5ths) rule set in place by the Equal Employment Opportunity Commission (EEOC).
The 4/5ths rule states that the selection rate for any race, sex or ethnic group must not be less than four-fifths (80%) of the selection rate for the group with the highest rate. Achieving no adverse impact under the 4/5ths rule should be standard practice for an AI-enabled hiring tool.
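The arithmetic behind that check is straightforward. This small Python sketch computes each group's selection rate relative to the highest-rate group and flags any ratio under 0.8; the group names and counts are hypothetical.

```python
def adverse_impact_ratios(selected: dict, total: dict) -> dict:
    """Selection rate per group divided by the highest group's rate.
    Any ratio below 0.8 signals potential adverse impact under the
    EEOC four-fifths rule."""
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical counts: 40 of 100 men selected vs. 28 of 100 women.
ratios = adverse_impact_ratios({"men": 40, "women": 28},
                               {"men": 100, "women": 100})
# Women's ratio = 0.28 / 0.40 = 0.70, which is below 0.80,
# so this selection process fails the four-fifths rule.
```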
However, it's possible, and advisable, to go a step further. For example, say the tool you use presents performance predictions for candidates. You might want to ensure that among the candidates with the highest predictions there is adequate representation and no sign of adverse impact. This helps reveal whether biases are concentrated at particular points along the prediction scale, and it helps you create an even more equitable ecosystem for candidates.
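One way to sketch such a check, again assuming a pandas DataFrame with hypothetical `prediction` and `group` columns, is to treat landing above a high score quantile as "selection" and re-run the four-fifths ratio within that top slice.

```python
import pandas as pd

def top_slice_ratios(df: pd.DataFrame, score_col: str = "prediction",
                     group_col: str = "group",
                     quantile: float = 0.9) -> pd.Series:
    """Re-run the four-fifths check among top-scoring candidates to see
    whether bias concentrates at the high end of the prediction scale."""
    cutoff = df[score_col].quantile(quantile)
    selected = df[score_col] >= cutoff
    rates = df.assign(sel=selected).groupby(group_col)["sel"].mean()
    return rates / rates.max()  # any value < 0.8 warrants investigation
```

Sweeping `quantile` across several thresholds (say, 0.5, 0.75 and 0.9) shows whether a tool that passes the audit overall still disadvantages a group among its highest-rated candidates.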
Increased oversight in AI-enabled hiring will, in time, reduce the likelihood that candidates will be disadvantaged based on subjective or downright discriminatory factors. However, due to the newness and ambiguity of these laws, AI companies should take it upon themselves to ensure candidates are protected.
Even with all of the risks, the advantages of getting AI in hiring right are simply unmatched. Artificial intelligence can improve efficiency, accuracy and fairness alike, and impending oversight shouldn't temper its adoption.
Dr. Myers is the CTO of Suited, an AI-powered, assessment-driven recruiting network used by professional services firms to accurately, confidentially and equitably discover and place candidates from all backgrounds into competitive early-career opportunities. Prior to Suited, he co-founded another AI-based recruiting startup dedicated to removing bias from the recruiting process. He received his Ph.D. in Computational Science, Engineering and Mathematics from the University of Texas, with a focus on building machine learning models.