Using AI in recruitment

10th May 2022 Employment

The use of “artificial intelligence” (AI) within recruitment accelerated in response to the Covid-19 pandemic. The trend is set to continue as recruiters strive for reduced time-to-hire in an increasingly demanding recruitment market. But, in using AI, is there a risk of falling foul of duties under employment law?

The technology

Over the years, AI has been used in various guises within the recruitment cycle, from talent sourcing to onboarding. This article focuses on the algorithm-based technologies which account for much of the recent growth in this area. Tools adopted by recruiters include:

  • Facial and emotion recognition software, which analyses candidates’ language, tone and facial expressions to determine suitability for a role.
  • Transcription software, which transcribes oral interview sessions with candidates.
  • Gamification software, which comprises behavioural-related tasks to assess candidates’ aptitude for creative thinking and problem-solving.

Developers claim that AI applications can level the playing field in talent acquisition by removing human bias from hiring decisions. However, accounts of some job applicants’ experiences suggest that outcomes can be at odds with that objective.

Computer says no

Algorithms are only as good as the data used to train them. Recognition of the technology’s potential pitfalls fuels concerns that it may not be an equaliser in all contexts. For example:

Facial and emotion recognition

Data sets used to build facial analysis systems may not be sufficiently diverse to account for key candidate differentials.

In 2018, a researcher at the University of Maryland conducted a study of facial analysis software. It found that two separate systems ascribed more negative emotions to the ambiguous facial expressions of black users than to those of white users.

It’s also unclear how AI applications will respond to atypical facial expressions that don’t conform to the “norm” established by the algorithm’s training data.

Gamification

The combination of recruitment and gamification is notionally intended to improve the experience for candidates.

However, studies have highlighted the unsuitability of a one-size-fits-all approach. Gamified assessments may involve metrics based on the timing and precision of responses within pre-set audio-visual display settings.

For some candidates, design aspects may serve as stressors. Any rigidity around interface settings and measuring user input may therefore serve to disengage certain candidates rather than to enhance performance.

Voice recognition

Scrutiny of voice recognition applications has demonstrated the scope for systems to produce poorer outcomes across dialectal and regional variations of speech. Research has also indicated that speech recognition platforms may produce negative outcomes for individuals with atypical speech patterns which the algorithms have not been trained to recognise.


These examples show how, in the modern age of recruitment, the way you look, think and speak might adversely affect your employment prospects through the so-called “unintended consequences” of AI.

The MAC case

AI was at the heart of a legal dispute between the make-up brand MAC (a subsidiary of Estée Lauder) and three of its former employees. In June 2020, as part of a redundancy exercise, MAC used software provided by the recruiting platform HireVue to conduct video interviews. Though not a recruitment exercise, the case serves as a cautionary tale about the adoption of AI tools.

The HireVue software analysed the interviewees’ answers, language and facial expressions using an algorithm. Interview scores were considered alongside sales figures and employment records. Three employees appealed the decision to make them redundant, suspecting that the HireVue interview had been their downfall, and took legal action against Estée Lauder.

According to the employees, they had not been informed of the nature of the redundancy assessment. They also alleged that no explanation of the methodology for the algorithmic evaluation was provided. An out-of-court settlement was reached earlier this year.

The employees’ claims touch on the crux of the problem with AI applications: the lack of transparency around how they perform. Commercial confidentiality is often cited as the basis for withholding information. However, there is a clear tension between this reticence and employers’ obligations under the Equality Act 2010 regime. Where employers cannot explain how and why decisions have been made, there is a risk of breaching employment legislation. Ultimately, it is employers who are held responsible for the decisions made, not the technology.

What does the law say about AI in employment?

No specific regulatory framework exists in the UK for AI. However, AI has been a focus of the Government’s 2020 National Data Strategy and 2021 National AI Strategy. Following commitments made in these documents, an Algorithmic Transparency Standard for public sector organisations was published in November 2021. Most recently, the Government’s March 2022 response to the report of the Commission on Race and Ethnic Disparities set out action points for building “the most trusted and pro-innovation system for AI governance in the world”. These include that the Office for Artificial Intelligence will publish a white paper on governing and regulating AI this year, and that the Equality and Human Rights Commission will issue guidance on how to apply the Equality Act to algorithmic decision-making.

Therefore, the conundrums posed by AI applications must be addressed within the existing legal framework. But those laws were not purpose-built for these quandaries, so the answers are not easy to find. Encouragingly, some organisations have campaigned for the law to catch up with technological developments. Most notably, the Trades Union Congress has recommended measures to protect against algorithmic discrimination, including:

  • a reversal of the burden of proof for “high risk” AI uses, so that the employer must disprove that discrimination occurred rather than the claimant bearing the burden;
  • the creation of “red lines” beyond which the deployment of new technologies should not occur; and
  • mandatory AI registers to be regularly updated by employers and available to job applicants.

The takeaway

It will be some time before the law catches up with the challenges that AI presents. In the meantime, it’s important that all those engaging with these applications do not overlook the possibility of “unintended consequences” occurring, and consider actions to eliminate risks or to challenge questionable outcomes.

Zila Lwanda is a Solicitor in our Employment department, based in Manchester.
