AI is now all around us, and recent advances have made it more powerful and easier to use than ever. However, as AI becomes more common in the workplace, there is a risk that it could be used unfairly. It is crucial that employers adopt AI in a fair and responsible way.

The pitfalls of AI are highlighted by the case of Manjang v Uber Eats, a claim brought to an Employment Tribunal after an Uber Eats driver alleged that facial recognition software on the company's app was indirectly discriminatory.

Facts

In March 2020, Uber Eats introduced a facial recognition system to their app which required workers to provide a real-time photograph of themselves as verification in order to access the app. This photograph was then automatically checked against the driver's profile picture. Access to the app was required for the driver to complete their work.

Pa Edrissa Manjang, who is Black and of African descent, had worked for Uber Eats as a driver since 2019. Mr Manjang experienced multiple failed recognition checks when attempting to log into the app, which meant that he could not access work or get paid. Following this, Mr Manjang requested that Uber allow a human to review the photos, only to be told that, after 'careful consideration', his account was being deactivated.

Mr Manjang was eventually dismissed from Uber by email, with the company claiming that there were 'continued mismatches' between the pictures he took to register for a shift and the one on his Uber profile. Mr Manjang claimed that he was not made aware of the process that led to his deactivation, nor provided with an effective route of redress, and that the technology placed people who are Black, non-white or of African descent at a significant disadvantage because they were less likely to pass the facial recognition check.

It has been reported that facial recognition technology can be up to 99% accurate at recognising white male faces, but its error rate for faces of colour, and especially for the faces of Black women, can reach up to 35%.

Following his dismissal, Mr Manjang filed an Employment Tribunal claim against Uber for indirect race discrimination, harassment and victimisation in October 2021. He was supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU), which raised concerns about the use of AI and automated processes to permanently suspend drivers' access to the app, thereby depriving them of income.

Following the recent settlement of the claim, it has become clear that employers must be transparent with their workforces about the uses of AI and alert to its attendant risks.

The risks of AI for employers

Given the complexity of AI, employers face unique challenges, requiring a thorough understanding of the risks of discrimination and human rights abuses as AI integration increases in the workplace. Dan Lucy, Director of HR Research and Consulting at the Institute for Employment Studies, highlights that it is 'not so much whether AI should be used or not, but the importance of ensuring that the way in which it is designed and deployed is transparent, fair and open to challenge.'

As algorithms replace human decision-making processes, they inevitably carry the risk of replicating existing biases and inequalities in society. AI systems may therefore inadvertently encode unfair rules, resulting in discriminatory decisions of which employers are unaware. In addition, contesting decisions involving AI remains a difficult process.

Commentators have noted that more must be done to foster transparency and openness among employers regarding the use of AI in the workforce. Moreover, employers relying on automation to manage their staff must be vigilant in preventing discrimination.

Practical considerations for employers using AI in their workforce

  • User feedback – Employers should establish a process for workers and employees who interact with an AI tool to provide feedback and report any issues they encounter with it. Employers should also provide routes for contestability and redress.
  • Performance testing – Employers should ensure that they are assessing the performance of any AI system for accuracy and precision. The performance of the AI system should be assessed against equalities outcomes to identify any learnt biases or inaccuracies for groups with protected characteristics.
  • Impact assessment – Employers should carry out a full risk assessment to mitigate potential areas of concern with each AI tool used. Bias audits should then be conducted periodically to ensure the AI system continues to deliver fair outcomes.
  • Flexibility – Allow for flexibility or alternatives where a worker has a protected characteristic that means an AI tool may not function correctly for them.
  • Training – Train HR professionals and managers on how to understand and interpret data resulting from AI, including checking the accuracy of this data.
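To make the performance-testing and bias-audit points above concrete, the kind of check a data team might run can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the group labels, figures and the 80% ("four-fifths") threshold are illustrative assumptions borrowed from adverse-impact analysis, not a legal standard, and a real audit would need properly sourced data and specialist input.

```python
# Illustrative bias audit: compare an AI verification tool's pass rates
# across demographic groups. All group names and figures are hypothetical.

def pass_rates(outcomes):
    """outcomes: dict mapping group -> (passed_checks, total_checks).
    Returns each group's verification pass rate."""
    return {group: passed / total for group, (passed, total) in outcomes.items()}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose pass rate falls below `threshold` (here 80%)
    of the best-performing group's rate -- a rough adverse-impact screen."""
    best = max(rates.values())
    return {group: rate for group, rate in rates.items() if rate / best < threshold}

# Hypothetical audit data: (successful verifications, total attempts) per group.
outcomes = {
    "group_a": (990, 1000),  # 99% pass rate
    "group_b": (650, 1000),  # 65% pass rate
}

rates = pass_rates(outcomes)
flagged = flag_disparities(rates)
print(flagged)  # group_b is flagged: its rate is below 80% of group_a's
```

A flagged group would not in itself prove discrimination, but it is exactly the kind of disparity that should trigger the human review, redress routes and alternative processes described above.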

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.