The rise of artificial intelligence (AI) means that ever more of our interactions with services are driven by it. Some of us welcome its efficiency in sifting vast quantities of data to produce tailored outcomes. Many can also see its great potential to improve our lives in fields as diverse as medical diagnostics, stringent identity checks and driverless vehicles.

However, as some of the AI systems which rely upon algorithms start to bed in, their biases are becoming more exposed. Those biases might well be having a disproportionate or discriminatory impact on certain groups, particularly those which already face disadvantage.

Caroline Criado Perez’s book ‘Invisible Women’ exposed the very real impact of bias across all aspects of daily life, resulting from the way data is gathered and used by governments, healthcare, technology, workplaces, urban planning and the media. There is little doubt that feeding biased data into AI systems will produce biased outputs, at least until those systems are designed to detect and correct for the human prejudices that their designers may unwittingly build in.

As more becomes known about the design of the algorithms used by AI systems, the scope for relying on that design as evidence of discriminatory treatment will grow. Two examples of workers being affected by data bias have recently made headlines.

In the first case, the facial recognition software used by Uber to verify the identity of its drivers is alleged to discriminate against individuals on the grounds of race. Two UK trade unions are supporting their members in bringing employment tribunal claims against Uber on the basis that a disproportionate number of drivers from BAME backgrounds are unable to verify their identity via the software. The consequences for those affected by the errors include the loss of their jobs.

A study in 2018 concluded that several facial recognition programmes (including the one produced by Microsoft and used by Uber) produce an error rate of up to 34% when “recognising” women with dark skin. Given that 95% of drivers in London do not identify as white, this risks affecting a significant number of workers.
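
By way of illustration only, the disparity the 2018 study measured comes down to a simple error-rate comparison across demographic groups. The sketch below uses entirely invented verification outcomes and hypothetical group labels; it is not Uber’s or Microsoft’s code, merely a way of showing how such an audit figure is calculated.

```python
from collections import defaultdict

# Hypothetical verification attempts: (demographic group, whether the software
# verified the driver correctly). The data is invented purely for illustration.
attempts_log = [
    ("women, darker skin", False), ("women, darker skin", True), ("women, darker skin", True),
    ("men, darker skin", True), ("men, darker skin", True), ("men, darker skin", False),
    ("women, lighter skin", True), ("women, lighter skin", True),
    ("men, lighter skin", True), ("men, lighter skin", True),
]

totals = defaultdict(int)
failures = defaultdict(int)
for group, verified in attempts_log:
    totals[group] += 1
    if not verified:
        failures[group] += 1

# Error rate per group: the figure the 2018 study compared across groups.
for group in sorted(totals):
    print(f"{group}: {failures[group] / totals[group]:.0%} error rate "
          f"({failures[group]} of {totals[group]} attempts)")
```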

The second case concerns Facebook’s job adverts and stems from an investigation by the international NGO Global Witness. The NGO alleges that the algorithms which determine which users are shown particular job adverts discriminate on the grounds of gender and age. Its investigation revealed some shocking statistics for the adverts shown to British users (a simple illustration of how such skew can be measured follows the list):

  • Mechanic jobs were shown to users who were 96% male.
  • Nursery nurse jobs were shown to users who were 95% female.
  • Pilot jobs were shown to users who were 75% male.
  • Psychologist jobs were shown to users who were 77% female.
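
Skews of this kind can be calculated directly from ad-delivery data. The sketch below uses invented impression counts rather than Facebook’s actual figures, and simply shows how the gender split for each advert might be derived.

```python
# Hypothetical impression counts per job advert, broken down by the gender
# recorded on the platform. The numbers are invented for illustration only.
impressions = {
    "mechanic":      {"male": 960, "female": 40},
    "nursery nurse": {"male": 50,  "female": 950},
    "pilot":         {"male": 750, "female": 250},
    "psychologist":  {"male": 230, "female": 770},
}

# Express each advert's audience as a percentage split, as in the
# Global Witness findings quoted above.
for advert, counts in impressions.items():
    total = sum(counts.values())
    split = {gender: count / total for gender, count in counts.items()}
    dominant = max(split, key=split.get)
    print(f"{advert}: shown to users who were {split[dominant]:.0%} {dominant}")
```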

The NGO is now petitioning the Equality and Human Rights Commission to investigate whether Facebook’s practices breach the Equality Act. It has also consulted the UK’s data regulator, the Information Commissioner’s Office, as to whether this advertising software breaches the principle of fairness under UK data protection legislation. More widely, it has asked the government to require transparency in the targeting criteria used by technology companies, to assess the risk of discrimination and to take action to stamp it out.

The Equality Act of course protects individuals from discrimination at the recruitment stage, so employers should be wary of relying on this type of software to target their job adverts.

Clearly, we are still learning about the flaws in AI systems, and employers who rely on them in their interactions with their workforce are not expected to consult a crystal ball. However, as more of these biases come to light, businesses will be expected to exercise careful scrutiny before adopting such systems, to appraise the risks regularly, and to change how they use them if their staff start to suffer. Tribunals are unlikely to allow the novelty of such systems to shield organisations that discriminate against individuals despite compelling evidence of the risk of discrimination, unless there is strong justification. It’s time to get our houses in order when it comes to AI and data interpretation.
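
What that ongoing appraisal might look like in practice will vary, but one common starting point is a periodic disparity check on the outcomes an automated system produces. The sketch below is purely illustrative: the figures are invented, and the 0.8 threshold is borrowed from the “four-fifths” rule of thumb used in some other jurisdictions rather than being any test under the Equality Act.

```python
# Hypothetical monthly outcomes from an automated system (e.g. ID checks or
# CV screening): how many people in each group were processed, and how many
# suffered an adverse outcome such as rejection or deactivation.
monthly_outcomes = {
    "group_a": {"processed": 1200, "adverse": 36},
    "group_b": {"processed": 800,  "adverse": 200},
    "group_c": {"processed": 500,  "adverse": 20},
}

# Illustrative threshold: flag a group if its "success" rate falls below
# 80% of the best-performing group's rate (a rule of thumb, not a UK legal test).
DISPARITY_THRESHOLD = 0.8

success_rates = {
    group: 1 - counts["adverse"] / counts["processed"]
    for group, counts in monthly_outcomes.items()
}
best_rate = max(success_rates.values())

for group, rate in success_rates.items():
    ratio = rate / best_rate
    flag = "REVIEW" if ratio < DISPARITY_THRESHOLD else "ok"
    print(f"{group}: success rate {rate:.1%}, ratio to best {ratio:.2f} -> {flag}")
```

A flag of this kind would not of itself establish discrimination, but it is the sort of early-warning signal a careful employer might be expected to have looked for.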

If you would like further advice tailored to your particular circumstances, please contact us.