
Businesses urged to review algorithm processes that may lead to bias or discrimination


The ICO is urging businesses to review their algorithm processes amid growing concern that they may lead to bias and discrimination in employment-related decisions. David Edwards, director and head of Harrison Drury’s regulatory team, explains how algorithms are used and considers what businesses must do to ensure they remain compliant.

What is an algorithm?

News of algorithms dominated the headlines in 2020, when they were controversially used to determine GCSE and A-level results. As students were unable to sit exams due to the COVID-19 pandemic, algorithms using data about schools’ results in previous years were used to determine grades, with Shadow Attorney General Lord Falconer saying the students had been unlawfully discriminated against.

Algorithms were also discussed at length in the hit Netflix documentary-drama ‘The Social Dilemma’, which explores the human impact of social networking and demonstrates how each ‘like’, watch and click can be used to tailor ads to you with striking accuracy. However, algorithms in social media have been shown to go wrong and can also push users toward hateful and extremist content.

Algorithms work by way of computer code that navigates, and often rapidly builds up, a complex decision tree (a flowchart-like structure). When used in business for employment purposes, these are usually created for employers by third-party specialists. Algorithms can vary from basic decision trees to complex programmes incorporating artificial intelligence.
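
To make this concrete, here is a minimal, purely illustrative sketch in Python of the kind of basic decision tree such a programme might encode. The criteria, thresholds and outcomes are invented for illustration and do not reflect any real screening tool:

    # A hypothetical, simplified decision tree for screening candidates.
    # The rules and thresholds below are invented for illustration only.
    def screen_candidate(years_experience: int, has_degree: bool) -> str:
        """Walk the decision tree and return a screening outcome."""
        if years_experience >= 5:
            return "invite to interview"
        if has_degree:
            return "invite to phone screen"
        return "reject"

    print(screen_candidate(years_experience=6, has_degree=False))  # invite to interview
    print(screen_candidate(years_experience=2, has_degree=True))   # invite to phone screen

More sophisticated systems replace hand-written rules like these with models learned from historical data, which is where the risk of inherited bias arises.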

Algorithms in employment

Organisations are increasingly adopting artificial intelligence (AI) solutions which use algorithms to make decisions more efficiently in interviews, for example by assessing candidates on their facial and vocal expressions.

Chatbots are replacing people as interviewers, and textbots are communicating with candidates by text or email. The use of algorithms has expanded significantly in recent years and now assists HR with important decisions such as redundancies, performance dismissals, promotions and rewards.

As the use of algorithms expands beyond recruitment and into general employment affairs, litigation over the fairness of the decision-making will undoubtedly become more common.

Bias and unlawful discrimination can occur when using these algorithms through:

  • the objectives set for the algorithm;
  • the data inputted to create (train) the algorithm (see the sketch after this list);
  • the causal links identified by the algorithm;
  • the data used when running the algorithm.
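
To illustrate the second of these, here is a small, hypothetical Python sketch showing how a model naively trained on historical hiring data can simply reproduce the bias in that data. All figures are invented for illustration:

    # Invented historical hiring records: (group, was_hired).
    # In this fictional history, group "A" candidates were hired far
    # more often than group "B" candidates.
    history = ([("A", True)] * 80 + [("A", False)] * 20 +
               [("B", True)] * 30 + [("B", False)] * 70)

    def learned_hire_rate(group: str) -> float:
        """Hire rate a naive model would learn for a group from history."""
        outcomes = [hired for g, hired in history if g == group]
        return sum(outcomes) / len(outcomes)

    # A model that scores candidates using these learned rates will
    # favour group A regardless of individual merit.
    for group in ("A", "B"):
        print(f"group {group}: learned hire probability = {learned_hire_rate(group):.2f}")
    # group A: learned hire probability = 0.80
    # group B: learned hire probability = 0.30

A real system would learn subtler patterns than this, but the mechanism is the same: if the historical data reflects discriminatory decisions, the algorithm will faithfully reproduce them.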


To prevent unlawful discrimination arising in any of these ways, the ICO has released six key points organisations must consider before implementing algorithms for hiring purposes. These include:

  1. Bias and discrimination are a problem in human decision-making, so they can be a problem in AI decision-making too; you must therefore assess whether AI is a necessary and proportionate solution to a problem before you start processing.
  2. It is difficult to build fairness into an algorithm, particularly an AI system, yet it must not have any unjustified adverse effects on individuals. This includes discrimination against people who have a protected characteristic. UK organisations should note that there is no guarantee that an algorithm designed to meet US standards will meet UK standards.
  3. Big data and machine learning can make bias and discrimination harder to detect, because the technology may draw correlations that discriminate against groups of people. The ICO recommends that businesses follow best practice and keep staff appropriately trained.
  4. Data protection law and equalities law must both be considered when developing AI systems; compliance is required with both categories of legislation.
  5. Using solely automated decision-making tools for private-sector hiring purposes is likely to be unlawful under the GDPR. Businesses should consider how they can bring a human element into an AI-assisted decision-making process.
  6. Algorithms and automation can also have the opposite effect: when used appropriately from the early stages of a system’s lifecycle, they can help address the problems of bias and discrimination.

How can my business prevent unlawful discrimination and bias in its algorithms?

Whilst legal cases in the United Kingdom challenging algorithm-based employment decisions are rare, the ICO has predicted this will change in the years ahead as the use of such programmes increases. Further, there is evidence to suggest that people are more likely to mistrust a computer-based decision than a human-made one, and will therefore be more inclined to challenge an employment decision made by an algorithm.

Defending an algorithm-related employment claim is likely to be costly. The disclosure process alone can be a lengthy exercise, and expert evidence will undoubtedly be required, which substantially increases fees.

Employers should also be mindful that the ICO may in future introduce auditing tools and conduct its own investigations. If such an enforcement strategy is put in place, employers will be at risk of an ICO investigation in addition to any employment claim.

In September 2020 the ICO released guidance on how businesses should modify the way they would normally assess risk when developing an AI model, or when purchasing one from a third-party provider. The most effective way of assessing risk is to complete a Data Protection Impact Assessment (DPIA), which documents how your business undertakes processing activities that could result in a high risk to individuals’ rights and freedoms.

The DPIA will assess whether the AI is a necessary and proportionate solution to a problem before you start processing. The ICO has already stated that AI projects will themselves be regarded as high risk, so it is imperative that a DPIA is completed. Even where a particular AI application does not involve ‘high risk’ processing, the guidance states that you will need to document how you came to that conclusion.

Businesses are encouraged, at the outset of an AI programme, to document in the DPIA how they will sufficiently mitigate bias and discrimination. Once the DPIA is complete, you should be able to put appropriate safeguards and technical measures in place during the design and implementation phases.

Businesses should also continue to follow best practice and perform regular reviews to monitor changes and update AI processes where appropriate. This will involve regular staff training, particularly for those working with AI technology in HR processes.

Harrison Drury’s regulatory team can assist you with preparing a DPIA and documenting any algorithm-based employment decision processes your business may use.

As highlighted by the ICO, it is important to document these processes, as AI projects are automatically regarded as ‘high risk’ processing. If you wish to discuss any issues raised in this article, please contact Harrison Drury’s regulatory team on 01772 258321.

