Lottie Lane
September 1, 2021

A Human Rights Responsibility Primer for Businesses Developing AI: Part 3

It’s all about human rights due diligence

In my previous post, I introduced the concept of corporate responsibility for human rights. I explained that under the United Nations Guiding Principles on Business and Human Rights (UNGPs), all businesses — including those developing and deploying AI — need to adopt: (1) a human rights policy commitment; (2) human rights due diligence processes; and (3) processes to enable the remediation of harm.

In this post, I’m going to focus on human rights due diligence (HRDD) and what it means for businesses developing AI. The discussion is not exhaustive, but it introduces some key points of HRDD that AI businesses should bear in mind. The next post will dive into how to achieve HRDD of AI specifically.

Making sense of due diligence

You may already be familiar with ethical or privacy/data protection impact assessments for AI systems. Experience with these is undoubtedly helpful, but HRDD is a concept in its own right and entails a specific approach for businesses to ‘identify, prevent, mitigate and account for how they address their impacts on human rights’ (UNGPs, Principle 17).

HRDD comprises 4 key elements

1

Identify and assess actual or potential adverse human rights impacts in which a business is involved (either directly or through its business relationships).

What does this mean? This is comparable to other types of corporate impact assessments. Businesses should make use of internal or external expertise to help them identify and assess relevant impacts and conduct meaningful consultations with relevant stakeholders, including those who may be negatively affected. The assessment should be ongoing, but its scale is relative to the size, nature, and context of the business’s operations (UNGPs, Principle 18).

Given the fast pace of AI systems’ development and regular updates to a system, it is even more important that the assessment is ongoing and relates to a system’s entire life cycle. Identifying risks should start from the conceptualisation and design of a system — it is not enough to only assess a system’s negative impact once it has already been developed (although this should be done as well, in case an unexpected impact materialises in practice).
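To make the life-cycle point concrete, here is a minimal sketch (in Python, purely for illustration) of how an AI team might keep a running risk register across a system’s life cycle. The stage labels, class names, and 1–5 scoring scale are my own assumptions for this example; the UNGPs do not prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

# Hypothetical life-cycle stages; the UNGPs do not prescribe these labels.
class LifecycleStage(Enum):
    CONCEPTUALISATION = "conceptualisation"
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"

@dataclass
class RiskEntry:
    """One identified (actual or potential) adverse human rights impact."""
    description: str               # e.g. "biased outcomes for group X"
    affected_rights: list[str]     # e.g. ["non-discrimination", "privacy"]
    stage: LifecycleStage          # life-cycle stage where it was identified
    severity: int                  # 1 (low) to 5 (severe), assumed scale
    likelihood: int                # 1 (unlikely) to 5 (near-certain)
    stakeholders_consulted: list[str] = field(default_factory=list)
    identified_on: date = field(default_factory=date.today)
    mitigated: bool = False

# Because HRDD is ongoing, the register is revisited at every stage,
# not filled in once after the system has been developed.
register: list[RiskEntry] = [
    RiskEntry(
        description="Training data under-represents minority users",
        affected_rights=["non-discrimination"],
        stage=LifecycleStage.DESIGN,
        severity=4,
        likelihood=3,
        stakeholders_consulted=["affected community representatives"],
    ),
]
```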

2

Integrate findings of the impact assessment into internal functions and processes and take appropriate action.

What does this mean? Businesses should cease or prevent their (potential) contribution to adverse human rights impacts and use their leverage within business relationships to mitigate further impacts. This could be through the use of contracts or even by terminating a relationship. What constitutes appropriate action depends on the degree of the business’s involvement in the negative impact and the extent of its leverage. Businesses may need to engage the help of independent experts to determine what action they should take (Commentary to UNGPs, Principle 19).

Depending on the nature of the risks to human rights, appropriate action could entail, for example, changing the way in which training data is collected, stored, and processed; adjusting the level of human oversight of a system; or introducing greater consultation and involvement of affected groups in the development of a system. It may even require AI businesses to stop developing a certain AI system if risks cannot be sufficiently mitigated (see Toronto Declaration).
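As a rough illustration of how appropriate action scales with the severity of a risk and the feasibility of mitigation, the sketch below maps an assessed risk to a hypothetical escalation path. The thresholds and action labels are assumptions invented for this example, not drawn from the UNGPs or the Toronto Declaration, although the top branch echoes the latter’s point that development may need to stop.

```python
# Illustrative escalation logic only: the thresholds and action labels
# are hypothetical, not prescribed by the UNGPs or the Toronto Declaration.

def appropriate_action(risk_score: int, can_mitigate: bool) -> str:
    """Map an assessed risk (assumed 1-25 scale: severity x likelihood)
    to a category of action."""
    if risk_score >= 20 and not can_mitigate:
        # Echoes the Toronto Declaration: stop if risks cannot be mitigated.
        return "stop development of the system"
    if risk_score >= 12:
        return "redesign data collection/storage and increase human oversight"
    if risk_score >= 6:
        return "consult affected groups and adjust the system design"
    return "monitor and reassess at the next life-cycle stage"

print(appropriate_action(risk_score=22, can_mitigate=False))
# -> stop development of the system
```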

3

Track the effectiveness of the response, with the aim of driving continuous improvement.

What does this mean? Businesses should be able to ‘verify whether adverse impacts are being addressed’ (UNGPs, Principle 20). They therefore need to base tracking on both qualitative and quantitative indicators and on internal and external feedback, including from affected individuals and groups. Tracking should form part of a business’s reporting processes and could draw, for example, on internal or external auditing or the findings of a business’s operational grievance mechanisms (see my previous post).

There have been many calls for AI systems to be (independently) audited and to undergo regular quality control checks (e.g. the proposed EU AI Act; Toronto Declaration). Another way to track a response is to conduct a follow-up impact assessment of a system after changes have been made, to determine whether the negative impacts have been mitigated or removed.
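Purely as an illustration of such tracking, the sketch below compares risk scores from an initial and a follow-up assessment of the same (hypothetical) system and flags impacts whose mitigation has not worked. The function names and the severity × likelihood scoring are assumptions for this example; in practice, quantitative indicators like these would sit alongside qualitative feedback from affected stakeholders.

```python
# Illustrative only: compare two impact assessments of the same system
# to check whether mitigation measures actually reduced each risk.

def risk_score(severity: int, likelihood: int) -> int:
    """Assumed convention: severity (1-5) multiplied by likelihood (1-5)."""
    return severity * likelihood

def track_effectiveness(before: dict[str, tuple[int, int]],
                        after: dict[str, tuple[int, int]]) -> None:
    """Report whether each identified impact improved after mitigation."""
    for impact, (sev, lik) in before.items():
        old = risk_score(sev, lik)
        new_sev, new_lik = after.get(impact, (sev, lik))
        new = risk_score(new_sev, new_lik)
        status = "improved" if new < old else "NOT improved - revisit response"
        print(f"{impact}: {old} -> {new} ({status})")

# Scores taken from a hypothetical initial and follow-up assessment.
initial = {"discriminatory outputs": (4, 3), "privacy intrusion": (3, 4)}
followup = {"discriminatory outputs": (4, 1), "privacy intrusion": (3, 4)}
track_effectiveness(initial, followup)
```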

4

Communicate how human rights impacts are addressed.

What does this mean? Transparency and accountability: businesses should provide accessible information about their human rights impacts and responses, often enough and in sufficient detail for stakeholders to evaluate the response to a particular impact. However, communication should ‘not pose risks to affected stakeholders, personnel or to legitimate requirements of commercial confidentiality’ (UNGPs, Principle 21).

Given that a large portion of the population is not AI-literate, making this information understandable, especially to those impacted by a system, can be a challenge in this context. However, in light of the opacity that pervades many AI systems, transparency about responses to human rights risks (and about HRDD more generally: Toronto Declaration) is crucial, especially for ensuring trustworthiness.

The following figure by the Organisation for Economic Co-operation and Development (OECD) shows each stage of the HRDD process as well as the other two elements of corporate respect for human rights (policy commitment and remediation processes).

HRDD processes: OECD Due Diligence Guidance for Responsible Business Conduct

Why incorporate HRDD into an AI business?

HRDD is not yet a legal requirement for all AI businesses, so you may be wondering why it’s so important. Here are a few reasons why AI businesses should be hustling to embed HRDD into their operations:

  1. HRDD is increasingly being made mandatory in national legislation, and EU legislation on this is currently being drafted. An international treaty that could oblige States to ensure that businesses conduct HRDD is also underway. Long story short: depending on a business’s location, size, and the type of AI it develops, it may be legally obligated to conduct HRDD in the future.
  2. Conducting HRDD can be a selling point for your business. Awareness amongst consumers about the risks AI poses to human rights is growing. Adopting HRDD now could get businesses ahead of the curve and help build a reputation for human rights-compliant AI.
  3. Businesses in a (potential) AI supply chain, as well as potential customers, may only wish to do business with organisations that respect human rights (i.e. conduct HRDD). As more laws and regulations on HRDD are adopted worldwide, it could even become a legal requirement for them to use their leverage to ensure that their business relationships also respect human rights.
  4. It’s the ethical thing to do. If a business strives towards ethical and/or human-centric AI, HRDD is a great way to follow overlapping ethical standards (e.g. avoiding bias) and to ensure that products and services have as little negative impact as possible on society and individuals.

Key points to remember

  • Already conduct ethical impact assessments? You’re on your way towards HRDD, but you’ll need to address human rights specifically and fulfil the four key elements of HRDD.
  • HRDD processes are not one-off activities but ongoing commitments that should be repeated throughout the whole life cycle of an AI system.
  • To successfully implement HRDD, you may need the help of independent and/or human rights experts, especially when deciding how to mitigate and/or eliminate negative impacts.
  • There is no one-size-fits-all approach to HRDD — human rights impacts and how to tackle them will depend on the nature, size, and context of your business, your business relationships, and the type of AI system in question.
  • There are many reasons to conduct HRDD. If you aren’t convinced by the ethics or reputation-related arguments, consider that for you or your business relationships, HRDD may become a legal requirement in the not-too-distant future.

This post only gives a bird’s-eye view of HRDD — there are many details that need ironing out based on the nature and risks of an AI system and the way it is or will be used. My next post will provide some pointers on what exactly AI businesses could do to put HRDD into practice.

To learn more about Slimmer AI’s approach to ethical and responsible AI development please read here: https://www.slimmer.ai/innovation.

To learn more about Lottie’s work, please visit: https://www.rug.nl/staff/c.l.lane/, connect with her on LinkedIn, or follow her on Twitter.