In my previous post, I introduced the concept of corporate responsibility for human rights. I explained that under the United Nations Guiding Principles on Business and Human Rights (UNGPs), all businesses — including those developing and deploying AI — need to adopt: (1) a human rights policy commitment; (2) human rights due diligence processes; and (3) processes to enable the remediation of harm.
In this post, I’m going to focus on human rights due diligence (HRDD) and what this means for businesses developing AI. The discussion is not exhaustive but introduces some key points of HRDD and what AI businesses should bear in mind. The next post will dive into how to achieve HRDD of AI specifically.
You may already be familiar with ethical or privacy/data protection impact assessments for AI systems. That experience is undoubtedly helpful, but HRDD is a concept in its own right and entails a specific approach for businesses to ‘identify, prevent, mitigate and account for how they address their impacts on human rights’ (UNGPs).
HRDD comprises four key elements:
Identify and assess actual or potential adverse human rights impacts in which a business is involved (either directly or through its business relationships).
What does this mean? This is comparable to other types of corporate impact assessments. Businesses should make use of internal or external expertise to help them identify and assess relevant impacts and conduct meaningful consultations with relevant stakeholders, including those that may be negatively affected. The assessment should be ongoing, but its scale is relative to the size, nature, and context of the businesses’ operations (UNGPs, Principle 18).
Given the fast pace of AI systems’ development and regular updates to a system, it is even more important that the assessment is ongoing and relates to a system’s entire life cycle. Identifying risks should start from the conceptualisation and design of a system — it is not enough to only assess a system’s negative impact once it has already been developed (although this should be done as well, in case an unexpected impact materialises in practice).
Integrate findings of the impact assessment into internal functions and processes and take appropriate action.
What does this mean? Businesses should cease or prevent their (potential) contribution to adverse human rights impacts and use their leverage within business relationships to mitigate further impacts, for example through contracts or even by terminating a relationship. What constitutes appropriate action depends on the degree of a business’ involvement in the negative impact and the extent of its leverage; businesses may need to engage independent experts to determine what action to take (Commentary to UNGPs, Principle 19).
Depending on the nature of the risks to human rights, appropriate action could entail, for example, changing how training data is collected, stored, and processed; adjusting the level of human oversight of a system; or consulting and involving affected groups more closely in the development of a system. It may even require AI businesses to stop developing a certain AI system if risks cannot be sufficiently mitigated (see Toronto Declaration).
Track the effectiveness of the response, with the aim of driving continuous improvement.
What does this mean? Businesses should be able to ‘verify whether adverse impacts are being addressed’ (UNGPs, Principle 20). They therefore need to base tracking on both qualitative and quantitative indicators and on internal and external feedback, including from affected individuals and groups. Tracking should form part of a business’ reporting processes and could involve, for example, internal or external auditing or the findings of a business’ operational grievance mechanisms (see my previous post).
There have been many calls for AI systems to be (independently) audited and to undergo regular quality control checks (e.g. the EU AI Act; Toronto Declaration). Another way to track a response would be to repeat the impact assessment once a system has been changed, to check whether the negative impacts have been mitigated or removed.
Communicate how human rights impacts are addressed.
What does this mean? Transparency and accountability: businesses should provide accessible information about their human rights impacts and their responses to them, often enough and in sufficient detail to allow the response to a particular impact to be evaluated. However, this should ‘not pose risks to affected stakeholders, personnel or to legitimate requirements of commercial confidentiality’ (UNGPs, Principle 21).
Given that a large portion of the population is not AI-literate, making sure that this information is understandable, especially to those impacted by a system, can be a challenge in this context. However, in light of the obscurity that pervades many AI systems, transparency of responses to human rights risks (and of HRDD more generally: Toronto Declaration) is crucial, especially for ensuring trustworthiness.
The following figure by the Organisation for Economic Co-operation and Development (OECD) shows each stage of the HRDD process as well as the other two elements of corporate respect for human rights (policy commitment and remediation processes).
HRDD is not yet a legal requirement for all AI businesses, so you may be wondering why it’s so important. Here are a few reasons why AI businesses should be hustling to embed HRDD into their operations:
This post gives only a bird’s-eye view of HRDD; there are many details that need ironing out based on the nature and risks of an AI system and the way it is or will be used. My next post will provide some pointers on what exactly AI businesses could do to put HRDD into practice.
To learn more about Slimmer AI’s approach to ethical and responsible AI development please read here: https://www.slimmer.ai/innovation.