Lottie Lane
June 23, 2021
5 min read

A Human Rights Responsibility Primer for Businesses Developing AI: Part 1

How businesses can engage with human rights and AI

In my previous article, I discussed why an ethics approach alone is not enough to ensure that AI is truly human-centric and does not cause harm to individuals and society.

In this article, I will build on this, looking at why it is so important that businesses developing and using AI are aware of and engage with their human rights responsibilities. I will also discuss the consequences that can arise when they do not.

During a recent presentation I gave to a group of AI developers, I was asked,

“If there were one piece of advice for a business developing AI, what would it be? Where should we start?”

Spoiler alert: The answer is human rights. But let me explain why I believe developers should start there.

What risks does AI pose to human rights?

It is impossible to miss the many stories of ‘AI-gone-wrong’ that are splashed across the media on a regular basis. Many of these situations raise serious human rights concerns. Particularly well-known examples concern violations of privacy through the use of facial recognition, and discrimination (especially on the grounds of race and sex) arising from reliance on algorithmic decision-making, whether in the context of criminal justice, recruitment, or the digital welfare state.

These are only a few examples; the list of human rights that can be negatively affected by AI goes on and on. We have also seen human rights issues relating to social security, education, housing, freedom of expression, and even freedom of religion and belief. In fact, it is much harder to think of a right that cannot be negatively affected by AI than one that can.

While the impact of AI on human rights is perhaps more frequently discussed in a B2C context, some of the examples just mentioned could easily occur in a B2B context. For example, AI used in recruitment to help a business choose a suitable individual for a specific role could have a serious impact on human rights. Another widely discussed concern is the impact of AI on the ‘right to work’. This concern is predominantly raised when there are ‘changes to the requirements for future employees, lowering in demand for workers, labour relations, creation of new job structures and new types of jobs’. This has already happened in many sectors, including textiles, banking, and technology. The replacement of humans by machines is not new, but it has been exacerbated by the Covid-19 pandemic, with businesses attempting to minimise the number of employees working on-site in order to keep transmission low and to operate as cost-efficiently as possible.

Despite this, the majority of businesses developing and using AI do not actively engage with human rights. In fairness, many businesses are not legally obliged to do so, which perhaps creates an unwillingness. However, judging from my numerous discussions with individuals working in this field, an overwhelming lack of awareness and understanding of human rights (law) appears to be the biggest factor behind many businesses’ lack of engagement with the issue.

For example, machine learning experts I have spoken to about the risks of AI have been extremely concerned that the systems they create could negatively affect human rights. Until that point, however, they were unaware of businesses’ responsibility to respect human rights and of what this requires businesses to do in their daily operations. Many individuals with a technical background also find the language of human rights (law) intimidating or inaccessible, and those of us with a legal background have had the same experience with the language used in tech. This ‘translation’ issue results in a steep learning curve, and it requires closer collaboration between actors from both backgrounds to ensure that human rights standards are meaningfully realised in practice.

What are the consequences of not addressing those risks?

If knowing that you produce and/or use human rights-friendly AI is not enough of an incentive, there are also regulatory, reputational and even legal risks in failing to ensure that AI complies with human rights. These can in turn have a negative effect on a business’s (financial) success.

For example, tech-powered businesses are very familiar with the GDPR and the concept of ‘privacy by design’, and they are well informed of the potential consequences of not following the GDPR’s standards when handling personal data (potentially huge fines). Although there is still a lot of progress to be made, there is a growing awareness amongst society and consumers of the risks of AI and the importance of AI ethics, with stories of negative effects sometimes going viral. Being known as a business that doesn’t respect human rights, whether in B2C or B2B, often results in significant reputational damage. And this risk is increasing as the regulatory and legal frameworks concerning AI and human rights evolve.

On this note, as I mentioned in my previous post, EU legislation on mandatory human rights due diligence is on its way. This legislation will require many AI businesses to adopt processes to respect human rights not only in their own activities but also in their business relationships (more on this in future posts). And, of course, the new draft EU ‘Artificial Intelligence Act’ places the protection of fundamental rights centre-stage: systems that pose ‘significant risks’ to fundamental rights earn the label ‘high-risk’ and are therefore subject to strict requirements, such as a conformity assessment, before they can be placed on the market.

What can your business do to address the risks?

Now, the good news: There is plenty you can do to avoid — or at least mitigate — these risks. Let’s revisit the question posed above in my introduction: Where should we start?

My most important recommendation is to make human rights a serious concern for your business. You can do this by:

  1. Educating staff about human rights so that they can recognise and flag potential issues at all stages of an AI system’s life cycle.
  2. Ensuring that all employees are aware of what human rights mean and require in the context of your operations.
  3. Having staff demonstrate an understanding of which human rights are relevant and why, and the ability to identify the potential risks your products and services pose to human rights.

By making human rights a serious concern for your business, you’ll help ensure that the AI you develop is future-proofed against incoming regulation, that your business reduces its integrity risk and, most importantly, that your AI products support a human rights approach.

Follow Dr. Lottie Lane on LinkedIn and Twitter.