
International human rights and legal responsibilities of businesses developing AI

Many forward-thinking industry leaders hope to harness the power of artificial intelligence, but wish to do so as responsibly as possible. AI and machine learning can provoke strong opinions within companies, particularly where governance, autonomy and the protection of human rights are concerned.

The paper 'International human rights and legal responsibilities of businesses developing AI' serves as an introduction to the subject of ethical, responsible and explainable AI (XAI). Working closely with Dr Lane of the University of Groningen, Slimmer AI is on a mission to close the gap between human rights and the development of artificial intelligence in the private sector.

Introduction

'Over the years, artificial intelligence ('AI') technologies have advanced at an unprecedented pace, resulting in increased reliance upon them by both governments and the private sector. While the deployment of AI has generated many advantages for the current generation, it could also have potentially harmful human rights and legal consequences.

A particular concern in applying AI is the so-called 'black box' model: algorithms whose internal workings are difficult to comprehend, even for experts in the field. Not knowing how an AI system reached a particular result is problematic when action is taken on the basis of that outcome and the decision may have to be justified, either at the time or thereafter. This is especially true when an algorithm-based decision leads to interference with human rights, such as discrimination.

To better understand how AI works and reaches conclusions, companies like Slimmer AI have developed explainable AI (XAI) to employ in their products, such as their anti-money laundering (AML) transaction monitoring software. XAI uses techniques designed to make some of the processes occurring within the 'black box' transparent and interpretable.

We set out to explore the international legal and human rights considerations that AI companies would need to address in the development of AI software, with a specific focus on XAI.'