Dr. Lottie Lane, Senior Advisor on Artificial Intelligence and Human Rights for Slimmer AI
In my previous article, I discussed why an ethics approach alone is not enough to ensure that AI is truly human-centric and does not cause harm to individuals and society. In this article, I build on that discussion, looking at why it is so important that businesses developing and using AI are aware of and engage with their human rights responsibilities. I will also discuss the consequences that can arise when this doesn’t happen.
During a recent presentation I gave to a group of AI developers, I was asked: “If there was one piece of advice for a business developing AI, what would it be? Where should we start?” Spoiler alert: the answer is human rights. But let me explain why I believe developers should start there.
What risks does AI pose to human rights?
It is impossible to miss the many stories of ‘AI-gone-wrong': they are splashed all over the media on a regular basis. Many of these situations raise serious human rights concerns. Particularly well-known examples concern violations of privacy through the use of facial recognition, and discrimination (especially racial and sex-based) arising from reliance on algorithmic decision-making, whether in the context of criminal justice, recruitment, or the digital welfare state.
These are only a few examples. The list of human rights that can be negatively affected by AI goes on and on. We have also seen human rights issues related to social security, education, housing, freedom of expression, and even freedom of religion and belief. It is actually much harder to think of a right that cannot be negatively affected by AI than one that can.
While the impact of AI on human rights is perhaps more frequently discussed in the context of B2C, some of the examples just mentioned could easily occur in a B2B context. For example, AI used for recruitment purposes to help a business choose a suitable individual for a specific role could have a serious impact on human rights. Another widely discussed concern is the impact of AI on the ‘right to work’. This concern is predominantly raised when there are ‘changes to the requirements for future employees, lowering in demand for workers, labour relations, creation of new job structures and new types of jobs’. This has already happened in many different sectors, including textiles, banking and technology. The replacement of humans by machines is not new, but it has been exacerbated by the Covid-19 pandemic, with businesses attempting to minimise the number of employees working on-site to keep transmission to a minimum and to operate as cost-efficiently as possible.
Despite this, the majority of businesses developing and using AI do not actively engage with human rights. In fairness, for many businesses it is not technically obligatory to do so, which perhaps creates an unwillingness. However, from my numerous discussions with people working in this field, an overwhelming lack of awareness and understanding of human rights (law) appears to be the biggest factor behind many businesses’ lack of engagement with this issue.
For example, machine learning experts I have spoken to about the risks of AI have been extremely concerned that the systems they create could have negative impacts on human rights. Until that point, however, they were unaware of a business’s responsibility to respect human rights and of what this requires businesses to do in their daily operations. Many individuals with a technical background also find the language of human rights (law) intimidating or inaccessible, leading to translation issues. Those of us with a legal background have had the same experience with the language used in tech. This ‘translation’ issue results in a steep learning curve, and it requires more meaningful collaboration between actors from both backgrounds to ensure that human rights standards can be meaningfully realised in practice.
What are the consequences of not addressing those risks?
If knowing that you produce and/or use human rights-friendly AI is not incentive enough, there are regulatory, reputational and even legal risks to not ensuring that AI complies with human rights. These can in turn have a negative effect on a business’s (financial) success.
For example, tech-powered businesses are very familiar with the GDPR and the concept of ‘privacy by design’. They are also well informed of the potential consequences of not following the GDPR’s standards when dealing with personal data (potentially huge fines). Although there is still a lot of progress to be made, awareness is growing among society and consumers of the risks of AI and the importance of AI ethics, with stories of negative effects sometimes going viral. Being known as a business that doesn’t respect human rights, whether in B2C or B2B, often results in significant reputational damage. This risk is increasing as the regulatory and legal framework concerning AI and human rights evolves.
On this note, as I mentioned in my previous post, EU legislation on mandatory human rights due diligence is on its way. This legislation will require many AI businesses to adopt processes to respect human rights not only in relation to their own activities, but also in their business relationships (more on this in future posts). And, of course, the new draft EU ‘Artificial Intelligence Act’ places the protection of fundamental rights centre stage: systems that pose ‘significant risks’ to fundamental rights earn the label of ‘high risk’ and are therefore subject to strict requirements, such as a conformity assessment before they can be placed on the market.
What can your business do to address the risks?
Now, the good news: There is plenty you can do to avoid - or at least mitigate - these risks.
Let’s revisit the question posed in my introduction: Where should we start? My most important recommendation is to make human rights a serious concern for your business.
By making human rights a serious concern for your business, you’ll ensure that the AI you develop is future-proofed against regulation, that your business reduces its integrity risk and, most importantly, that your AI products support a human rights approach.
This may be easier said than done, and it may require some investment (perhaps less if you already integrate AI ethics into your business model), but it is crucial to incorporating human rights compliance into the DNA of your business. Even though international human rights law is not fully evolved in this context, it contains some crucial standards for AI businesses to follow. I’ll discuss these in my next post.
To learn more about Slimmer AI’s approach to ethical and responsible AI development please read here: https://www.slimmer.ai/innovat...