
The Rule of Law and AI Ethics: FRTs in Focus

*Image credit: The National Judicial College*


While AI systems have proven beneficial to humankind in many ways, their deployment has also presented complex moral, legal and ethical challenges. On May 12, 2023, a local radio broadcaster reported that “[t]he Jamaica Constabulary Force will soon roll out facial recognition technolog[ies] that will assist them in apprehending unwanted persons”. Facial Recognition Technologies (FRTs) have been described as “a set of digital tools used to perform tasks on images or videos of human faces…” (Buolamwini et al., 2020, 2). While FRTs are widely deployed within the commercial sphere, their use in law enforcement contexts raises serious concerns about bias, discrimination, and respect for human rights.


Dangers of unregulated FRT deployment


Despite their widespread use, FRTs are classified as “high-risk” AI systems because of inherent algorithmic biases. More specifically, FRTs misidentify people of colour and women at significantly higher rates than white men (Buolamwini et al., 2018). A 2020 NIST study also found that misidentification rates were even higher when subjects wore face masks (Ngan et al., 2020, 1 & 7).


In the law enforcement context, these misidentifications have led to violations of fundamental human rights. In a case reported in December 2020, Nijeer Parks, an African American man, was wrongfully arrested and detained for 10 days after being misidentified as a suspected shoplifter. More recently, in February 2023, an African American woman who was eight months pregnant was wrongfully arrested for carjacking and interrogated for 11 hours following an erroneous FRT match. According to the New York Times, she was “the sixth person to report being falsely accused of a crime as a result of facial recognition technology.” These real-world cases illustrate the dangers of deploying FRTs in law enforcement without regulatory oversight.


Moreover, the unregulated deployment of FRTs raises concerns about informational privacy, as the highly sensitive biometric data that they collect, process and store can be misused to facilitate mass surveillance. Given these significant risks, it is crucial to regulate FRTs in ways that balance AI-driven innovation with respect for fundamental human rights and the Rule of Law.


Interplay between the Rule of Law, AI Ethics and FRTs


In the age of the Fourth Industrial Revolution, unregulated AI systems like FRTs can threaten fundamental human rights and undermine the Rule of Law—a democratic principle concerned with preventing the abuse of executive power by ensuring that state actions are legal, accountable, transparent, fair, and non-discriminatory. In Edward Bridges v The Chief Constable of South Wales Police and others [2020] EWCA Civ 1058, the Court of Appeal held that the South Wales Police’s use of automated FRTs as part of a surveillance project that captured sensitive facial biometric data from approximately 500,000 people without their consent breached “data protection laws, privacy laws, and equality laws,” including Article 8 of the ECHR.


Given the implications of unregulated FRT deployment, the concept of AI Ethics assumes considerable significance as “a systematic normative reflection…that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies…” (UNESCO Recommendation on the Ethics of Artificial Intelligence, 2021). The EU’s recently passed Artificial Intelligence Act 2024, described by the European Commission as “…the first-ever comprehensive legal framework on AI world-wide”, establishes legally binding standards which require that “very powerful and impactful” AI systems “respect fundamental rights, safety and ethical principles.” Last year, the Council of Europe Framework Convention on Artificial Intelligence was promulgated. The Convention is “the first international legally binding instrument…[that]…aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation” (see page 2 of the Council of Europe’s Convention brochure). Thus far, 15 jurisdictions, including the UK, the USA and the EU, have signed the Convention.


Conceptually, the Rule of Law and AI Ethics converge around their shared endorsement of accountability, fairness, transparency, equality and non-discrimination as core principles. This convergence is significant because understanding it can help shape ethical approaches to regulating FRTs so that they are deployed in a rights-respecting manner that upholds the Rule of Law. AI Ethics, grounded in universal human rights standards, provides a moral guide for responsible FRT deployment, which can then be given the force of law through legislation. Accordingly, embracing this convergence can better position legislators and other stakeholders to champion AI governance frameworks that are rights-respecting and ethically sound.


Balancing AI-driven innovation with respect for fundamental human rights and the Rule of Law


The Jamaica Constabulary Force’s (JCF’s) plan to roll out FRTs reflects a commendable desire to modernize crime-fighting strategies. However, given the significant concerns raised above, Jamaica should impose a moratorium on their deployment pending legal regulation, as several far more technologically advanced jurisdictions, including cities and states in the USA, have done. Any legal framework for regulating FRTs should be informed by human rights-centred AI Ethics aimed at ensuring that they are deployed in a rights-respecting manner that upholds the Rule of Law. This approach accords with the imperative of ensuring that respect for fundamental human rights and the Rule of Law is not sacrificed on the altar of AI-driven innovation. Meaningful engagement with the Jamaican public must also be prioritized, as this will enhance transparency and inspire public confidence.


In the final analysis, responsible AI governance requires an approach that recognizes and integrates the convergence between human rights-centred AI Ethics and the Rule of Law. By embracing this AI Ethics-Rule of Law convergence, legislators and other stakeholders will be better positioned to develop regulatory frameworks for FRTs that respect fundamental human rights and uphold the Rule of Law without stifling AI-driven innovation.

Musings by Amanda J.D. Quest

©2022 by Musings by Amanda J.D. Quest
