
Repairing and recalibrating trust in AI technologies as a force for good

In life, many universal truths underpin and inform the way we understand and experience our individual and collective realities as human beings. One such truth is that trust is often easier to break than it is to earn. Another is that a breach of trust is an inevitability that can seldom be avoided. Yet another sees human beings as both inherently trustworthy and untrustworthy. Trust, as both a concept and a construct, is complicated and, of course, contextually relative. It therefore cannot be sufficiently captured or understood through any unidimensional conceptual lens. As human beings, our first introduction to trust comes in infancy, when we have no choice but to rely upon our caregivers to meet our most basic yet essential needs. To survive and indeed thrive in a world defined and energised by interaction and interconnectivity, we must all trust and be trusted by others; the distinctly transactional nature of our increasingly globalised world requires it. By extension, we must feel that we can trust and have faith in those entities, institutions and systems that provide us with a range of services through various instrumentalities, including and especially digital, disruptive and emerging technologies. It has been said that ‘[f]aith in an entity requires a strong understanding of that entity, affiliation and connection to it and belief in that ability of that entity to effect positive change in one’s life…’ (Se-shauna Wheatle and Yonique Campbell, ‘Constitutional Faith and Identity in the Caribbean: Tradition, Politics and the Creolisation of Caribbean Constitutional Law’ (2020) 58(4) Commonwealth and Comparative Politics, 1). Faith lays the foundation for trust in an entity, institution or system to be developed, maintained and, if or when broken, repaired.


Recent reports tell of “trust crises” in the realms of politics, health, business, technology, science and media. Institutional actors in these fields grapple persistently with the mammoth task of inspiring, in consumers and other key stakeholders within and across various sectors, the trust, faith and confidence necessary to expand their reach, amplify their impact and thrive amid the disruption wrought by COVID-19. Indeed, the pandemic has compelled us all to navigate unprecedented times in which the use of technology, and particularly disruptive technology like artificial intelligence (AI), is quickly becoming a staple in industries such as security, health, communications and transportation. Facial recognition technology (FRT) is but one example of disruptive AI technology being used for contactless verification that grants access to services, resources and critical amenities. The airline industry has notably seen ‘…an increasing number of…players…turning to facial recognition technology (FRT), which is perceived as an efficient means to ensure a seamless and contactless passenger journey while preventing virus transmission…’ (Lofred Madzou, ‘Facial recognition can help re-start post-pandemic travel. Here’s how to limit the risks’ (Cologny, December 16, 2020), World Economic Forum: <https://www.weforum.org/agenda/2020/12/facial-recognition-technology-and-travel-after-covid-19-freedom-versus-privacy/>). In light of these realities, fostering, maintaining and, where damaged, repairing trust in technology broadly, and in disruptive technologies like FRT specifically, is one of the most pressing imperatives of our times. The exigencies of what has been designated the “Fourth Industrial Revolution” mean that technology has assumed an unprecedented and central role in the ordering of our increasingly globalised world. Any “trust crisis” in technology therefore has serious implications both for recovery efforts in the aftermath of COVID-19 and for advancing the welfare of humanity through technological innovation.



The Problem: Contemporary “Trust Crises” in Technology – Clearview AI, “Technologized Racism” and the End of “Privacy as We Know It”


Trust in technological innovations has now captured our collective imagination(s) due to recent developments in technology across the globe. On the one hand, people can be said to “trust” and even rely on disruptive AI technologies, including and especially, in recent times, FRTs. Certainly, from more quotidian tasks like unlocking smartphones or securing valuables through face verification to more complex and intricate ones like facilitating arrests and supporting law enforcement efforts, FRTs have become a mainstay in many of our lives. On the other hand, where sensitive and contested subjects such as policing come to the fore, especially in countries fraught with racial tensions like the USA, trust in FRTs necessarily diminishes, particularly among those who stand to be most affected by their use in this context (see Emily Birnbaum, ‘People of colour less likely to trust law enforcement’s use of facial recognition tech, survey shows’ (Washington DC, May 9, 2019), The Hill: <https://thehill.com/policy/technology/460081-people-of-color-less-likely-to-trust-law-enforcements-use-of-facial>). This is not surprising when one considers that many commercial FRTs register higher rates of misidentification of people of colour in particular (see Patrick Grother, Mei Ngan and Kayee Hanaoka, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (National Institute of Standards and Technology 2019) 3: <https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf>).
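For readers unfamiliar with what face verification actually does, the minimal sketch below illustrates the core mechanism: a system reduces a face image to a numerical embedding and accepts a match only when the similarity between two embeddings crosses a threshold. Everything in the sketch is an assumption for illustration only: the random stand-in vectors, the `verify` and `cosine_similarity` helpers, and the 0.6 threshold. A real system would derive embeddings from a trained face-encoder model, and, as the NIST findings above suggest, its error rates at any given threshold can differ across demographic groups.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 face verification: accept only if the probe embedding is
    sufficiently similar to the enrolled template. The threshold trades
    false matches against false non-matches; per NIST's FRVT testing,
    these error rates differ across demographic groups for many
    commercial algorithms."""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy demonstration with random stand-in vectors; a real system would
# produce embeddings with a trained face-encoder model.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                            # enrolled template
same_person = enrolled + rng.normal(scale=0.1, size=128)   # small variation
different_person = rng.normal(size=128)                    # unrelated face

print(verify(same_person, enrolled))       # True: high similarity
print(verify(different_person, enrolled))  # False: low similarity
```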


Equally true is the fact that consumer mistrust in FRTs tends to increase where the circumstances under which the data operationalising the technologies were obtained are ethically questionable or otherwise suspect. The debacle involving Clearview AI, a start-up that sells controversial FRTs powered by facial images it appropriated without consent from social media platforms like Instagram and Facebook, best exemplifies this (see Kashmir Hill, ‘The Secretive Company That Might End Privacy as We Know It’ (New York, January 18, 2020), The New York Times: <https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html>). When an exposé revealed Clearview AI’s ethically questionable activities, including but not limited to its sale of FRT to law enforcement and private persons, the company committed to halting the sale of its FRT to private companies and non-law-enforcement entities (see Nick Statt, ‘Clearview AI to stop selling controversial facial recognition app to private companies’ (New York, May 7, 2020), The Verge: <https://www.theverge.com/2020/5/7/21251387/clearview-ai-law-enforcement-police-facial-recognition-illinois-privacy-law>). However, this did very little to change public perception of a company that eventually became the subject of multiple class-action lawsuits last year (see Soumyrendra Barik, ‘ACLU sues Clearview AI for posing “unprecedented threat to security and safety”’).


Still, the “trust crisis” concerning the deployment of FRTs is not limited solely to their utilisation by law enforcement. In truth, while a recent poll found that the majority of Americans believed that law enforcement personnel would use FRTs responsibly, a significantly lower number felt that tech companies and other stakeholders within the technology sector would do the same (see Sara Harrison, ‘Poll Finds Americans Trust Police Use of Facial Recognition’ (San Francisco, May 9, 2019), WIRED: <https://www.wired.com/story/poll-americans-trust-police-facial-recognition/>). There are also anxieties surrounding the role of FRTs as facilitators and perpetuators of “technologized racism” against black and brown peoples, who are already over-policed by a racist law enforcement superstructure, especially in the US. Two incidents involving the wrongful arrest and detention of African American men based on misidentifications by FRT, one for 10 days (see Martin Coulter, ‘A Black man spent 10 days in jail after he was misidentified by facial recognition, a new lawsuit says’ (New York, December 29, 2020), Business Insider: <https://www.businessinsider.com/black-man-facial-recognition-technology-crime-2020-12>) and the other for approximately 30 hours (see Victoria Burton-Harris and Philip Mayor, ‘Wrongfully Arrested Because Face Recognition Can’t Tell Black People Apart’ (New York, June 24, 2020), ACLU: <https://www.aclu.org/news/privacy-technology/wrongfully-arrested-because-face-recognition-cant-tell-black-people-apart/>), prove that these anxieties are not misplaced.
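The arithmetic behind such misidentifications is worth making explicit. In a 1:N police search, a probe image is compared against every face in a large gallery, so even a small per-comparison false match rate produces false candidates, and a demographic group facing a higher error rate faces a proportionally higher risk of being wrongly flagged. The figures in the sketch below are hypothetical, chosen only to illustrate the compounding; the tenfold differential echoes the order of disparity NIST reported for some algorithms.

```python
# Back-of-the-envelope arithmetic with hypothetical figures (not
# measured rates). In a 1:N search, every gallery image is a chance
# for a false match, so errors compound with database size.
gallery_size = 1_000_000  # hypothetical mugshot/ID-photo database

for label, fmr in [("lower-error group", 1e-6), ("higher-error group", 1e-5)]:
    expected_false_candidates = gallery_size * fmr
    print(f"{label}: per-comparison FMR {fmr:.0e} -> "
          f"~{expected_false_candidates:.0f} false candidates per search")

# A tenfold gap in false match rate, of the order NIST reported between
# demographic groups for some algorithms, translates directly into a
# tenfold gap in the risk of being wrongly flagged as a suspect.
```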


Given the concerns and anxieties surrounding the use of FRTs, especially by law enforcement and private corporations, privacy activists and lawmakers alike have called for moratoriums on their use pending legal regulation and oversight. In response to mounting pressure, tech giants Amazon, IBM and Microsoft imposed temporary moratoriums on the sale of their FRTs to law enforcement until the matter is settled legislatively by Congress (see Kate Polit, ‘Tech Giants Back Away From Facial Recognition Amid Bias Concerns’ (Virginia, June 12, 2020), MeriTalk: <https://www.meritalk.com/articles/tech-giants-back-away-from-facial-recognition-amid-bias-concerns/>). San Francisco and Boston have banned the use of FRTs by city departments, while California has banned the technology in police body cameras (Haley Samsel, ‘California Becomes Third State to Ban Facial Recognition Software in Police Body Cameras’ (Dallas TX, October 19, 2019), Security Today: <https://securitytoday.com/articles/2019/10/10/california-to-become-third-state-to-ban-facial-recognition-software-in-police-body-cameras.aspx>).


The city of Portland, Oregon, has gone a step further and banned the utilisation of FRTs by government agencies, law enforcement and even private businesses in places of public accommodation (Stephanie Pagones, ‘Portland enacts most stringent facial recognition technology ban in US, barring public, some private use’ (New York, September 10, 2020), Fox News: <https://www.foxnews.com/us/portland-facial-recognition-technology-ban>). Meanwhile, in England and Wales, the Court of Appeal recently determined that the use of automated FRT ran afoul of “data protection laws, privacy laws and equality laws”. In coming to its decision, the Court of Appeal observed that “fundamental deficiencies” in the legal framework supporting law enforcement’s use of FRTs enabled breaches of fundamental rights. It also referenced concerns about the technology’s accuracy in identifying people across race and sex (see generally Edward Bridges v The Chief Constable of South Wales Police and others [2020] EWCA Civ 1058).


The Solution: A “Tripartite Typology of Trust Restoration” in AI Technology


My proposal for addressing the “trust crises” in AI technology identified above, and particularly in FRTs, is informed by the recognition that inspiring collective trust in these technologies requires that we work assiduously towards inspiring trust, on a fundamental level, in the people who are designing and programming them. After all, if those who develop and build these technologies are perceived to be untrustworthy, why should we trust their creations? Recognising the importance of engendering trust, and by extension faith, in the various institutions and systems of the technological sector, I advocate a “tripartite typology of trust restoration” to be implemented by those at the helm of power in the technology sector. The substance and mechanics of this typology are best captured by the acronym “ATI”:

A - Acknowledge the breach of trust and demonstrate a commitment to regaining and repairing trust;

T - Take responsibility for the breach of trust and dialogue in an open, sincere and authentic way with those whose trust has been broken, to gain practical and contextually relevant insight(s) into how it can be repaired;

I - Implement corrective measures based on the insight(s) gained and educate/sensitise the public about the corrective measures taken to generate improvement.

The first step in implementing this “tripartite typology of trust restoration” requires that, where trust in AI technologies (here, FRTs) has been breached, the developers and/or deployers of those technologies acknowledge the breach without reservation and demonstrate a commitment to repairing that trust. This, as a start, shows humility on the part of those responsible for the technologies’ existence and deployment, as well as a willingness to embrace principles of transparency and accountability. Deflecting, prevaricating or otherwise attempting to justify or explain away the breach, as Clearview AI’s CEO did following the exposé, does very little to repair broken trust and can counterproductively exacerbate an already untenable situation (see Alfred Ng, ‘Clearview AI says the First Amendment lets it scrape the internet. Lawyers disagree’ (San Francisco, February 6, 2020), CNET: <https://www.cnet.com/news/clearview-says-first-amendment-lets-it-scrape-the-internet-lawyers-disagree/>). Rather than laying the foundation for the restoration of collective trust in Clearview’s FRT, his expressed position and posture in the aftermath of the exposé only engendered more distrust of Clearview AI’s values and ethics, and deepened doubts about whether they align with democratic tenets such as the protection of privacy rights.


The second step necessitates that developers and deployers of AI technologies (in this case, FRTs) take responsibility for the breach of trust that their misuse or (ab)use has occasioned, and foster meaningful dialogue with those who stand to be most affected by, and are also most mistrustful of, FRTs (usually consumers and/or members of the ordinary public). This will allow useful insight to be gained as to how trust can be repaired and will, in turn, inspire innovative yet practical and contextually relevant approaches for repairing the breached trust and, by the same token, engendering greater trust in FRTs. A recurring theme in public criticism of FRTs has been the absence of regulation, and hence accountability, in their deployment. There is also a major concern that tech companies and other high-powered players within the tech labyrinth will resist any attempts at stringent regulation of FRTs, as this could ultimately affect their profit margins. Taking this second step will therefore signal in clear terms the sincerity of any expressed commitment to regaining and repairing the broken trust of members of the public who are wary of FRTs and anxious about what they perceive as slow but steady movements toward a surveillance state. Public buy-in will ultimately prove critical to securing the long-term success, viability and sustainability of FRTs, so fostering meaningful dialogue with the public and other key stakeholders will be indispensable.


The third and final step requires that corrective measures be implemented by developers and deployers of FRTs based on the insights gleaned from dialoguing with and listening to those whose trust has been breached. Once corrective measures are taken, efforts should be made to educate and inform the public about them whilst demonstrating the improved outcomes those measures produce. I acknowledge that this may prove difficult for tech businesses and other industry giants, as it will inevitably involve subordinating, at least initially, corporate and monetary interests to the public interest, especially if the insights emerging from the dialogue which the second step advocates push towards the legal regulation of FRT use. This is a risk that tech businesses and other stakeholders driven by profit may be unwilling to take. However, the trade-off would eventually be worthwhile: where there is public trust in and support for technological innovation, consumer investment, both monetary and otherwise, will increase.


Whether we like it or not, AI technologies, including FRTs, are here to stay, and we can expect rapid developments in this area to be spurred on by the exigencies of the pandemic, which has necessitated disruptive technological innovations to keep us connected as a global community whilst enabling us to better embrace the “new normal” it inaugurated and which subsists four (4) years later. As such, recalibrating, repairing and restoring our collective trust in the persons and institutions utilising AI technologies, and in this particular case FRTs, is imperative. As AI technologies proliferate and grow in scope, reach, intricacy and impact, human trust and faith in their capacity to be a unifying force for good must be restored, maintained and ultimately deepened.






