Dear Friends,
The global pandemic has thrust so many of us more deeply into the online world. Online teaching, online shopping, online entertainment, online socializing, online conferences, online research, online gaming, online worship, online political engagement, et cetera.
As educators, we have very likely become acutely aware of how differential access to computers and to internet connectivity has exacerbated already existing inequalities, both among students and among teachers. Many of us probably feel that the measures we are able to employ in our teaching still fall short of adequately addressing the online divide, because the scope of the problem is too broad and its nature institutional.
This issue of internet access is certainly an important matter for ethical reflection and political action, and its importance should not be downplayed. But what I would like to point out in this short piece is that what inclusion in the online sphere means itself requires critical interrogation.
Our lives are deeply entrenched in information technology, in ways that are largely invisible to us. It seems to me that we are still very far from appreciating how economies of knowledge, mediated by information technologies for the collection and deployment of data – big data – impact all areas of our lives: health, employment, law enforcement, food access, ecology, environmental vulnerability, political enfranchisement.
Our lives are overrun by technological mediation, and yet our philosophy curricula – at least in my own context – do not seem particularly equipped to grapple with the swiftly evolving and increasingly complex role of information technology in our lives. The ubiquitous and overwhelming presence of technology, our near-complete immersion in our online existence, tends to breed a sense of helplessness and lack of control in our relation to technology, as if the tide of technological innovation were inevitable and beyond the purview of critical examination and individual or even collective agency.
But if philosophy is purportedly good at anything, it is precisely in rising to the challenge of posing questions when the temptation is strong to acquiesce to the tyranny of existing conditions. The answers might not be forthcoming, and the question might ring merely rhetorical, but it is worth asking, for instance, what agency might mean when we sign off on our personal data and digital footprints in exchange for access to online goods and information. (How many cookies have you approved in the name of “better app function”?)
I would like to introduce you to Annette Zimmermann, whose work allows us to gain some foothold in the broad category of philosophical reflection on the social and political significance of information technology and more specifically of artificial intelligence (AI). Dr. Annette Zimmermann is a political philosopher specializing in the ethics of AI and machine learning. Her essay, co-authored with Elena Di Rosa and Hochan Kim, “Technology Can’t Fix Algorithmic Injustice,” published in the Boston Review (January 9, 2020), won the 2020 David Roscoe Award for an Early-Career Essay on Science, Ethics, and Society.
In “Technology Can’t Fix Algorithmic Injustice,” Zimmermann et al. argue for an ethical examination and evaluation of AI tools. Citing specific examples, the article provides an interesting discussion of why the presumption that algorithmic tools of decision making are neutral and objective is problematic. The article shows how even attempts to correct bias by means of purely procedural mechanisms in the coding of algorithms might prove insufficient in addressing ethical problems, and recommends a socio-politically grounded examination of AI:
Developers cannot just ask, “What do I need to do to fix my algorithm?” They must rather ask: “How does my algorithm interact with society at large, and as it currently is, including its structural inequalities?” We must carefully examine the relationship and contribution of AI systems to existing configurations of political and social injustice, lest these systems continue to perpetuate those very conditions under the guise of neutrality. As many critical race theorists and feminist philosophers have argued, neutral solutions might well secure just outcomes in a just society, but only serve to preserve the status quo in an unjust one. (BR, Jan. 9, 2020)
The article mentions recent books on the topic of algorithmic injustice from both sides of the debate. Incidentally, I found it interesting that the works cited (seven titles were mentioned) as making the case through “a wealth of empirical evidence” that “the use of AI systems can often replicate historical and contemporary conditions of injustice, rather than alleviate them” were all written by women, whereas the two books mentioned as advocating technically improving AI to address algorithmic injustice – under the heading of “FAT ML (‘fairness, accountability and transparency in machine learning’)” – were both by men. (This might be of interest to those among you who wish to explore the epistemic significance of gender in the fields of computer science, data science, and AI.)
Zimmermann et al. make the case for recognizing that AI does not exist in a vacuum and that AI should be evaluated not merely on technical but on ethical grounds. AI tools – in their design, development, and use – are a product of human decisions and should thus be subject to human deliberation and control. The authors argue that we should, therefore, resist automation bias (the tendency to think that automated decision making is better than human decision making based on the assumption that it is more objective and neutral and less prone to human error) as well as what the authors call a learned helplessness with regard to AI:
As we have argued, AI’s alleged neutrality and inevitability are harmful, yet pervasive, myths. Debunking them will require an ongoing process of public, democratic contestation about the social, political, and moral dimensions of algorithmic decision making. (BR, Jan. 9, 2020)
In another essay, “Stop Building Bad AI” (Boston Review, July 21, 2021), Zimmermann identifies four obstacles to ethical reflection on AI: (1) the cultural presupposition about technological innovation (in the tech world, but I think also in business) that faster is better (“the cultural imperative […] to move fast and break things”); (2) “the contention that developing a potentially harmful technology is better than leaving it to bad actors”; (3) a narrow conception of algorithmic injustice as simply a matter of bias (understood as “disparate distributions of error rates across demographic groups”); and (4) the “presumption that we always have the option of non-deployment,” that is to say, the questionable assumption that it is always possible to interrupt or undo the harmful effects of bad AI simply by ceasing to use the AI tool should things turn out badly in the future.
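For readers curious what the narrow, technical notion of bias in (3) amounts to in practice, here is a minimal, purely illustrative sketch in Python. The data, group labels, and error measure are invented for the example and are not drawn from Zimmermann’s essays; the point is only to show the kind of per-group error-rate comparison that, on her argument, cannot by itself settle whether a system is just.

```python
# Illustrative only: invented toy data, not from Zimmermann's essays.
# Shows the narrow notion of "bias" as disparate error rates across groups:
# compare false positive rates for two demographic groups.

from collections import defaultdict

# Each record: (group, true_label, predicted_label); 1 = flagged by the system.
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)

for group, true_label, predicted in records:
    if true_label == 0:              # count only actual negatives
        negatives[group] += 1
        if predicted == 1:           # wrongly flagged
            false_positives[group] += 1

for group in sorted(negatives):
    fpr = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
```

Even if such error rates were equalized across groups, Zimmermann’s broader point would stand: the measure says nothing about whether the system’s goal, or the social setting into which it is deployed, is just.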
To support her argument against the narrow conception of algorithmic injustice mentioned in (3), Zimmermann cites the following case:
Consider Megvii, a Chinese company that used its facial recognition technology in collaboration with Huawei, the tech giant, to test a “Uighur alarm” tool designed to recognize the faces of members of the Uighur minority and alert the police. Here it is the very goal of the technology that fails to be morally legitimate. (BR, July 21, 2021)
At least in some instances, there could very well be a conflict between the efficiency of AI tools and the demands of social justice that would warrant not deploying – or even halting the development of – AI tools on moral grounds:
Non-deployment efforts in this area have been prompted by influential studies showing that currently used facial recognition systems are highly inaccurate for women and people of color. This is a good reason not to deploy these systems for now, but it is also important to recognize that the unregulated use of such systems might well be politically and morally objectionable even if those tools could be made highly accurate for everyone. Tools that support and accelerate the smooth functioning of ordinary policing practices do not seem to be the best we can do in our pursuit of social justice. In fact, the use and continued optimization of such tools may actively undermine social justice if they operate in a social setting that is itself systemically unjust. (BR, July 21, 2021, italics mine)
To consider the ethical question of purposes (rather than merely technical questions of means) is to take the option of non-deployment seriously. Zimmermann makes the point that, in cases that call for it, we should treat non-deployment as a genuine option, rather than assume that improving the accuracy or efficiency of AI is always the better or only available choice.
Finally, I would like to note that asking questions about the purposes to which AI technologies are put, about their social effects, about responsibility, about whom to hold to account and how to hold those responsible to account, and about what to do with unforeseen harms caused by the deployment of AI tools – all these considerations bring us to the topic of what Zimmermann et al. call “democratic agenda-setting.” They argue that decisions about AI tools should not be left in the hands of developers, businesses, and state regulators alone.
Jean Tan
Ateneo de Manila University
Very relevant and interesting article. The actual facial recognition app cited above reminds me of the historian Yuval Noah Harari’s ideal “philosophical car,” one which is coded to run fast à la Schumacher while being ethical à la Kant, Mill, and Rawls. But Zimmermann interestingly takes the role of ethics in AI further.