
Darwin's Animoji: Histories of Emotion, Animation, and Racism in Everyday Facial Recognition


Blog by Cameron Mine | MS Student in Computer Science


Illustration by Gary Zamchick | DLI Chronicler

The discussion around racism tends to revolve around a few thematic areas: individual racism, interpersonal racism, institutional racism, and cultural racism. While at times difficult to discuss, it is critical that we keep doing so, because it forces us to think transformatively in the pursuit of an anti-racist society. Yet lost among these traditional silos of discrimination is technology. What role, if any, is technology playing in the racializing energy of division permeating our world? On January 31st, Microsoft Research Montreal postdoctoral researcher Luke Stark led the way into the uncharted waters of technological racism.


How Race Is a Technology

Stark begins his argument by positing that race is indeed a technology. To consider its technological function, we were first asked to reflect on the biological function of race, of which there is none. Race instead becomes the instrument used to forcefully sort individuals into discriminatory populations controlled by some sovereign power. That power's claim to authority is conveniently grounded in race's hierarchical structure, which serves to naturalize racial classification. In other words, the general population is forced to spend its entire existence under the watchful eye of a group whose authority rests on nothing more legitimate than a supposed "racial advantage." Once such naturalized categories are established, the sovereign power uses them to identify and justify the exclusion of people deemed outliers and exceptions to the rights reserved for the sovereign power's class. Rather than reflecting any meaningful genetic variance among the human population, race is used as a purportedly scientific justification for marking populations as inferior, immoral, or criminal.


Stark quickly defends the claim that race is a technology by looking at historical examples such as the Victorian statistician Francis Galton (1822-1911), whose research attempted to empirically characterize "criminals" by superimposing faces through multiple exposures to construct the average criminal face. Galton's findings unsurprisingly helped spark the formative years of the eugenics movement, which aimed to improve the biological makeup of the human population by restricting parenthood. Another example is Samuel Morton (1799-1851), whose research on the relationship between cranial capacity and intelligence was offered in support of white supremacy. One might now be thinking that, as a more enlightened society, we have moved past the primitive caricatures and deeply embedded racist ideas of the 17th to 19th centuries, right? Unfortunately not.


How We Still Deploy the Technology of Race

Despite how startling it would be to prolong the mistakes of the past, it appears that with facial recognition we (as both scientists and technologists) continue to do exactly that. Stark masterfully considers the discomfiting paradox at the heart of this issue and then declares that "facial recognition is the perfect tool for oppression." I began to ponder the possibility of facial recognition being an oppressive force. After all, as the adage goes, "when you know better, you do better." We know physiognomy (the attempt to interpret a person's character from the face) is racist pseudoscience masquerading as respectable science. We know there is no physical feature indicating rampant immorality in a certain race. So how could we not be doing better?


Stark sheds light on our most recent stumbles. In 2016, researchers Xiaolin Wu and Xi Zhang published a paper entitled "Automated Inference on Criminality Using Face Images." The paper claims that, with 90% accuracy, it can predict whether a person is a convicted criminal from nothing more than their driver's license photo. Sounds familiar, right? At its root, the paper claims that the extraction and classification of criminality from facial features can be automated. The only difference between this work and the previously mentioned false claims of the likes of Galton and Morton is the use of machine learning and facial recognition to lend those claims new credibility. This is deeply problematic.


The Inherent Racism of Facial Recognition

Stark provides the ultimate reality check by reminding the audience that facial recognition is incapable of making any meaningful sociological deductions from the input data it is given. What today's facial recognition software is capable of is linking the features identified in a given input face retroactively to a database entry. Facial recognition can answer the question, "have we seen this face before?" What facial recognition is actually being tasked to do in work like Wu and Zhang's, however, is to take on a more predictive role: given an input face it has never encountered before, decide whether the person can be reduced to a defining classification such as "criminal." This simply can't be done, because instead of looking for behaviors or actions defined as legally criminal, facial recognition is being forced to identify facial traits that supposedly predict criminality.
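To make the retroactive nature of that matching concrete, here is a minimal sketch of the kind of lookup facial recognition actually performs: comparing a new face's feature vector against a database of enrolled faces and reporting the closest match, if any. The function name, the embedding dimension, and the similarity threshold are illustrative assumptions, not any particular system's API.

```python
# A sketch of retroactive face matching: "is this face close to one we already have?"
# Names, the 128-dimensional embeddings, and the 0.8 threshold are assumptions for
# illustration only.
import numpy as np

def match_face(query_embedding, database, threshold=0.8):
    """Return the ID of the closest enrolled face, or None if nothing is close enough."""
    best_id, best_score = None, -1.0
    for face_id, enrolled in database.items():
        # Cosine similarity between the query face and an enrolled face.
        score = float(np.dot(query_embedding, enrolled) /
                      (np.linalg.norm(query_embedding) * np.linalg.norm(enrolled)))
        if score > best_score:
            best_id, best_score = face_id, score
    return best_id if best_score >= threshold else None

# Toy usage: three enrolled faces and a noisy re-capture of one of them.
rng = np.random.default_rng(0)
db = {f"person_{i}": rng.normal(size=128) for i in range(3)}
query = db["person_1"] + rng.normal(scale=0.05, size=128)
print(match_face(query, db))  # -> "person_1"; an unknown face returns None
```

Nothing in this kind of lookup says anything about the person beyond "this vector resembles that vector." A "criminality" classifier built on the same features learns only whatever its labels encode, which is precisely the gap Stark identifies between what the technology does and what it is being asked to do.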


Yet more importantly, even if facial recognition could make such deductions, the act of making such a judgment based on surface-level features is inherently racist. Faces do not exist to reflect the innate properties of the people they are attached to. Beyond the truism of race acting as a tool for physiognomy, Stark's arguments surrounding facial recognition finally began to congeal in my mind. The larger failure of sanctioning facial recognition with a gratuitous distribution of power is that it freely licenses the technology to conduct a host of abuses and corrosive actions against the most vulnerable in our society: the racially disenfranchised.


Stark's lecture is wholly indispensable. While it did not dwell on the future of facial recognition, it ushers us into the necessary long-term thinking about technological racism. It implores us to question whether the conveniences of facial recognition are worth amplifying the harms of the past. Most importantly, it emboldens us to act before the misuse of facial recognition erodes the democratic values we hold dear.
