Week 5: reflections

Timothy Lee
Nov 10, 2020

My next few blog entries will be devoted to the research topic I am proposing for my computational arts-based theory and research class. Under the title "Big Brother: Racial Bias and Methods of Resistance in a World of Surveillance," my focus turns to the algorithmic biases found in facial recognition software and how this flawed technology is used in a world of surveillance, particularly now, as riots and protests erupt around the world.

In Figuring the Human in AI and Robotics, Lucy Suchman argues that contemporary research and development in technology emerges from a "source imaginary," most often a Euro-American one. She makes this point regarding figuration in artificial intelligence: the assumptions about what makes something "human" are defined by a Euro-American, culturally specific imaginary. Likewise, in facial recognition, one of the earliest models from which the technology developed was "the standard head," a mathematical model of a head that provided the "ground truth" for facial recognition. But whose head was it based on? If it was modeled on a standard European male head, as I suspect it was, what implications does this technology have for people of color, women, and anyone who does not conform to these Westernized standards?

For too long, technology has overlooked the biases implicit in its research and development. Why is it that utilitarian objects like motion-sensing soap dispensers have a harder time detecting Black hands? Why was the Kodak neutral grey background in image-editing software made for color-correcting white skin? Technology requires a base reading as a template for innovation and articulation, and it is clear that even in a purportedly post-racial world of science, these biases are ingrained into our research at the source.

In terms of facial recognition technology, which becomes more relevant by the day as governments come frighteningly close to emulating "Big Brother" levels of surveillance over their citizens, how might biases in the technology play out? Research has already shown that current facial recognition software has a more difficult time distinguishing the faces of Black people than those of white people. This has implications for the wrongful detainment or tracking of individuals, especially at the expense of their privacy. Moreover, if the "standard head" model included base references to the heads of Black people, Asian people, and other communities of color, how might that change how facial recognition software is used or developed?

These are the questions I will begin to research and delve into more deeply in the coming weeks. I think it is an incredibly relevant field in light of current political climates across the world, including the mass surveillance of protests erupting in major cities in the US and Hong Kong, where protestors are using the flaws of facial recognition technology to subvert its tracking mechanisms, confusing the system with something as simple as make-up.
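The disparity in error rates mentioned above is usually quantified by comparing false match and false non-match rates across demographic groups under a single decision threshold, as in audits like NIST's demographic-effects study. Here is a minimal sketch of that measurement; the similarity scores, group labels, and threshold below are all invented for illustration and do not come from any real system:

```python
# Hypothetical sketch: measuring demographic disparity in face verification.
# Scores, group labels, and the threshold are invented for illustration only.
from collections import defaultdict

# Each record: (similarity_score, is_same_person, demographic_group)
comparisons = [
    (0.91, True,  "group_a"), (0.62, False, "group_a"),
    (0.55, True,  "group_a"), (0.48, False, "group_a"),
    (0.88, True,  "group_b"), (0.71, False, "group_b"),
    (0.43, True,  "group_b"), (0.69, False, "group_b"),
]

THRESHOLD = 0.6  # one global decision threshold applied to every group

def error_rates(records, threshold):
    """Compute per-group false match rate (FMR) and false non-match rate (FNMR)."""
    stats = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for score, same_person, group in records:
        s = stats[group]
        if same_person:
            s["gen"] += 1
            if score < threshold:    # genuine pair wrongly rejected
                s["fnm"] += 1
        else:
            s["imp"] += 1
            if score >= threshold:   # impostor pair wrongly accepted
                s["fm"] += 1
    return {
        g: {"FMR": s["fm"] / s["imp"], "FNMR": s["fnm"] / s["gen"]}
        for g, s in stats.items()
    }

for group, rates in error_rates(comparisons, THRESHOLD).items():
    print(group, rates)
```

The point of the sketch is that a single global threshold can produce very different false match rates for different groups; that statistical gap is what becomes wrongful tracking or detainment once such a system is deployed for surveillance.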

