Week 7 reflections

Timothy Lee
Nov 23, 2020

Update on our research project:

I’ve spent the last week reading Race After Technology by Ruha Benjamin in preparation for our group’s project in Computational Art Theory and Research. My sub-topic of interest is racial bias in surveillance technology, along with the idea of a truly neutral algorithm.

Race After Technology is a really interesting and informative read because it addresses a topic much larger than, but equally relevant to, my theme of interest: how racism and bias are integrated into every technological advancement. Benjamin argues that “the plight of Black people has consistently been a harbinger of wider processes — bankers using financial technologies to prey on Black homeowners, law enforcement using surveillance technologies to control Black neighborhoods, or politicians using legislative techniques to disenfranchise Black voters” (32). In this way, she argues, Black people “already live in the future” (32).

I’m particularly interested in the purported neutrality of algorithms, and in how we as a society can go about building a more equitable and neutral system, free of bias. But how naive is this goal? After all, if humans are the ones teaching, coding, and feeding data to machines and their algorithms (for example, inputting or labelling faces in a database), and we humans all have our own biases, how can machines be free of them? Can machines be racist?
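To make that worry concrete, here is a minimal, invented sketch in Python (using NumPy). The scenario, the numbers, and the “flagging” task are all my own assumptions, not Benjamin’s: two groups behave identically by construction, but hypothetical human annotators flag one group more often, and an ordinary logistic regression trained on those labels absorbs the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with, by construction, identical "behavior" -- neither
# group deserves to be flagged more often than the other.
group = rng.integers(0, 2, n)
behavior = rng.normal(0.0, 1.0, n)

# Hypothetical biased annotators: they flag high-behavior cases, but
# also flag members of group 1 at a higher base rate.
logits = behavior + 1.5 * group - 1.0
label = rng.random(n) < 1 / (1 + np.exp(-logits))

# A perfectly "neutral" learning algorithm: plain logistic regression
# fit by gradient descent on the annotators' labels.
X = np.column_stack([behavior, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - label)) / n

# The model recovers a large positive weight on `group`: it has
# learned the annotators' bias, not anything real about behavior.
print("learned weights [behavior, group, intercept]:", w.round(2))
for g in (0, 1):
    flagged = 1 / (1 + np.exp(-X[group == g] @ w)) > 0.5
    print(f"group {g}: flagged rate = {flagged.mean():.2f}")
```

Nothing in the training procedure mentions race; the disparity comes entirely from the labels the model was fed, which is the sense in which a “neutral” algorithm can still end up racist.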

Benjamin believes that machines, like humans, can be racist, since their conception, design, and training have all happened in a racist world. She writes:

“To a certain extent, [robots] learn to speak the coded language of their human parents — not only programmers but all of us online who contribute to ‘naturally occurring’ datasets on which AI learn. Just like diverse programmers, Black and Latinx police officers are known to engage in racial profiling alongside their White colleagues, though they are also the target of harassment in a way their White counterparts are not. One’s individual racial identity offers no surefire insulation from the prevailing ideologies. There is no need to identify ‘giggling programmers’ self-consciously seeking to denigrate one particular group as evidence of discriminatory design. Instead, so much of what is routine, reasonable, intuitive, and codified reproduces unjust social arrangements, without ever burning a cross to shine light on the problem.” (62)

Ultimately, Benjamin seems to assert that because machines learn by taking in data from the outside world, which is undoubtedly racist, you cannot create a “neutral” algorithm or machine. Instead, much as we do with people (especially younger generations), we must design machines and future technological advances from a race-conscious approach, one that does not assume colorblindness.
