Is Face Surveillance Technology Racist?

In June 2020, IBM, Microsoft, and Amazon halted the sale of face recognition technology to police departments in the United States. Face surveillance technology had become a staple of many companies' security systems and even of law enforcement. It also drew a wave of disapproval from racial justice and civil rights activists, who argued that the surveillance technology and its underlying algorithms were inherently racist.

In 2018, Black scholars Joy Buolamwini, Deb Raji, and Timnit Gebru, working with the Massachusetts Institute of Technology, conducted research showing that face surveillance technologies usually identify middle-aged white men correctly, while misidentifying around 35 percent of Black female users aged 18 to 30. A subsequent study found these problems were even more pronounced in Amazon's software.

IBM's and Microsoft's facial recognition software has also been used by law enforcement, which makes these findings all the more concerning. Both companies acknowledged the risk of subtle racism in their existing algorithms and publicly committed to addressing the bias. The need for diversity and inclusivity in tech continues, and it leaves us wondering how many other technologies may be biased as well.
