Creating inclusive technology products
In her book Weapons of Math Destruction, Cathy O’Neil describes how many of the mathematical models powering the data economy reflect the prejudice, misunderstanding, and bias of their creators. She warns that “many poisonous assumptions are camouflaged by math and go largely untested and unquestioned”.
Prejudice and bias are everywhere, not just in mathematical models but also in technology products.
We all imagine technology to be neutral, so it’s a shock to discover that, like anything produced by humans, our technology products contain bias and prejudice. In December 2017, several stories emerged about the iPhone X’s ‘racist’ facial recognition software: one Chinese woman’s colleague was able to unlock her phone, and another Chinese woman reported that her son could unlock hers. In 2015, Google Photos, which automatically applies labels to pictures in people’s photo albums, was found to have classified images of black people as gorillas. And Nikon’s S630 camera wrongly identified a photograph of a smiling Asian woman as someone who might be blinking.
But it’s not just racial prejudice that appears in our technology products and services; gender discrimination happens too. Google’s ad targeting system, for example, means that women are far less likely than men to be shown adverts on Google for highly paid jobs.
The New York Times, in an article on prejudices being built into artificial intelligence, declared: “This is fundamentally a data problem”.
But who creates the data, the systems, the algorithms, the technology? People do. People with biases and prejudices, whether conscious or unconscious.
Given how pervasive technology is in our lives, now more than at any time in history, it is imperative that technology companies not only recognise the potential for prejudice to creep into the products they create, but also take urgent steps to prevent it.
So, as far as I'm concerned, the question isn’t: is tech biased? The question is: what can technology companies do about it?
One way is to increase diversity in technology so that our technology industry is more reflective of society.
Another is to stress-test artificial intelligence systems for bias, as Dr Anupam Datta, a computer scientist at Carnegie Mellon University, has done in order to uncover “dodgy, potentially harmful or discriminatory results”.
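One simple form of stress test is a counterfactual probe: hold every input to a system fixed, change only a protected attribute such as gender, and see whether the output changes. The sketch below illustrates the idea with an invented `score_applicant` model and `counterfactual_gap` helper; these names and the toy scoring logic are my own illustration, not Datta’s actual method or tooling.

```python
def score_applicant(profile):
    # Toy stand-in for an opaque model under test. It contains a
    # deliberate hidden gender skew, so the probe has something to find.
    base = profile["years_experience"] * 10
    if profile["gender"] == "female":
        base -= 15  # the kind of skew a stress test should surface
    return base

def counterfactual_gap(model, profile, attribute, values):
    """Spread in the model's output when only `attribute` is varied."""
    outputs = []
    for value in values:
        variant = dict(profile, **{attribute: value})  # copy, change one field
        outputs.append(model(variant))
    return max(outputs) - min(outputs)

profile = {"years_experience": 5, "gender": "male"}
gap = counterfactual_gap(score_applicant, profile, "gender", ["male", "female"])
print(gap)  # 15: gender alone changes the outcome, flagging possible bias
```

Because only one attribute varies between runs, any non-zero gap can be attributed to that attribute rather than to other differences in the input.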
But it's only when people are made aware of and understand their own prejudices that they can identify and address bias in the datasets, algorithms, systems, and technology products they are creating.
A computer science professor in Virginia realised that he and his fellow researchers were unconsciously feeding their biases into the image recognition software they were building: images of people cooking, shopping, and washing were being linked to women, while outdoor sporting equipment such as snowboards was being associated with men.
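Skews like these can be surfaced with a straightforward audit of the training data: for each label, count how often it co-occurs with each gender annotation. The sketch below is a minimal, hypothetical version of such an audit; the tiny dataset and the `gender_skew` function are invented for illustration and do not reflect the researchers’ actual data or code.

```python
from collections import Counter

# Invented miniature dataset: each entry is an image's activity labels
# plus the annotated gender of the person pictured.
dataset = [
    {"labels": {"cooking"}, "gender": "female"},
    {"labels": {"cooking"}, "gender": "female"},
    {"labels": {"cooking"}, "gender": "male"},
    {"labels": {"snowboard"}, "gender": "male"},
    {"labels": {"snowboard"}, "gender": "male"},
]

def gender_skew(dataset, label):
    """Fraction of images carrying `label` whose subject is annotated female."""
    counts = Counter(ex["gender"] for ex in dataset if label in ex["labels"])
    total = sum(counts.values())
    return counts["female"] / total if total else 0.0

print(gender_skew(dataset, "cooking"))    # ~0.67: 'cooking' skews female
print(gender_skew(dataset, "snowboard"))  # 0.0: 'snowboard' skews male
```

A model trained on data with such skews will tend to reproduce, and often amplify, them, which is why auditing the dataset is a natural first step before training.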
However, as the examples earlier in this article show, bias and prejudice aren't always that easy for the creators of technology products to spot. One of the ways social anthropologists can help organisations is by exploring and uncovering the (often unconscious) biases and prejudices that people bring to their work. After all, the power of anthropology is to make visible that which is invisible.
Further reading:
- Weapons of Math Destruction by Cathy O’Neil
- Why I’m No Longer Talking to White People About Race by Reni Eddo-Lodge
- Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks