The inherent biases of artificial intelligence

On March 23, 2016, Microsoft launched its coolest product to date: an AI chatbot that could talk like a teenager. Self-described as a chatbot with “zero chill”, Tay had not only the teen lingo but also all the teen swag. It understood abbreviations such as “tbh”, “hbu”, “totes”, and “srsly”. It knew how to use “dank” and “swagulate” in a sentence. It could sing along to “What Is Love” and rickroll you like no one else could. According to Microsoft’s scientists, Tay’s conversational abilities came from “mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians.” Needless to say, the Internet was teeming with excitement. Within hours, Tay’s Twitter account had gained more than 30,000 followers.

However, things went south, fast. The very next day, the innocent teenager started spitting out racial slurs, homophobic slander, and sexist jokes. It endorsed Hitler, referred to Barack Obama as “the monkey”, called Zoe Quinn “a stupid whore”, and claimed that the Holocaust was a fabricated myth.

Tay’s algorithm worked exactly as its creators had planned, but maybe it shouldn’t have. Tay was trained to learn from the humans it interacted with and to mimic their patterns of speech, even when those patterns would be deemed unacceptable by any respectable adult.

Among the many questions this incident raised, such as “if that’s what Tay learned after a day on the Internet, what do our kids learn after years online?”, one particularly haunted me. How did Microsoft’s researchers, arguably among the best in the world, fail to see it coming? Can we blame them? It seemed inconceivable that Artificial Intelligence could be racist or sexist. But apparently, it can be. After all, AI communicates with and learns from humans, and humans are riddled with biases.

AI-powered devices have been known to show racial biases since as early as 2009. That year, Wang, a Taiwanese-American strategy consultant, found it funny that her Nikon camera kept suggesting that she was blinking even though her eyes were wide open. It was all the more bizarre given that Nikon is a Japanese company; you would expect its face-recognition algorithm to have been tested thoroughly on Asian faces.

Just a few months later, Hewlett-Packard’s webcam software made the news for failing to recognize people with dark skin. A YouTube video posted by user wzamen01 showed that the camera had no problem recognizing and tracking a white face, but when a black man came into the frame, nothing happened. “I think my blackness is interfering with the computer’s ability to follow me,” he jokes in the video.

Even Google, the company spearheading AI research with projects such as DeepMind and Google Brain, has created products that were accidentally racist. In 2015, it launched Google Photos, which, in Google’s words, could automatically tag and label your photos for you. True to the advertisement, the app could indeed label tall buildings as “skyscrapers”, part of an airplane wing as an “airplane”, and people wearing graduation caps as “graduation”. Then it went right ahead and tagged black faces as “gorillas”. Google was “appalled”. So were we.

While most of the incidents above could be dismissed as jokes in bad taste, there are cases where a machine’s inherent biases can lead to serious consequences. Risk assessment software has become increasingly common in courtrooms across the US. When a person is booked into jail, they are given a questionnaire. A system then analyzes their answers, together with their personal details, and outputs a score from 1 to 10. This score is supposed to represent the likelihood that they will commit a future crime: the higher the score, the higher the risk you supposedly pose to society. In states such as Florida, Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin, these scores are given to judges during criminal sentencing. They can be used to inform decisions about the defendant’s freedom, from assigning bond amounts to deciding whether the person should be set free.

Research by the investigative journalist Julia Angwin showed that while this algorithm is reasonably good at predicting whether someone will commit another crime within the next two years, its errors are heavily skewed against blacks. “Black defendants are twice as likely to be rated high risk incorrectly, meaning they did not go on to reoffend. And white defendants are twice as likely to be rated incorrectly as low risk and yet go on to reoffend,” Angwin told NPR in an interview.
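
The disparity Angwin describes is easy to quantify once predictions and outcomes sit side by side: you compare error rates across groups rather than overall accuracy. Below is a minimal sketch in Python using made-up records (not the actual COMPAS data) of how such an audit might tally false positives and false negatives per group.

```python
# Sketch: how error rates split across groups can reveal bias.
# The records below are fabricated for illustration; they are NOT real COMPAS data.
from collections import defaultdict

# Each record: (group, rated_high_risk, reoffended_within_two_years)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("white", False, True), ("white", True, True), ("white", False, False),
    # ... a real audit would use thousands of court records
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, high_risk, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not high_risk:   # rated low risk, yet went on to reoffend
            c["fn"] += 1
    else:
        c["neg"] += 1
        if high_risk:       # rated high risk, yet did not reoffend
            c["fp"] += 1

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```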

And then there is sexism. It shows up not just in a handful of applications, but in the fundamental way machines interpret human language.

To help machines understand human language, each word is represented by a vector of numbers. If two words are similar, their corresponding vectors are close together; if two words are very different, the distance between their vectors is large. One of the most popular systems of word representation is word2vec, created at Google and trained on a massive corpus of Google News articles. word2vec has been shown to capture relationships between English words remarkably well. For example, the difference between the vectors for “man” and “king” is roughly equal to the difference between the vectors for “woman” and “queen”. In their notation, this is written as “man: king; woman: queen”. Other examples include “Boston: Boston Bruins; Phoenix: Phoenix Coyotes” (NHL teams), “Steve Ballmer: Microsoft; Larry Page: Google” (company executives), “Austria: Austrian Airlines; Spain: Spanair” (airlines), and so on. word2vec is widely used in many Natural Language Processing (NLP) tasks, from machine translation and text summarization to improving search results.
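
If you want to try these analogy queries yourself, here is a minimal sketch in Python using the gensim library. It assumes you have separately downloaded the pretrained Google News vectors file (GoogleNews-vectors-negative300.bin); the exact results depend on that model.

```python
# Sketch: querying word2vec analogies with gensim.
# Assumes the pretrained Google News vectors were downloaded beforehand.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man : king :: woman : ?" is computed as king - man + woman
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "queen" is expected to rank near the top.

# The same arithmetic done by hand on the raw vectors:
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(vectors.similar_by_vector(target, topn=3))
```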

However, a closer look at word2vec reveals that it is packed with gender biases. The query “father: doctor; mother: x” returns x = nurse, and the query “man: computer programmer; woman: x” gives x = homemaker. There is also this mind-boggling chain of relationships: she: he; midwife: doctor; sewing: carpentry; registered_nurse: physician; hairdresser: barber; nude: shirtless; boobs: ass; giggling: grinning; nanny: chauffeur. Basically, word2vec is saying that women are midwives and nurses while men are doctors and physicians, women go nude while men go shirtless, women giggle while men grin. It’s startling given that word2vec was trained on Google News articles; you would expect professional journalists to be the last people to profess gender biases.
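
The same analogy machinery can be used to probe for these biases directly. The sketch below reuses the vectors object loaded in the previous snippet; treat the outputs as illustrative, since the exact words returned depend on the pretrained model (in which multi-word phrases are joined with underscores).

```python
# Sketch: probing word2vec for gender-skewed analogies.
# Reuses the `vectors` object loaded in the previous snippet.

def analogy(a, b, a2, topn=3):
    """Return candidates x such that a : b :: a2 : x."""
    return vectors.most_similar(positive=[b, a2], negative=[a], topn=topn)

print(analogy("father", "doctor", "mother"))            # reported to surface "nurse"
print(analogy("man", "computer_programmer", "woman"))   # reported to surface "homemaker"

# Bias also shows up as raw similarity differences:
print(vectors.similarity("doctor", "man"), vectors.similarity("doctor", "woman"))
```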

The problem with these biases is that they are automatically passed on to any application that uses word2vec. Take Google searches: since “doctor” is closer to “man” than to “woman” in the vector space, search results for the query “nearby doctors” might rank male doctors higher than female doctors.

Another way in which NLP’s inherent gender discrimination can hurt women is in advertising. In July 2015, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for jobs paying more than $200,000. According to Amit Datta, a Ph.D. student who took part in the research, “the male users were shown the high-paying job ads about 1,800 times, compared to female users who saw those ads about 300 times.” Meanwhile, the ads most associated with female profiles were for a generic job posting service and an auto dealer.

This is a violation of equal employment opportunity. How would a woman know to apply for a well-paying job if she doesn’t even know that it exists?

Now that we have realized that these biases exist, what can we do about them? One way is to correct them manually. Researchers studying word2vec compiled a list of gender-biased word pairs, investigated how each bias was reflected in the word vectors, and then transformed those vectors to remove the bias, a process they call “hard de-biasing.” The foul-mouthed teenage chatbot Tay appeared to have been hardcoded to reject nasty comments about GamerGate. Textio is a startup that analyzes your recruiting messages and tells you whether they are gender biased.
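
As a rough illustration of what “hard de-biasing” involves, the sketch below removes the component of a word vector that lies along a gender direction. It is a simplification: the published approach estimates the gender direction from many word pairs via PCA and adds an equalization step, whereas here the direction is crudely taken from a single pair.

```python
# Sketch: a simplified version of hard de-biasing a single word vector.
# The real method uses PCA over many gendered pairs plus an equalization step;
# here the gender direction is crudely estimated from one pair.
import numpy as np

def debias(word_vec: np.ndarray, gender_dir: np.ndarray) -> np.ndarray:
    """Remove the projection of word_vec onto the gender direction."""
    g = gender_dir / np.linalg.norm(gender_dir)
    return word_vec - np.dot(word_vec, g) * g

# Example usage with the gensim vectors loaded earlier (assumption):
# gender_dir = vectors["she"] - vectors["he"]
# doctor_neutral = debias(vectors["doctor"], gender_dir)
```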

However, this laborious approach only works, and only to a certain degree, for biases that we are already aware of. We have been modeling Artificial Intelligence after bias-ridden humans; to make AI truly fair and bias-free, we have to model it after something better than ourselves.

Chip Huyen

A student at Stanford University studying Artificial Intelligence and Creative Writing. When she is not busy trying to sound smart, you can find her juggling at an obscure street corner somewhere in South America or sipping over-sweet tea in the Himalayas. Every day, she tries to learn something new and blogs about it at Learn 365 Project.
