
Turns Out Algorithms Are Racist

Artificial intelligence is becoming a greater part of our daily lives, but the technologies can contain dangerous biases and assumptions—and we're only beginning to understand the consequences.


It turns out that artificial intelligence may be just as bigoted as human beings. Last fall, researchers discovered that a complex program used in image-recognition software was producing sexist results, associating cleaning or the kitchen with women, for example, and sports with men. The developers were disturbed, but perhaps it shouldn’t have been so surprising. After all, computers and software, even at their most sophisticated, are still in essence input-output systems. AI is “taught” by feeding it enormous amounts of pre-existing data, in this case thousands upon thousands of photos. If the program began to associate certain genders with certain activities, that is because it was outputting the bias inherent in its source material: a world in which the people photographed in kitchens are too often women.
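To see how that works in miniature, here is a toy sketch in Python (the data, the counts, and the simple most-frequent-label rule are all invented for illustration, not the actual system the developers built): a program that learns labels from skewed examples simply turns the skew into a rule.

```python
from collections import Counter

# Hypothetical, skewed training pairs of (activity, gender of person
# pictured), mimicking biased source material.
training_data = (
    [("kitchen", "woman")] * 80 + [("kitchen", "man")] * 20 +
    [("sports", "man")] * 85 + [("sports", "woman")] * 15
)

counts = Counter(training_data)

def predicted_gender(activity):
    """Predict whichever gender is most often paired with the activity."""
    options = {g: counts[(activity, g)] for g in ("woman", "man")}
    return max(options, key=options.get)

# The model has no opinions of its own; it just echoes the skew in its input.
print(predicted_gender("kitchen"))  # woman
print(predicted_gender("sports"))   # man
```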

The problem is a significant one. While in the abstract the term artificial intelligence can conjure sci-fi visions of fully autonomous robots with personalities, for the present AI mostly refers to complex software used to make decisions or carry out tasks: everything from determining credit approvals to predicting shopping habits to steering self-driving cars. As we outsource and automate things like decision-making, customer service, and physical or mental tasks to software that is ever more pervasive in our daily lives, there are profound ramifications for employment, government, regulation, and social justice, too. We are at the threshold of determining whether a new era of technology will replicate the injustices of the past, or whether it might instead be used to challenge the inequalities of the present.

The term “AI” covers a set of related fields: machine learning, in which computers are fed information and gradually learn to recognize and process it in increasingly complex ways; natural language processing, which lets software parse the varied, shifting ways in which humans express ideas; image recognition, which learns to “see” and understand images; and reasoning, in which computers learn to “think.” Obviously, our phones are not sentient little beings in our pockets. But many apps use elements of artificial intelligence to do things like recognize faces or images, react to context such as adding one’s location to messages, or understand commands we give with our voice. On the most basic level, AI technologies operate like “if this, then that” statements (if x is greater than 100, then sound an alarm), only vastly more complicated.
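To make the analogy concrete, here is a minimal sketch (the numbers and weights are invented for illustration): the hand-written rule from the sentence above, next to the same decision expressed as a weighted score of the kind a system might learn from data rather than receive from a programmer.

```python
# The hand-written version: "if x is greater than 100, then sound an alarm."
def alarm_rule(x):
    return x > 100

# The "learned" version has the same shape, but the weights and threshold
# would come from training data rather than a programmer. Real systems
# chain millions of these tiny decisions together.
def learned_alarm(features, weights, threshold):
    score = sum(w * f for w, f in zip(weights, features))
    return score > threshold

print(alarm_rule(120))                                # True
print(learned_alarm([120, 0.3], [1.0, 50.0], 100.0))  # True (120*1.0 + 0.3*50 = 135)
```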

Once we understand artificial intelligence as primarily a decision-making system, it becomes easier, and also surprising, to see the extent to which it has already penetrated our lives, often with a worrying lack of transparency. Amazon, for example, already uses AI in hundreds of ways: to determine consumer preferences, to suggest products to buyers, to organize its warehouses and distribution, and of course in its Alexa voice-assistant products like the Amazon Echo. But the company also uses AI to push customers toward higher-priced products that come from preferred partners. These kinds of examples are becoming more common, and more serious. A ProPublica investigation revealed that justice systems were using AI to predict the chance of reoffending, and that the software incorrectly marked black defendants as more likely to be future criminals. AI is also being used to determine which prison a convict should go to or, as The Atlantic revealed, what visitation rights he or she might have. AI and machine learning were in part responsible for the fake news that may have influenced the 2016 election. AI is also used to determine credit eligibility or offers for other financial products, and often does so in discriminatory ways: the programs may offer you a higher interest rate if you are black or Latino, for instance, than if you are Asian or white.

As AI becomes more and more complex, it can become difficult for even its own designers to understand why it acts the way it does. This poses a serious problem, particularly given the common perception that AI is somehow objective or scientific. In such scenarios, technology becomes a black box that makes decisions and offers pronouncements we are encouraged to obey. As that complexity grows, it becomes more and more important to be aware of the effects AI is having on broader social reality.

But the fact is that technology is never neutral. The sexist interpretation of images is just the tip of the iceberg. Since machine learning and AI operate by collecting, filtering, and then learning from and analyzing existing data, they will replicate existing structural biases unless they are explicitly designed to account for and counteract them. Doing so would require a specifically social justice-oriented perspective, one that considers how economics intertwines with gender, race, sexuality, and a host of other factors. But given cultural factors such as the pervasive faith in blind meritocracy among tech professionals, pushing ideas focused on equity will be an uphill battle.
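What “explicitly designed to account for and counteract” might mean in practice can be sketched in a few lines (the reweighting scheme below is one common technique, shown on invented data, not a description of any particular deployed system): give rarer pairings proportionally more weight so that each combination contributes equally during training.

```python
from collections import Counter

# The same kind of skewed pairs as before (again, invented data).
data = [("kitchen", "woman")] * 80 + [("kitchen", "man")] * 20

counts = Counter(data)

# Weight each example inversely to its frequency, so that the rare
# "man in the kitchen" photos carry as much total weight in training
# as the common "woman in the kitchen" ones.
weights = {pair: 1.0 / counts[pair] for pair in counts}

weighted_total = {
    gender: counts[("kitchen", gender)] * weights[("kitchen", gender)]
    for gender in ("woman", "man")
}
print(weighted_total)  # {'woman': 1.0, 'man': 1.0} -- the skew is neutralized
```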

There are moves afoot, however. AI Now is a New York-based research initiative led by Kate Crawford and Meredith Whittaker that seeks to further understanding of how AI works, how it might be put to better use, and how current implementations may sometimes be harmful. In its annual report, the group put out a series of recommendations designed to mitigate the worst of AI’s prejudices. Among them: dedicating resources to diversifying the range of inputs for AI systems, especially those related to marginalized groups (photos of men doing the dishes, say, or of two women getting married); developing systems to evaluate AI’s fairness and harms while in use; and improving diversity among the people designing and implementing AI, in order to ferret out blind spots and biases. After all, if more of the developers were black, or women, then the programs might not reflect such a white, male worldview.
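As a rough sketch of what evaluating fairness while in use could look like (the decisions, group names, and metric choice below are illustrative assumptions, not AI Now’s method): compare a model’s approval rates across groups, a simple version of the demographic-parity checks fairness researchers use.

```python
# Hypothetical model decisions (approve/deny), tagged with applicant group.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# A large gap between groups is a red flag worth investigating.
gap = approval_rate("group_a") - approval_rate("group_b")
print(f"approval-rate gap: {gap:.0%}")  # 50%
```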

When we talk about artificial intelligence, it can be easy to get carried away with alarmism. Recently, dramatic headlines circulated claiming that Facebook had to cut off an AI experiment because two bots “invented their own language.” In reality, all that happened was that the bots started to communicate in a shorthand that was not understandable to most humans; the program was canceled not because it had become dangerous, but because it had ceased to be useful. There were no nefarious plots by autonomous intelligences to conquer the earth.

But in a sense, one thing digital technology has done is make manifest the ideas of society. Social media, for example, makes visible and concrete the idea of a “public sphere,” and we’ve seen there can be both incredible upsides and downsides to this: The electric nature of open conversation also brings with it harassment and public venues for hate. For its part, AI seems to reveal the structure behind structural bias. There are complex material and ideological factors and institutions that shape our lives, and in order to counteract the prejudice contained therein, we first need to recognize that we are in fact shaped and influenced by these bigger systems, and then work against their tendency to reproduce existing power structures. What is clear, however, is that the “intelligence” of AI is the output of our own investments and biases, and it behooves us to build it in a way that recognizes that fact. AI will always reflect its input, and its input is us: our culture, our ideologies, our selves. To make the technology that will enable a more just future, those are what we have to change.