
Monetize Your Dissent

An attempt to build a more empathic Facebook beyond the click of a button


Facebook users have been demanding a “dislike” button since the social network’s inception. Tens of thousands of users belong to groups like “We Need a 'Dislike' Button on Facebook” and “Facebook needs a 'dislike' button.” But tech commentators have consistently dismissed the idea—it would allow users to show disapproval of brands and ads, and it could lead to the kind of awkwardness that Facebook, with its cheery, advertiser-friendly culture, would like to avoid. So, when reporter Arun Venugopal asked me recently on the Leonard Lopate Show if Facebook would ever introduce such a button, I said no.

Oops.

In a public forum on September 15, Mark Zuckerberg acknowledged that a “dislike” button had been a frequently requested feature, and indicated that the company was working on something to address the demand. Zuckerberg’s comments were almost instantly met with an explosion of enthusiasm (from users) and wildly uninformed speculation (from analysts). One popular line of commentary has centered on worries that a “dislike” button could become a bullying tool, or that it could create an atmosphere of negativity. While calling the button a victory for more realistic thinking, Andre Spicer, a professor of organizational behavior, solemnly declared, “Living with the thumbs down will be tough. We may get upset, be disturbed and sometimes feel gloomy.”

What most of these reactions missed is that Zuckerberg never said that the company is building a “dislike” button. He specifically dismissed the idea of the “like” having an unhappy twin: “We didn’t want to just build a ‘dislike’ button because we don’t want to turn Facebook into a forum where people are voting up or down on people’s posts. That doesn’t seem like the kind of community we want to create.”

Instead, Zuckerberg talked about creating a tool that can express empathy, especially in situations of tragedy or hardship. It’s supposed to be similar to the “like” button, a “quick way to emote,” Zuckerberg explained, but it will not be about disliking—or expressing negativity at all. It will be, if Zuckerberg is to be believed, a button for responding to a natural disaster or a status update about a death in the family. The company, reportedly, has been testing it for some time, and some version may appear soon.

But Facebook’s development of an empathy button needs to be looked at in the larger context of how the company monitors and manipulates the emotions of its users. Last summer, Facebook published the results of a study of “emotional contagion,” which examined whether certain emotions can be induced in people—in this case, almost 700,000 Facebook users—and then be observed moving across the network. In essence, Facebook wanted to know if the company can make users feel happier or sadder, simply by altering the mixture of content they see, and if those emotions were “contagious,” transmissible through users’ Facebook interactions. (Receiving far less attention was another study by the same researchers, who determined “that rainfall directly influences the emotional content of status messages, and it also affects the status messages of friends in other cities who are not experiencing rainfall.”)
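In outline, the experiment's mechanics are simple enough to caricature in a few lines of code. Everything below (the function names, the scoring placeholder, the "reduced positivity" condition and its rate) is a hypothetical simplification of the published design, not the study's actual pipeline:

```python
import random

# A caricature of the "emotional contagion" design: withhold some positive
# posts from one group's News Feed, then compare the tone of what each group
# writes afterward. All names, scores, and rates here are illustrative.

def tone(post: str) -> float:
    """Placeholder score in [-1, 1]; the study counted emotional words instead."""
    return (hash(post) % 2001 - 1000) / 1000.0

def build_feed(candidate_posts, condition, omit_rate=0.3):
    """Treatment users have a fraction of their positive posts silently withheld."""
    feed = []
    for post in candidate_posts:
        if condition == "reduced_positivity" and tone(post) > 0 and random.random() < omit_rate:
            continue  # drop a positive post without telling the user
        feed.append(post)
    return feed

def average_tone(written_posts) -> float:
    """Tone of what users write *after* seeing their (possibly filtered) feed."""
    return sum(tone(p) for p in written_posts) / max(len(written_posts), 1)

# "Contagion," in this framing, is whether the treatment group's own writing
# drifts negative relative to the control group's:
# effect = average_tone(treatment_posts) - average_tone(control_posts)
```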

For Facebook’s data scientists, the answer was yes on both counts. But the study’s conclusions were highly disputed, and the larger controversy concerned issues of consent, experimentation, and the opaque ways in which Facebook’s algorithms filter information and influence our lives, perhaps even guiding us toward desired behaviors. After all, if Facebook could induce a little more positivity in its users—believing this would make us more pliable, more amenable to brand messaging—why wouldn’t it do so? We may think that we are immune to these kinds of subtle influences, but Silicon Valley behaviorists—not to mention every advertising firm in the country—would disagree.

The “like” button has become a popular tool for judging user response, as well as a dominant metric in web advertising, but it’s only a single click in a grand architecture of emotional surveillance and data analysis. In April 2013, Facebook introduced the ability to tag feelings, activities, brands, and businesses in status updates; these tags also came with emoji-style icons that helped give them the feel of text messages shared between friends. Perhaps this was by design, as these tags served an important purpose: they made status updates more machine-readable, giving Facebook’s advertising algorithms further insight into what people were posting about. Someone who tags that they’re sad and eating ice cream immediately signals a certain emotional state—and a potential susceptibility to targeted advertising.
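To make the machine-readability point concrete, here is a minimal sketch with hypothetical field names and an invented targeting rule; nothing below reflects Facebook's actual data model or ad systems:

```python
# Free-text status: software has to infer the writer's mood from prose,
# which is hard and error-prone.
untagged_post = "ugh... ice cream for dinner again"

# Tagged status: the feeling and activity arrive as explicit fields, so no
# language understanding is needed to read the writer's emotional state.
# (The field names here are invented for illustration.)
tagged_post = {
    "text": "ice cream for dinner again",
    "feeling": "sad",
    "activity": "eating ice cream",
}

def ad_signal(post):
    """Turn an explicitly tagged post into a crude, made-up targeting signal."""
    if isinstance(post, dict) and post.get("feeling") == "sad":
        return ["comfort food", "streaming subscriptions", "self-care products"]
    return []

print(ad_signal(tagged_post))    # ['comfort food', 'streaming subscriptions', ...]
print(ad_signal(untagged_post))  # [] -- plain text gives the algorithm nothing to grab
```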

While the “like” button can be seen as limiting Facebook’s ability to know how people respond to bad news, these new tags gave users the chance to offer a clearer sense of their emotions, both to their network of friends and to Facebook’s network of data. As a result, the tags also allowed Facebook to better monetize negativity—the sad, the mad, the generally unlikable. For years, the Facebook monoculture has struggled to process emotions more complex than the “like.” The company’s speech policies reflect a certain prudishness, as indicated by its recent grappling with nudity rules after removing photos of breast-feeding mothers. The empathy button, if it comes to pass, should be seen in the context of the social network’s attempt to give users more ways to express their emotions while also integrating those tools into a larger architecture of data collection and monitoring.

At the same time, Facebook, like its competitors, has been investing in machine learning and natural language processing. For the moment, computers have a hard time understanding semantic text—language as it’s written by human beings, who are prone to using humor, irony, metaphors, slang, typos, idioms, and much else that defies literal reading. Should these technologies be perfected, Facebook’s software could understand users’ status updates as they’re posted, no matter their content.
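A toy example shows how far literal reading can miss. The keyword scorer below is my own construction, far cruder than anything Facebook deploys, but it illustrates why irony and slang defeat software that takes words at face value:

```python
# A deliberately naive sentiment scorer: count "positive" and "negative"
# words literally. This is an illustrative toy, not any real system's method.
POSITIVE = {"great", "love", "happy", "wonderful"}
NEGATIVE = {"hate", "sad", "awful", "terrible"}

def literal_sentiment(status: str) -> int:
    words = status.lower().replace(",", " ").replace(".", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Literal reading handles plain statements well enough...
print(literal_sentiment("I love this, what a wonderful day"))           # 2 (positive)

# ...but irony defeats it: a complaint reads as glowing praise.
print(literal_sentiment("Oh great, another wonderful Monday commute"))  # 2 (also "positive")
```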

Until then, Facebook will confront a peculiar challenge. As Zuckerberg acknowledged, the company wants to give users more opportunities for expression. The “like” button—when it’s not simply clicked as a Pavlovian impulse, out of obligation, or because it’s the only option—can feel restrictive and often inappropriate for a given context. But like any autocratic ruler, Zuckerberg knows that introducing new speech rights might lead to people using those liberties in ways he doesn’t intend. One day they might show empathy for enemies of the king.

Still, Facebook users who have been lobbying for more creative forms of expression, or at least more options, should be heartened. The company has heard your pleas and is designing a new form of emotional interaction that will have much in common with a German sports car: it will be impeccably engineered and thoroughly crash-tested. This new interaction may be controversial for a while, but time will pass and people will soon forget they ever protested it. And one day, if we’re lucky, Zuckerberg will give us another button to play with.