
Can You Really Watch a Debate and Tweet It at the Same Time?

On September 26, 1960, nearly 70 million people tuned in to the first televised presidential debate in the United States. The candidates were Richard Nixon and John F. Kennedy. The winner was the senator from Massachusetts. He was the winner, that is, according to those who watched the debate on television. The people who listened to the debate on the radio thought Nixon had won.

A lot of people have dismissed this story as myth; the sociologist Michael Schudson has called it “telemythology.” The polling company that made this claim right after the debate used a non-representative sample of radio listeners and failed to filter out such confounding variables as party preference. In 2003, however, a political scientist named James Druckman somehow managed to find 171 students wholly ignorant of the Kennedy-Nixon debate, had half of them listen to it and half of them watch it, and showed that, indeed, significantly more listeners than viewers thought Nixon had won. So perhaps it’s safe to say that 1960 was the year we learned that looks and demeanor, as seen on TV, were just as important as speech when it came to winning over voters.

But the move from radio to television did something else too. It upped the cognitive demand on voters. Before television, all they had to do was listen. Afterward, they had to listen and look. They had to multitask.

Fast-forward to debate-watching today. Forget listening and looking. Now we comment, tweet, update Facebook, respond to others’ updates, perhaps even live-blog the discussion. According to the Financial Times, seven million people, or “dual viewers,” as the campaigns now call them, will watch the debate while also discussing it online. We have taken multitasking up several levels. And while the repercussions of that change may seem fairly obvious, they are anything but.


Intuitively, you might imagine that if we multitasked more, we’d notice less. Attention is a finite resource. Paying more attention to one thing means paying less to another. And that’s true in certain cases. In a phenomenon psychologists call inattentional blindness, if we focus hard on a task, we’re less likely to notice an object right before our eyes.

In a famous experiment, psychologists Daniel Simons and Christopher Chabris dramatically demonstrated how inattentional blindness works. They showed several hundred observers a video of a basketball game and asked them to count the number of passes made by one of the two teams. In the middle of the video, a person in a gorilla suit walked onto the court, stopped, beat his chest, and walked off. Simons and Chabris wanted to know whether their subjects could accurately count the passes and still notice the entrance of the gorilla-suited man. As it turns out, some of them couldn’t. Forty-six percent of the people in the study did not see him at all, even though they looked right at him for a good long time. Moreover, when the psychologists asked participants not only to count passes but distinguish between aerial and bounce passes, the percentage of those who failed to take note of the gorilla-suited man rose to more than half—55 percent.

How can we explain such extreme obliviousness? To notice something in our environment, we have to process it by way of our visual cortex. Visual input alone won’t suffice; it has to reach the threshold of conscious awareness. The more we tax our conscious processing system, however, the more likely we are to “see” without processing. Brain imaging studies show that the greater our perceptual load (that is, the more information a task requires us to take in and process), the lower the activity in our visual cortex. It simply stops responding to stimuli that are irrelevant to the goal.

But multitasking turns out to have the opposite effect. With multitasking, we take those same finite cognitive resources and spread them among multiple inputs. We are just as busy, cognitively, but the two types of busyness differ radically: instead of locking onto a single goal and screening everything else out, we let a little of everything in.

Consider this 2009 study by a group of Stanford psychologists on the effects of media multitasking on the ability to pay attention. During a series of tasks, participants had to make judgments about objects and letters. Did a rectangle change orientation between two viewings? Did a sequence of two letters correspond to a sequence they’d been told to look out for? Meanwhile, the subjects had to ignore irrelevant stimuli (rectangles of the wrong color, for instance) that sometimes popped up mid-task.

The result? Participants who described themselves as the heaviest media multitaskers, the top one-sixth of the sample as measured by their reported weekly use of twelve different types of media (including print media, television, music, other audio, YouTube, instant messaging, text messaging, and web surfing), performed significantly worse than the lightest multitaskers, the bottom one-sixth of the group. The heavier multitaskers were less able to filter out irrelevant stimuli, responded more slowly and less accurately whenever distractions were presented to them, and were worse at switching among tasks, the very thing you would think habitual multitaskers would be good at doing. During a task that involved classifying numbers, for instance, they lagged .426 of a second behind their counterparts when suddenly asked to classify letters, and .259 of a second behind when classifying numbers, which was the job they’d been told to do in the first place.

Does this mean that the heavy media multitaskers couldn’t pay attention? Not exactly. It means they paid partial attention to a lot and complete attention to less. The study’s authors call this “breadth-based media consumption behavior.” Another technical term is “increased cognitive interference.” According to the researchers, the difference is one of top-down versus bottom-up processing. Light multitaskers have top-down attentional control. Heavy multitaskers have a bottom-up response to environmental stimuli. They take it all in, and only then apportion their attention to it. In this, they may be like people with attention-deficit disorder, who would have been more likely to see the guy in the gorilla suit to begin with, because they would have been less able to count the passes.

When we multitask during a presidential debate, we notice the seemingly superficial stuff: the candidates’ body language, their expressions, the dynamics of their interactions. What we fail to do is follow the subtleties of their exchanges. The irony here is that it’s the most, not the least, politically engaged viewers who are likeliest to multitask this way: the pundits, the commentators, the bloggers, the columnists. Because multitaskers shift between activities more slowly, and because they get sidetracked more easily, they’ll watch Obama’s downcast gaze or Romney’s widened eyes and miss the gist of a back-and-forth, especially if they’re also trying to hold a quip in their short-term memories long enough to get it out to their Twitter followers.

And yet I wouldn’t say that this tradeoff is necessarily a bad one. When it comes to choosing a president, noticing the guy in the gorilla suit may have its merits. For over half a century, psychologist Paul Ekman has studied the meaning of facial expressions and physical deportment (posture, eye movements, the tilt of a head), as well as our ability to infer intention and emotion from them. What Ekman has found, time and again, is that minuscule movements and shifts in expression can predict such seemingly unpredictable events as whether a couple will divorce. These cues can also help us discern whether or not someone is telling the truth. Nonverbal signs allow us to distinguish between sincerity and deception, dedication and superficiality, discomfort and ease: all qualities with clear political implications. After all, the television watchers of 1960 were arguably right about Tricky Dick.

Maria Konnikova is a doctoral candidate in psychology at Columbia University. Her book Mastermind: How to Think Like Sherlock Holmes will be published by Viking this January.