Siri, You're Messing Up a Generation of Children

One of the unexpected pleasures of modern parenthood is eavesdropping on your ten-year-old as she conducts existential conversations with an iPhone. “Who are you, Siri?” “What is the meaning of life?” Pride becomes bemusement, though, as the questions degenerate into abuse. “Siri, you’re stupid!” Siri’s unruffled response—“I’m sorry you feel that way”—provokes “Siri, you’re fired!”

I don’t think of my daughter as petulant. Friends tell me they’ve watched their children go through the same love, then hate, for digital personal assistants. Siri’s repertoire of bon mots is limited, and she can be slow to understand seemingly straightforward commands, such as, “Send e-mail to Hannah.” (“Uh oh, something’s gone wrong.”) Worse, from a child’s point of view, she rebuffs stabs at intimacy: Ask her if she loves you, and after deflecting the question a few times (“Awk-ward,” “Do I what?”) she admits: “I’m not capable of love.” Earlier this year, a mother wrote to Philip Galanes, the “Social Q’s” columnist for The New York Times, asking him what to do when her ten-year-old son called Siri a “stupid idiot.” Stop him, said Galanes; the vituperation of virtual pals amounts to a “dry run” for hurling insults at people. His answer struck me as clueless: Children yell at toys all the time, whether talking or dumb. It’s how they work through their aggression.

Siri will get smarter, though, and more companionable, because conversational agents are almost certain to become the user interface of the future. They’re already close to ubiquitous. Google has had its own digital personal assistant, Google Voice Search, since 2008. Siri will soon be available in Ford, Toyota, and General Motors cars. As this magazine goes to press, Microsoft is unveiling its own version of Siri, code-named Cortana (the brilliant, babelicious hologram in Microsoft’s Halo video game). Voice activation is the easiest method of controlling the smart devices—refrigerators, toilets, lights, elevators, robotic servants—that will soon populate our environment. All the more reason, then, to understand why children can’t stop trying to make friends with these voices. Think of our children as less inhibited avatars of ourselves. It is through them that we’ll learn what it will be like to live in a world crowded with “friends” like Siri.

The wonderment is that Siri has any emotional pull at all, given her many limitations. Some of her appeal can be chalked up to novelty. But she has another, more fundamental attraction: her voice. Voice is a more visceral medium than text. A child first comes to know his mother through her voice, which he recognizes as distinctively hers while still in the womb. Moreover, the disembodied voice unleashes fantasies and projections that the embodied voice somehow keeps in check. That’s why Freud sat psychoanalysts behind their patients. It’s also why phone sex can be so intense.

The literary critic Ruth Franklin, whose children were also entranced by, then peeved at, Siri, suggested to me that maybe kids get mad at her because she fails to meet “the maternal expectations they associate with women.” That sounds right, although, of course, adults have these expectations, too. The current generation of iPhones allows you to set Siri to male as well as female, but the point is that voices communicate gender, age, authority or the lack thereof—primal social cues that we can’t help but process as markers of a real personality.

Our minds respond to speech as if it were human, no matter what device it comes out of. Evolutionary theorists point out that, during the 200,000 years or so in which Homo sapiens have been chatting with an “other,” the only other beings who could chat were also human; we never needed to distinguish the speech of humans from that of not-quite-humans, and we still can’t do so without mental effort. (Processing speech, as it happens, draws on more parts of the brain than any other mental function.) Manufactured speech tricks us into reacting as if it were real, if only for a moment or two.

Children today will be the first to grow up in constant interaction with these artificially more or less intelligent entities. So what will they make of them? What social category will they slot them into? I put that question to Peter Kahn, a developmental psychologist who studies child-robot interactions at the University of Washington. In his lab, Kahn analyzes how children relate to cumbersome robots whose unmistakably electronic voices express very human emotions. I watched a videotape of one of Kahn’s experiments, in which a teenaged boy played a game of “I Spy” with a robot named Robovie. First, Robovie “thought” of an object in the room and the boy had to guess what it was. Then it was Robovie’s turn. The boy tugged on his hair and said, “This object is green.” Robovie slowly turned its bulging eyes and clunky head and entire metallic body to scan the room, but just as it was about to make a guess, a man emerged and announced that Robovie had to go in the closet. (This, not the game, was the point of the exercise.) “That’s not fair,” said Robovie, in its soft, childish, faintly reverberating voice. “I wasn’t given enough chances to. Guess the object. I should be able to finish. This round of the game.” “Come on, Robovie,” the man said brusquely. “You’re just a robot.” “Sorry, Robovie,” said the boy, who looked uncomfortable. “It hurts my feelings that,” said Robovie, “you would want. To put me in. The closet. Everyone else. Is out here.”

Afterward, Kahn asked the children whether they thought the machine had been treated unjustly. Most thought it had. Moreover, most believed that Robovie was intelligent and had feelings. They knew that they were playing with a robot, but nonetheless experienced Robovie as something like a person. Kahn speculates that “we’re creating a new category of being,” the “personified non-animal semi-conscious half-agent.” Or, as one child involved in his experiment said of Robovie, “He’s like, he’s half living, half not.”

Sherry Turkle, a psychologist at MIT who has been studying technology and children for several decades, worries that they’ll be too willing to settle for the reduced emotional sustenance to be had from these non-animal half-agents. In her recent book, Alone Together: Why We Expect More from Technology and Less from Each Other, she describes watching children’s toys go from being “sort of alive” (like the Tamagotchis popular two decades ago) to being “alive enough.” In other words, robotic pets, friends, teachers, babysitters, even therapists—all already in production or development—will do when the real thing isn’t available, as it so often isn’t in our time- and care-deprived world. (And don’t think that robotic caregivers will be given only to children. Robotic baby harp seals have already begun serving as companions for the elderly.) Turkle believes that today’s socially precocious technologies are training all of us, regardless of age, to accept “the performance of connection” in lieu of connection itself.

But what will these simulations of fellow-feeling mean for the psychological and moral development of children? “I think we’re going to be unpleasantly surprised,” Turkle told me. One risk is that they’ll turn into selfish monsters. “Imagine the following future situation,” writes Kahn in his paper on the “I Spy” experiment. “A humanoid robot, like Robovie, helps look after your 8-year-old son after school every day. ... He considers the robot his friend, maybe one of his best friends. Do you want this robot to do everything your child tells it to do? ... If we design robots to do everything a child demands, does that put into motion a master-servant relationship?” To be sure, the robot could be programmed to say no to the child. But as parents understand all too well, the key to getting a child to accept authority is knowing when to say no and when to say yes, and you wonder how a robot can be taught to know the difference. 

Toward the end of his interviews, Kahn asked the children about Robovie’s moral status. They felt bad for the robot, they told him, but weren’t willing to grant it its freedom: they were OK with its being bought or sold. Nor did they think it should have the right to vote or to be paid for its labor. To the children, Robovie was “slave-like,” Kahn told me. Anyone who has read about life in slaveholding societies knows how coarsening it can be to grow up among others defined as almost but not quite equal.

Moreover, thinking of these “friends” and “mentors” as subordinates may obscure the fact that many of them will effectively serve as spies. Most children probably don’t realize—and might not care—that every question they ask Siri is relayed back to servers in Apple’s cloud for analysis and kept there for two years. For the first six months, voice records are tagged with a number; after that, the number is stripped from the recordings. But that doesn’t actually anonymize the data. According to Nicole Ozer, who keeps tabs on technology and privacy for the ACLU of Northern California, the recordings can still be traced to our smartphones via a “unique device identifier”; the data also contains geolocation information. As for our children’s inquiries and impertinences, Apple’s privacy policy says that Apple does not “knowingly collect personal information” on children, although “knowingly” is not defined. But that’s almost beside the point. Friends don’t collect data on friends, wittingly or not.

Judith Shulevitz is a senior editor at The New Republic.

Correction: This piece originally attributed a privacy policy to Siri. In fact, it is Apple’s company-wide policy. Also, the article mistakenly said that the word “personal” was undefined in that policy.