Consumers may be used to shedding personal data as they wander the internet or scroll through social media. It’s common knowledge that Google and Amazon track online searches, Facebook accesses the cameras and microphones of its users, and even weather apps and games, such as Words With Friends and FarmVille, are in the business of harvesting user data. But even so, the notion that personal data could be gathered from an application like Zoom, where we now consult with doctors and catch up with loved ones, was unsettling.
Arvind Narayanan, who leads Princeton’s Web Transparency and Accountability Project, had heard chatter about Zoom for some time, but it was only when the pandemic made the app de rigueur that he realized most individuals, including himself, “no longer had a meaningful choice to opt out,” he said. “That’s when it hit me that the privacy and security problems that I’d been hearing about were a big deal.”
Throughout March, “Zoom-bombers” had been breaking into private conversations, scrawling racial slurs or posting pornography—a practice that soon got the FBI’s attention. By early April, senators had been advised against using the software, and Google had banned its employees from running Zoom on company devices. To Shoshana Zuboff, the author of The Age of Surveillance Capitalism, Google’s ban was particularly telling, since Google was in exactly the right position to “know just how pernicious these systems are,” she said in an interview.
Zoom is far from the only app to benefit from the pandemic. Covid-19 has brought a wave of heightened digital dependence, as yoga classes, grocery shopping, and church services—to say nothing of office work and school—migrate from the real world (or, as tech geeks call it, “meatspace”) to Zoom, Houseparty, Instagram Live, and any number of other apps that have gained legions of new users.
This transition brings a welter of online privacy and security issues. As Zuboff has written, there are hidden trade-offs in digital life. We are accustomed to sacrificing privacy to use certain devices and programs, and by forcing millions of users onto new online services, the current emergency could send that dynamic into destructive new territory. But it could also start a conversation about whether relinquishing that privacy is really necessary.
It was a surprising response from a Silicon Valley CEO. Most of Yuan’s peers in the tech world, when confronted with privacy issues, tend to deflect, downplay, and lay the responsibility upon users who, they say, are free to choose their own privacy settings. To Zuboff, the fact that a company with a veritable monopoly still felt enough pressure to lurch toward transparency points to a greater cultural shift. The conversation about the harms of surveillance capitalism, she said, “its unwanted invasiveness, its challenges not only to privacy but to our sense of autonomy,” is not going away. She is fond of saying that “privacy is not private”—noting that the effort to fix digital security is a social project, akin to the overhauling of labor practices during the Progressive Era. You can’t fix child labor factory by factory, she said, and with privacy issues, “we can’t just go application by application.”
For his part, Searls doesn’t have predictions about how the pandemic will affect the tech industry’s reliance on user surveillance, but he doesn’t expect it to continue indefinitely. “It’s simply too absurd to last,” he told me. He has written before, and still believes, that “Madison Avenue fell asleep [and] direct response marketing ate its brain.” Ultimately, digital security must be a matter of policy, not corporate beneficence. But users can hold a tech company’s feet to the fire, paying attention when shady practices are exposed. And that bodes well for society’s ability to establish the legal guardrails necessary to, as Zuboff puts it, “live a good life in the digital age.”