On March 27, David “Doc” Searls, a technology writer and veteran of the ad industry, posted an entry on his blog about the newly ubiquitous videoconferencing program Zoom, calling it “creepily chummy” with the shadowy advertisers who electronically eavesdrop on users. The post was shared on Hacker News, the message board that serves as a digital town square for Silicon Valley habitués, and readers took notice. His blog, which usually got about 50 readers a day, was soon getting more than 16,000, and he kept up the pressure on Zoom, writing that its privacy policy left all sorts of room for the company to harvest personal data for ad tracking. “Here’s the thing,” he wrote. “Zoom doesn’t need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data.”
Consumers may be used to shedding personal data as they wander the internet or scroll through social media. It’s common knowledge that Google and Amazon track online searches, Facebook accesses the cameras and microphones of its users, and even weather apps and games, such as Words With Friends and FarmVille, are in the business of harvesting user data. But even so, the notion that personal data could be gathered from an application like Zoom, where we now consult with doctors and catch up with loved ones, was unsettling.
Arvind Narayanan, who leads Princeton’s Web Transparency and Accountability Project, had heard chatter about Zoom for some time, but it was only when the pandemic made the app de rigueur that he realized most individuals, including himself, “no longer had a meaningful choice to opt out,” he said. “That’s when it hit me that the privacy and security problems that I’d been hearing about were a big deal.”
Throughout March, “Zoom-bombers” had been breaking into private conversations, scrawling racial slurs or posting pornography—a practice that soon got the FBI’s attention. By early April, senators had been advised against using the software, and Google had banned its employees from running Zoom on company devices. To Shoshana Zuboff, the author of The Age of Surveillance Capitalism, Google’s ban was particularly telling, since Google was in exactly the right position to “know just how pernicious these systems are,” she said in an interview.
Zoom is far from the only app to benefit from the pandemic. Covid-19 has brought with it a wave of heightened digital dependence, as yoga classes, grocery shopping, and church services (to say nothing of office work and school) migrate from the real world, or, as tech geeks call it, “meatspace,” to Zoom, Houseparty, Instagram Live, and any number of other apps that have garnered a host of new users in recent months.
This transition brings a welter of online privacy and security issues. As Zuboff has written, there are hidden trade-offs in digital life. We are accustomed to sacrificing privacy to use certain devices and programs, and by forcing millions of users onto new online services, the current emergency could send that dynamic into destructive new territory. But it could also start a conversation about whether relinquishing that privacy is really necessary.
In a heartening twist, in late March, Zoom’s CEO, Eric Yuan, reached out to Searls for a talk. During their conversation, Searls told me, Yuan seemed “blindsided” by how quickly Zoom had exploded. (The company had gone from having 10 million users in December to 200 million in March.) Zoom charges businesses between $14.99 and $19.99 per month per host—for large firms, that can put the cost of a monthly subscription in the thousands; it doesn’t need to be in the business of spying on users at all. And yet it had left the door open to just those practices. An “attention-tracking” feature let the host of a meeting—say, a teacher or work supervisor—see if you clicked away to another window; and, as Searls pointed out, Zoom’s privacy policy had used the same standard, “weaselly” language that apps adopt if they want to gather data.
On April 8, Zoom held its first “Ask Eric Anything” session. Appearing in front of a virtual background of the Golden Gate Bridge, Yuan began by acknowledging that there had been some “missteps.” The company’s business had not just increased, he said; it had changed. Founded in 2011 and launched in 2013, Zoom had been designed for business meetings and webinars—not telemedicine and kindergarten. Yuan said that the “attention-tracking” feature, for example, was appropriate in some business contexts, but for many new consumers, it presented a privacy conflict. “We disabled it entirely,” he said. The company also announced a number of additional changes—from fixing bugs to overhauling its privacy policy and creating a special set of privacy rules for K-12 users. It also ramped up its “bug bounty” program, bringing on a slew of cryptographers to identify any security weaknesses hiding in its code.
It was a surprising response from a Silicon Valley CEO. Most of Yuan’s peers in the tech world, when confronted with privacy issues, tend to deflect, downplay, and lay the responsibility upon users who, they say, are free to choose their own privacy settings. To Zuboff, the fact that a company with a veritable monopoly still felt enough pressure to lurch toward transparency points to a greater cultural shift. The conversation about the harms of surveillance capitalism, she said, “its unwanted invasiveness, its challenges not only to privacy but to our sense of autonomy,” is not going away. She is fond of saying that “privacy is not private”—noting that the effort to fix digital security is a social project, akin to the overhauling of labor practices during the Progressive Era. You can’t fix child labor factory by factory, she said, and with privacy issues, “we can’t just go application by application.”
For his part, Searls doesn’t have predictions about how the pandemic will affect the tech industry’s reliance on user surveillance, but he doesn’t expect it to continue indefinitely. “It’s simply too absurd to last,” he told me. He has written before, and still believes, that “Madison Avenue fell asleep [and] direct response marketing ate its brain.” Ultimately, digital security must be a matter of policy, not corporate beneficence. But users can hold a tech company’s feet to the fire, tuning in when shady practices get exposed. And that bodes well for society’s ability to establish the legal guardrails necessary to, as Zuboff puts it, “live a good life in the digital age.”