The childish panic that has swept the policy establishment over the past few weeks over the Wikileaks revelations will soon subside. Secretary of Defense Robert Gates’s sensible remark that “[g]overnments deal with the United States because it's in their interest, not because they like us, not because they trust us, and not because they believe we can keep secrets” is worth a boatload of apocalyptic prognostications on the order of Michael Cohen of Democracy Arsenal insisting that Wikileaks has “fundamentally undermined US national security and effective US diplomacy,” or Joe Klein in Time claiming, “This entire, anarchic exercise in ‘freedom’ stands as a human disaster.”

The frenzy was unwarranted from the start. Secretary Gates could express his confidence that the long-term effects would actually be “fairly modest,” at least in part because the ‘revelations’ contained in the Wikileaks document dumps mostly confirmed things that had, at the least, long been suspected. For example, many supporters of the Israeli government’s alarmed (or alarmist, depending on your point of view) position on stopping the Iranian nuclear program, if necessary by force, have been saying for some time that it was a view shared by Saudi Arabia and the Gulf Cooperation Council. They were right. And, while it may be satisfying to have the details of Washington’s anxieties over the security of Pakistan’s nuclear weapons arsenal, China’s inability to bring North Korea to heel, or the Yemeni president’s willingness to collaborate with U.S. anti-terrorist operations in his country, the broad outlines of all three of these stories were already largely public knowledge—at least among specialists.

Does anyone seriously think the Iranians don’t know about the Saudi king lobbying Washington to bomb Natanz and the other nuclear facilities, or that the Pakistani Taliban don’t know about American moves with regard to Islamabad’s nuclear program? It would be idiotic to imagine our enemies are so badly informed that the Wikileaks information is news to them.

In reality, there was only one group that was not privy to the information released by Wikileaks: the general public. And we can’t have them properly informed, now can we? Father (or, in the case of Secretary Clinton, “mother,” I suppose) knows best. I do not often agree with Noam Chomsky, but it seems to me that he was exactly right when he said that “one of the main reasons for state secrets is so that the state can defend itself from its citizens.” But, regardless of Washington’s motives, stopping Julian Assange (which, in any case, is not the same thing as stopping Wikileaks, as we are all starting to discover) will not be a victory over terrorism, as Senator Mitch McConnell has so preposterously suggested, for the simple reason that the one group we can be sure had the information in the cables before the leaks is the terrorists.

The truth is that powerful people hate being shown up as much as, if not more than, they hate failure, and people with insider information that gives them special status hate losing their intellectual monopoly, since they know that, if they do, loss of status will not be far behind. In this sense, the back-story of Wikileaks is not that American diplomacy is threatened or that Al Qaeda has been strengthened but that American diplomats have lost face, and American policy intellectuals have been confronted by an existential threat to their priestly monopoly on inside information. Oh, the pity of it!

In fact, though, the policy establishment is absolutely correct in worrying about Wikileaks—just not for the reasons that have usually been stated. For where Wikileaks poses a serious challenge is in its application of a technological mindset that up to now had seemed both the product of and inextricably linked to the clean, enlightened liberal capitalism of the Microsofts, Googles, Apples, and Intels of this world.

Throughout the 1990s, technophiles wrote rapturously of the Internet inaugurating a new age of high-tech Jeffersonian democracy (the phrase is from Richard Barbrook and Andy Cameron’s 1995 critical essay, “The Californian Ideology”). As a leading Silicon Valley software entrepreneur, Mitch Kapor, put it in 1993, “Life in cyberspace seems to be shaping up exactly as Thomas Jefferson would have wanted: founded on the primacy of individual liberty and a commitment to pluralism, diversity, and community.”

In many ways, this thinking was extremely radical. In the future, people like Kapor, Apple’s Steve Jobs, and of course Bill Gates argued, we would have a completely new relationship to information. “Universal connectivity,” Gates wrote in Forbes in 1999, “will bring together all the information and services you need and make them available to you regardless of where you are, what you are doing, or the kind of device you are using. Call it virtual convergence, with everything you want in one place, but that place is wherever you want it to be, not just at home or in the office.”

But the future Gates and so many others believed awaited us was also profoundly post-political, or, more precisely, proceeded from a Fukuyamish assumption that the great ideological questions had been resolved. We were all liberal capitalists now. There were important questions remaining—above all, when history, which Fukuyama declared had already “ended” in the rich world, would end in the developing world as well—especially, of course, in China. But markets would bring prosperity, and the rise of the middle class would inspire a finally unstoppable demand for freedom. People from Gates to Margaret Thatcher seemed to consider this almost a law of nature, although, in retrospect, a syllogism may be closer to the mark.

And what would we do with all this freedom? Well, the Bill Gates of the 1990s thought we would all go shopping, either literally or metaphorically. As he put it at the time, “[W]e [will] find ourselves in a new world of low-friction, low-overhead capitalism, in which market information [will] be plentiful and transaction costs low. It [will] be a shopper’s heaven.” Later, as Gates’s interests shifted toward philanthropy, his view of what capitalism needed to do and, more importantly, could accomplish, broadened and deepened.

What two especially sycophantic journalists have called “philanthrocapitalism” is now the order of the day with Gates. In fairness, though hardly the unassailable paragon of virtue its myriad admirers make it out to be, the Gates Foundation has done a great deal of good, and the world would probably be less well-off without it. But the Gates vision of solving the world’s problems—AIDS, the global food crisis, education at home—is just as post-political as his pre-philanthropic vision of the world as shoppers’ paradise.

Everything has a technical fix, or, to put it slightly differently, we all agree on what we want—an end to poverty, decent education for everyone, etc.—so the thing to do is brainstorm and research the best way to get there. The idea that one’s political views—on the legitimacy of the established order, say, or property rights, or, dearer to Gates’s pocketbook if not his heart, the current global patent regime, so favorable to companies like Microsoft—might affect what one thought the right outcome to be is a thought utterly outside capitalist philanthropy’s ken.

And yet, paradoxically, philanthrocapitalists like Gates are absolutely persuaded by the idea that technology brings radical change. There is even a term for this: disruptive technology. A disruptive technology is conventionally defined as “an innovation that disrupts an existing market.” Coined in 1995 by Clayton M. Christensen of the Harvard Business School, the term was originally meant to describe business innovations that improve a product or service in ways that the market does not expect, usually either by lowering the price or redesigning for a different market or a different set of consumers. Two current examples of disruptive technology are nanotechnology, which is heralded as promising to supersede current production technology, and so-called open source software, which challenges reigning assumptions about how software should be created and sold.

At first glance, Wikileaks would seem to be far from this world of business innovation. And yet it isn’t. To the contrary, what Wikileaks does is exactly what a disruptive product does: As with nanotechnology, it supersedes the way information is made available to the general public; and, as with open source software, it challenges the idea of what the public can know and how it can know it.

In the former case, Wikileaks breaks the established transmission network of office holders and diplomats leaking some information to trusted journalists and pundits, who then transmit it to the public. And, in the latter, it insists that there is simply no such thing as proprietary information, which in the context of diplomacy means it does not acknowledge the state’s right to keep secrets. Here, the state is like Microsoft, with its closed-source technology, while Wikileaks is the open-source alternative.

And, again as with open-source software, there is no going back. Julian Assange may go to prison in Sweden, or even be extradited to the United States, and, though it is far less likely, Wikileaks itself may be shut down. But, for better or worse, the Wikileaks model is here to stay. For, as it turns out, the web is not just a place for shopping, or searching for pornographic images, or finding virtual communities of like-minded people, it is the new bloody crossroads of our politics.

Al Qaeda proved this with its virtual jihad; and then, the Chinese state demonstrated how easily the web could be used for surveillance and repression. Now, attacks against Wikileaks (presumably state-sponsored) are being countered by attacks on purported enemies of Wikileaks, from Sarah Palin to Visa and Mastercard, by online techno-anarchist groups like anonops, which recently posted a list of e-mail addresses of institutions that had either cut off Wikileaks or criticized its message.

Speaking of the PayPal online payment service, the anonops poster wrote, “With shopping coming up and people needing to pay for their online purchases, this will really put them at a halt,” and they will “regret messing with Wikileaks and Anon.” Whatever the Bill Gateses of this world may choose to imagine, ideology is alive, and well, and living in cyberspace. Hold on, Toto, we’re not on eBay anymore.

The new cyber-battlefield will exact real-world casualties. Why break a few windows and burn a few cars in an anti-globalization demonstration anymore? Even the black-clad anarchists know the glass is quickly swept up and business is back to normal in days. The only real casualties are innocent bystanders, like the three bank clerks burned to death during such a demonstration a few months ago. Even the maddest anarchist cannot think such crimes undermine capitalism. But it is not vainglory to believe that incapacitating PayPal, even briefly, causes real damage. Napoleon said that in war, the moral was to the material as three to one, and perhaps more important still is the fear such attacks inspire.

Of course, governments will strike back, presumably far harder than they have already. To those who worried about the co-optation or, as Tom Frank famously put it, the commodification of dissent: stop worrying. When an ad agency enjoined buyers of Apple Macs to “think outside the box,” or when Microsoft commercials asked, “Where do you want to go today?,” the only answer they didn’t expect was “to war.” But, with the revenge attacks by supporters of Wikileaks and the counter-attacks by governments, we are getting a small taste of the cyberwars to come.

David Rieff is the author of eight books including A Bed for the Night: Humanitarianism in Crisis.
