
This Is What It Will Look Like When Robots Take All Our Jobs

Oxford philosopher Nick Bostrom thinks we’re neglecting the biggest challenge that we as a species are likely to face this century: what to do when the machines take over. We already have artificially intelligent machines that are “super-human” in specific domains like playing chess and processing certain kinds of data, but Bostrom believes we’re not far off from creating machines whose general intelligence and learning ability exceed our own. We chatted about why humanity is in its teenage phase, whether there’s anything we can do to preempt the machines, and just how realistic Spike Jonze’s Her is. His new book, Superintelligence: Paths, Dangers, Strategies, is out this month.

Alice Robb: What kind of time frame are we looking at?

Nick Bostrom: We ran a survey among experts in the field of Artificial Intelligence. We asked them by what year they thought there would be a 50 percent chance that we would have human-level machine intelligence. The median answer was 2040 or 2050. They thought there was a 90 percent probability we would have it by 2075, which I think is a bit over-confident. There’s a big chance that we’ll have it in this century, but a lot of uncertainty as to the exact date. But what I’m interested in trying to figure out is what happens when we reach that point, whether it’s 10 years away or 60 years away. What are the dynamics that kick in once you get human-level machine intelligence?

AR: What are the greatest risks? Are you worried that one super-intelligent agent could take over?

NB: That takes up a big chunk of the worry space: that there is one artificial agent that becomes very powerful, and is therefore able to shape the future according to its goals. Those goals may not have anything to do with human goals. They may be arbitrary. If it’s, “Make as many paper clips as possible,” you get a future that consists of a lot of paper clips, but no humans.

[The other possibility is that there are many agents.] Then you get a very different set of concerns. You have, perhaps, a slower transition. … Then you have economic competition and evolutionary dynamics occurring among these digital minds competing in a digital economy.

AR: So how would this “digital economy” work, exactly?

NB: It takes decades to make a new, functional, adult human, and even then, the person starts off having forgotten everything their parents learned. It’s not easy to create a new worker in this way. But if the worker were a piece of software, then the employer could just make several of them in the course of minutes. In a lot of these models [for the expansion of artificial intelligence], you have what amounts to a population explosion of digital minds, to the point where you drive down the wages to subsistence level. The subsistence level for digital minds would be a lot lower than for biological minds. Biological humans need to have houses—we need to eat, we need to transport ourselves. Digital minds could earn, like, a penny an hour. The wage level would fall; humans could then no longer earn a wage income. It looks very questionable, in this free-for-all competitive world, that we would find a niche for our small, stupid, obsolete minds. 

AR: Those are some pretty serious downsides. What about the upsides?

NB: We have a lot of problems in the world, and if we were smarter or had smarter machines, we could solve a lot of those problems faster. The benefits could be beyond our wildest ability to imagine. There are modes of being, feeling, thinking, living, and relating that are beyond what can fit into this three-pound cheesy lump of gray matter that we call the brain. You could imagine intergalactic space civilizations with minds that live for billions of years, experiencing things we have no inkling of.

AR: If experts think this scenario is basically inevitable, why don’t people talk about it more?

NB: It’s not on the same radar as other serious issues. It’s on the entertainment or science-fiction radar, but as a first step to be able to think constructively about these things, we need to stop anthropomorphizing the risks of AI. We shouldn’t think of advanced AI systems as sort of nerdy humans.

We are powerful not because we have stronger muscles or sharper teeth than other animals but because our brains are slightly different from those of other great apes. That has enabled us to invent technologies, to build modern, complex societies, to deliberate and plan for the future. But AI may be radically superhuman in those capabilities, and it would, similarly, be very powerful relative to us.

AI presents a very difficult set of problems that we have to get right on the first try, and there are very few things that we’ve ever gotten right on the first try. … We’ve kind of been mucking up for millennia—but not irrevocably. A lot of these things require a higher level of maturity as a civilization. We are in a kind of teenage stage, where we’re getting access to strong muscles but we’re reckless and short-termist.

AR: Is there anything we could be doing differently today?

NB: I think it would be a mistake to focus too much on the specific causal implementation path that an artificial agent might use, because there are any number of these paths. And if you are super-intelligent, you might think of other paths that we can’t imagine. Just imagine gorillas trying to figure out all the ways that humans might think of to outwit the gorillas. They wouldn’t get very far.

AR: Will the world ever look like the movie Her?

NB: As a movie it was kind of entertaining, but as futurism … They had these operating systems that seemed to be smarter than normal humans, but they did nothing other than act as a kind of conversational partner. In the real world, if you had Artificial Intelligence that could perform at that level, it would be doing all the legal work, all the journalism, all the medicine. Employers would just copy these operating systems and have them do all the things that humans were doing. You’d have massive unemployment and a reallocation in the economy. That’s one big blind spot in the movie. I guess they didn’t want to distract from the story.

This interview has been edited and condensed.