Kevin Drum is a political blogger at Mother Jones, and previously wrote for The Washington Monthly and for his own blog, CalPundit. His latest piece for the print magazine concerns the effect that intelligent machines will have on our economy in the medium-run. We spoke on the phone Monday afternoon; a lightly edited transcript follows.
Dylan Matthews: We've had technologies that save labor and increase productivity for years. What makes artificial intelligence different?
Kevin Drum: The difference is that, in the Industrial Revolution, we got big productivity increases from steam engines but there were still people required to run those machines. We had a huge increase in the amount of stuff you could make, but you needed people to design the machines, and make the machines, and use the machines.
With the digital revolution, the difference is that smart machines provide both power and intelligence. You don't need human beings for anything anymore. You don't need them for power, or for the intelligence to use the power. It puts everyone out of work eventually. Once smart machines become as smart as human beings, there simply won't be a job that a machine can't do on its own.
This assumes that there are no human labor tasks that are simply beyond the reach of computers. There are a number of philosophers and computer scientists who might argue that there are some tasks computers just can't do.
There are a couple of arguments against the idea that AI is coming soon. One is, as you say, a philosophical argument, which boils down to "However smart machines seem to get, they'll never have true human intelligence." I just don't think that matters. You can call it intelligence or something different, but that's semantics. What matters is that they can accomplish the same things humans can.
The second argument is "can we do it?" Moore's law says computing power will double every 18 months. The question is whether that's going to continue. There are some good arguments that we're running up against physical boundaries that will prevent that from happening. But I think it has at least enough life left in it to produce computers that have about the power of the human brain. If you look at software development, that follows Moore's law as well. If anything, it doubles even faster than hardware development. I think the software is going to catch up rather quickly.
That's an interesting point, that a lot of the innovation is going to be in the algorithms and techniques used for emulating human intelligence. Are we seeing that kind of innovation?
That's been happening, and it continues to happen. The evidence is all around us. It's in things that obviously aren't true AI, since we're just not there yet, but are getting close. If you look at Watson, the computer that beat the two Jeopardy champions, well, 10 years ago who would have thought that a computer could do something as sophisticated as answering a wide range of questions and getting them right?
Take a look at something like Google Translate, which isn't even the most sophisticated translator out there. It does a surprisingly serviceable job, for free, of translating from one language to another. Google's driverless car does a surprisingly good job. So the software's already making huge progress. The other thing is that the human mind is modular. You don't have a software team that has to replicate the whole human brain. What you have are teams that are replicating little bits, little modules, that, when you put them together, will produce true artificial intelligence.
The obvious economic story is that AI reduces the need for human labor, so the demand for it falls and wages fall in turn. Do you buy that?
I think that's what's going to happen. If you take a look at a science-fiction future, where you're dropped into a society where robots are doing all the work, everyone accepts that you can't rely on the same kind of free market in labor you have today. There is no labor.
The question you have is "Do you accept this if you're getting to that future very, very slowly?" You get just tiny little bits of progress, and what that does, bit by bit, is put just a few more people out of work each year.
People are being put out of work because of automation and we don't notice it, so we argue about other stuff. We argue that education levels are too low, or that we're coddling people on unemployment and we just need to cut them loose. These are all old issues that aren't ultimately about what's really happening. This is all going to happen slowly enough that we'll keep arguing until we realize that something new is going on.
Robert Solow has a famous quote, "You can see the computer age everywhere but in the productivity statistics." Are we finally seeing it there? Where's the evidence that this is rippling out?
My own guess is that we're just barely starting to see it. There are a whole bunch of macroeconomic trends that, if you look at them, sort of start to bend around the year 2000. They reached a peak, and they've been declining for the last decade or so. You can produce explanations for all of them.
One of those trends, for example, is that the percentage of people in America who are employed is going down. Some of that is the aging of the population. But there's a whole bunch of these trends. When you put them together, the best explanation, I think, is that automation is starting to put people out of work, and to make the economy more capital-intensive and less labor-intensive than it used to be.
It seems like the education debates relate to this, though. Eventually, if you're right, then AI is going to get to the point where computers can build themselves and you don't need humans at all. But until then, being able to code, knowing CS and robotics, is going to be really valuable.
I think that's exactly right, and the first people to be put out of work are people who have jobs that are fairly routine. These are the jobs the computers can take over. You're going to put those people out of work, and then you're going to put out of work people whose work is more cognitively demanding, and it's going to go up and up and up until machines can do anything a human can do, and do it without human supervision. They'll repair themselves, and manufacture themselves, and do all the work.
So who has all the money? It's whoever has the robots. And who has the robots? The people who have all the money. Today's income inequality will be peanuts compared to income inequality then.
How does this interface with politics? Radiology, say, is an area that might be easily automated, since we've already seen computers that can effectively read EKGs. But the American Medical Association is much more powerful than lobbies representing, say, factory workers. Is it possible you'd see protections for wealthy industries like that, going forward?
That's possible, and there will almost certainly be some differences in who gets put out of a job faster, but it's pretty much inevitable. You can outsource radiology one way or another. If you can outsource it to a computer, and you can demonstrate that the computer does it better than a human being, patients themselves are going to start demanding computer readings of their X-rays and their MRIs.
There are powerful groups that can delay this. There are equally powerful groups that are interested in having lower cost labor, including lower cost radiologists. The shareholders who own the hospital, they want lower cost radiology. And it's the people with money who are going to win this fight.
It seems like if you have a huge section of people who are unemployed, who don't really have resources but have a lot of spare time, then there's a possibility of really huge political mobilizations on the part of those people, like you see in countries nowadays with mass unemployment.
I think that's likely to be one of the things that happens along the way. The history of societies that suffer from mass unemployment is not a bright one. At some point you have to respond, and there's going to be a lot of resistance to responding, because of ideology, because of politics, because of pure greed, but eventually we are going to respond to this. It's going to be obvious what's happening, that people are unemployed through no fault of their own, and that we have to respond.
In the meantime, we're going to resist responding, and we're probably going to resist responding very very strongly, because rich people don't like giving up their money. We're in for a few decades of a really grim fight between the poor, who are losing jobs, and the rich, who don't want to give up their riches.
It does speak a bit to a split you see on the left between, on the one hand, people who want work to pay more, and be safer, and have dignity, and on the other hand people who think that modern work is wage slavery and needs to be abolished. The world you describe is more hospitable to the latter camp.
I'm not sure there are that many people taking the latter position today, but you're right, in the future we'll hear this more and more. If I'm right about what happens with artificial intelligence, there won't be any work, period, so there won't be dignity in work. We'll have to find dignity in doing other things. It's in the medium run, getting there, that I think we'll have problems.
What does this mean for developing countries? It seems like, if this happens before they reach Western labor standards, it'd be even more devastating.
The problems I'm talking about could be even worse in developing countries than they are here. In one sense they are less vulnerable to artificial intelligence, because labor is so much cheaper there already. On the other hand, it will take away their jobs, and it will take away their jobs at a point where they're still considerably worse off than Americans are.
At a macro level, though, it's going to play out about the same everywhere. It's going to displace labor everywhere. It doesn't matter how cheap it is, eventually machine intelligence will displace it all.
I think that's it for my questions. Anything you wanted to add?
Well, I touched on a bit what the possible solutions would be, but the core is that you're going to have to redistribute wealth, and the debate is over how to do that. It's the obvious conclusion from this stuff, that at some point you're going to have to accept that you'll have much more massive income redistribution than we do today.
And it seems like at that point it just becomes a version of the same debates over whether we should just give people cash or provide goods like education and health care.
Exactly. I used an example from Noah Smith of giving an equal equity stake to everybody when they turn 18. It's an interesting idea, and it's a little different from redistribution, but it's still redistribution in a different form.