Krugman and Klein on the Future

Ezra Klein interviews Paul Krugman on prospects for the future. It seems PK and EK are both SF fans, so they take a prophetic look. Topics addressed include global pandemics, artificial intelligence, and economic inequality.

I find them least convincing on the subject of AI.

Ezra Klein: A fear I hear about a lot lately is the idea that we’ll build a self-improving artificial intelligence that will ultimately destroy us.

Paul Krugman: The history of artificial intelligence is that it's always ten years ahead, and that's been true for about 50 years.

Ezra Klein: But let’s assume it does emerge. A lot of smart people right now seem terrified by it. You've got Elon Musk tweeting, "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." Google's Larry Page is reading Nick Bostrom’s new book Superintelligence. I wonder, reading this stuff, whether people are overestimating the value of analytical intelligence. It’s just never been my experience that the higher you go up the IQ scale, the better people are at achieving their goals.

Our intelligence is really lashed to a lot of things that aren’t about intelligence, like endless generations of social competition in the evolutionary fight for the best mates. I don’t even know how to think about what a genuinely new, artificial intelligence would believe is important and what it would find interesting. It often seems to me that one of the reasons people get so afraid of AI is that you have people who themselves are really bought into intelligence as being the most important of all traits, and they underestimate the importance of other motivations and aptitudes. But it seems as likely as not that a superintelligence would be completely hopeless at anything beyond the analysis of really abstract intellectual problems.

Paul Krugman: Yeah, or one thing we might find out, if we produce something that is vastly analytically superior, is that it ends up going all solipsistic and spending all its time solving extremely difficult and pointless math problems. We just don't know. I feel like I was suckered again into getting all excited about self-driving cars, and so on, and now I hear it's actually a lot further from really happening than we thought. Producing artificial intelligence that can cope with the real world is still a much harder problem than people realize.

The problem is that you don't need to produce an AI smarter than Terry Tao for it to be dangerous. Even now, our robot developers are producing semi-autonomous devices with intelligence roughly equivalent to that of some of the less clever insects. How likely is it that humans could compete, in an evolutionary sense, with truck-sized insect-like creatures equipped with jet engines, brains that think a million times faster than ours (or an insect's), and modern weaponry? Such things are already here, or nearly so, and our main control over them right now is that we still manage their reproduction.

It's also true that their smarter and more sedentary cousins have already proven better than humans at many tasks formerly done by highly trained professionals. Robots won't need to bother navigating our complex social rules if they simply replace us.
