Ceding Control to Machines That Can't Think

One of the more interesting contributions to this year's Edge question on machines that think comes from philosopher Daniel C. Dennett. Don't worry about the machines that can think just yet, he says; worry about ceding control to those that can't:

I think, on the contrary, that these alarm calls distract us from a more pressing problem, an impending disaster that won't need any help from Moore's Law or further breakthroughs in theory to reach its much closer tipping point: after centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we are on the verge of abdicating this control to artificial agents that can't think, prematurely putting civilization on auto-pilot.

...

The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.

He has some examples, but my eye was caught by this Washington Post story about the Russian spy ring that just got busted. It seems that their priority targets included the:

“destabilization of the markets” and automated trading algorithms — “trading robots.”

As the story points out:

“The acceleration of Wall Street cannot be separated from the automation of Wall Street,” wrote Mother Jones’s Nick Baumann. “Since the dawn of the computer age, humans have worried about sophisticated artificial intelligence … seizing control. But traders, in their quest for that million-dollar millisecond, have willingly handed over the reins. Although humans still run the banks and write the code, algorithms now make millions of moment-to-moment calls in the global markets.”

For evidence of what can go wrong when one of these bots goes crazy, look no further than Aug. 1, 2012. That was when a mid-size trading company named Knight Capital Group lost nearly $10 million per minute over the course of 45 minutes, for a total of $440 million. The managers said it was a computer glitch, a misfiring algorithm, a complex computer program gone rogue.

“The company said the problems happened because of new trading software that had been installed,” the New York Times reported. “The event was the latest to draw attention to the potentially destabilizing effect of the computerized trading that has increasingly dominated the nation’s stock markets.”
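To make that "misfiring algorithm" a little more concrete, here is a minimal, purely hypothetical sketch; it has no relation to Knight Capital's actual software, and every number in it is made up for illustration. It shows how an order loop with nothing checking accumulated losses can bleed money at roughly the rate the story describes, and how a single risk check changes the outcome.

```python
# Purely hypothetical sketch of a runaway order loop (not Knight Capital's
# actual code): unwanted orders keep going out because nothing checks the
# accumulated loss, so there is no kill switch to stop the bleeding.
import random

MAX_DAILY_LOSS = 50e6  # the kind of risk limit a sane system would enforce


def rogue_order_loop(minutes=45, orders_per_minute=1_000, use_kill_switch=False):
    """Accumulate losses from orders that keep crossing the spread at bad prices."""
    total_loss = 0.0
    for _ in range(minutes):
        for _ in range(orders_per_minute):
            # Made-up loss per unwanted order; only the shape of the
            # failure matters here, not the numbers.
            total_loss += random.uniform(8_000, 12_000)
        if use_kill_switch and total_loss > MAX_DAILY_LOSS:
            break  # halt trading once losses exceed the risk limit
    return total_loss


if __name__ == "__main__":
    random.seed(1)
    print(f"Without a kill switch: ${rogue_order_loop() / 1e6:.0f} million lost")
    print(f"With a kill switch:    ${rogue_order_loop(use_kill_switch=True) / 1e6:.0f} million lost")
```

The point of the toy numbers is only that, at roughly $10 million a minute, the difference between a glitch and a catastrophe can come down to one missing check that nobody thought the auto-pilot needed.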

Of course, the idea of rogue trading programs ravaging the markets is really the same story of unintended consequences as Skynet.

I think, though, that the problem is not so much that the machines don't think, as that they don't think like us.
