Machine Morality

Colin Allen tackles the issue of machine morality in the New York Times: The Future of Moral Machines. We don't have to wait until machines get smarter than people to worry about this, he argues - in fact, he seems to be a bit skeptical that they ever will get smarter. He's right on the first point and wrong on the second, I think. As to the second:

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior.

A total crock, I think. Computers are just as good at modeling dynamical systems, networks, and the rest as they are at operating on abstract symbols with fixed meanings. Our understanding of all of the above is in fact predicated on having reduced them to abstract symbols - to physics, in other words.

Back to his main point - robots, even if they don't have any moral programs, are already operating in domains with complex moral dimensions. As the author points out, Isaac Asimov explored these dimensions in his robot stories, with robots constrained by his laws of robotics. Our robots are not so constrained, but they are gaining greater autonomy and are increasingly trusted with matters of life and death - from killing those we call our enemies to driving our cars and doing our surgeries. Robots make buy and sell decisions in the stock market in a millionth of a second. Nobody can check their work in advance, and the consequences may be calamitous - some stock market crashes have already been blamed on programs run amok.

It's pretty obvious that plenty of other dimensions of our society are going to be trusted to robot deciders.

At a more mundane level, consider just the robot red light/speeding cameras that have proliferated. Here the moral issue is the morality of letting a machine give tickets when most of the income goes to the private entrepreneurs who sponsor the cameras. Speeding is a simple case, perhaps, but when private enterprise profits from the identification of crime, there is plenty of scope for justice to be subverted in the name of profit.

Does this talk of artificial moral agents overreach, contributing to our own dehumanization, to the reduction of human autonomy, and to lowered barriers to warfare? If so, does it grease the slope to a horrendous, dystopian future? I am sensitive to the worries, but optimistic enough to think that this kind of techno-pessimism has, over the centuries, been oversold. Luddites have always come to seem quaint, except when they were dangerous. The challenge for philosophers and engineers alike is to figure out what should and can reasonably be done in the middle space that contains somewhat autonomous, partly ethically-sensitive machines. Some may think the exploration of this space is too dangerous to allow. Prohibitionists may succeed in some areas — robot arms control, anyone? — but they will not, I believe, be able to contain the spread of increasingly autonomous robots into homes, eldercare, and public spaces, not to mention the virtual spaces in which much software already operates without a human in the loop. We want machines that do chores and errands without our having to monitor them continuously. Retailers and banks depend on software controlling all manner of operations, from credit card purchases to inventory control, freeing humans to do other things that we don’t yet know how to construct machines to do.

We will either prescribe some moral principles for our machines, or pay the consequences.
