Ruminations

Blog dedicated primarily to randomly selected news items; comments reflecting personal perceptions

Sunday, November 01, 2015

The Moral Self-Driving Vehicle

"[Carmakers need to] adopt moral algorithms that align with human moral attitudes."
"Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today."
"[Participants in a study survey] were not as confident that autonomous vehicles would be programmed that way [to minimize the accident death toll] in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves."
MIT/University of Oregon/Toulouse School of Economics study

"Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?"
Jean-Francois Bonnefon, Toulouse School of Economics, France (study co-author) 

"There are precedents for it [banning people from driving themselves]. This [building our own houses by hand] used to be something we all did for ourselves with no government oversight 150 years ago. We've taken that out of individuals' hands because we viewed there were beneficial consequences of taking it out of individuals' hands. That may well happen for cars."
Political scientist Ken Shotts, Stanford University

Manufacturers of self-driving cars believe they have glimpsed the highways of the future, where vehicles are programmed to drive themselves and former drivers are simply passengers who arrive at a pre-programmed destination. The 'passengers' control only where they will end up; the actual driving portion of the equation will be beyond their control. It will be the car, computerized to provide the service, that will be in command.

And it is the car's programming that is of huge interest here. Driving along a well-used thoroughfare, a crowd of pedestrians suddenly appears at an intersection, blocking the road; a human driver in control of the vehicle must make an instant decision: plow into the pedestrians, or make a sharp turn away, with little chance of surviving a catastrophic impact with an immovable concrete abutment. That driver will have altruistically sacrificed his or her life in favour of saving many others.

What to do when only one person is in your vehicle's direct path, and saving that one person's life requires veering toward an obstacle which, at the speed you are travelling on the highway, would mean your certain death, or disablement of a serious kind at the very least? Is a stranger's life worth your own as an act of unselfish compassion? How many people would be prepared to allow a computer to make that decision on their behalf?

How many people would take that gamble, arriving at a calculation and carrying it through with the certain knowledge that if anyone meets death it will be themselves? And how would future buyers of self-driving cars feel knowing that this decision will be taken out of their hands, the car's computer system programmed to perform that act of self-sacrifice, saving the lives of many by sacrificing the life of a single passenger?

Researchers have attempted to find those answers by studying how people respond to test questions, given the circumstances and the choices, and how they might feel if the choice were removed and the outcome a given. Self-driving cars have been given a clean bill of health as benefiting humanity through the safety they offer, programmed to drive more conscientiously than error-prone human hands can manage.

A recent report estimated that 21,700 fewer people would die on roads in the United States if 90 percent of cars were autonomous. Accidents caused by inattention, or by use of drugs or alcohol, would be minimized with pre-programmed self-driving vehicles. The carnage on the roads would be substantially reduced, and in the process hundreds of millions of dollars would be saved. The researchers devised a series of surveys for respondents to ponder.

Survey participants were asked to choose between driving into a pedestrian or swerving into a barrier where they themselves would be killed; the same scenario then asked participants to imagine ten pedestrians in that dangerous situation: would they consider that barrier again? Participants were also asked how they might feel swerving away from ten people into a barrier, or into a single pedestrian, killing that person instead of committing themselves to death.

"What should a human driver do in this situation?" participants were asked. And following that: "What about a self-driving car?" Results were in favour of the idea of autonomy of vehicles pre-programmed to sacrifice one life rather than many lives. Respondents were for the most part comfortable with the algorithm permitting a car to kill its driver in favour of saving ten pedestrians. Laws enforcing such an algorithm were lauded, with the proviso that respondents did not feel human drivers should be required by law to sacrifice their own lives.

Japan's auto giant Toyota demonstrates autonomous driving with a Lexus GS450h on the Tokyo metropolitan highway during Toyota's advanced technology presentation in Tokyo. (YOSHIKAZU TSUNO/AFP/Getty Images)
Survey participants accepted in large part that autonomous vehicles should be utilitarian in programming [the sacrifice of one life to save many], while over a third of participants felt manufacturers should construct cars to protect the passenger irrespective of the number of other lives that might be lost in a collision. Asked if they might buy a car programmed to sacrifice its passenger to save others, most people drew back from commitment.

All said and done, most people are fully aware of how they would feel about buying a vehicle with the potential to kill them through deliberate programming, should the catastrophic occasion of such a choice arise. And since most car manufacturers are aware of that fall-back survival mechanism in human nature, how likely is it that they would advertise that issue as a benefit of driving an autonomous vehicle?

An alternative, allowing people to choose a "morality setting" on their self-driving car before setting off on a trip, carries the potential to make people feel guilty about wanting to preserve their own lives over those of complete strangers. Issues relating to autonomous vehicles are many: How much automation? How to prevent the nightmare of hackers in onboard computers? Who might be legally liable in the event of an accident in a self-driving car?
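The "morality setting" idea lends itself to a similarly hedged sketch: it could be imagined as a single weight applied to the passenger's life within the same utilitarian calculation, a knob turned between "utilitarian" and "self-protective". The setting names and weights below are invented purely for illustration.

```python
# Hypothetical "morality setting": a weight on the passenger's life
# relative to a stranger's. A weight of 1.0 reproduces the utilitarian
# rule; larger values bias the car toward protecting its passenger.
# Purely illustrative -- not a real vehicle API.
MORALITY_SETTINGS = {
    "utilitarian": 1.0,       # passenger counts the same as anyone else
    "self-protective": 10.0,  # passenger counts ten times as much
}


def weighted_cost(pedestrian_deaths: float, passenger_deaths: float,
                  setting: str) -> float:
    """Expected cost of a manoeuvre under the chosen morality setting."""
    return pedestrian_deaths + MORALITY_SETTINGS[setting] * passenger_deaths


# Under "self-protective", swerving into a barrier (one passenger lost)
# now costs more than hitting a lone pedestrian:
assert weighted_cost(0.0, 1.0, "self-protective") > weighted_cost(1.0, 0.0, "self-protective")
# Under "utilitarian", the two single-death options tie, and ten
# pedestrians would always outweigh the one passenger:
assert weighted_cost(0.0, 1.0, "utilitarian") == weighted_cost(1.0, 0.0, "utilitarian")
```

Which exposes the guilt problem described above: the knob makes the trade-off explicit, and it is the buyer, not the manufacturer, who has turned it.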
