Driverless ethics

Thanks to Richard Owers for pointing me at an article from the MIT Technology Review entitled Why Self-Driving Cars Must Be Programmed to Kill. (Doesn’t that make you want a custom licence plate on yours? BOND007 – programmed to kill?)

We talked earlier about the ethical challenges of driverless cars and how many of these are variations on Philippa Foot’s classic Trolley Problem.

The MIT article takes it further and raises a nice conundrum or two:

“How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs?”

As they point out, who is going to buy a car which is programmed to sacrifice its owner?

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
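Purely as a toy illustration (nothing like what a real vehicle would actually run), the utilitarian rule boils down to choosing, from the manoeuvres still available, whichever has the lowest expected death toll. All the names and numbers below are invented:

    # Toy sketch of a purely utilitarian collision choice.
    # Hypothetical names and numbers; a real system would reason
    # over uncertain predictions, not tidy integers.
    def choose_manoeuvre(options):
        """Pick the option with the fewest expected deaths."""
        return min(options, key=lambda o: o["expected_deaths"])

    options = [
        {"name": "continue ahead",   "expected_deaths": 10},  # the crowd
        {"name": "swerve into wall", "expected_deaths": 1},   # the occupant
    ]

    print(choose_manoeuvre(options)["name"])  # -> swerve into wall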

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

One approach is that adopted by a group at the Toulouse School of Economics, who used ‘experimental ethics’: roughly, crowd-sourcing the answers to difficult questions and seeing what the majority think.

In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

Makes sense, but…

“[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves”

Ah – understandable, I guess! And…

“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

Lovely stuff. I wonder how we’ll deal with this.
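One could imagine the ‘moral algorithm’ surfacing quite literally as a buyer-selectable setting. Here is a purely hypothetical sketch; it implies nothing about any real manufacturer’s software:

    # Hypothetical: the "moral algorithm" as a selectable policy.
    def utilitarian(options):
        # Minimise total expected deaths, occupants included.
        return min(options, key=lambda o: o["expected_deaths"])

    def self_protective(options):
        # Protect the occupants first; fewest other deaths breaks ties.
        return min(options, key=lambda o: (o["occupant_deaths"],
                                           o["expected_deaths"]))

    POLICIES = {"utilitarian": utilitarian,
                "self_protective": self_protective}

    def choose(policy_name, options):
        # The buyer's chosen policy makes the call; per the question
        # above, perhaps the buyer then shares the blame.
        return POLICIES[policy_name](options)

    options = [
        {"name": "ahead", "expected_deaths": 10, "occupant_deaths": 0},
        {"name": "wall",  "expected_deaths": 1,  "occupant_deaths": 1},
    ]
    print(choose("utilitarian", options)["name"])      # -> wall
    print(choose("self_protective", options)["name"])  # -> ahead

Once the choice is an explicit parameter, it is at least easier to see where responsibility might attach.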

Of course, some of these kinds of decisions are always being made by anyone building or using potentially dangerous machinery. Did your car’s manufacturer install the most expensive and reliable braking system available when they built your car, or did they base their decision partly on cost? Perhaps they economised on the brakes so they could spend more on the airbags, which protect the occupants rather than pedestrians and cyclists.

Similarly, those who design medical systems, drug-dispensing machines, prescription printers, and so on make decisions which could be life-or-death ones, and we somehow cope with that. But the driverless car does throw some of these questions into sharp relief.

Update: thanks too to Laura James who pointed me at the Principles of Robotics.


1 Comment

Interesting! I wonder about the future of ‘ownership’ in this case though. I imagine in a fully autonomous vehicle world, cars could be more fully utilised (not wasting time parked up, but moving people and things almost all the time). I would not expect to own one of these – I’d expect to have access to one when I needed it. Of course, there would still be some owner who would have to be persuaded to opt for one spec of vehicle rather than another…

