
Can autonomous vehicles make moral and ethical decisions?


drawoh (Mechanical)
Design Engineering magazine has an article, "Can autonomous vehicles make moral and ethical decisions?"

Is this even a valid concept? An autonomous vehicle should refrain from smashing into stuff. If the vehicle has to decide between hitting a baby carriage and hitting a pair of old people, it probably has been driving too fast for the conditions. If these are autonomous weapons, the discussion is even more complicated.

--
JHG
 

kingnero said:
...the ethnical background of your victims

Is that ethical or ethnic? [smile]

If you are doing aerobatics at an airshow, there is an understanding that if you lose control, you make sure the plane does not land in the crowd, even if that means not bailing out. If you are driving a car and the brakes fail, you generally have the option of driving into a tree or over a cliff or into a wall, thus (probably) injuring only yourself. It was your car and your brakes.

--
JHG
 
Every decision I made in that poll was based upon the rules of the road. If I were offered the choice of running over two old people on the sidewalk or a gaggle of schoolchildren in the middle of the road when the light was green, I chose to run down the children. The old people shouldn't be punished because they're old and somehow less useful to society... the children were where they shouldn't be, and as rulebreakers should be the first to get "punished".

That was the intent of the poll: to see how the average driver decided what to do in situations where there was no correct answer, only shades of gray. To me, the guilty (rulebreakers) were always punished before the innocents (rule followers).
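In code terms, the heuristic I applied was roughly the following toy sketch (the Group type, the names, and the head counts are all made up for illustration, not the poll's actual mechanics):

```python
from dataclasses import dataclass

@dataclass
class Group:
    name: str
    obeying_rules: bool  # are they where the rules of the road allow?
    count: int           # how many people are in the group

def unavoidable_target(a: Group, b: Group) -> Group:
    """Pick which group an unavoidable collision hits: rule-breakers
    before rule-followers; head count only breaks ties."""
    if a.obeying_rules != b.obeying_rules:
        return b if a.obeying_rules else a  # hit the rule-breaking group
    return a if a.count <= b.count else b   # otherwise minimize casualties

elderly = Group("old people on the sidewalk", obeying_rules=True, count=2)
children = Group("children in the road on a green light", obeying_rules=False, count=6)
print(unavoidable_target(elderly, children).name)  # -> the children
```

Note that head count never overrides rule compliance in this scheme; it only decides between two equally guilty (or equally innocent) groups.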

Dan - Owner
 
@ drawoh, nice catch. I'm not a native English speaker, so details like that (to me, at least!) don't really jump out at me.

@ MacGyverS2000, the poll I mentioned included many choices where both parties were obeying all regulations, so basically you had to choose between a mother with a buggy and two elderly people...

 
MacGyverS2000,

My problem with this is that these ethical emergencies represent a narrow range of circumstances. For example, the RMS Lusitania did not capture the Blue Riband on her maiden voyage. She encountered fog, so she slowed down and arrived two days late. In 1934, RMS Olympic rammed and sank a lightship off New York City. There were complaints afterwards that Olympic habitually sped through the fog. In between these two events, the captain of RMS Titanic had to make the ethical decision of who got into the lifeboats.

How do Asimov's three laws of robotics address risk? If my robot is flying an airliner, at what point does it refuse to take off or, if already flying, head for the nearest airport not affected by the hurricane? There is an economic cost, and even physical risk, to people arriving at their destinations late.
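One way to make the question concrete is to treat it as an expected-cost threshold. A toy sketch, where every number and name is invented for illustration:

```python
def should_take_off(p_accident: float, accident_cost: float,
                    delay_cost: float) -> bool:
    """Toy risk gate: fly only when the expected cost of flying
    is below the certain cost of staying on the ground."""
    return p_accident * accident_cost < delay_cost

# Invented hurricane numbers: 1-in-1,000 accident odds, a $1B accident,
# and $100k as the total cost of delaying the flight.
print(should_take_off(p_accident=1e-3, accident_cost=1e9, delay_cost=1e5))
# False -> the robot refuses to take off
```

The hard part, of course, is choosing those numbers, which is exactly the risk question the three laws never address.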

--
JHG
 
The whole point of the three laws was that they generated interesting problems and paradoxes. That's great for entertainment, not so great for real life.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
So if there is a glitch in its programming, then we are blaming the vehicle, right?

My question is why would you program human ethical behaviour into the vehicle when we all know that humans are flawed?

 
When a driver makes eye-to-eye contact with a pedestrian, for instance, the pedestrian's action can be anticipated. I suspect that part of this process is not 100% rational, or, put differently, I can hardly imagine how that process could be captured in an algorithm, whatever the level of sophistication of said algorithm. So maybe we could consider rule #4: when robots are proposed for deployment into new applications, the principle of parsimony shall apply.
As a corollary, it could imply that any proposal for new deployment (e.g. an autonomous vehicle) shall be scrutinized until the nature of the proposal is sufficiently evidenced as a "must have" while satisfying a local optimum (on a single event, the robot shall not be outperformed by a human in any circumstances) and a global optimum (on average, a superior social and economic benefit for the public).
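A crude sketch of that gate (the scoring functions, scenario list, and benefit numbers are hypothetical placeholders):

```python
def deployment_approved(scenarios, robot_score, human_score) -> bool:
    """Gate a proposed robot deployment on the two criteria above.

    local optimum:  the robot is never outperformed by a human
                    on any single scenario;
    global optimum: on average, the robot delivers a superior
                    overall benefit.
    """
    robot = [robot_score(s) for s in scenarios]
    human = [human_score(s) for s in scenarios]
    local_ok = all(r >= h for r, h in zip(robot, human))
    global_ok = sum(robot) / len(robot) > sum(human) / len(human)
    return local_ok and global_ok

# Hypothetical benefit scores per scenario (higher is better):
scenarios = ["clear day", "fog", "child runs out"]
robot = {"clear day": 0.99, "fog": 0.95, "child runs out": 0.80}.get
human = {"clear day": 0.97, "fog": 0.70, "child runs out": 0.85}.get
print(deployment_approved(scenarios, robot, human))  # False
```

On this gate, a single scenario where a human would do better is enough to block the deployment, however good the robot's average is.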

 