
Self Driving Uber Fatality - Thread II

Status
Not open for further replies.

drawoh (Mechanical)
Continued from thread815-436809

Please read the discussion in Thread I prior to posting in this Thread II. Thank you.

--
JHG
 

Spartan5 - given that it took a fatal accident to get the FAA to issue a requirement to inspect fan blades more than a year after the first incident, I tend to agree that aircraft regulation is almost entirely reactive. In large part it is understandable, simply because of the nearly infinite number of things that can go wrong.

For example, it was typical on commuter twin turboprops to keep the far engine running while passengers were loading, until a little girl lost hold of a stuffed animal and it blew through the couple-foot gap between the fuselage and the pavement. The little girl ducked under to retrieve it before anyone could stop her, and now, AFAIK, it's no longer allowed. The NTSB report indicated the change in procedure was at the airline level, not the FAA level, but there may be other guidance. The report indicates it wasn't a fatality, so maybe that's why there's no rule.
 
Spartan5 said:
You said "still buy tickets" which implies commercial airlines; for which there has not been a fatal crash in the US in the last 10 years. Hardly "relative frequency."

If you want to nitpick, you're not helping yourself. Per the data set for 2016 (latest year available from the NTSB), in that year there were 39 accidents which resulted in 27 fatalities involving aircraft operating under 14 CFR 135. Approximately 3 crashes and 2 fatalities a month is pretty regular. Those people bought tickets.

You're missing the entire point. Being condescending isn't helping your argument, either.

If you think rich people are better drivers, I guess you're welcome to go on thinking that. It's a pretty weird thing to say but it isn't really germane to this conversation.

jgKRI said:
The point is, if someone's only acceptable criterion for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time, that person has a wholly unrealistic point of view with little connection to reality.

This, my friend, is not a straw man- it's actually what's called a reductive argument, directed at positions taken by other people in this thread. Google it.
 
3DDave,
I agree that accidents lead to regulation. It would be irresponsible for that not to be the case.

But that argument has no bearing on the fact that there is proactive regulation as well intended to prevent accidents from occurring in the first place.

Regardless: if accidents are to be the driver of regulation, where is the requirement that all AVs undergoing testing on public roads have active systems for aggressively monitoring backup-driver attention (see the Cadillac "Super Cruise" system for how this could be implemented)? That's not a high bar in these vehicles, which are already extensively outfitted with sensors and computers.
 

Before I google "redactive argument", please quote me the people in this thread who have taken the position that "for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time".

Maybe I missed it, friend.
 
Spartan5 said:
Before I google "redactive argument", please quote me the people in this thread who have taken the position that "for AVs to be present on public roads is that they will never cause a fatality of any kind under any circumstances from now until the end of time".

A reductive argument is one made by taking a position, which may or may not appear logical, and following the fallacy it rests on to its logical conclusion, in order to highlight that underlying fallacy.

So... No one is stating that explicitly, but it's the logical conclusion of a position taken, based on the fallacy that zero accidents is an attainable goal.
 
jgKRI said:
So... No one is stating that explicitly, but it's the logical conclusion of a position taken, based on the fallacy that zero accidents is an attainable goal.

Ok. Then support that with quotes again. Who has taken the position that "zero accidents" is the attainable goal?

You appear to be the only person to be bandying "zero accidents" about; and then arguing against it. That's the straw man.
 
Spartan5 said:
Are you claiming there is/was no proactive regulation of aircraft/airlines? That's quite a stretch.

...

And to touch on your last point, to make a generalization, the sorts of people who are having their licenses taken away are not the sort who can afford to run out and buy the latest and greatest robotic car.

Igor Sikorsky built an airliner in Russia just prior to WWI. One fine day about a century ago, someone offered to transport someone else from "here" to "there" for some sort of fee. The pilot almost certainly was licensed. The aircraft almost certainly was fabric covered and had one engine. The rules came in when these things started crashing.

If people are allowed to own robot cars and are not allowed to drive, the psychopath drivers will be replaced by people who want to re-program the robots. There will be accidents. As noted above, I think robot cars will be a service, not a possession.

--
JHG
 
I think that's just trollmanship; cars, while substantially more reliable in recent years, still have mechanical and software failures. My two hybrids have idiosyncratic startup behaviors, such that if you attempt to put the car into DRIVE before it's ready, it can't gracefully recover without turning off and restarting.

The notion of zero failure is ludicrous, as we, as a society, have "acceptable" risks for everything we do, including our cars, planes, buses, etc. The people that died on a bus on the way to Las Vegas took what they thought was an acceptable risk. Obviously, after the fact, the survivors and family have a different perspective. Nevertheless, probably all of them would get on a similar bus for a similar trip in the future.

Anything humans touch or build automatically incurs a certain level of risk of failure, and in some cases, such as the Colombian bridge disaster, the risk was both tangible and realized, and two engineering analyses point to a massive design failure. Toyota had a failure of their automobile ECUs that resulted in accelerations that couldn't be turned off or stopped. The electronics industry, as a whole, gave up complete testability, even at the basic "stuck at" logic levels, because there were so many hidden nodes that the test times required to access them all would result in years of testing.

Software testing is worse, in some ways, because there's not yet been a systematic way of testing, even at the module level. Intent and specification often cannot be rigorously verified.

What we do need to do is to determine what the acceptable level of risk is and move on. Certainly, those who are actually working on AV software need to study each and every one of the AV accidents to determine how to prevent them from happening again. That's been the model in the airline industry for decades, going back to the DC-10 engine failures that were traced to a less than desirable method of installing engines that American Airlines once used.

While crashes in commercial aviation have been rare, there still have been deaths, most recently on a 737, where a fan blade broke loose, tore through the surrounding armor, bounced about 4 times along the wing and fuselage, and took out a window, resulting in the death of a passenger. That incident, which should have been an unthinkable possibility, is now a realized risk, and there's going to be a bunch of engineers trying to quantify the likelihood of it happening again. Nevertheless, 737s are still flying around at the moment, albeit subject to more detailed inspections for indications of similar and imminent fatigue failures. We've accepted this sort of thing as an acceptable risk, even with the human element in the entire maintenance and inspection process.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
jgKRI said:
*sigh*

Zero accidents is the redactive portion of the argument.

Maybe just google it and come back after some light reading.
I accept your surrender.

I googled "redactive argument", by the way. Nothing comes up [wink]
 
Spartan5: "That is due, in large part if not entirely, to extensive regulation. The exact opposite of what AVs being tested on public roads are subjected to."

Exactly. Thank you for stating the position I was trying to take much better than I did.

 
"3 accidents against the number of successful detection/avoidance events (likely, at this point, to number in the hundreds of thousands at least across all companies testing AV tech) is the actual metric that matters.

We don't know the value of that metric."

No, we don't. There has been a conspicuous lack of transparency when it comes to how AVs have performed in real-life situations. In the case of Uber's experiment, we do know that the human "backup driver" (read "passenger") had to override the computer every mile, on average, to correct a critical incident. If you were riding with a human driver and you had to take the wheel every mile, would you ride with that person again? Uber's system in particular is obviously not ready to be on the streets yet.
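The intervention-rate figure quoted above (roughly one takeover per mile for Uber, per press reports) is a form of the "miles per disengagement" metric that state DMV test reports use. A minimal sketch of how such a comparison works; all program names and numbers below are made up for illustration, not actual reported data:

```python
def miles_per_intervention(miles_driven: float, interventions: int) -> float:
    """Average miles driven between safety-driver takeovers."""
    if interventions == 0:
        return float("inf")  # no takeovers logged in this period
    return miles_driven / interventions

# Hypothetical, illustrative figures (miles driven, takeover count):
programs = {
    "Program A": (350_000, 63),     # ~5,556 mi between interventions
    "Program B": (20_000, 19_500),  # ~1 mi between interventions
}

for name, (miles, events) in programs.items():
    rate = miles_per_intervention(miles, events)
    print(f"{name}: {rate:,.1f} mi/intervention")
```

The point of the metric is that a raw accident count means little without the denominator of successful miles; two programs with the same number of crashes can differ by orders of magnitude in maturity.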
 
HotRod10 said:
Uber's system in particular is obviously not ready to be on the streets yet.

We're back where we started.

Uber can't improve the system without it being on the streets. It's a chicken/egg problem. Or a catch-22. Or however you want to phrase it.
 
"Uber can't improve the system without it being on the streets."

Well then, they should abandon the project and leave the AV development to those companies who are willing to go to the effort and expense of thorough testing before introducing a potentially lethal machine into the public arena.
 
Uber can't do it, and no one else can either. It's integral to further development.

Other companies have different controls in place, but all of them are going to have accidents.

Still a catch-22.
 
There are other ways to reduce road rage than self-driving cars. I believe the whole concept of 'traffic calming' is one of the causes of road rage. Stopping at every light leads people to conclude that running some of those lights gets them someplace quicker.
Add to that the sheer number of cars on the road at peak times.

A possible solution is companies open or start at different times. Say 7:15, instead of 7:00, or 7:45 instead of 8:00.

But self-driving cars won't reduce the number of cars on the road; they are likely to increase it.

Self-driving cars also won't fix some road rage; they may make it worse, as some people will choose to drive themselves so they can travel faster (they are always late). In fact, self-driving cars are slower than cars driven by many other drivers.

For self-driving cars to be safer, it may take a redesign of many things, including the locations of bus stops, the allowed colors of trucks, jumbled lane markings, etc.

And if truth be told, the cost of mass transit is a major factor in the number of cars on the road, along with dirty conditions, rude people (lack of respect), and the number of people all trying to get someplace at the same time.
 