
Self Driving Uber Fatality - Thread II


drawoh (Mechanical)
Continued from thread815-436809

Please read the discussion in Thread I prior to posting in this Thread II. Thank you.

--
JHG
 

I was hoping you would be around for a week or so... Because of the economics of not having to hire a driver, this will become the mode in a very short period of time. When it comes to safety over profit... you know how most western governments work.

Dik
 
A lot of people assert that autonomous vehicles that have only a few mishaps are preferable to human-operated vehicles that have a lot of mishaps because of operator error. I don't subscribe to that philosophy.
 
A problem with having a computer run a vehicle is clearly that it has to be programmed to deal with every foreseeable situation. I doubt there is much interest in doing that, and dik's comment above gets to the bottom of that. In chemical plants we have far fewer variables to deal with, and even there it isn't easy to get all situations covered.

An example: a couple of months ago I was about to turn right at an intersection onto a two-lane road. An 18-wheeler coming from the opposite direction was turning left, and I decided to give him both lanes for his turn (I know that is unusual in Houston traffic). Then, while he was turning, a large metal sawhorse-looking structure fell off the trailer and parked itself in the intersection. I managed to turn right next to it, catch up with the driver, and get his attention, and I pulled in front of him and stopped. I walked up to the cab and explained what had happened. He walked back to the intersection to look at it, and I drove on home.

Surely stuff can fall off trailers driven by computers too, but how do you alert them to such problems?

I suspect that there are drivers who will outperform a computer when situations are "non-standard". There are also a lot of bad drivers out there. We don't do a good enough job of dealing with bad drivers.


 
J, I always wondered how a muffler would find its way into the middle of the road. But can a self-driving car listen for problems? Like a dragging muffler, or squealing brakes, or the fire truck behind it? Can it ignore the sound of the dry bearings in the AC fan motor, or that hard-rock station someone just turned on? Does it know what those flashing lights right behind it are? Or what about that smell that may be an engine fire?

There's more to this than just self-driving. There is also being aware of what is going on around you.

As a friend's dad once said, 'That flat tire isn't going to change itself.'
 
Actually, one of the mufflers is mine. I bought a car from an upperclassman after freshman-year finals in college, and on the way home for the summer the muffler just fell off. I noticed the change in sound, but I didn't know enough about what could have caused it, the car still ran fine, and I had just finished several days of all-nighters, etc. The end result is that I didn't stop until I was about 200 miles up the road, and only then noticed that the muffler was gone.

It certainly would be good if the sensors can detect such things, but that's almost a separate layer of requirements.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Deaf people drive cars, and the inability to hear such things is a problem for them. For their benefit, and that of hearing but unsophisticated drivers, I've thought that adding an audio-analysis capability would go a long way toward alerting them to system failures long before they became serious problems. I've often wondered while driving whether some new noise was one I simply hadn't noticed before, or whether something was getting worse.

Sadly, many defects that affect how cars operate make no noise prior to failure - like on my car, where the engine dropped dead on the highway two lanes from the shoulder in rush-hour traffic; I managed to coast it over, dodging other traffic. Because it was covered under warranty, I never got any notice of what caused it, but I'm pretty sure it was the same thing that killed it again, out of warranty, a year later: corrosion under the ignition module, where a two-letter maker thought using dissimilar metals was a good idea.
 
Speaking of which, I've read that Marlee Matlin brought her car to a mechanic because the battery kept running down.

His diagnosis was instantaneous. The horn button was stuck on.

How much would you pay to provide a supervisory listening device with AI on every car?

Uncle Sam doesn't care what you think; if those idiots in DC decide you need it, it will be mandatory, no matter the cost to you.

... As has already happened.

There's no putting the sh!t back in that horse.


Mike Halloran
Pembroke Pines, FL, USA
 
Given my luck with human mechanics doing more damage than they repair, I'll take a shot with AI. At least the AI won't knowingly lie to my face.
 
An AI has no 'knowing,' which is the point. If I fix the AI, it won't repeat the mistake. If I point out to the mechanic where he did damage, he laughs in my face and says it was like that already, fresh, bright metal scrapes and all.

The Tesla AI (this thread keeps wandering) kept demanding that the driver take control, which the driver failed to do.
 
3DDave,

In my scenario above, if you and the robot do not hit the brakes, you will reach the child in 2.8 seconds. If you are not paying attention and the robot signals that you must take control, I figure that you have one second to work out what is happening and decide what to do. The reaction time of young humans, 100% driving the car and paying attention, is 0.75 seconds. Try watching 2.7 seconds pass by on your watch.
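For concreteness, here is a small Python sketch of that time budget. The 2.8 s time-to-impact, 1 s decision time, and 0.75 s reaction time are the figures above; the 50 km/h speed and 7 m/s² braking rate are assumptions added purely for illustration.

```python
# Hedged illustration of the takeover time budget. Speed and braking
# deceleration are assumed values, not taken from the thread.

def takeover_margin(time_to_impact_s, decision_s, reaction_s):
    """Time left for actual braking after the human is alerted,
    works out what is happening, and physically reacts."""
    return time_to_impact_s - decision_s - reaction_s

def stopping_time(speed_mps, decel_mps2):
    """Time to brake to a stop at constant deceleration."""
    return speed_mps / decel_mps2

time_to_impact = 2.8    # s, from drawoh's scenario
decision_time = 1.0     # s, time to grasp the situation (drawoh's figure)
reaction_time = 0.75    # s, young, attentive driver (drawoh's figure)

speed = 50 / 3.6        # assume 50 km/h, converted to m/s
decel = 7.0             # assume ~7 m/s^2 hard braking on dry pavement

available = takeover_margin(time_to_impact, decision_time, reaction_time)
print(f"Braking time available: {available:.2f} s")                    # ~1.05 s
print(f"Braking time needed:    {stopping_time(speed, decel):.2f} s")  # ~1.98 s
```

On those assumptions the driver gets roughly one second of braking when nearly two are needed, which is the point being made.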

The problem may not be a child at the side of the road; it may be a loss of traction for some reason. I wonder how easily a robot can spot potential black ice.

The mechanic is not relevant. I cannot believe you will be allowed to fix an AI car.

--
JHG
 
BrianPeterson - the thread had shifted to the Tesla crash; sorry for the confusion on the Uber crash.

drawoh - the debugging capability can be added to any car. It doesn't require the ability to steer. As I originally mentioned, this is a terrible problem for deaf people as it is. The same could be extended to household appliances, which the deaf also interact with. I don't see a particular difficulty in using spectral analysis in conjunction with OBD-II data to link a mechanical output (sound) to a mechanical input (a particular engine rotation position, exhaust valve opening, wheel rpm, fan rpm, etc.). I expect there's insufficient upside for makers to do this, as most consumers will ignore anything that doesn't prevent the car from moving.
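A minimal sketch of that kind of check, assuming the cabin audio arrives as sampled frames and that engine and wheel speeds are already available from the OBD-II bus; the function names, harmonic counts, and tolerances here are invented for illustration only.

```python
# Sketch: flag audio peaks that don't line up with any rotation-related
# frequency derived from OBD-II data (engine rpm, wheel rpm, ...).
import numpy as np

def dominant_audio_peaks(samples, sample_rate, n_peaks=5):
    """Return the strongest frequencies (Hz) in one audio frame."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argsort(spectrum)[-n_peaks:]]

def expected_mechanical_freqs(engine_rpm, wheel_rpm, n_harmonics=4):
    """Fundamentals and low harmonics of engine and wheel rotation."""
    bases = [engine_rpm / 60.0, wheel_rpm / 60.0]
    return [b * k for b in bases for k in range(1, n_harmonics + 1)]

def unexplained_peaks(audio_peaks, mech_freqs, tol_hz=2.0):
    """Peaks that match no rotation harmonic -- candidates for a new
    rattle, squeal, or drag worth flagging to the driver."""
    return [f for f in audio_peaks
            if all(abs(f - m) > tol_hz for m in mech_freqs)]

# Example: 1 s of synthetic audio, engine at 1800 rpm (30 Hz), wheels at
# 600 rpm (10 Hz), plus a 97 Hz rattle that matches no harmonic.
sr = 8000
t = np.arange(sr) / sr
frame = np.sin(2 * np.pi * 30 * t) + 0.3 * np.sin(2 * np.pi * 97 * t)
peaks = dominant_audio_peaks(frame, sr)
print(unexplained_peaks(peaks, expected_mechanical_freqs(1800, 600)))
```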

Also, an autonomous car AI would not detect an impending collision and then alert the driver to do anything about it. In the case of the Tesla, the driver had exceeded the hands-off time and was being alerted to pay attention, or at least to touch the steering wheel. Scant attention is paid to Caltrans' failures: not replacing the one-time-use crushable barrier after it had already been hit, not adding rumble strips to alert non-AI drivers, not adding heavy no-cross striping, and not maintaining the lane striping; all measures warranted by the number of times that particular feature had been hit by non-AI drivers.

 
3DDave said:
...

... In the case of the Tesla, the driver had exceeded the hands-off time and was being alerted to pay attention or at least touch the steering wheel.

This comes back to my point that there are not six levels of autonomous control of automobiles. There are two.

i. The car has no controls other than a keyboard and/or microphone. The robot is in control. The robot knows how to find the passenger's destination. The robot is able to dodge other vehicles, bicycles, pedestrians, children, pets, tree branches, Bambi, and Bullwinkle. Unless the robot operates in a restricted environment, there will always be uncontrollable, unpredictable agents on the road that must not be hit. If the robot causes an accident, the manufacturer is responsible, which is why I think robot cars will be a service rather than a consumer possession.

ii. The human is in control, operating the steering wheel and controlling the accelerator and brake. If there is an accident, the human is responsible. The robot is a back-seat driver, able to ring (buzz?) alarms and jiggle controls.

If there is any emergency in which the human must take control, they must already be looking out at the road, gripping the steering wheel, and have access to the accelerator and brakes. The available reaction time is no more than a second.

--
JHG
 
drawoh said:
This comes back to my point that there are not six levels of autonomous control of automobiles. There are two.
...
...and jiggle controls.

I agree as well. It seems like everyone is very eager about the first type of vehicle (fully autonomous), but the "jiggle controls" part is very valuable - the computer is much better than a human at bringing the car back into a normal driving mode after a slip.

A basic "drive-by-wire" system should not look like something that can drive car on its own... but it should provide active help in avoiding sudden obstacles (particularly make a decision about going around an obstacle with regards to cars that may be approaching at higher speeds from behind where driver does not have full attention). Plus it could compensate for much erroneous input that would otherwise cause a slip in bad conditions.
 
drawoh, I think you are on the right track, but your option (i) is perhaps overegging the omelette. The AV doesn't have to know how to deal with every situation and treat it like a good driver would; it merely has to drive within its limitations. If it comes to a situation that it can't cope with, then it should /gracefully/ take an alternative safe course of action, for instance park at the side of the road and phone for help.
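A minimal sketch of that graceful-degradation policy, with invented states and confidence thresholds; a real AV stack would obviously be far more involved than a single confidence number, but the shape of the decision is the point.

```python
# Sketch: when the vehicle is no longer confident it understands the scene,
# degrade gracefully instead of pressing on.
from enum import Enum

class Action(Enum):
    CONTINUE = "continue driving"
    SLOW_AND_REASSESS = "slow down and reassess"
    PULL_OVER_AND_CALL = "pull over safely and phone for help"

def fallback_action(scene_confidence, shoulder_available):
    """Pick a degradation step based on how sure the AV is about the scene."""
    if scene_confidence > 0.9:
        return Action.CONTINUE
    if scene_confidence > 0.6:
        return Action.SLOW_AND_REASSESS
    # Can't cope: get out of traffic rather than guess.
    return (Action.PULL_OVER_AND_CALL if shoulder_available
            else Action.SLOW_AND_REASSESS)

print(fallback_action(0.4, shoulder_available=True))  # Action.PULL_OVER_AND_CALL
```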

Cheers

Greg Locock


 