
Self Driving Uber Fatality - Thread I


drawoh (Mechanical)

San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different eco-system from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of their time (quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

jgKRI - That is not what I'm talking about. These failures have involved objects that don't have to be classified, or have their direction determined, or any other self-driving BS like that. They were hard physical objects directly in the path of the cars.

A simple long-range radar automatic emergency braking setup could easily have braked before hitting the barrier that the Tesla hit, and should also have been capable of emergency braking the Uber for 1-2 seconds before that impact.
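
To put the idea in concrete terms, here is a minimal sketch of that kind of "brake on any persistent, rapidly closing return" logic. All names and thresholds are illustrative assumptions, not taken from any actual AEB system:

```python
# Rough sketch (illustrative only) of a "dumb" radar-only emergency braking
# trigger: brake on any persistent, rapidly closing return directly ahead,
# ignoring object classification entirely.

TTC_BRAKE_THRESHOLD_S = 1.5   # brake when predicted impact is this close
MIN_CONSECUTIVE_HITS = 5      # require a persistent return, not a single blip

def should_emergency_brake(range_m, closing_speed_mps, consecutive_hits):
    """True if a persistent in-path return implies imminent impact."""
    if closing_speed_mps <= 0:                  # not closing; nothing to do
        return False
    if consecutive_hits < MIN_CONSECUTIVE_HITS:
        return False                            # not yet confident it's real
    time_to_collision = range_m / closing_speed_mps
    return time_to_collision < TTC_BRAKE_THRESHOLD_S

# A fixed barrier 40 m ahead, closing at 30 m/s (~108 km/h):
print(should_emergency_brake(40.0, 30.0, consecutive_hits=8))  # True (TTC ~1.3 s)
```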

Not being able to avoid a concrete and steel barrier directly in the path of the Tesla rather bluntly shows how piss poorly the Tesla self driving system is currently working. Sure, it works well a lot of the time, but I bet that's not much consolation to the families of the victims.
 
The Autonomous Vehicles of today seem to be working *worse*, in terms of basic collision avoidance, than the Collision Avoidance systems of several years ago. Then again, those infamous failed Volvo demonstrations were of Collision Avoidance systems that didn't avoid collisions. One crashed straight into the rear of a stopped truck, precisely the way it wasn't supposed to. The other ran into a manager (?) who had instructed the driver to drive straight at him. Videos are on YouTube.

It's advisable to maintain a Confidence/Competence Ratio well below unity when dealing with Safety Systems. That doesn't seem to be happening here.

 
drawoh - Airbus aircraft were not grounded when a problem with the pitot heaters was found, leading directly to AF 447 crashing for no reason. Airbus aircraft were not grounded when it was found that pilots had too much control authority, after one pilot tore off a vertical stabilizer. It requires a near certainty of a failure that results in hull loss to get a plane grounded.
 
"In the first case, I cannot see you being allowed to own the vehicle. If I were the manufacturer, I would own the car, and the maintenance facility. Anything with machinery or controls would inside a locked enclosure."

That would be a truly autonomous vehicle, which would have to be able to handle all situations; not just highways, but congested city streets with hundreds of other cars and pedestrians in close proximity. As the topic story demonstrates, these vehicles currently fail to operate safely under much less demanding circumstances.

"You are the driver. You grip the steering wheel. You control the gas (power?) pedal and brakes, and you are occupied full-time paying attention to driving. The robot, if present, is a back-seat driver, with some ability to nudge controls."

That's essentially the driver-assist safety features we have now, except that most just produce warnings, which the driver must decide how to react to. Autonomous braking is a good feature, but I'd be leery of autonomous obstacle avoidance, etc.
 
LionelHutz said:
jgKRI - That is not what I'm talking about. These failures have involved objects that don't have to be classified, or have their direction determined, or any other self-driving BS like that. They were hard physical objects directly in the path of the cars.

Yes, it is - without you knowing it.

These systems cannot differentiate between objects which are permanently affixed to the ground and objects which are not. They detect objects as a single snapshot in time, take another snapshot, and compare - and then estimate relative velocities, positions, and potential threats.
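
A toy sketch of that snapshot-and-compare step, assuming detections arrive as (x, y) positions at a fixed frame interval (the nearest-neighbour matching is a gross simplification of real trackers). Note what it does not tell you: a fixed k-rail and a stopped car produce identical estimates:

```python
DT = 0.1  # assumed time between snapshots, seconds

def estimate_velocities(prev_frame, curr_frame):
    """Crudely match each current detection to the nearest previous one and
    difference positions to estimate (vx, vy) in m/s, in the car's frame."""
    estimates = []
    for cx, cy in curr_frame:
        px, py = min(prev_frame, key=lambda p: (p[0] - cx)**2 + (p[1] - cy)**2)
        estimates.append(((cx - px) / DT, (cy - py) / DT))
    return estimates

prev = [(0.0, 50.0), (3.0, 30.0)]  # (lateral, ahead): barrier; car one lane over
curr = [(0.0, 47.0), (3.0, 30.0)]  # barrier "closes" at 30 m/s because we moved
print(estimate_velocities(prev, curr))  # [(0.0, -30.0), (0.0, 0.0)]
```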

A human can very readily differentiate between a car and a k-rail. An autonomous vehicle, with current technology and engineering on board, cannot do so nearly as accurately.

We know very, very little about this Tesla Model X fatality, so I can't speak intelligently on exactly what happened. But what you're asking for - that every object be instantly and accurately identified without a single failure, ever - is the exact problem that is as yet unsolved, because it is extremely difficult.

LionelHutz said:
A simple long-range radar automatic emergency braking setup could easily have braked before hitting the barrier that the Tesla hit, and should also have been capable of emergency braking the Uber for 1-2 seconds before that impact.

Maybe so. But we don't know the full set of circumstances which caused this failure, so we don't know which other systems could have functioned more reliably.

LionelHutz said:
Not being able to avoid a concrete and steel barrier directly in the path of the Tesla rather bluntly shows how piss poorly the Tesla self driving system is currently working. Sure, it works well a lot of the time, but I bet that's not much consolation to the families of the victims.

We have, at this point in the Tesla's history, a couple of fatal failures (that I'm aware of) in a data set of literally millions of miles.

So that we're clear: I am no fan of this technology being let out of the gate at the infantile stage it is in. I think we are many years away from the vision of these companies actually being technically possible (i.e. a car with no steering wheel that doesn't kill people).

With that said- ANY system WILL have failures and people WILL get killed as a result. This isn't fatalistic or pessimistic; it's a statistical fact.

Failures like this need to be evaluated with all the speed and intensity available to the NTSB or whoever else performs the work; but until the general public understands engineering to the point where they do not expect the long-term failure rate to be zero (good luck waiting for that...), the public will continue to cry out when these systems fail. Which they will continue to do until the end of time.
 
"A simple long range radar automatic emergency braking setup could have easily braked before hitting the barrier that Tesla hit and should have also been capable of emergency braking the Uber for 1-2 seconds before that impact."

Actually, this is not a "simple" thing. Take a 35 GHz radar with a 6-inch aperture. The blur spot is 7.9 degrees, which at 300 ft is 12.5 meters. That large footprint means there will be a mixture of returns from all over it, possibly making it difficult to get good answers without aiding from a lidar. This is why a lidar would be doing the bulk of the target detection under normal circumstances, since its beam footprint at 300 ft is 14.4 inches. The lidar's weakness is heavy fog and/or precipitation, and possibly poor performance against sunlit surfaces and particulates in the air. Our obstacle avoidance system tended to have challenges flying against sun-backlit on-shore flow. It took a couple of design iterations to develop a thresholding circuit that wouldn't just pin to a high threshold, thereby reducing detection range.
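
Those numbers reproduce if the blur spot is taken as the diffraction-limited Airy-disk diameter, 2.44λ/D. A quick check (the 4 mrad lidar divergence is my inference from the 14.4-inch figure, not a quoted spec):

```python
import math

C = 3.0e8  # speed of light, m/s

def airy_blur_deg(freq_hz, aperture_m):
    """Diffraction-limited blur-spot diameter, 2.44 * lambda / D, in degrees."""
    return math.degrees(2.44 * (C / freq_hz) / aperture_m)

def footprint_m(blur_deg, range_m):
    """Linear size of the blur spot at range (small-angle approximation)."""
    return math.radians(blur_deg) * range_m

r = 300 * 0.3048                                # 300 ft in meters
blur = airy_blur_deg(35e9, 6 * 0.0254)          # 35 GHz, 6-inch aperture
print(f"radar blur spot: {blur:.1f} deg")                          # ~7.9 deg
print(f"radar footprint at 300 ft: {footprint_m(blur, r):.1f} m")  # ~12.5 m
print(f"4 mrad lidar footprint: {0.004 * r / 0.0254:.1f} in")      # ~14.4 in
```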

The receiver operating characteristic (ROC) curve for both the radar and the lidar is a trade between 100% detection with an absurdly high false alarm rate, and a decent false alarm rate with a merely tolerable detection probability. Every system in use makes such a compromise, which is another reason why Musk is smoking something when he thinks he can get away without something to back up his cameras.
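
The shape of that compromise shows up even in the textbook single-sample Gaussian detection problem, which is a simplification of any real radar or lidar detector (the 3-sigma signal level below is just an example):

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def roc_point(threshold_sigma, signal_sigma):
    """(Pd, Pfa) for detecting a constant signal in unit Gaussian noise."""
    pfa = q(threshold_sigma)                 # noise alone crosses the threshold
    pd = q(threshold_sigma - signal_sigma)   # signal-plus-noise crosses it
    return pd, pfa

# Sweep the threshold for a return sitting 3 sigma above the noise floor:
for t in (1.0, 2.0, 3.0, 4.0):
    pd, pfa = roc_point(t, signal_sigma=3.0)
    print(f"threshold {t:.0f} sigma: Pd = {pd:.3f}, Pfa = {pfa:.5f}")
# Raising the threshold slashes false alarms but surrenders detections;
# no setting gives Pd = 1 with Pfa = 0.
```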

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
IRStuff said:
Musk is smoking something when he thinks he can get away without something to back up his cameras.

This is an interesting sub-topic unto itself.

For all of his knowledge, I think Musk's point of view truly is "humans do this job pretty much competently most of the time without LIDAR, so all we need to do is make the visual image processing we are doing as robust as that of the average human brain"

Which is something that might happen within my lifetime (I'm 32) but I wouldn't expect to see it before I'm retired at the earliest.

The task is simply monumental.
 
It seems we have comprehension problems here.

I DID NOT SAY EVERY OBJECT NEEDS TO BE TRACKED AND IDENTIFIED IMMEDIATELY. I POSTED THE EXACT OPPOSITE TWICE NOW. If the forward facing radar keeps returning a rapidly approaching result DIRECTLY in front of the car then it's likely a good idea to start braking at some point, regardless of what stupidity the AI is doing at the time.

Last I checked, single-task AEBS systems are a lot simpler than a full self-driving setup. They wouldn't have to start emergency braking anywhere close to 300 m from a concrete barrier to avoid smashing into it at full speed. LOTS of AEBSs are on production cars and working fairly successfully without LIDAR units. But sure, why not go ahead and twist the sentence I wrote completely out of context?
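
A quick back-of-envelope supports the distances involved (the 0.8 g braking and 0.3 s actuation delay are my assumptions, not measured figures):

```python
G = 9.81  # m/s^2

def stopping_distance_m(speed_kmh, decel_g=0.8, delay_s=0.3):
    """Distance covered during the actuation delay plus braking to a stop."""
    v = speed_kmh / 3.6
    return v * delay_s + v * v / (2.0 * decel_g * G)

for kmh in (100, 120, 140):
    print(f"{kmh} km/h: {stopping_distance_m(kmh):.0f} m to stop")
# ~57 m, ~81 m, ~108 m respectively - far less than 300 m, as stated above.
```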

As for Musk and his cameras: my first thought on seeing that Tesla crash was that they had turned off the radar to test with cameras only again.
 
My guess for L5, or widespread L4, is 2035.

On average, AVs will never be as safe as good meat drivers, because good meat drivers never crash (say, are never involved in collisions that cause non-trivial injuries or deaths) in their lives. That is not the objective. The objective is to be somewhat better than the average driver. This is difficult because 95% of drivers are better than average, according to them. That's why you have to use stats, not opinions.
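
On the "use stats" point, here is a rough sketch of how much fatality-free driving it would take to claim, at 95% confidence, a fatality rate below the oft-quoted human figure of roughly 1.2 per 100 million miles. It uses the "rule of three" bound for a Poisson process with zero observed events:

```python
import math

human_rate = 1.2 / 100e6                      # fatalities per mile (approx.)
miles_needed = -math.log(0.05) / human_rate   # zero-event 95% upper bound
print(f"{miles_needed / 1e6:.0f} million fatality-free miles")  # ~250 million
```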

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
"If the forward facing radar keeps returning a rapidly approaching result DIRECTLY in front of the car then it's likely a good idea to start braking at some point, regardless of what stupidity the AI is doing at the time."

Our eyeballs are sensors, but it's the brain that makes sense of the detections. A radar will always detect something DIRECTLY in front of the car, namely the pavement. It's the job of the processor to determine whether those returns are just the pavement, or a curb, or a k-rail. Again, the radar's footprint is huge, and it's the processor that weeds through the clutter to find the actual, real targets, not the radar.
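
One common (and here, fatal) simplification makes the point: to a forward radar, everything fixed to the ground - pavement, signs, k-rails - closes at exactly the car's own speed, so a naive processor that discards "stationary world" returns discards the barrier too. A toy sketch, with made-up numbers:

```python
EGO_SPEED_MPS = 30.0   # assumed known from wheel-speed sensors
STATIONARY_TOL = 1.0   # m/s tolerance for "moving with the world"

returns = [
    {"label": "pavement",   "range_m": 20.0, "closing_mps": 30.0},
    {"label": "k-rail",     "range_m": 90.0, "closing_mps": 30.0},
    {"label": "slower car", "range_m": 60.0, "closing_mps": 12.0},
]

# Keep only returns whose closing speed differs from ego speed, i.e. movers:
movers = [r["label"] for r in returns
          if abs(r["closing_mps"] - EGO_SPEED_MPS) > STATIONARY_TOL]
print(movers)  # ['slower car'] - the fixed barrier was filtered out as clutter
```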

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
FFS, can you be a bigger nitpicker?

Would "radar subsystem" be acceptable to you? Maybe "radar backup system"? The "override the AI and stop the damn car" system?

The stupid thing is that getting a result from a radar requires some kind of processor involvement in the first place. The acronym refers to the whole system, from pulse generation to echo reception to processing - RAdio Detection And Ranging. So why do I have to spell out that there needs to be a processor to get a result from the radar?

It seems you've confused a radar system with the echo that is detected by the receiver.
 
AI cars don't have complete radar systems; they have radar sensors that pass the range data to the AI processor, which also receives the lidar and video data. There may be pre-processors that clean up the data or do specialized detection, but there are no independent decision-making processors, and certainly no direct connections to the car's drive system, which is handled by the AI processor. Having multiple processors generating conflicting commands is a recipe for disaster.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
"Having multiple processors generating conflicting commands is a recipe for disaster."

Ya, why introduce a backup when the main processor is doing such a bang-up job of avoiding objects in front of the car? Tesla is proving their AI processor doesn't have that type of basic functionality, but instead only reacts to objects based on their classification. Not sure about the Uber yet, but it's likely the same.
 
"The pedestrian which was struck was not in the car's path until a very short time before it was struck. Despite what you might think, the reaction time of an autonomous vehicle is not on the millisecond scale. Autonomous technology is not 'better' than human drivers (if you believe it is better at all) because of speed."


The pedestrian was clearly on an intersecting trajectory, and if the autonomous technology cannot predict this, it is useless.
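
Predicting it is not conceptually hard; even a minimal constant-velocity extrapolation flags the conflict seconds ahead. All positions and speeds below are illustrative, only loosely shaped like the Uber scenario:

```python
def min_separation_m(car_pos, car_vel, ped_pos, ped_vel, horizon_s=5.0, dt=0.1):
    """Smallest predicted car-pedestrian distance over the horizon, assuming
    both continue at constant velocity (positions in m, velocities in m/s)."""
    best, t = float("inf"), 0.0
    while t <= horizon_s:
        dx = (car_pos[0] + car_vel[0] * t) - (ped_pos[0] + ped_vel[0] * t)
        dy = (car_pos[1] + car_vel[1] * t) - (ped_pos[1] + ped_vel[1] * t)
        best = min(best, (dx * dx + dy * dy) ** 0.5)
        t += dt
    return best

# Car northbound at 17 m/s; pedestrian 60 m ahead and 10 m to the left,
# crossing toward the car's lane at 2.8 m/s while pushing a bicycle:
d = min_separation_m((0.0, 0.0), (0.0, 17.0), (-10.0, 60.0), (2.8, 0.0))
print(f"predicted minimum separation: {d:.1f} m")  # ~0.5 m: collision course
```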
 
The video that I linked just a few posts above is interesting. The left route option (which it appears to be following) seems to have two lanes. But it appears that the AV selected the *3rd* (rightmost) lane of the *two*. This hints that the system doesn't even bother to cross-check its decisions against the maps; maps which are available on-board (within the normal navigation system).

In other words, the system selected "Lane 3" (the rightmost non-lane median) of the two lanes available. Maps that would show these lanes (and thus the non-lane medians leading to a barrier) would presumably already be on-board the vehicle, but apparently they're not integrated.

 
VE1BLL - it appears it "lost" track of the right dotted line and started following only the solid white "lane" marking on the left, so it probably assumed it was still in the left of two lanes. It was flashing a white ring around the cluster, which I think is the initial warning to the driver to put their hands back on the wheel. I didn't see any indication in the lane display of an object in front, or a collision warning, or that it engaged the emergency braking. The white lines on each side of the lane display do change, indicating which lane markings it is following, or not, as the case changes.

I've seen comments about the need to improve lane markings for self-driving cars. But blaming poor lane markings for the failure of the system isn't a very useful or practical solution overall, considering that maintaining bright, clear lane markings isn't a priority in many areas. That video illustrates how very far away these systems are from reliably working in more challenging situations.
 
Cross-checking against navigation isn't a strategy that can be fully relied upon, either. Construction or other emergency situations can alter the exact path of lanes in a flash ... faster than the navigation maps could be updated.

A human driver should have sorted this out by looking further down the road and establishing the difference between an actual travel lane and the "bull nose", and further establishing which travel lane was the desired one, without having to resort to a (potentially out of date) navigation system.

A road near me is under construction at the moment, being widened. One day all the travel lanes are to the north of what will be the new central divider, the next day they're all to the south, the next day the east and west lanes are in their proper positions with respect to the central divider but only the outer lanes are open, the next day only the inner lanes are open ... you never know what you're going to get. Humans have little difficulty navigating the path through the construction barriers and arrow signs ... provided they look far enough ahead.

True self driving systems will have to figure this out ... and they will also have to correctly respond to the occasional presence of a policeman directing traffic whilst the construction workers reposition the barriers!
 
Improved lane markings are not going to help if it's snowing.

 