Self Driving Uber Fatality - Thread II

drawoh (Mechanical)
Continued from thread815-436809

Please read the discussion in Thread I prior to posting in this Thread II. Thank you.

--
JHG
 

"Your LiDAR will impose a maximum speed on your car. If the LiDAR cannot see and identify the hazard, it must be moving slowly enough that it can react when the hazard becomes visible. Fog and curved roads all are an issue. "

Yes, which is why the maximum detection range needs to be at least 120 m. OASYS had a 400-m detection range for a nominal 120-kt speed, but the Cobra was way more maneuverable than a car in some situations. Fog and curves are separate issues. Lidar will suck in heavy fog, regardless; the maximum range is severely reduced, although an APD-based design, or a longer wavelength, could possibly get you better performance. Otherwise, radar is king in fog. This is why a single sensor modality is suicidal.
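
A back-of-the-envelope check on that (the 0.5 s latency and 0.5 g braking numbers below are my assumptions, not anyone's spec):

# Rough stopping-distance check: how far ahead a sensor must see for the
# vehicle to stop before reaching a newly detected hazard.
G = 9.81  # m/s^2

def required_detection_range(speed_mps, latency_s=0.5, decel_g=0.5):
    reaction = speed_mps * latency_s              # distance covered before braking starts
    braking = speed_mps ** 2 / (2 * decel_g * G)  # v^2 / (2a)
    return reaction + braking

for mph in (25, 45, 65):
    v = mph * 0.44704  # mph -> m/s
    print(f"{mph} mph -> {required_detection_range(v):.0f} m")

# Roughly 18 m, 51 m, and 101 m with these assumptions; fog margins, wet
# pavement, and higher speeds push the requirement toward the ~120 m figure.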

Curves are what they are; if there are obstructions, you'll have to slow down. Otherwise, biasing the FOV into the curve is the most plausible approach, just like steerable headlights. That was the plan for OASYS as well.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
jgKRI said:
If the processing system receives a frame from the sensor array that contains an area of ambiguous data, it is highly likely for there to be a routine which effectively crops out this portion of the frame (by truncating the output of the frequency domain conversion).

This is a necessary function, so that the system doesn't immediately fail if (when), for example, a rain drop hits the optics and causes a blurry spot on the images being processed.
This is a fail-SAFE system. One cannot simply ignore a portion of the frame because the data coming back from it is odd. If that were how the system was put together, a simple bit of mud would flummox the entire system and it would drive on at full force with zero reaction to anything... "If I can't see it, I can pretend there's nothing ever there!" Nope, not gonna happen.

Dan - Owner
 
It's not yet clear whether all the required sensors and modalities have been fully fleshed out. Cameras have way more resolution than lidars ever will, and they can potentially see through car windows for lurkers. We'd do the same, if we could afford the distraction, which we can't, so we don't, and we take the risk of a pop-up target. I could sort of imagine wanting lidars mounted under the front bumpers so that they could see under parked cars; that would possibly provide some ability to see into the shadows that parked cars cast in the lidar and camera views.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
MacGyverS2000 said:
One cannot simply ignore a portion of the frame because the data coming back from it is odd. If that were how the system was put together, a simple bit of mud would flummox the entire system

On the contrary- under certain conditions one must ignore portions of the frame. This is mandatory for any system which uses optics of any kind- whether they be LIDAR, visual range, infrared, whatever.

It is impossible to design a system in which the optics will never, under any circumstances, be fouled. It is completely impossible.

If the optics are fouled, the system must be capable of handling that condition, period.

Trimming frames is absolutely mandatory. Determining exactly what conditions should be allowed to trigger the frame trimming operation is very, very delicate, as frame trimming COULD result in this type of incident under very specific conditions.
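
For what it's worth, here is a toy illustration of what I mean by trimming; this is not any vendor's actual code, and the cell size and variance threshold are invented:

import numpy as np

# Toy frame trimming: mask image cells whose local statistics suggest fouled
# optics (a mud or rain blob), and report how much of the frame was discarded
# so a supervisor can decide whether to degrade or hand control back.
def trim_frame(frame, cell=32, var_floor=5.0):
    h, w = frame.shape            # assumes a 2-D grayscale frame
    mask = np.ones((h, w), dtype=bool)
    fouled_cells = 0
    total_cells = 0
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            patch = frame[y:y + cell, x:x + cell]
            total_cells += 1
            # Very low variance = featureless blur, a hint of fouling.
            if patch.var() < var_floor:
                mask[y:y + cell, x:x + cell] = False
                fouled_cells += 1
    return mask, fouled_cells / max(total_cells, 1)

# If the fouled fraction exceeds some limit, the system should not silently
# keep driving; it should escalate (slow down, alert the driver, disengage).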

As another disclaimer- I do not know with any certainty that this type of image processing error was the root cause of the Uber accident. It is just one of many possibilities.

MacGyverS2000 said:
and it would drive full force with zero reactions to anything... "If I can't see it, I can pretend there's nothing ever there!". Nope, not gonna happen.

In any class of autonomous vehicle operating, there will always be some set of conditions in which the vehicle will cease to operate.

Below Class 5, this means detecting anomalies which prevent reliable system operation, and returning control to the driver as immediately as possible.

IF an image processing error due to ambiguous data was the root cause of one of these incidents, then both design teams are on the hook for not building the architecture to return control to the driver under these conditions.

Once again, this is purely speculative.
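
Purely as a sketch of the kind of supervisory behavior I'm describing (the one-second threshold is invented), something along these lines:

import time

# If perception is degraded (too much of the frame trimmed, no returns in a
# lidar sector, etc.) for more than a short window, stop trusting the
# automation and hand control back with a warning.
class DegradationWatchdog:
    def __init__(self, max_degraded_s=1.0):
        self.max_degraded_s = max_degraded_s
        self._degraded_since = None

    def update(self, perception_ok, now=None):
        now = time.monotonic() if now is None else now
        if perception_ok:
            self._degraded_since = None
            return "NORMAL"
        if self._degraded_since is None:
            self._degraded_since = now
            return "DEGRADED"
        if now - self._degraded_since > self.max_degraded_s:
            return "HANDOVER"   # alert driver, begin controlled slowdown
        return "DEGRADED"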
 
In the case of the Velodyne design, fouling of any sort pretty much wipes out complete sectors of lidar FOV. Unlike the camera, it's nearly impossible for fouling to generate false returns, although I've been wondering how small raindrops on the exterior of the Velodyne unit will affect its line-of-sight (LOS) performance; it relies solely on centrifugal force to shed rain from its apertures. In SoCal, I guess it'll force me to wash the car regularly, or at least the lidar.

Certainly, not getting any returns is an indication of sensor misbehavior. Another thing that might be lacking is intensity returns. The Velodyne only outputs range; target intensity could provide useful information for processing targets.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
The Velodyne returns intensity.

"Additionally, state-of-the-art signal processing and waveform analysis are employed to provide high accuracy, extended distance sensing and intensity data."
 
Missed this one from last year as well, of a Tesla driving into a median wall (gif/video):


The solution to these Level 2 autonomy problems seems very straightforward as Cadillac has demonstrated. It's as simple as active-driver monitoring and geofences to only allow system activation in well-mapped and controlled access environments.

[Two still frames from the linked Tesla dashcam GIF]


Seems pretty stupid for a smart car.
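
On the geofencing point, the gatekeeping logic itself is not complicated; a crude sketch, where the corridor polygon and the driver-attention flag are stand-ins rather than GM's actual implementation:

# Only engage the system inside pre-mapped, limited-access road segments and
# while driver monitoring reports eyes on the road.
def point_in_polygon(lat, lon, polygon):
    # Ray-casting test; polygon is a list of (lat, lon) vertices.
    inside = False
    n = len(polygon)
    for i in range(n):
        la1, lo1 = polygon[i]
        la2, lo2 = polygon[(i + 1) % n]
        if (lo1 > lon) != (lo2 > lon):
            t = (lon - lo1) / (lo2 - lo1)
            if lat < la1 + t * (la2 - la1):
                inside = not inside
    return inside

def may_engage(lat, lon, mapped_corridor, driver_eyes_on_road):
    return driver_eyes_on_road and point_in_polygon(lat, lon, mapped_corridor)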
 
I have been following the progress of self-driving cars since the DARPA Grand Challenge in 2004. It's interesting because it's the first widespread application of AI in a largely uncontrolled real-world environment. Road vehicle fatalities are a big killer, and anything to reduce them should be encouraged. But the question is: will people accept fatalities as a result of self-driving cars, even if it reduces the overall rate? Computers might not make the same dumb mistakes people do, but they will make the dumb mistakes computers make.

We inevitably hear more about failures than successes. I don't have any stats, but this video shows a few of the times Teslas have managed to avoid accidents: Tesla Autopilot Predicts Crash Compilation 2

I'm a bit dubious that self-driving cars will ever reach Level 5 without changes to road infrastructure to support them. Unfortunately, self-driving cars can't be trained purely in a simulator; they have to be trained in the real world. At the moment, public opinion and the authorities appear to favor their development. As the limitations of the software become more apparent, the tide of opinion may turn against experimental software being tested in the field when it has fatal consequences.
 
The Dallas incident is similar to the San Jose one, where the Tesla seems to mindlessly follow the lane markings, without paying much attention to the fact that the right lane marking either disappears or diverges to the right.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Maybe something, or not, but it seems that wherever I have traveled, people tend to have a different set of bad driving habits. Thus CA drivers tend to have a different set of bad driving habits than TX drivers.

So it may be that self driving cars may have a different set of bad driving habits depending on the set of programmers.

The trick may be to know the expected range of bad driving and bad pedestrian habits. That might require a study that people will say is a waste of money, and they will not believe some of the results (people don't like to hear about the bad things they do).

The results might show that bad habits will be punished in the future as deaths or increased accidents.

So will insurance for AI driving skills be rated on the results of long-term driving tests? Or by the car type and brand?
 
IRstuff said:
The Dallas incident is similar to the San Jose one, where the Tesla seems to mindlessly follow the lane markings, without paying much attention to the fact that the right lane marking either disappears or diverges to the right.

The Tesla software supports the case where a lane marking is visible on only one side; the lack of a second lane marking isn't necessarily an error.

In the case of lane markings diverging, how does the software know _which_ lane marking is the "right" one to follow? Either one could be incorrect. Tesla explained that collision avoidance is not signaled to the driver if the driver can safely steer away from it, but that does not appear to be true based on the YouTube video.
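
To make the ambiguity concrete, here is a toy sketch of the single-marking fallback; the lane width and sign conventions are my own assumptions, not Tesla's code:

# When only one lane line is tracked, hold a nominal half-lane offset from it.
# The gore-point failure mode is visible in the logic: if the tracked line is
# the one that diverges toward the barrier, the car follows it.
LANE_WIDTH_M = 3.6

def lane_center(left_line, right_line):
    """Signed lateral positions (m) of each detected line relative to the car
    centerline (left negative, right positive), or None if not detected."""
    if left_line is not None and right_line is not None:
        return (left_line + right_line) / 2.0
    if left_line is not None:
        return left_line + LANE_WIDTH_M / 2.0   # assume nominal lane width
    if right_line is not None:
        return right_line - LANE_WIDTH_M / 2.0
    return None  # no lines: this should trigger a handover, not a guess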

I believe in the Uber case the safety drivers did two jobs: safety, plus performance monitoring of the software and noting discrepancies for later analysis. Uber used to have two people per car; they decided that the safety driver could fill both roles. Obviously, that is cheaper.

There are several problems with partial automation, where the user still has to monitor the computer, such as automation dependence. Inattention is a prime cause of road accidents, yet now we are asking people to pay attention to the computer's decisions (or non-decisions), which has already been shown to be a human weakness. That doesn't add up.

In particular, the ability of the human to understand in all cases what the computer is doing is very poor. I also follow aviation accidents, and while automation probably saves a lot of accidents (there is no hard data on that), there are many accidents that occur due to the human/computer interface becoming decoupled, when if the pilots had simply turned off the autopilot and flown the plane manually, no accident would have occurred.

It's also worth noting that passenger jet autopilots do not do obstacle avoidance; TAWS and TCAS do terrain and traffic detection, but they only provide advisory alerts to the pilots.
 
"when if the pilots had simply turned of the autopilot and flown the plane manually, no accident would have occurred."

In many other cases, the warning systems engaged, but the pilot was so engrossed with an irrelevant input that the planes crashed. There was one where the copilot and navigator both recognized the stall warning, but were apparently afraid to prompt the pilot, and the plane crashed.

Tesla clearly screwed the pooch by naming it the way they did. Some idiot, probably Musk himself, approved that name. That was a disaster waiting to happen.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
There's also going to be the case where these types of systems have fewer and fewer incidents, but the incidents that remain become worse and worse. If they eliminate all of the fender benders but don't do well beyond that, the number of incidents will drop dramatically while the average severity rises significantly. In that way the AI systems will always look worse than human drivers at first glance.
 
microwizard said:
...

There are several problems with partial automation, where the user still has to monitor the computer, such as automation dependence. Inattention is a prime cause of road accidents, yet now we are asking people to pay attention to the computer's decisions (or non-decisions), which has already been shown to be a human weakness. That doesn't add up.

In my child-by-the-side-of-the-road scenario, above, the car will decelerate at a half G to a safe halt in 5.66 s. If the car does not react and is headed for the kid, impact will be in half that time, 2.83 s. If you are sitting in the driver's seat, paying attention, it will take you something like 3/4 s to react. Luckily, this provides time for a successful panic stop. If you are not paying attention, it will take you several seconds just to figure out what is happening. If you are quick, you will know what it was you hit, like that lady in the Uber car. Otherwise, you will be wondering what that loud thump was. This is why texting and driving is dangerous.
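
Running those numbers (the 5.66 s stop at half a G is from my scenario; the ~100 km/h speed it implies and the 0.9 g panic-stop deceleration are assumptions):

# Quick check of the scenario above.
G = 9.81
decel = 0.5 * G                        # comfortable 0.5 g stop
t_stop = 5.66                          # s, as stated above
v = decel * t_stop                     # ~27.8 m/s (~100 km/h)
d_hazard = 0.5 * decel * t_stop ** 2   # ~78.6 m: distance at which braking must begin
t_impact = d_hazard / v                # = t_stop / 2 = ~2.83 s if nobody brakes

# Attentive driver: ~0.75 s reaction, then a hard panic stop at ~0.9 g.
reaction = 0.75
d_panic = v * reaction + v ** 2 / (2 * 0.9 * G)   # ~21 m + ~44 m = ~65 m
print(f"{d_panic:.0f} m needed vs {d_hazard:.0f} m available -> attentive driver stops in time")
print(f"time to impact if nothing reacts: {t_impact:.2f} s")
# A driver who needs several seconds just to re-orient has already used up
# the entire 2.83 s and hits the hazard at essentially full speed.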

Aircraft accidents do not happen as fast as road accidents. A safety driver is of no use unless they are holding the controls and paying 100% attention.

--
JHG
 
At what point will we humans just be in the way?

If we just had most things delivered and worked from home, there would be fewer accidents.
 