
Self Driving Uber Fatality - Thread I 17

Status
Not open for further replies.

drawoh

Mechanical
Oct 1, 2002
8,878
San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different ecosystem from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of their time (quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

There are potential ways to avoid interference, such as the protocols used by GPS to avoid interference between satellite signals. One of the reasons a cold boot takes a long time is that the channel receivers have to decode the sequences and then verify that all the signals being received are consistent with the receiver's decoding of the signals.

Alternately, one could imagine using something like programmable quantum cascade lasers with unique wavelengths.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
I presume these cars must have some kind of "decision log": "I have detected XYZ, therefore I shall do ABC in response".

I wonder what this log would show in this case? I would be willing to bet that the woman and bike were detected by the sensors, but for whatever reason were not deemed a threat to be avoided. Perhaps she looked like a motorbike merging from the left into the lane in front of the car, and thus did not need to be avoided?

I wonder what decision is made when it is 'too late' to avoid a threat in a safe manner? Do you prioritise occupant safety (e.g. avoid sudden / dangerous braking which might result in a pile up) or do you prioritise pedestrian safety (and do absolutely anything possible to slow down before hitting them, even if this might escalate into a pileup)?

The video of the 'person behind the wheel' is shocking. Her level of inattention is criminal.
 
Don't like bush. Tumbleweed???

The Tesla decided the truck trailer in the path of its windshield was not a concern, since it was classified as an overhead sign, and signs don't move and cars are supposed to be able to drive under them....

The most likely cause is that a misclassification led the "AI" to decide she wasn't something the car could hit. Classification as a motorcycle travelling in the same direction and changing into the same lane doesn't make much sense, since the car was rapidly approaching it and should brake or otherwise try to avoid any vehicle it is closing on quickly. Classification as an object the car is not supposed to be able to hit makes more sense than a wrong-but-hittable one that moves.

The weather report for Tempe last Sunday says the winds were gusting to 26 mph. Was the foliage on the sides of the road blowing around enough to confuse the "AI"?

We can all speculate, but we'll only find out what really went wrong if/when Uber releases any findings on the accident.

Spartan5 - yes, I already posted a link to the street view pretty much pointing out the spot where it happened even before the video was released. So I'm quite aware of the location and street configuration. The evidence so far makes it quite clear there was plenty of time for detection and she didn't abruptly dart into the car's path, so something else went wrong.
 
Is it possible to incorporate all LiDAR data to give a better 3D layout of the area?

Dik
 
Tomfh,

LiDAR fires a laser. A few nanoseconds after the laser fires, the receiver sees what is called the t[sub]0[/sub] blast. The LiDAR electronics start counting, waiting for the signal to bounce off something and reflect back into the receiver lens. The receiver probably will have a narrow-band interference filter that excludes all light that is not within, say, 2 nm of the laser signal. LiDARs now are fairly rare, so that 905 nm signal you are detecting almost certainly is yours. If fifty cars all have LiDAR, that signal almost certainly is not yours, and you have no way to make sense of the other signals. There is not enough bandwidth to give each vehicle its own laser wavelength, even if this were practical in cost-sensitive production.

The company I worked for was developing airborne LiDARs that flew high enough, and ran at high enough laser pulse rates, that the lasers were firing before the previous pulse came back from the ground. There were all sorts of tricky electronics for dealing with that. Of course, this would not be a problem for a car approaching a woman pushing a bicycle across the highway. LiDAR scanners are a whole lot of fun.

--
JHG
 
drawoh: Thanks... didn't realise that the pulse it sends out is for timing and distance.

Dik
 
Lidars measure distance using time of flight (TOF) [distance = c × TOF / 2]. Typical pulse widths are on the order of nanoseconds, as are the times of flight; for 100-m radius coverage, TOF is 667 ns.
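The time-of-flight arithmetic above can be sketched in a few lines (a minimal illustration; the constant and function names are mine, not from any lidar vendor's API):

```python
# Minimal sketch of lidar ranging: the pulse travels out and back,
# so distance = c * TOF / 2.
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(tof_s: float) -> float:
    """Distance to the target given the round-trip time of flight, in metres."""
    return C * tof_s / 2.0

# A ~667 ns round trip corresponds to a target roughly 100 m away:
print(round(lidar_range_m(667e-9), 1))
```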

The design beamwidth of a lidar might be on the order of 1.5 mrad, which is less than 0.1 deg. Lidars need to be scanned to cover the 50 deg or so of frontage, so the receivers are aligned with, and have fields of view (5 mrad-ish) comparable to, the beamwidths. The receiver FOV needs to be larger than the beamwidth to allow for physical misalignment and TOF during the scan. An opposing lidar's beam landing on one's own receiver will be very rare, given the small beamwidths and FOVs, and masking by other cars. But such events are essentially non-events, in the sense that the strength of the signals is likely to saturate the receivers.

Returns from cars going in the same direction are likewise relatively rare, as there is also masking by other cars, and the time and angles over which a Lambertian return can actually get into the FOV of a receiver are limited. Nevertheless, the interference can be mitigated by a pulse-coding scheme with a matched-filter receiver. An additional mitigation could be varying the pulse energy as a function of traffic congestion, since having a car 20 ft in front of you means that firing the lidar to find a 100-m distant target is not realistic.

Additionally, the collision-avoidance processor needs to maintain a 3D database of detected objects, generate trajectories as required, and apply a fading memory to kill off older, no-longer-relevant objects.
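A toy sketch of such a fading-memory object store might look like the following; the class names, fields, and the 2.0 s expiry are illustrative assumptions, not any vendor's design:

```python
# Hypothetical sketch of a fading-memory object store: detections refresh
# a track's timestamp, and tracks unseen for too long are pruned.
# The 2.0 s expiry is an arbitrary illustrative value.
from dataclasses import dataclass

@dataclass
class Track:
    obj_id: int
    position: tuple   # (x, y, z) in metres, vehicle frame
    last_seen: float  # seconds

class ObjectStore:
    def __init__(self, max_age_s: float = 2.0):
        self.max_age_s = max_age_s
        self.tracks: dict[int, Track] = {}

    def update(self, obj_id: int, position: tuple, now: float) -> None:
        # New detection: create or refresh the track.
        self.tracks[obj_id] = Track(obj_id, position, now)

    def prune(self, now: float) -> None:
        # Fading memory: drop tracks not refreshed within max_age_s.
        self.tracks = {i: t for i, t in self.tracks.items()
                       if now - t.last_seen <= self.max_age_s}

store = ObjectStore()
store.update(1, (30.0, -1.5, 0.0), now=0.0)   # pedestrian ahead-left
store.update(2, (80.0, 4.0, 0.0), now=0.0)    # distant object
store.update(1, (28.0, -1.4, 0.0), now=1.0)   # pedestrian re-detected
store.prune(now=2.5)                          # object 2 unseen for 2.5 s
print(sorted(store.tracks))                   # only object 1 survives
```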



TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Haven't dealt with LiDAR, but I have to wonder what techniques they use that are similar to spread spectrum stuff... Gold Codes and the like. Something along those lines would improve the cross-correlation of "your" signal versus those of the other 500 cars in visible range, but would require a bit more computation.

Also found this interesting snippet:

Dan - Owner
 
The victim was walking her bicycle across the road. She was already in the left lane for an extended period. She did not dart out of the shadows; the Uber's headlights eventually reached her. The Uber was driving beyond the range of its headlights, and its LiDAR obviously failed to help.

It's a comprehensive failure.

IEEE Spectrum

Screen capture cropped:
 
IEEE Spectrum, "Although the video shows the pedestrian appearing almost out of thin air, she would have, in fact, crossed two turn lanes, a through lane, and half of the Uber car’s lane before being struck—that’s roughly 42 feet. Walking at a speed of 3.5 feet per second, the design walking speed for traffic light 'walk' signals, she would have been on the road for more than 10 seconds before impact."
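The arithmetic in the quote checks out:

```python
# Checking the IEEE Spectrum numbers: 42 ft crossed at 3.5 ft/s,
# the design walking speed for traffic-light "walk" signals.
distance_ft = 42.0
walk_speed_ftps = 3.5
time_on_road_s = distance_ft / walk_speed_ftps
print(time_on_road_s)  # 12.0 s, consistent with "more than 10 seconds"
```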
 
Maybe a dumb question, but who called this in?
Would the car automatically do this? Or would it have kept on going?
Can an AI be ticketed for hit and run?
 
Daytime view of that location attached. The accident would have been approximately where the car in the right lane is located in the picture.
Note the pavers in the median on the left side. Just off the picture is a "no walking" sign the city posted in that area. I gather this must have been a problem even before this incident.

 
 http://files.engineering.com/getfile.aspx?folder=95f2d3b5-e21f-4161-84bd-768f0a6864da&file=mill_ave.JPG
Regarding "coming out of the shadows": locals are starting to draw attention to just how poorly the released video represents what it is actually like at night there.

Another dash cam still from that exact spot:
 
The police essentially aided and abetted Uber in potentially steering public opinion about their culpability in the accident. This is a huge fail, particularly given the example dashcam image with decent histogram equalization.

Uber shot themselves in the foot with their video, because the better video shows that TWO sensor systems failed to operate correctly: the video cameras should have been capable of seeing the pedestrian (simple change detection would have caught the lateral motion into the car's lane), and the lidar likewise should have detected the pedestrian from double or triple that distance.
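As a rough illustration of the "simple change detection" idea, frame differencing flags pixels whose intensity changes by more than a threshold between successive frames (toy 2D lists stand in for camera frames here; the sizes and threshold are arbitrary assumptions):

```python
# Toy frame-differencing change detector. Frames are 8x8 grayscale
# intensity grids; a real system would use actual camera images.
def changed_pixels(prev, curr, threshold=25):
    """Count pixels whose intensity changed by more than `threshold`."""
    return sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(c - p) > threshold
    )

frame1 = [[10] * 8 for _ in range(8)]   # dark, static scene
frame2 = [row[:] for row in frame1]
for r in range(3, 6):
    for c in range(2, 5):
        frame2[r][c] = 90               # a 3x3 patch brightens

print(changed_pixels(frame1, frame2))   # 9 changed pixels
```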


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Incidentally here's a video of a LiDAR system in action
Eyeballing the resolution and range at which the LiDAR was functioning, it seems hard to believe that it would not have picked up a pedestrian several seconds before impact.

Here however is an old blog on the subject
"The most common errors for detectors are:

detecting tree leaves or traffic lights in background as pedestrian
detecting the same person twice
not detecting small persons
not detecting cyclists"

oo er.



Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
Seems like all those issues are not the "detector's" issues, per se, assuming we define detector as the transmitter and receiver. The issue is what to do with the detections that must obviously have occurred, which is a processing problem. The target trackers for ballistic missile defense were capable of tracking hundreds of targets simultaneously, and the trick is to figure out trajectories of the targets and whether the trajectories will cross into the car's lane.

US football receivers running at their fastest would seem to be a plausible upper bound for "pedestrians," at about 28 ft/s. This suggests the lidar needs a frame rate on the order of 5 to 10 Hz to correlate runners as single targets moving at a high rate. Slow targets might be the senior citizen in front of me in the supermarket, moving at about 0.5 ft/s. Usually the big challenge isn't the targets, it's the obscurations, such as when a slow-moving target walks behind a wall or billboard. A conventional tracker might get fooled into thinking the target came to a stop at the leading edge of the obscuration and decide not to look for the target to re-emerge on the far side. Faster targets are less problematic with obscurations, but they aren't problem-free.
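The frame-rate reasoning above comes down to how far a target moves between frames, which sets the correlation gate a tracker needs (a back-of-envelope sketch using the speeds given above):

```python
# Per-frame displacement = speed / frame rate: roughly the distance a
# tracker must "gate" to associate detections of one target across frames.
def displacement_per_frame_ft(speed_ftps: float, frame_rate_hz: float) -> float:
    return speed_ftps / frame_rate_hz

for label, speed in [("sprinting receiver", 28.0), ("slow shopper", 0.5)]:
    for hz in (5.0, 10.0):
        d = displacement_per_frame_ft(speed, hz)
        print(f"{label} at {hz:.0f} Hz: {d:.2f} ft/frame")
```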

Lidars have one serious limitation that makes the processing so difficult, and that's shadows, i.e., the areas behind objects that block further transmission of the laser and where pedestrians tend to suddenly emerge into traffic.



TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
I've seen suggestions that target /recognition/ might have been more difficult because she was behind the bike. Does this mean that Uber cars will intentionally run into things 1.5 m tall and 2 m wide simply because they can't recognise them? I'd have thought not hitting large objects was pretty much AV 101.

This is different to the Tesla invisible truck problem because the Tesla system lacks a LiDAR, so it has to have good visual analysis.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
The bike was behind her. The only thing that would explain the behavior is if the AI classified the inputs as a vapor cloud. I would say that adding a thermal camera would be the best discriminator for such an event.
 
"This is different to the Tesla invisible truck problem because the Tesla system lacks a LiDAR, so it has to have good visual analysis."

And it failed miserably at that. Change detection should have flagged the sudden presence of an "overhead sign," and that alone should have been an issue. The fact that the "overhead sign" went below the clearance level of the car and the system didn't conclude that was a problem is a problem. The fact that it failed to detect the wheels and undercarriage of the truck as anomalies is also a problem.




TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
IRstuff said:
The police essentially aided and abetted Uber in potentially steering public opinion...

At least we can be thankful that Uber is such a model corporate citizen with no history of ethical misfires. They've always acted in accordance with only the highest moral principles. So we can rest assured that they'll fully cooperate, honestly and openly.

--

One of the mistakes that newbie or bad drivers can make is looking for obstacles ahead. The correct logic is to look for empty road ahead. (The wording here is a simplification, but I trust that the point is clear.)

Perhaps autonomous vehicles should be subjected to a blinding Sudden Fog Bank Test. Or a Blind Curve (with too generous speed limit) Test. Such testing should be complete with a brick wall final exam.

I'm not sure that this 'safe driving logic' point is related to what happened here. Although it seems to have driven straight into a non-empty road.

So far this accident seems inexplicable. Explanations offered so far are not merely 'lessons learned', but massive failures.
 