
Self Driving Uber Fatality - Thread I

Status
Not open for further replies.

drawoh

Mechanical
Oct 1, 2002
8,878
San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn into our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different ecosystem from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of the time (a quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

dik said:
There is a potential for much better road safety than what is existing, and 'driverless' vehicles will eventually be 'talking' to each other and providing each other with a 'drive plan' so positions can be anticipated.

I certainly believe it's true that better/safer systems are in our future- where this argument (not that you invented it- it's a common one) loses traction for me is exactly such cases as we're discussing here.

From what I can tell, collisions between an autonomous vehicle and another vehicle are the minority; the majority of autonomous-vehicle accidents, at least the ones that get press, are collisions between autonomous vehicles and objects or pedestrians doing abnormal things or being in abnormal places.

Point is.... those types of problems are not solved by cars talking to each other. So how do they get solved?

Ultimately, better detection and processing are needed and that's a steep hill to climb.
 
GregLocock,

My old employer manufactured laser rangefinders that had to detect the bottom of potash holes. I recall that the reflectivity was something like 20%, and that the rangefinders were rated to do this at 160 m.

From a LiDAR point of view, bicycles are transparent. You get a return from the bicycle, and you get a return from the stuff behind the bicycle. We made LiDARs that had first and last pulse logic. From an aircraft, the first return comes from the top of the trees, and the last comes from the ground. What is the difference between a woman with a bicycle, and a tumbleweed?
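To make the "first and last pulse" idea concrete, here is a minimal sketch of multi-return logic, with hypothetical function and variable names of my own (not any real LiDAR vendor's API). A single pulse hitting an open structure like a bike frame can produce a near return from the frame and a far return from whatever is behind it:

```python
# Hypothetical sketch of "first and last pulse" logic for one LiDAR
# pulse that produced multiple returns (e.g. bike tube, then the wall
# behind it). Names are illustrative, not a real sensor API.

def first_last_returns(ranges_m):
    """Given all return ranges (metres) from one pulse, report the
    first (nearest) and last (farthest) return."""
    if not ranges_m:
        return None, None  # no return at all (e.g. absorbing target)
    return min(ranges_m), max(ranges_m)

# One pulse through a bicycle: frame tube at 12.4 m, wall at 31.0 m.
first, last = first_last_returns([12.4, 31.0])
print(first, last)  # 12.4 31.0
```

The point of the question above is that both a woman-with-bicycle and a tumbleweed can produce this kind of sparse, semi-transparent return pattern, so range data alone does not settle the classification.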

--
JHG
 
The woman wasn't a tumbleweed, and certainly was solid enough that any AI should have considered her a possible collision object, particularly the closer the car got to her. Moreover, Uber has contaminated our perception of the collision with the crappy video they allowed the police to release. The actual camera data would likewise have shown the woman and her bicycle, and the camera and lidar data fused together certainly should have set off warning bells in the software. The lack of any reaction is really disturbing. And let's not forget that the car ostensibly also had a radar, which should have gotten decent returns from the bicycle, particularly at the short ranges just prior to impact.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
A /robust/ trajectory estimator for a pedestrian would take the current velocity and position, and then create a bubble expanding at 1g in the horizontal plane.
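One way to read that proposal: the reachable set of the pedestrian is a disc centred on the constant-velocity prediction, whose radius grows as 0.5·g·t² if the pedestrian can accelerate up to 1 g in any horizontal direction. A minimal sketch under that assumption (function name is mine, not from the post):

```python
# Sketch of a 1 g "expanding bubble" pedestrian predictor, assuming the
# pedestrian can accelerate up to 1 g in any horizontal direction.
G = 9.81  # m/s^2, 1 g

def pedestrian_bubble(pos, vel, t):
    """Reachable set after t seconds: a disc centred on the
    constant-velocity prediction pos + vel*t, radius 0.5*g*t^2."""
    cx = pos[0] + vel[0] * t
    cy = pos[1] + vel[1] * t
    radius = 0.5 * G * t ** 2
    return (cx, cy), radius

# Pedestrian at the origin walking 1.5 m/s in +x: where could she be in 1 s?
center, r = pedestrian_bubble((0.0, 0.0), (1.5, 0.0), 1.0)
print(center, r)  # radius is about 4.9 m after one second
```

Note how fast the bubble grows: within about a second it already spans most of a traffic lane, which is why such an estimator is conservative by design.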

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
JStephen said:
But when these autonomous systems don't sense something, they just plow into it at full speed, which is not too pretty.
For all of the discussion (arguing?) here, I come back to one simple thought... the car didn't try to brake at all before hitting a sensor-solid object bigger than a breadbox.

All of this discussion about object classification is irrelevant to the above fact. It doesn't matter WHAT the object in front of the car was classified as, because it was in front of the car for an extended period of time (>1 second, plenty of time for practically every sensor to recognize it as existing).

I could likely ask a high school programmer to figure out the logic of applying brakes when "the object directly in front of the car is closing at the same speed the car is moving", i.e., "I'm approaching a stationary object directly in my path".

Anywhere within that 1+ second interval the car could have at least shed some speed, but it didn't. It went headlong into the object.
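That "high school" logic really is small. Here is a sketch of the check being described, with hypothetical thresholds of my own choosing (this is the poster's point in code form, not a real AEB design):

```python
def fallback_brake(range_m, closing_speed_mps, vehicle_speed_mps,
                   ttc_threshold_s=1.5):
    """Crude last-ditch braking check, independent of classification:
    if something directly ahead is closing at roughly our own speed
    (i.e. it is stationary in our path) and time-to-collision is short,
    brake. Threshold values are illustrative assumptions."""
    if closing_speed_mps <= 0:
        return False  # not closing; nothing to do
    stationary_object = abs(closing_speed_mps - vehicle_speed_mps) < 0.5
    time_to_collision = range_m / closing_speed_mps
    return stationary_object and time_to_collision < ttc_threshold_s

# Object 17 m ahead, closing at 17 m/s while we travel 17 m/s: TTC = 1 s.
print(fallback_brake(17.0, 17.0, 17.0))  # True
```

Even if this fired late, shedding speed in the last second of that window changes the severity of the impact substantially.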

I have to make certain assumptions, though they may be incorrect. First and foremost, the software recognizes when sensor data isn't being properly received... missing data from an unplugged cable, high incidence of out-of-range data (i.e., the sensor itself is going bad), etc. Second, the Kalman filter is set up correctly... the wrong coefficients here can easily lead to incorrect predictions, particularly if those predictions are based upon "tamed" data (i.e., the system has not been tested with impulse data, like someone stepping out from behind a tree and jumping into the travel lane). If the first issue has not been handled correctly (bad/corrupted sensor data), then the output of the filter would also be highly suspect.

This is where I'm leaning... a mixture of bad sensor data and poorly trained prediction filters led to the car initially not having enough valid data to operate on (or it had enough valid data that was too heavily tainted with bad data), and once the well had been poisoned, the "reaction" algorithms couldn't make a proper decision. It's possible, given valid/untainted data and a few more seconds to move that data through the Kalman filter, a valid decision would have been arrived at. I would really love to see the dataset it was working off of about 30 seconds before up to 10 seconds after the accident.
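To illustrate the "tamed filter" concern, here is a minimal one-dimensional Kalman filter with a random-walk state model; everything about it (names, tuning values, the step scenario) is my own assumption, not anything known about Uber's stack. With tiny process noise, a sudden step in the measurements, like someone stepping into the lane, is treated as sensor noise and tracked only sluggishly:

```python
def scalar_kalman(measurements, q, r, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter (random-walk state model). Small
    process noise q asserts "the state barely changes", so a sudden
    step in the data is discounted as noise and tracked slowly."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                  # predict: uncertainty grows by q
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward the measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Lateral position jumps from 0 to 3 m (pedestrian steps into the lane).
step = [0.0] * 5 + [3.0] * 5
tame = scalar_kalman(step, q=1e-4, r=1.0)   # over-smoothed tuning: lags badly
alert = scalar_kalman(step, q=1.0, r=1.0)   # responsive tuning: converges fast
print(tame[-1], alert[-1])
```

Five samples after the step, the responsive tuning is essentially on target while the over-smoothed one is still less than halfway there, which is exactly the "tested only on tamed data" failure mode described above.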


Dan - Owner
 
"All of this discussion about object classification is irrelevant to the above fact. It doesn't matter WHAT the object in front of the car was classified as, because it was in front of the car for an extended period of time (>1 second, plenty of time for practically every sensor to recognize it as existing)."

When the system only reacts to objects that might pose a collision threat, but has classified the object in front of the car as one it believes can be ignored, then it will blissfully ignore it.

Oh wait, I'd better say that when the processor classifies the data from the object as data to be ignored, it ignores that data and is, for all intents and purposes, blind to the object even when it is in the path of the car, just in case only saying "object" offends someone yet again.
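The gating failure being described can be sketched in a few lines; the class names and data shape here are hypothetical, purely to show the structure of the bug. If the collision check only iterates over objects whose label is in a "threat" set, a mislabelled object never reaches the geometry check at all, no matter where it is:

```python
# Hypothetical sketch of classification-gated threat detection: the
# label filter runs BEFORE the in-path geometry check, so a mislabelled
# object directly in the car's path is silently dropped.
THREAT_CLASSES = {"vehicle", "pedestrian", "cyclist"}

def gated_threats(tracked_objects):
    """Classification-gated pipeline: filters by label first."""
    return [o for o in tracked_objects
            if o["label"] in THREAT_CLASSES and o["in_path"]]

def ungated_threats(tracked_objects):
    """Fallback view: anything solid in the path counts, label or not."""
    return [o for o in tracked_objects if o["in_path"]]

# A woman with a bicycle, mislabelled as ignorable debris, in the path:
scene = [{"label": "unknown/debris", "in_path": True}]
print(len(gated_threats(scene)), len(ungated_threats(scene)))  # 0 1
```

This is why a label-independent backup braking check matters: it sees everything the sensors report in the path, regardless of what the classifier decided.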

If you didn't notice, my pointing out that the car should have some type of basic backup emergency braking function is what started the whole stupid mess you see above.
 
This Uber accident discussion Thread I is CLOSED.

Please continue any additional discussion in Thread II:

thread815-437388

Thank you.

--
JHG
 
