Self Driving Uber Fatality - Thread I


drawoh (Mechanical)

San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different eco-system from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of their time (quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

The statistical angle is the entire point of the graphs I found. The best human drivers have perfect records. The worst human drivers smash into things quite often. Whether AVs are worth having very much boils down to where, within those extremes, on average, AVs fall. My guess is that, at say 1 pedestrian death in (WAG) 10 million miles, they are doing worse than the average driver. But according to the first graph I posted, even the best cohort of drivers would expect 4 crashes per million miles, or 40 crashes in 10 million miles.

I don't know what proportion of crashes result in pedestrian deaths, and I don't know how much that 4 per million has changed since 1990 (quite a lot, actually), but what I do see is that the numbers aren't immediately screaming that prototype AVs are just randomly mowing down people right, left and centre.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
I have read some interesting commentaries that basically stated that the "AI" in these cars does a lot of learning by example and a huge amount of data processing. Together, this makes it impossible to log the details of the object processing and the subsequent step-by-step decision making. In other words, it's difficult to figure out exactly why the car did something.


Greg - No, the cars are not displaying any signs of being particularly dangerous. But they should do much better than a human, especially in conditions that make driving more difficult. We'll likely never see the detailed report on the accident, but from what I have seen so far it appears this accident could have been avoided if the right precautions were taken as the car approached the woman.


From the video in the news reports, belongings were on the ground approximately above the "N Mill Ave" label in this street view.

 
Wouldn't the more sophisticated control system be expected to be more responsible for avoiding the collision? For certain we don't have a handle on how the non-AI works.
 
News: "...10PM... Pushing a bicycle laden with plastic shopping bags, a woman abruptly walked from a center median into a lane of traffic..."

Assume for a moment that the pedestrian in this case was visible, while still walking on the median.

A human driver might have been able to make some assumptions about those circumstances - seeing what is probably a homeless person, in the median, at 10PM, and therefore assume that they're perhaps somewhat less predictable than normal. So an attentive and cautious human driver might either slow down or perhaps even change lanes if possible. Mental alarm bells should be ringing, because of the context.

To help make this point crystal clear: an attentive human driver would certainly take extreme caution if they saw a toddler or small child wandering around in the center median, within a few steps of the lane, not firmly holding hands with a parent. In such an extreme example, one would probably even turn on the 4-way flashers, stop, rescue the child, call the police, etc. Would an autonomous vehicle have any such inkling of the increased risks? Do autonomous vehicles understand 'children', 'holding hands with a parent' and 'not trying to wriggle away', or 'the homeless' yet? It seems not.

It's going to be a very long time before autonomous vehicles have any common sense about the real world. And without such common sense, they'll inevitably get themselves into accidents of a different sort than would a cautious human driver.

At this point the proponents are forced to abandon any overhyped claims about an AI-driven "Accident-Free" Utopia. (Yes, such ridiculous claims have been made; perhaps from the sidelines and/or marketing departments.)

The more rational proponents can retreat to statistical comparisons, and claim autonomous superiority once the lines cross. While such a comparison is reasonable, it still leaves a vast legal and regulatory quagmire to be sorted out.
 
3DDave,

Your link explicitly describes the problem I was noting. The LiDAR's range and resolution are not sufficient for cars travelling at highway speeds. Think through what the LiDAR or video camera has to do. It has to recognize the object. It has to recognize that this is the same object it saw 100 ms ago and that it has moved. If the object is moving steadily, the robot can conclude it will continue to move steadily. Can the robot detect erratic movement or even someone's head turning to indicate a sudden change of direction, possibly in front of the robot?

--
JHG
 
JHG said:
Personal automobiles spend about 95% (quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

Autonomous Cars may indeed lead to much more Car Sharing, so these quite separate and distinct topics are related.

[To be clear, referring here to Car Sharing (alone, one by one) not Ride Sharing (i.e. car pool).]

It's worth noting that Car Sharing inherently increases total distance driven, road usage, traffic, energy (fuel) expended, and wear and tear. This applies to taxicabs, Uber services, or any future autonomous fleets wandering around.

Because (A-to-B) + (C-to-D) < (A-to-B) + (B-to-C) + (C-to-D), where (B-to-C) is the 'extra' movement.

Nobody ever seems to think about that, which is annoying considering how obvious it is. At best, there's some hand waving about future efficiencies somehow compensating for the extra distance.

The distance ratio could probably be determined by asking Taxi or Uber drivers about their total working mileage per year versus how much of that is 'paid' mileage. It would presumably vary by location. Hopefully it's more efficient than 50%, and it can't be 100%; so I'd guess it's about 75%.

Yes, there are many obvious upsides of Car Sharing; but they're well known.

 
Most drivers currently cannot accurately detect that. At least not so as to take evasive action. Instead they usually just hit the horn and expect the other person to cope. The benefit of AI cars will be that they behave uniformly.
 
VE1BLL,

I also did not account for heavy usage of automobiles at rush hour, when maybe half of them are on the road.

--
JHG
 
As in the case of the Tesla accident, it's likely that the AI is simply not maintaining context information. In the Tesla case, the truck that was hit had to have been detectable prior to it turning across the road, but it's likely the Tesla had essentially "forgotten" that there was even a truck in the vicinity.

Likewise, it's likely the LIDAR on the Uber detected the victim well before the impact, but essentially forgot the detection once a new set of detections were acquired.

We used to have a guy that worked on tracking algorithms, and when queried about why the tracker had clearly ignored a previous detected target, he stated, "Oh, I only use data from the current frame for detections."

A human driver, having detected a pedestrian, might pay more attention than normal to ensure that they can avoid the consequences of the pedestrian doing something silly, which the victim did. If I see a pedestrian too close to the road edge, I will sometimes change lanes to be further away, just in case.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
GregLocock said:
Accident rates per mile driven are biased highly towards new drivers (the stats are complex), young drivers, and old drivers.

Does this account for young drivers doing most of the driving?

Dik
 
IRstuff,

If the robot cannot maintain context information, then it cannot determine the direction and velocity of whatever object it sees. I noticed this just this morning with my car's GPS. It does not know what direction I am pointed in until I start moving.

--
JHG
 
"...the Tesla had essentially 'forgotten' [about the] ...truck..."

Based on the reports, I thought that the Tesla had failed to even see the truck due to a lack of contrast against the sky. It had been mentioned that the Tesla had utterly failed to brake before, during, and even after the crash. It was a fairly comprehensive failure, seeming to do precisely nothing correctly.

Perhaps the findings have changed since I last saw it.

Old curse: "May you live during interesting times."

 
I have a rental car for the week. It has lane-departure warning that's supposed to beep at you when you go out of your lane. The display also indicates whether it is currently recognizing the lane.

It only works with painted lane markings that are clearly visible.

It doesn't recognize guardrails, unmarked roadways, painted lines that are worn down or obscured by dust or dirt or damaged/repaired pavement. If there is a visible transition in the pavement that is separate from the painted lane marking (e.g. in construction zones where the temporary lane position doesn't correspond with what it's meant to be by design) it sometimes sees the wrong one and false-triggers. It gets confused on curves. It gets confused in roundabouts. If I intentionally shift to the side of a clearly marked lane for a rational purpose - e.g. to be further away from a vehicle that appears errant or is throwing off wind-buffeting, or to smooth out an errant lane marking - it beeps because it doesn't realize that what I'm doing has a purpose. I haven't tried it in rain at night when the shine from the rain makes lane markings hard to distinguish.

In other words, it works only in situations where it isn't needed, and it hardly ever works in situations where it might serve some purpose.

I've had other rental cars that have blind-spot warning systems (this one doesn't have that) and they don't detect something coming up from behind in the adjacent lane at a significant speed difference. They don't look far enough behind. IIRC the Germans criticised Tesla's autopilot for that ... and in a situation where you're doing 130 km/h and the car coming up from behind is legally doing 230 km/h, that's pretty important.
 
"Based on the reports, I thought that the Tesla had failed to even see the truck due to a lack of contrast against the sky. It had been mentioned that the Tesla had utterly failed to brake before, during, and even after the crash. It was a fairly comprehensive failure, seeming to do precisely nothing correctly."

The claim was that the sides of the trailer were white, and confused the image processor. But, in order for the side of the trailer to get that far into the field of view of the camera, the tractor had to have passed through the field of view, so the image processor must have "seen" the tractor, but didn't worry that it couldn't see the trailer.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
The Tesla could 'see' that the road was continuous under the trailer, just the same as it can 'see' the road is continuous under overpasses and overhead signs. It did not matter that it could see the tractor leave the path; not all tractors have trailers.

Look at 11foot8.com, which just nailed a 'secret military' truck. Even humans stink at this, and that's with warning signs, flashing lights, and every other option available to deflect the stupid from hitting a bridge that is older than most people. And people wonder why military equipment costs so much.
 
The object at issue, the tractor, had passed out of the collision zone. If the car doesn't recognize the invisible trailer that the no-longer-dangerous tractor is hauling, we're still back to the fact that the software did not recognize a secondary danger. The tractor is no different than another car... so do we write the algorithm to more fully recognize a tractor and expect a trailer might be attached, no matter the color?

Dan - Owner
 
"...must have 'seen' the tractor, but didn't worry that it couldn't see the trailer."

A.I. sometimes means Artificial Imbecile. :)

We'd all be safer if the decision makers kept this in mind.
 
For the Tesla accident, forget for a moment that it failed to identify the trailer. What I consider the second and worse failure of the Tesla goes like this. It should have detected the wheels of the trailer moving towards its path. So, a truck (or big car) had just crossed its path and another object was moving towards its path. I have no idea what it thought the trailer wheels were, but if a vehicle had just crossed your path and there was another "thing" moving in that same direction, then wouldn't you proceed with caution instead of trying to blast through the gap at full speed???
 