
Self Driving Uber Fatality - Thread I


drawoh (Mechanical)

San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different eco-system from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of their time (quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

Spartan5 said:
jgKRI: Have you ever driven a car with adaptive cruise control or forward collision avoidance systems?

Every day. I own two of them.

On the Honda the adaptive cruise gets turned off frequently- it gets really pissed when trying to follow a target around a curve that's below a certain radius. These systems aren't flawless.

Spartan5 said:
They are very capable of over-riding driver functions and emergency braking when they detect a stationary object; so over-riding an autonomous system should be no different. They handle stationary objects being 'revealed' when cars between the autonomous car and the object move out of the way with relative ease, because it's just a matter of comparing the targets in one frame to the targets in the next frame and identifying what the closing rate is.

I think the disconnect here is that you are interpreting my stance as "the task being asked of this system is impossible".

I am most certainly not saying that. What I'm saying is that these systems have, and will always have, a nonzero failure rate. Only the catastrophic failures make news; you don't hear a story on CNN every time a Volvo plows into the back of something, so long as no one dies, and this occurrence is more common than you might think. Google it if you don't believe me, but a stationary object being revealed in front of an automated vehicle is a legitimate situation that causes problems. Seriously, google it. These systems fail every day. But taken as a whole (i.e. the number of failures against the number of opportunities for failures to happen) the rates are extremely low. But still nonzero.

That's what I'm saying.
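For concreteness, here is a minimal sketch of the frame-to-frame closing-rate check described in the quote above. It is an illustration only; the 20 Hz frame rate, 6 m/s² braking limit, and distance margins are assumed numbers, not figures from any production AEBS.

```python
# Illustrative sketch only: estimate closing rate from consecutive range
# measurements and flag a braking-worthy threat. All thresholds are
# invented for illustration, not taken from any real AEBS.

FRAME_DT = 0.05          # seconds between sensor frames (20 Hz, assumed)
MAX_DECEL = 6.0          # m/s^2 available emergency braking (assumed)

def closing_rate(range_prev, range_now, dt=FRAME_DT):
    """Positive value means the gap is shrinking."""
    return (range_prev - range_now) / dt

def needs_braking(range_now, rate):
    """Brake if the stopping distance at the current closing rate
    exceeds the remaining gap (plus a small margin)."""
    if rate <= 0:
        return False                                  # gap is opening, no threat
    stopping_distance = rate ** 2 / (2 * MAX_DECEL)
    return stopping_distance + 2.0 > range_now        # 2 m margin, assumed

# Example: a lead car changes lanes and "reveals" a stopped vehicle ~40 m ahead
ranges = [43.0, 41.5, 40.0]                           # metres, one per frame
rate = closing_rate(ranges[-2], ranges[-1])
print(rate, needs_braking(ranges[-1], rate))          # ~30 m/s closing -> True
```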

LionelHutz said:
Bullshit or every car with a functioning AEBS would never move.

You're missing the point again. It's not about detection; it's about tuning. It's a software programming problem much more than a hardware problem. If every stationary object detected by AEBS caused the car to stop, the car would never move, because there is always a stationary object nearby. What makes the system function is the software which determines which objects are stop-worthy and which aren't.

When the system now has to handle other tasks, such as route planning, that involve the same functions under AEBS control, the system has a couple dozen additional degrees of freedom and the level of complication goes up by a couple orders of magnitude. I honestly don't understand how you don't seem to understand that.
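To make the "tuning" point concrete, here is a toy sketch of how software thresholds, rather than the sensor itself, decide which stationary returns are stop-worthy. The lane width, overhead-clearance cutoff, and braking numbers are assumptions for illustration, not anyone's production logic.

```python
# Toy illustration of "tuning": the sensor reports many stationary returns;
# software thresholds decide which ones are stop-worthy. All numbers are
# invented for illustration, not taken from any production system.

from dataclasses import dataclass

@dataclass
class Detection:
    range_m: float       # distance ahead of the ego vehicle
    lateral_m: float     # offset from the predicted ego path
    height_m: float      # estimated height above the road surface

LANE_HALF_WIDTH = 1.6    # m, assumed
MIN_UNDERPASS = 4.5      # m; returns higher than this treated as overhead (assumed)

def stop_worthy(d: Detection, ego_speed_mps: float, max_decel: float = 6.0) -> bool:
    in_path = abs(d.lateral_m) < LANE_HALF_WIDTH
    not_overhead = d.height_m < MIN_UNDERPASS
    stopping = ego_speed_mps ** 2 / (2 * max_decel)
    urgent = d.range_m < stopping + 5.0              # 5 m margin, assumed
    return in_path and not_overhead and urgent

detections = [
    Detection(range_m=35.0, lateral_m=0.2, height_m=0.8),   # stopped car ahead
    Detection(range_m=30.0, lateral_m=4.0, height_m=0.9),   # parked car, next lane over
    Detection(range_m=40.0, lateral_m=0.0, height_m=5.5),   # overhead sign gantry
]
print([stop_worthy(d, ego_speed_mps=20.0) for d in detections])   # [True, False, False]
```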

LionelHutz said:
When do you stop predicting what the barrier might do and actually apply the brakes, or at least start slowing down? 200' away, 150' away, 100' away, 50' away or just never because it's predicted to move out of the path?

If you find an answer to this question which has a zero failure rate over tens of millions of accumulated system miles driven, you should start your own company building autonomous vehicles.

LionelHutz said:
How is the radar tuned exactly?

Really? The processor takes the radar system's output and decides what to ignore and what not to ignore. Software.

LionelHutz said:
What does Musk mean by "looks like an overhead sign"?

He means that early on in testing they identified that overhead signs created radar returns which caused unnecessary braking events, so they tuned the system to ignore inputs of that type.

LionelHutz said:
The software is deciding the data from a certain area represents an overhead sign and because it's a sign it can be ignored.

No, it isn't. If you were able to look at the internal logic tree of the software, there isn't a variable called 'overhead sign' that gets set to yes or no. There isn't a table of classes of objects. There's a bunch of equations relating to object apparent sizes and trajectories, and a bunch more equations dictating what radar return characteristics constitute threats and what returns don't.

This isn't how robot vision software works; if you've never written any, I don't know how else to explain it to you. But classifying objects would require 1) a complete set of all objects which could ever be encountered by the system (which is impossible, and would be dangerous to attempt) and 2) that classification step to be completed reliably before any processing or path prediction could take place. If the system was designed this way it would be waaaaaaaaaaay too slow to be safe.
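As a rough illustration of that idea, the sketch below scores a radar return as a threat purely from its kinematics (range, closing rate, lateral drift), with no notion of what the object "is". The time-to-collision limit and lane width are assumed values, not anything from a real system.

```python
# Sketch of threat scoring from kinematics alone, with no object classes.
# State variables and thresholds are assumptions for illustration.

import math

def time_to_collision(range_m, closing_mps):
    """Infinite if the return is not closing on the ego vehicle."""
    return range_m / closing_mps if closing_mps > 1e-6 else math.inf

def is_threat(range_m, closing_mps, lateral_m, lateral_rate_mps,
              ttc_limit_s=2.5, lane_half_width_m=1.6):
    ttc = time_to_collision(range_m, closing_mps)
    if ttc > ttc_limit_s:
        return False                       # too far away in time to act on yet
    # Where will the return sit laterally when the gap closes?
    lateral_at_ttc = lateral_m + lateral_rate_mps * ttc
    return abs(lateral_at_ttc) < lane_half_width_m

# A return 30 m out, closing at 15 m/s, drifting toward the ego lane:
print(is_threat(30.0, 15.0, lateral_m=2.5, lateral_rate_mps=-0.8))   # True
```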

LionelHutz said:
If you want to tout how safe it is then the important statistic should be how many times did the human have to intervene to avoid an accident.

If you think I'm 'touting' anything, you're either not reading my posts, or you've decided what I mean before reading my posts.

I think this technology is many years from being ready to deliver what the developers are trying to sell today. But I also believe in interpreting the data that is actually available, not making guesses about things I don't know. And I don't know how many times the human has had to intervene to avoid an accident. I bet Uber does.

LionelHutz said:
And other bad accidents shouldn't be counted just because no-one died?

That's not what I said. What I said was that the accident didn't make news because no one died. So there's been a lot less focus on that particular incident.
 
My impression is that AEBS technology is working relatively well. These systems are being deployed on many brands of cars. They're certainly not making the news. My resulting assumption is that they're a relatively mature technology.

"...would have prevented both of these accidents."

Both? I had listed five examples above. The videos are on YouTube; just browse and you'll probably find many others I've not seen.

"...a stretch."

Well, that's what AEBS is designed to do. So it's my assumption (see rationale above) that it does what it's supposed to do almost all of the time. Hardly seems like "a stretch", unless you mean that it might occasionally fail, then sure, agreed. Being a safety intervention system, AEBS doesn't have to be 99.9999999% accurate. 98% is probably fine.

I have no idea how you arrived at it being "a stretch".

 
jgKRI said:
I think the disconnect here is that you are interpreting my stance as "the task being asked of this system is impossible".

I am most certainly not saying that. What I'm saying is that these systems have, and will always have, a nonzero failure rate. Only the catastrophic failures make news; you don't hear a story on CNN every time a Volvo plows into the back of something, so long as no one dies, and this occurrence is more common than you might think. Google it if you don't believe me, but a stationary object being revealed in front of an automated vehicle is a legitimate situation that causes problems. Seriously, google it. These systems fail every day. But taken as a whole (i.e. the number of failures against the number of opportunities for failures to happen) the rates are extremely low. But still nonzero.
I didn't misinterpret you. I quoted you verbatim and pointed out that this wasn't the case.

I'm not suggesting there will ever be a zero incident rate. But don't suggest that these are newsworthy because they are catastrophic. They are newsworthy because they should have been within the capabilities of these systems. Tesla's "autopilot" shouldn't autopilot cars into stationary, immovable objects; or the sides of semi-trucks that can be seen for hundreds of feet. Unless it really isn't an autopilot at all, but a supplemental system. They should be called on that and taken to task for misrepresenting what it is and/or take a more aggressive stance in ensuring that the drivers are using it like the Level 2 system that it really is.

And Uber shouldn't be beta-testing a fully automated Level 4/5 car on public streets without, at a minimum, proper safeguards in place to ensure the "backup" driver(s) is an active participant in the endeavor. That's what's newsworthy about this.
 
VE1BLL said:
unless you mean that it might occasionally fail, then sure, agreed.

This is exactly what I've been saying this entire time. That the failure rate is small but nonzero. That's it.
 
and from the NTSB:

"In each of our investigations involving a Tesla vehicle, Tesla has been extremely co-operative on assisting with the vehicle data.

"However, the NTSB is unhappy with the release of investigative information by Tesla."

I wonder who made them the guardians of the 'truth'. Potential for a dangerous overstep. I'm trying to find an article where Musk summarised accident information.

Dik
 
"What happens if I want to steer during an emergency?"

I presume that the ABS is not overridden, so some steering control may still be present. Assuming that your face isn't being torn off by the negative g force *.

The AEBS implementations that I've seen wait until the last possible moment to brake. Probably too late to steer around anything.

Some of the earlier models didn't even bother to prevent the accident, merely reduce the impact. This was covered in some detail on the Fifth Gear TV program.

* Once upon a time, I triggered off the Mercedes Brake Assist System (BAS) by moving my foot between the accelerator and brake pedals far too quickly (for a reason). Oh. My. Gawd. The pedal was pulled down under my foot (felt soft), my head was yanked forward, the seatbelt tightened (motorized tighteners), and all I could see was the speedometer unwinding like a tach. The ABS worked in parallel; wheels were not quite locked up. Once the speedo reached a satisfactory value (maybe two seconds), I released some foot pressure, the BAS disengaged, the brake pedal popped back up, and I drove around the corner at a reasonable speed.

 
"That the failure rate is small but nonzero. That's it."

Okay.

Safety systems can get away with 98% or 99% or 99.9%. And that's probably fine, both ethically and legally. Sort-of like the 'Good Samaritan' concept. Make a good effort, and occasional failures to prevent harm would not be unacceptable.

AV systems need to be 99.9999...% otherwise they'll be involved in an endless string of inexplicable accidents. The ethical considerations are much different. Merely improving on the human driver accident rates in itself is insufficient, due (for example) to liability concentration. Widespread deployment of imperfect AV would also require legislation to sort out the liability, for the greater good (if so decided).

The distinction between these two ethical polarities (preventing versus causing harm) is not as clear-cut as I've described. It's unfortunately more muddled. But there is this important distinction.
 
Spartan5 said:
I didn't misinterpret you. I quoted you verbatim and pointed out that this wasn't the case.

You most certainly didn't quote me verbatim. There's no quote of mine, in a post of yours, that I can find in the lower half of this thread.

Spartan5 said:
I'm not suggesting there will ever be a zero incident rate. But don't suggest that these are newsworthy because they are catastrophic. They are newsworthy because they should have been within the capabilities of these systems.

If you're comfortable with these systems having a nonzero rate of failure... exactly what type of failures, then, are not newsworthy?

Saying that they are newsworthy because people were killed isn't conjecture... it's a statement of fact. In this very thread VE1BLL has come up with several other instances, and I have contributed one as well, where these systems failed. The ones where no one died got a lot less coverage. That's an indictment of our society much more than it is an indictment of anything related to the topic we are discussing here.

Spartan5 said:
Tesla's "autopilot" shouldn't autopilot cars into stationary, immovable objects; or the sides of semi-trucks that can be seen for hundreds of feet. Unless it really isn't an autopilot at all, but a supplemental system.

Once again.. if the failure rate being nonzero is ok, what failures are you willing to accept?

Spartan5 said:
They should be called on that and taken to task for misrepresenting what it is and/or take a more aggressive stance in ensuring that the drivers are using it like the Level 2 system that it really is.

I couldn't agree more on this point. The tech isn't ready. I've been making that argument too; I've said it at least twice in this thread. I'm not in here advocating for Tesla or Uber or anyone else.
 
VE1BLL said:
AV systems need to be 99.9999...% otherwise they'll be involved in an endless string of inexplicable accidents. The ethical considerations are much different. Merely improving on the human driver accident rates in itself is insufficient, due (for example) to liability concentration.

I couldn't agree more- although I think your percentage for success needs about 20 more 9s before the general public will be ok with things.

VE1BLL said:
Widespread deployment of imperfect AV would also require legislation to sort out the liability, for the greater good (if so decided).

This is already happening- and from my chair it's a major problem that probably won't get solved until way too late.
 
"He means that early on in testing they identified that overhead signs created radar returns which caused unnecessary braking events, so they tuned the system to ignore inputs of that type."

"No, it isn't. If you were able to look at the internal logic tree of the software, there isn't a variable called 'overhead sign' that gets set to yes or no. There isn't a table of classes of objects. There's a bunch of equations relating to object apparent sizes and trajectories, and a bunch more equations dictating what radar return characteristics constitute threats and what returns don't."

Contradict yourself much?

I NEVER posted anything about these tables you're trying to push on me. Since these equations represent whether the object is a threat or not, I can rightly say the objects are classified by the equations. Maybe not as the exact object, but as an object that is one to be concerned about, or not. But I bet it goes much further than just an object that could be a threat, and gets into classifying those objects into further subcategories depending on the level of threat and the characteristics that type of object exhibits. For all intents and purposes, it detects certain objects as overhead "things", one of the possibilities being an overhead sign. And at this point, I really couldn't care less about arguing these stupid semantics with you any further.

How do those equations get created exactly? Do they just appear out of thin air?

Tesla said they first recorded a whole bunch of sensor data from the cars on the road (something like a year or more's worth) with the hardware before they rolled out any part of their autopilot software to the public. What do you think that data was used for? It was used to help build those equations you're going on about.

 
They're based on a LOT of data characterization, as you're stating.

Not really sure what the argument is that you're trying to make at this point.
 
"Not really sure what the argument is that you're trying to make at this point."

Same here.
 
LionelHutz said:
Contradict yourself much?

No.

You're seeing semantics where discrete points exist.

Google "Fourier transform" + "image processing". If you like the math, it is actually a pretty interesting topic.

That is how the algorithms used in these systems are created and modified over time.

A computer does not process or recognize images in the same way that your brain does. Until you understand that, you'll continue to not understand why this stuff is not as easy as we would all like for it to be.
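For anyone who does want to chase that reference, here is the textbook flavour of Fourier-domain image processing, nothing more: low-pass filtering a small synthetic image. It is a generic illustration, not any vendor's vision pipeline.

```python
# Textbook Fourier-domain image processing: low-pass filter a synthetic image.
# Generic illustration only; not any particular vendor's pipeline.

import numpy as np

# Synthetic 64x64 "image": a bright square plus high-frequency noise
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
img += 0.3 * np.random.default_rng(0).standard_normal(img.shape)

# Transform to the spatial-frequency domain, centre the zero frequency
F = np.fft.fftshift(np.fft.fft2(img))

# Keep only low spatial frequencies (a circular mask around the centre)
yy, xx = np.mgrid[0:64, 0:64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2

# Back to the spatial domain: the noise is suppressed, the square remains
smoothed = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
print(round(img.std(), 3), round(smoothed.std(), 3))   # filtered image varies less
```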
 
Meanwhile, every other industry (Aircraft, Ships, Submarines) that has had autopilots for a long time is shaking their heads in disbelief at the idea that you should only call something an autopilot if it is clever enough to stop you bumping into inconvenient obstacles.

Bumping into inconvenient obstacles is the one thing that autopilots have historically been good at.

A.
 
I was a little surprised to find that the predictions for the paths of targets are done using Kalman filtering. This seems to me to be optimistic. A pedestrian can sidestep, for example. Once a target is classified, its future position should be an expanding cloud of all possible trajectories. Humans can accelerate at about 1g.
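To put a number on how fast that "cloud" grows, here is a small covariance-propagation sketch: a 1-D constant-velocity Kalman prediction whose process noise is sized for roughly 1 g of unmodelled pedestrian acceleration. The time step and initial variances are illustrative assumptions only.

```python
# Sketch: how the predicted-position uncertainty of a constant-velocity
# Kalman track grows when no new measurements arrive. Process noise is
# sized for ~1 g of unmodelled acceleration; all numbers are illustrative.

import numpy as np

dt = 0.1                                    # prediction step, s
F = np.array([[1.0, dt],
              [0.0, 1.0]])                  # constant-velocity transition
a_max = 9.81                                # allow ~1 g of unmodelled acceleration
Q = a_max ** 2 * np.array([[dt ** 4 / 4, dt ** 3 / 2],
                           [dt ** 3 / 2, dt ** 2]])   # white-acceleration noise

P = np.diag([0.1 ** 2, 0.2 ** 2])           # initial position/velocity variance
for _ in range(20):                         # coast 2 s ahead with no updates
    P = F @ P @ F.T + Q

print(round(float(np.sqrt(P[0, 0])), 2))    # position sigma after 2 s, in metres
```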

This is irrelevant in the Uber case: since they were on a collision course, her relative bearing remained constant, and a Kalman prediction would have been fine.

Back on the LiDAR and reflectivity. If her clothing was dark wool it might have a low reflectivity like asphalt. If they filtered out all returns with a reflectivity of about the same as asphalt as a first step to eliminate irrelevant data, then she might have been edited out of the LiDAR picture.
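To illustrate that speculation, and only that, here is a sketch of how an intensity pre-filter meant to strip road-surface returns could also drop a low-reflectivity target. The intensity values and tolerance band are invented; this is not a claim about what Uber's pipeline actually did.

```python
# Purely illustrative: a pre-filter that drops LiDAR returns whose intensity
# looks like road surface can drop a dark, low-reflectivity target with them.
# Intensity values and the tolerance band are invented for this sketch.

ASPHALT_INTENSITY = 0.10        # typical normalised asphalt return (assumed)
TOLERANCE = 0.05                # band treated as "road surface" (assumed)

points = [
    {"label": "road surface",   "intensity": 0.09},
    {"label": "lane paint",     "intensity": 0.45},
    {"label": "retroreflector", "intensity": 0.95},
    {"label": "dark wool coat", "intensity": 0.12},   # close to asphalt
]

kept = [p for p in points
        if abs(p["intensity"] - ASPHALT_INTENSITY) > TOLERANCE]
print([p["label"] for p in kept])   # the dark coat is filtered out with the road
```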

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
GregLocock said:
I was a little surprised to find that the predictions for the paths of targets are done using Kalman filtering. This seems to me to be optimistic. A pedestrian can sidestep, for example. Once a target is classified, its future position should be an expanding cloud of all possible trajectories. Humans can accelerate at about 1g.

If not Kalman, which way do you go? Do you add another layer of complexity and go into Bayes? If you do that, you wind up having to estimate a lot of non-linearities, and you also add the problem of differentiating non-linearity from detection-system noise. Either way you basically have to use direct methods, based on current processor speeds and the processing time of images from the visual part of sensor packages. I think once the hardware is capable, the change to indirect methods of path optimization will yield some gains in efficiency and safety. But that's a ways off.
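For a taste of the "extra layer" being traded off here, below is a bootstrap-style particle prediction step in which every particle is allowed to manoeuvre independently (up to roughly 1 g). It is a sketch under assumed numbers, and it shows the cost: thousands of samples per target versus one small matrix update for a Kalman filter.

```python
# Sketch of a Bayes/particle-style prediction: each particle may manoeuvre
# independently, so the predicted position becomes a wide, possibly
# non-Gaussian cloud. Motion model and all numbers are assumptions.

import numpy as np

rng = np.random.default_rng(1)
N = 5000

# Particles: [position (m), velocity (m/s)] along one axis of a pedestrian track
particles = np.column_stack([rng.normal(0.0, 0.1, N),    # initial position spread
                             rng.normal(1.4, 0.2, N)])   # walking-speed spread

dt, a_max = 0.1, 9.81
for _ in range(20):                        # coast 2 s ahead with no measurements
    accel = rng.uniform(-a_max, a_max, N)  # each particle may dodge, up to ~1 g
    particles[:, 0] += particles[:, 1] * dt + 0.5 * accel * dt ** 2
    particles[:, 1] += accel * dt

print(round(particles[:, 0].mean(), 2),    # mean predicted position
      round(particles[:, 0].std(), 2))     # spread of the "cloud" after 2 s
```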

zeusfaber said:
Meanwhile, every other industry (Aircraft, Ships, Submarines) that has had autopilots for a long time is shaking their heads in disbelief at the idea that you should only call something an autopilot if it is clever enough to stop you bumping into inconvenient obstacles.

With all due respect... autopilots for planes/ships/submarines have to deal with far fewer obstacles, approaching over much longer time scales, and following much more predictable paths. It doesn't perplex me one bit that we had aircraft autopilots figured out decades ago but don't have cars figured out yet.
 
Greg

Lots of what is being claimed and done is based on optimism.

As for what happened, I'm highly doubtful we will ever know. The NTSB report on the Tesla death didn't get into the reasons why the autopilot failed to react. So I'm doubtful the details of what happened within the Uber system during this accident will be investigated or reported. And it's not like Uber will tell, unless something is leaked or whistle-blown.
 
zeusfaber said:
Meanwhile, every other industry (Aircraft, Ships, Submarines) that has had autopilots for a long time is shaking their heads in disbelief at the idea that you should only call something an autopilot if it is clever enough to stop you bumping into inconvenient obstacles.

And yet, no one, to date, has brought to market an obstacle avoidance system (OASYS) for aircraft, and THAT concept has been around for nearly 30 years for helicopters. Helicopters fly high, most of the time, specifically to avoid dealing with obstacle avoidance. NVESD paid for and flew an OASYS in 1994 on an AH-1 Cobra. That system only provided visual and aural warnings to the pilot. Aside from getting a decent ROC curve, cost was a serious problem, even though the platforms being protected are substantially more expensive than a typical car, even one that's tricked out with a boatload of sensors. But much of the cost had to do with getting sufficient processing power, and developing sufficient algorithms.

The data is meaningless without the algorithms, and even sensors with algorithms need navigation information, as well as a rock-solid moving map representation of the local universe, including what to do with data, over time, and what to do with conflicting data. Conflicting data invariably arise, and having the vehicle stop or turn at every single glitch of data will be just as bad as missing real obstacles. This is why the ROC curve is so important. Not only do you have to have a gazillion 9s of detecting real targets, you need a gazillion inverse 9s for the false alarm rate. For typical driving, a false alarm rate on the order of 1 per week or less would be needed. That precludes any sensor that doesn't do a serious amount of processing on the data.
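A back-of-envelope version of that false-alarm budget, with assumed figures for frame rate, candidate returns per frame, and weekly driving time, just to show the order of magnitude involved:

```python
# Back-of-envelope: what per-decision false-alarm probability does
# "one false alarm per week" imply? All inputs are assumed figures.

frames_per_second = 20            # sensor update rate (assumed)
candidates_per_frame = 50         # returns screened each frame (assumed)
driving_hours_per_week = 10       # typical weekly usage (assumed)

decisions_per_week = (frames_per_second * candidates_per_frame
                      * driving_hours_per_week * 3600)
required_pfa = 1.0 / decisions_per_week

print(f"{decisions_per_week:,} threat decisions per week")
print(f"required per-decision Pfa ~ {required_pfa:.0e}")   # roughly 3e-08
```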

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376
 
"And yet, no one, to date, has brought to market an obstacle avoidance system (OASYS) for aircraft..."

I was wondering about that when zeusfaber mentioned autopilots for aircraft. The typical autopilot feature on aircraft has more in common with an automobile cruise control than anything else, does it not? I believe the response from Tesla about the collision with the truck was that their "autopilot" was not meant to be a self-driving system, but a driver assistance feature.
 
jgKRI said:
Once again.. if the failure rate being nonzero is ok, what failures are you willing to accept?

As long as the rate of failure is significantly better than what exists now... How much better is the question. 10% better is still 'better'... The start-up period is likely fraught with problems, and the software/hardware will require numerous revisions. There is potential for much better road safety than we have now, and 'driverless' vehicles will eventually be 'talking' to each other and providing each other with a 'drive plan' so positions can be anticipated.

Dik
 