Self Driving Uber Fatality - Thread I


drawoh (Mechanical)
San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of the road, do you slow down to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front of it. We can see patterns and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different ecosystem from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of their time (a quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

dgallup - that just means more snow days because your car isn't able to get you to work. You'd also have to stay home any days it might start snowing or else you'd get stuck at work.
 
Here is a thought. The victim seems to have stepped out in front of the vehicle. This seems rather unlikely, although not impossible. Could they both have been dodging each other, in the same direction? I have heard it claimed that if you are headed for a pedestrian, you should maintain your direction and let them jump out of the way. This is a very complicated problem in AI.

Getting on the brakes is the obvious solution, but I have lived all my life in a city and I am a habitual jaywalker. I expect cars to move at a steady speed. This makes for more fun with AI.

--
JHG
 
"...been dodging each other...?"

The available video doesn't seem to show that.

The pedestrian appears to be crossing the street in a fairly normal manner as far as can be seen.

Well, 'normal' except 1) not in a crosswalk, and 2) bizarrely trusting of the oncoming vehicle.

---

"...headed for a pedestrian, you maintain your direction and let them jump out of the way."

I've heard that one before. The advice about heading straight towards the obstacle is normally confined to stock car racing, where the spinning vehicles in front are just as likely to have moved left or right.

But I've never heard of this advice (?) being applied to pedestrians, due to the obvious risk of 'Vehicular Manslaughter' being the outcome.

 
LionelHutz said:
Ya, why introduce a backup when the main processor is doing such a bang-up job at avoiding objects in front of the car? Tesla is proving their AI processor doesn't have that type of basic functionality, but instead is only reacting to objects based on their classification. Not sure on the Uber yet, but it's likely the same.

Two accidents of this type from a dataset of millions of miles driven do not indicate a lack of basic functionality.

Quite the opposite, in fact.

TenPenny said:
The pedestrian was clearly on an intersecting trajectory, and if the autonomous technology cannot predict this, it is useless.

The situation is much, much more complicated than that.

How does the system differentiate between a person walking on an intersecting sidewalk who is going to stop before stepping off the curb, and a person who isn't? The answer is: it can't.

If the problem you see with this incident is the system not detecting the pedestrian, you're missing the entire point. The giant problem facing the designers of these systems is NOT detecting objects; it is classifying them, predicting their paths, and knowing when their paths need to result in an altered vehicle trajectory.

This problem is much, much more difficult than you (and LH) are making it out to be.

The system doesn't reliably know how many lanes are on the road it's on; I suspect the system detected this woman and 'predicted' that she was on a sidewalk and that her trajectory would stop before intersecting with that of the vehicle. (This is purely conjecture on my part, but it is based on some idea of how these systems work.)
 
"Ya, why introduce a backup when the main processor is doing such bang up job at avoiding objects in front of the car? Tesla is proving their AI processor doesn't have that type of basic functionality, but instead is only reacting to the objects based on their classification. Not sure on the Uber yet, but it's likely the same."

A backup processor is not the same as using the radar and AI separately, since the radar is useless without the sensors for navigation, speed, steering angle, etc. A backup processor would be more like a redundant processor, fully capable of performing the navigation on its own, but possibly with a completely alternate software architecture and algorithms. Using the radar by itself could result in things like dust, smoke, and mylar balloons causing the car to abruptly stop, without any regard for whether it's safe to do that, as opposed to changing lanes. Let's not forget that the car had two other sensors, the lidar and the camera array, both of which should have independently detected the pedestrian. I think everyone is making too big a deal over the concept of "classification." All three of the primary sensors should have produced detections of the pedestrian that were processed as a solid, slow-moving object on a collision course.

The Uber was not dodging anything; had it been, it would have tried to change lanes to get around the pedestrian, and it certainly would have had enough time to change to the right lane and avoid the collision. And while an AI might not have millisecond response time, it should be much faster than a human, particularly since it has precise knowledge of the closing speed to obstacles.
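To make that concrete, here is a minimal sketch of the kind of cross-sensor check being argued for: any solid, in-path, closing object that enough independent sensors agree on forces a braking decision, regardless of classification. This is purely illustrative; the 2-of-3 voting rule, the 0.5 s reaction time, the 22.5 ft/s^2 (about 0.7 g) deceleration, and the 20% margin are my own assumptions, not anything Uber or Tesla has described.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str                 # "radar", "lidar", or "camera"
    range_ft: float             # distance to the object along the travel path
    closing_speed_fps: float    # positive = object and car are converging
    in_path: bool               # object lies within the projected lane corridor

def should_emergency_brake(detections, min_agreeing_sensors=2,
                           reaction_time_s=0.5, decel_fps2=22.5):
    # Count how many independent sensors report a converging, in-path
    # object that is already inside the distance needed to stop.
    agreeing = set()
    for d in detections:
        if not d.in_path or d.closing_speed_fps <= 0:
            continue
        # Stopping distance = reaction distance + braking distance (v^2 / 2a)
        stop_dist = (d.closing_speed_fps * reaction_time_s
                     + d.closing_speed_fps ** 2 / (2 * decel_fps2))
        if d.range_ft <= stop_dist * 1.2:   # 20% margin, assumed
            agreeing.add(d.sensor)
    return len(agreeing) >= min_agreeing_sensors

# Example: pedestrian 90 ft ahead, car closing at 58.7 ft/s (40 mph),
# reported independently by radar and lidar -> brake now.
dets = [Detection("radar", 90.0, 58.7, True),
        Detection("lidar", 92.0, 58.7, True)]
print(should_emergency_brake(dets))   # True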

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Lots of cars have an AEBS that uses radar sensing ONLY. Trying to claim that radar can't be used by itself to EMERGENCY BRAKE for an object in front of the car is complete and utter nitpicking stupidity at its finest.

The Uber was presented with an object that was clearly visible to its sensors for plenty of time before the impact occurred. Watching the video, it appears to have failed completely at EVERY part of the classification and tracking process.

The classification part IS important. The first step these systems take seems to be classifying objects, so the system knows what an object might do based on its classification. If a woman is classified as a bush, then the system can "safely" ignore her because a bush doesn't enter the roadway. Same with the Tesla and the truck. Once the trailer was classified as an overhead sign, it could be ignored, since it doesn't need to be avoided.

The Uber WAS in the right lane the whole time.
 
LionelHutz said:
Lots of cars have an AEBS that uses radar sensing ONLY.

Yes. Those systems are intended for detection across a very narrow, very specific range of circumstances. There's a reason why they can be so simple.

LionelHutz said:
Trying to claim that radar can't be used by itself to EMERGENCY BRAKE for an object in front of the car

No one is claiming that. Systems which do that already exist. What matters is that functionality as part of a system. It seems clear from your posts that your base assumption is 'widget A does task X when left by itself to do task X. Widget A should then be able to do task X without an issue while also doing tasks Y, D, P, M, and Q'.

It just doesn't work that way. The things that an autonomous vehicle's radar systems are being asked to do are a LOT larger in quantity and complication than those being asked of the radar system for adaptive cruise.

You seem to think you're comparing apples to apples, but you just aren't.

LionelHutz said:
is complete and utter nitpicking stupidity at its finest.

Be nice. If you can't keep yourself from taking this so personally, maybe it's time to move on to another thread.

LionelHutz said:
The Uber was presented with an object that was clearly visible to its sensors for plenty of time before the impact occurred. Watching the video, it appears to have failed completely at EVERY part of the classification and tracking process.

Visible to its sensors, likely. Known to be on a path which would, without a doubt, intersect with the vehicle's path and make an emergency braking event the best course of action? Much, much less clear. The thing you are shouting about being the problem isn't actually the problem.

LionelHutz said:
The classification part IS important. The first step these systems take seems to be classifying objects, so the system knows what an object might do based on its classification. If a woman is classified as a bush, then the system can safely ignore her because a bush doesn't enter the roadway. Same with the Tesla and the truck. Once the trailer was classified as an overhead sign, it could be ignored, since it doesn't need to be avoided.

That's not how these systems work. This is a computer, not a human brain.

Your brain sees a bush and says "that's a bush, bushes don't move, I don't have to worry about that bush leaping in front of my car".

The autonomous vehicle's systems don't recognize a bush; they see an object, and that object is either in motion relative to the vehicle's predicted path, or not. If the object is moving, it is either likely to stop moving, or not. If the object is stationary, it is either likely to start moving again, or not.

This means that the system is constantly assailed with objects fixed and in motion around it; in an urban environment, there are likely to be hundreds of them at any one time. The system has to know which ones represent threats which call for avoidance, and which ones do not. If you don't understand how intensely complicated it is to figure this out, I don't know what to tell you.

If all the system had to do was figure out if a moving object was going to intersect with the car's path, and then brake whenever that happened, the system would be relatively simple and the car would never go anywhere because every car is constantly surrounded by things which MIGHT intersect with its path.

Any time a car changed lanes nearby, a plastic bag blew across the road, a bird crossed in front of the car, a squirrel paused in the divider, a leaf blew around in the air, a pedestrian approached the road, blah blah blah- these events would ALL cause the car to stop. And the car would be useless.

The problem these systems are trying to solve is determining 1) what objects from the set of detected objects constitute a threat and 2) what objects, if any, are likely to deviate from expected behavior (such as not entering the travel lane).

These operations are, by definition, predictive. Predictive operations will ALWAYS have some failure rate, and that rate will always be non-zero.
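For concreteness, here is a minimal sketch of that prediction step, assuming a constant-velocity motion model, a made-up lane corridor, and a three-second horizon. Real systems use far richer motion models and uncertainty estimates; every name and number here is hypothetical.

def predicted_conflict(obj_pos, obj_vel, car_speed_fps,
                       horizon_s=3.0, lane_half_width_ft=6.0, dt=0.1):
    # obj_pos = (x, y) in the car's frame: x = feet ahead, y = lateral offset.
    # obj_vel = (vx, vy) over the ground; the car travels along +x.
    x, y = obj_pos
    vx, vy = obj_vel
    t = 0.0
    while t <= horizon_s:
        rel_x = x + vx * t - car_speed_fps * t   # feet ahead of the bumper at time t
        rel_y = y + vy * t                       # lateral offset at time t
        # Flag only if the object is predicted to occupy the car's corridor
        # (within 15 ft ahead and inside the lane half-width, both assumed).
        if 0.0 < rel_x < 15.0 and abs(rel_y) < lane_half_width_ft:
            return True, t
        t += dt
    return False, None

# A pedestrian 100 ft ahead and 10 ft to the left, walking 4 ft/s toward
# the lane, with the car closing at 58.7 ft/s (40 mph): conflict at ~1.5 s.
print(predicted_conflict((100.0, 10.0), (0.0, -4.0), 58.7))

Swap the walking speed or lateral offset slightly and the predicted conflict disappears, which is exactly the needle these systems have to thread between false alarms and missed threats.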
 
I was reminded of some of those difficulties yesterday while driving through a parking lot: large groups of birds were in front of me, but they moved out of the way as I drove through.
In years past, we would drive through the Eisenhower Tunnel around Thanksgiving. Normally, the roads would be all snowy, no visible markings, and the three marked traffic lanes would actually have two lines of cars sort of straddling the lanes. I pity the programmer that has to work through that little problem.
I've noticed on the local freeways, that truck drivers generally know which lane they need to be in ahead of time, since a good many of them drive the same routes repeatedly. That is very useful information that ought to be incorporated into any autonomous driving system. I know my commute route, and know which lane is slower when, where people are going to be merging, where there is no acceleration lane, etc.
One thing I've noticed is that in the few autonomous accidents, they tend to be severe. A lot of times, when people crash, they react too late, but still manage to shave a lot of speed off or head towards an open spot immediately prior to the crash. But when these autonomous systems don't sense something, they just plow into it at full speed, which is not too pretty.
It occurs to me that if you have an autonomous system that drives safely and carefully to the destination, a goodly percentage of current drivers will be unhappy with it; they're the bozos that are swerving in and out of lanes now. So they need to come up with a slider control in those cars that has "Prudent Patient Driver" at one end and "A-Hole Crackpot" at the other end, so people can adjust it to meet their perceived needs. Otherwise, you'll have some third-party software vendors jump in to meet that demand.
 
The cars DO classify various objects by type. Every company involved describes how their system classifies each "thing" that is detected by type, such as car, tree, bike, sign, etc., so the system can then predict what level of threat that "thing" poses and what that "thing" might do in the future. Tesla themselves said the Tesla that went under the transport trailer classified the trailer as an overhead sign, and since overhead signs aren't a driving threat, it drove under it regardless of the fact that it was in the path of the car.

It's hard to be nice to people twisting my responses. I did not post that the car should brake at any object that might cross into the car's path. I did not post that the parts work independently. I posted that it'd be a damn good idea to brake when any solid object IS in the path of the car. The system doesn't have to make any predictions about objects it will very shortly be interacting with. What did the Tesla system have to predict about the big concrete lane divider barrier which was dead ahead? Whether it might move out of the way in time?

 
LionelHutz said:
that it'd be a damn good idea to brake when any solid object IS in the path of the car.

This is exactly what I'm saying. You do realize that if the radar system were able to override autonomous functions and emergency brake every time it detected a stationary object in front of the car, the system would not function and the car would never move?

Prediction of what objects will do based on a static snapshot in time is very, very hard.

Prediction of what objects will do based on a static snapshot in time is what these systems are tasked with in order to operate safely.

LionelHutz said:
What did the Tesla system have to predict about the big concrete lane divider barrier which was dead ahead? Whether it might move out of the way in time?

Uh.. actually, yes. It most likely detected the barrier and predicted, incorrectly, that it was either out of the car's trajectory or would move out of the car's trajectory before it got there.

Stationary objects being 'revealed' when cars between the autonomous car and the object move out of the way are a difficult problem to solve, because at first glance to the system they look like moving objects.

There was a similar incident a while back with another Tesla, where it struck the back of a stationary fire truck after another car changed lanes to miss it. That situation is difficult for the system to resolve. That incident didn't make much news (that I'm aware of) because no one was killed.

LionelHutz said:
Tesla themselves said the Tesla that went under the transport trailer classified the trailer as an overhead sign, and since overhead signs aren't a driving threat, it drove under it regardless of the fact that it was in the path of the car.

No, they didn't; the mad genius himself tweeted this:


That does not mean that inside the control system the car says "OH THAT RIGHT THERE IS AN OVERHEAD SIGN".

It means that the system is tuned to ignore radar hits similar to those produced by overhead signs.

This is a very different thing. If you don't see the difference, and think this is nitpicking... that's an indication that you're out of your depth to be in a conversation about how these systems work.

Your posts demonstrate a pretty naive point of view about what these systems are actually capable of. What you are saying should be easy is, in fact, immensely hard. No one is 'twisting' your responses. You're saying things that sound a certain way, and getting upset when people take your posts either literally, or to their logical ends. Calling people stupid isn't going to fix your inability to properly communicate whatever it is that you're actually trying to say.
 
I see your point: an autonomous vehicle shouldn't be able to predict people crossing in front of it, and should also not be able to notice a concrete abutment in its path, because, apparently, that's hard.

Math is hard, said Barbie.
 
JStephen, interesting idea, the "slider". So if your factory "A-Hole Crackpot" setting is still not aggressive enough, then you call in the third parties :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand
 
TenPenny said:
I see your point: an autonomous vehicle shouldn't be able to predict people crossing in front of it, and should also not be able to notice a concrete abutment in its path, because, apparently, that's hard.

Math is hard, said Barbie.

Well.. that's constructive. Way to contribute.
 
jgKRI said:
You do realize that if the radar system were able to override autonomous functions and emergency brake every time it detected a stationary object in front of the car, the system would not function and the car would never move?

Why not leave the simple and reliable Automatic Emergency Braking System intact (as a separate system), just in case the complicated and unreliable AV system fails to see a pedestrian? Or a concrete barrier? Or a cross-traffic truck? Or a car parked ahead? Or a bright yellow barrier for a closed lane?

I see you folks arguing. But I'm not quite sure what you're arguing about...
 
VE1BLL said:
Why not leave the simple and reliable Automatic Emergency Braking System intact

What happens if I want to steer during an emergency?

--
JHG
 
VE1BLL said:
Why not leave the simple and reliable Automatic Emergency Braking System intact (as a separate system), just in case the complicated and unreliable AV system fails to see a pedestrian? Or a concrete barrier? Or a cross-traffic truck? Or a car parked ahead? Or a bright yellow barrier for a closed lane?

The assumption there is that the 'simple' emergency braking system would have prevented both of these accidents.

That is, at best, a stretch.

It also means redundancy of a lot of expensive sensors (assuming you mean fully redundant systems) and/or processing capability, which represents a potentially large bottom-line hit, with unknown improvement in safety.

We're all engineers here, we'd be kidding ourselves if we didn't acknowledge that these companies are trying to find the optimized point of safety-per-dollar, not just the safest overall system.
 
Asking a simple radar to determine whether an object is solid is non-trivial. "Car, tree, bike, sign" cannot be recognized by a simple radar, and possibly not even by a lidar; most systems that can distinguish between these objects are vision systems, not radar.

With a 41-ft footprint at 300 ft, the radar sees, at best, a rather huge blob; even at 100 ft, the blob would be about 13 ft across, which is still several traffic lanes wide, which means that anything alongside the lane would be detected, even if it weren't a hazard. One could employ a phased-array radar, but that would be significantly more expensive and processor-intensive than pretty much anything else in the car. A car going 40 mph requires a minimum of 76 ft of stopping distance, plus processor reaction time, so a 100-ft range is a must. Only the lidar and the camera array have sufficient resolution to adjudicate objects at such distances.
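For anyone who wants to check the arithmetic, here is a quick worked example. The roughly 7.8 degree beamwidth is implied by the 41-ft footprint at 300 ft, and the 0.7 g deceleration is my own assumption; neither is the spec of any particular radar or car.

import math

def beam_footprint_ft(range_ft, beamwidth_deg=7.8):
    # Cross-range width of the radar beam at a given range.
    return 2 * range_ft * math.tan(math.radians(beamwidth_deg / 2))

def stopping_distance_ft(speed_mph, decel_g=0.7):
    # Braking distance only (no reaction time): d = v^2 / (2a).
    v = speed_mph * 5280 / 3600          # mph -> ft/s
    a = decel_g * 32.2                   # g -> ft/s^2
    return v ** 2 / (2 * a)

print(round(beam_footprint_ft(300)))     # ~41 ft blob at 300 ft
print(round(beam_footprint_ft(100)))     # ~13-14 ft blob at 100 ft
print(round(stopping_distance_ft(40)))   # ~76 ft to stop from 40 mph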

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
jgKRI: Have you ever driven a car with adaptive cruise control or forward collision avoidance systems? They are very capable of overriding driver functions and emergency braking when they detect a stationary object, so overriding an autonomous system should be no different. They handle stationary objects being 'revealed' when cars between them and the object move out of the way with relative ease, because it's just a matter of comparing the targets in one frame to the targets in the next frame and identifying what the closing rate is.
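As a rough sketch of that frame-to-frame comparison (assuming a radar that reports range per tracked target each frame; the 20 Hz frame rate and the numbers are made up):

def closing_rate_fps(range_prev_ft, range_now_ft, frame_dt_s=0.05):
    # Positive result = the target is getting closer between frames.
    return (range_prev_ft - range_now_ft) / frame_dt_s

# Lead car 150 ft ahead, tracking at our speed: closing rate ~0 ft/s.
print(closing_rate_fps(150.0, 150.0))
# The lead car changes lanes and 'reveals' a stopped truck at 120 ft;
# at 40 mph (58.7 ft/s) the new target closes at roughly our full speed.
print(closing_rate_fps(120.0, 117.1))   # ~58 ft/s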

With these Level 1 (Driver Assistance) and Level 2 (Partial Automation) types of systems there generally aren't issues, because they supplement the driver, who is still actively engaged in driving the vehicle. Tesla's shortcoming is that they took a Level 2 type system and branded it as "Autopilot" instead of a safety or supplemental system. "The car can drive itself." Which gives drivers a false sense of security, and leads to them driving into "overhead" signs that are really at eye level, which wind up decapitating them. There are 5 classifications of vehicle autonomy. Tesla accomplished the second of five. And this is called Autopilot? They should have a class-action lawsuit levied against them for false advertising alone.

Levels 3 and 4 are Conditional Automation and High Automation respectively.


Keeping a car between the lines is easy (mostly). In the incident related to this thread, Uber is beta testing (and that is generous) a Level 5 system (full automation). They have no safeguards in place (which would be simple: eye tracking, steering inputs) to ensure that the "backup" driver(s) is paying attention. Level 5 cars need to be able to discern whether a car changed lanes nearby, a plastic bag blew across the road, a bird crossed in front of the car, a squirrel paused in the divider, a leaf blew around in the air, a pedestrian approached the road, etc. AKA all the things a human driver does 95+% of the time they are operating the vehicle. They certainly need to be able to identify a bicycle moving across 40'+ of clear pavement into their path and brake and/or swerve into something they have identified as a safe place to move into. Sometimes they will have to decide who dies and who lives. They will certainly have to know that that overhead sign is only 3' above the ground and closing fast.


Because no two automated-driving technologies are exactly alike, SAE International’s standard J3016 defines six levels of automation for automakers, suppliers, and policymakers to use to classify a system’s sophistication. The pivotal change occurs between Levels 2 and 3, when responsibility for monitoring the driving environment shifts from the driver to the system.

Level 0 - No Automation
System capability: None. • Driver involvement: The human at the wheel steers, brakes, accelerates, and negotiates traffic. • Examples: A 1967 Porsche 911, a 2018 Kia Rio.

Level 1 - Driver Assistance
System capability: Under certain conditions, the car controls either the steering or the vehicle speed, but not both simultaneously. • Driver involvement: The driver performs all other aspects of driving and has full responsibility for monitoring the road and taking over if the assistance system fails to act appropriately. • Example: Adaptive cruise control.

Level 2 - Partial Automation
System capability: The car can steer, accelerate, and brake in certain circumstances. • Driver involvement: Tactical maneuvers such as responding to traffic signals or changing lanes largely fall to the driver, as does scanning for hazards. The driver may have to keep a hand on the wheel as a proxy for paying attention. • Examples: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Tesla Autopilot, Volvo Pilot Assist.

Level 3 - Conditional Automation
System capability: In the right conditions, the car can manage most aspects of driving, including monitoring the environment. The system prompts the driver to intervene when it encounters a scenario it can't navigate. • Driver involvement: The driver must be available to take over at any time. • Example: Audi Traffic Jam Pilot.

Level 4 - High Automation
System capability: The car can operate without human input or oversight but only under select conditions defined by factors such as road type or geographic area. • Driver involvement: In a shared car restricted to a defined area, there may not be any. But in a privately owned Level 4 car, the driver might manage all driving duties on surface streets then become a passenger as the car enters a highway. • Example: Google's now-defunct Firefly pod-car prototype, which had neither pedals nor a steering wheel and was restricted to a top speed of 25 mph.

Level 5 - Full Automation
System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate. • Driver involvement: Entering a destination. • Example: None yet, but Waymo (formerly Google's driverless-car project) is now using a fleet of 600 Chrysler Pacifica hybrids to develop its Level 5 tech for production.
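For reference, the pivotal Level 2/3 distinction called out above boils down to who monitors the driving environment. A small illustrative lookup (my own summary of the table, not text from J3016):

SAE_LEVELS = {
    0: ("No Automation",          "driver"),
    1: ("Driver Assistance",      "driver"),
    2: ("Partial Automation",     "driver"),
    3: ("Conditional Automation", "system"),
    4: ("High Automation",        "system"),
    5: ("Full Automation",        "system"),
}

def monitor_of_environment(level):
    # Who is responsible for watching the road at a given SAE level.
    name, monitor = SAE_LEVELS[level]
    return "Level %d (%s): the %s monitors the environment" % (level, name, monitor)

print(monitor_of_environment(2))   # driver still monitors (e.g. Tesla Autopilot)
print(monitor_of_environment(3))   # system monitors; driver must take over on request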
 
"This is exactly what I'm saying. You do realize that if the radar system was able to override autonomous functions and emergency brake every time it detected a stationary object in front of the car, that the system would not function and the car would never move?"

Bullshit, or every car with a functioning AEBS would never move.

When do you stop predicting what the barrier might do and actually apply the brakes, or at least start slowing down? 200' away, 150' away, 100' away, 50' away, or just never, because it's predicted to move out of the path?

How is the radar tuned, exactly? What does Musk mean by "looks like an overhead sign"? The software is deciding that the data from a certain area represents an overhead sign, and because it's a sign it can be ignored.


"2 accidents of this type from a dataset of millions of miles driven does not indicate a lack of basic functionality."

If you want to tout how safe it is, then the important statistic should be how many times the human had to intervene to avoid an accident.

And other bad accidents shouldn't be counted just because no one died?
 
There are places in town where the homeless will walk out into traffic with no warning, and with no crosswalk. We drivers are, for some reason, expected to stop.
Most drivers know that and slow down in that area.

I don't believe GPS maps advise drivers to slow down, and maybe they should.

And to add to a point that someone made, in construction zones it is typical for the cones to direct traffic across several lane lines. And in a few cases there are lines that apply only if you are turning.

So what I hear is that the AI is only smart enough for the boring driving, but not smart enough for special cases. So how would I, as a car owner, know in what way the AI is confused?

What of other special cases? Lanes marked by overhead X's or arrows? School zone lights? High winds? Smoke or a grass fire? Tumbleweeds?
 