
Self Driving Uber Fatality - Thread II

Status
Not open for further replies.

drawoh

Mechanical
Oct 1, 2002
8,878
Continued from thread815-436809

Please read the discussion in Thread I prior to posting in this Thread II. Thank you.

--
JHG
 

Isn't it amazing that all these FUD ideas still keep cropping up on a professional forum, presented as "OMG, I'm the first person ever to have thought of this".

I think that might be my new sig.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
IRS said:
the probability of both occurring simultaneously is essentially on the order of 1.6E-9

I'm not disputing the claim, but can you tell me how you arrived at this value? Personally curious... one of my failings since childhood...

Dik
 
I brute-forced it based on the duty cycle squared. The duty cycle is essentially the probability that a single receiver's IFOV is illuminated, so the probability of it being illuminated by two sources simultaneously is roughly that value squared. As a practical matter, the scan timing is not random: if we assume that all lidars on the road scan the same way, i.e., they are synchronized in scan position, then the probability is either unity or zero. Since the duty cycle is quite low, the actual probability is closer to the duty-cycle value than it is to unity.
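The duty-cycle-squared reasoning can be sketched numerically. The duty cycle used below is an assumption chosen only because it reproduces the 1.6E-9 quoted earlier; the thread does not state the actual figure.

```python
# Assumed duty cycle: the fraction of time a single receiver's IFOV is
# illuminated by another lidar's beam. The 4e-5 value is illustrative,
# picked only because it reproduces the 1.6E-9 quoted in the thread.
duty_cycle = 4e-5

# If two lidars scan with uncorrelated timing, the probability that both
# illuminate the same IFOV at the same instant is roughly the product of
# the individual probabilities, i.e. the duty cycle squared.
p_both = duty_cycle ** 2
print(f"{p_both:.1e}")  # 1.6e-09
```

As noted above, this only holds for uncorrelated scan timing; synchronized scanning drives the probability to unity or zero.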

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Even the most sophisticated LIDAR does not come close to what the human eye and brain can achieve. For example, as I was driving down the interstate a few weeks ago, I saw at a glance a car on a side road about a quarter mile away and knew instantly several things - it was a sedan, it was moving north, it was not going to intersect my path, etc. How long would it take a LIDAR system to even see it, if it did at all? How long for the processor to figure out what it is, if it even could? Would it know the difference between a human-shaped bush blowing in the wind on the side of the road and a human who is standing still?

Our eyes can scan a wide field of view and recognize hundreds of objects in it, assessing all of them in a fraction of a second. We are so far beyond anything artificial, not only in our ability to scan the environment but also in our ability to anticipate and extrapolate, that I don't believe a self-driving system will ever be as capable as an attentive human driver. As such, I believe a computer driver will always be a less capable driver and a greater danger to everyone. That is certainly the case at the present time.
 
That's all interesting, but not relevant. The issue is reliability and consistency. Human drivers suck at being consistent; they fall asleep, they randomly decide to step on their brakes, they suddenly swerve across 4 lanes of heavy traffic because they suddenly realize that the offramp is only 1000 ft downrange. And you probably saw that sedan during daylight, but at night or in fog, human vision drastically degrades, while lidar or radar can keep pumping out returns. Moreover, humans just get bored and inattentive, as was the case with the Uber backup driver.

Much of what I see as traffic problems is due entirely to plain bad human driving habits; traffic would move much better if all cars behaved consistently. People drive at random speeds, varying both from driver to driver and from one second to the next.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
"I don't believe a self driving system will ever be as capable as an attentive human driver." Correct. On average, an AV will probably never match the driving ability of the best drivers and certainly won't match the accident record of the safest drivers. Luckily it doesn't have to. All it has to do is what it has conspicuously failed to do so far: drive better than an average driver and have fewer accidents on average. One debate that needs to happen is how much improvement is needed. Setting silly performance targets won't help.



Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
GregLocock said:
what it has conspicuously failed to do so far: drive better than an average driver and have fewer accidents on average.

If this is your metric, autonomous vehicles are safer by an order of magnitude.

The average American gets into an accident every ~175,000 miles.

Uber has accumulated more than 2 million miles of autonomous testing.

There have not been 11 accidents that we're aware of.

In my opinion, whether or not an accident results in a fatality is an extremely high-variance event, meaning that with regard to fatalities specifically, we won't know if autonomous vehicles are 'safer' for a long time, until billions or trillions of miles have been accumulated. And that is a LONG way off.
 
"Human drivers suck at being consistent;"

Apparently, so do the computers, unless you consider them to be consistently poor. So far, there has been a shockingly high number of wrecks among the ranks of computer drivers, considering how few total hours they have spent driving.

"...they randomly decide to step on their brakes, they suddenly swerve across 4 lanes of heavy traffic"

Better than not hitting the brakes when they should and running over pedestrians and swerving into concrete barricades.

"Moreover, humans just get bored and inattentive, as was the case with the Uber backup driver."

Of course they do, especially when they're not actually driving. The "Uber driver" wasn't a driver, he was a passenger.
 
HotRod10,

Look carefully at the numbers I posted above. If a vehicle is travelling at 100 kph, a robot can make a fairly aggressive stop in under 80 m. This is well inside the range of the LiDAR, based on my calculations. The object has been scanned several times, and its velocity and acceleration vectors are known. I don't think the LiDAR is functional a mile away, but it does not need to be. The LiDAR's range will limit the speed of the car.
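The stopping-distance claim above checks out with basic kinematics. The 5 m/s² deceleration below is an assumed "fairly aggressive" value (roughly 0.5 g), not a figure from the post.

```python
# Constant-deceleration braking distance: d = v^2 / (2a).
# The 100 kph speed matches the post; the 5 m/s^2 deceleration
# (~0.5 g) is an assumed value, not taken from the post.
v = 100 / 3.6        # 100 kph in m/s (~27.8 m/s)
a = 5.0              # assumed deceleration, m/s^2
d = v ** 2 / (2 * a)
print(round(d, 1))   # 77.2 -- inside the ~80 m claimed
```

Driver reaction time is omitted here, which is arguably fair for a robot that has already been tracking the object.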

If my robot detects an object on the side of the road that is moving out in front of the vehicle, the vehicle must take evasive action. The object can be a child, or a dog, or a garbage can being blown around by the wind. That may be a pile of leaves out there, but your robot does not know what is inside it. Until such time as the robot can reliably identify an undesirable terrorist, there is no excuse for not stopping.

--
JHG
 
An automated vehicle that swerves or slams on the brakes for a piece of paper blowing in the wind WILL cause a collision sooner or later.

It might be a collision where the automated vehicle is not legally at fault ... but in the real world, the driver (automated or otherwise) who slams on the brakes unexpectedly (to other drivers) in a travel lane of a motorway is the one who actually causes the ensuing collision, even if the one behind (which might be a fully loaded 18 wheeler which cannot stop on a dime) is the one legally at fault.
 
Strange that no one has discussed what happens after a self-driving car hits something. If the impact disables the car, there is no question. But what if it hits something that does not disable it? Does it know to stop and call 911, or does it drive off, committing a hit-and-run?

There is a difference, because hitting a person and driving off is a crime, whereas hitting a bird and driving off is not.

 
"Uber has accumulated more than 2 million miles of autonomous testing."

Assuming those 2 million miles are real-world miles, the fatality rate for the autonomous Uber cars is still about 50 times the national average for human drivers.

"The average American gets into an accident every ~175000 miles."

I'd like to see where you got that number, because according to the NHTSA in the US in 2012, there were 30,800 fatal crashes (33,561 fatalities), 1.634 million injury crashes (2.362 million injuries), and just under 4 million "property damage only" crashes. Total vehicle miles traveled (VMT) - just under 3 trillion Link. That's 1 fatality for every 88.5 million VMT, 1 injured person for every 1.25 million VMT, and 1 property damage only crash for every 750,000 VMT. Added together, that's 1 crash for every 467,857 VMT, not 175,000, so unless the crash rate has nearly tripled in the last 6 years, you're way off.
 
"If my robot detects an object on the side of the road that is moving out in front of the vehicle, the vehicle must take evasive action. The object can be a child, or a dog, or a garbage can being blown around by the wind."

Great! So the car will take evasive action into a vehicle in the adjacent lane because a garbage can blew into the street? That'll be popular. Btw, what happens when the child is standing still on the curb until half a second before your robot drives by and then runs into the street? Does your robot anticipate this completely illogical action as most humans would?

 
HotRod10,

Hitting the brakes is evasive action.

How about approaching a building a meter away from your road at 100 kph, with someone possibly standing behind it? At some point, the robot cannot see around a corner, and it must slow down just in case.

--
JHG
 
"...at night or in fog, human vision drastically degrades, while lidar or radar can keep pumping out returns."

Then perhaps, instead of taking the human's ability to assess, anticipate, and extrapolate out of the picture and trying to replace it with a far less advanced computer "brain", we should put our efforts towards supplementing and augmenting the human driver's ability to see at night or in fog.

The problem with the push for self-driving cars is that they are so far from being a solution. OTOH, if the technology were applied to solving the real problems - the limitations of human vision and inattentiveness - then real progress could be made. Some of it has already been implemented in a few cars: headlights that turn to follow the road, thermal imaging, blind spot sensors, lane departure warnings, etc. If the efforts were aimed towards helping the driver, rather than replacing him, the roads would become safer. Replacing human drivers with machines that are not up to the task makes the roads more dangerous. Maybe someday autonomous vehicles will be ready to be on the street, but until they are, they shouldn't be let loose on the unsuspecting public.
 
HotRod10 said:
I'd like to see where you got that number, because according to the NHTSA in the US in 2012, there were 30,800 fatal crashes (33,561 fatalities), 1.634 million injury crashes (2.362 million injuries), and just under 4 million "property damage only" crashes. Total vehicle miles traveled (VMT) - just under 3 trillion Link. That's 1 fatality for every 88.5 million VMT, 1 injured person for every 1.25 million VMT, and 1 property damage only crash for every 750,000 VMT. Added together, that's 1 crash for every 467,857 VMT, not 175,000, so unless the crash rate has nearly tripled in the last 6 years, you're way off.

Ok, my number was wrong. If we use yours, Uber's cars are still 'safer' by a factor of 4 or 5 instead of 10.

One pedestrian fatality does not a trend make.

If this woman had been injured instead of killed, suddenly the numbers for Uber's development program would be somewhere near the national average (1.25 million vehicle miles per injury) and all the hair pulling would be drastically subdued.

The difference between her being an injury and her being a fatality is a matter of a couple of feet one way or the other- by definition, a high variance event. There isn't anywhere near enough data yet to determine either way whether these vehicles are an improvement, and there won't be for a long, long time.
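The "high variance" point can be made precise with standard statistics (an assumption of this sketch, not something from the thread): an exact Poisson confidence interval shows how wide a range of underlying fatality rates is consistent with a single observed fatality.

```python
import math

# Exact 95% Poisson interval for the underlying expected count, given
# k = 1 observed fatality. Standard statistics, not data from the thread.
alpha = 0.05

# Lower bound: the mean mu with P(X >= 1 | mu) = alpha/2,
# i.e. 1 - e^-mu = 0.025, which has a closed form for k = 1.
lo = -math.log(1 - alpha / 2)

# Upper bound: the mean mu with P(X <= 1 | mu) = alpha/2,
# i.e. e^-mu * (1 + mu) = 0.025, found by bisection
# (the left-hand side is strictly decreasing in mu).
a, b = 0.0, 50.0
for _ in range(100):
    mid = (a + b) / 2
    if math.exp(-mid) * (1 + mid) > alpha / 2:
        a = mid
    else:
        b = mid
hi = (a + b) / 2

print(round(lo, 3), round(hi, 2))  # 0.025 5.57
```

Spread over ~2 million miles, that interval runs from roughly one fatality per 79 million miles down to one per 360,000 miles, a range that straddles the national average on both sides, which is exactly the point about not having enough data yet.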
 
It appears plausible that the Uber cars do have 2 million "semi-autonomous" miles, but when a human still has to take over to avoid crashing into something once every mile on average (at least it's improving), that doesn't inspire much confidence.

Link
 
"Uber's cars are still 'safer' by a factor of 4 or 5 instead of 10."

Only if you compare the Uber cars' fatality rate to the overall crash rate, the bulk of which are property damage only (PDO) crashes. We only heard about this one because it resulted in a fatality. How many PDO crashes have Uber cars had? If you can find it, you're better than I. It seems they're being pretty tight-lipped about that. I wonder why?
 
I fail to understand why pointing out that our data set is incomplete (which it is, you're 100% correct, and which is the point I've been trying to make) warrants support of any conclusion whatsoever.

This is an engineering forum, isn't it?
 