Self Driving Uber Fatality - Thread I


drawoh (Mechanical)

San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different eco-system from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of their time (a quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

I just meant it was a mixed blessing in that it was pretty graphic and shows their product causing the end of someone's life. That, in and of itself, has some inherent downside to it. I agree that on the whole it has worked in their favor though.
 
Spartan5,

One of my theories is that the camera contrast ratio was not sufficient to transition from dark areas to fully lit areas. People here are posting material showing that cameras with a sufficient contrast ratio are available. That is not good for Uber.

--
JHG
 
That assumes that:
a) it even needed a high contrast ratio
b) there was even a high contrast situation

The other dash cam videos show that a high contrast situation didn't even exist, so I'm tempted to think that Uber released video that was purposely altered in contrast to make it appear as if the accident was unavoidable. But it wasn't, because the radar and lidar that were supposedly installed on this car don't require ambient light at all; i.e., had there been total darkness, the pedestrian should still have been detectable. Even had there been a searchlight blinding the camera, the accident ought not to have occurred.

The fact that people are lamenting the video is a strong indication of how big a bonus the person at Uber who released the video will be getting this year.

The video is completely and totally irrelevant to the collision that the car ought to have avoided with ease.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
The video proves she was on the road and was travelling in a constant direction so the sensors had an unobstructed "view" of her for enough time that the accident should never have occurred.
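
As a rough sanity check on that claim, here is a short Python sketch comparing stopping distance against a plausible detection range. Every number in it (the assumed speed, detection range, reaction delay, and deceleration) is my own illustrative assumption, not a figure from the investigation.

[code]
# Back-of-the-envelope check: could a car braking from a plausible lidar
# detection range have stopped in time? All numbers are assumptions for
# illustration, not figures from the investigation.

def stopping_distance(speed_mps, reaction_s=0.5, decel_mps2=7.0):
    """Distance covered during the reaction delay plus braking to a stop."""
    return speed_mps * reaction_s + speed_mps**2 / (2.0 * decel_mps2)

speed_mps = 40.0 * 0.44704      # assumed ~40 mph, converted to m/s
detection_range_m = 60.0        # assumed usable detection range at night

print(f"Stopping distance ~{stopping_distance(speed_mps):.0f} m "
      f"vs. assumed detection range {detection_range_m:.0f} m")
# With these assumptions the car needs roughly 32 m to stop, well inside
# the range at which lidar or radar should have returned the pedestrian.
[/code]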

It could be said the video shows the backup driver might not have been able to react, but I'm not buying that the backup driver could only see what that video shows. The glances the backup driver was giving might indicate how far she could actually see ahead, maybe not clearly, but still with some visibility.

By the reports it didn't seem like much time elapsed between the crash and the police viewing the video.
 
So, the "design plan" of these self driving systems is not looking for physical objects in the path of the car but rather to only deal with things that are classified as objects that can be in the path of the car?
 
There needs to be a lot more development and testing before an AV is allowed on a public roadway. I think it will become very obvious in any remotely realistic "real world scenario" that the AI is nowhere near sophisticated enough to process the massive amount of input presented by the real world. I don't believe AI will ever be able to do the extrapolation required to drive as well as a human CAN. Obviously, even what's available now surpasses what human drivers sometimes DO, but "better than an oblivious idiot" is a ridiculously low and unacceptable standard.

Rather than attempting the impossible with AI, if the LIDAR, etc. were incorporated into regular automobiles, so that drivers could "see" what is not illuminated by the headlights, that would actually improve safety. Especially if such systems were incorporated as a heads-up display, showing the objects where they are from the driver's perspective. Some vehicles already have thermal imaging, but there is so much more that could be done.
 
"I don't believe AI will ever be able to do the extrapolation required to drive as well as a human CAN"

I probably agree, but the point is irrelevant. On average human drivers don't perform anything like as well as 35-60 year old human drivers. So by your logic nobody younger than 35 or older than 60 should drive. If AVs are (for the sake of argument) 4 times safer than the average human driver, that would be a big step forward. They still wouldn't be much safer than good drivers. But that'd be a big step forward for traffic fatalities.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
Some of these accidents (system design failures) clearly show that the claimed capabilities have been over-hyped.

Hopefully this doesn't continue until Autonomous Vehicles become another chapter in the Engineering Ethics textbooks.

 
"So, the "design plan" of these self driving systems is not looking for physical objects in the path of the car but rather to only deal with things that are classified as objects that can be in the path of the car?"

They're basically the same thing. We "classify" objects based on what our senses (sensors) tell us, which is why we are often misled by optical illusions. Almost all optical illusions are based on classification quirks and shortcuts of our vision processing in the brain. Looking for range returns from lidar or radar and shapes in the camera system is what allows the AI to determine whether it's detecting a physical object or not.
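
To make that concrete, here is a minimal sketch of what "detecting a physical object" from range returns might look like: a generic point-clustering step using scikit-learn's DBSCAN. This is my own illustration, not Uber's (or anyone's) actual pipeline.

[code]
# Minimal illustration: cluster lidar range returns into "something is
# physically there" blobs before any attempt to classify what they are.
# A generic sketch, not the pipeline any particular AV uses.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_obstacles(points_xy, eps=0.5, min_points=5):
    """Group 2D lidar returns (metres, vehicle frame) into obstacle clusters.

    Returns a list of cluster centroids; classification comes later, if at all.
    """
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points_xy)
    return [points_xy[labels == k].mean(axis=0) for k in set(labels) if k != -1]

# A blob of returns ~20 m ahead is an obstacle whether the classifier later
# calls it "pedestrian", "bicycle", or "unknown".
returns = np.random.normal(loc=[20.0, 1.0], scale=0.2, size=(30, 2))
print(detect_obstacles(returns))
[/code]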

However, this was not a case of some oddball optical illusion misleading the processors, unless the pedestrian was somehow in a "stealth" mode or cloaked with a Klingon cloaking device. The processor did something anomalous, maybe like us having a stroke.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Maybe a black cow cloaking device. Seems to fool many human drivers every year.

Pedestrians have some ability to make themselves more visible, although it's not very common that they do so.

Dark colored clothes, no reflectors on the bike, crossing outside normal crossing zones, etc. mean a collision is not entirely the fault of the driver.
Same thing as warp factor 10 drivers with no lights at night, in black cars.

I think at some point people need to understand they need to make an effort to be seen.

Not that I trust self driving cars either.
 
Certainly, situational awareness is something that everyone should cultivate. Regardless of whether the Uber car was behaving correctly, assuming that the car was going to stop or otherwise avoid the collision would have been foolhardy. This pedestrian certainly seemed to be oblivious to their impending doom, and had they been more mindful, they might have avoided the encounter. But, had they done that, we wouldn't know about this failure until much later or in a much worse situation.



re. cloaking device -- In Star Trek Discovery, the Klingons are using a cloaking device at least 10 years before they supposedly got the technology from the Romulans, and before Kirk encountered the Romulans with their cloaking device.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
No, they are not the same thing. It seems like neither car was doing any kind of operation that simply detected a physical impediment in its path, regardless of what it was classified as.
 
LionelHutz said:
It seems like neither car was doing any kind of operation that simply detected a physical impediment in its path, regardless of what it was classified as.

The pedestrian who was struck was not in the car's path until a very short time before the collision. Despite what you might think, the reaction time of an autonomous vehicle is not on the millisecond scale. Autonomous technology is not 'better' than human drivers (if you believe it is better at all) because of speed.

LionelHutz said:
So, the "design plan" of these self driving systems is not looking for physical objects in the path of the car but rather to only deal with things that are classified as objects that can be in the path of the car?

Objects have to be detected before they can be classified. Yes, these cars are constantly scanning for obstacles along the planned travel path.
Detection and reaction become difficult for a lot of reasons, some of them covered in this thread, and some not. Without knowing the exact technology employed in Uber's system, it is hard to know what might have gone wrong; but I can tell you with certainty that at the current level of autonomous vehicle technology, even with cutting-edge equipment and cost-no-object engineering applied, expecting a vehicle to operate with a complete understanding of EVERY moving object in its surroundings and completely accurate evaluations of those objects' trajectories is wholly unrealistic.
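
For a sense of what "evaluating an object's trajectory" against the planned path involves, here is a toy sketch under my own simplifying assumptions (constant-velocity prediction, a straight travel path, a fixed lane width); real systems are far more elaborate than this.

[code]
# Toy sketch: when does a tracked object's predicted path enter the car's
# lane? Constant-velocity prediction and a straight travel path are my own
# simplifying assumptions, not how any production system works.

def time_to_path_conflict(obj_pos, obj_vel, lane_half_width=1.5,
                          horizon_s=5.0, dt=0.1):
    """Earliest predicted time (s) the object enters the car's lane.

    obj_pos: (x, y) metres ahead / lateral offset in the vehicle frame.
    obj_vel: (vx, vy) metres per second, assumed constant over the horizon.
    Returns None if no conflict is predicted within the horizon.
    """
    x, y = obj_pos
    vx, vy = obj_vel
    t = 0.0
    while t <= horizon_s:
        if abs(y + vy * t) <= lane_half_width and (x + vx * t) > 0.0:
            return t
        t += dt
    return None

# A pedestrian 30 m ahead and 4 m to the side, walking toward the lane
# at 1.4 m/s, is predicted to enter the lane in roughly 1.8 s.
print(time_to_path_conflict((30.0, 4.0), (0.0, -1.4)))
[/code]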

There is a reason these cars are running around with people behind the wheel.
 
jgKRI said:
There is a reason these cars are running around with people behind the wheel.

That doesn't seem to be very useful. This operator clearly was not paying attention, and the Tesla drivers have not been either. I think it's very difficult to stay vigilant when you have been relieved of all responsibilities except the life-saving, last-second override.

I think all these self driving vehicles should be preceded by a man on foot waving a red flag.

----------------------------------------

The Help for this program was created in Windows Help format, which depends on a feature that isn't included in this version of Windows.
 

GregLocock: "If AVs are (for the sake of argument) 4 times safer than the average human driver, that would be a big step forward."

Perhaps, but perhaps not. If a human driver does something really stupid and irresponsible, at the very least that person's license is revoked; they are taken off the roadways. If one AV does something stupid, do all of the AVs using the same programming get banned from the roadways?

There's also a long way to go before an AV comes close to being as safe as a good driver. I believe if AI ever becomes powerful enough to process all the input from a real world situation, or sophisticated enough to separate the important input from the extraneous, we will have created a greater danger than we have mitigated (the Terminator movies come to mind).

I think the expansion of technologies that ASSIST the human driver, rather than replacing the driver, is a much better and more attainable way to improve traffic safety. It may not be the money-saving opportunity for Uber or UPS, like AVs are, but when did making Uber profitable become a public concern?

GregLocock: "But that'd be a big step forward for traffic fatalities."

IF your assumption could be proven true, perhaps there would be an overall reduction, but proving such a system to be not only smart enough, but reliable enough, is a tall order. What happens when some terrorist finds a way to hack into them?

There's also the matter of liability. Who will be responsible the next time an AV fails to recognize a pedestrian in the roadway, or a truck crossing the vehicle's path? Tesla backed off of their claims rather quickly, claiming their "auto-pilot" is only a driver assistance feature, not intended to be autonomous.
 
"I think all these self driving vehicles should be proceeded by a man on foot waving a red flag."

I wouldn't want to be that guy.
 
dgallup said:
That doesn't seem to be very useful. This operator clearly was not paying attention, and the Tesla drivers have not been either. I think it's very difficult to stay vigilant when you have been relieved of all responsibilities except the life-saving, last-second override.

I would very, very strongly agree with you.

But such is the current state of the technology.
 
HotRod10 said:
Perhaps, but perhaps not. If a human driver does something really stupid and irresponsible, at the very least that person's license is revoked; they are taken off the roadways. If one AV does something stupid, do all of the AVs using the same programming get banned from the roadways?

If a safety feature on an aircraft is known not to work, the aircraft is grounded.

Ignore the technology for the moment. There are supposed to be five or six categories of intelligent car. From a legal point of view, I see two.

[ol]
[li]The car is equipped with a microphone and/or keyboard. You get in and you tell it where you want to go. It takes you there. If the vehicle causes an accident, the manufacturer of the vehicle is responsible.[/li]
[li]You are the driver. You grip the steering wheel. You control the gas (power?) pedal and brakes, and you are occupied full-time paying attention to driving. The robot, if present, is a back-seat driver, with some ability to nudge controls.[/li]
[/ol]

In the first case, I cannot see you being allowed to own the vehicle. If I were the manufacturer, I would own the car and the maintenance facility. Anything with machinery or controls would be inside a locked enclosure.

The safety observer is not of much use if they are not in control and continuously paying attention. Accidents happen way too quickly for an observer to look up from a book or movie.

--
JHG
 