
Self Driving Uber Fatality - Thread II

Status
Not open for further replies.

drawoh

Mechanical
Oct 1, 2002
8,878
Continued from thread815-436809

Please read the discussion in Thread I prior to posting in this Thread II. Thank you.

--
JHG
 

HotRod10 said:
I know I may sound like a nut when I say this, but as I said before, an AI that sophisticated poses a greater danger to humanity than a few car wrecks.
I tend to agree... not with you sounding like a nut, but with the danger.
Speaking of nuts, I think feasible AV technology needs a playing field that resembles Disneyland's Autopia. Successfully tackling, with high probability, the entire real world of possibilities that a free-roaming automobile might encounter (not excluding nefarious traps and ruses that might be staged in the path of an AV) is a pretty big nut to crack. Bigger, probably, than is cost-effective with today's and foreseeable technology. And that brings us back to HotRod10's concern.

"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz
 
VE1BLL said:
If a system can handle a very busy street, then it should have enough processing power that it can pay attention to a single pedestrian on an otherwise deserted street.

I agree. My post was an explanation of how these systems work, not an explanation of the failure mode that led to the Uber accident.

VE1BLL said:
It'd be a design flaw if it was overzealous in ignoring the one and only moving object about to intersect, given that it should have had not much else to do.

I'd agree- and this type of flaw appears to be what all the fuss is about. And rightfully so.

 
"That's not been proven or even likely in the collision in question."

Sure, it's not proven, and the cause will likely never be publicly released unless the NTSB forces it out of Uber.

But if the system had decided there was something in the data indicating an important object that should be tracked, then it would have been tracking that object. If that object were being tracked as travelling into the car's path, the system would have done something to try to avoid it. So my expectation is that misclassification of the data is exactly what happened. My guess is that the data representing the woman was lumped in with the data representing the background vegetation. That data would then be filtered out as ignorable, since background vegetation isn't a concern when the task is driving a car down a street. Or, as I put it way back, the AI probably decided she was a bush, and bushes can be safely ignored.

I'd think it's way less likely for the system to have processed the data and determined there was an object in front of the car yet done nothing about said object being in front of the car.


Now, having said that data from background vegetation is likely being ignored, an interesting question arises: what would happen when the car approached a big tree limb or something else similar in its path?
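The "bush filter" idea above can be sketched in a few lines. Everything here is hypothetical (class names, record fields); it only illustrates how a class-based ignore list silently drops a misclassified detection before it ever reaches the tracker:

```python
# Hypothetical sketch of the class-based filtering described above.
# Class names and record fields are illustrative, not Uber's actual code.
IGNORED_CLASSES = {"background_vegetation", "road_surface"}

def objects_to_track(detections):
    """Keep only detections whose classification warrants tracking."""
    return [d for d in detections if d["cls"] not in IGNORED_CLASSES]

detections = [
    {"id": 1, "cls": "vehicle", "range_m": 45.0},
    # A pedestrian whose returns were lumped in with roadside vegetation:
    {"id": 2, "cls": "background_vegetation", "range_m": 20.0},
]
tracked = objects_to_track(detections)
# Only detection 1 survives. A misclassified pedestrian never reaches
# the tracker, so no avoidance maneuver is ever attempted.
```

Note the filter never looks at range or motion; once the class label says "ignore," position in the lane is irrelevant.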
 
I have been working this out in Octave. The car is travelling at 100kph.

Code:
         Velocity:  27.8 m/s
Stopping distance:  78.7 m
       FOV across:   698e-3 rad
           FOV up:   175e-3 rad
 Laser pulse rate:  1.00e6 Hz
  LiDAR scan rate:  10.0 Hz
Stopping distance:  78.7 m
        LiDAR FOV:  2.21e-3 rad
       LiDAR scan:  632x158 = 99856
  LiDAR spot size:   174e-3 m
Sanity Check: Laser TOF should be shorter than laser period. 
   Time of flight:   524e-9 s
     Laser Period:  1.00e-6 s
           Factor:  1.91  Okay.

I am getting a much smaller spot size than I thought. This is good. The attached code is Octave, but it is supposed to execute in MathCAD.

--
JHG
 
 http://files.engineering.com/getfile.aspx?folder=bba86c84-efc6-4470-9c88-84b45718d903&file=lidar.m
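For readers without Octave handy, the headline numbers in the listing can be reproduced in Python. The ~0.5 g braking deceleration is an assumption inferred from the 78.7 m stopping distance; the per-point FOV is taken directly from the listing:

```python
# Recomputing the headline numbers from the Octave listing above.
C = 3.0e8                      # speed of light, m/s
v = 100 / 3.6                  # 100 kph -> 27.8 m/s
a = 4.9                        # assumed braking deceleration, m/s^2 (~0.5 g)
d_stop = v**2 / (2 * a)        # ~78.7 m

pulse_rate = 1.0e6             # laser pulse rate, Hz
scan_rate = 10.0               # frames per second
pts_per_frame = pulse_rate / scan_rate   # 100,000 points per frame (632x158)

ifov = 2.21e-3                 # LiDAR FOV per point, rad (from the listing)
spot = d_stop * ifov           # ~0.174 m spot size at the stopping distance

tof = 2 * d_stop / C           # ~524 ns round-trip time of flight
factor = (1 / pulse_rate) / tof   # ~1.91: return arrives before the next pulse

print(f"stop {d_stop:.1f} m, spot {spot:.3f} m, "
      f"TOF {tof * 1e9:.0f} ns, factor {factor:.2f}")
```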
> Even a bush, particularly one that's 4 feet tall can do serious damage to the car, and likely to give the driver whiplash.

> No sane person would assume that a bush that big won't cause damage to the car. Some bushes are designed to hide even more solid things, so again, potential for serious damage to the car

> Bushes don't suddenly appear in the middle of traffic lane, and if they did, they might have safety curbs that could cause damage to the car.

> This bush moved across two traffic lanes in the time the lidar should have been able to detect the target. That potentially implies a bush on a cart, which is a risk for serious damage to the car

> Because of the latency from lidar frame to lidar frame, each frame results in a new detection, so the AI didn't ignore "a" bush, it ignored multiple bushes popping out of the pavement, which implies trapdoors that might cause serious damage to the car.

Conclusion: the AI wanted to collide with the bush because it wanted to hurt itself.




TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
You're free to conclude whatever silly explanation you want

As to your points: if the car had decided the data indicated there was an object in front of it, then it would have taken avoidance measures. Or, working backwards, the car didn't take any avoidance measures, so the most likely conclusion is that the sensor data processing algorithm didn't return a result indicating an object of concern was in front of the car. As you've pointed out already, the sensor data will always return data from the objects in front of the car (the road itself being the minimum), so the processing does have to determine what is of concern, not just that something is there.

There is no reason to expect the processing decision to classify the data as background vegetation would change over time. If the mistake was made once, it can just as easily be made multiple times.

And to go back to the simplistic, if bushes are only programmed as a thing that can be ignored then they will be ignored. It doesn't matter where the bush is, it is a bush and the AI was told to ignore bushes so it does as it was told and ignores bushes.

And the car is not a person; I'd think that fact was well established by now. It can't do anything that it hasn't been programmed to do, or hasn't been "taught" to do, if you prefer to say that. It doesn't know that bushes could cause serious damage, or that bushes could be on wagons, or that bushes could have curbs around them, if it hasn't been programmed to know those things.

Conclusion: You're just posting silly crap to be argumentative.
 
lionelhutz said:
You're free to conclude whatever silly explanation you want

Who are you directing your message to?

Dik
 
I just noticed that the HDL-64E that was cited in the original thread is from March 2008. The current version of the HDL-64E is here, which states that it can do 1.3 Mpps, with 2.5x better range resolution.

re: Time of flight: 524e-9 s
Laser Period: 1.00e-6 s
Factor: 1.91 Okay.

"Factor" needs to be based on the maximum range, which is 120 m, so the TOF is 801 ns. Nevertheless, there's no interference with the next pulse, because the PRF of each laser is only about 20 kHz (1.3 MHz/64). Although the top and bottom blocks fire one pulse each simultaneously, there's no interference because the spots are about half the vertical FOV apart.

The HDL-64E's vertical FOV and IFOV are fixed by the optomechanical design. The horizontal angular resolution is dictated by the frame rate, ranging from 1.55 mrad at 5 Hz to 6.19 mrad at 20 Hz. But the horizontal IFOV of the receivers needs to accommodate the largest angular resolution, so it's at least 6.2 mrad, which really only affects the noise floor of the receiver. The receiver IFOV might be even larger to accommodate the walking of the laser return due to the scan speed of the laser head, although it's possible that the lasers are aligned to the leading edge of the receiver IFOV, so that at max range the return ends up on the trailing edge of the receiver IFOV.

Note that while the horizontal FOV is seemingly programmable, that does not change the firing rate; all that happens is that the unit does not transmit data from outside of the programmed FOV. Note also that the HDL-64E does not process the data at all; calibration and point-cloud processing are done by a user-supplied external processor.
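A quick arithmetic check of the interference argument, using the aggregate pulse rate and 120 m maximum range cited above:

```python
# Sanity check: per-laser PRF vs. time of flight at maximum range.
C = 3.0e8                                  # speed of light, m/s
max_range = 120.0                          # m
tof_max = 2 * max_range / C                # 800 ns round trip at max range

aggregate_pps = 1.3e6                      # 1.3 Mpps across all lasers
n_lasers = 64
prf_per_laser = aggregate_pps / n_lasers   # ~20.3 kHz per laser
period_per_laser = 1 / prf_per_laser       # ~49 us between pulses of one laser

# Even at maximum range, a return arrives ~60x sooner than the same
# laser fires again, so there is no pulse-to-pulse interference.
margin = period_per_laser / tof_max
print(f"TOF {tof_max * 1e9:.0f} ns, per-laser period "
      f"{period_per_laser * 1e6:.1f} us, margin {margin:.0f}x")
```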

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
My point is that bushes can never be allowed to be ignored, because even if they aren't in the current path, they could be later on. Likewise, the car in the adjacent lane can never be ignored: you might need to perform an emergency maneuver, and it would be absurd to have ignored that car previously.

I don't think for a second that the processor would ever "ignore" any sizable target, simply because it never knows enough about what might be hiding in the shadow of a detected object. To wit, we often see sports teams crashing through sheets of paper, but that's because they have explicit and verified knowledge that there's nothing on the other side of the paper. So even if the lidar and the object processor detected a large sheet of paper in the car's path, they cannot ignore it, because they can't see the boulder behind the paper.

If you want a more plausible explanation to believe in, it would be more likely that the processor got confused and placed all the detected objects in the wrong places in its world model. Or, the processor managed to erroneously program the HDL-64E's FOV to not include the front of the vehicle, so that it never received any detections from the lidar at all.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
So my suggestion that the reflectance threshold was set too high is not the way things are done? Just checking.

I assume that at some deeper level an L4 car assembles an integrated 2D world from the various sensors, and that this integrated picture is what the AV driver actually uses to decide whether to brake, steer, or accelerate. The other way is to build behaviours up: a braking module, a lane following/changing module, and a speed control module, all working off different sensors as needed. That may be more 'evolutionary' in approach; perhaps that is how Tesla's system works.
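A toy contrast of the two architectures described above. All functions and names here are illustrative stubs, not any vendor's actual design:

```python
# (a) Centralized: all sensors feed one fused world model; one planner.
def fuse(lidar, radar, camera):
    """Merge per-sensor object sets into one world model (set union here)."""
    return lidar | radar | camera

def centralized_step(lidar, radar, camera):
    world = fuse(lidar, radar, camera)        # one integrated picture...
    return "brake" if "pedestrian" in world else "cruise"  # ...one decision

# (b) Behavioral: independent modules each watch the sensors they need,
#     and a simple priority arbiter combines their outputs (brake wins).
def behavioral_step(lidar, radar, camera):
    votes = []
    votes.append("brake" if "pedestrian" in lidar else "cruise")  # brake module
    votes.append("cruise")                                        # speed module
    return "brake" if "brake" in votes else "cruise"

print(centralized_step({"pedestrian"}, set(), set()))  # brake
print(behavioral_step({"pedestrian"}, set(), set()))   # brake
```

In (a) a sensor failure degrades one shared model; in (b) each behavior fails independently, which is part of why the approaches evolve so differently.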

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
"My point is that bushes can never be allowed to be ignored"

What the system should be doing and what it is doing are two different things.

Ignoring the data that appears as vegetation on the sides of the road and continuing to look for other objects on the side of the road can both be accomplished as the data is processed.

If the processor had mistakenly shifted its "view" of the world, then the car would not have been properly driving down the center of the lane. Wrongly programming the LIDAR unit doesn't sound more probable than a data processing error.
 
LionelHutz said:
As you've pointed out already, the sensor data will always return data from the objects in front of the car

There's an important distinction to be made here: the hardware will always receive returns from whatever is in front of the car. But based on how the hardware is calibrated and how the processing is set up, it is possible for the system not to report an object directly in front of the car. As an example, take Greg's point about the reflectance threshold of the LIDAR array being set too high, resulting in a low-reflectance object not being passed on to the processor regardless of distance or closing speed.

Depending on exactly how all 3 systems are calibrated, I think it is possible for a 'coffin corner' to exist where certain ambient conditions combined with an object or pedestrian bearing certain characteristics could cause all three systems to fail to correctly determine if the detected object was necessary to track.

I don't think that's what happened here but what I think is, obviously, just conjecture.
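The reflectance-threshold scenario can be sketched as follows. The threshold value and record fields are hypothetical; the point is that this filtering happens at the sensor, before range or closing speed are ever considered:

```python
# Sketch of a sensor-side reflectance threshold (values hypothetical).
REFLECTANCE_THRESHOLD = 0.05    # returns below this fraction are discarded

def hardware_filter(returns):
    """Model of in-sensor filtering: weak returns never reach the processor."""
    return [r for r in returns if r["reflectance"] >= REFLECTANCE_THRESHOLD]

returns = [
    {"obj": "road_sign", "reflectance": 0.80, "range_m": 60.0},
    {"obj": "dark_clothing", "reflectance": 0.02, "range_m": 25.0},
]
passed = hardware_filter(returns)
# The low-reflectance object is dropped at the sensor regardless of how
# close it is, so the processor never gets a chance to track it.
```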
 
Now that I think about it... was she pushing the bike in front of her, with hands on the handlebars?

There are certain objects which will create data that the system will have difficulty resolving into an object at all- someone (IR I think) already mentioned that a bicycle is potentially invisible to LIDAR depending on distance.

If the processing system receives a frame from the sensor array that contains an area of ambiguous data, it is highly likely for there to be a routine which effectively crops out this portion of the frame (by truncating the output of the frequency domain conversion).

This is a necessary function, so that the system doesn't immediately fail if (when), for example, a rain drop hits the optics and causes a blurry spot on the images being processed.

I'm wondering if the front half of the bike that was visible to the sensor array (front wheel/tire, front half of the frame) was detected by the system but not resolvable, leading the system to respond by truncating this object out of each frame as it moved. This would, in turn, mean the system didn't 'incorrectly process' the detection data for the bicycle; it never even tried.

This failure mode, if realistic, is still the result of system design error by humans. The truncating operation is necessary, but the conditions which cause this truncation, and the width of the window around the ambiguous data to be truncated, are determined by the programmer.
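A minimal sketch of the truncation idea, assuming a per-cell ambiguity score and a programmer-chosen padding window (both hypothetical):

```python
# Ambiguous regions (e.g. a blurry spot from a raindrop on the optics)
# are masked out of each frame before object resolution is attempted.
import numpy as np

AMBIGUITY_LIMIT = 0.5    # variance above this marks a cell ambiguous
WINDOW_PAD = 1           # cells of padding cropped around an ambiguous cell

def mask_ambiguous(frame_variance):
    """Return a boolean mask of cells to exclude from object resolution."""
    mask = frame_variance > AMBIGUITY_LIMIT
    # Dilate the mask by WINDOW_PAD cells: a wide window can swallow
    # valid returns adjacent to the ambiguous region.
    padded = np.zeros_like(mask)
    for r, c in zip(*np.where(mask)):
        r0, r1 = max(r - WINDOW_PAD, 0), min(r + WINDOW_PAD + 1, mask.shape[0])
        c0, c1 = max(c - WINDOW_PAD, 0), min(c + WINDOW_PAD + 1, mask.shape[1])
        padded[r0:r1, c0:c1] = True
    return padded

frame = np.zeros((5, 5))
frame[2, 2] = 1.0                 # one ambiguous cell in the center
excluded = mask_ambiguous(frame)  # a 3x3 block is cropped, not just one cell
```

The width of that window is exactly the programmer-determined parameter the paragraph above refers to: too wide, and real returns next to the ambiguity get cropped along with it.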
 
The bike isn't relevant; the pedestrian, by herself, should have presented a more than valid and substantive target. The lidar is specified to have at least 60-m range against pavement with 20% reflectivity. A person, wearing black clothing, should be very visible at 60 ft; the lidar should be able to see reflected signal from a 2% reflective surface.


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
IRStuff said:
The bike isn't relevant; the pedestrian, by herself, should have presented a more than valid and substantive target. The lidar is specified to have at least 60-m range against pavement with 20% reflectivity. A person, wearing black clothing, should be very visible at 60 ft; the lidar should be able to see reflected signal from a 2% reflective surface.

Unless the bicycle created some ambiguity which in effect 'confused' the processor, and caused the woman to be truncated out because of a wide error clearing window.

Not highly probable, admittedly- but not impossible.
 
IRstuff,

I am not looking at a specific LiDAR. I am describing a generic one appropriate for a robot car. In my model, at 100kph, the robot must collect enough info on an object 80m away in case it must come to a full halt, without being rear-ended.[smile]

At speed, a 360° LiDAR will have poor resolution at the critical decision distances. You need an additional forward LiDAR with a limited field of view and high resolution. The scanner FOV is a function of how you design your scanner.
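The resolution argument can be made concrete with the earlier model's numbers (1 MHz pulse rate, 10 Hz scan, 158 scan lines): for a fixed pulse budget, narrowing the horizontal FOV tightens the point spacing at the 80 m decision distance.

```python
# Cross-range point spacing vs. horizontal FOV for a fixed pulse budget.
import math

d = 80.0                          # m, decision distance at 100 kph
pts_per_line = 1.0e6 / 10 / 158   # pulses/s / frame rate / scan lines, ~633

def point_spacing(fov_deg):
    """Cross-range distance between samples at range d for a given FOV."""
    angular_res = math.radians(fov_deg) / pts_per_line
    return d * angular_res

# Panoramic scanner vs. a narrow forward scanner, same pulse budget:
wide = point_spacing(360.0)    # ~0.79 m between points at 80 m
narrow = point_spacing(40.0)   # ~0.09 m between points at 80 m
print(f"360 deg: {wide:.2f} m, 40 deg: {narrow:.2f} m at {d:.0f} m range")
```

At roughly 80 cm between points, a 360° unit could miss a small child entirely at the decision distance; the 40° forward unit puts about nine points across the same width.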

--
JHG
 
jgKRI,

I was the one who pointed out that the LiDAR would see through the bicycle. It would get a relatively weak return from the bicycle, and then it would see the background behind the bicycle. This is not necessarily a bad thing. A bicycle potentially has a unique signature that tells the AI how fast it is capable of moving.



--
JHG
 
and, a more sobering thought... They won't be the last fatalities...

Dik
 
"at 100kph"

In the US, we drive WAY faster ;-) This morning I got buzzed by a motorcycle doing at least 100 MPH

When we worked on OASYS, we only had a forward-looking lidar with a 50-deg x 25-deg FOV. Turning proved quite scary. If all you ever did was straight line travel, side looking wouldn't come up as an issue. But if you decide to slow down and turn, the new detected objects AND the previously detected objects become significant, particularly with regard to low objects, since they tend to fall into the blind zone of the lidar, which is about 9 ft in radius.

In order to make full use of the pulse rate of the lasers in a reduced FOV, you'd have to give up on the 360 scan, which tends to drive the design to a mirror scanner. However, mirror scanning systems tend to be noticeably larger in volume, and you'd need a minimum of two lidars, one on each side of the car.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
IRstuff,

Your LiDAR will impose a maximum speed on your car. If the LiDAR cannot see and identify a hazard, the car must be moving slowly enough that it can react when the hazard becomes visible. Fog and curved roads are both an issue.

I figure that a 40° system pointed straight forward, plus a 360° system, should work. You would need a way to identify things right next to you. I am focused on three-year-old kids in front of you. If you are pulling out onto a highway, the things that will hit you are large enough to be detected by a lower-resolution LiDAR.
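The "LiDAR imposes a maximum speed" point can be written as a formula: the car must be able to stop within its sight distance, after a reaction delay. The deceleration and reaction time below are assumed values.

```python
# v_max = sqrt(2 * a * (d_see - v_max * t_react)), solved by
# fixed-point iteration. a and t_react are assumed values.
import math

def max_speed(d_see, a=4.9, t_react=0.5):
    """Largest speed (m/s) that can stop within sight distance d_see (m)."""
    v = 0.0
    for _ in range(50):   # fixed-point iteration converges quickly here
        v = math.sqrt(max(2 * a * (d_see - v * t_react), 0.0))
    return v

for d in (30, 60, 120):   # e.g. dense fog vs. clear, straight road
    print(f"see {d:3d} m -> max {max_speed(d) * 3.6:5.1f} kph")
```

With these assumptions, a 30 m sight distance caps the car at roughly 54 kph, which is the fog and curved-road concern in numbers.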

--
JHG
 