
Self Driving Uber Fatality - Thread I


drawoh

Mechanical
Oct 1, 2002
8,911
San Francisco Chronicle

As noted in the article, this was inevitable. We do not yet know the cause. It raises questions.

It is claimed that 95% of accidents are caused by driver error. Are accidents spread fairly evenly across the driver community, or are a few drivers responsible for most accidents? If the latter is true, it creates the possibility that there is a large group of human drivers who are better than a robot can ever be. If you see a pedestrian or cyclist moving erratically along the side of your road, do you slow to pass them? I am very cautious when I pass a stopped bus because I cannot see what is going on in front. We can see patterns, and anticipate outcomes.

Are we all going to have to be taught how to behave when approached by a robot car? Bright clothing at night helps human drivers. Perhaps tiny retro-reflectors sewn to our clothing will help robot LiDARs see us. Can we add codes to erratic, unpredictable things like children and pets? Pedestrians and bicycles eliminate any possibility that the robots can operate on their own right of way.

Who is responsible if the robot car you are in causes a serious accident? If the robot car manufacturer is responsible, you will not be permitted to own or maintain the car. This is a very different eco-system from what we have now, which is not necessarily a bad thing. Personal automobiles spend about 95% of their time (a quick guesstimate on my part) parked. This is not a good use of thousands of dollars of capital.

--
JHG
 

I don't disagree that some sort of testing to verify a minimum threshold of capability needs to be performed, particularly after this.

However, I'm struggling with how trivial this scenario ought to have been. This is like worrying about how well a kindergartener can run when they can't even manage to tie their shoelaces.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Capability tests will be "gamed". The systems will be programmed to pass them. The problem is for the systems to react properly to situations that they were not explicitly programmed for, and those will be the situations that for whatever reason (sometimes seemingly inexplicable to humans) fell through the cracks. Situations like, oh, driving underneath an overhead sign board through the gap between the back of a truck and some other unknown moving object about 40 feet behind it, or failing to recognize a bicycle that is being pushed rather than ridden as something that perhaps shouldn't be hit.

The above statement that drivers (whether human or otherwise) should be aiming for empty road is an extremely important one. It's still not without its share of headaches.

Does a pothole disqualify empty road? A little pothole? A big one? A sinkhole? Where's the threshold between stopping/swerving and driving over or through it?

Does a piece of paper ahead disqualify empty road? A small piece of debris? A truck tire tread? A squirrel? A cat? A dog? A small deer? A moose? A small human? A big one? Where's the threshold? You do not want self driving cars dodging a plastic bag or stopping in a traffic lane of a motorway.
 
"Capability tests will be "gamed". "

But, now that you know that they're going to want to game the system, there are other approaches to the problem, such as demanding source code and program memory inspection, or randomly selected scenarios. Even now, we demand that executables can be built in a traceable, repeatable fashion, simply so that we can avoid other silly problems like builds that cannot be reproduced.
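
As an aside, here is a minimal sketch of what a regulator-seeded random scenario draw could look like (everything here, including SCENARIO_SPACE and draw_scenarios, is made up purely for illustration). The point is that if the test authority supplies the seed only at test time, the vendor cannot pre-program the system against a fixed scenario list, yet the run stays reproducible for audit:

import random

SCENARIO_SPACE = {
    "actor": ["pedestrian", "cyclist", "pushed_bicycle", "deer", "debris"],
    "speed_mph": range(15, 75, 5),
    "lighting": ["day", "dusk", "night_unlit", "night_streetlit"],
    "entry_angle_deg": range(0, 181, 15),
}

def draw_scenarios(seed, n):
    # Same seed -> same scenario list, so the run is auditable afterwards,
    # but the seed is only revealed by the test authority at test time.
    rng = random.Random(seed)
    return [{k: rng.choice(list(v)) for k, v in SCENARIO_SPACE.items()}
            for _ in range(n)]

for s in draw_scenarios(seed=20180322, n=3):
    print(s)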


The smog tests are absurdly simple compared to the tests required of a target detection and tracking system.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
With the disclaimer "I Am Not A Software Guy" ... I cannot begin to imagine how complex and abstract the relationship is between the executable machine code and what the user sees. To debug that by looking at source code would be a herculean task at minimum.

It's one thing to look for a logic fault when a programmed system has a repeatable flaw and you have a clue where to look. "Oh crap, we have an OR rather than an AND between these two logic rungs." (been there!) It's quite another to have millions, possibly billions, of lines of code laid out with the task "Find all the problems with this."

How many times do you get "Windows Update" ...

Random test scenarios would have to be part of the picture, but it is the nature of statistics that they will not find every flaw.

Self-driving cars are essentially going through random test scenarios right now. This random test scenario found a bug.

And for those saying "this shouldn't be happening in public", I don't disagree, but at the same time, in controlled test scenarios, probably that pedestrian wouldn't have pushed that bicycle across the road in that manner under those lighting conditions.
 
It's essentially an AI problem; at least the tricky bits are. Famously, "AI is hard", where 'hard' is a computer science keyword that isn't very distant from 'impossible'. This conclusion goes back decades.

Historically, AI has been 'an indoor cat', assigned to finite problems within defined problem spaces. Now it's being taken outdoors, where the problem space is unbounded. I expect that it will soon be realized that "AI Outdoors is VERY hard."

There's also the issue of sensors. It's hard to appear intelligent if you're oblivious to what's going on around you. Autonomous vehicles should have microphones to hear the sirens of emergency vehicles, but nobody seems to have thought of even that obvious example. Smoke, vibrations, sudden banging noises, screams of terror from the passengers, etc.; all should be inputs. Successful AI Outdoors will need a large range of sensors.

Given the wildly optimistic naivety, these sorts of accidents are not surprising. They'll continue, and lives and billions will be lost.

I expect that it'll be a bit like Fermat's Last Theorem. Yes, Wiles' [edited] 129-page proof certainly would not fit in the margin. When Autonomous Vehicles are finally fully sorted out (10+ years from now), people will look back and realize that the problem was so much bigger than anyone expected.
 
I just saw that the LiDAR manufacturer introduced a new model late last year with 0.1 degree resolution and 300m range. It seems likely that the Uber would have had the previous generation model, at 0.4 degrees and 120m range. That means that 6 seconds from impact the woman with the bike would come into range and be a blob about 2 pixels by 2 pixels, in a picture 900 pixels wide. Braking time from 40 mph is about 2 seconds. The blob would be persistent and therefore easy to track. How on earth the software copes with blobs that move fast enough to have distinctly separate images in each frame I don't know. Obviously there's no hope of doing image recognition on a 2 pixel by 2 pixel blob (you could get fancy and use lots of frames of data to give better resolution).
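
As a rough sanity check on those figures (assuming 0.4 degree azimuth steps, a 40 mph closing speed, and a person-plus-bicycle presenting about 1.5 m of width; none of these are confirmed Uber numbers), the back-of-envelope works out like this:

import math

az_res_deg = 0.4                    # assumed older-generation azimuth step
speed_ms   = 40 * 0.44704           # 40 mph is about 17.9 m/s
range_m    = 6 * speed_ms           # ~107 m out, 6 seconds before impact
step_m     = math.radians(az_res_deg) * range_m   # ~0.75 m per beam step
target_w_m = 1.5                    # assumed width of a person pushing a bike

print(f"range 6 s out: {range_m:.0f} m")
print(f"beam step at that range: {step_m:.2f} m "
      f"-> target is ~{target_w_m / step_m:.1f} returns wide")
print(f"full revolution: {360 / az_res_deg:.0f} columns")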

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
"...consecutive frames..."

I wonder what the frame rate of Lidar is? Being mechanically scanned (laser and spinning mirrors), I assume it's slow.

 
The Velodyne 64E has a complex relationship between the upper and lower banks of lasers and the rotation rate. The elevation increment of approx. 0.4 degrees is constant; nothing else is except sample rate. Higher rate = lower resolution. It also seems to depend on which data is being accessed. I've only given it a cursory reading, but one thing that stands out is the sensitivity to reflectance; the spec indicates the limit for pavement might be only 50m based on a reflectance of 0.1.

See
One characteristic that I expected but did not find is the beam divergence.
 
The angular resolution depends on the LiDAR's rotation rate; it can scan at 5, 10, or 15 rps. Long-range resolution is then set out in Appendix B of that manual, and is substantially better than 0.4 degrees: at the lowest speed it is 0.1152 degrees, and it scales in proportion to the rps. So you can have 15 frames per second at 0.34 degrees resolution, or 5 frames per second at 0.12.
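
If the Appendix B figure really does scale linearly with rotation rate, the frame-rate-versus-resolution trade works out as below (this just restates the quoted 0.1152 degrees at 5 rps and scales it):

BASE_RPS, BASE_RES_DEG = 5, 0.1152   # figure quoted from Appendix B

for rps in (5, 10, 15):
    res = BASE_RES_DEG * rps / BASE_RPS
    print(f"{rps:>2} rps -> {rps} frames/s, {res:.4f} deg/step, "
          f"{360 / res:.0f} columns/rev")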

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
The apparent movement is tangential from any viewpoint in the vehicle except the point(s) of collision. It will have a tangential component if either party to the collision is on a non-linear path or has a non-uniform velocity, regardless of viewpoint.

 
IRstuff, I think the image in your 22 Mar 18 02:16 post is misleading; I don't see any sign of vertical scanning as such in that manual, just an upper and lower bank with different fields of view. So the LiDAR map would just be a set of distances at the two different heights at the angular resolution? Am I missing something?

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
There are 64 individual lasers in the scanner; 32 upper and 32 lower, spread to cover a range of angles.
 
3DDave beat me to it. Most commercial lidars scan in azimuth only, and use either an array of transmitters and receivers or a single fan-shaped transmitter beam and an array of receivers. An array of transmitters AND an array of receivers seems way more complicated than I would hope for, but it does help with the pulse repetition rate, which is what limits the frame rate vs. resolution trade-off.
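
To put a rough number on that trade-off (purely illustrative figures, not taken from any particular datasheet): for a given frame rate and azimuth step, the pulse rate a single scanned channel would have to sustain gets divided across however many parallel channels you have.

frame_rate_hz   = 10      # assumed frame rate
az_res_deg      = 0.2     # assumed azimuth step
elevation_lines = 64      # e.g. 64 stacked channels

columns   = 360 / az_res_deg                       # azimuth steps per frame
total_pps = columns * elevation_lines * frame_rate_hz
print(f"single scanned channel would need {total_pps:,.0f} pulses/s")
print(f"with {elevation_lines} parallel channels: "
      f"{total_pps / elevation_lines:,.0f} pulses/s each")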

"To debug that by looking at source code would be a herculean task at minimum."
The first thing to do is to start with the recorded data and processor logs. Since they are in the testing phase, there should be copious amounts of both. If the data log is empty, heads will roll.

Note that we were referring to acceptance tests, not engineering tests. The engineering tests are performed by the supplier, and should involve a progression of tests starting at the smallest software module and then progressing to ensembles of modules. Acceptance tests are not intended to exhaustively test functionality, just as IIHS or DOT tests only test specific things, which were gamed by VW and others. But one can justifiably demand that a testing authority have access to the code, witness the programming of that code, and test it with a series of random scenarios.

The HDL-64 has 0.4-deg vertical resolution and almost exactly 2-mrad horizontal resolution at a 5-Hz frame rate, so at 20-m range it would have 0.04-m horizontal resolution. That means there were something like 205 lidar returns from the pedestrian every 0.2 seconds at the instant her feet were visibly illuminated by the headlights. At that frame rate, even if she were moving at 4 mph, there would have been minimal horizontal separation between successive lidar return clusters. It should have been trivial for the object processor to determine that there was a moving object about to get hit by the car. Given that in the 1.2 seconds from that point there should have been at least 6 complete frames, and more than 1230 lidar returns from the pedestrian (actually far more, since the range was decreasing), it should have been impossible for the object processor to ignore that pedestrian.
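
For what it's worth, re-deriving those numbers with an assumed 0.7 m by 1.7 m target at 20 m gives the same order of magnitude:

import math

h_step = 2e-3 * 20.0                 # 2 mrad at 20 m: ~0.04 m between returns
v_step = math.radians(0.4) * 20.0    # 0.4 deg at 20 m: ~0.14 m between lines
w, h   = 0.7, 1.7                    # assumed size presented by person + bike

per_frame = (w / h_step) * (h / v_step)
print(f"~{per_frame:.0f} returns per 0.2 s frame, "
      f"~{6 * per_frame:.0f} over 6 frames (1.2 s)")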

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
3DDave said:
It will have a tangential component if either member to the collision is on a non-linear path or has a non-linear velocity regardless of view point.

In this case, for a vision-based system, given that we're apparently talking about a time interval of only about 2 or 3 seconds and neither party was obviously turning, both motions are going to be effectively linear.

And the scale of the "point of impact" doesn't really help much except in the final too-late fraction of a second.

Greg touched on an interesting point for vision systems. A lack of apparent relative motion for objects on a collision course. At least until it's perhaps too late.

Vision systems would perhaps benefit from widely spaced cameras, which suggests placing them on the outside mirror housings.


 
That thing about collision courses is the first rule of watchkeeping on a boat: if another ship has no bearing rate, then you are both on a collision course.
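
A toy illustration of constant bearing, decreasing range (the geometry here is invented for illustration): two straight-line tracks that meet at the origin at t = 4 s. The bearing never changes until impact, which is exactly why the target produces so little apparent motion in a camera image.

import math

# Car heading north at 20 m/s; pedestrian crossing from the left at 1.5 m/s.
# Both paths are straight and they meet at the origin at t = 4 s.
ax, ay, avx, avy = 0.0, -80.0, 0.0, 20.0
bx, by, bvx, bvy = -6.0, 0.0, 1.5, 0.0

for t in range(4):
    dx = (bx + bvx * t) - (ax + avx * t)
    dy = (by + bvy * t) - (ay + avy * t)
    bearing = math.degrees(math.atan2(dx, dy))   # bearing relative to dead ahead
    print(f"t={t}s  bearing={bearing:6.2f} deg  range={math.hypot(dx, dy):5.1f} m")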

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
Oh, and if the sensors sometimes have a range of only 50m: the braking distance from 70 mph is 75m, so a 50m-range lidar is unfit for purpose at 70 mph, even if the car can immediately recognise a problem. The more sensible alternative, swerving, may need less distance, but of course requires more situational awareness and skill. I guess this speed limitation is why the L4 testing is being done in urban areas.
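
For reference, the 75m figure is consistent with d = v^2 / (2a) at an assumed sustained deceleration of about 6.5 m/s^2 (roughly 0.66 g), with no allowance at all for detection or reaction time:

MPH   = 0.44704
DECEL = 6.5                      # assumed sustained braking, ~0.66 g

for mph in (40, 50, 60, 70):
    v = mph * MPH
    print(f"{mph} mph -> {v**2 / (2 * DECEL):5.1f} m to stop (no reaction time)")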

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 