
Self Driving Uber Fatality - Thread II


drawoh (Mechanical)
Continued from thread815-436809

Please read the discussion in Thread I prior to posting in this Thread II. Thank you.

--
JHG
 

I am trying to think through the problem of a pure LiDAR vehicle driving down a road looking for hazards. Here is the scenario.

[ol 1]
[li]The vehicle moves along the road at some maximum safe velocity, defined by the conditions that follow.[/li]
[li]If the vehicle sees a hazard, it must have the option of decelerating to a complete stop at 1/2G.
This deceleration is slow enough that a vehicle behind will be able to equal this and not rear-end the robot.[/li]
[li]There are various moving hazards. A hazard 0.6m (2ft) tall by 0.3m (1ft) across simulates a small child, who is capable of running at 2m/s. An alternate target could be bigger and faster.[/li]
[li]The LiDAR must have complete coverage, i.e., there cannot be gaps between the laser spots.[/li]
[li]The laser spots must be small enough to resolve the target. It does not have to identify Billy Smith of 123 Any Street, but it must be able to see that there is an object.[/li]
[li]The scan rate and the resolution must be enough that the object cannot move more than 50% out of the position in which it was spotted during the previous scan. This gives the AI a chance to recognize that these are the same objects.[/li]
[li]The target may not be running in a straight line. I think a straight line across at full speed is the biggest problem, but I am not sure.[/li]
[li]The time between the laser firing and the receiver capturing the return must be less than the laser pulse period. We don't want multiple pulses from the same LiDAR in flight at once.[/li]
[li]We need some AI solution for recognizing and rejecting laser spots from the vehicle next to the robot. [/li]
[li]Almost all of the problems with speed and resolution are in front of the vehicle. If the vehicle has two LiDARs, one can watch forward with a 40° FOV, and the other can scan more slowly at lower resolution over 360°. On the highway, scary things behind you are big and close. If you are backing up, you are doing it at low speed.[/li]
[/ol]

Note how this imposes limits on the speed of the vehicle, as well as on the field of view of the laser and receiver.
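
To put rough numbers on it, here is a minimal Python sketch of how items 2, 5, and 6 couple together. The 0.3 m target, the 2 m/s runner, and the 50% allowed shift between scans come from the list above; the "two spots across the target" criterion is my own assumption.

[code]
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms, decel_g=0.5):
    """Distance to brake to a full stop at constant deceleration (item 2)."""
    return speed_ms**2 / (2 * decel_g * G)

def min_scan_rate(target_width_m=0.3, target_speed_ms=2.0, max_shift_fraction=0.5):
    """Scan rate needed so the target moves no more than half its width
    between successive scans (item 6)."""
    return target_speed_ms / (max_shift_fraction * target_width_m)  # Hz

def max_spot_size(target_width_m=0.3, spots_across_target=2):
    """Largest laser spot that still resolves (not identifies) the target (items 4-5)."""
    return target_width_m / spots_across_target

for kph in (40, 60, 80):
    v = kph / 3.6
    d = stopping_distance(v)
    print(f"{kph} km/h: stop in {d:4.1f} m, scan >= {min_scan_rate():.1f} Hz, "
          f"spot <= {max_spot_size()*1000:.0f} mm out to {d:.0f} m")
[/code]

Tighten any one of those assumptions and the maximum safe speed drops accordingly.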

--
JHG
 
That case is certainly the cleanest one from a thought experiment standpoint.

When you add stationary objects, it gets more complicated. Those objects have also moved relative to the LIDAR array. Have they moved relative to the car's predicted path?

What happens when a detected object (such as the small child) moves behind a stationary object (phone booth, mailbox, whatever) and is missing from the data set for a few frames? How does the system handle motions which deviate from what it 'predicts'? That's where things get difficult.
 
I had a programming project about 30 years ago... it was a camera on a fairly flexible 'flagpole' in a parking lot. I had to 'fix' the parking lot in space and accommodate wind-blown changes to keep a steady-state background. On this background, I had to detect any changes and trigger an alarm... Same sort of an issue, where movement has to be detected on a fixed, but moving, background.

It was a fun project... I later did my first attack resistant reception for this same firm.

Dik
 
That's an interesting analogy, certainly.

What image processor were you using, or did you write one yourself? How did you handle filtering of the altered POV once the images were converted to the frequency domain (assuming that's the method used...)?
 
It's fuzzy, but it was in the early days of EGA, and a lot of the programming was done in assembly, directly against the CRTC controller, for real-time speed.

Dik
 
"When you add stationary objects, it gets more complicated. Those objects have also moved relative to the LIDAR array. Have they moved relative to the car's predicted path?"

This was solved in the helicopter OASYS by placing each detected object into a virtual world within which the vehicle moved. Each object is time-tagged for a certain level of persistence, so that if they are moving into your path, you potentially have the option to stop or to move into the space they vacated.
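
As a purely illustrative sketch (not the actual OASYS code; the class names and the two-second persistence window are placeholders), a time-tagged world model might look something like this:

[code]
import time
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """One detection placed into the vehicle's world model, time-tagged for persistence."""
    obj_id: int
    x: float   # position in a world-fixed frame, metres
    y: float
    last_seen: float = field(default_factory=time.monotonic)

class WorldModel:
    """Keeps detections alive for a while even when they drop out of the raw sensor
    data, e.g. a pedestrian briefly hidden behind a mailbox."""
    def __init__(self, persistence_s=2.0):
        self.persistence_s = persistence_s
        self.objects = {}  # obj_id -> TrackedObject

    def update(self, obj_id, x, y):
        """Refresh (or create) a track every time the sensor sees the object."""
        self.objects[obj_id] = TrackedObject(obj_id, x, y, time.monotonic())

    def prune(self):
        """Drop only the objects that have gone unseen longer than the persistence window."""
        now = time.monotonic()
        self.objects = {i: o for i, o in self.objects.items()
                        if now - o.last_seen <= self.persistence_s}
[/code]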

Note that for a typical car, a 2-ft-tall obstacle is gigantic. Anything taller than about 6 inches is already problematic. Just consider what would happen if you hit a curb at 40 mph; there's the potential of serious breakage of your own car, and the possibility (if only one wheel hits) of being diverted into adjacent or opposing lanes. There's a YouTube channel documenting a low underpass coupled with a curve in the curb, where drivers miss the curve and the cars either die of broken axles or get propelled out of their own lanes.

People tend to think that it's relatively easy to create an AI to drive because people can drive almost without thinking about it. But AIs have yet to become fully capable of even analyzing images alone; after 4 decades of image-recognition research, AIs are barely able to do a tolerable job, and there are still images that a child could figure out but an AI can't. The human brain, in addition to parallel processing and fusing all its sensory inputs, has a massive associative memory that can pull up context and history to aid us in detecting and classifying threats.

Likewise, consider how easy it is for us to walk, and how hard it is for robots to do the same.

For an AI car, there are currently only four sensor types that can do much of anything: sonar, camera, lidar, and radar. Much hope has been placed on radar, but the reality is that the wavelength of radar is so long, even at 95 GHz, that a phased array is possibly the only solution for getting sufficient resolution. If we consider the 6-inch curb at the nominal stopping distance for 40 mph, we need a 46-in-wide antenna at 95 GHz to fully resolve that curb at 76 ft. Any aperture smaller than that will result in blobs and unresolved objects. Sonar is limited to around 30 ft. Cameras could do 3D, but they require multiple cameras with massive image processing, and are limited by lighting. Lidar provides its own light, and can easily achieve the 6.6 mrad resolution needed to detect a 6-in curb at 76 ft.
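
For what it's worth, here is roughly where numbers of that order come from, assuming a Rayleigh-type diffraction limit (theta ≈ 1.22 λ/D) and roughly two resolution cells across the curb; the exact resolution criterion is an assumption:

[code]
C = 3.0e8  # speed of light, m/s

def aperture_for_resolution(freq_hz, resolution_rad):
    """Aperture for a diffraction-limited beam, Rayleigh criterion: theta ~ 1.22*lambda/D."""
    return 1.22 * (C / freq_hz) / resolution_rad

curb_height = 6 * 0.0254   # 6-in curb, in metres
stop_range = 76 * 0.3048   # nominal 40 mph stopping distance, 76 ft, in metres

subtense = curb_height / stop_range                 # ~6.6 mrad
print(f"curb subtends {subtense*1e3:.1f} mrad at 76 ft")

# Assume roughly two resolution cells across the curb to call it "resolved".
d = aperture_for_resolution(95e9, subtense / 2)
print(f"95 GHz aperture needed: {d:.2f} m ({d/0.0254:.0f} in)")
[/code]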

Every single detection is a potential threat at some point in time or along some path, so a system architect for an AI needs to consider the same things that were considered for the helicopter OASYS, namely the context, history, persistence, etc. of all detected objects. Google doesn't even do that on Google Maps, which gets easily confused in freeway interchanges and suddenly decides you are on a different part of the interchange, precisely because it throws away all history about you the instant it gets a new GPS position. I suspect that most car AIs have similar behavior and a similar lack of awareness of where they were and what was around them a second ago.

That comes from poor systems engineering, and "backup sensing" is simply applying a poor patch on a poor design. If you need to do that, the design should be ripped up and restarted.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
IRstuff,

I am playing with a spreadsheet here, and I need to check my numbers and my logic. If my vehicle is doing 80 kph (50 mph), my 1/2G stop takes 50 m (170 ft). My laser can run at 1.5 MHz, I can scan at 10 Hz, and my spot size at 50 m is 270 mm (11 in). A six-inch object will cause some sort of LiDAR return, but it will be a weak one. I think a six-inch curb across the road will have a distinct signature.
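
Here is a quick sanity check of those numbers in script form; the only additions beyond my figures are the comparison against a six-inch object and the pulses-per-scan count:

[code]
G = 9.81  # m/s^2

def stopping_distance(speed_kph, decel_g=0.5):
    """Brake-to-stop distance at constant deceleration."""
    v = speed_kph / 3.6  # m/s
    return v**2 / (2 * decel_g * G)

print(f"stop from 80 km/h at 0.5 g: {stopping_distance(80):.0f} m")  # ~50 m

# Divergence implied by a 270 mm spot at 50 m, and how a 6-in object compares to it.
spot_m, range_m = 0.270, 50.0
curb_m = 6 * 0.0254
print(f"implied divergence: {spot_m / range_m * 1e3:.1f} mrad")
print(f"a 6-in object fills {curb_m / spot_m:.0%} of one spot at {range_m:.0f} m")

# Pulses available per scan at a 1.5 MHz pulse rate and a 10 Hz scan rate.
print(f"pulses per scan: {1.5e6 / 10:,.0f}")
[/code]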

--
JHG
 
I think it is important that we be clear to differentiate and be mindful of the different levels of autonomy while we continue this discussion.


Because no two automated-driving technologies are exactly alike, SAE International’s standard J3016 defines six levels of automation for automakers, suppliers, and policymakers to use to classify a system’s sophistication. The pivotal change occurs between Levels 2 and 3, when responsibility for monitoring the driving environment shifts from the driver to the system.

Level 0 - No Automation
System capability: None. • Driver involvement: The human at the wheel steers, brakes, accelerates, and negotiates traffic. • Examples: A 1967 Porsche 911, a 2018 Kia Rio.

Level 1 - Driver Assistance
System capability: Under certain conditions, the car controls either the steering or the vehicle speed, but not both simultaneously. • Driver involvement: The driver performs all other aspects of driving and has full responsibility for monitoring the road and taking over if the assistance system fails to act appropriately. • Example: Adaptive cruise control.

Level 2 - Partial Automation
System capability: The car can steer, accelerate, and brake in certain circumstances. • Driver involvement: Tactical maneuvers such as responding to traffic signals or changing lanes largely fall to the driver, as does scanning for hazards. The driver may have to keep a hand on the wheel as a proxy for paying attention. • Examples: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Tesla Autopilot, Volvo Pilot Assist.

Level 3 - Conditional Automation
System capability: In the right conditions, the car can manage most aspects of driving, including monitoring the environment. The system prompts the driver to intervene when it encounters a scenario it can’t navigate. • Driver involvement: The driver must be available to take over at any time. • Example: Audi Traffic Jam Pilot.

Level 4 - High Automation
System capability: The car can operate without human input or oversight but only under select conditions defined by factors such as road type or geographic area. • Driver involvement: In a shared car restricted to a defined area, there may not be any. But in a privately owned Level 4 car, the driver might manage all driving duties on surface streets then become a passenger as the car enters a highway. • Example: Google’s now-defunct Firefly pod-car prototype, which had neither pedals nor a steering wheel and was restricted to a top speed of 25 mph.

Level 5 - Full Automation
System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate. • Driver involvement: Entering a destination. • Example: None yet, but Waymo—formerly Google's driverless-car project—is now using a fleet of 600 Chrysler Pacifica hybrids to develop its Level 5 tech for production.

The question was asked of me in the previous thread, "If you're comfortable with these systems having a nonzero rate of failure... exactly what type of failures, then, are not newsworthy?" And the answer to that question depends on what level of automation the vehicle is capable of, as well as on what the automaker is MARKETING/ADVERTISING/SUGGESTING it is capable of.

In both of Tesla's noteworthy accidents, it's not so much a failure of the system (it's only Level 2) as it is the operation of the system. And Tesla bears direct responsibility for that, in my opinion. Note the names of the other Level 2 systems: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Volvo Pilot Assist. See how ASSIST is prominently featured? The exception is Super Cruise, which is noteworthy because it won't change lanes (the driver still has to do that), because it is geofenced (it will only operate on limited-access, HD-mapped roads [no intersections allowed]), and because it has robust, ACTIVE driver-attention monitoring. Any two of those three would have prevented the Tesla from driving into the side of the semi or into the median wall. So there's the unacceptable engineering failure. There's what's newsworthy about them. And that's even more the case when Tesla is ostensibly encouraging their systems to be abused in this fashion by touting them as self-driving "autopilot" cars. Musk can launch a car into space, but he can't take simple steps to make his systems safer? C'mon, man. Of course, it would be a lot tougher to blow marketing smoke up everyone's ass if he did that.

In this Uber incident, it is not clear what level of autonomy they are trying to accomplish. Though since having a driver in the car is presumably an expense they would like to eliminate, it can be assumed they are trying for Level 5. Or maybe Level 4. I don't think it is unreasonable to expect a Level 4 or 5 vehicle to be able to avoid the things that most attentive, actively engaged drivers would. So this incident is noteworthy because the SUV failed miserably at what should be something a fully automated vehicle could manage. This was not something appearing out of nowhere from behind a blind corner, or moving erratically, or otherwise obscured. This is further a newsworthy failure because these vehicles are being operated in the public sphere with no (effective) safeguards in place. As long as these vehicles are being "tested" on public roads, they should be equipped with the same driver-attention monitoring systems as Cadillac's Level 2 system. And since they aren't, I'd have no problem with Uber and/or the convicted-armed-robbery-felon backup driver being charged with negligent homicide. Once they are functional, and can deal with bikes crossing the road and kids running behind a mailbox for 0.25 seconds before emerging in front of them, then set them loose.
 
The Google car that was hit by a bus was stopped and trying to avoid a 3-4 inch tall sand-filled sock placed to divert silt from a storm drain.
 
I don't think I said it couldn't. Assuming that the lidar is probably the most useful of the sensors overall, it needs to detect potential obstacles out past about 365 ft, giving the system about 0.5 seconds of reaction time at a maximum speed of 80 mph.
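
For reference, one way to land in that neighborhood is reaction distance plus braking distance; the 0.7 g hard stop below is an assumed figure, not a measured one:

[code]
G = 9.81
MPH, FT = 0.44704, 0.3048  # m/s per mph, m per ft

def detection_range(speed_mph, reaction_s=0.5, decel_g=0.7):
    """Reaction distance plus braking distance; the 0.7 g hard stop is an assumption."""
    v = speed_mph * MPH
    return v * reaction_s + v**2 / (2 * decel_g * G)

d = detection_range(80)
print(f"80 mph: need to detect by {d:.0f} m ({d / FT:.0f} ft)")  # roughly 365 ft
[/code]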

But not all obstacles are across the road, per se. The YouTube channel's curb is actually on the side of the road, and people routinely fail to see it until their wheels hit the protrusion. There may be dips or rises in the road that obscure the potential obstacle until you're well past the safe stopping distance, or it might be much narrower, such as the wheel of a small car, which might be about 6 inches tall and 14 inches across.

And if you saw it once, but not later, did it move or did it get obscured by something else in the mean time?

I think there's been much confusion about "classification," because in my mind, anything taller than 4 inches is already a potential hazard. I wouldn't architect a system that would manage to forget or ignore such detections.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Spartan5 said:
I think it is important that we be clear to differentiate and be mindful of the different levels of autonomy while we continue this discussion.

I agree it's good to be mindful of exactly what we're talking about.

Spartan5 said:
I don't think it is unreasonable to expect a Level 4 or 5 vehicle to be able to avoid the things that most attentive, actively engaged drivers would

Not at all. I'm convinced, in fact, that for the general public to be vocally accepting of Class 5 vehicles on a large scale, the rate of injury and fatal accidents will have to be MUCH lower than the human-piloted rates at the time Class 5 operation actually becomes a possibility. From the standpoint of convincing the average midwestern dirt farmer that the shiny new robot car is safe, saying "It's just as safe as the median American driving the median car on the median freeway!!!!" won't be anywhere near enough.

That is a PR problem, not an engineering one. I digress.

As far as Uber's ultimate goal for the level of automation attained by their equipment, I don't doubt for a single second that their ultimate goal is Class 5 operation. As you've already identified, this requires on-road testing of vehicles fully equipped for Class 5 operation but operating at a tier one or two (or more) steps down while gathering data. I agree that letting these systems operate without any real safeguarding is a recipe for disaster.

I wouldn't be terribly surprised if Class 2 or 3 or 4 systems wind up with government-mandated 'attentiveness monitoring' systems. Tesla's method of just having to have your hand on the wheel isn't very robust.
 
"I wouldn't architect a system that would manage to forget or ignore such detections."

You can't track a "something" until you've determined it's "something" that needs to be tracked. You can call it classification or something else, but determining there is "something" out there that is of concern is the first step.

To make the load on the system easier, the designers certainly are deciding to ignore certain data once it's been deemed unimportant to the task of getting the car safely down the road. That way, they have enough processing power to process the important data.
 
Spartan5,

I don't see those six levels mattering in an emergency. Car accidents happen within a couple of seconds. There is no time for a safety driver to put down a book and scan out the window and over the instruments to figure out what is happening. In my scenario above, you are not much more than two seconds away from killing a small child. Either the car is a full-time robot, or the human is full-time in charge and responsible.

--
JHG
 
I would expect there is a very different public acceptance level between a person getting hit after they walked across 40' of open pavement vs a person getting hit after they jumped right into the bumper of a car from behind a big solid object. After the public sees a few accidents that appear to be very simple to avoid, they don't have a very warm fuzzy feeling about the current systems.
 
"To make the load on the system easier, the designers certainly are deciding to ignore certain data once it's been deemed unimportant to the task of getting the car safely down the road. "

That's not been proven or even likely in the collision in question.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
IRstuff said:
That's not been proven or even likely in the collision in question.

This is absolutely what happens. If the system tried to plot trajectories for every object detected in range to the same level of fidelity, there wouldn't be nearly enough processing power to predict everything.

Objects which are close, large, and moving fast are at the top of the list with regard to fidelity of path estimation 'requested' from the processing stage.
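
As a purely hypothetical illustration of that kind of ranking (the scoring function and the numbers are invented, not anything Uber or Tesla actually uses):

[code]
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    range_m: float       # distance from the vehicle
    width_m: float       # apparent size
    closing_ms: float    # closing speed toward the predicted path (positive = approaching)

def threat_score(d: Detection) -> float:
    """Crude ranking: close, large, fast-approaching objects score highest.
    The weighting is arbitrary and purely illustrative."""
    time_to_reach = d.range_m / max(d.closing_ms, 0.1)
    return d.width_m / time_to_reach

detections = [
    Detection("pedestrian crossing", range_m=60, width_m=0.3, closing_ms=2.0),
    Detection("parked car", range_m=120, width_m=2.0, closing_ms=0.0),
]
for det in sorted(detections, key=threat_score, reverse=True):
    print(f"{det.label}: score {threat_score(det):.4f}")
[/code]

A real system would obviously use full trajectory prediction rather than a single score, which is exactly where the processing budget goes.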

I am convinced that both the Uber and Tesla failures under primary discussion here were the result of failures of the software to correctly prioritize predictions, not failures of the hardware to detect objects.

This is why there was the deep dive into the meaning of the word 'classification' in the last thread, which I'd rather not revisit here.
 
"Either the car is a full-time robot, or the human is full-time in charge and responsible."

That is true not only from the perspective of safe operation, but from a liability standpoint as well. Either the driver is responsible for being continuously aware of potential dangers and reacting to them at a moment's notice, or the car is. The cutoff for me is the point where the car is steering itself, because the driver is no longer engaged in operating the vehicle. Emergency braking, lane departure warnings, etc. are all great driver assistance features, but once the car is doing the driving, it has to be able to do it all as well as a human driver who is capable and attentive. The technology is nowhere close to that now, and I have my doubts about it being there anytime in the near future. I know I may sound like a nut when I say this, but as I said before, an AI that sophisticated poses a greater danger to humanity than a few car wrecks.
 
jgKRI said:
If the system tried to plot trajectories for every object detected in range to the same level of fidelity, there wouldn't be nearly enough processing power to predict everything.

If a system can handle a very busy street, then it should have enough processing power that it can pay attention to a single pedestrian on an otherwise deserted street.

You may recall that even the Apollo 11 LM computer in 1969 did an excellent job in prioritizing.

It'd be a design flaw if it was overzealous in ignoring the one and only moving object about to intersect its path, given that it should not have had much else to do.

This wasn't Shibuya Crossing in Tokyo.



 
drawoh said:
Either the car is a full-time robot, or the human is full-time in charge and responsible.

I would disagree... I think you can have a balance of the 'best of both worlds'. With driver assist I can see a driver being lulled into a false sense of security; under all conditions the driver must remain attentive and in control.

Dik
 