2 dead in Tesla accident: "No one was driving the car"


MartinLe (Civil/Environmental)

“No one was driving” the fully-electric 2019 Tesla when the accident happened. There was a person in the front passenger seat and another in the rear passenger seat.

The vehicle was traveling at a high speed when it failed to negotiate a cul-de-sac turn, ran off the road, and hit a tree.

The brother-in-law of one of the victims said relatives watched the car burn for four hours as authorities tried to tamp out the flames.

Authorities said they used 32,000 gallons of water to extinguish the flames because the vehicle’s batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery.
 

RedSnake said:
Yes it should, but in this case it wasn't.
Why?
Probably because it was a cloudy day. The car could only see a white, static, floating thing (the tail) with no connection to the ground, probably against a white background. In the photo it isn't possible to tell, but it wouldn't surprise me if there were white hangars behind the plane in the direction the car was going.
I would presume that it has depth perception through more than one image, and that this depth perception is further enhanced by movement. E.g., even if the plane is close in colour to the background, it should be easy to tell that it is a 3D object some distance L away. If the Tesla can't accomplish this, then there are MUCH bigger issues, and it would be unlikely to be able to accomplish the level 2 autopilot it does mostly achieve.
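
As a rough sketch of the principle (nothing to do with Tesla's actual stack; the file names and calibration numbers below are invented), two overlapping cameras give depth almost for free with a stock block matcher:

    # Stereo depth - a minimal sketch, assuming a rectified image pair
    # and made-up calibration values.
    import cv2

    focal_px = 1200.0   # focal length in pixels (assumed)
    baseline_m = 0.3    # spacing between the two cameras (assumed)

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching: how far did each image patch shift between the two views?
    matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disparity = matcher.compute(left, right).astype(float) / 16.0  # fixed point -> px

    # depth = f * B / disparity; even a low-contrast white tail gives *some* shift.
    valid = disparity > 0
    depth_m = focal_px * baseline_m / disparity[valid]
    print(f"nearest textured surface: {depth_m.min():.1f} m")

Movement helps in the same way: two frames from one moving camera form another baseline.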

RedSnake said:
A human would see the whole plane and get the picture, but looking at the same photo the car saw, it's not certain a human would have understood what was in front.
The car is not looking at a photo. It is looking at thousands of images from different cameras pointing in different directions. It also should have a whole history of data as it approached the object (the plane).

RedSnake said:
I guess if the car is supposed to recognize "things", aeroplanes were forgotten, since they normally don't occupy motor roads, unless it is a highway and they need to make an emergency landing, that is.
I think they need to include them in the picture database...
You see, that was my point earlier. Recognising and identifying things from a picture should be low priority compared to mapping out the 3D space. If Tesla's software isn't mapping things out in 3D space, then their approach is far worse than I imagined, but that might explain things. If a UFO hovered in the middle of the road, a human would recognise it as a 3D object even if they had no idea what it was.

There is a very good reason why most self-driving car companies are also using LIDAR or other methods of scanning the space around the car: mapping out 3D space becomes much easier.
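
To illustrate (a toy sketch, with every threshold invented): once you have raw 3D points, "is anything in my path?" is pure geometry, and no classifier ever needs to be right:

    import numpy as np

    def obstacle_in_corridor(points, half_width=1.2, max_height=2.5, lookahead=50.0):
        """points: (N, 3) lidar returns in the vehicle frame (x fwd, y left, z up)."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        in_path = (x > 0.0) & (x < lookahead) & (np.abs(y) < half_width)
        off_ground = (z > 0.2) & (z < max_height)  # ignore road-surface returns
        return bool(np.any(in_path & off_ground))  # true for a plane's tail, too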

Close to home for me. A poor lady was hit last month by a Tesla on autopilot.
 
Mapping out 3D space becomes much easier.

The Uber that killed the pedestrian had lidar; it was again a failure to do systems engineering. Obstacle avoidance, regardless of the specific object, should have been paramount. In both Uber's and Tesla's cases, there was no real track history, only a sequence of unrelated detections of possibly different things.
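
Even a minimal track history is cheap to express. A toy nearest-neighbour sketch (2D positions, invented gate distance, nothing like a production tracker) keeps a trajectory alive even while the classifier changes its mind every scan:

    import numpy as np

    class Track:
        """A persistent object hypothesis: identity survives reclassification."""
        _next_id = 0
        def __init__(self, pos):
            self.tid = Track._next_id
            Track._next_id += 1
            self.history = [np.asarray(pos, dtype=float)]

    def update_tracks(tracks, detections, gate_m=2.0):
        """Associate each detection with the nearest existing track, else start one."""
        for det in detections:
            det = np.asarray(det, dtype=float)
            dists = [np.linalg.norm(t.history[-1] - det) for t in tracks]
            if dists and min(dists) < gate_m:
                tracks[int(np.argmin(dists))].history.append(det)
            else:
                tracks.append(Track(det))  # genuinely new object
        return tracks

Hardening this against crossing targets and missed detections is where the real systems engineering lives, but even this much preserves "same thing, getting closer" across scans.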

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
"Is it possible to know the total system experience? All sensory inputs, all decisions /branches, outputd / commands exactly as occurred in the incident? Can we "kmow" what the car "knew"? Pardon the imprecise terminology"

Tesla's perhaps best justification for beta testing their murder-car software on public roads is that it allows them to collect data on edge cases, so one would hope that, yes, the vehicle state and history is recorded. If it isn't, then, not for the first time, they are stretching the truth.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
Either these upstart companies are not aware of systems engineering, or, if they are, they have no clue what it really is.

"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz
 
Either these upstart companies are not aware of systems engineering, or, if they are, they have no clue what it really is.

Unfortunately, it's all too common. These guys seem to think that finding the objects is the be-all and end-all, and that object history and behavior are irrelevant. I had someone like that on a previous program; they thought that finding the target in every image was all there was to target tracking, and the end result was utter chaos.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Which would be quite odd, because "seeing" a static object and then determining its distance should be about the easiest part of the whole AI. In fact, AI should barely come into it. 3D positioning of objects and the vehicle should be the easy part of self-driving cars; that aspect isn't traditionally referred to as AI.

Maybe you should apply to work for Tesla? They've been at it for many years and still can't get it right.

Both Tesla and Uber seem to be working on an object-identification-first principle, not a "let's avoid any object that appears in the path of the car" principle. Knowing how to react seems much simpler if you start with a list of objects tied to the type of reaction needed for each object. However, it breaks down when you fail to detect the object correctly.

The Uber accident report clearly showed that the AI was determining what the "object" was on each scan and then deciding what to do about it on each scan. There seemed to be a lack of both history tracking of objects as the classification changed and any kind of "avoid any object that will be in the car's path" logic. This sounds very simple to do with the lidar system, by just tracking the fact that there was an object and that it was on a collision course, but they obviously struggled with it.

It must be difficult to use multiple cameras and process the images into a 3-D map of the environment, or else Tesla would be doing it. They'd have a constantly updating 3-D map that could be used to keep the car from crashing into anything. Instead, they seem to first detect what the object is and then decide what to do about it. In the case of the truck accident, the processing determined the truck trailer was an overhead road sign, and overhead road signs were programmed to be of no concern. The historic tracking of the rest of the truck is something a human does naturally, but it has to be explicitly programmed into a computer. I'm sure they have tracking for some objects in the logic, but obviously only for the cases they have thought about; for example, tracking the expected path of a pedestrian who leaves the sidewalk in the direction of the road, to ensure the car doesn't hit them. Once again, though, it can fail when the object detection fails.

Now, if you think about it, you can understand why they aren't acting on every detection of an "object" possibly in the car's path. A false detection of an object of concern leads to unnecessary avoidance, and they are trying to avoid turning or braking unless it is actually necessary. Falsely detecting an overhead sign as a truck in the car's path leads to the car coming to a sudden stop on a high-speed road, where it is highly susceptible to being rear-ended.
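
There is a middle ground, though. A hedged sketch (scan rate and thresholds invented): require the object to persist across a few scans before braking, which suppresses single-scan false alarms without the label ever needing to be right:

    SCAN_RATE_HZ = 10.0  # assumed sensor scan rate

    def should_brake(ranges_m, confirm_scans=3, ttc_threshold_s=2.5):
        """ranges_m: successive distances to one tracked object, newest last."""
        if len(ranges_m) < confirm_scans:
            return False                 # debounce: one noisy scan can't brake the car
        closing_per_scan = ranges_m[-2] - ranges_m[-1]
        if closing_per_scan <= 0:
            return False                 # not closing on it
        ttc_s = ranges_m[-1] / (closing_per_scan * SCAN_RATE_HZ)
        return ttc_s < ttc_threshold_s   # brake on geometry, not on the label

    print(should_brake([30.0, 27.0, 24.0]))  # 3 m/scan at 10 Hz -> TTC 0.8 s -> True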


Is it possible to know the total system experience? All sensory inputs, all decisions/branches, outputs/commands exactly as occurred in the incident? Can we "know" what the car "knew"? Pardon the imprecise terminology

I think they can determine what the processing decisions were, but they can't determine why the car got the detection of the plane wrong. My understanding is that with these image-learning and classification systems, you can't find out what features the processing keyed on as important when it learned, during model building, that a group of images equals a certain type of object. You also can't find out what parts of the image it used to determine what the object was when detecting.

You can't log the image-processing logic; it's working too fast. I've been playing with a $40 Google Coral processor for security cameras, and it processes images about 10x faster than a new CPU can. These cars are using processing units with hundreds of parallel processors, each much more capable than the Coral. The amount of data crunching going on is far beyond any logging capability. Then you have the issue that when you record the camera images and feed them into an image processor again later, you just might get a different result.
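
A back-of-envelope check on that (camera count, resolution, and frame rate are my assumptions, not Tesla's published numbers):

    # Rough data rate for logging the raw camera input alone.
    cameras = 8                                 # assumed
    width, height, bytes_per_px = 1280, 960, 3  # assumed
    fps = 36                                    # assumed

    rate = cameras * width * height * bytes_per_px * fps
    print(f"{rate / 1e9:.1f} GB/s of raw pixels")  # ~1.1 GB/s

And that is before any of the intermediate network activations, which would multiply it many times over.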
 
It must be difficult to use multiple cameras and process the images into a 3-D map of the environment, or else Tesla would be doing it. They'd have a constantly updating 3-D map that could be used to keep the car from crashing into anything.

I've ridden in a Tesla, and it does all that, but you can see from the display that, like the Uber, it doesn't create a track history, and it doesn't make use of the contextual information about what the detected cars are doing. That is why its lame-brained lane following failed so miserably on the 101 freeway in Mountain View: it blindly followed an errant lane marking into a gore point whose crash attenuator hadn't been reset, resulting in the death of the driver.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Obviously it's not doing that. I've seen the display; it's just mapping all the objects detected, by type. It wouldn't have crashed into that plane if it had a proper 3-D spatial map of its surroundings, unless the programmers were complete idiots and did the 3-D map part but didn't bother with the "avoid objects in the 3-D map" part.

The line was good; the car was just on the wrong side of it.
 
The problem, as you mentioned, is that most of these systems determine collision probability based on the classification of the target. In the case of the plane, it likely did exactly the same thing it did in Florida, which was to classify the tail of the plane as an overhead sign: nothing to see there, and nothing to worry about.

This is also why it has gobs of trouble on city streets: it continually misclassifies things and tries to drive through objects it has decided aren't really there.
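
Reduced to its failure mode (labels and reactions purely illustrative): when the reaction is keyed to the classification, one wrong label silently disables avoidance:

    # Classification-first reaction table: the safety case rides on the label.
    REACTIONS = {
        "vehicle": "brake",
        "pedestrian": "brake",
        "overhead_sign": "ignore",  # signs never block the road... supposedly
    }

    def react(label):
        return REACTIONS.get(label, "ignore")  # unknown objects fall through, too

    # Misclassify a truck trailer (or a plane's tail) as an overhead sign:
    print(react("overhead_sign"))  # -> "ignore", and the car drives on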

When we worked on military vehicles, we did trafficability the other way around: laser-range the objects and determine the clearance, then move only if there was sufficient clearance, unless the driver overrode the warnings. Tesla's lack of a rangefinder, and its removal of radar, is a step in the wrong direction.
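
That inversion is almost trivially small to state (a sketch with invented numbers, not the actual vehicle logic):

    def clear_to_proceed(laser_ranges_m, stop_distance_m=15.0, override=False):
        """Trafficability check: any return inside the stopping distance halts
        the vehicle unless the driver explicitly overrides."""
        return override or min(laser_ranges_m) > stop_distance_m

    print(clear_to_proceed([42.0, 14.0, 60.0]))                 # False: hold
    print(clear_to_proceed([42.0, 14.0, 60.0], override=True))  # True: driver's call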

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
LionelHutz said:
Maybe you should apply to work for Tesla? They've been at it for many years and still can't get it right.
Because it seems they have been going down completely the wrong track. There are other self-driving cars out there that work from the build-a-3D-model approach and are doing it quite successfully.

LionelHutz said:
Both Tesla and Uber seem to be working on an object-identification-first principle, not a "let's avoid any object that appears in the path of the car" principle.
WHICH IS THE ELEPHANT-IN-THE-ROOM PROBLEM. Speaking of which, I wonder how well Tesla identifies an elephant...

LionelHutz said:
Knowing how to react seems much simpler if you start with a list of objects tied to the type of reaction needed for each object. However, it breaks down when you fail to detect the object correctly.
Exactly. So this seems entirely the wrong approach.


Regarding its 3D mapping: it seems to be failing badly, as several cameras of known position should be able to pinpoint something in 3D space. If it is confusing the side of a truck with a road sign above the road, then something is going badly wrong, and it sounds like it is identifying before it is positioning. You don't need AI to build a 3D environment; like I keep saying, that is the easy part. We use visual mapping combined with LIDAR for pinpoint mapping of the environment for construction purposes. It is relatively cheap these days and more than accurate enough for navigation. The hard part is everything that comes after this.
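
For what it's worth, that pinpointing is classical least squares, not AI. A two-view DLT triangulation sketch (projection matrices assumed known from calibration):

    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation from two calibrated views.
        P1, P2: 3x4 projection matrices; uv1, uv2: pixel coordinates (u, v).
        Each observation contributes two rows of a homogeneous system A X = 0."""
        rows = []
        for P, (u, v) in ((P1, uv1), (P2, uv2)):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        X = Vt[-1]             # null-space solution
        return X[:3] / X[3]    # 3D point in world coordinates

With more cameras you just stack more rows, and the solution only gets better conditioned.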

But it seems that Tesla is trying to take shortcuts on the fundamentals. There are many other companies in this field that are well ahead of Tesla; they just aren't rolling out their systems under misleading names like Autopilot that lead consumers astray.
 
Well, I feel better now :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand
 
LionelHutz said:
Knowing how to react seems much simpler if you start with a list of objects tied to the type of reaction needed for each object. However, it breaks down when you fail to detect the object correctly.

There's also the issue of what happens when it encounters an object that's not in the list, which is bound to happen.

Rod Smith, P.E., The artist formerly known as HotRod10
 
If it is confusing the side of a truck with a road sign above the road, then something is going badly wrong, and it sounds like it is identifying before it is positioning.

Their 3D map is ephemeral, i.e., it has no memory, and it does no processing on the 3D map itself. The fatality on the 101 freeway exemplifies that: a half-asleep driver will follow the flow of traffic and would have avoided hitting the gore point. The Tesla did not, because it basically does not integrate information about the route it's traveling with the positions and velocities of other cars. It's only lane following, even though it has all this other information that could inform its lane following.
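
One hedged sketch of what "integrating the flow of traffic" could look like (weights, gate, and names all invented): sanity-check the fitted lane marking against where the tracked lead vehicles are actually heading:

    import numpy as np

    def steering_heading(lane_heading_rad, lead_headings_rad, trust_lane=0.7):
        """Blend the lane-marking heading with the median heading of tracked
        lead cars; a lone errant lane line then can't steer into a gore point."""
        if not lead_headings_rad:
            return lane_heading_rad
        traffic = float(np.median(lead_headings_rad))
        if abs(lane_heading_rad - traffic) > np.radians(10.0):
            trust_lane = 0.1  # paint disagrees with the traffic: distrust the paint
        return trust_lane * lane_heading_rad + (1.0 - trust_lane) * traffic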

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
One p*ssed Cirrus owner, though. :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand
 
Something peculiar about that story, or at least the car manual's description of the Smart Summon feature:

the owner was using Tesla’s Smart Summon, a feature that’s designed to bring your car to you. It’ll maneuver the car out of parking spots and around corners... The manual also states that the operator should keep a clear line of sight to the crossover when using the Smart Summon feature so one can prevent the car from crashing into things. The function only works when the user’s smartphone is within approximately 6 meters (19 feet) of the vehicle.

There doesn't seem to be anyone within 19' of the vehicle.

If you do have to be within 19' of the vehicle, the feature would seem to be of very limited usefulness. I guess if someone parked next to it, too close for you to get in, you could have the car pull out of the parking spot so you could get in, but other than that it seems fairly useless.

Based on the video, it would seem the manual is fibbing a bit, or maybe more than a bit.

Rod Smith, P.E., The artist formerly known as HotRod10
 
The 6 m limit only makes sense if the phone needs a direct connection to the car to activate the feature, and even then, Bluetooth and WiFi could work at well over 6 m. If it goes over cell service, it could be activated from anywhere.
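
For reference, phone-to-car distance over Bluetooth is typically estimated from received signal strength with a log-distance path-loss model, which is exactly why a hard 6 m cutoff is fuzzy in practice (the constants below are generic guesses, not Tesla's):

    def rssi_to_distance_m(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
        """Log-distance path-loss model; tx_power_dbm is the expected RSSI at 1 m.
        Multipath and body shadowing easily make this wrong by a factor of 2+."""
        return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

    print(f"{rssi_to_distance_m(-75.0):.1f} m")  # ~6.3 m with these constants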
 
So, the 6 m proximity is just a bogus number put in the manual to make people feel better, and Teslas could be driving all over the place without the owner being anywhere around? That's comforting. What if someone hacks the owner's phone? Who is held responsible for the damage and injuries?

Rod Smith, P.E., The artist formerly known as HotRod10
 
The manual also states that the operator should keep a clear line of sight to the crossover when using the Smart Summon feature so one can prevent the car from crashing into things.

That's even stoopider than thinking the plane is an overhead sign: a car with collision sensors doesn't even bother using them.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 