
2 dead in Tesla accident: "No one was driving the car"


MartinLe (Civil/Environmental)

“No one was driving” the fully-electric 2019 Tesla when the accident happened. There was one person in the front passenger seat and another in the rear seat.

The vehicle was traveling at high speed when it failed to negotiate a cul-de-sac turn, ran off the road and hit a tree.

The brother-in-law of one of the victims said relatives watched the car burn for four hours as authorities tried to put out the flames.

Authorities said they used 32,000 gallons of water to extinguish the flames because the vehicle’s batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery.
 

Looking on the bright side then... :)

The problem with sloppy work is that the supply FAR EXCEEDS the demand
 
"neither we nor Tesla really know what the NN (neural network) actually does in making its decisions"
That is a generic problem with current NNs. You can't interrogate the ones currently in use to find out why they made a particular decision. I know of people who are working on that issue.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
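One of the techniques those researchers use is occlusion sensitivity: hide part of the input and watch how much the network's confidence drops. A minimal, generic sketch is below; the predict function is a stand-in for whatever scoring call a given model exposes, not any particular vendor's API.

```python
# Generic occlusion-sensitivity probe: slide a patch over the image, blank it
# out, and record how much the model's confidence drops. Large drops mark the
# regions the network is actually relying on.
import numpy as np

def occlusion_map(predict, image, patch=16, stride=8):
    """predict(image) -> scalar confidence for the class being checked."""
    h, w = image.shape[:2]
    base = predict(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            masked = image.copy()
            y, x = i * stride, j * stride
            masked[y:y + patch, x:x + patch] = image.mean()
            heat[i, j] = base - predict(masked)  # big drop => important region
    return heat
```

If the hottest regions of the resulting map sit on the background rather than on the object, the network has latched onto the wrong thing, which is the same failure mode that comes up later in this thread.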
 
"neither we nor Tesla really know what the NN (neural network) actually does in making its decisions"

I think that is the whole POINT.

If you're going to micro-manage them, you might as well just buy a whole pile of relays.

That said, hitting a stationary object is a surefire example of poor job performance--perhaps a few days off without pay will cure the problem.


spsalso


 
Maybe Tesla needs 3 different neural networks trained on mutually exclusive datasets that will vote on what action to take. So much for KISS.
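For what the voting part might look like, a toy 2-out-of-3 majority scheme is sketched below; the model stand-ins and the fallback action are made up for illustration, and nothing here reflects how Tesla actually arbitrates between networks.

```python
# Toy 2-out-of-3 voter: three independently trained models each propose an
# action; if no two agree, fall back to a conservative action.
from collections import Counter

def majority_vote(models, observation, fallback="controlled_stop"):
    votes = [m(observation) for m in models]
    action, count = Counter(votes).most_common(1)[0]
    return action if count >= 2 else fallback

# Example: two models say brake, one says continue -> "brake" wins.
models = [lambda obs: "brake", lambda obs: "brake", lambda obs: "continue"]
print(majority_vote(models, observation=None))
```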
 
redsnake said:
We have had at least two incidents here with buses, same problem.
Here they are talking about ways to move the burning vehicles safely to a place where they can just let them burn out in a controlled environment.

Maybe a ceramic lined dump truck as part of the response by the fire department.
 
The only suggestion right now is that after the open-flame fire is put out, there should be temperature monitoring if possible, and then...

MSB (the Swedish Civil Contingencies Agency) said:
When the fire is extinguished, heat generation often continues for a long time, and prolonged cooling with large amounts of water may be required. Consider the possibility of immersing vehicles in water-filled containers or similar.

Of course there are a lot of other recommendations regarding toxic gases and other safety measures.

/A

“Logic will get you from A to Z; imagination will get you everywhere.”
Albert Einstein
 
moon161 said:
Maybe a ceramic lined dump truck as part of the response by the fire department.

I have seen other pictures, but you get the idea... basically a dumpster filled with water, and the car is dunked for a long period of time.

Dan - Owner
 
Dunking it in a dumpster dunkster seems like a great idea...

Rather than think of climate change and the corona virus as science, think of them as the wrath of God. Feel any better?

-Dik
 
If you're going to micro-manage them, you might as well just buy a whole pile of relays.

That's not the issue; it's a question of design verification/validation. Did the NN learn exactly what is needed, or did it learn something that LOOKS LIKE what is needed? As mentioned before, there was a case where people trained a NN to automatically recognize targets, and it worked great during testing, but failed in actual use because the NN "learned" to recognize the test track and not the targets on the test track, so when the test track wasn't present it couldn't find anything. Because the trainers had no observability into the NN, they wasted a bunch of time and money on a poorly trained NN.
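That failure mode is easy to reproduce on toy data: give a classifier a weak genuine feature plus a spurious feature that tracks the label only in the training set, then watch the accuracy collapse when the spurious cue disappears. Everything below is invented purely for illustration.

```python
# Shortcut learning in miniature: training accuracy looks great because the
# model leans on the spurious "background" feature; accuracy drops toward
# chance once that correlation is gone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

y_train = rng.integers(0, 2, n)
genuine = y_train + rng.normal(0, 2.0, n)      # weak real signal
spurious = y_train + rng.normal(0, 0.1, n)     # clean but meaningless correlate
X_train = np.column_stack([genuine, spurious])
clf = LogisticRegression().fit(X_train, y_train)

y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 2.0, n),
                          rng.normal(0, 0.1, n)])   # the "test track" is gone
print("train accuracy:", clf.score(X_train, y_train))        # essentially perfect
print("shifted-test accuracy:", clf.score(X_test, y_test))   # around chance
```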

The Arizona pedestrian case is an offshoot of that; the logs showed that the system recognized a pedestrian, a bicycle, etc., but failed to recognize that there was a moving SOMETHING on a collision course.

Musk and image processing guys get all enamored with the accuracy of detections and identifications, but often fail to deal with the decision trees for the actual task at hand, which is to prevent collisions.


NNs, in general, fundamentally do not have the ability to separate wheat from chaff, while humans are often too good at it. Once we learn about tables, we can recognize tables irrespective of their backgrounds, and sometimes recognize parts of a table and fill in the rest, which is where optical illusions get us. Show us 4 corners arranged correctly, and we'll fill in the rectangular sides, even if they don't actually exist.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
"neither we nor Tesla really know what the NN (neural network) actually does in making its decisions"

"That's not the issue..."

You either need to know what the NN actually does in making its decision, or you don't. If you do, you have to understand everything about it that impacts its decisions. Then, it seems, you are getting into "relay land".

If you DON'T need to know, then, yes, you DO need to come up with a way to evaluate the success of your project.

With the former, you can predict the outcome. With the latter, you will have to evaluate the outcome.



spsalso

 
I need to know that it's doing it correctly. If you don't look inside, how do you know it was trained correctly? Did you understand my example? The Uber accident is somewhat related; the system was essentially never trained to decide when it needed to warn the driver of a potential collision, so it continued on its merry way until the collision was unavoidable, and only then warned the driver.

Tesla's NN is famous for being poorly trained; in an accident in Florida, it recognized the broadside of a semi-trailer as an overhead sign, even though it must have seen the cab mere seconds earlier. There has to be a better way than killing someone and then finding out that the NN training was completely erroneous.

Moreover, given these two outliers, how many more are hidden in the bowels of the NN? Do you want to be the sacrifice that finds another error in Tesla's NN training?

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
"I need to know that it's doing it correctly. If you don't look inside, how do you know it was trained correctly?"

If it doesn't make mistakes, it was trained correctly.

If it DOES make mistakes, it was not trained correctly. OR it's trying to do something it cannot do ("Flap your arms FASTER, damnit!"). OR it could do it, if it had the proper tools.



"...in an accident in Florida, it recognized broadside of a semi-trailer as an overhead sign..."

Really. And it didn't notice the "overhead sign" was on the roadway? "Oh, it's only an overhead sign on the roadway--I'll just drive right through it." Pathetic. And I'm not referring to the NN.





spsalso
 
If it DOES make mistakes, it was not trained correctly. OR it's trying to do something it cannot do ("Flap your arms FASTER, damnit!"). OR it could do it, if it had the proper tools.

Obviously and exactly, but unless you know HOW it was incorrectly trained, you're just going to re-train it, incorrectly.



"...in an accident in Florida, it recognized broadside of a semi-trailer as an overhead sign..."

Really. And it didn't notice the "overhead sign" was on the roadway? "Oh, it's only an overhead sign on the roadway--I'll just drive right through it." Pathetic. And I'm not referring to the NN.

No, the NN thought the side of the truck was an overhead sign, and decided to drive under it. The driver was otherwise occupied, as Tesla drivers so often seem to be when using something advertised as "Auto Pilot".



TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
I think Darwin invented Tesla's autopilot...

Rather than think of climate change and the corona virus as science, think of them as the wrath of God. Feel any better?

-Dik
 
Having worked in the software industry for some 36+ years, I find it interesting to hear people talk about software errors. Someone will say something like "the software failed". Or "it didn't work". Or some such thing.

Well, guess what: by definition, in virtually every case, the software ran without any errors. Now don't get me wrong, there are a lot of problems caused by software not running as expected, or not running as it should have, but in nearly 100% of those cases it's NOT the software that failed but rather the programming/design that was at fault. Software always does what software is told to do. So unless there is some sort of hardware problem, the result will be exactly what the software was programmed to do. If you're a software guy, it's always a 'hardware problem' ;-)

Now, getting back to the issue of this thread: if the Tesla auto-navigation system fails, it's probably NOT a software failure, as in 'it didn't do what it was programmed to do'. It's more likely that it did EXACTLY what it was programmed to do. It's just that whatever the problem was, 1) it was something that was not properly anticipated (a programming/design error), or 2) it was something the system was not able to detect (a hardware error, although it could also be that the software was not designed to detect it, which goes back to issue 1).

Now, I'll be the first to admit that since the advent of so-called AI-based systems, this idea that the software will ALWAYS execute exactly as programmed starts to get a bit gray, because, by definition, with true AI systems the execution can't always be predicted 100%, or at least one can't predict with 100% certainty how a true AI system will 'see' an unexpected or unforeseen situation and how it will 'choose' to react to that situation. This is why, contrary to a lot of marketing hype, there are very few true AI systems out there: first because they're hard as hell to program, and second because turning a true AI system on is very worrisome for people who design products, particularly where liability, either human or financial, is at high risk. Of course, the problem is that without the hope of what AI will bring to solving the problem, the problem may not have any practical solution to start with. This is why there is so much talk about AI systems 'learning' as they go: it's impossible to anticipate everything, so the idea is that we need systems which can adapt to what's happening around them. Now, I'm not sure how much of that sort of thing is playing a role in today's auto-navigation systems, but people are certainly assuming that in order to build a system that truly approaches 100% reliability in the future, we will need a fair amount of that hoped-for AI 'magic'.

John R. Baker, P.E. (ret)
EX-'Product Evangelist'
Irvine, CA

The secret of life is not finding someone to live with
It's finding someone you can't live without
 
Well, as an automation and safety programmer working with machines that are considered among the most dangerous in the industry, I have a hard time understanding how this type of "experimenting" can be allowed.
Even if a car does not fall under the Machinery Directive, it is still strange, since the consequences are the same when things go wrong: mainly death.
If this had been a machine, the rule that no single error must lead to danger, and that everything must be doubled and supervised, would apply.

Not even knowing what level of safety is actually achieved is, frankly, more than crazy.

/A

“Logic will get you from A to Z; imagination will get you everywhere.”
Albert Einstein
 
RedSnake said:
If this had been a machine, the rule that no single error must lead to danger, and that everything must be doubled and supervised, would apply.

Even the most diligently applied programmable safety system, structure category 4 PL e, with redundant diverse processors and watchdog systems, will still execute faulty logic with extreme precision. If someone does not anticipate a certain situation in the logic, and yet that situation happens, it will not be handled properly. If the system does not have sensors capable of reliably detecting a certain thing happening, it will not be detected. I work with these systems too, and there are always situations that the automation cannot reliably handle and that have to be covered in the "information for use" (the safety chapter in the instruction manual). And that can only cover the things you know that you don't know. The stuff that you don't know that you don't know ... good luck.

I don't know how the self-driving developers get away with it, either.
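For readers who have not worked with these systems, the "redundant diverse processors and watchdog" idea boils down to something like the toy cross-check below; the thresholds and names are invented for illustration. The point above still holds: if both channels encode the same wrong assumption, the comparator stays perfectly happy.

```python
# Toy dual-channel safety evaluation: two diversely coded channels judge the
# same demand, and a comparator forces the safe state on any disagreement or
# on a missed heartbeat. A logic gap shared by BOTH channels sails through.
SAFE_STATE = "de-energize_outputs"

def channel_a(pressure_bar):                 # channel A: direct comparison
    return pressure_bar > 250.0

def channel_b(pressure_bar):                 # channel B: diverse coding, same intent
    return (pressure_bar * 4.0) > 1000.0

def evaluate(pressure_bar, seconds_since_heartbeat, watchdog_limit=0.1):
    if seconds_since_heartbeat > watchdog_limit:
        return SAFE_STATE                    # stale channel -> fail safe
    a, b = channel_a(pressure_bar), channel_b(pressure_bar)
    if a != b:
        return SAFE_STATE                    # discrepancy -> fail safe
    return SAFE_STATE if a else "run"

print(evaluate(200.0, 0.02))   # "run"
print(evaluate(300.0, 0.02))   # safe state: overpressure tripped by both channels
```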
 
Yes, that is true.
The hydraulic presses we have only have one written rule: never go inside the press unless the ram is mechanically locked.
The safety around them has been developed over such a long time, though, that at least for our oldest ones, which are actually also the most secure ones, believe it or not, I can't think of even one single fault that could make them hazardous, unless someone has made electrical or hydraulic changes to them; and even then there would have to be at least two faults, otherwise they won't run.
They have a combination of electrical, hydraulic and mechanical safety and monitoring systems.

For other types of complex machines, much thought and knowledge is needed to get all potential hazards removed; sometimes it comes down to cost as well.

I think the hardest part is determining the actual safety level required. For a hydraulic press it's easy, there is only one: get it wrong and you are dead.

/A

“Logic will get you from A to Z; imagination will get you everywhere.”
Albert Einstein
 
"No, the NN thought the side of the truck was an overhead sign, and decided to drive under it."

So NN is approaching an object about 50' wide and 10' tall, sitting 3'-6" above the ground. NN interprets that as an overhead sign, which may well be 50' wide and 10' tall, but which is more like 20' above the ground. NN, of course, knows the distance between itself and the sign/truck, and the angular width.

From a great distance, it is not unreasonable to confuse the two. But as the distance closes, hints start coming in. In particular, where is that 20' of clearance NN is expecting? And why isn't that "overhead sign" rising in NN's angular field of vision? At 300' out, there is still time to stop without impact. Did NN EVER try to stop?
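A quick back-of-envelope check of that 300-foot figure, with assumed numbers (the actual speed in the Florida crash is not established here): even at 70 mph with an ordinary 0.8 g braking effort and half a second of reaction delay, the car stops inside 300 feet.

```python
# Stopping distance = reaction distance + v^2 / (2a), in feet. All inputs are
# assumptions for illustration, not crash-report data.
v = 70 * 5280 / 3600      # 70 mph in ft/s, about 102.7
a = 0.8 * 32.2            # 0.8 g deceleration, ft/s^2
t_react = 0.5             # s, generous for an automated system
stopping = v * t_react + v**2 / (2 * a)
print(round(stopping), "ft")   # roughly 256 ft, comfortably under 300 ft
```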


If the NN can be "interviewed" or "deconstructed" or whatever it is one does to be able to look into its every nook and cranny, these questions, and more, will have answers.


I do have a question for those with an answer: does each individual NN have to learn everything on its own, sorta like a baby (sorta)? Or does only ONE NN learn all the stuff and then get exactly reproduced? If the latter, all learning would then be shut off--otherwise it starts looking like the former.
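As far as is publicly documented, production driver-assist systems follow the second pattern: one network is trained centrally and identical frozen weights are pushed to every vehicle, with no on-board learning. A minimal sketch of that train-once, deploy-frozen-copies pattern is below; the file name and layer sizes are placeholders, not a description of Tesla's pipeline.

```python
# Train centrally, freeze, and ship the same weights to every car; the deployed
# copy runs inference only. Purely illustrative.
import torch
import torch.nn as nn

central_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
# ... central training loop would run here ...
torch.save(central_model.state_dict(), "fleet_model_v1.pt")   # one artifact for the fleet

in_vehicle = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
in_vehicle.load_state_dict(torch.load("fleet_model_v1.pt"))
in_vehicle.eval()                         # inference mode
for p in in_vehicle.parameters():
    p.requires_grad_(False)               # learning "shut off" in the car
```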


I do get a feeling that the designers of this system spent more time in teaching "driving" and less time in "collision avoidance".



spsalso

 
