
2 dead in Tesla accident "No one was driving the car"


MartinLe (Civil/Environmental)

“No one was driving” the fully electric 2019 Tesla when the accident happened. There was a person in the front passenger seat and another in the rear passenger seat.

The vehicle was traveling at high speed when it failed to negotiate a cul-de-sac turn, ran off the road, and hit a tree.

The brother-in-law of one of the victims said relatives watched the car burn for four hours as authorities tried to put out the flames.

Authorities said they used 32,000 gallons of water to extinguish the flames because the vehicle’s batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery.
 

spsalso said:
IF reaction time can be brought to zero, then a self-driving car trailing you should be able to drive at a distance of 6" at 45 mph. Or 85 mph. Included in that claim is that the self-driving car can stop as quickly as you can.

I have done an analysis of a robot car using lidar. I worked out that the car would need two, for highway driving anyway. One would scan a 40° field of view in front of the car at 8 Hz. The other would scan 360° at 10 Hz. It takes three scans to detect acceleration: the first scan tells the robot you are in front of it, the second detects relative velocity, and the third detects acceleration. The robot needs a quarter of a second to detect that you have hit the brakes. I assume that the computer processing all this is approximately instantaneous. There is still time for the brakes to engage. A young, attentive human reacts in 3/4 of a second.
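
To put rough numbers on that, here is a minimal sketch in Python; the 8 Hz scan rate, the three-scan logic, and the 3/4 s human figure are taken from the analysis above, everything else is assumed:

```python
# Minimal sketch of the latency arithmetic above: the forward lidar
# scans at 8 Hz, and three successive scans are needed (position,
# then relative velocity, then acceleration), i.e. two scan intervals.

SCAN_RATE_HZ = 8.0        # forward lidar, 40 deg field of view (from the post)
SCANS_NEEDED = 3          # position -> velocity -> acceleration
HUMAN_REACTION_S = 0.75   # young, attentive human driver (from the post)

robot_latency_s = (SCANS_NEEDED - 1) / SCAN_RATE_HZ   # 0.25 s

def distance_before_braking_m(speed_mph: float, latency_s: float) -> float:
    """Distance covered at constant speed before braking can begin."""
    return speed_mph * 0.44704 * latency_s   # mph -> m/s, then d = v * t

for label, latency in (("robot", robot_latency_s), ("human", HUMAN_REACTION_S)):
    d = distance_before_braking_m(45, latency)
    print(f"{label}: {latency:.2f} s latency -> {d:.1f} m covered at 45 mph")
```

At 45 mph the robot covers about 5 m before it can react, versus roughly 15 m for the human.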

--
JHG
 
There have been many examples of collision avoidance failing, particularly with Tesla, so if the car in front of you is one of them, there is a finite probability that it might collide with something relatively immovable, stopping much more quickly than brakes could manage. Meanwhile, if your car is following closer than its own emergency braking distance, you'd smash into it.
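
For a rough sense of scale, a sketch assuming ~0.8 g emergency deceleration (an illustrative figure, not any particular car's):

```python
# Rough numbers for why "6 inches at 45 mph" only works if the lead car
# can never stop faster than brakes allow. Assumes ~0.8 g deceleration;
# real stopping distances vary with tires, load, and road surface.

G = 9.81          # m/s^2
DECEL = 0.8 * G   # assumed emergency braking deceleration

def braking_distance_m(speed_mph: float) -> float:
    """Stopping distance under constant deceleration: d = v^2 / (2a)."""
    v = speed_mph * 0.44704   # mph -> m/s
    return v * v / (2 * DECEL)

for mph in (45, 85):
    print(f"{mph} mph: ~{braking_distance_m(mph):.0f} m to stop under braking; "
          "a car ahead hitting a wall 'stops' in roughly its own crumple length")
```

About 26 m at 45 mph and 92 m at 85 mph: if the lead car stops in a couple of metres against something immovable, a trailing car needs nearly its whole braking distance as gap, zero reaction time or not.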

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Another case of no one driving: Smart Summon results in a Tesla colliding with a private jet. Yet another example of Tesla's camera-only obstacle detection failing.


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
That little car looked SO guilty, hesitantly starting up again and stopping. And sorta hiding behind the plane.

"You are a BAD Tesla. BAD!"

Yeah. I read the article, and it appears the car owner is the dim bulb here. But looking at how the car moved reminded me of a big dog that knocked something over and is kinda ashamed.


spsalso
 
And it doesn't even have the excuse that it was going 85 mph.


spsalso
 
Bad image processing is bad at any speed. There's a fundamental flaw in assigning collision avoidance to an image processing specialist; they use image processing as a hammer when they need a screwdriver.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
While humans use sight to wander around in the world, they also use hearing and touch. Even smell on occasion.

And then there's bats. Cute little critters. BIG on sound, you know.

I'm pretty sure most of us here would use "any and all" systems that would aid in safe navigation.


spsalso
 
Anyone deploying so-called artificial intelligence in an environment where human intelligence is needed doesn't understand what intelligence is.

"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz
 
Anyone deploying so-called artificial intelligence who thinks/believes that it will magically do whatever it takes to get the right answer is not an engineer. Undoubtedly a liberal arts major, probably in psychology or political science. Or business administration, perish the thought!


spsalso
 
I believe it is fairly irresponsible of Tesla to put its clearly incapable auto-driving software into the hands of owners who are not in a position to understand its effectiveness or ineffectiveness.

IRstuff said:
Bad image processing is bad at any speed. There's a fundamental flaw in assigning collision avoidance to an image processing specialist; they use image processing as a hammer when they need a screwdriver.
I'll play devil's advocate here and say that I drive relying heavily on my image processing brainware, and it seems to work out OK.

My issue with this is where the error lies:
1. The car didn't see the obstacle.
2. The car did see the obstacle but thought it wouldn't hit it.
3. The car did see the obstacle, did recognise that it would hit it, and did so anyway.

All of these are a problem whether or not it correctly identified the obstacle.
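
For what it's worth, those three cases map onto different stages of a typical perception/prediction/planning stack; a hypothetical sketch, with illustrative names only:

```python
# Hypothetical sketch mapping the three cases above onto the stages of
# a typical perception -> prediction -> planning stack. The names are
# illustrative, not any vendor's actual architecture.
from enum import Enum, auto
from typing import Optional

class FailureMode(Enum):
    PERCEPTION = auto()   # 1. the car didn't see the obstacle
    PREDICTION = auto()   # 2. saw it, but judged it wouldn't hit it
    PLANNING = auto()     # 3. saw it, knew it would hit it, went anyway

def diagnose(detected: bool, collision_predicted: bool, braked: bool) -> Optional[FailureMode]:
    """Return the stage that failed, or None if the stack behaved correctly."""
    if not detected:
        return FailureMode.PERCEPTION
    if not collision_predicted:
        return FailureMode.PREDICTION
    if not braked:
        return FailureMode.PLANNING
    return None
```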
 
A human brain is many orders of magnitude more capable in both image processing and decision making than the processor in these cars.

I suspect it saw the plane and classified it as something it would not hit.

 
LionelHutz said:
A human brain is many orders of magnitude more capable in both image processing and decision making than the processor in these cars.
Well, that's the thing. Image processing and decision-making should barely have come into this situation. This was a solid static object and the speed was low. An AI program shouldn't need to identify an object to avoid hitting it.

LionelHutz said:
I suspect it saw the plane and classified it as something it would not hit.
Which would be quite odd. Because "seeing" a static object and then determining its distance should be about the easiest part in the whole AI. In fact, AI should barely come into it. 3D positioning of objects and the vehicle should be the easy part of self-driving cars. That aspect isn't traditionally referred to as AI.

The hard parts are classifying detected objects and, even worse, predicting the behaviour of objects. And let's not open the decision-making can of worms. Predicting whether a pedestrian is going to step out in front of the car or stop at the curb is well beyond most AI that I am aware of. Likewise, the decision to drive through obstacles like a plastic bag blowing in the wind is challenging.
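
To illustrate why ranging a static object is the geometrically easy part, here is a minimal stereo-triangulation sketch. The camera constants are made up, and this is not a claim about how Tesla's camera-only stack actually ranges objects:

```python
# Minimal sketch: ranging a static object from a calibrated stereo pair.
# Pure triangulation, z = f * B / d; no classification involved. The
# camera constants below are invented for illustration.

FOCAL_LENGTH_PX = 1200.0   # focal length in pixels (assumed)
BASELINE_M = 0.30          # separation between the two cameras (assumed)

def depth_m(disparity_px: float) -> float:
    """Distance to a feature from its pixel shift between the two views."""
    if disparity_px <= 0:
        raise ValueError("feature must show positive disparity")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A tail fin shifted 24 px between the views is 15 m ahead, whatever it "is".
print(f"{depth_m(24.0):.1f} m")
```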
 
human909 said:
Which would be quite odd. Because "seeing" a static object and then determining its distance should be about the easiest part in the whole AI.

It's trivial for a human, because of both common sense and contextual cues; AI is likely decades away from that sort of processing. In the meantime, Musk eschews lidar because of the expense, but lidar would have made it obvious that there was something in the way. Additionally, human vision processing has been honed for eons for tracking and detecting objects, but like I said, if only image processing gurus were involved, then they might not have done the back-end logical processing. Uber's pedestrian fatality in Arizona was like that: the lidar actually detected the victim well in advance, but classified them as a different object on each detection, and there was ZERO logical processing to say, "I've detected a number of different objects along a collision course; I need to stop or maneuver NOW." It's lame-brain systems engineering, or actually, a lack of systems engineering.
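
The missing piece described here is essentially track-level persistence: associate detections by position rather than by label, so the track survives relabeling. A hypothetical sketch, with made-up gate and persistence parameters:

```python
# Hypothetical sketch of track-level persistence: detections are
# associated by position, so a track survives even when the classifier
# relabels the object (bicycle, unknown, pedestrian...) every frame.

GATE_M = 2.0          # max distance to match a detection to a track (assumed)
FRAMES_TO_BRAKE = 3   # persistent frames on a collision course before braking

class Track:
    def __init__(self, xy):
        self.xy = xy
        self.hits = 1

def dist(a, b):
    return ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5

def update_tracks(tracks, detections):
    """Greedy nearest-neighbour association by position; labels ignored."""
    for det_xy, _label in detections:   # the label is deliberately unused
        match = min(tracks, key=lambda t: dist(t.xy, det_xy), default=None)
        if match and dist(match.xy, det_xy) < GATE_M:
            match.xy, match.hits = det_xy, match.hits + 1
        else:
            tracks.append(Track(det_xy))
    return tracks

def should_brake(tracks, in_path):
    """Brake once any track has persisted in the vehicle's path."""
    return any(t.hits >= FRAMES_TO_BRAKE and in_path(t.xy) for t in tracks)

# The same approaching object, relabeled on every frame:
tracks = []
for frame in ([((10.0, 0.0), "bicycle")],
              [((9.0, 0.0), "unknown")],
              [((8.0, 0.0), "pedestrian")]):
    tracks = update_tracks(tracks, frame)
print(should_brake(tracks, in_path=lambda xy: abs(xy[1]) < 1.5))   # True
```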

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
human909 said:
Which would be quite odd. Because "seeing" a static object and then determining its distance should be about the easiest part in the whole AI.

Yes, it should, but in this case it wasn't.
Why?
Probably because it was a cloudy day and the car could only see a white static floating thing (the tail) with no connection to the ground, probably against a white background. It isn't possible to tell from the photo, but it wouldn't surprise me if there were white hangars behind the plane in the direction the car was going.

A human would see the whole plane and get the picture, but looking at the same view the car saw, it's not even certain a human would have understood what was in front of them.

"Logic will get you from A to Z; imagination will get you everywhere."
Albert Einstein
 
Probably because it was a cloudy day and the car could only see a white static floating thing (the tail) with no connection to the ground, probably against a white background. It isn't possible to tell from the photo, but it wouldn't surprise me if there were white hangars behind the plane in the direction the car was going.

Tesla already burned that excuse many years ago; the software should have been fixed for it long since. One obvious failing is that the software probably couldn't figure out what it was seeing at all and "decided" it was OK to go ahead, while a human would have slammed on the brakes, although the human would have had the world experience to recognize the plane and know to stop. THAT is a failing of the (non) systems engineering: an unrecognized object with bulk and possible depth should be sufficient to cause the car to stop. Moreover, there had to have been previous clues about the object; as in the Florida case alluded to earlier, the car had to have seen the semi's cab prior to it turning in front of the car, and any human driver would have registered and tracked the movement to its logical conclusion: an obstacle on a collision course. "You saw a truck turning in front of you 2 seconds ago; it cannot have disappeared."
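
A classification-free version of that stopping rule is short; a sketch assuming range, closing speed, and an in-lane flag are available from some sensor (the deceleration and margin values are made up):

```python
# Sketch of a classification-free braking rule: any detected object with
# bulk in the lane triggers a stop once it falls inside the braking
# envelope, whether or not the classifier knows what it is.

DECEL = 7.0      # m/s^2, assumed emergency deceleration
MARGIN_M = 5.0   # assumed safety margin

def must_brake(range_m: float, closing_speed_ms: float, in_lane: bool) -> bool:
    """Brake if an in-lane object, of any class, is inside the stopping envelope."""
    if not in_lane or closing_speed_ms <= 0:
        return False
    stopping_m = closing_speed_ms**2 / (2 * DECEL)   # v^2 / (2a)
    return range_m <= stopping_m + MARGIN_M

# An unidentified white shape 18 m ahead while closing at 15 m/s (~34 mph):
print(must_brake(18.0, 15.0, in_lane=True))   # True: 15^2/14 ~ 16 m + margin
```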

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Is it possible to know the total system experience? All sensory inputs, all decisions/branches, outputs/commands, exactly as they occurred in the incident? Can we "know" what the car "knew"? Pardon the imprecise terminology.

The problem with sloppy work is that the supply FAR EXCEEDS the demand
 
Can we "kmow" what the car "knew"?

That depends on the diligence of the designers. In the case of the Uber pedestrian collision, there was sufficient datalogging to know that the sensors detected a series of different objects, all on the same trajectory/collision course, but apparently ignored them, possibly because the classifications kept changing, and only initiated the driver warning to brake when the pedestrian was directly in front of the car.

In the case of the Tesla incident in Florida, it was said that the car thought the side of the truck was a highway sign, something that no human would have assumed. Therein lies a gigantic fallacy of AI target recognition; you can get totally wrong and incongruent answers that humans would never come up with.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
"You saw a truck turning in front of you 2 seconds ago; it cannot have disappeared."

Sure? It could have been beamed away by the Enterprise...

I guess if the car is supposed to recognize "things," aeroplanes were forgotten, since they normally don't occupy motor roads, unless it is a highway and they need to make an emergency landing, that is.
I think they need to include them in the picture database...
Or just skip the AI self-driving thing altogether.

"Logic will get you from A to Z; imagination will get you everywhere."
Albert Einstein
 
Focusing on objects and trying to identify them shouldn't be the first priority.

The first priority should be "look for clear road". Doesn't matter if you know what a potential obstruction is. It's still an obstruction. Don't hit it.

Once you have identified the clear road, THEN you can start picking out and identifying objects to see whether they might affect your path through that clear section before you pass through it, e.g. another vehicle also heading in that direction.
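
That ordering is straightforward to express; a hypothetical sketch over a simple occupancy grid, with all names illustrative:

```python
# Hypothetical sketch of the "look for clear road first" ordering:
# step 1 checks an occupancy grid for a clear corridor (no identification
# needed); step 2 classifies objects only to refine the plan.

def corridor_clear(occupancy, lane_cells) -> bool:
    """Step 1: every grid cell along the intended path must be unoccupied.
    An occupied cell is an obstruction regardless of what it is."""
    return all(not occupancy[cell] for cell in lane_cells)

def plan(occupancy, lane_cells, objects, classify):
    """Free space first, identification second."""
    if not corridor_clear(occupancy, lane_cells):
        return "brake"   # don't hit it; identity irrelevant
    # Step 2: classify only to predict what might enter the corridor
    # before we pass through it (e.g. another vehicle heading there).
    if any(classify(obj) == "converging_vehicle" for obj in objects):
        return "slow"
    return "proceed"

grid = {(0, i): False for i in range(5)}   # five cells ahead, all clear
print(plan(grid, list(grid), objects=[], classify=lambda o: "static"))   # proceed
```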
 