Tesla Autopilot, fatal crash into side of truck 6

"Statistics" necessarily being "statistics to date" may be enough to say that using the buying public for such beta testing was somewhat better than totally dumb . . . provided that every customer was fully aware that they would be in that situation (and by a lot more than some fine print buried in a hundreds-of-pages-long owner manual).

But statistical grounds are still not enough to take it out of the realm of "personally risky" to each owner, given that a near-infinite range of possibilities exist that you'd be trying to use a finite and ultimately small sample size to predict the breadth of.


Norm
 
Regarding statistics, Tesla appears to be playing fast and loose there too with their metric of fatalities per mile driven, inasmuch as they compare all-road miles for cars controlled by gray matter against miles for cars controlled by chips that are only supposed to be driven on limited- or controlled-access highways (which are inherently safer on a per-mile basis). How many of Tesla's 130 million Autopilot miles were driven on roads with no at-grade intersections? Nearly all of them would be my guess, for this very reason.
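A toy example of that conflation (the rates below are made up for illustration, not actual NHTSA or Tesla figures):

```python
# Rough illustration of how mixing road types skews a fatalities-per-mile
# comparison.  All rates are placeholders, not real statistics.

HUMAN_ALL_ROADS   = 1.25e-8   # assumed human fatality rate, all road types (per mile)
HUMAN_HIGHWAY     = 0.60e-8   # assumed human rate on limited-access highways only
AUTOPILOT_HIGHWAY = 0.77e-8   # roughly 1 fatality in 130 million (mostly highway) miles

# Apples to oranges: Autopilot's highway miles vs. humans on every road type
print(f"vs. all roads: Autopilot looks {HUMAN_ALL_ROADS / AUTOPILOT_HIGHWAY:.2f}x safer")

# Apples to apples: compare against human drivers on the same class of road
print(f"vs. highways:  Autopilot looks {HUMAN_HIGHWAY / AUTOPILOT_HIGHWAY:.2f}x safer")
```

With those assumed rates, the claimed advantage flips to a disadvantage once the comparison is restricted to the same class of road.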

The Tesla must only be driven on those types of highways because it doesn't know how to resolve traffic control devices, bicycles, pedestrians, or (as brutally evident) cross-traffic. Regardless of what it thought the big white trailer was, its computer managed to thread a pretty narrow window to kill the driver. It had to have seen the tractor, trailer wheels, and landing gear pull in front of it, and the rear of the trailer entering the intersection behind them, and thought, "hey, I have one second to shoot this 25-ft gap that is going to open up in front of me under this road sign that just appeared." I read somewhere that the manufacturer of the sensor suite says it just doesn't do cross-traffic. At this point, it's a novelty.
 
Interesting that since this happened, I've gotten at least three ad emails from Tesla touting their safety performance.

Please remember: we're not all guys!
 
Who was it that said, "The best defense is a good offense"?

John R. Baker, P.E. (ret)
EX-Product 'Evangelist'
Irvine, CA
Siemens PLM:
UG/NX Museum:

The secret of life is not finding someone to live with
It's finding someone you can't live without
 
Sounds like another autopilot crash, but this time the autopilot was engaged under conditions that aren't recommended. The road had no center marking, which is not an approved operating condition. Can't say this was an autopilot failure. More a case of "didn't read the manual," or "who cares what the manual says."
 
"The road had no center marking, which is not an approved operating condition."

This sort of implies that the driver has to keep track of and remember all the caveats. Seems to me that the program ought to have its own check, "Road center marking not detected, disengaging in 5 seconds unless center markings are re-acquired."
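A minimal sketch of that kind of self-check (the class, thresholds, and state names below are invented for illustration):

```python
# Hypothetical watchdog for the check suggested above: if lane markings are
# lost, warn the driver and disengage after a grace period unless the
# markings are re-acquired.  Names and thresholds are made up.

GRACE_PERIOD_S = 5.0

class LaneMarkingWatchdog:
    def __init__(self):
        self.lost_since = None                 # time when markings were first lost

    def update(self, markings_detected: bool, now: float) -> str:
        if markings_detected:
            self.lost_since = None             # markings re-acquired, keep driving
            return "ENGAGED"
        if self.lost_since is None:
            self.lost_since = now              # start the countdown and warn the driver
            return "WARN"
        if now - self.lost_since >= GRACE_PERIOD_S:
            return "DISENGAGE"                 # hand control back to the driver
        return "WARN"
```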

TTFN
I can do absolutely anything. I'm an expert!
faq731-376 forum1529
 
Cranky108 said:
The road had no center marking...
Looks to me like the highway center is well marked by a wide median strip, which the Tesla did not cross.
 
IRstuff said:
This sort of implies that the driver has to keep track of and remember all the caveats.

This would actually be similar to many aviation autopilots, which have certain limits outside of which the autopilot is not intended to be used; usually minimum airspeed and crosswind, but often various others. Obviously there is a higher level of proficiency and responsibility associated with getting a pilot's license than a driver's license, so it may be a bit of an apples-and-oranges comparison. Still, as long as the limitations are made crystal clear in the documentation and are reasonable, I think we can have a few things in this world that are not 100% idiot-proof. But I agree that, where practical, the more checks that let the autopilot warn you when you've left the defined limits, the better.

Professional and Structural Engineer (ME, NH, MA)
American Concrete Industries
 
The main difference is that pilots KNOW that if they screw ANYTHING up, they die. They are mostly very conscientious about pre-flight checks, checklists, etc. If starting up a Tesla required going through pre-drive checks with sign-offs and inspections, things might be a bit more professional.

TTFN
I can do absolutely anything. I'm an expert!
faq731-376 forum1529
 
pre-drive checks that required sign-offs

I like it! Heck, they have BIG displays; put up a rules quiz that changes each time. The first question is, "Will you be using the auto-drive today?" If they answer NO, the quiz ends. If they answer YES, then a question or two follow that have to actually be read to answer correctly.
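A rough sketch of what that gate could look like (the questions and gating behavior are invented, not anything Tesla actually does):

```python
import random

# Sketch of a rotating pre-drive quiz that gates the auto-drive feature.

QUESTIONS = [
    ("Must your hands stay on the wheel while auto-drive is active? (y/n)", "y"),
    ("Is auto-drive approved for roads with cross-traffic? (y/n)", "n"),
    ("Will the car always brake for stationary obstacles? (y/n)", "n"),
]

def pre_drive_check() -> bool:
    """Return True if the driver may proceed (auto-drive not requested, or quiz passed)."""
    if input("Will you be using auto-drive today? (y/n) ").strip().lower() != "y":
        return True                              # no auto-drive, no quiz needed
    question, answer = random.choice(QUESTIONS)  # different question each start-up
    return input(question + " ").strip().lower() == answer
```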

Keith Cress
kcress -
 
How long before we have the first case of a drunken driver using the autopilot to get home?

Professional and Structural Engineer (ME, NH, MA)
American Concrete Industries
 
TME said:
Still, I think as long as the limitations are made crystal clear in the documentation and are reasonable then I think we can have a few things in this world that are not 100% idiot proof. But I agree that, if practical, the more checks where the autopilots can warn when you've left the defined limits, the better.
I suspect that most car owners don't look at the owner's manual that came with their car for much more than figuring out how to operate the infotainment system, the other comfort and convenience features, and maybe the HVAC. Even among people with at least some claim to car enthusiasm, there is considerable lack of awareness of owner's-manual content on the things that could really matter - vehicle dynamics assist systems, "driving modes," and other topics related to the actual driving of their cars.

The more serious the potential consequences of an error, the closer anything needs to be to 100% idiot-proof. As fast as circumstances can change, expecting a mostly inattentive driver to recognize that a warning was issued and take the correct actions - all within the time between when the AP issued its warning and the point of no return - may not be realistic.
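A back-of-the-envelope look at that time budget (the speed, warning distance, and reaction time below are assumptions, not figures from the crash investigation):

```python
# Rough time budget between a warning and the point of no return.
# All numbers are assumptions for illustration.

speed_mph = 65
speed_fps = speed_mph * 5280 / 3600          # ~95 ft/s

warning_distance_ft = 300                    # assumed distance at which a hazard is flagged
reaction_time_s     = 2.5                    # assumed perceive-decide-act time, distracted driver

time_to_hazard_s = warning_distance_ft / speed_fps
margin_s = time_to_hazard_s - reaction_time_s

print(f"time to hazard: {time_to_hazard_s:.1f} s, margin after reacting: {margin_s:.1f} s")
```

With those assumptions the driver is left with roughly half a second of margin, which is why a warning plus a hand-back may not be realistic.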


Norm
 
The "blame Tesla" stance (not necessarily people here, but the cumulative opinions I've seen in general) is kind of hard for me to get behind. It feels like Tesla provided a product that has made things better for the driver, but because it didn't make it /perfect/ for the driver, it's somehow 'in the wrong'. The driver is better off for having the feature... so what good does it do to rub Tesla's nose in this incident and condemn them? It seems like it would only serve to set the precedent that it's not worth innovating at all, because if you try to improve a little bit, you'll only be tarnished for not improving a lot.

Reminds me of the tired old engineer/manager joke:
anyone/no one/someone said:
A man in a hot air balloon realized he was lost. He reduced altitude and spotted a woman below. He descended a bit more and shouted, "Excuse me, can you help me? I promised a friend I would meet him an hour ago, but I don't know where I am."

The woman below replied, "You're in a hot air balloon hovering approximately 30 feet above the ground. You're between 40 and 41 degrees north latitude and between 59 and 60 degrees west longitude."

"You must be an engineer," said the balloonist. "I am," replied the woman, "How did you know?"

"Well," answered the balloonist, "everything you told me is, technically correct, but I've no idea what to make of your information, and the fact is I'm still lost. Frankly, you've not been much help at all. If anything, you've delayed my trip."

The woman below responded, "You must be in Management." "I am," replied the balloonist, "but how did you know?"

"Well," said the woman, "you don't know where you are or where you're going. You have risen to where you are due to a large quantity of hot air. You made a promise which you've no idea how to keep, and you expect people beneath you to solve your problems. The fact is you are in exactly the same position you were in before we met, but now, somehow, it's my fault."

Note, I'm not saying they don't need to improve, or should stop trying to improve. I just don't /fault/ them for a driver /negligently/ driving 'hands free' with a feature that tells you in no uncertain terms that it isn't for 'hands free' use.
 
Tesla probably can't be faulted for any individual driver driving 'hands free'. Not with whatever legal disclaimers exist in their product documentation, if that sort of CYA approach is deemed sufficient.

Failure to be more proactive in keeping hands-free driving from happening is a different story, since "hands free" driving, if only for personal experimentation, is clearly a predictable occurrence. Especially since Tesla freely discloses that AP is still in beta.


Norm
 
I seem to remember a juvenile joke from years ago.
A not very smart person bought a motor home.
He headed out on the freeway.
He engaged cruise control, and then left the driver's seat to make himself a cup of coffee in the galley....

For years I thought that it was just a silly joke.
Now I wonder.
The height of irony would be if the Tesla delivering the Darwin Award impacted a truck while on auto-pilot.
I wonder if it would be better to send the Darwin Award by drone?

Bill
--------------------
"Why not the best?"
Jimmy Carter
 
How did the radar unit fail to sense a row of posts with a guardrail on top?

Why would the AP even engage if the road is questionable?

Why would the AP not disengage if the road was good but turned questionable?

Why isn't there some detection method to ensure the driver's hands are still on the wheel since, according to Tesla, the driver's hands need to be on the wheel?

With these incidents occurring, and the more I read about the system, the more convinced I am that Tesla is attempting to rely on the camera alone because they believe a camera-based system is the future. However, it would appear the so-called AI system behind the camera needs to get A LOT smarter and do A LOT more learning about what it's seeing before it's ready. And I'm not fully convinced a camera system can actually be ready. As a really basic description, the system relies on pattern or shape recognition, but there is an almost infinite number of patterns or shapes possible out on the road. So a system that takes some new pattern or shape and tries to "best fit" it to the patterns or shapes in its learned history will make identification mistakes.

The worst part is that when it's wrong this way, you get a hazard misidentified as something benign, which results in the system driving on, oblivious to the fact that there even is a hazard. No warnings occur, and the driver needs to catch the system acting up in time or the car crashes full speed into the hazard.

The other situation is misidentifying a benign object as a hazard and taking evasive action, resulting in a crash because the drivers around the car aren't expecting it to suddenly brake or swerve.
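A toy "best fit" classifier (the feature vectors and labels are made up) showing how an unfamiliar shape can land on a benign label and produce exactly that first failure mode:

```python
import math

# Toy nearest-match classifier: an unfamiliar shape is matched to whichever
# entry in a finite library is closest, which may well be something benign.
# The features and labels are invented for illustration.

LIBRARY = {
    "overhead sign": (0.9, 0.1, 0.8),   # (brightness, edge density, height in frame)
    "car rear":      (0.4, 0.7, 0.3),
    "guardrail":     (0.5, 0.6, 0.2),
}

def classify(features):
    # Pick the library entry with the smallest Euclidean distance to the input.
    return min(LIBRARY.items(), key=lambda kv: math.dist(features, kv[1]))[0]

# A white trailer seen broadside: bright, few edges, high in the frame.
unknown = (0.95, 0.15, 0.75)
print(classify(unknown))   # -> "overhead sign", i.e. benign, so no braking is triggered
```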

 
@NormPeterson

Should every car maker pro-actively keep people from driving while under the influence of drugs? I mean obviously that's a predictable occurrence. I think the onus you suggest putting on them is unreasonable when compared to current practice/norms.

@LionelHutz
I could see it being beneficial to engage the AP in questionable conditions, as it has many other features besides object avoidance or lane-following. After all, it's meant to /assist/ the driver, not replace the driver's involvement.

I'm betting a sensor to ensure there is a hand (or hands) on the wheel would be pretty easy to defeat, and like I mentioned in my response to NormPeterson, I think it's unreasonable to require them to go so much further than every other common car on the market.

As to the frequency of negatives vs positives in computer assisted driving, the decision is a pretty big one to weigh. Obviously proponents have been touting statistics of these intelligent cars being x% safer than general drivers. I remain thoroughly unconvinced, personally, because I don't think we've had a good analysis of the data and I've never seen mention of how they interpret the data. I doubt the current Tesla driver is a typical representation of all drivers in general. It could be that Tesla drivers, as they are right now, are a safer-driving demographic to begin with.

One thing I do find difficult is equating a computer mistake with a human mistake. If a human causes an accident, I mostly think "c'est la vie" and move on. When machines/computers/software make mistakes with similar results, I find myself angrier or more frustrated with it. It's less acceptable for machines to foul up, in my mind, I suppose. That's one reason I tend not to care for increased automation or reliance on machine systems in my personal life. I very much prefer to screw things up directly, rather than have a machine screw it up for me :)
 
Re: impairment; no, the car should not be responsible, even though there are some lock-outs that can be used on cars. The liability problem just shifts to the false negatives: drivers who are impaired but somehow manage to get the car started, and die. Whose liability is that?

I think the main issue is still with the name "AutoPilot," which connotes way more than it really is. Thus, expectation and reality are not aligned, and that is clearly the fault of Tesla.

TTFN
I can do absolutely anything. I'm an expert!
faq731-376 forum1529
 