Behold... the new Tesla Convertible!


Spartan5

Civil/Environmental
Three years later... same story:


[Image: Tesla crash]
 
Too early ... that was a fatality.

Tesla is still bragging that full self-driving is coming soon (they're suggesting this year, which presumably means 2019).

If they have thoughts of that happening in that timeframe, then their crash-avoidance / prevention ought to be near bulletproof well in advance of that date (so that they have time to validate it). Obviously, it's not.
 
Person driven vehicles do this most every day and for the same reason - the driver.
Welcome to every single time technology shifted.
 
"Person driven vehicles do this most every day and for the same reason - the driver."

Not my 'person driven vehicle' - ever. I've never hit anything. I'll put my driving skills and ability to handle any situation that arises against any computer anywhere. I don't believe a computer will ever be able to match a human's ability to handle the surprises that will inevitably pop up in the driving environment. Assuming a computer that advanced could be created, I believe it would pose a bigger danger than a few inattentive humans.
 
A.I. is hard, especially outdoors.

 
But, from the second link, Tesla's algorithms are idiotic. They have sufficient information to determine that the car cannot possibly fit under the incorrectly classified "overhead structure," even without LIDAR. The computer had full information prior to the truck crossing the scene, and can therefore easily determine that there's negative clearance. This is essentially the same kind of "here and now, ignore the past" processing that Uber's accident in Arizona and the two Boeing MAX accidents share.
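To illustrate the kind of sanity check being described, here is a minimal sketch with invented names and numbers (not Tesla's actual code) of rejecting a path once the estimated gap under a detected "overhead" return is smaller than the vehicle:

```python
# Hypothetical sketch of the "negative clearance" check described above.
# None of these names or numbers come from Tesla's software.

VEHICLE_HEIGHT_M = 1.45  # roof height of the car plus a margin

def clearance_ok(overhead_bottom_m: float, road_surface_m: float) -> bool:
    """Return True only if the gap under a detected "overhead structure"
    is tall enough for the vehicle to pass beneath it."""
    clearance = overhead_bottom_m - road_surface_m
    return clearance > VEHICLE_HEIGHT_M

# The underside of a semi trailer sitting ~1.2 m above the road is not an
# overpass, so any path under it should be rejected as an obstacle:
assert not clearance_ok(overhead_bottom_m=1.2, road_surface_m=0.0)
```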

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Stretching the bounds of new things can be hard - with lots of trial and error.
To wit: the first attempts to fly:
Early Flying Failures

 
People are missing the whole point of an autonomous car. An autopilot in an airplane doesn't make it so I don't have to fly the plane; it makes it so I can focus on other things, reducing task saturation, or it helps remove tedious operation, allowing one to stay focused (same reason we have cruise control).

An autonomous car shouldn't replace the driver; it should supplement them. Essentially, you'd use it as a high-end lane-keeping program so you can focus on more things at once. Being able to look over your shoulder during a lane change without worrying if the guy in front of you has just slammed on his brakes? Stuff like that.

After having driven an SUV with adaptive cruise control this winter, I've vowed my next car will have that feature. Fantastic for safety and convenience.

Ian Riley, PE, SE
Professional Engineer (ME, NH, VT, CT, MA, FL) Structural Engineer (IL, HI)
 
From Wired.

"GM Will Launch a Self-Driving Car Without a Steering Wheel in 2019"

[Image: GM Cruise]


2019? Morning or afternoon? Because my 3D-printed, fusion-powered, flying car is coming in the afternoon.
 
People are missing the whole point of an autonomous car. An autopilot in an airplane doesn't make it so I don't have to fly the plane; it makes it so I can focus on other things, reducing task saturation, or it helps remove tedious operation, allowing one to stay focused (same reason we have cruise control).

No, Tesla called it "Autopilot" specifically to convince people that it could do more than it can actually do. Moreover, Tesla's Autopilot can't even do what should have been a basic feature of its object detection and collision avoidance. Nor could Uber's system. In both cases, prior knowledge of an object moving onto a collision intercept is neither implemented nor designed for. In both Tesla accidents, the sensors had to have detected the semi moving across the projected path of the car, and the system basically threw away that vital information. Furthermore, as others have demonstrated, Tesla's software does not even perform rudimentary trafficability calculations that would clearly show that the car can't possibly fit under the "overhead structure," or couldn't possibly follow a path into a median divider, even if the lane markings allow it. These are the newbie errors that people in the industry have recognized for decades, and why people spent the effort to implement Kalman filters and history files.
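For illustration only, the "history file" idea amounts to something like the following sketch: keep the last detections of an object, fit a constant-velocity track, and project it forward to test for a path conflict (a Kalman filter does the same job with proper noise handling). All names and thresholds here are invented, not anyone's production code:

```python
# Illustrative "history file" sketch (not Tesla's or Uber's code): fit a
# crude constant-velocity track to recent detections and project it
# forward to see whether the object will enter the ego vehicle's lane.
from dataclasses import dataclass
from typing import List

@dataclass
class TrackPoint:
    t: float  # time of detection, s
    y: float  # lateral offset from the ego path, m (0 = directly ahead)

def predicts_intercept(history: List[TrackPoint],
                       horizon_s: float = 3.0,
                       lane_half_width_m: float = 1.8) -> bool:
    """Report whether a tracked object is expected to occupy the ego lane
    within the look-ahead horizon, whatever it was classified as."""
    if len(history) < 2:
        return False
    a, b = history[-2], history[-1]
    dt = b.t - a.t
    if dt <= 0:
        return False
    vy = (b.y - a.y) / dt            # lateral velocity, m/s
    step = 0.1                       # prediction step, s
    t = step
    while t <= horizon_s:
        if abs(b.y + vy * t) < lane_half_width_m:
            return True              # projected into our lane: raise alarm
        t += step
    return False

# A truck seen 10 m to the left, closing laterally at ~3 m/s, is a
# predicted intercept well before it actually blocks the lane:
hist = [TrackPoint(t=0.0, y=10.0), TrackPoint(t=1.0, y=7.0)]
assert predicts_intercept(hist)
```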

Additionally, Musk has been particularly pigheaded in refusing to even properly contemplate using LIDAR, which would have definitively shown that the "overhead structure" isn't one, and that defective lane markings are not to be followed. Musk makes noises about using only what human eyes can sense, yet ignores the fact that the Tesla does use a radar, and almost all new cars use backup cameras and sonar.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
IRstuff said:
No, Tesla called it "Autopilot" specifically to convince people that it could do more than it can actually do.

I should have stated that both drivers and Tesla have missed the point. Or at least their marketing people did. I mean, good job, I guess, as it probably sells the car better, but it definitely is a misnomer. After seeing videos of Tesla cars failing to avoid debris and other obstacles in the road, it's definitely clear it can't be an "autopilot".

Going back to my aviation example. An autopilot will do a fantastic job of holding a heading or following a GPS but will happily run me right into a thunderstorm. Similarly, an "autopilot" in a car should require the driver to keep driving and basically just make it easier/safer for the driver to do so. Trying to take the driver out entirely isn't technologically feasible or practical IMO.

I also agree about the radar. I like Musk and he's a smart guy, but he's not a perfect engineer/designer. The Boring Company tunnel seems particularly badly designed/implemented.

Ian Riley, PE, SE
Professional Engineer (ME, NH, VT, CT, MA, FL) Structural Engineer (IL, HI)
 
I think that we should be very careful automating a vehicle. For one, it will be very hard to design it in such a way that it will avoid harm, as long as all kinds of unexpected situations can arise, either from the driving environment or from other, not always fully attentive, drivers.

Some forms of automation can be quite useful, like an automatic gearbox or automated lighting and windshield wipers. Others not so much, because they let the driver direct his attention to things other than driving. Even navigation can have that effect - I noticed that when driving purely on navigation instructions I sometimes have no idea how I got to where I am, because I more or less just followed the instructions. Still, navigation can of course be very useful, but it lets you drive with less attention than you should.

Systems may improve over the years, but as long as traffic consists of both automated and human-driven vehicles, pedestrians, cyclists, road-crossing dogs, etc., it will be very hard to make it perfect under all circumstances.

Even trains that are more or less automated require a driver who needs to be constantly attentive and to more or less permanently demonstrate that he is fit and capable of controlling the vehicle.
 
People are missing the whole point of an autonomous car. An autopilot in an airplane doesn't make it so I don't have to fly the plane; it makes it so I can focus on other things, reducing task saturation, or it helps remove tedious operation, allowing one to stay focused (same reason we have cruise control).

But assistance is not what the car manufacturers are shooting for as the end game. They are trying to achieve full autonomous operation with zero human intervention required.
 
They are trying to achieve full autonomous operation with zero human intervention required.

Sure, but given the many incidents, even across unrelated industries, we're not hiring the right engineers with the tribal knowledge necessary to even achieve what was already achieved in the 1990s; we've obviously got a bunch of image-processing whizzes who don't know the first thing about tracking objects.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
They're rediscovering that A.I. is hard. What few seem to realize is that A.I. outdoors is even harder.

The issue with being outdoors is the unlimited breadth of randomness and complexity. And such extremes are a common daily occurrence.

I suspect that if an unexpected obstacle such as an escaped-from-zoo rhinoceros were standing in the middle of the road, these idiot vehicles would crash right into it. Because "it wasn't in the database" or other such nonsense.

If it's not an escaped rhinoceros, it's heavy rain, or fog, or a cross-traffic truck, or a barrier that wasn't there yesterday, or construction, or children in yellow raincoats holding red and green umbrellas.

Humans can figure out the exceptions in a couple of seconds, while A.I. would be endlessly perplexed...

[Image]


Basic navigation is, by comparison, essentially trivial. Accomplishing it safely is what's very hard.

It's easy to make comparisons with human drivers, including bad drivers and drunks, to make a valid point about averages. But the ruinous concentration of liability is something that few seem to be contemplating. Just that might need 10-15 years to sort out.
 
Any comparison to aviation autopilots is especially misleading in that these are not used to operate in the kind of close proximity to stationary objects and other vehicles in which road vehicles are, and always will be, operated.

You don't taxi on autopilot.

Regards,

Mike

The problem with sloppy work is that the supply FAR EXCEEDS the demand
 
VE1BLL-
Your picture above is intriguing. It suggests to me that someone will probably one day create some type of visual obstacle, just like the one you show, with the intent of conducting some malfeasance, just because they think it's fun. Whether it causes a traffic jam, or maybe multiple deaths, might not matter much to them. It would be a little bit like stealing a stop sign.

Brad Waybright

It's all okay as long as it's okay.
 
But I don't think this is even at the level of the AI, which is hard; we're talking just basic common-sense obstacle avoidance, which was in better shape when I was in the field in 1994. Even dumb-as-a-rock target trackers running on quad C30s could coast a target on break-lock, because there was a teensy, rudimentary track-history calculation that told the tracker a target was moving at a certain speed across the scene, and that if it disappeared, the tracker should try to maintain the same trajectory to attempt to re-acquire the target. This is contrasted with the Uber accident where the pedestrian was clearly detected by the sensor, processed and ignored, processed and ignored, processed and uh-oh, there's an object in my path!!! This was definitely stupider than the tracker we had 25 years ago that had no AI. Likewise, it's clear that the Tesla detected the truck and ignored it, detected the truck and ignored it, detected an overhead structure and ignored it.
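A rough sketch of what "coasting a target on break-lock" means in practice, using invented names rather than the actual tracker code: when a detection drops out, keep propagating the last state at constant velocity for a few frames instead of forgetting the object:

```python
# Sketch of "coasting" a track through dropped detections (generic
# technique with invented names, not the actual C30 tracker code).

MAX_COAST_FRAMES = 10  # how long to trust the dead-reckoned position

class Track:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y      # last known position
        self.vx, self.vy = vx, vy  # last estimated velocity
        self.coasted = 0           # consecutive frames without a detection

    def update(self, detection, dt):
        """detection is an (x, y) tuple, or None when the sensor lost lock."""
        if detection is not None:
            x, y = detection
            self.vx, self.vy = (x - self.x) / dt, (y - self.y) / dt
            self.x, self.y = x, y
            self.coasted = 0
            return True
        # No detection this frame: dead-reckon along the last trajectory so
        # the object still exists for collision checks and re-acquisition.
        self.coasted += 1
        if self.coasted > MAX_COAST_FRAMES:
            return False  # track has gone stale; drop it
        self.x += self.vx * dt
        self.y += self.vy * dt
        return True
```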

What I'm getting at is that the front-end processing isn't even passing relevant data to an AI to process; it's just throwing stuff away, as if moving objects aren't relevant, regardless of whether there's an intercept trajectory or not. I don't care if it classifies it as a rhino or an oddly-shaped bicycle; if it's on a collision course, the AI should be sounding alarms after the 75% confidence level is reached.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
Re. Seeing obstacles.

I explicitly advised my son to use the logic of looking for empty space (e.g. an empty road).

Drivers that are using the inverse logic, i.e. 'watching for other cars', are the sort that fail to notice motorcycles, bicycles, and other exceptions.

They're also the sort that'll make bad decisions when intersections are surrounded by tall banks of snow, i.e. "I didn't see the other vehicle." Because their lane was hemmed in by huge mounds of snow, they saw no reason not to pull out, so they incautiously pulled out.

Same thing for pile-ups in fog.

These are the worst drivers, at most 10% of the total. Most aren't this thick.

Based on some of the accidents with various self-driving cars driving into obstacles that caused them confusion (trucks, barriers, people with bicycles), it seems clear that they're using something related to the suboptimal inverse logic that I've described above.

Otherwise they wouldn't drive into a non-empty road.
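Expressed as a hypothetical occupancy check (illustrative only, with invented names), the difference between the two logics is whether unknown or occluded space counts as drivable:

```python
# Hypothetical contrast between "watch for cars" and "look for empty
# space" logic. Not any manufacturer's code; purely illustrative.
from enum import Enum

class Cell(Enum):
    FREE = 0       # sensor confirmed empty
    OCCUPIED = 1   # sensor confirmed an object (car, bike, barrier, ...)
    UNKNOWN = 2    # occluded, out of range, low confidence, fog, snowbank

def may_enter_watch_for_cars(cell: Cell) -> bool:
    # Permissive logic: go unless something was positively recognized.
    # Unknown space gets treated as drivable -- the failure mode above.
    return cell != Cell.OCCUPIED

def may_enter_look_for_space(cell: Cell) -> bool:
    # Conservative logic: go only into space confirmed to be empty.
    return cell == Cell.FREE

# An occluded intersection (UNKNOWN) is "fine" under the first rule
# and correctly blocked under the second:
assert may_enter_watch_for_cars(Cell.UNKNOWN) is True
assert may_enter_look_for_space(Cell.UNKNOWN) is False
```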

 
This one came under the heading,
'In Vancouver they have this'.

Might be a useful approach to effectively block menacing autonomous vehicles from neighbourhoods.

[Image]
 