
Boeing 737 Max8 Aircraft Crashes and Investigations [Part 2]

Status
Not open for further replies.

Alistair_Heaton

Mechanical
Nov 4, 2018
9,380
This thread is a continuation of:

thread815-445840


****************************
Another 737 max has crashed during departure in Ethiopia.

To note: the altitude data in the picture is initially ground level (0), then, once airborne, GPS altitude above MSL. The airport is at a very high elevation.

The debris field is extremely compact and the fuel burned; they reckon it was doing 400 kt plus when it hit the ground.

Here is the Flightradar24 data pulled from their local site.

It's already being discussed whether this was another AoA issue with the MCAS stall-protection system.

I will let you make your own conclusions.

[Image: Flightradar24 data plot]



 

"Was the question asked how and what effect this software fix was going to have on the big picture or was it just accepted as fixed. "

Obviously, we won't know for sure for some time, but it's also been pretty obvious that the faulty data was more of a symptom than the root problem. From a pure systems engineering perspective, we can only hope that someone realizes that arbitrarily introducing thresholds or limits on data inputs isn't necessarily going to do the job, and that a more comprehensive approach, looking holistically at the inputs and the aircraft's immediate history, is required.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
I'm struck by some similarities between these crashes and the Airbus crash of Air France 447 back in 2009. Both crashes were caused in part by sensor failures, and by the automated systems' poor management (trapping) of those errors and poor reporting of the discrepancies to the crew.
 
AF 447 was pure poor discipline and failure to carry out the published procedures for an invalid airspeed indication. There was a design fault with the pitot tube heaters which allowed them to ice up, but that was only one hole in the chain.

But after that it was pure pilot error, I am afraid to say. To be honest, it was pilot error that they were flying into a huge ITCZ cell in the first place. The accident could have been trapped before the pitots iced to the point of being useless. It could have been trapped when the crew planned their rest cycle before departure, by ensuring that the Commander was in his seat for the higher-risk period of the flight crossing the ITCZ instead of in his bunk.

But anyway, if they had set a pitch attitude and a power setting and flown the aircraft manually using the partial-panel procedure as published, they would have come out the other side. On the descent into the Azores or the Canaries, the aircraft would have defrosted at 15,000 ft, and they could have climbed back up and continued to destination if they had enough fuel left. But the Commander was in his bunk and the First Officer was calling the shots, with a cruise FO in the RHS who isn't allowed to fly the aircraft under 10,000 ft. They went into an ITCZ CB and came out the bottom instead of out the other side because of a simple airspeed indication mismatch.


At no point did the aircraft take over. It reverted to a downgraded control law when it worked out that the airspeeds were mismatched, and kicked the autopilot out to make the pilots deal with it. Everything that happened after that was pilot input.
 
I remember watching a dramatized documentary about that incident and seem to recall that there was conflict between the pilot/co-pilot and that one was secretly pulling the stick back without the knowledge or direction of the other.
 
Also, wasn't there an issue where the stall warning stopped and started due to conflicting data, so that when they came back into the "zone" the stall warning went off, confusing them? But yes, pilot error was a key issue.




Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
 
Charliealphabravo said:
I remember watching a dramatized documentary about that incident and seem to recall that there was conflict between the pilot/co-pilot and that one was secretly pulling the stick back without the knowledge or direction of the other.

AF 447 was a strange one.

The pilots had an airspeed mismatch. Autopilot kicks off, aircraft switches to an alternate control law. Pilot flying responds with nose-up inputs until the aircraft reaches MPA and eventually enters a stall. Aircraft begins descending, eventually at a rate over 10,000 fpm. PF apparently confuses stall buffet with overspeed buffet and continues nose-up inputs.

Angle of attack becomes so high, the flight controller thinks the AOA inputs are invalid. This means they are ignored, and that stall protection is turned off, so no stick shaker or stall warning. Any time PF brings the nose down, AOA returns to the range which the flight controller believes is correct, which re-activates stall protection.

This creates a situation where nose-down inputs activate the stick shaker/stall warning. That's pretty damn counterintuitive.
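The counterintuitive behavior described above can be illustrated with a toy sketch (this is not the actual Airbus logic; the 40-degree validity cutoff and 8-degree stall threshold are made-up illustrative numbers). The point is simply that rejecting "implausible" AoA readings silences the warning exactly when the aircraft is deepest in the stall, and recovering the nose brings the warning back:

```python
# Toy sketch of a stall warning with an AoA validity window.
# Thresholds are illustrative assumptions, not real Airbus values.

AOA_VALID_MAX = 40.0   # beyond this, the reading is treated as invalid
STALL_AOA = 8.0        # warn above this angle of attack

def stall_warning(aoa_deg: float) -> bool:
    """Return True if the stall warning should sound."""
    if aoa_deg > AOA_VALID_MAX:   # reading rejected as implausible...
        return False              # ...so no warning in a deep stall
    return aoa_deg > STALL_AOA

# Deep stall, AoA far beyond the validity window: warning stays silent
assert stall_warning(45.0) is False
# Nose-down recovery brings AoA back into the "valid" band: warning sounds
assert stall_warning(20.0) is True
```

So the pilot's correct nose-down input is the very thing that triggers the warning, which is the inversion described above.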

From the CVR transcript the Captain apparently understood what was happening and attempted to provide nose-down input to bring the aircraft out of the stall. The guy in the other seat, who was much less experienced, held his stick back which canceled out the captain's nose-down inputs for many minutes until the plane hit the water. This was possible because the two sidestick controls aren't physically linked the way a dual-yoke aircraft is, and there's no feedback mechanism.

There's certainly a large pilot-error component: the pilots didn't follow the airspeed-mismatch procedure, the PF didn't yield control of the aircraft to the captain on the audible call, etc. But there's a design-philosophy component as well. It seems likely that the exact same scenario with the exact same flight crew in a dual-yoke aircraft would not have resulted in a crash; the captain would have been able to easily tell that the FO was applying incorrect inputs.
 
I think the bigger-picture issue now is whether the FAA or other regulators are going to demand more pilot training or more in-depth analysis, as Boeing has now also discovered some other mysterious fault requiring software changes.

 
Did you see the part where the system failed to report a high AoA when it deemed a valid signal to be invalid?
Manslaughter charges were brought against Air France and Airbus in France in March 2011.
According to the Air France-KLM Group Consolidated Financial Statement for 2017 (page 70), as of the end of 2017 the manslaughter charges against Air France were still ongoing.

Bill
--------------------
"Why not the best?"
Jimmy Carter
 
Re AF 447

In general, if any incident investigation finds that human error is a significant contributing factor, then perhaps that aspect deserves at least one more "Why?".

Convenient Wiki ref. 5 Whys

These lines of inquiry bear fruit when they lead to changes in the system design, software, user interface, human factors, training, etc.
 
Would it be unreasonable to suggest that the new 737 Max training proposed by the manufacturer should include the scenarios of the Indonesian and Ethiopian accidents, to see if the pilots could come out alive?

We have all the black-box data, so it shouldn't be too hard to put those scenarios into the simulators.

I would feel more comfortable boarding a 737 Max in future if that training were included.
 
IMHO, installing two AoA sensors but using them one per flight leg in alternation was a colossally bad decision.

Perhaps some paper pushers thought that it provided redundancy, but it didn't; it provided a single point of failure, and additionally provided confusion that masked the failure half the time.





Mike Halloran
Stratford, CT, USA
 
"Perhaps some paper pushers thought that it provided redundancy, but it didn't; it provided a single point of failure, and additionally provided confusion that masked the failure half the time."

Two sensors can provide redundancy/cross-checking, in a correctly designed system. In both of the crashes in question, one of the two AoA sensors was working correctly, but the hardware/software ignored the contradictory data. A simple history track and cross-checking of both sensors could have potentially avoided the accidents.

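The history track and cross-check described above can be sketched in a few lines. This is a hypothetical illustration, not Boeing's logic; the 5-degree sensor-split limit and 10-degree-per-sample rate limit are made-up assumptions:

```python
# Hypothetical sketch: cross-check two AoA sensors against each other
# and against recent history before trusting either one.
# Thresholds (max_split, max_rate) are illustrative assumptions.

def check_aoa(left_deg, right_deg, prev_left_deg,
              max_split=5.0, max_rate=10.0):
    """Return (ok, reason). ok=False means the data shouldn't drive
    an automatic trim command."""
    if abs(left_deg - right_deg) > max_split:
        return False, "left/right disagreement"
    if abs(left_deg - prev_left_deg) > max_rate:
        return False, "implausible rate of change"
    return True, "ok"

# Normal flight: sensors agree and track history
assert check_aoa(2.0, 2.5, 2.1) == (True, "ok")
# One vane suddenly reading ~20 deg high: flagged, not acted on
assert check_aoa(22.0, 2.5, 2.1)[0] is False
```

Either check alone would have flagged a single vane stuck 20 degrees high; the point is that the disagreement is surfaced instead of one side being silently trusted.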
 
Whatever passed for a system (not MCAS - I mean top aircraft level system) DFMEA for the final design would be very interesting to see.

"Schiefgehen wird, was schiefgehen kann" ("Whatever can go wrong, will go wrong") - das Murphygesetz (Murphy's Law)
 
Unfortunately we are not talking about a correctly designed system.

We are talking about a horridly hacked kludge so stupendously bad that no part of the design team, from bottom to top, and especially toward the top, should be allowed to participate in design of anything remotely like an aircraft, again, ever.



 
Been looking at what other makes do with AoA sensing. I have limited it to fly-by-wire types.

E-Jets use what's called a smart-probe system and have done away with AoA vanes.

Airbus all have 3 vanes: 2 main ones at the front and one down the back.

The A220 from Bombardier has 3, all on the nose.
 
VE1BLL: Thanks for the 5 whys. I learned something.

AH! That's what I was suggesting way up this thread. Why not three on a $120M plane, and why not a voting controller that dumps the bad one? Sigh.

Keith Cress
kcress -
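The voting controller suggested above is classically done with mid-value select: take the median of three channels and flag any channel that strays too far from it. A minimal sketch, with a made-up 5-degree disagreement limit:

```python
# Sketch of three-channel mid-value-select voting.
# The 5-degree disagreement limit is an illustrative assumption.

def vote_aoa(a, b, c, limit=5.0):
    """Return (selected_value, suspect_channel_indices)."""
    readings = [a, b, c]
    selected = sorted(readings)[1]   # middle value of the three
    suspects = [i for i, r in enumerate(readings)
                if abs(r - selected) > limit]
    return selected, suspects

# One faulty channel reading ~20 deg high is outvoted and flagged
assert vote_aoa(2.0, 22.0, 2.5) == (2.5, [1])
```

A single hard-over failure is outvoted automatically; as the QF72 discussion below notes, though, voting alone doesn't protect against transient spikes on a "good" channel, which is why the filtering around the voter matters as much as the voter itself.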
 
Circa 2008, flight QF72 went "psycho" in spite of having three AoA sensors. Triple redundancy by itself can't overcome bad (or incomplete) system design.

--

It seems like there might be an arguably new (perhaps it just needs to be made more explicit) sub-field of engineering; in-between System Design, Safety Design, Human Factors, Software, and A.I.

Specifically: How can we optimise the opportunity for synergy given both automation and humans? The sum should always be greater than the parts. But right now it seems more like the weakest link wins.

Point being, either can fail, either subtly or spectacularly. That makes it a very difficult requirement.

That's where the new science is needed: a theory and a formal process to ensure optimum synergy of automation and humans. Especially in abnormal or emergency situations.

Pilot and aircraft fighting is exactly the opposite. How do we ensure that we design-out these fights? How do we ensure that the automation and humans combine in a synergistic way?

The higher supervisory level of automation will require more inputs, and much more software.

More software brings its own issues, so a new process is required. Perhaps more AI and/or Fuzzy Logic at this new supervisory level.

By way of example regarding the extra inputs: in these sorts of incidents the aircraft decides that pitching down is required. The system is seemingly oblivious to the sudden negative g, oblivious to the passengers bouncing off the ceiling, and also seemingly unaware of the pilots' inputs.

If this were more of an A.I. topic, then we could usefully compare these systems to Helen Keller (the famous deaf and blind person). The systems lack the sensory inputs that would provide an opportunity for a higher level of overall situational awareness. E.g., will a self-driving car possess enough sensory inputs to detect that it is, itself, on fire?

Can an aircraft be aware that its last action caused an unwanted result?

And how does it smoothly cooperate with the pilots, while accepting that it itself may be faulty?

It seems like this is the gap that is at the root of these and many similar incidents.

It'll probably take a decade to fill this gap.

 
Maybe there should be a well guarded button labeled "Let humans fly the plane".
 