Boeing 737 Max8 Aircraft Crashes and Investigations [Part 2]


Alistair_Heaton (Mechanical)
This thread is a continuation of:

thread815-445840


****************************
Another 737 Max has crashed during departure in Ethiopia.

Note that the altitude data in the picture initially reads 0 on the ground, then once airborne shows GPS altitude above MSL. The airport is at an extremely high elevation.

The debris field is extremely compact and the fuel burned; they reckon the aircraft was doing 400+ knots when it hit the ground.

Here is the Flightradar24 data pulled from their local site.

It's already being discussed whether this was another AoA issue with the MCAS stall-protection system.

I will let you make your own conclusions.


The Mentour Pilot video has been taken down because there is still a degree of speculation on the subject.

Regards
Ashtree
"Any water can be made potable if you filter it through enough money"
 
I agree.

That answered one of my questions, i.e. why no more MCAS commands? Apparently, so long as the pilot doesn't trim back up, it more or less goes to sleep. Those two little trim-up commands at high speed may have felt too harsh, but they kicked MCAS back into life.

Also, that AND command at the end was more violent due to the higher speed, and it couldn't be countered by column pressure alone like it was earlier in the flight.

Would continuous manual trim-up at that point have saved the plane? They were probably doomed by then, but maybe it needs to be added to the armoury of actions? A rough sketch of the go-to-sleep logic, as I understand it, follows.
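Purely as a discussion aid, here is a minimal Python sketch of that go-to-sleep / wake-up behaviour as I understand it from the public reporting. This is not Boeing's implementation: the trigger threshold, the one-shot 2.5-degree increment, and all of the names are assumptions for illustration (the real system meters the trim out over several seconds rather than in a single step).

# Sketch of the reported MCAS re-arming behaviour: after one nose-down
# increment it stays dormant until the pilot makes an electric trim
# input, which re-arms it for another full increment.

AND_INCREMENT_DEG = 2.5   # reported maximum stabiliser command per activation
AOA_THRESHOLD_DEG = 14.0  # assumed trigger threshold, illustrative only

class McasSketch:
    def __init__(self):
        self.dormant = False  # set once an increment has been commanded

    def update(self, aoa_deg, flaps_up, autopilot_off, pilot_electric_trim):
        if pilot_electric_trim:
            self.dormant = False  # pilot trim input re-arms MCAS
            return 0.0            # pilot's switches have priority while pressed
        if self.dormant or not (flaps_up and autopilot_off):
            return 0.0
        if aoa_deg > AOA_THRESHOLD_DEG:
            self.dormant = True        # go to sleep until the next trim input
            return -AND_INCREMENT_DEG  # nose-down stabiliser command
        return 0.0

mcas = McasSketch()
print(mcas.update(74.5, True, True, False))  # -2.5: failed-high vane triggers it
print(mcas.update(74.5, True, True, False))  #  0.0: dormant, hence "no more commands"
print(mcas.update(74.5, True, True, True))   #  0.0: a blip of pilot trim-up...
print(mcas.update(74.5, True, True, False))  # -2.5: ...kicks it back into life

With a vane failed high, each pilot trim-up input re-arms another full nose-down increment, which would produce the repeating sawtooth pattern described in the traces.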

Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
 
Would throttling back and putting the flaps down have made the plane more controllable for the pilots?
 
Even if it had, how were the pilots in question to have known to do that, or been able to figure it out in a couple of minutes?

At the time, the second crash hadn't happened yet (they were it), and Boeing was still trying to bury MCAS as a footnote in the flight control systems. The logic behind its operation was opaque (and still is rather opaque). The instructions for pilots just addressed trim runaway, not specifically MCAS going crazy.
 
Pete K's link above (and included again here) should be required reading for people who are trying to understand this. It's very rich with detail.
 
For some reason this whole MCAS debacle seems reminiscent of the several fatal crashes that have occurred with autonomous vehicles in control (albeit with human oversight that proved to be inadequate in the event).
Not from a control logic aspect exactly, but rather from the notion that systems of arbitrary complexity can be comprehended sufficiently by semi-independent modules of software and hardware, developed by semi-independent teams, and then launched into the real world of infinite possibilities, without making infrequent but egregious errors that an appropriately trained, competent human, given the same task and responsibility, could be expected to avoid.
There are many layers to this skepticism. For instance, by the authority it is granted, MCAS can pitch the aircraft down based on its own narrow logic and limited sensor/data array. A human pilot would only take such action based on a comprehensive evaluation of all available information, which surely is greater than that provided to the MCAS logic.
Putting the above another way: what happened to system engineering? I'm becoming increasingly disappointed learning about latter-day failures that should not occur in a system that is properly engineered at the system level.
I hope the answer is not that systems have become too complex for comprehensive analysis. My rebuttal in that case is: does that mean they should be let loose in the world before they are validated to be safe, relative to the systems they are supplanting and replacing?

"Schiefgehen wird, was schiefgehen kann" - das Murphygesetz
 
hemi said:
Putting the above another way: what happened to system engineering? I'm becoming increasingly disappointed learning about latter-day failures that should not occur in a system that is properly engineered at the system level. I hope the answer is not that systems have become too complex for comprehensive analysis.

Yes, this is a failure of systems engineering, but only in the sense of failing to apply the known and well-proven principles to the changes in the flight control system; that follow-through doesn't seem to have happened on the Max.
Refer to last month's Seattle Times reports: the original MCAS design included only a very small, one-time adjustment to trim, but the system was later expanded in scope after flight test results. The failure, under those conditions, lies in not returning to the original analysis and following through the consequences of those changes. I believe that process was not done; hence Boeing did not discover that it had made a safe system unsafe, even though they had already done the analysis which would have revealed the fact, had they simply updated it after the flight test changes.

No one believes the theory except the one who developed it. Everyone believes the experiment except the one who ran it.
STF
 
Hemi is referring to a much wider scope than just this latest example. I've noticed the same thing, and so I agree fully.

It's worth noting that another major aircraft manufacturer has provided many of the 'case-study worthy' examples of this flavour of aircraft incident over the decades, where subtle design or coding errors, or trivially blocked or failed sensors, escalate.

There seems to be something missing in the synthesis, or integration at the highest level, within the System Design process.

And no, I can't put my finger on it.

But, based on some exposure to DO-178, I suspect that the solution for system software is contradictory to that process; at least at the highest level.

There's a semi-valid argument that such designs help to avoid too-frequent human error, and are thus competitive in terms of overall safety. This argument fails to address the 'concentration of liability' that will surely result.

 
At the end of the Peter Lemme post there is a link which gives a very interesting insight into the possible human factors for the pilots of the jet. Well worth a read IMHO. You need to skip down a bit, past the data section.

The key for him is the very fast speed at that altitude and the sensitivity of the aircraft to trim action.

On the final AND, the plane went into negative-G territory, so I'm not surprised they couldn't recover from that.

Pilot action


Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
 
The human factor has been mentioned several times.
What about the human factor as it applies to the management-engineering interface?
Did a sales-driven management philosophy put undue pressure on engineering to meet unrealistic goals?
Management demands:
Install larger, more efficient engines.
Maintain type approval.
No simulator training.
If these demands are unreasonable from an engineering viewpoint, and if this is the direction that Boeing management has been going, is it possible that the best engineers have voted with their feet and are now working elsewhere?
I'd like to be a fly on the wall in the Boeing board of directors meeting when the overall picture is discussed.

Bill
--------------------
"Why not the best?"
Jimmy Carter
 
"Putting the above another way, what happened to system engineering?"

It's way more than just that; Boeing has demonstrated on a number of programs that CMMI level 3 and higher is not sufficient to fully solve a problem. That's because there need to be solid engineers with EXPERIENCE and an understanding of the basic problems. The Uber collision is an excellent example of a lack of tribal knowledge and understanding of a basic problem; the software detected and tracked an object moving on an intercept trajectory, and yet did nothing until the object (a pedestrian) was in the physical path of the car.

On a broader scale, there used to be a Zenger-Miller management class that insisted a manager needed to know nothing about the technical aspects of the jobs of the people they managed; likewise, there's a tendency to think that you can get a bunch of image-processing programmers to design a collision avoidance system.

In this 737 Max case, the lack of rejection of bad sensor data is a major failure. A systems engineer would have to actually know something about flying to know to allocate or derive a requirement to gracefully degrade through a sensor malfunction; a sketch of the sort of cross-check I mean follows.
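Purely illustrative and not the certified design: rejecting bad data could be as simple as cross-checking the two AoA vanes and withholding automatic trim authority when they disagree. The 5.5-degree threshold echoes the figure reported for the revised MCAS; the function and values are otherwise assumptions.

# Illustrative cross-check: use both AoA vanes and degrade gracefully
# on disagreement instead of acting on a single bad sensor.

AOA_DISAGREE_LIMIT_DEG = 5.5  # threshold reported for the revised MCAS

def validated_aoa(left_deg: float, right_deg: float):
    """Return a usable AoA value, or None if the vanes disagree.

    None tells the caller to inhibit automatic trim and annunciate the
    fault to the crew rather than act on bad data.
    """
    if abs(left_deg - right_deg) > AOA_DISAGREE_LIMIT_DEG:
        return None  # disagreement: no automatic authority
    return (left_deg + right_deg) / 2.0  # average of two healthy vanes

print(validated_aoa(15.3, 74.5))  # None: one vane failed high, inhibit
print(validated_aoa(15.3, 14.9))  # 15.1: sensors agree, value usable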

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
IRStuff has a good point. Perhaps the MCAS software should have been designed to ignore any AOA rate of change above a reasonable limit. After all, the aircraft can only change angle of attack up to some specific maximum rate while still under control; any rate faster than that would, by definition, be an upset or out-of-control condition. Any rate of change that indicates an out-of-control condition requires that the flight crew have full freedom to decide for themselves which flight controls should be exercised to fly out of the upset.

Any rate of change beyond that should be ignored, or at least smoothed, for two reasons (a rough sketch follows the list).

1) If the rate-of-change signal is valid, it is a hazardous departure (upset) from stable flight, and the pilots are in a better position than the software to evaluate all factors before choosing a course of action. Trained human beings excel at reasonableness checks. Software does not typically simulate human capability so well in this area, because the software will never have ALL the sensory input or ALL the experience a trained human has. Example: a human pilot observing a moving indicator showing the AOA varying wildly, while seeing absolutely no corresponding variations in any other performance instruments, would instantly ignore the erratic AOA indicator.

2) If the rate-of-change signal is not valid, then it should not be allowed to affect the flight controls in any case.
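A minimal sketch of that rate check, under an assumed 10 deg/s plausibility limit; the limit, sample rate, and names are all illustrative rather than taken from any certified design.

# AoA rate-of-change plausibility filter: if the vane moves faster
# than the airframe physically can, latch a fault and stop feeding
# the value to any automatic trim function.

MAX_PLAUSIBLE_AOA_RATE = 10.0  # deg/s, assumed airframe limit

class AoaRateMonitor:
    def __init__(self, dt: float):
        self.dt = dt           # sample interval, seconds
        self.last = None       # previous accepted reading
        self.faulted = False   # latched instrument-error flag

    def filter(self, aoa_deg: float):
        """Return the reading if plausible, else None (fault latched)."""
        if self.faulted:
            return None
        if self.last is not None:
            rate = abs(aoa_deg - self.last) / self.dt
            if rate > MAX_PLAUSIBLE_AOA_RATE:
                self.faulted = True  # physically impossible jump
                return None
        self.last = aoa_deg
        return aoa_deg

mon = AoaRateMonitor(dt=0.05)  # 20 Hz sampling, assumed
print(mon.filter(14.0))  # 14.0: accepted
print(mon.filter(74.5))  # None: a 1210 deg/s jump is not flyable

Latching the fault rather than merely smoothing matches point 2: once the signal is known to be bad, nothing downstream should act on it.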
 
Aside from the assumed software issues, I still don't understand how Boeing's simulator never encountered something resembling the accidents. Did they simulate a strike on the AoA vane resulting in mismatched sensor data? Or was the simulator incapable of accurately simulating MCAS (and this was only discovered after the accidents)?

For a system that actively controls flight characteristics and fights pilot inputs, that seems like a major issue.

Is there any legitimate reason (aside from wanting to minimize changes to keep type certifications) to bury/limit pilot knowledge of such an important system? I would've thought (perhaps naively?) that pilots would be extensively trained on how to handle particular control system failures. With a plane featuring a brand new control system...I don't understand the lack of transparency.
 
"don't understand how Boeing's simulator never encountered something resembling the accidents. "

Testing a complex system, short of brute-force enumeration of every failure combination, is an art and a time-consuming exercise. The Intel Pentium FDIV bug could only be found with very specific tests; without the accidental finding by someone specifically using the full resolution of the math unit, it could have taken months, if not years, of testing to find.

Likewise, in both accidents, while the AoA sensors clearly had some impact, there may still have been other contributing factors that caused MCAS to act up, factors that did not occur in over 8,000 other takeoffs, since AoA failures appear to be a pretty common occurrence.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
waross,

I think you have hit the nail here very well.

It is pretty clear that the desires of management for these outcomes, all in as short a time as possible, have led us to where we are. It will be very interesting to see whether aviation bodies other than the FAA will accept the latest modification, or instead go back to the beginning, more critically investigate the changes from NG to MAX, and maybe determine that it is not the same aircraft and that additional training is required. It would be interesting to see this 45-minute iPad "training" to see what it mentions about the handling characteristics and the stab cutout switches.

The question is whether the same lack of a feedback loop between the original MCAS design intent (a small 0.6-degree AND trim command, applied only once) and what it became after flight testing (2.5 degrees, applied multiple times) also occurred on other systems.

I think it was sparweb who said earlier in this thread that there is a constant fine line / fuzzy boundary between what is "new" but still the "same as before". It looks to most people, I think, that Boeing has stepped over that line; the issue now for other regulators will be where else they did this, and how critically those changes have been reviewed for unintended but fatal consequences.

Certainly, the instrumentation I deal with has out-of-range errors which disable executive action based on the affected signal. Also, a rate of change which is physically impossible should flag an instrument error instead of feeding garbage directly into the control system. A tiny sketch of such a validity gate follows.
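In process-control terms that is just a validity gate. Here is a tiny Python sketch with assumed limits; the vane span is invented for illustration and is not from any avionics spec.

# Illustrative out-of-range gate, as used on industrial transmitters:
# a reading outside the sensor's physical span should latch an
# instrument error and block any automatic action driven by it.

AOA_VALID_RANGE_DEG = (-20.0, 40.0)  # assumed physical span of the vane

def in_range(aoa_deg: float) -> bool:
    lo, hi = AOA_VALID_RANGE_DEG
    return lo <= aoa_deg <= hi

def executive_action_allowed(aoa_deg: float) -> bool:
    # Out-of-range input: flag the instrument error upstream and take
    # no automatic action on the garbage value.
    return in_range(aoa_deg)

print(executive_action_allowed(15.0))  # True: normal flight value
print(executive_action_allowed(74.5))  # False: inhibit automatic trim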

Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
 
IRStuff,

That's fair. Is AoA failure really that common? I thought there was an earlier post, maybe on the Lion Air thread, saying they were typically pretty sturdy.
 
RVAmeche said:
Aside from the assumed software issues, I still don't understand how Boeing's simulator never encountered something resembling the accidents.
Google "fuzzing" as it relates to software testing/hacking... perhaps fuzzing should be a slightly more prevalent test scenario in the simulators than current...

Dan - Owner
 
Apart from simulating the actual MCAS system, there is at least one simulator in Europe that is capable of simulating recovery from a runaway trim condition caused by a failed MCAS.
I understand that part of the procedure put out by Boeing was a method to recover from the runaway trim caused by MCAS malfunctions.
That procedure has been tested on a simulator, just apparently not by Boeing.
IRStuff said:
On a broader scale, there used to be a Zenger-Miller management class that insisted a manager needed to know nothing about the technical aspects of the jobs of the people they managed
Most of us older folk can share at least one and probably more anecdotes of departments or whole companies in severe distress as a result of years of following this philosophy.
I suspect that the underlying issues are much broader than just the technical aspects of MCAS.
Let's look at the possible overall contributing factors:
FAA culture. (Possibly partly driven by underfunding.)
Boeing's management culture;
...Resisting additional training.
...Loss of corporate memory. (1982 flight manual.)
...Possible inappropriate engine selection and mounting.
...Issuance of an unworkable procedure without any verification that it would work.
...Has Boeing become a victim of a creeping Peter Principle?

Is a software kludge a valid solution to an engine size and placement that changes the flight characteristics as much as the new engines changed the 737 characteristics?
A question for the experts:
Given the altered flight characteristics, would the Max8 have passed certification even under the original rules?

Bill
--------------------
"Why not the best?"
Jimmy Carter
 