I am still convinced there is a problem with the plane, based on simple probabilities. These are brand-new planes, so they should not yet be in the wear-out phase where failures multiply, and maintenance should be irrelevant unless the planes were delivered faulty. It is also a very small fleet compared to other models. Two extremely rare events (plane crashes are rare in general), within months of each other, involving the same model in the same phase of flight: what is the probability that this is unrelated to the plane? It has to be vanishingly small.
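To put rough numbers on that intuition, here is a minimal back-of-envelope sketch. The baseline accident rate and the number of departures below are illustrative assumptions, not actual fleet figures; the point is only that a Poisson model for "ordinary" accidents makes two crashes in such a small, young fleet look very unlikely as a coincidence.

    # Back-of-envelope: chance of >= 2 fatal accidents in a small, new fleet
    # by pure coincidence, modeled as a Poisson process.
    # ALL NUMBERS ARE ILLUSTRATIVE ASSUMPTIONS, not real fleet data.
    import math

    baseline_rate = 2e-7   # assumed fatal accidents per departure, industry-wide order of magnitude
    departures = 100_000   # assumed MAX departures in the window between the two crashes

    lam = baseline_rate * departures                  # expected accidents if the MAX were "average"
    p_two_or_more = 1 - math.exp(-lam) * (1 + lam)    # P(X >= 2) for X ~ Poisson(lam)

    print(f"expected accidents: {lam:.4f}")
    print(f"P(>=2 by coincidence): {p_two_or_more:.2e}")

With those assumed inputs the probability of two or more accidents by coincidence comes out around 2e-4, which is the sense in which "it has to be vanishingly small" above.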
Now, it may be a lack of pilot training, but by Boeing's own account a 737 pilot should be qualified to fly the MAX. Then how can a supposedly small, insignificant feature result in the crash of the plane? It would have to be not so insignificant after all.
Speaking as an engineer: we sometimes underestimate the relevance of a relatively minor system. We can't imagine all possible scenarios, and that's why every single decision involving systems responsible for human lives has a lot of people involved: we rely on someone, at some point, imagining the scenario everyone else didn't. This reduces the odds of letting something important slip through, but it isn't perfect and can't hope to be (and sometimes we get grim reminders that we aren't all-knowing).
We should not rush to conclusions, since one of the investigations is in its infancy, but from the pilot reports that have been accumulating since these models entered service, it looks like the impact on crews of the differences between models in the 737 family was underestimated, and training for them was not as thorough as it should have been.
Quick question, and this is meant sincerely and not flippantly: have you worked as an aviation or avionics engineer?
The focus on error conditions during the development of these systems is truly impressive. I've spent plenty of hours writing requirements, and writing and running tests, for cockpit software, and the sheer variety of error conditions tested is enormous. Not that they catch everything, but the idea that a single AOA sensor could cause MCAS to fail seems like exactly the kind of scenario that would have been analyzed and discussed by the engineers working on it.
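To give a flavor of the kind of check I mean, here is a toy sketch of a sensor "disagree" monitor: cross-checking redundant inputs and inhibiting automation when they diverge. The names and threshold are invented for illustration; this is not the actual MCAS or AOA logic, just the generic pattern.

    # Toy sketch of a sensor disagree monitor. NOT real MCAS/AOA logic;
    # the threshold and function names are invented for illustration only.

    DISAGREE_THRESHOLD_DEG = 5.5   # assumed allowable difference between the two vanes

    def aoa_valid(aoa_left_deg: float, aoa_right_deg: float) -> bool:
        """True only if both angle-of-attack readings agree within the threshold."""
        return abs(aoa_left_deg - aoa_right_deg) <= DISAGREE_THRESHOLD_DEG

    def command_allowed(aoa_left_deg: float, aoa_right_deg: float) -> bool:
        """Gate an automatic pitch command on agreeing sensor data; inhibit it otherwise."""
        if not aoa_valid(aoa_left_deg, aoa_right_deg):
            return False   # disagreement -> inhibit automation, hand control back to the crew
        return True

    # Example: one vane stuck at a high value while the other reads normally
    print(command_allowed(74.5, 15.3))   # False: the monitor inhibits the command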
No. I worked in other embedded control settings, but nothing as sensitive.
I understand what you're saying: an AOA sensor malfunction or misread should never cause MCAS to fail, and any such situation (misread, malfunction, excessive actuator response, etc.) should have been spotted by someone among the many people involved in its design, all the way from sensor to servo, well before any passengers were carried in the aircraft. But, if we assume good faith, everything indicates otherwise. It's very likely all the flaws will be identified and corrected, but reality has a way of stress-testing those vanishingly small probabilities.
It's not impossible that a lot of very smart people dedicated to thinking about corner cases all day long will eventually let something important slip through. We are all human and, in the end, everything comes down to human error.