Hacker News

> And how is that different from a mechanical system?

Most mechanical components obey underlying physical principles that have linear or quadratic approximations, at least in certain regimes of environmental and other factors. Therefore, we can model the component and we can know when we are unable to model it.

We manage overall system complexity via physical/mechanical modularization, with insulation against thermal, mechanical, chemical, and electrical coupling. By testing individual components, we gain basic assurances about overall system behavior.

Software attempts to do this with "good design principles", but the truth of the matter is that just about any software component in a typical application can completely jack up the global environment for other components, and processes can make OS and environment modifications that break other processes belonging to the same user.

Try issuing performance guarantees on an airplane whose fuel pump can set μ0 and ε0 to -1 if the ground crewman that filled the wing tanks was named "Bob Null".



Computers are physical machines that obey the laws of physics. At the microcontroller level, flipping bits can literally be observed as directing electrons to travel to specific chip pins.

With unit tests and behavioral tests, we gain basic assurances that individual components work together as a whole.

Engineering also has good design principles. One does not make gear teeth perfectly angular (take a look at the Antikythera Mechanism), because sharp teeth lead to premature wear and poor performance. In fact, there are hundreds if not thousands of kinds of gear tooth profiles, and interchanging them within the same application can have all kinds of long-lasting effects. Look into any vehicle recall from the past two decades and you'll find that nearly every one is an edge-case bug that slipped past QA.

Not accounting for "Null" being a valid string is bad design within the domain of software. Just as using frozen water as a bearing surface in high-speed rotating machinery (hey, it's hard and slippery! It's perfect!) is a stupid mistake, not accounting for valid "Bob Null"s will also lead to premature failure, if not of the database then of the business.
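A minimal sketch of that failure mode (the function names and sentinel list are hypothetical, just for illustration): a naive deserializer that overloads the literal string "Null" as a missing-value sentinel silently destroys Mr. Null's surname, while tracking "missing" out of band preserves it.

```python
# Hypothetical sketch of the "Bob Null" bug: a magic string doubling
# as a missing-value sentinel.
def parse_field(raw: str):
    # BUG: conflates the sentinel with legitimate data.
    if raw in ("Null", "NULL", ""):
        return None
    return raw

def parse_field_fixed(raw: str, missing: bool):
    # Fix: represent "missing" out of band, never as a magic string.
    return None if missing else raw

surname = "Bob Null".split()[-1]
print(parse_field(surname))                   # None -- surname silently lost
print(parse_field_fixed(surname, missing=False))  # Null -- preserved
```

The fix is the general principle: in-band sentinels are exactly the kind of cross-component coupling the parent comment complains about.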

We've only been at software engineering for less than a hundred years. We've been at mechanical engineering for a good 2000 (see the aforementioned Antikythera). We might need a few more years to iron out best practices as an industry.


Here's the thing: physical processes and failures tend to average out to nice smooth functions with Gaussian distributions. Each additional random variable has a minimal contribution to the average state of the system. Wear and tear tends to accumulate gradually over time until some mostly predictable breaking threshold is met.
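A quick stdlib simulation of that averaging-out claim, assuming each perturbation is an independent uniform "noise" contribution: the mean of many such contributions clusters tightly around zero, per the central limit theorem.

```python
import random
import statistics

# Each sample is the mean of 100 independent uniform(-1, 1) perturbations;
# individually noisy, collectively they average out.
random.seed(0)
samples = [statistics.fmean(random.uniform(-1, 1) for _ in range(100))
           for _ in range(10_000)]

print(round(statistics.fmean(samples), 3))   # close to 0
print(round(statistics.stdev(samples), 3))   # ~ 1/sqrt(3*100) ≈ 0.058
```

Each additional random variable shrinks the spread of the aggregate, which is why physical wear is (mostly) predictable.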

With digital computers, however, the size of the state space that the system can occupy grows exponentially with the number of bits of state in the system, and changing a single bit can result in an explosive cascade of changes to the rest of the system[0]. Accumulated random failures of computer software very rarely lead to a nice, smooth, predictable probability distribution. Software failures are not caused by anything remotely resembling wear and tear.

[0] Please excuse and correct any inadequacies in my autodidactically acquired understanding of information complexity.
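The single-bit cascade can be illustrated with the avalanche effect of a cryptographic hash (used here only as a stand-in for a chaotic digital system): flip one input bit and roughly half the 256 output bits change.

```python
import hashlib

# Flip a single bit of the input and count differing output bits.
msg = bytearray(b"the state of the system")
d1 = hashlib.sha256(bytes(msg)).digest()
msg[0] ^= 0x01                      # flip one bit
d2 = hashlib.sha256(bytes(msg)).digest()

changed = sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))
print(changed, "of 256 output bits changed")  # typically ~128
```

No physical component behaves like this: a one-part-in-10^77 change to the input rewrites the entire output.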


Read Feynman's analysis of the Challenger disaster if you want to see just how badly a physical engineering problem can blow up from changing the properties of a single "bit": in that case, a difference in temperature of a few degrees changing the mechanical properties of a rubber O-ring.



