Computers are physical machines that obey the laws of physics. Flipping bits at the microcontroller level can be observed quite literally as electrons being directed to specific chip pins.
With unit tests we can gain basic assurances about individual components, and with behavioral tests about how those components work together.
Engineering also has good design principles. One does not make gear teeth perfectly angular (take a look at the Antikythera Mechanism) because it leads to premature wear and poor performance. In fact, there are hundreds if not thousands of kinds of gear teeth, and interchanging them within the same application can have all kinds of long-lasting effects. Look into almost any vehicle recall in the past two decades and you'll find an edge-case bug that slipped past QA.
Not accounting for the string "null" being a valid value is bad design within the domain of software. Just as using frozen water as a bearing surface in high-speed rotating machinery (Hey! It's hard and slippery! It's perfect!) is a stupid mistake, not accounting for a valid "Bob Null" will also lead to premature failure, if not for the database then for the business.
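A minimal sketch of that failure mode (the function names and logic here are my own illustration, not from any particular system): a validator that compares against the *string* "null" instead of an actual missing value will reject a real customer named Null.

```python
def is_missing_naive(value):
    # Buggy: treats the literal string "null" as absent data,
    # so the surname "Null" gets classified as missing.
    return value is None or value.strip().lower() == "null"

def is_missing(value):
    # Correct: only an actual None or an empty string is missing;
    # the string "Null" is perfectly valid data.
    return value is None or value.strip() == ""

surname = "Null"  # a real customer, Bob Null
print(is_missing_naive(surname))  # True  -> record wrongly rejected
print(is_missing(surname))        # False -> record accepted
```

The same bug shows up anywhere a sentinel string ("null", "N/A", "undefined") is overloaded to mean "no value" in a field that can legitimately contain that string.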
We've only been at software engineering for less than a hundred years. We've been at mechanical engineering for a good 2000 (see the aforementioned Antikythera). We might need a few more years to iron out best practices as an industry.
Here's the thing: physical processes and failures tend to average out to nice smooth functions with Gaussian distributions. Each additional random variable has a minimal contribution to the average state of the system. Wear and tear tends to accumulate gradually over time until some mostly predictable breaking threshold is met.
With digital computers, however, the size of the state space that the system can occupy grows exponentially with the number of bits of state in the system, and changing a single bit can result in an explosive cascade of changes to the rest of the system[0]. Accumulated random failures of computer software very rarely lead to a nice, smooth, predictable probability distribution. Software failures are not caused by anything remotely resembling wear and tear.
[0] Please excuse and correct any inadequacies in my autodidactically acquired understanding of information complexity.
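One concrete way to see that single-bit sensitivity (my own illustration, not the parent's): flip one bit of an input and a SHA-256 digest changes almost everywhere. Digital state has no "smooth" response to small perturbations the way worn metal does.

```python
import hashlib

def bits(data: bytes) -> str:
    # Render bytes as a string of 0s and 1s for bitwise comparison.
    return "".join(f"{byte:08b}" for byte in data)

original = b"engineering"
flipped = bytes([original[0] ^ 0x01]) + original[1:]  # flip one input bit

d1 = hashlib.sha256(original).digest()
d2 = hashlib.sha256(flipped).digest()

# Count how many of the 256 output bits differ:
diff = sum(a != b for a, b in zip(bits(d1), bits(d2)))
print(diff)  # on average about half of the 256 output bits change
```

A hash function is an extreme case by design, but ordinary branchy code behaves more like this than like a gear surface: one flipped flag can route execution down an entirely different path.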
Read Feynman's analysis of the Challenger disaster if you want to see just how badly a physical engineering problem can grow from changing the properties of a single "bit": in that case, a difference in temperature of a few degrees changed the mechanical properties of a rubber O-ring.