I'd agree that it's totally reasonable to 'hack together' a quick prototype with 'duct-tape and cardboard' solutions -- not just for startups, but even in full-scale engineering projects as the first pass, assuming you intend to throw it all away and rewrite once your proof-of-concept does its job.
The problem is that these hacky, unstable, unreliable solutions sometimes never get thrown out, and sometimes even end up more reliable (via the testing and incremental improvement methods you mention) than a complete rewrite would be -- not only because writing reliable software is hard and takes time (beware the sunk cost fallacy here!), but because sometimes even the bugs become relied upon by other libraries/applications (in which case you have painted yourself into a REALLY bad corner).
It's a balance, of course. You can't always have engineering perfection top-to-bottom (though I would argue that for platform code, it has to be pretty close, depending on how many people depend on your platform); if you shoot too high, you may never get anything done. But if you shoot too low, you may never stop drowning in bugs, crashes, instability, and general customer unhappiness, no matter how many problem-solver contractors you hire to fix your dumpster fire of code.
So again: Yes, it's a balance. But I tend to think our industry needs movement in the "more reliability" direction, not vice versa.
This is simply not my experience with exceptions. Exceptions are frequently thrown and almost never need to be caught, and the result is easy to reason about.
My main use case for exceptions is in server code with transactional semantics. Exceptions are a signal to roll everything back. That means only things that need rolling back need to pay much attention to exceptions, which is usually the top level in jobs, and whatever the transaction idiom is in the common library. There is very little call to handle exceptions in any other case.
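Roughly, the shape of that top level (with a made-up Transaction type standing in for whatever the common library actually provides) is just this: job code throws freely and catches nothing, and the runner rolls back and reports. A sketch, not any particular framework:

    // Sketch only: Transaction stands in for the common library's
    // transaction idiom. Everything below the top level throws freely
    // and catches nothing.
    #include <exception>
    #include <functional>
    #include <iostream>
    #include <stdexcept>

    struct Transaction {
        void commit()   { std::cout << "committed\n"; }
        void rollback() { std::cout << "rolled back\n"; }
    };

    void run_job(const std::function<void(Transaction&)>& job) {
        Transaction tx;
        try {
            job(tx);          // may throw anywhere, at any depth
            tx.commit();
        } catch (const std::exception& e) {
            tx.rollback();    // the one place that cares about the exception
            std::cerr << "job failed: " << e.what() << '\n';
        }
    }

    int main() {
        run_job([](Transaction&) { /* normal work */ });
        run_job([](Transaction&) { throw std::runtime_error("constraint violated"); });
    }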
GC languages make safe rollback from exceptions much easier. C++ in particular with exceptions enabled has horrible composition effects with other features, like copy constructors and assignment operators, because exceptions can start cropping up unavoidably in operations where it's very inconvenient to safely maintain invariants during rollback.
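The usual C++ workaround for that particular composition problem is copy-and-swap: do all the throwing work on a temporary, then publish it with a no-throw swap, so the object is either fully updated or untouched. A sketch with a made-up Widget type:

    // Sketch: Widget is made up. A naive operator= that mutates *this
    // before a throwing allocation leaves a half-updated object if
    // std::bad_alloc (or a throwing copy ctor) fires partway through.
    // Copy-and-swap keeps the invariant intact.
    #include <string>
    #include <utility>
    #include <vector>

    struct Widget {
        std::vector<std::string> items;

        Widget() = default;
        Widget(const Widget&) = default;

        Widget& operator=(const Widget& other) {
            Widget tmp(other);            // may throw; *this untouched so far
            std::swap(items, tmp.items);  // no-throw: the commit point
            return *this;
        }
    };

    int main() {
        Widget a, b;
        a = b;   // either fully assigned or left exactly as it was
    }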
Mutable state is your enemy. If you don't have a transaction abstraction for your state mutation, then your life will be much more interesting. The answer isn't to give up on exceptions, though, because the irreducible complexity isn't due to exceptions; it's due to maintaining invariants after an error state has been detected. That remains the case whether you're using exceptions, error codes, Result or Either monadic types, or whatever.
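To make that concrete, the same rollback obligation shows up with plain error codes. A sketch with a made-up Account type, where the second mutation can fail and the first one has to be undone by hand before returning:

    // Sketch with made-up types: the mechanism changes (codes instead of
    // exceptions), but the invariant-maintenance work does not.
    #include <string>
    #include <vector>

    enum class Err { ok, no_funds, io_failed };

    struct Account {
        long balance = 0;
        std::vector<std::string> log;
    };

    Err append_log(Account& a, const std::string& line) {
        a.log.push_back(line);            // pretend this can fail
        return Err::ok;
    }

    Err withdraw(Account& a, long amount) {
        if (a.balance < amount) return Err::no_funds;
        a.balance -= amount;                          // mutation #1
        if (append_log(a, "withdraw") != Err::ok) {   // mutation #2 can fail
            a.balance += amount;                      // manual rollback
            return Err::io_failed;
        }
        return Err::ok;
    }

    int main() {
        Account a;
        a.balance = 100;
        return withdraw(a, 50) == Err::ok ? 0 : 1;
    }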
Not sure which type of "server" you meant when you said that -- do you mean it in the narrow sense of a database server?
Behaviors similar to the above are not that infrequent, and are expected from many other servers in the wide sense: a media decoder would drop all decoding in progress and try to resync to the next access unit, a communication front-end device would reset parts of itself and start re-acquiring channels (such an exception-like reaction is even specified in some comm standards), a network processor would drop the packet and "fast-forward" to the next one. Etc.
You could argue that this still looks like server behavior loosely defined (and I agree), but a) this makes the field of application for exceptions large enough IMO, and especially b) how else would one implement all that with other mechanisms (like return codes), and for what benefit?
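For a concrete flavor of the decoder case, here is a sketch with simulated input (the CorruptUnit type and decode function are made up): anything inside decode() may throw, and the loop just drops the current access unit and resyncs at the next one.

    // Sketch: anything inside decode() may throw CorruptUnit without
    // cleaning up the half-decoded frame itself; the loop drops the unit
    // and resynchronizes at the next access unit.
    #include <iostream>
    #include <stdexcept>

    struct CorruptUnit : std::runtime_error {
        using std::runtime_error::runtime_error;
    };

    struct AccessUnit { int id = 0; };

    AccessUnit next_access_unit(int& cursor) { return AccessUnit{cursor++}; }

    void decode(const AccessUnit& au) {
        if (au.id % 7 == 3)                    // simulate a corrupt unit
            throw CorruptUnit("bad bitstream");
        std::cout << "decoded unit " << au.id << '\n';
    }

    int main() {
        int cursor = 0;
        while (cursor < 10) {
            AccessUnit au = next_access_unit(cursor);
            try {
                decode(au);
            } catch (const CorruptUnit& e) {
                std::cerr << "dropping unit " << au.id
                          << " (" << e.what() << "), resyncing\n";
            }
        }
    }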
> This is simply not my experience with exceptions. Exceptions are frequently thrown and almost never need to be caught, and the result is easy to reason about.
I write GUI apps and that is also how I use exceptions - and it works just fine. If you have an exception, the only rational thing to do most of the time is to let it bubble up to the top of the event loop, show a warning to the end user, or cleanly quit the program while making a backup of the work somewhere else.
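In sketch form, with a generic event queue standing in for whatever the toolkit actually provides, the only catch lives in the loop itself: back up the work, tell the user, keep going (or quit cleanly).

    // Sketch: handlers throw freely; the loop is the only place that
    // catches, backs up the user's work, and reports the error.
    #include <deque>
    #include <functional>
    #include <iostream>
    #include <stdexcept>

    using Event = std::function<void()>;

    void save_backup() { std::cout << "work-in-progress backed up\n"; }

    void run_event_loop(std::deque<Event>& queue) {
        while (!queue.empty()) {
            Event ev = std::move(queue.front());
            queue.pop_front();
            try {
                ev();                        // handler may throw anywhere
            } catch (const std::exception& e) {
                save_backup();               // preserve the user's work first
                std::cerr << "Unexpected error: " << e.what()
                          << " (a real app would show a dialog here)\n";
            }
        }
    }

    int main() {
        std::deque<Event> q{
            [] { std::cout << "click handled\n"; },
            [] { throw std::runtime_error("oops"); },
            [] { std::cout << "still running\n"; }
        };
        run_event_loop(q);
    }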
And this is part of why I never ever ever "write one to throw away". It's very rare that it actually gets thrown away and redone "properly".
Also I just don't want to waste my time writing something that's for sure going to be discarded. There's a middle ground between "write something held together with duct tape" and "write the most perfect-est software anyone has ever written". My goal is always that the first thing I write should be structured well enough that it can evolve and improve over time as issues are found and fixed.
Sometimes that middle ground is hard to find and I screw up, of course, but I just think writing something to throw away is a waste of time and ignores the realities of how software development actually happens in the real world.
This. Once the spaghetti code glued together to somehow work is deployed and people start using it, it's a production system and the next sprint will be full of new feature stories; nobody will green-light a complete rewrite or redesign.
And that’s how you get a culture where severe private data breaches and crashy code are the status quo :/
We can do better. Why don’t we? I guess the economic argument explains most of it. I think if more governments started fining SEVERELY for data breaches (with no excuses tolerated), we’d see a lot more people suddenly start caring about code quality :)
>We can do better. Why don’t we? I guess the economic argument explains most of it. I think if more governments started fining SEVERELY for data breaches (with no excuses tolerated), we’d see a lot more people suddenly start caring about code quality :)
Governments care about the "economic argument" even more so. They don't want to scare away tech companies.
Besides, today's governments don't protect privacy, rather the opposite.
We got a green light for a complete rewrite, but only because of licensing issues with the original code. I'm just hoping we don't fall for the second-system syndrome.
There are exceptions, of course. I have also been involved in some complete rewrites and green-field projects to replace existing solutions, but it's very rare. It happens much more often in the government sphere than in the private sector.
Which is the mistake: the throwaway should test one subsystem, or the boundary between two subsystems, and nothing more. To get tautological again, once you have a working system, you have a system.