They Write the Right Stuff (1996) (fastcompany.com)
78 points by doener on June 16, 2020 | 32 comments


> How do they write the right stuff?

The article claims that it is their process, and although they are certainly extremely thorough, I don't buy it. At the very least, they cannot be compared to most developers apples to apples.

Why?

> NASA and the Lockheed Martin group agree in the most minute detail about everything the new code is supposed to do — and they commit that understanding to paper, with the kind of specificity and precision usually found in blueprints. Nothing in the specs is changed without agreement and understanding from both sides.

For almost anyone working as a developer, could you even imagine what this would be like? Specs that never change, or change only in small ways documented to the nth degree, are not just impractical for most fields of software development; working under them is such a different process that it's essentially a completely different field of work.

> Most people choose to spend their money at the wrong end of the process

> In the modern software environment, 80% of the cost of the software is spent after the software is written the first time

I would argue that this is largely the result of:

1. Incompetency as far as specifications go (suits don't know what they want, or what they are asking for)

2. Large changes in the specification (moving target)


There's great research on zero-defect software, and my favorite is a weird grey textbook by Capers Jones called "The Economics of Software Quality".

In it you'll find that this level of requirements gathering, blueprint/spec writing, and code inspection actually becomes cheaper over a relatively short horizon (months, not years).

Writing extremely detailed specifications surfaces the logical failures ahead of time and makes typing the code more of a spell-checking exercise. Because those failures show up before any code ever runs, you get to debug and fix them on paper, which ends up faster and cheaper in total than waiting for users to find the bugs.

However, I understand that many companies genuinely don't care if users find bugs, since time to market costs more than the future bug-finding. But if you already have a product in the market, and you expect to support it for a year or more, you can probably do much more detailed design and avoid a lot of pain.


This seems incredibly short-sighted.

To wit, the fact that Keller - or whoever is in that role these days - has to certify that the code and the system it runs on will work as expected speaks volumes about how serious this is.

If it doesn't, things tend to blow up, which is sure to ruin your day.

Also, I would be remiss if I didn't mention Therac-25, a medical device which killed or severely injured several patients.

https://en.wikipedia.org/wiki/Therac-25

In these cases, arguing over every change to code seems like the only way they can guarantee with any certainty that it works correctly.


Your response suggests you misunderstand the point I was making. I completely agree with you that certain niches require extreme precision for safety purposes, like the one discussed in the article and your medical-device example. What irritated me about the article is that it compares these niche fields, which have completely different requirements and incentive structures, to the software industry in general, as if that were a reasonable thing to do. Suggesting that the software field as a whole is somehow juvenile, while ignoring the vastly different incentives that developers, managers, etc. face in most fields (which necessarily tolerate far more error), is patently ridiculous.


While I don't entirely 'pay for my shoes' developing software - mostly client side web stuff - it's a part of what I do and I can understand the frustration that other parts of the field seem to have when looking at situations like this.

Namely, we're at a point where agile practices have become so common that nothing is ever 'done' and it is acceptable to ship something with errors, assuming it can and will be patched later.

Both Mom & Dad started out as COBOL programmers on punchcards, and I've heard plenty of 'war stories' about how these kinds of situations were far less common because an error meant your program simply didn't run.

Unlike, say, a web browser, which will mostly render whatever weird HTML and CSS you feed it, errors and all.

In consumer software, it's not, 'we follow orders or people die' but for people working on this, it probably feels that way if your stuff is used by enough people (iOS) or is critically important (medical billing).

If it doesn't do what it says on the tin, people are incredibly unforgiving of even the smallest mistakes.


Eh, the article is in FastCompany. The target audience may not be familiar with common software development practices. They contrasted the shuttle group process with normal processes, pointed out its hefty price tag, and explained why the government prioritizes quality over speed and cost. Aside from being an interesting article on the space program and process improvement, it was also useful for the Joe CEO who regularly complains that, let's see, 1996... Windows 95 gets its IRQs screwed up every time he adds a printer or sound card, etc. "I paid for this software, why doesn't it work?"


I guess it all depends on the type of software that one is writing. If there is a bug in NASA's software, it will cost them tons of money and might set them back months or years in schedule. It makes sense to be extremely careful. The same applies to software used in medical devices, cars, etc., where lives are at stake.

If my todo app has a bug or even goes down for a full day, the impact is not severe. I guess it is okay to change specs within reason: if a project is managed well and communication between the parties involved is great, some amount of flexibility in specs can be accommodated in most non-life-threatening projects.


I'm largely self-taught in programming, so it wasn't until much later in my career that I really grokked the value of foundational Computer Science. One day, I stumbled across this book:

http://elementsofprogramming.com

... and it was maybe the first time I understood that you very much can explicitly define the constraints, inputs, outputs, etc, of a program. For something like a NASA shuttle, it makes absolute sense that they would have an insanely granular definition of the domain / limits under which their code must operate.

Most of us deal in our day-to-day on top of a mountain of abstraction. But there are absolutely scenarios in which a formal, complete spec is a life-or-death requirement.


> Incompetency as far as specifications go

Building complex software is a learning process, and you'll find that what was specified wasn't practical, you'll find better ways to do things, etc.


It's an interesting approach to a problem that most of us don't have, and so is not applicable to the majority of people writing software.

The key is to understand the risk/reward trade-off in their software stack. For the vast majority of software that is written, no one dies if it goes wrong. In fact, for the vast majority of software, it's a minor inconvenience when a bug bites. Because of this, time to market trumps correctness. It's a totally valid trade-off, but just not very satisfying from an engineering perspective.


> How do they write the right stuff?
>
> The answer is, it’s the process. The group’s most important creation is not the perfect software they write — it’s the process they invented that writes the perfect software.
>
> It’s the process that allows them to live normal lives, to set deadlines they actually meet, to stay on budget, to deliver software that does exactly what it promises. It’s the process that defines what these coders in the flat plains of southeast suburban Houston know that everyone else in the software world is still groping for. It’s the process that offers a template for any creative enterprise that’s looking for a method to produce consistent — and consistently improving — quality.

I think the process is part of the solution here, but the team culture also needs to be strong to achieve good software.


According to the article, 260 people are writing code for systems with strict specifications that do not change much over time. That's a significant constraint on the externalities they deal with. In contrast, most commercial programmers write an order of magnitude more code for systems far less defined with more unpredictable externalities, with shorter deadlines. There is no magic in the world. Everything has a reason, and it's often not (just) due to incompetence or stupidity.


This is one of the most influential and interesting articles for me, one that I share with every developer I think I can have an interesting conversation with (and it influences some of them quite a lot)!

I remember some comments along the lines of: "Are you prepared to pay 10x for a text editor, say?" My answer: hell yeah. If I am earning thousands using this editor, why not pay a couple hundred bucks for a really well-made program with good support?

But as I understand it, there is no demand for this type of thing among general users. People are ready to pay for tons of useless features being added (just to signal that the project is progressing), but not for simpler solutions that really focus on quality, security, and consistent performance.


I think there is demand for those things, but commercial entities are poorly equipped to provide them. The entire growth mindset is antithetical to providing a stable and consistent product. Product tiers get introduced to extract more money through upselling, creating an inherently adversarial relationship between the company and end users, who end up with an intentionally crippled product. As the company gets larger, the incentive structure surrounding middle management takes over and completely wrecks the product as individuals compete to game internal metrics. Unnecessary and regressive changes make their way into updates because "product launches" look good on people's resumes. Nobody cares what changes were actually made because they aren't incentivised to. Add inevitable attempts at vendor lock-in / weaponization of the sunk-cost fallacy against customers, and it makes perfect sense that most non-crap software (that also remains non-crap as time goes on) seems to be open source.

Maybe we need a Patreon-style system for providing long-term stable funding to important open source projects. I would sleep easier knowing that the FOSS developers of the packages I use have a stable income and can continue their work.


Nicely said! I myself prefer products from small businesses: several-person organizations that build "underdo the competition" or "do one thing and do it well" software. But I agree that bureaucracy and political games inside larger organizations lead to the introduction of not-always-sensible features or redesigns.



Basically, it is possible to write largely bug-free code ... this is what it costs.


It would be very instructive to compare this with the SpaceX development process. I think their AMA has still not been completed, and I know of no other source of insight into their process. (To be clear, I also agree with the other comments: with few exceptions (flight, autonomous vehicles, medical, power), lives do not depend on our software, so this is likely not the right process for most of us.)


You can definitely increase the correctness of the final product with more and more up-front specs.

There's an understanding/level-of-detail hurdle that I've run into a lot: in typical design discussions, people tend to over-pseudocode and under-specify. If you really want a spec good enough to produce accurate estimates and avoid "whoops, this won't actually work" surprises, you need to get really detailed.

There's also a business hurdle. Most businesses don't have requirements as exact as "fly to this point in space." Often the requirements iterate with user feedback, or you try five things and see which one users like the most. So the extra time delays really add up, since there's already far more going on behind the scenes than just the final feature that launches to the general public. If you're in this discovery phase, perfect is the enemy of learning, and learning is what you need.


Check out SpaceX's Reddit AMA from last week: https://www.reddit.com/r/IAmA/comments/1853ap/we_are_spacex_...


We don't build software like a bridge; we grow it like a garden. The industrial revolution was about building machines that dominate their environment. The software revolution is about writing code that resonates with its environment. The software grows in harmony with its users and business needs. Except, of course, for special cases such as this.

"The whole approach to developing software is intentionally designed not to rely on any particular person."

Our programming culture is not ready for this. Any programmer who identifies with their craft will never want to be just another cog in the machine. A programmer could have chosen to be a doctor, lawyer, engineer, etc., but they choose to code because it is one of the ways they can "express" themselves.


> A programmer could have chosen to be a doctor, lawyer, engineer, etc. but they choose to code because it is one of the ways they can "express" themselves.

Not necessarily. I chose to code because I'm good at it and because I'm not good enough at anything else to do it as a job. Well, at least not in the same salary range.

I agree with your point otherwise but I'm not sure that people solely choose to become software developers because they wish to express themselves through their work. A lot of developers I know personally see coding as just a job. A job they like, but a job.


If you had put as much effort into law as you have into software, you would be a good lawyer and a terrible developer. It is a choice. Today your effort learning software pays better (with exceptions both ways) than law, but historically law has paid well. I decline to speculate on whether law will earn better than software in the future.

You can say the same for medicine, any other engineering field, farming, ditch digging, or any other job. Whichever you choose, some love it and for some it is just a job.


I certainly do not begrudge the effort that goes into building the shuttle software, or the process they must follow to ensure their code remains at such a high level of quality. When you have an essentially unlimited budget, you can achieve a lot with competent managers, staff, and a process that is well defined, understood, and, most importantly, enforced. I don't know the details here, but I'd be willing to bet that the cost of developing and maintaining their code greatly exceeds that of the vast majority of successful software efforts on a per-line-of-code basis.

It gets back to the old saying: cheap, fast, good. Pick two.


"An analysis accomplished after the Challenger accident showed that the IBM-developed PASS software had a latent defect rate of just 0.11 errors per 1,000 lines of code—for all intents and purposes, it was considered error-free. But this remarkable achievement did not come easily or cheap. In an industry where the average line of code cost the government (at the time of the report) approximately $50 (written, documented, and tested), the Primary Avionics System Software cost NASA slightly over $1,000 per line."

https://history.nasa.gov/sts1/pages/computer.html

I should really link to this article as well:

http://www.ganssle.com/tem/tem400.html#article2


It's in the article: $35M per year budget ($57M in 2020 dollars).


The article is from 1996, and the experts quoted describe the software industry of the time as "hunter-gatherers".

I am not sure where we are now. Was OOP the dark Middle Ages or the industrial revolution? Or will AI be that?

Would be really interested in your opinions!


I started college in 1997, so I can give some insight into what it was like in the late '90s:

- Unless you were at a larger firm or on a larger project, version control was something you had never even heard of

- A lot of the internet was still being put together

- Perl was a big deal and a LOT of it was bad, e.g. "use strict" was almost never used, so you could use variables willy-nilly

- The Joel Test was yet to be written (roughly three years out), and most shops then would have scored low on it

- The only exposure to Linux was generally in college, unless you were hardcore; otherwise, everything was Windows. E.g. I remember installing Slackware from a CD that came with a giant book and then thinking: "What do I do now?"

- Java was the new hotness, and Rutgers (my school) was the first major university to offer it for their 101 course (actually, Rutgers CS's first class was 111, because 101 was "business computing")

- There was no Google, Stack Overflow, etc. You were lucky if someone like Jeremy Zawodny happened to write a blog post about the weird Perl + MySQL bug you were running into, and/or if your local book store had Databases for Dummies

It really is amazing to me now to see how MASSIVE frameworks for JavaScript come out every year and the whole ecosystem spasms. Perl CGI was the de facto way to write web stuff for almost a decade. In fact, LAMP used to mean "Linux, Apache, MySQL, Perl" until PHP came along, and even then giant websites were still written in Perl, e.g. Amazon, IMDB, etc.


Sounds like the very opposite of Agile, with a huge emphasis on up-front requirements.


For me there is still a big open question: how much do process and people each matter for quality?

This article suggests process should be the primary focus.

Books like Peopleware suggest that people matter most.


I just right the write stuff, myself.


Such a sad writeup.

Having a journalist glorify the 10x gurus on one specific team and diss every other software developer in the world is not going to solve the problem at all.

It's a shame though - because the topic is a really important and interesting one, which deserves discussion. But not this type of dramatisation.

The most important sentence only comes at the end (and is the reason you can't trust the solutions proposed):

> And money is not the critical constraint: the group's $35 million per year budget is a trivial slice of the NASA pie, but on a dollars-per-line basis, it makes the group among the nation’s most expensive software organizations.

You work with the constraints you have. Eliminating the constraint doesn't solve the problem - it redefines it.



