I don't have a transcript link at hand, but as far as videos go, "Functional Core, Imperative Shell" / "Boundaries" by Gary Bernhardt is also a must-see (or must-read, hopefully).
I’ve been programming for a long time, watched this presentation several times, done a bunch of other research, and still don’t know if I understand what this presentation is about. I fear that I’ve tried to apply these simple-vs-complex principles and only made my code harder to understand. My understanding now is that complexity for every application has to live somewhere, that all the simple problems are already solved in some library (or should be), and that customers invariably request solutions to problems that require complexity by joining simple systems.
> still don’t know if I understand what this presentation is about
1. The simplicity of a system or product is not the same as the ease with which it is built.
2. Most developers, most of the time, default to optimizing for ease when building a product, even when it conflicts with simplicity.
3. Simplicity is a good proxy for reliability, maintainability, and modifiability, so if you value those a lot then you should seek simplicity over programmer convenience (in the cases where they are at odds).
If you agree with her hypothesis, it basically says that a clean design tends to feel like much more work early on. And she goes on to suggest that, early on, it's best to focus on ease and extract a simpler design later, when you have a clearer grasp of the problem domain.
Personally, if I disagree, it's because I think her axes are wrong. It's not functionality vs. time, it's cumulative effort vs. functionality. Where that distinction matters is that her graph subtly implies that you'll keep working on the software at a more-or-less steady pace, indefinitely. This suggests that there will always be a point where it's time to stop and work out a simple design. If it's effort vs. functionality, on the other hand, that leaves open the possibility that the project will be abandoned or put into maintenance mode long before you hit that design payoff threshold.
(This would also imply that, as the maintainer of a programming language ecosystem and a database product that are meant to be used over and over again, Rich Hickey is looking at a different cost/benefit equation from those of us who are working on a bunch of smaller, limited-domain tools. My own hand-coded data structures are nowhere near as thoroughly engineered as Clojure's collections API, nor should they be.)
> I fear that I’ve tried to apply these simple-vs-complex principles and only made my code harder to understand. My understanding now is that complexity for every application has to live somewhere, that all the simple problems are already solved in some library (or should be), and that customers invariably request solutions to problems that require complexity by joining simple systems.
Simplicity exists at every level in your program. It is in every choice that you make. Here's a quick example (in Rust):
fn f(i: i32) -> i32 { i }      // function
let f = |i: i32| -> i32 { i }; // closure
The closure is more complex than the function because it adds in the concept of environmental capture, even though it doesn't take advantage of it.
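To make that capture concrete, here's a minimal sketch (the `offset` variable and `g` are just illustrative names, not from the original example):

let offset = 10;
let g = |i: i32| -> i32 { i + offset }; // `g` now silently depends on its enclosing scope

The plain function can only depend on what you pass it; the closure can reach into its environment, so reading it means reading its surroundings too.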
This isn't to say you should never pick the more complex option - sometimes there is a real benefit. But it should never be your default.
You are correct in your assessment that customers typically request solutions to complex problems. This is called "inherent complexity" - the world is a complex place and we need to find a way to live in it.
The ideal, however, is to avoid adding even more complexity - incidental complexity - on top of what is truly necessary to solve the problem.
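As a rough illustration of the distinction (the counting example is mine, not from the talk): the inherent problem below is just "count the even numbers"; the first version layers shared mutable state on top of it for no benefit, which is incidental complexity.

use std::cell::RefCell;
use std::rc::Rc;

// Incidental complexity: shared mutable state where a returned value would do.
fn count_evens_stateful(items: &[i32], out: Rc<RefCell<usize>>) {
    for i in items {
        if i % 2 == 0 {
            *out.borrow_mut() += 1;
        }
    }
}

// Same inherent problem, none of the extra moving parts.
fn count_evens(items: &[i32]) -> usize {
    items.iter().filter(|&&i| i % 2 == 0).count()
}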
I think the shift in programmers' perspective on where complexity should live is very much related to the idea of "the two styles in mathematics" described in this essay on the way Grothendieck preferred to deal with complexity in his work: http://www.landsburg.com/grothendieck/mclarty1.pdf.
Rich belongs to the small class of industry speakers who are both insightful and not dull. Do yourself a favour if you haven't and indulge in the full presentation.
I still can't believe that I was actually there during that exact presentation but at the time it didn't have the impact on me that it seems to have had on HN as a whole. Maybe I should review it again, or maybe I'm just not smart enough / don't have the right mindset, IDK.
Rich Hickey seems to be a bit of a Necker cube. Some people I know and respect think he is a deep and powerful thinker. But to me his talks always seem like 90% stating the obvious, 10% unsupported assertions.
That is the key: stating the obvious is actually hard, and I think Rich does a beautiful job of translating the thoughts and feelings most programmers have into words. It gives us a way to discuss and think about things (especially design and architecture) with others. I learned that there is no such thing as "common ground" or common knowledge magically and intuitively shared by all programmers. So if this already reflects your thoughts - even better.
Yeah, I think it depends on whether you're thinking about things from a SYSTEMS perspective or a CODE perspective.
Hickey clearly thinks about things from a systems perspective, which takes a number of years to play out.
You need to live with your own decisions, over large codebases, for many years to get what he's talking about. On the other hand, in many programming jobs, you're incentivized to ship it, and throw it over the wall, let the ops people paper over your bad decisions, etc. (whether you actually do that is a different story of course)
Junior programmers also work with smaller pieces of code, where the issues relating to code are more relevant than issues related to systems.
By systems, I mean:
- Code composed of heterogeneous parts, most of which you don't control, and which are written at different times.
- Code written in different languages, and code that uses a major component you can't change, like a database (there's a funny anecdote regarding researchers and databases in the paper below)
- Code that evolves over long periods of time
As an example of the difference between code and systems, a lot of people objected to his "Maybe Not" talk. That's because they're thinking of it from the CODE perspective (which is valid, but not the whole picture).
What he says is true from a SYSTEMS perspective, and it's something that Google learned over a long period of time, maintaining large and heterogeneous systems.
tl;dr Although protobufs are statically typed (as opposed to JSON), the presence of fields is checked AT RUNTIME, and this is the right choice. You can't atomically upgrade distributed systems. You can't extend your type system over the network, because the network is dynamic. Don't conflate shape and optional/required. Shape is global while optional/required is local.
If you don't get that then you probably haven't worked on nontrivial distributed systems. (I see a lot of toy distributed computing languages/frameworks which assume atomic upgrade).
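A rough sketch of that runtime-presence idea, using a plain map as a stand-in for a decoded message (the field names and the notification logic are invented for illustration, not taken from any real schema):

use std::collections::HashMap;

// A decoded message from some peer. Which fields are present depends on the
// schema version the *sender* was built against, so presence is a runtime fact.
fn handle(msg: &HashMap<String, String>) {
    // This consumer decides locally that "id" is required for its purposes...
    let Some(id) = msg.get("id") else {
        eprintln!("dropping message without id");
        return;
    };
    // ...and that "email" (added in a later schema revision) is optional,
    // because older senders in the fleet simply won't include it.
    match msg.get("email") {
        Some(email) => println!("{id}: notify {email}"),
        None => println!("{id}: no email field, skipping notification"),
    }
}

The shape of the message (what "id" and "email" mean) is shared globally, but whether a given field must be present is a decision each consumer makes locally, at runtime.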
I read a bunch of the other ones. Bjarne's is very good as usual. But Hickey is probably the most lucid writer, and the ideas are important (even though I've never even used Clojure, because I don't use the JVM, which is central to the design).
I think that the thing about that talk that struck a chord is that he took a bunch of things that people had been talking about quite a bit - functional vs oop, mutability, data storage, various clean code-type debates, etc. - and extracted a clear mental framework for thinking about all of them.
___
1. https://www.infoq.com/presentations/Simple-Made-Easy/