Yeah, it doesn't work with keyword arguments. In the playground I tried a simple keyword argument with a default value, and it converted to the wrong thing, as if "someone" were a valid type:
    def greet(name: "someone"): String
      "Hello, #{name}!"
    end
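For comparison (my own illustration, not from the original post): in plain Ruby, without any annotation syntax, "name:" here is unambiguously a keyword argument with a default value, which is presumably what the converter mis-read as a type:

    # Plain Ruby: "name:" is a keyword argument defaulting to "someone".
    def greet(name: "someone")
      "Hello, #{name}!"
    end

    greet                 # => "Hello, someone!"
    greet(name: "world")  # => "Hello, world!"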
Back in the day, a lot of people, myself included, reported feeling more comfortable in Ruby after one week than in languages they had used for years, as if Ruby naturally fit your mind like a glove.
I'm glad new people are still having that "Ruby moment".
Ruby is still the first tool I reach for if I need to do something quickly and naturally. People are always surprised at how readable the code comes out, even compared to Python.
Thank you for reading this.
I'm having fun learning Ruby. I just started working at a company where I use it full time. It's great learning it, and I have supportive colleagues who are excited for me. I'm going to write more about Ruby; I have about six articles planned for the next few weeks. I hope I get around to them all.
I bet "I'm a lot of people". That's the point of the post. We exist, we contribute, some of us are critical. We just don't chase fame, don't care about (much) about recognition (beyond peer I guess) and have interests and ways to occupy our time other than software. :shrug:.
I accepted that I won't be a "name". Yet I have made suggestions that were adopted into Spring, I have commented on JCPs, and I have talked with antirez (though I didn't contribute much there; I'm still in awe of Redis' internal design). I just... don't care much about other people knowing me, beyond what I need to pay the bills and keep my immediate peers, management chain, and customers happy.
I see posts like this one pop up from time to time, and I love it. Based on my 30 years of experience, that's also the workflow I converged on, and it seems to me that every experienced, skilled developer converges on it. jujutsu is built entirely to accommodate this workflow.
There are no silver bullets or magical solutions, but this is as close to one as I've ever seen. A true "best practice" distilled from the accumulated experience of our field, not from someone with something to sell.
This is cool, but it's missing a LOT of detail between steps 4 and 5, which is the meat of the quicksort. In fact, the first and last elements of step 4 would be swapped, which means the order depicted in step 5 is incorrect.
I'd guess that if you care more about speed than memory, it might be faster to just move elements into a new array: walk through the old array, appending to the start or end of the new array according to the pivot comparison. You'd be moving every element, versus leaving some in place with a swap-based approach, but the simplicity of the code and friendlier branch prediction might win out.
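A minimal Ruby sketch of that idea (the name and details are my own assumptions, not from the parent comment): a single pass over the old array, filling a same-sized new array from the front with elements below the pivot and from the back with everything else.

    # One pass: every element is copied into "out", smaller elements
    # filling from the front, the rest filling from the back. Every
    # element moves, but the loop body is simple and predictable.
    def partition_into_new(arr, pivot)
      out = Array.new(arr.length)
      front = 0
      back = arr.length - 1
      arr.each do |x|
        if x < pivot
          out[front] = x
          front += 1
        else
          out[back] = x
          back -= 1
        end
      end
      [out, front] # out[0...front] < pivot, out[front..] >= pivot
    end

A quicksort built on this would recurse into out[0...front] and out[front..]; whether the extra allocation beats in-place swapping is exactly the cache and branch-prediction trade-off being guessed at here.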
I'm pretty sure the swapping is a fundamental part of the quicksort algorithm, not a mere implementation detail. That's the reason quicksort is an in-place algorithm.
Actually, you're right; it is an implementation detail. The original isn't mistaken, it's just showing the lo-to-hi (Lomuto-style) partitioning pass rather than the from-both-ends (Hoare-style) version I had in mind from when I implemented quicksort before.
shame, shame, I should have double-checked before posting.
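For reference, both schemes are standard; here's a quick Ruby sketch of each (my own naming), since the contrast is easier to see in code than in prose:

    # Lomuto: one cursor scans lo..hi-1, swapping each element smaller
    # than the pivot forward past the boundary "i"; the pivot (kept at
    # a[hi]) lands at its final sorted position.
    def lomuto_partition(a, lo, hi)
      pivot = a[hi]
      i = lo
      (lo...hi).each do |j|
        if a[j] < pivot
          a[i], a[j] = a[j], a[i]
          i += 1
        end
      end
      a[i], a[hi] = a[hi], a[i]
      i # recurse on lo..i-1 and i+1..hi
    end

    # Hoare: two cursors close in from both ends, swapping
    # out-of-place pairs as they find them.
    def hoare_partition(a, lo, hi)
      pivot = a[lo]
      i = lo - 1
      j = hi + 1
      loop do
        loop do
          i += 1
          break if a[i] >= pivot
        end
        loop do
          j -= 1
          break if a[j] <= pivot
        end
        return j if i >= j # recurse on lo..j and j+1..hi
        a[i], a[j] = a[j], a[i]
      end
    end

Both are in place; only the swap pattern differs, which is why a visualization of one pass looks wrong if you have the other scheme in mind.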
This article really resonated with me. I've been trying to teach this way of thinking to juniors, but with mixed results. They tend to just whack at their code until it stops crashing, while I can often spot logic errors in a minute of reading. I don't think it's that hard, just a different mindset.
There's a well-known quote, usually attributed to Tony Hoare: "Make the program so simple, there are obviously no errors. Or make it so complicated, there are no obvious errors." A large application may not be considered "simple", but we can minimize errors by making it a sequence of small bug-free commits, each one so simple that there are obviously no errors. I first learned this as "micro-commits", but others call it "stacked diffs" or similar.
I think that's a really crucial part of this "read the code carefully" idea: it works best if the code is made readable first. Small readable diffs. Small self-contained subsystems. Because obviously a million-line pile of spaghetti does not lend itself to "read carefully".
Type systems certainly help, but there is no silver bullet. In this context, I think of type systems a bit like AI: they can improve productivity, but they should not be used as a crutch to avoid reading, reasoning, and building a mental model of the code.
> Don't we have decades of research about the improvements in productivity and correctness brought by static type checking?
Yes, we have decades of such research, and the aggregate result of all those studies is that no significant productivity gain has been demonstrated for static over dynamic typing, or vice versa.
Not sure what result you're referring to, but in my experience many of the academic research papers use "students" as test subjects. That's especially fucked up when you want software engineering results. Outside Google et al., where you can get corporate-sanctioned software engineering data at scale, I'd be wary that most academic results in the area are garbage.