Hacker News | DavidWoof's comments

It's very common in the US as well, but primarily in education circles. I honestly have no idea what percent of the general public would recognize it immediately (hard to know for anything, really).


> My understanding was that it was the tenant rights movement that killed SROs and boarding houses by making it practically impossible to keep them orderly

It depends on the time frame you're talking about. Long-term SROs like boarding houses were absolutely affected in the 50s-70s by tenant rights laws. But they adapted. In the 70s/80s, SROs were still widespread in large cities, except that they all had occupancy time limits (usually 60 days or so) to avoid tenancy laws. But people who relied on them could just move to a new one when the time limit came, so the market was still viable.

But then in the late 80s/early 90s they all got zoned away in the way this article talks about. It was really more NIMBY than reformer. Note that this time frame corresponds with the height of the US crime wave, and what was once just a sketchy urban fixture became a source of major neighborhood blight, especially as re-urbanization started up in the late 90s.


I have Sisyphus as my wallpaper. When people ask about it I say he's the patron saint of software development.


In OP's defense, "becoming suspicious" doesn't mean it's always wrong. I would definitely suggest an explaining comment if someone is using DISTINCT in a multi-column query.


Snopes has this as mixed because Stalin may or may not have expressed this sentiment at some point, but it seems extremely unlikely to me that this pun works in Russian the way it does in English.


It's hard to talk in the abstract because obviously people can abuse any type of code feature, but I generally find chaining array methods, and equivalents like C# LINQ, much easier to read and understand than their looping equivalents.

The fact that you single out .reduce() here is really telling to me. .reduce() definitely has a learning curve to it, but once you're used to it the resulting code is generally much simpler and the immutability of it is much less error-prone. I personally expect JS devs to be on the far side of that learning curve, but there's always a debate about what it's reasonable to expect.
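To make the comparison concrete, here's a minimal sketch (the data and field names are hypothetical, just for illustration) of the chained style next to its looping equivalent:

```javascript
const users = [
  { name: "ann", active: true,  visits: 12 },
  { name: "bob", active: false, visits: 3 },
  { name: "cat", active: true,  visits: 7 },
];

// Chained style: each step names exactly one transformation.
const activeVisits = users
  .filter(u => u.active)
  .map(u => u.visits)
  .reduce((sum, v) => sum + v, 0);

// Looping equivalent: the same logic, but filtering, projection,
// and accumulation are interleaved in a single body.
let total = 0;
for (const u of users) {
  if (u.active) total += u.visits;
}

console.log(activeVisits, total); // 19 19
```

Whether the chain or the loop reads better here is exactly the judgment call under debate; the chain scales better as steps are added, the loop avoids intermediate arrays.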


The wonderful thing about .reduce() is that it can compute literally anything. The problem with .reduce() is that it can compute literally anything. As for the rest of the morphism menagerie, I like being able to break up functions and pass intermediate results around. It's literally cut and paste with map/filter, with a loop it's rewriting. Yay composability.

That said, it's easy to get carried away, and some devs certainly do. I used to be one of those devs, but these days I sometimes just suck it up and use a local variable or two in a loop when the intent is perfectly clear and it's not leaking side effects outside of a narrow scope. But I'll be damned if I let anyone tell me to make imperative loops my only style or even my primary one.


Reduce cannot calculate literally anything, in the sense you mean. It corresponds in computational power to primitive recursion. And quite famously, there are problems primitive recursion cannot solve that general recursion can.

On the other hand, I don't think I've ever seen something as recursive as Ackermann's function in real life. So it can probably solve any problem you actually mean to solve.
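For the curious, Ackermann's function (the standard counterexample mentioned above) is short to write with general recursion; the point is that its growth outruns anything a single structural fold over a fixed list can express:

```javascript
// Ackermann's function: total, general-recursive, but provably not
// primitive-recursive. Note the nested recursive call in the last case,
// which is what pushes it beyond a simple fold/reduce.
function ackermann(m, n) {
  if (m === 0) return n + 1;
  if (n === 0) return ackermann(m - 1, 1);
  return ackermann(m - 1, ackermann(m, n - 1));
}

console.log(ackermann(2, 3)); // 9
console.log(ackermann(3, 3)); // 61
```

Even ackermann(4, 2) already has tens of thousands of digits, which is why nobody meets this shape of recursion in day-to-day code.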


What the previous user means is that reduce is not a function that returns a list (although it can).

It just accumulates over some value, and that value can be anything.


Naw, GP is right, I'd forgotten about the limits of primitive recursion. But for almost any given real-world problem, it's something you can get away with forgetting.


Unfortunately, since we don't have continuations, we cannot make reduce _stop_ computing. In such cases where that is needed, it might be better to use a loop that can be broken out of.


Well, you can always throw an exception :) (ducks)

But yes, it's best used on sequences where you know you'll consume the whole thing, or at least when it's cheap enough to run through the rest with the accumulator unchanged.
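A small sketch of both workarounds (the numbers are made up for illustration): carrying a "done" flag through the accumulator so the rest of the array passes through cheaply, versus a plain loop that genuinely stops:

```javascript
const nums = [3, 1, 4, 1, 5, 9, 2, 6];

// reduce has no "break": to sum values until the first one over 4,
// carry a "done" flag and let the remaining elements pass through
// with the accumulator unchanged.
const { sum } = nums.reduce(
  (acc, n) =>
    acc.done || n > 4 ? { ...acc, done: true } : { sum: acc.sum + n, done: false },
  { sum: 0, done: false }
);

// A plain loop expresses the same intent and actually exits early.
let sumLoop = 0;
for (const n of nums) {
  if (n > 4) break;
  sumLoop += n;
}

console.log(sum, sumLoop); // 9 9
```

The reduce version still visits every element, so it's only acceptable when, as the comment says, running through the rest is cheap.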


> The fact that you single out .reduce() here is really telling to me. .reduce() definitely has a learning curve to it, but once you're used to it the resulting code is generally much simpler and the immutability of it is much less error-prone. I personally expect JS devs to be on the far side of that learning curve, but there's always a debate about what it's reasonable to expect.

Not only that, but the words that GP uses to single out .reduce() start with:

> I see so much convoluted code with arr.reduce() or many chained arr.map().filter().filter().map()

Which I do not doubt, but the point is diminished when one understands that a mapping of a filtering of a filtering of a mapping is itself a convoluted reduction. Just say that you prefer to read for-statements.
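The "a map of a filter is itself a reduction" point can be shown directly; here's a hypothetical pipeline written both ways:

```javascript
const xs = [1, 2, 3, 4, 5, 6];

// Chained version: two passes and an intermediate array per step.
const chained = xs.map(x => x * x).filter(y => y % 2 === 0);

// The same pipeline fused into one reduce: each element is squared,
// tested, and conditionally appended in a single pass.
const fused = xs.reduce((acc, x) => {
  const y = x * x;
  if (y % 2 === 0) acc.push(y);
  return acc;
}, []);

console.log(chained); // [4, 16, 36]
console.log(fused);   // [4, 16, 36]
```

Which is "convoluted" is in the eye of the reader: the chain names its steps, the fused reduce does less work but hides the structure.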


I say convoluted. I prefer using the functional-style array methods, but there's a time and place for everything, and I feel a lot of Javascript developers extend those methods beyond what is reasonable and into a convoluted mess, especially with reduce.

Give me a good classic `T[] => I` reduce function and I'm fine with it. Not the more common case of folks mutating the accumulator object.
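The distinction being drawn, sketched with hypothetical data: a clean `T[] => I` fold versus the accumulator-mutation pattern the commenter objects to.

```javascript
const orders = [
  { customer: "ann", total: 10 },
  { customer: "bob", total: 5 },
  { customer: "ann", total: 7 },
];

// The "good classic" T[] => I shape: a list of orders folded down to
// one number, with a pure accumulator function.
const grandTotal = orders.reduce((sum, o) => sum + o.total, 0);

// The more common pattern being criticized: building an object by
// mutating the accumulator inside the callback. It works, but it's a
// loop in reduce's clothing.
const byCustomer = orders.reduce((acc, o) => {
  acc[o.customer] = (acc[o.customer] ?? 0) + o.total;
  return acc;
}, {});

console.log(grandTotal); // 22
console.log(byCustomer); // { ann: 17, bob: 5 }
```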


> “nobody reads intermediate commit messages one by one on a PR”

I clean my history so that intermediate commits make sense. Nobody reads these messages in a pull request, but when I run git blame on a bug six months later I want the commit message to tell me something other than "stopping for lunch".

> pedantically apply DRY to every situation or forcing others to TDD basic app

Sure, pedantically doing or forcing anything is bad, but in my experience, copy-paste coding with long methods and a lack of good testing is a far more common problem.

You may be 100% correct in your particular case, but in general if senior devs are complaining that your code is sloppy and under-tested, maybe they aren't just being pedantic.


Yes. I think many people have no culture of good commits, so they never use bisect or blame, so they never see the value of good commits. It's a cycle.


Good commits are not a requirement for bisect. I commit when I think something is more or less complete, or when I want to start a major refactoring and I'm afraid I might need to revert it.

I don't always check that individual commits are buildable; the PR should be, because that's what gets merged to master, and the tip of master should always be buildable.


If a commit isn't buildable, then when you get to it with bisect you have to skip. If this happens once in a while it's not fatal, but it's sure not helpful.


I actually find the relevant PR/MR discussion a lot more useful than the commit messages themselves. So any git blame is just to get a commit hash and look that up in GitLab/GitHub to see the entire change set and any comments around it. It makes me wish those comments were bundled with the merge commit somehow and could easily be accessed in the terminal where I'm viewing the git history.


Not my experience. Often the single commit is all the context I need. If it's not, follow the merge to the ticket number to get more context.


> Sure, pedantically doing or forcing anything is bad, but in my experience, copy-paste coding with long methods and a lack of good testing is a far more common problem.

This is a false dichotomy and an unproductive thing to focus on.

Experienced engineers know when to make an abstraction and when not to. It's based on knowledge of the project.

Abstract well and don't just compress. Easier said than done, but good engineers know how to do it.


JoelOnSoftware had a great piece back in the day where he mentioned that while he consciously knew what a short sale on an option was, in practice he had to stop and think about how to calculate it, while his financial friends just knew the answer immediately. He drew a comparison to pointers in C, where if you're going to be a C programmer, then pointers should just be intuitively obvious to you and not something you need to think about.

IOW, there are no pure fast or slow thinkers; a lot of this is just how well you've internalized the background material. Having quick repartee in conversation has absolutely no relationship to immediately seeing what the loop variable should be in a programming problem. FizzBuzz isn't quickly solved by decent devs because they think faster, it's quickly solved because it's a trivial problem that doesn't require serious thinking for experienced devs.

When I'm programming for finance or medical, I often have to tell the PM "let's stop here and let me think about this for a day". Because it's not my field, it takes me a while to get my head around it. OTOH, there's very often algorithm conversations where I have to wait for others to catch up.


I'm not sure it is quite that simple. The other day someone asked me about the project I've been working on. The thing I touch nearly every single day and know in intimate detail...

It still took me what felt like a good minute or more of thinking to remember anything about it and more than that to recall specific details of interest. It would take me even longer to think about something that I don't have at the tip of my tongue, so to speak, but I find there is no such thing as an immediate answer for me. That doesn't seem to be true of all others.


I think you're hitting on the fact that there are multiple variables that contribute to "quickness." Having digested a lot of background material is definitely part of it, and ties to the higher-up posts about e.g. Churchill. It's also one way intelligence can correlate with quickness, in the sense that more intelligent people have often digested more topics. But there also seem to be people who are less distractable, more tuned in to what is going on, and more able to tie current happenings to their body of knowledge and make a joke or whatever.


One of the aphorisms I repeat a lot is that in this age where everybody programs a little, including analysts, devops, QA, researchers, etc., the thing that separates all of them from actual software developers is that developers know that "code is bad. Code is where I find all my bugs."

And I do wear these other hats sometimes. I think nothing of scripting a useful utility or cranking something out in R or VBA for a presentation. But when it comes to production code, I'll spend a lot of time trying to think of ways to reduce the amount of code required.

But it's two completely different philosophies regarding code, and unfortunately in some organizations AI is starting to blur the lines.


That's absolutely not what the article is about. Did you even read it?

People don't really have that debate anymore outside of twitter casuals, and it's dismissed with a wave almost immediately in this article, which then goes on to examine the complex grammar of "try and".


Yep. This is like someone seeing an article about geology and saying “ah, sphere earth vs. flat earth”. Like, no, the article already presupposes that the earth is spherical because that’s the viewpoint taken by all people with a serious academic interest in the topic.

