Sincere question: what is interesting or novel about this? Is it just the scale, or did I miss some subtle aspect?
This is more (or less?) the same as industry best practices, just scaled up. There is a challenge in scaling up, as there is more potential for someone to mess it up. But it's the same technique.
> the same as industry best practices, just scaled up.
That's like saying S3 is the same as ext4: they're the same, just scaled up! This is a poor argument; you'll note that S3 and ext4 are entirely different things, not just different "challenges" but fundamentally different implementations.
Google is the only company I've ever worked for that automatically deleted dead code, let alone across a company of 100k+ SWE.
Fair enough, and thanks for the reply. Still, for anything that repeatedly bothers engineers more than a little, someone will eventually write tools to remove the manual burden.
Our internal practice is: if you suspect code is unused, delete it and run the tests; if nothing fails, go for it. This could be automated, but it's not pressing enough, so we haven't automated it yet.
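For concreteness, the workflow I'm describing could be sketched roughly like this (all names here are hypothetical; `run_tests` stands in for whatever invokes your actual test suite):

```python
import pathlib
import shutil

def try_delete(path, run_tests):
    """Delete a suspected-dead file, run the test suite, and restore the
    file if any test fails. Hypothetical sketch of the manual process.

    run_tests: callable returning True if the suite passed.
    """
    path = pathlib.Path(path)
    backup = path.with_suffix(path.suffix + ".bak")
    shutil.copy(path, backup)   # keep a backup so we can undo
    path.unlink()               # tentatively delete the file
    if run_tests():
        backup.unlink()         # tests pass: the deletion stands
        return True
    shutil.move(str(backup), str(path))  # tests fail: put it back
    return False
```

The caller would pass in something like `lambda: subprocess.run(["pytest", "-q"]).returncode == 0` for a pytest-based project.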
We could though, and it may even be a good idea, but I still don't get the novelty. But I appreciate your point of view.
Your proposed process is exactly the wrong way around. You'll end up keeping dead code just because it has tests, and delete code that's still used in prod just because it happened to be untested.
This is one of the details that the blog post goes into. Sounds like it's not as trivial and obvious a problem as you think it is, and you would have benefited from just not dismissing the post because of that.
Not to criticize your POV or argument directly, but in the end a lot of things, especially things like this, are easily dismissed with the "we could do it, we just haven't bothered yet" kind of argument, and when it comes down to the real work, they turn out much harder than they superficially appear. So yeah, this isn't new... but... you know, eheh.
Well, I'd politely agree to disagree. Google scale is defined by novel, radical approaches: inventing MapReduce, writing the papers on LLMs that others then implement successfully, or creating something like Kubernetes.
The specific topic here isn't one of those Google-scale problems to me, as I can compare it to problems we've already solved. But yes, we could be missing the critical point where a totally different problem domain emerges from just one more order of magnitude, so fair point.
Thanks for the reply. I was speaking from similar experiences at my $DAYJOB, where any initiative like this uncovers a lot of dusty corners! Thanks too for the great and polite reply; it's rare nowadays, so kudos!
I think the takeaway is that at Google's scale, even if you think some minor problem is not pressing enough to be automated, it will become pressing soon enough.
From the perspective of software engineering economics, the scale matters. Everyone knows it's good to clean up unused code, but they don't bother because it yields no short-term ROI for them personally. So why not bring the cost down and see what happens? Automation changes that equation.