Hacker News: jakub_g's comments

The main issue with `git branch --merged` is that if the repo enforces squash merges, it obviously won't work, because the SHA of the squash-merged commit on main != the SHA of the original branch HEAD.

What are the best tools to do the equivalent, but detecting squash-merged branches?

Note: this problem is harder than it seems to do safely, because e.g. I can have a branch `foo` locally that was squash-merged on remote, but before it happened, I might have added a few more commits locally and forgot to push. So naively deleting `foo` locally may make me lose data.
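To make the failure mode concrete, here's a throwaway repro in a scratch repo (all names are made up; `git cherry` is shown as one cheap detector, since it compares patch-ids rather than SHAs):

```shell
#!/bin/sh
# Scratch repo: a squash merge is invisible to `git branch --merged`.
set -e
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "init"
git checkout -q -b foo
echo hello > file.txt && git add file.txt && git commit -q -m "add file"
git checkout -q main
git merge -q --squash foo >/dev/null && git commit -q -m "add file (squashed)"
git branch --merged main   # prints only "* main": foo's SHA is not an ancestor
git cherry main foo        # prints "- <sha>": the patch itself IS on main
```

Note that `git cherry` matches per-patch, so it stops helping once a multi-commit branch was squashed into a single commit whose combined diff no longer matches any individual commit.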


Depends on your workflow, I guess. I don't need to handle the case you noted, and we delete the branch on remote after it's merged. So it's good enough for me to delete my local branch if the upstream branch is gone. This is the alias I use for that, which I picked up from HN.

    # ~/.gitconfig
    [alias]
        gone = ! "git fetch -p && git for-each-ref --format '%(refname:short) %(upstream:track)' | awk '$2 == \"[gone]\" {print $1}' | xargs -r git branch -D"
Then you just `git gone` every once in a while, when you're between features.

I recently revised my script to rely on (1) no commits in the last 30 days and (2) the branch not being found on origin. This is obviously not perfect, but it's good enough for me, and just in case, my script prompts to confirm before deleting each branch (although most of the time I just blindly hit yes).

To avoid losing any work, I have a habit of never keeping branches local-only for long. Additionally this relies on https://docs.github.com/en/repositories/configuring-branches...
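A sketch of that heuristic (not the parent's actual script), demoed end-to-end with a throwaway bare "origin" so the `[gone]` state is real:

```shell
#!/bin/sh
# A branch is a deletion candidate if its last commit is >30 days old
# AND its upstream on origin is gone (deleted after merge).
set -e
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null && cd work
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m init && git push -q -u origin HEAD
git checkout -q -b old-feature
GIT_COMMITTER_DATE="2020-01-01T00:00:00" git commit -q --allow-empty -m wip
git push -q -u origin old-feature
git push -q origin --delete old-feature   # simulate post-merge cleanup
git fetch -qp

cutoff=$(( $(date +%s) - 30*24*3600 ))
git for-each-ref refs/heads/ \
    --format='%(refname:short) %(committerdate:unix) %(upstream:track)' |
while read -r branch cdate track; do
  if [ "$cdate" -lt "$cutoff" ] && [ "$track" = "[gone]" ]; then
    printf 'candidate: %s\n' "$branch"   # prints "candidate: old-feature"
  fi
done
```

Branches with no upstream at all have an empty track field, so they are skipped, which is the safe default for local-only work.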


Not just squash merges: rebase merges don't work either.

> What are the best tools to do the equivalent, but detecting squash-merged branches?

Hooking on remote branch deletion is what most people do, under the assumption that you tend to clean out the branches of your PRs after a while. But of course if you don't do that it doesn't work.


I have the same issue. Changes get pushed to gerrit and rebased on the server. This is what I have, though not perfected yet.

  prunable = "!f() { \
  : git log ; \
  target=\"$1\"; \
  [ -z \"$target\" ] && target=$(git for-each-ref --format=\"%(refname:short)\" --count=1 refs/remotes/m/); \
  if [ -z \"$target\" ]; then echo \"No remote branches found in refs/remotes/m/\"; return 1; fi; \
  echo \"# git branch --merged shows merged if same commit ID only\" ;\
  echo \"# if rebased, git cherry can show branch HEAD is merged\"  ;\
  echo \"# git log grep will check latest commit subject only.  if amended, this status won't be accurate\" ;\
  echo \"# Comparing against $target...\"; \
  echo \"# git branch --merged:\"; \
  git branch --merged $target  ;\
  echo \" ,- git cherry\" ; \
  echo \" |  ,- git log grep latest message\"; \
  for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do \
   if git cherry \"$target\" \"$branch\" | tail -n 1 | grep -q \"^-\"; then \
    cr=\"\"; \
   else \
    cr=\"\"; \
   fi ; \
   c=$(git rev-parse --short $branch) ; \
   subject=$(git log -1 --format=%s \"$branch\" | sed 's/[][(){}.^$\*+?|\\/]/\\\\&/g') ; \
   if git log --grep=\"^$subject$\" --oneline \"$target\" | grep -q .; then \
    printf \"$cr  $c %-20s $subject\\n\"  $branch; \
   else \
    printf \"$cr  \\033[0;33m$c \\033[0;32m%-20s\\033[0m $subject\\n\"  $branch; \
   fi; \
  done; \
  }; f"
(Some emojis are missing above; see the gist.) https://gist.github.com/lawm/8087252b4372759b2fe3b4052bf7e45...

It prints the results of 3 methods:

1. git branch --merged

2. git cherry

3. grep upstream git log for a commit with the same commit subject

It has some caveats: if upstream's commit was amended or the actual code change is different, it can produce a false positive; and if there are multiple commits on your local branch, only the top commit is checked.


If you're using Gerrit, then you have the Change-Id trailer you can match against?
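Something like this sketch, maybe (scratch-repo demo; branch name and Change-Id value are made up):

```shell
#!/bin/sh
# Detect a landed Gerrit change by its Change-Id trailer: the server
# rewrites the SHA on rebase/amend, but the trailer survives.
set -e
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "init"
git checkout -q -b change
git commit -q --allow-empty -m "fix: thing

Change-Id: Iabc123"
git checkout -q main
# simulate Gerrit landing a rebased/amended version (new SHA, same Id)
git commit -q --allow-empty -m "fix: thing (rebased)

Change-Id: Iabc123"
cid=$(git log -1 --format=%B change | sed -n 's/^Change-Id: *//p')
if git log main --grep="Change-Id: $cid" --oneline | grep -q .; then
  echo "merged: change ($cid)"
fi
```

Unlike subject-line grepping, this survives amended commit messages, as long as the trailer itself is kept.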

This is my PowerShell variant for squash merge repos:

    function Rename-GitBranches {
        git branch --list "my-branch-prefix/*" | Out-GridView -Title "Branches to Zoo?" -OutputMode Multiple | % { git branch -m $_.Trim() "zoo/$($_.Trim())" }
    }
`Out-GridView` gives a very simple dialog box to (multi) select branch names I want to mark finished.

I'm a branch hoarder in a squash merge repo and just prepend a `zoo/` prefix. `zoo/` generally sorts to the bottom of branch lists and I can collapse it as a folder in many UIs. I have found this useful in several ways:

1) It makes `git rebase --interactive` much easier when working with stacked branches, by taking advantage of `--update-refs`. Merge commits do all that work for you by finding the common base/ancestor; with squash merging you have to remember which commits were already merged and drop them from your branch. With `--update-refs`, if I find it trying to update a `zoo/` branch, I know I can drop/delete every commit up to that update-ref line and delete the update-ref itself.

2) I sometimes do want to find code in intermediate commits that never made it into the squashed version. Maybe I tried an experiment in a commit in a branch, then deleted that experiment in switching directions in a later commit. Squashing removes all evidence of that deleted experiment, but I can still find it if I remember the `zoo/` branch name.

All this extra work, for things that merge commits give you for free, just makes me dislike squash-merge repos more.


Mine's this-ish (nushell, but easily bashified or pwshd) for finding all merged, including squashed:

    let t = "origin/dev"; git for-each-ref refs/heads/ --format="%(refname:short)" | lines | where {|b| $b !~ 'dev' and (git merge-tree --write-tree $t $b | lines | first) == (git rev-parse $"($t)^{tree}") }
Does a 3-way in-mem merge against (in my case) dev. If there's code in the branch that isn't in the target it won't show up.

Pipe right to deletion if brave, or to a choice-thingy if prudent :)


What if you first attempted to rebase your branches? Then you would detect empty branches.

> The product is the sales demo that impresses VPs. Meanwhile, recruiters are still shuffling candidates around in Google Sheets.

This gave me a chuckle, because a colleague who talks with HR folks told me exactly this last week.


I was surprised recently when browsing on Amazon (I rarely buy clothing/shoes there, but I did a few times).

I chose my usual size, but Amazon told me "nope we think you want one size smaller, based on your history and our data for this product".


I’m curious to know if that was an accurate assessment by Amazon and if you were (or would have been?) satisfied with the purchase of their suggestion.

I have gone to the manufacturers' sites (I tend to avoid Amazon for purchases over about $50, because fraud).

They usually have something like "If you are a 10.5 for Adidas, then you are a 10 for us" kind of thing.

Those have worked for me.


Semi-related thing that surprises me as a man (this is in Europe):

I go to a clothing shop at the start of a new collection, and look at men's shirts for a given type:

- there's 10 XS, 10 M, 10 XXL versions

I go towards end of the season:

- there's like 1 M remaining, but 8 XS, 8 XXL.

As if they were surprised and had no data that most of the population is M.


I had subscribed to clearance sale newsletters from quite a few retailers but unsubscribed when I noticed they put stuff in the outlet section only when they run out of sizes S to L.

As far as I can tell, this is a key driver of why the EU implemented anti-waste provisions for clothing starting in a couple years: because retailers will just order arbitrary amounts of clothes and shred (!) whatever is unsold, which since they made their profit, doesn’t cost them anything and saturates the planet in formerly-wearable clothes that have been destroyed out of wasteful sloppiness. If they actually had to put in effort to design production runs — or else see their clothes sold at the bargain store — then they may well start acting less “why bother, we got our money, next”. One hopes, anyways.

> because retailers will just order arbitrary amounts of clothes and shred (!) whatever is unsold, which since they made their profit, doesn’t cost them anything

This is so obviously untrue. It's painfully obvious that the less they waste, the more profit they make, so there's plenty of incentive for them to order the right amount.


> there's plenty of incentive for them to order the right amount.

If the incentives to avoid waste were effective in the market, the EU wouldn’t need to introduce regulations to halt it — but their industry is destroying a bit over ten kilos of finished products per person there each year, so the market carrot isn’t working and manufacturers have more than earned the regulatory stick (and can afford the share of profits compliance will cost them).

https://www.eea.europa.eu/en/newsroom/news/many-returned-and... is a good start if you’re unfamiliar with the issue as a whole.


You seem to not even be addressing my point that what you said is objectively false.

You've instead changed the discussion to whether or not those incentives are "effective", which is a very different and subjective discussion. You also make the dubious assumption that we should halt all destruction of retail clothing when the truth is that a supply surplus is a good thing to prevent supply shocks for consumers and the unused surplus does not necessarily make sense to donate or reuse. So a complete prohibition on the destruction of retail clothing seems like a very blunt tool here.

But, to engage with your new point: I would argue that retail clothing waste is not unique and that a much fairer system would be to tax commercial/industrial waste in general. For that reason, this move by the EU seems more like a publicity stunt than a meaningful measure to me.


Oh. Okay! I’m clearly wrong and it would be a waste of your time for me to claim otherwise. Apologies :)

My impression is that at the end of the season there's like 0 XXL, 8 XS, and 8 M, but maybe that's just a perception problem.

To me, the main problem is that inevitably, any SPA with dozens of contributors will grow into a multi-megabyte-bundle mess.

Preventing it is extremely hard, because the typical way of developing code is to write a ton of code, add a ton of dependencies, and let the bundler figure it out.

Visualizing the entire codebase in terms of "what imports what" is impossible with tens of thousands of files.

Even when you do splitting via async `import()`, all it takes is one PR that imports something in a bad way, and the bundle bloats by hundreds of kilobytes or even megabytes: something the bundler had automatically split out into an async chunk suddenly becomes mandatory in the main bundle, via a static import crossing the boundary.

The OP mentions it here:

> It’s often much easier to add things on a top-level component’s context and reuse them throughout the app, than it is to add them only where needed. But doing this means you’re paying the cost before (and whether) you need it.

> It’s much simpler to add something as a synchronous top-level import and force it to be present on every code path, than it is to load it conditionally and deal with the resulting asynchronicity.

> Setting up a bundler to produce one monolithic bundle is trivial, whereas splitting things up per route with shared bundles for the common bits often involves understanding and writing some complex configuration.

You can prevent that by having strong mandatory budgets on every PR, which check that the bundle size did not grow by more than X kB.

But even then, the accumulation of hundreds or thousands of PRs, each adding 1 kB, eventually bloats the bundle enough to become noticeable.
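A hypothetical sketch of such a budget gate (the directories, file names, and 10 kB budget are all made up; the two `head -c` lines just fabricate "before" and "after" bundles for the demo):

```shell
#!/bin/sh
# CI-style budget check: flag the PR if the bundle grew past the budget.
set -e
cd "$(mktemp -d)"
mkdir -p dist-base dist
head -c 50000 /dev/zero > dist-base/main.js   # simulated base-branch bundle
head -c 80000 /dev/zero > dist/main.js        # simulated PR bundle

MAX_GROWTH_KB=10
base=$(wc -c < dist-base/main.js)
pr=$(wc -c < dist/main.js)
growth_kb=$(( (pr - base) / 1024 ))
if [ "$growth_kb" -gt "$MAX_GROWTH_KB" ]; then
  echo "FAIL: bundle grew by ${growth_kb} kB (budget: ${MAX_GROWTH_KB} kB)"
else
  echo "OK: bundle grew by ${growth_kb} kB"
fi
```

A real setup would compare gzipped sizes per chunk and fetch the base artifact from the main-branch build, but the shape is the same.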

Perf work is thankless work, and having one perf team trying to keep things at bay while there are dozens of teams shipping features at a fast pace is not gonna cut it.


It makes sense to have separate entry points for landing pages, no need for fancy providers and heavy imports.

In practice people are more than willing to wait for an SPA to load if it works well (figma, gmail/gdocs, discord's browser version used to be pretty good too, and then of course there are the horrible counter-examples AWS/GCP control panels, and so on).


Exactly. To avoid dependency explosion, you need at least one of: 1) most libraries built internally, 2) great discipline across teams, 3) significant performance investment/mandate/enforcement (which likely comes from a business requirement, e.g. page load time). I have rarely seen that in my limited experience.

Since we're talking about resizing windows: for months I was _sometimes_ unable to resize windows at all, and couldn't figure out why. I thought it was a random macOS bug.

Finally I realized the issue: if a window spans across two displays, it won't resize. Insane!

(I have an external monitor up, laptop down, and it's easy to move a window such that it stretches a few pixels from monitor to the laptop. No resize for you!)


Window management isn't macOS' strong suit, but external monitors make it act absolutely crazy. Connecting monitors will do anything from keeping all windows in the same position to restoring previous positions to launching them across screens, sometimes completely outside visible screen space, seemingly arbitrarily.

I get why Apple wants you to make every window either a small tile or a full-screen application now; their window manager simply can't cope with anything else.

Whatever they're doing is somehow worse than both Windows and the major Linux desktop environments. Maybe there's some obscure preference among old-school macOS users who like having their windows placed so that only a small corner pokes out of the bottom left when attaching an external monitor?


On the topic of multi monitor messiness, NOTHING gets my blood boiling quite like the taskbar (or whatever it's called, the bottom application drawer) moving between monitors, seemingly arbitrarily

Keep your cursor hovered over the bottom of the 2nd monitor? It moves. Want to move it back? I have tried everything I could think of to get it back. After 5 years of being on a Mac (because work forces it on me), I still cannot see the logic or heuristic it uses to decide when to move the fucking dock. I swear it's basically random, and it's a daily occurrence that I have to shake my cursor violently to get the stupid thing to eventually move.

The worst part is you can't even disable this dumbass behavior! You can't tell it "Hey, dock should ONLY be on monitor 1", so you just have to live with this anti feature


As far as I can tell, the dock appears on either the left/rightmost display (when docked to the side) or on the main display.

What is the "main display"? You can find out by going to settings > displays, where you find a "use as" dropdown that can be set to "mirror", "main display", or "extended display". If you want to move the dock, change the main display. This also affects a few other, smaller things.

I personally put the dock to the side so it doesn't take up precious space (windows don't seem to want to cover the dock if it's at the bottom, even with the setting for that disabled).


I remember early OS X (10.4 / 10.5? - damn, that was 20 years ago?!) with a laptop and external monitor.

It was farcical, as the menu bar was always only on the primary monitor, so you had to use/click menus on that monitor, even if the actual window the menu was for was on the other monitor.

Around 10.7 or so they started putting menus on both monitors at the same time to at least make this scenario a bit more sane.


Oh, the Dock. Try to have a setup with two displays stacked vertically, BUT where you want the Toolbar and the Dock to both be on the top one, stably: impossible.

Workaround I found: you can configure the monitors to be a pattern like this instead:

   1
     2
touching only in the corner. Then it works, the Dock is on monitor 1.

It seems to be a common issue, and despite googling I wasn't able to find a solution that worked (back in Aug '25). For some reason, it does not happen if you position the dock on the left/right side rather than the bottom

If your monitors are arranged horizontally, you just need to touch the bottom part of the screen you want the dock to be on (I set mine to autohide). If they are arranged vertically, it's best to have them in zigzag or put the dock to the sides, not the bottom.

It's infuriating, which is why I prefer to use spotlight (actually Alfred) or the app switcher.


You can turn this off in the settings, forgot exactly where. I actually found after 1-2mo I preferred not being able to haha

I didn't think a window could span two screens - I thought it only appeared on the one that had most of the window.

Easy to stretch a few pixels? Easier to move windows with super+arrows so they snap perfectly to the monitor borders, and then you'd never have this issue. I rarely drag windows "by hand" (by mouse) anymore!

Actually, npm supports "provenance", and since it eliminated long-lived access tokens for publishing, it encourages people to use "trusted publishing", which over time should make the majority of packages auto-provenance-verified.

https://docs.npmjs.com/trusted-publishers#automatic-provenan...


PyPI also added this last year [1] and is encouraging people to use trusted publishing as well.

[1] https://docs.pypi.org/trusted-publishers/


If the build doesn't happen without network access, it doesn't really work.

Unless the Chrome web store integrates with this, it puts the onus on users to continuously scan extension updates for hash mismatches with the public extension builds, which isn’t standardized. And even then this would be after an update is unpacked, which may not run in time to prevent initial execution. Nor does it prevent a supply chain attack on the code running in the GitHub Action for the build, especially if dependencies aren’t pinned. There’s no free lunch here.

key word "encourages"

when someone uses `npm install/add/whatever-verb` does it default to only using trusted publishing sources? and the dependency graph?

Either it's 100% enforcement, or it won't stick and these attack vectors are still there.


For those of you on macOS who still want to benefit from arguably the best drawing application ever conceived, https://jspaint.app/ is THE way. Use it all the time when editing screenshots.

Bonus point: the Windows 95 style "error" beep when pasting a too-large image. Always sends a shiver down the spine and confuses the coworkers around me (we're an all-Mac shop).


My favorite "easter egg" is hidden behind the File -> Exit menu item of jspaint.app... I still remember how it blew my mind the first time I saw it!

This brought tears to my eyes. The times...


With MacBook Airs (M4) starting at $1k/€1.1k, and apparently some even cheaper MacBooks coming soon, it's really difficult to justify buying a Windows laptop these days and having to deal with all the Microsoft BS, unless you have specific needs or are locked in.

The difference of "value for money" in terms of build quality, battery life, screen, touchpad, OS stability, OS upgrades experience, and overall polish and level of user (non-)hostility is immense.

A Windows guy for two decades, I got an MBP for work, and while I miss some Windows software and don't like some Mac things (e.g. no real write-to-disk hibernation; pricey upgrades from base models, etc.), there's no way I'm going back.


Yeah, starting with the M1, Apple really broke away hard. Windows laptops, especially of the thin-and-light variety, are so bad by comparison.
