Hacker News | flubbergusto's comments

> I recently spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways

Last time I did the same (days, not hours, tho lol) I was somewhat surprised to find myself landing on xterm. After resolving a couple of gotchas (reliable font resizing is somewhat esoteric; neovim needs `XTERM=''`; check your TERM) I have been very pleased and haven't looked back.

urxvt is OG, but xterm's sixel support is nice.
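For reference, the font-resizing part can be tamed with xterm's built-in actions. A minimal `~/.Xresources` sketch (the resource and action names are standard xterm ones; the font choice and keybindings are just my assumptions, adjust to taste):

```
! TrueType font and size
XTerm*faceName: DejaVu Sans Mono
XTerm*faceSize: 11
! make sure TERM is sane
XTerm*termName: xterm-256color
! bind Ctrl+plus/minus to the built-in font-resize actions
XTerm*vt100.translations: #override \n\
  Ctrl <Key>plus:  larger-vt-font() \n\
  Ctrl <Key>minus: smaller-vt-font()
```

Load it with `xrdb -merge ~/.Xresources` and restart xterm.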


Have any issues with mupdf? I find it suckless.


To the currently dead sibling comment by kjrfghslkdjfl (on the off chance they get to see this): mupdf is extremely cross-platform. I felt that should at least have been mentioned before your comment got killed over that misunderstanding.


That's, uh, not why the comment is dead.


Seconding this. It's my default choice for many file formats, not just PDF. However, it doesn't support JPEG XL, so in those cases I use Okular (very much not minimal, but quite usable).


Yes, Okular is just brilliant for PDFs, I love Okular.


That's Android-only. He's talking about desktop.

I like it too though.


Sure, but at least those of us reading this thread have learned this lesson and will be prepared. Right?


Oh definitely.

This isn't exactly the same lesson, but I swore off Docker and friends ages ago, and I'm a bit allergic to all not-in-house dependencies for reasons like this. They always cost more than you think, so I like to think carefully before adopting them.


This is supported in the client/daemon. You configure your client to use a self-hosted registry mirror (e.g. docker.io/distribution or zot) with your own TLS cert (or insecure without one, if you must) as a pull-through cache (that's your search keyword). This way it works "automagically": existing docker.io/ image references are now proxied and cached via your mirror.

You would put this as a separate registry and storage from your actual self-hosted registry of explicitly pushed example.com/ images.

It's an extremely common use case and well documented, if you RTFM instead of throwing your hands in the air and speculating about how hard or impossible this supposedly is.

You could fall back to DNS rewriting and fronting with your own trusted CA, but I don't think that approach is generally advisable given how straightforward a pull-through cache is to set up and operate.
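For the curious, the registry side of that pull-through cache boils down to one `proxy` stanza in the distribution config (hostnames, paths, and ports below are placeholders; see the registry docs for the full schema):

```
# config.yml for docker.io/distribution, run as a pull-through cache
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
proxy:
  # upstream registry to mirror; credentials are optional for public images
  remoteurl: https://registry-1.docker.io
http:
  addr: :5000
  tls:
    certificate: /etc/registry/tls.crt
    key: /etc/registry/tls.key
```

On the client side, pointing the Docker daemon at it is a one-liner: add `"registry-mirrors": ["https://mirror.example.com:5000"]` to `/etc/docker/daemon.json` and restart the daemon.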


This is ridiculous.

All the large objects in the OCI world are identified by their cryptographic hash. When you pull things while building from a Dockerfile or preparing to run a container, you are doing one of two things:

a) resolving a name (like ubuntu:latest or whatever)

b) downloading an object, possibly a quite large object, by hash

Part b may recurse in the sense that an object can reference other objects by hash.

In a sensible universe, we would describe the things we want to pull by name, pin hashes via a lock file, and download the objects. And the only part that requires any sort of authentication of the server is the resolution of a name that is not in the lockfile to the corresponding hash.

Of course, the tooling doesn't work like this: there usually aren't lockfiles, and AFAICT no effort is made to allow pulling an object with a known hash without dealing with the almost entirely pointless authentication of the source server.
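The content-addressed part really does need no server trust; a sketch of what "download an object by known hash" verification amounts to, given that OCI digests are `sha256:<hex>` over the raw blob (the blob bytes below are obviously made up):

```python
import hashlib

def verify_blob(data: bytes, digest: str) -> bool:
    """Check a fetched blob against its OCI-style digest (e.g. 'sha256:abc...')."""
    algo, _, expected = digest.partition(":")
    if algo != "sha256":
        raise ValueError(f"unsupported digest algorithm: {algo}")
    return hashlib.sha256(data).hexdigest() == expected

blob = b"pretend this is a layer tarball"
digest = "sha256:" + hashlib.sha256(blob).hexdigest()
print(verify_blob(blob, digest))         # True
print(verify_blob(b"tampered", digest))  # False
```

Any mirror, cache, or random fileserver that hands back bytes matching the digest is as good as the origin, which is why the authentication on the download path is pointless.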


I think containers are the way to go. Maybe on top of a VM (defense in depth: Swiss cheese is the only way to go imo). Something like Qubes can be great for VMs.

https://github.com/legobeat/l7-devenv/pull/153

This works for me (and yes, I do run it in VMs too). A key thing is that some secrets, like the GH token and signing keys, are not available even to the IDE and code in the environment that requires them. Like a poor man's HSM, made for dev, kinda. Also, the LLM assistant gets access to exactly what it needs. No more, no less.

You can have your cake and eat it too.

https://github.com/legobeat/l7-devenv


> I think containers is the way to go. Maybe on top of VM (defense in depth-swiss-cheese is the only way to go imo).

If you go for a VM, why involve containers at all? What additional security do you get from layering containers on top of VMs, compared to just straight up using a VM without containers?


VMs are great for coarse isolation ("dev box", "web surfing", etc). A typical qubesos workstation would have a handful.

In the setup I linked, separation is more fine-grained. Ephemeral container for each cargo/nodejs/python/go/gcc process. The IDE is in a separate container from its own language servers, and from the shell, which is separate from both the X server and the terminal window, the ssh agent, etc. Only relevant directories are shared. This runs my devenv with vscode fine on a 16GB RAM 8c machine.

You'd need like 1T RAM and over 9000 cores to have that run smoothly with real VMs ;)

Basically, containers can give you far more domains (with better performance but weaker isolation) on the same host.

The other upside is that the entire containerized setup can be run as an unprivileged user. So an escape means the attacker is still a nerfed local user. A typical VM escape would have a much shorter path to local root.


The theory is defense in depth. It's dubious whether it buys you much, but any malware now needs both a container escape and a VM escape.

In reality, if it's targeted malware, it will have both, and if it's a mass-spray attack like a simple VSCode extension, it won't have either. (Nigerian Prince theory: you don't want to deal with security-conscious people in a mass attack.)

