
Any time you see a command starting with “sudo” and with a path of “/” in the arguments, alarm bells should be going off. Here be dragons.


Right, I would be more surprised if this didn't break your system. I never would've tried unpacking a tarball to my FS root in the first place.


Yup... tar alone can't destroy your system, you need sudo (and the appropriate parameters) for that.
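To be fair, tar gives you the tools to check before you leap. A quick sanity pass on any tarball might look like this (the archive name here is a placeholder):

```shell
# "some-release.tar.gz" is a placeholder name.
# List the contents without extracting anything:
tar -tzf some-release.tar.gz | head -20

# Check for absolute paths or ".." components (GNU tar strips a leading
# "/" by default, but it's worth seeing what the archive *wants* to do):
tar -tzf some-release.tar.gz | grep -E '^/|(^|/)\.\.(/|$)' && echo "suspicious paths!"

# Extract into a scratch directory first, never straight into /:
mkdir -p ./inspect && tar -xzf some-release.tar.gz -C ./inspect
```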



What, you mean the increasingly popular install pattern

   curl ... | sudo bash
might have pitfalls? Gasp.


Well... Yes you're not wrong. But this isn't really any different from downloading an installer and entering your password to permit the install to proceed.

So I assume that you also never do that - and the downloaded installer is even worse, because you can't easily look inside the executable to determine what it does.

Shocking news: installing software installs the software.


You shouldn't do that. At least download the installer first, take a quick look at it, and decide whether you actually want to run it.

I personally avoid software whose manual suggests this type of install, because it makes me doubt the developers' judgment.


The content of the file being served can easily be switched based on the user agent accessing it (curl vs a browser).

Instead, you should wget the script, review it, and then execute it locally, or something similar.
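The download-first flow is only a few commands (the URL here is a made-up placeholder):

```shell
# Hypothetical URL; download to a file instead of piping into a shell.
wget -O install.sh https://example.com/install.sh

# Read it, or at least skim for nested curl|bash, rm -rf, sudo, etc.
less install.sh

# Only then run it, with the narrowest privileges that work:
sh install.sh          # or: sudo sh install.sh, if it truly needs root
```

A side benefit: once the script is on disk, the file you reviewed is exactly the file you run, which also defeats user-agent-based content switching.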


Yeah, no, you shouldn't do that. Kinda my point in writing that, right? But it's terrifyingly commonplace and needs to stop.


I don't even understand the reasoning to do that.

Maintaining that kind of monstrosity of shell scripts that have to work on all kinds of OSes and Linux distros must be a giant PITA, compared to the simplicity of making an AppImage/Flatpak and a set of deb/rpm packages for the most popular distros, plus clear instructions for port maintainers to do the same for Arch Linux and the BSDs.


One should never pipe curl output into `sudo bash`, but I think you've got things quite backwards here. Barring some extreme edge cases, putting binaries in /usr/bin, icons in /usr/share/icons, config files in /etc, libraries in /usr/lib and so on is standard across the Linux world, and simple utilities that are just archives of binaries, docs, and auxiliary files can easily be deployed to most distros with a common install script.

This is far, far less complex than having to maintain and distribute an entire bundled runtime environment, deal with inconsistent behavior with the local config, etc. Flatpak and AppImage have their use cases, but by no means are they simpler than just putting binaries in the right places -- they are in fact an entire additional layer of complexity.


Here's one I spotted a few weeks back, just so you know I'm not making shit up:

https://docs.waydro.id/usage/install-on-desktops (see Ubuntu section)


If it is not provided by the distribution, those things shouldn't go in /usr/bin|lib|share but in /usr/local/bin|lib|share


That's a reasonable practice, yes, if you're doing a manual direct install. In situations like this, though, I typically just write a quick PKGBUILD script so the package manager can manage it, which means pointing things at the direct /usr/* paths, not /usr/local/*.
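For reference, a bare-bones PKGBUILD for this kind of thing is almost nothing (every name and version here is made up, and sha256sums='SKIP' is only acceptable for local one-off packaging):

```shell
# Minimal hypothetical PKGBUILD; makepkg sources this file and
# provides $srcdir and $pkgdir.
pkgname=mytool
pkgver=1.0.0
pkgrel=1
arch=('x86_64')
source=("mytool-$pkgver.tar.gz")
sha256sums=('SKIP')

package() {
  # Files land under /usr inside the staging dir; after `makepkg -si`,
  # pacman owns and tracks them.
  install -Dm755 "$srcdir/mytool" "$pkgdir/usr/bin/mytool"
}
```

That way upgrades and removal go through pacman instead of leaving orphaned files around.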


I don't recall ever seeing this approach with sudo. Is it really becoming popular? It used to be just bash, e.g. `curl ... | bash`. But then there's nothing stopping the author from putting sudo inside that bash script and betting, with high probability, that NOPASSWD: is also there ;)


Indeed, if you preface the curl | bash with sudo apt install, there's no difference!


Yet we all npm install or run random binaries without a second thought.


I do neither of those things!


Yes, but someone has to do the dirty work ;)


Appreciate the sacrifice



