ELF doesn't have standard fields for describing where an executable came from.
Linux systems haven't historically needed this, because every executable on the system (outside of the user's home directory or sometimes /usr/local) should belong to a package or shouldn't be there at all. So the typical Linux equivalent would be `dpkg -S /usr/bin/executablename`, or a similar incantation on RPM or pacman or ...
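A hedged sketch of those per-distro incantations, wrapped in a single portability check (the `owner_of` function name and the fallback message are my own, not anything standard):

```shell
#!/bin/sh
# Ask the local package manager which package owns a file.
# Tries dpkg (Debian/Ubuntu), rpm (Fedora/SUSE), then pacman (Arch);
# prints a fallback message when no package manager is present or
# no package claims the file.
owner_of() {
    f="$1"
    if command -v dpkg >/dev/null 2>&1; then
        dpkg -S "$f" 2>/dev/null && return 0
    elif command -v rpm >/dev/null 2>&1; then
        rpm -qf "$f" 2>/dev/null && return 0
    elif command -v pacman >/dev/null 2>&1; then
        pacman -Qo "$f" 2>/dev/null && return 0
    fi
    echo "no package claims $f (or no supported package manager found)"
}

owner_of /bin/sh
```

Note that on distributions with a merged /usr, `dpkg -S` may want the canonical /usr/bin path rather than the /bin symlink, which is one more way the filesystem and the package database can disagree.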
Even proprietary software should be using a packaging system, whether that's the distribution packaging system, or some proprietary software containment system like flatpak/snap/etc.
Such metadata would be more important on a system that regularly installs not just proprietary software but unpackaged proprietary software.
This is the difference between /usr/bin and /usr/local/bin. The things managed by dpkg, rpm & co. go to /usr, and any non-packaged software should go to /usr/local (stuff built by compiling locally) or typically /opt (software supplied in binary form but not as a package). See for example the Filesystem Hierarchy Standard: https://www.pathname.com/fhs/pub/fhs-2.3.html.
> The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr.

> Locally installed software must be placed within /usr/local rather than /usr unless it is being installed to replace or upgrade software in /usr.
That is easily read as "non-system packages go into /usr/local", but that's obviously incorrect.
> every executable on the system (outside of the user's home directory) should belong to a package or shouldn't be there at all.
Is there any write-up about how this view came to be accepted in the most popular Linux distributions? Or, more generally, on the history of software packaging in the Linux world?
Depending on your perspective it's either not unique to Linux at all (every OS now has an integrated package manager), or it's entirely unique (no OS not directly derived from UNIX uses the same design).
It's worth remembering that Linux forked off from the AT&T UNIX design very early, and UNIX didn't have any notion of software management. It was an OS which assumed you'd compile programs yourself, probably because you wrote them yourself. It came out of a research lab funded by a government-granted monopoly, so it was designed for relatively expensive and powerful hardware. Like every early OS, UNIX had very little in the way of userspace services, so a pragmatic hack was to use the notion of well-known directories to locate things like man pages or binaries. The concept of software being overlaid onto the same directory structure followed naturally from that, which means the FS doesn't have any metadata in it describing what belongs to what. Vendors viewed the problem of software management as primarily one of how to add and remove optional components supplied with the base OS, and how to execute upgrades. By that point the ship of sorting files by type rather than by component name had already sailed, so they added package-manager databases on top to track the extra metadata the FS design couldn't.
Microsoft and Apple approached things differently. The FS was in their conception primarily a way for users to organize their own files. DOS didn't offer any services that required registration, so users got used to the idea of organizing programs into directories as they wished, and the background in cheap and heterogeneous hardware meant that programs were often in semi-random locations determined by things like how many floppy drives or hard disks the computer owner had purchased. Thus when Windows came along and started offering integrated services, assuming specific physical locations on disk wasn't viable. So they invented the registry, which served as the inverse of how UNIX did things: the registry was full of magic directories where small files could be placed to register things, and the FS was where the association between files and components (app folders) was kept.
> Like every early OS UNIX had very little in the way of userspace services so a pragmatic hack was to use the notion of well known directories to locate things like man pages or binaries.
> DOS didn't offer any services that required registration so users got used to the idea of organizing programs into directories as they wished, and the background in cheap and heterogeneous hardware meant that programs were often in semi-random locations determined by things like how many floppy drives or hard disks the computer owner had purchased.
But both systems had PATH almost from the beginning, yet on UNIX people mostly kept putting stuff into a common (cess)pool of /usr/bin and /usr/lib, while on the DOS/Windows side of things each program generally got its own separate directory. You can even see this in the difference between the .so and .dll search logic: on Windows, you put a .DLL next to the executable to do something similar to the LD_PRELOAD trick.
AT&T System V UNIX had pkgadd for installing binary packages before Linux was a thing. There was lots of commercial UNIX software distributed without source code from the 1980s onwards.