The QUEENCREEK components may as well be malware, not just "appearing" to be malware.
These services are insanely invasive and resource hungry, to the point that I regularly have to scrub them out of my system. If I don't, my CPU fans will spin up and make turbine noises while this monstrosity collects every piece of metadata it possibly can to be sent back to big brother at Intel.
To expand on the comments in the original article, this is the description text of one of these services:
    "Inte(R) System Usage Report Service
    SystemUsageReportSvc_QUEENCREEK monitors
    the computer system usage and helps to improve
    system's performance."
Intel is misspelled. That's insane for a Fortune 500 company.
At most such organisations, you'd be raked over the coals if you did something like this.
Let's also set aside the missing 'the' or 'your' in "helps to improve system's performance" -- either way, this is a flat-out lie. It doesn't improve performance in any way. It's spyware sending telemetry; that's all it does.
The industry-wide problem is that there are zero consequences for this type of shoddy code deployed to a billion devices globally. It's just waiting to be the next global CrowdStrike-style outage or remote code execution exploit.
PS: Right next to this spyware in the list of services is the "Intel® Dynamic Application Loader". I won't describe it here, read for yourself what this does "for you", and for state actors that might want to hide malware that even the operating system can't access: https://www.intel.com/content/www/us/en/developer/tools/dal/...
This shows how much of a false sense of security code signing can create when applied inconsistently like this: highlighting unsigned binaries as dangerous, yet displaying an entry like `python.exe malware.py` as trustworthy is… not great.
Relatedly, I really wish runtimes and interpreters would rename their process to the name of the file they are running by default. Finding out which `java` or `python` out of a dozen identical processes I need to kill isn't fun.
Yes, and on Linux I can also use the appropriate flags to `ps`, but I wish I didn't have to look at (potentially very cluttered) full command lines or command invocations.
Everything run as `<interpreter> <script>` just feels less "native" when dealing with processes than binary executables do (or things run via binfmt_misc, which is unfortunately not very common for Java applications at least, and seems like a mixed bag for Python as well).
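For what it's worth, here is roughly what that `ps` dance looks like on Linux (the `python` pattern is just an example; substitute whatever interpreter you're hunting):

```shell
# Short name (comm) vs. full command line (args): an interpreter shows up
# as e.g. "python3" in comm, while args reveals which script it is running
ps -eo pid,comm,args | grep '[p]ython' || true

# Print just the short name of one specific process (here: this shell)
ps -o comm= -p $$
```

The `[p]ython` bracket trick keeps the grep process itself out of the results, which is exactly the kind of clutter being complained about.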
There is no API for a process to change its name. More precisely, there is no concept of a “process name”, there is only the name of the executable file (image) that was loaded.
At least on Linux, it's definitely possible with prctl(2):
PR_SET_NAME
Set the name of the calling thread, using the value in the
location pointed to by name.
The name can be up to 16 bytes long, including the
terminating null byte. If the length of the string,
including the terminating null byte, exceeds 16 bytes, the
string is silently truncated.
I was talking about Windows, given the context of TFA. You can also name threads in Windows. Especially in background applications, however, the initial main thread may not exist for the whole duration of the process (e.g. consider pthread_exit), so I’m not sure how practical that approach would be.
I really hate it when major PC vendors name autorun tasks (or really any background task) with cryptic names that don't clearly identify the vendor and application. Yes, I realize we can't trust the name is legit without further verification. But when it is legit, knowing the vendor and app identity right in the name saves time. It would be nice if ALL applications did this but I can forgive a small open source project not doing so. However, when a Fortune 500 tech company with millions of users does it, it's unforgivable.
It costs nothing to make your users' lives just a little bit easier. Also, for fuck's sake, please populate the standard Windows file metadata for all your EXEs and DLLs when you're releasing products. I shouldn't have to run your app to find out the version number, vendor name, app name, release date, etc.
ELF doesn't have standard fields for describing where an executable came from.
Linux systems haven't historically needed this, because every executable on the system (outside of the user's home directory or sometimes /usr/local) should belong to a package or shouldn't be there at all. So the typical Linux equivalent would be `dpkg -S /usr/bin/executablename`, or a similar incantation on RPM or pacman or ...
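Those incantations side by side, guarded so the sketch runs regardless of distribution (the `/bin/ls` path is just an example of a packaged file):

```shell
# Which package owns a given file? The query tool depends on the distro.
if command -v dpkg >/dev/null 2>&1; then
    dpkg -S /bin/ls          # Debian/Ubuntu
elif command -v rpm >/dev/null 2>&1; then
    rpm -qf /bin/ls          # Fedora/RHEL/openSUSE
elif command -v pacman >/dev/null 2>&1; then
    pacman -Qo /bin/ls       # Arch
fi
```

Each prints the owning package, which is the metadata that Windows instead embeds in the file itself.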
Even proprietary software should be using a packaging system, whether that's the distribution packaging system, or some proprietary software containment system like flatpak/snap/etc.
Such metadata would be more important on a system that regularly installs not just proprietary software but unpackaged proprietary software.
This is the difference between /usr/bin and /usr/local/bin. The things managed by dpkg, rpm & co. go to /usr, and any non-packaged software should go to /usr/local (stuff built by compiling locally) or typically /opt (software supplied in binary form but not as a package). See for example the Filesystem Hierarchy Standard: https://www.pathname.com/fhs/pub/fhs-2.3.html.
The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr.
Locally installed software must be placed within /usr/local rather than /usr unless it is being installed to replace or upgrade software in /usr.
That is easily read as "non-system packages go into /usr/local", but that's obviously incorrect.
> every executable on the system (outside of the user's home directory) should belong to a package or shouldn't be there at all.
Is there any write-up about how this view came to be accepted in the most popular Linux distributions? Or, more generally, on the history of software packaging in the Linux world?
Depending on your perspective it's either not unique to Linux at all (every OS now has an integrated package manager), or it's entirely unique (no OS not directly derived from UNIX uses the same design).
It's worth remembering that Linux forks off from the AT&T UNIX design very early, and UNIX didn't have any notion of software management. It was an OS which assumed you'd compile programs yourself, probably because you wrote them yourself. It came out of a research lab funded by a government-granted monopoly, so it was designed for relatively expensive and powerful hardware.

Like every early OS, UNIX had very little in the way of userspace services, so a pragmatic hack was to use the notion of well known directories to locate things like man pages or binaries. The concept of software being overlaid onto the same directory structure followed naturally from that, which means the FS doesn't have any metadata describing what belongs to what. Vendors viewed the problem of software management primarily as one of adding and removing optional components supplied by the base OS, and of executing upgrades. By that point the ship of sorting files by type rather than by component name had already sailed, so they added package-manager databases on top to track the extra metadata the FS design couldn't.
Microsoft and Apple approached things differently. The FS was, in their conception, primarily a way for users to organize their own files. DOS didn't offer any services that required registration, so users got used to the idea of organizing programs into directories as they wished, and the background of cheap and heterogeneous hardware meant that programs were often in semi-random locations determined by things like how many floppy drives or hard disks the computer owner had purchased. Thus when Windows came along and started offering integrated services, assuming specific physical locations on disk wasn't viable. So they invented the registry, which served as the inverse of how UNIX did things: the registry was full of magic directories where small files could be placed to register things, and the FS was where the association between files and components (app folders) was kept.
> Like every early OS UNIX had very little in the way of userspace services so a pragmatic hack was to use the notion of well known directories to locate things like man pages or binaries.
> DOS didn't offer any services that required registration so users got used to the idea of organizing programs into directories as they wished, and the background in cheap and heterogenous hardware meant that programs were often in semi-random locations determined by things like how many floppy drives or hard disks the computer owner had purchased.
But both systems had PATH almost from the beginning, yet on UNIX people mostly kept putting stuff into a common (cess)pool of /usr/bin and /usr/lib, while on the DOS/Windows side of things each program generally got its own separate directory. You can even see this in the difference in .so/.dll search logic: on Windows, you put a .DLL next to the executable to do something similar to the LD_PRELOAD trick.
AT&T System V Unix had pkgadd for installing binary packages before Linux was a thing. There was lots of commercial unix software distributed without source code from the 1980s onwards.
I was thinking that it was named for the town in Arizona; it's within commuter range of Intel's large presence there. Perhaps the developer was browsing home listings at the time.
I like the entirely unsubtle irony of citing Hanlon's razor in the same message in which you suggest a more elaborate and intentionally malicious alternative explanation.
Right, because I knew someone would come along and spout it, since most haven't read the Edward Snowden leaks or don't understand how modern intelligence agencies work.
Somebody who expected to be able to hook more into it, or who intentionally left that many steps to allow someone else to hook into it. I've seen things like this where the intended result is that you can modify one parent launcher so others can call child launchers without modifying them, only adding more of them. Planning too broadly or too far ahead can, amusingly, be its own form of incompetence.
Thing is, that's very "sysadmin-y". A developer would typically just want their exe file to be called directly, and if they need to hook more things into the startup process, they'll write more code at the start of int main().