
Even on Windows if I send an EXE or MSI of my software to someone they get a scary security warning that prevents them from running it. The only guaranteed way around that is to be a big company (or a big open-source project).

If security really mattered, every OS would run applications in a proper sandbox, but why bother with that when you can just point your Web browser at a program running on someone else's server? Oh, but consent to these tracking cookies first.



> they get a scary security warning that prevents them from running it.

The huge difference is that it's only a warning, not a cryptographically locked-down system like Apple's.


Actually, in a way, it is. All Microsoft has to do is revoke your application's signing certificate and Windows Defender will prevent it from running on Windows computers.

Apple does the same thing with Notarization and Gatekeeper on macOS. If they choose to revoke your signing certificate, Gatekeeper will prevent your software from running on macOS.

That means if you do, say or compete with something that Microsoft or Apple doesn't like, they can prevent your apps from running on their platforms.


How come I can run so much software compiled from source then?


Likely because you've either disabled some of the overbearing security mechanisms at some point (SmartScreen is a toggle, and it's really frustrating if you're setting up a compiler toolchain) or you're running files that were produced on the local computer. If you disable all the privacy-invading checkboxes during Windows setup (most people don't), you partially neuter SmartScreen as well.

Every browser I know uses the Mark of the Web to tell Windows that a file came from the internet. It's stored in an NTFS alternate data stream, so you'd have to copy the file to a FAT32/exFAT drive to get rid of it. If a file comes from the internet, SmartScreen kicks in.

When a file is unsigned, SmartScreen essentially prevents you from running the file. You can work around it, but I had to look up a tutorial myself.
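For illustration, the Mark of the Web is just a small text stream named `Zone.Identifier` attached to the file. This is a sketch that simulates one as a plain file (the URLs are placeholders); `ZoneId=3` is the "Internet" zone, which is what triggers SmartScreen:

```shell
# The Mark of the Web lives in an NTFS alternate data stream named
# Zone.Identifier. Simulated here as a plain file for illustration.
cat > Zone.Identifier.example <<'EOF'
[ZoneTransfer]
ZoneId=3
ReferrerUrl=https://example.com/
HostUrl=https://example.com/app.exe
EOF

# A file is treated as "from the internet" when ZoneId is 3 (Internet)
# or 4 (Restricted sites):
if grep -q '^ZoneId=[34]' Zone.Identifier.example; then
    echo "marked as from the internet"
fi
```

On a real Windows system you can view the stream with `Get-Content .\app.exe -Stream Zone.Identifier` in PowerShell, and strip it with `Unblock-File .\app.exe`.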

If the file is signed, metadata will be extracted and submitted to Microsoft. If that fails, or the exact binary hasn't been run on a certain number of computers, SmartScreen will show a big scary warning despite your $500-a-year digital signing certificate. This is something developers just have to deal with every time they update their applications, but most people won't be among the first x to download the executable, and applications with auto-updaters that download updates in the background can set the necessary flags to work around SmartScreen.

The restrictions are there, but they're not there for (most) development environments and for most users of popular software.


> If the file is signed, metadata will be extracted and submitted to Microsoft. If that fails, or the exact binary hasn't been run on a certain number of computers, SmartScreen will show a big scary warning despite your $500-a-year digital signing certificate.

This is true and annoying, but only with regular certificates. The more expensive EV certificates bypass this “well known” check.


I don't know what platform you're using, what app distribution method you used, where the code was compiled, if the code was signed, where it was signed and by who, etc.

A generic answer to your question is that the software was signed by whoever compiled or distributed the software, which can include your own machine. Your own key might be in your trust store or your app distribution method might put their key in your trust store. Both macOS and Windows will treat software compiled on the same system it is run on as blessed to run without strict signing checks.

On macOS, ad-hoc certificates can be used, but the OS will treat those binaries as if they're radioactive. If you compiled code on macOS, the system will treat that software specially on that specific system and allow you to run it[1]. On Windows, certificates can be added to trust stores. Chocolatey, for example, has their own signing certificate for all of the compiled open source software they have in their repositories, so Windows allows their software to run.

The biggest issue comes with software distribution itself, where your code isn't blessed by default by the system it was compiled on, or doesn't have signing certificates in the users' trust stores, and Gatekeeper and Windows Defender go out of their way to stop your users from running software with signing certificates they don't like.

[1] https://apple.stackexchange.com/a/426854
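The commands involved on the macOS side are short. Here's a sketch assuming a locally built binary called `mytool` (the name is made up), guarded so the macOS-specific parts only run on macOS:

```shell
if [ "$(uname)" = "Darwin" ]; then
    # Ad-hoc sign: "-s -" signs with no identity at all, which is the
    # "radioactive" ad-hoc certificate case described above.
    codesign -s - ./mytool
    # Show, then strip, the quarantine attribute a browser would have set
    # on a downloaded copy (locally compiled binaries never get it):
    xattr -p com.apple.quarantine ./mytool 2>/dev/null || true
    xattr -d com.apple.quarantine ./mytool 2>/dev/null || true
else
    msg="macOS-only commands; shown for reference"
    echo "$msg"
fi
```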


I compiled it myself. It’s not signed as far as I know. I didn’t disable anything…


macOS and Windows treat binaries compiled on the machine they're run on specially. Check out the Stack Exchange link in my earlier comment; it gives details on how binaries compiled on the same machine they're run on don't have the 'quarantine' bit set on macOS.


> That means if you do, say or compete with something that Microsoft or Apple doesn't like, they can prevent your apps from running on their platforms.

Are there examples of Microsoft actually doing that?

Ability to prevent known malware from being run in the majority of PCs after detection seems like a useful feature from the Internet health point of view.


You can just turn off Defender. And being specifically put on a malware list isn't the same thing; if clearly false, it could be used in court.


Stuff like this happens, and it tends to not get legal attention: https://news.ycombinator.com/item?id=27914752

I learned long ago, and keep it clearly in my mind, that what AV considers "malware" and what the user considers malware are not entirely the same.

> and if clearly false could be used in court.

I do wonder if Windows becoming adware, but then the built-in antimalware detecting possible "competitors'" adware and removing it, could be challenged in court as anticompetitive behaviour.


The real issue is with software distribution.

You, personally, can turn off Windows Defender, but your users probably have no idea why the app they're trying to run doesn't work when they double click it. They're also probably shown multiple scary warnings that discourage them from using the app and trick them into thinking it's broken or malicious.

It's a hurdle just to convince users such an app isn't malware, and then it's an entirely other hurdle to help them actually run the software by bypassing Defender.


Call me paranoid, but I see it as a slippery slope. And for most users that security warning is as good as a cryptographic lockdown anyways.


They are certainly boiling the frog slowly.

> And for most users that security warning is as good as a cryptographic lockdown anyways.

On the other hand, I find it ironic how a lot of "security professionals" will complain constantly about users accepting security warnings with no thought anyway (and they usually use this argument to justify their increasingly authoritarian measures of controlling them.)


It can be simultaneously true that a warning can be a barrier to adoption of your software product versus a competitor, and that a warning is not an effective barrier for a user who thinks they're installing an unreleased video game or going to receive millions of dollars of crypto from foreign royalty.


>If security really mattered, every OS would run applications in a proper sandbox,

These OSes were designed decades ago, before we really had a good grasp on security. There were other significant concerns as well, such as performance.

Also, modern OS toolkits, such as those on macOS and Windows 11, are moving towards a permission and API model that will allow sandboxing. In fact, macOS is moving quite quickly towards this.

And lastly, there is a widely deployed OS that runs all applications in a proper sandbox: ChromeOS.

I think it's understood at this point by everyone in the industry that sandboxing is the future, but it's taking a while to get there.


> before we really had a good grasp on security

Not just that, but before we realised just how many people there would be trying to claw their way into any gap for all manner of dark purposes.

Early networked OS and protocol designers thought that people would, largely, cooperate with each other and share resources for the greater good.

I wish to live in their naively optimistic future, instead of the one with real humans :/


Code signing and associated warnings are significantly different from fully prohibiting the execution of unapproved code, I'm not sure why people struggle with the distinction. Windows has been doing this mostly the same way for literal decades.

iOS has a full sandbox which would apply even to "side loaded" applications, which makes the arbitrary constraint even more ridiculous as a "for your own good" measure.


Fun fact: This is the reason Google pivoted to the web, after being blocked as an alternative office suite on Windows.

They realized that they needed to change the platform for distribution, and this is why the web (post-Chromium) is now what it is, with all its absurd redundancies of APIs and bloat.

Only because Microsoft can't keep their shit together.

Apple is more complicated: despite the absolute control they've established (no other browser engine / JIT compiler process allowed, for whatever made-up reasons), they have not faced the European courts that forced Microsoft, over the exact same thing, to allow installing other browsers.

And now we are stuck with Safari, repeating the loop, because Apple can't keep their shit together.


If you hate cookies, you're going to be upset when you find out what's happening in native apps. It's an order of magnitude worse; they just don't have to ask or inform you first.



