"Client-server application security: Distribute the server's public RSA key with the client code, and do not use SSL."
It seems like the point of this is to make sure that only the server, which holds the proper private key, can decrypt what the client sends - BUT, aren't you just going to use OpenSSL to implement this (via TLS)? Is there some other way to accomplish this I'm not aware of? What's he specifically warning us against?
I'm warning you against using a complex and error-prone protocol where you don't need all the complexity. You might end up using OpenSSL anyway, but using OpenSSL to perform an RSA encryption is vastly safer than using OpenSSL to verify a certificate chain.
I am a bit mystified by you warning people off SSL. It's complex because the problem of establishing a trusted channel between two unrelated parties is complex. Simpler protocols with the same constraints have failed.
You should be clearer with your recommendation. If you can relax the constraint that the protocol has to work between strangers, then SSL offers functionality you don't need, and the actual interface to SSL has moving parts that might hurt you.
Most people who build crypto can't relax that constraint, even if they think they can. We've beaten several schemes that relied on pre-distributed public keys because of the out-of-band channels that wound up getting grafted on to bring new members into the group.
Finally, verifying a certificate chain isn't hard. It's constructing a verifiable certificate chain in the first place that has proven difficult. If IE7 didn't offer the "bad certificate" click-through warning, and simply failed the request, we'd be discussing the right problem instead of the red herring.
"You should be clearer with your recommendation. If you can relax the constraint that the protocol has to work between strangers, then SSL offers functionality you don't need, and the actual interface to SSL has moving parts that might hurt you."
I specifically mentioned client-server applications, didn't I? If you are handing out client code which talks to a server you run, you don't need the protocol to work between strangers.
You need a secure way of handing out client code, sure. There's no way to get around that. But once you've figured out how to do that, you might as well include the server public key.
If an attacker can mangle the server public key which you're distributing with the client code, they can mangle the client code, at which point you've already lost.
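The pinning idea being argued for here can be sketched as a fingerprint comparison. A minimal sketch, assuming the client ships with the server's public key bytes baked in (the key material below is a placeholder, not a real key):

```python
import hashlib
import hmac

# The server's public key, distributed with the client code at build
# time.  In practice this would be the DER-encoded RSA public key; a
# placeholder byte string stands in for it here.
PINNED_KEY = b"-----placeholder server public key bytes-----"
PINNED_FINGERPRINT = hashlib.sha256(PINNED_KEY).hexdigest()

def key_matches_pin(presented_key: bytes) -> bool:
    """Accept the server only if the key it presents hashes to the pin.

    compare_digest gives a constant-time comparison, avoiding a timing
    side channel on the fingerprint check.
    """
    fingerprint = hashlib.sha256(presented_key).hexdigest()
    return hmac.compare_digest(fingerprint, PINNED_FINGERPRINT)

# The genuine key passes; a substituted key (say, from a man in the
# middle) fails -- no certificate chain involved.
print(key_matches_pin(PINNED_KEY))                       # True
print(key_matches_pin(b"-----attacker key bytes-----"))  # False
```

The point of the thread's argument is visible in how little code this is: the entire trust decision is one hash comparison, with no chain building, no expiry, and no revocation lists to get wrong.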
So you draw an imaginary line around the parts of your application that have simple security requirements, make recommendations about the stuff inside the lines, ignore the stuff outside the lines, and berate the technologies that deal with the stuff outside the lines.
This is all the more critical when you consider how the usual web developer handles the complexities of chained certificate verification: namely, by disabling verification entirely, and instead trusting every upstream certificate as offered.
For example, as of Rails 2.3, ActiveResource (the default RESTful web service API client used in many Rails applications) disables all SSL verification whenever SSL is used, without so much as a configuration flag to enable it again.
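ActiveResource is Ruby, but the failure mode is the same in any stack. A sketch in Python's standard `ssl` module of what "disabling verification" amounts to, next to the verifying default:

```python
import ssl

# What a client library *should* do: the default context verifies the
# peer's certificate chain and checks the hostname.
good = ssl.create_default_context()
print(good.verify_mode == ssl.CERT_REQUIRED)  # True
print(good.check_hostname)                    # True

# What the ActiveResource behavior described above amounts to: trust
# whatever certificate the peer offers.  Any man in the middle can now
# present his own certificate and read the traffic.
bad = ssl.create_default_context()
bad.check_hostname = False          # must be disabled before verify_mode
bad.verify_mode = ssl.CERT_NONE
print(bad.verify_mode == ssl.CERT_NONE)  # True
```

Note that the "secure" and "screwed" configurations differ by two lines, which is part of why verification gets switched off so casually.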
This is a meme that needs to die. Users of SSL routinely fall into behavior traps that ruin the security of SSL. That doesn't leave them secure: it leaves them screwed. The problem that SSL is solving (and that applications are failing to solve) is hard.
If you can establish relationships between members of your protocol, you can rely on continuity for security. Continuity is what SSH uses. It's easier than authority, and it's sane to recommend that people take advantage of it when possible.
But when your users will constantly be enrolling new participants into your system, continuity becomes a huge design risk, just as everyone used to lose their SSH keys to sniffers at USENIX Security.
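The continuity model described here is essentially what SSH's known_hosts file implements: trust the key on first use, then insist it never changes. A minimal sketch (Python rather than SSH's own implementation, with an in-memory dict standing in for ~/.ssh/known_hosts):

```python
import hashlib

# In-memory stand-in for a known_hosts file: host -> key fingerprint.
known_hosts: dict[str, str] = {}

def check_continuity(host: str, presented_key: bytes) -> bool:
    """Trust-on-first-use: record the key the first time we see a host,
    and refuse the connection if it ever changes afterwards."""
    fingerprint = hashlib.sha256(presented_key).hexdigest()
    if host not in known_hosts:
        known_hosts[host] = fingerprint  # first contact: enroll the key
        return True
    return known_hosts[host] == fingerprint

print(check_continuity("example.com", b"key-A"))  # True  (first use)
print(check_continuity("example.com", b"key-A"))  # True  (same key)
print(check_continuity("example.com", b"key-B"))  # False (key changed)
```

The enrollment risk the comment is pointing at lives in the first branch: whoever is on the wire during first contact gets recorded as the legitimate peer, which is exactly the window an attacker exploits when new participants are constantly joining.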
This comment went a little over my head - if you don't mind, can you point me to something that explains what continuity and authority are in this context? I'm also curious to learn how people used to lose their SSH keys because of continuity. Thanks.
I'm concerned about the lack of revocation that comes with bundling a key in source. Shouldn't you warn people in your note that if the key is compromised, you're going to have a difficult time?
You should already have a mechanism in place for alerting people to vulnerabilities in your client code -- for all practical purposes, a compromised server key is just a vulnerability in the client code which needs to be corrected by upgrading to a newer version of the client.