One of the largest holes in encrypted communication is still the fact that the vast majority of email is neither digitally signed nor encrypted. And even when it is, the usual schemes do not encrypt the subject line.
I wish there was something like Let's Encrypt, but for email. Just make it trivial to sign and encrypt your mail. Also, mail clients should give a huge warning for unencrypted and/or unsigned mail, just like browsers do with web sites. Right now, at least on Outlook for macOS, you only get a happy green padlock on signed email, if you ever receive one.
I think Latacora's (famous?) post[1] sums the situation up here nicely: email is designed in such a way that precludes an encryption scheme that actually works for ordinary users. Even power users consistently fail to use email encryption correctly.
If we care about secure communication, then we should be nudging users towards protocols that enable encryption, rather than fighting against it. For 99% of cases, that probably means Signal.
I don't think that Signal is a proper substitute for anything that email is used for. Maybe it would be better to work on more secure successors or extensions to email.
You know you're in trouble when people start talking about forward secrecy being problematic. What you're saying about the "email-like use case" for cryptography is that it's unserious protection, because a lack of forward secrecy practically guarantees full decryption of the entire history of messages, for any ordinary participant in the system.
Sure. Because people overwhelmingly aren't relying on the security of their email; it's overwhelmingly stuff no adversary would care to read. Then they retrofit the UX requirements they have for those boring mails onto all emails, and suggest that encrypted email should just accept those as constraints, and then we'll declare victory.
Eventually a private key will leak, and without forward secrecy, that private key will probably decrypt all past messages to that person, and all future messages to that person, until they give all their correspondents a new key.
With email, because people quote when replying, you'll get the other side's messages too.
Like, the simple PGP-like system where the sender encrypts the message using the recipient's public RSA key.
And of course it's not improved by switching from RSA to ECIES.
You need to ratchet the key, or double-ratchet it like the Signal protocol does.
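To make that concrete, here's a toy Python sketch of the symmetric half of the idea, a simple hash ratchet (not Signal's full double ratchet, and the starting secret below is just a placeholder): each message gets a fresh key derived from the previous chain key, and once the old chain key is erased, a later compromise can't decrypt earlier traffic.

    import hashlib

    def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
        # Derive the next chain key and a one-off message key, then the
        # caller throws the old chain key away.
        next_chain_key = hashlib.sha256(chain_key + b"\x01").digest()
        message_key = hashlib.sha256(chain_key + b"\x02").digest()
        return next_chain_key, message_key

    chain_key = b"placeholder: secret from some initial key agreement"
    for i in range(3):
        chain_key, message_key = ratchet(chain_key)
        # encrypt message i under message_key, then forget message_key
        print(i, message_key.hex()[:16])

A static PGP/ECIES key pair has no analogue of this step, which is the whole problem.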
Email as a concept can evolve. We can break backward compatibility. Call it email v2 and include some killer features. If enough major players and users get involved, then it'll happen.
My hope is it'll be something like Dark Mail, yet with a carve out for enterprise recipients to inject their controls and anti-malware before end-user delivery. (To combat spam and malware.)
In theory the giants that already hold the vast majority of all email communications - like Microsoft and Alphabet - are in a prime position to introduce a successor, hopefully this time with a receipt so the last argument in favour of fax dies off. At the same time, they have no proper motivation to do so.
That’s the point about interoperability. If we’re going to make “email v2” (not a terrible idea!), then the considerations that will go into securing it will ensure that it’s entirely incompatible with the thing we currently call email.
In other words: without sufficient clarity, email v2 just confuses people like my parents. Who would be better served by Signal anyways.
Signal has no concept of trust delegation; also, the verification UI is extremely hidden and sucks (e.g. my brother, who works at a FAANG, had no idea it existed or what it did - it's also extremely confusing when you open it).
This creates two problems: (1) Signal functionally operates as "trust on first use", and (2) Signal has no system by which you can communicate to "conceptual entities" - i.e. companies. There's no way in Signal to talk to, say, a bank or a government agency as an entity (IMO, to turn a profit, this is the business Signal actually needs to be in).
Which makes it a bit of a stochastic security measure: encrypt enough communications and, when something important does come up, the window to intercept it has probably already closed - except, of course, that nobody knows what to do with those verification-number messages, and so they ignore them.
> (2) Signal has no system by which you can communicate to "conceptual entities" - i.e. companies.
This has always kind of bugged me with email as compared to physical mail: While with a physical mailbox I can write a letter "to whom it may concern" and throw it in, with email I need to find out if the special, general purpose inbox is info@, contact@, hello@, or whatever other address the company uses, assuming they use the same domain for their email as they do for their website.
On the next level, there is no first-class support for stuff like 'send this message to person X, although it is addressed to organisation Y, where X works.' Basically, acknowledging the legal and organisational reality that while (single) humans might read, process and respond to communication, it is the legal entity that is actually being addressed.
There is a standard for this (RFC 2142); most US companies I encounter with over 100 people seem to support at least the security, info, postmaster, and support mailboxes.
I agree that the verification UI sucks. I have similar stories about otherwise technical people not knowing about it or otherwise not understanding it.
At the same time: the relevant comparison here is email. Email isn’t even TOFU between arbitrary identities; it’s trust-on-each-message. Similarly for conceptual identities (like a bank’s catch-all address).
(I also agree with your point about this needing to be one of Signal’s businesses. WhatsApp and other chats already do this, I believe.)
Email these days is, however, tied to DKIM and domains. We have UI problems, but communicating with a company's email servers at their domain name can reasonably be expected to be communicating with that company.
It's just that the security story there, if you never want the content disclosed, isn't great - but conversely, conceptual-entity communications are always going to be a bit public by nature.
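For what it's worth, that domain-level check is cheap to do on the receiving side; a rough sketch using the third-party dkimpy package (my own choice of library and file name, purely for illustration):

    # pip install dkimpy
    import dkim

    with open("message.eml", "rb") as f:   # example file name
        raw_message = f.read()

    # dkim.verify() fetches the selector's public key from DNS and checks
    # the DKIM-Signature header against the signed headers and body.
    if dkim.verify(raw_message):
        print("signed by the sending domain")
    else:
        print("no valid DKIM signature")

It tells you the domain vouches for the message, nothing about who can read it in transit or at rest.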
There's a whole other rant I have about this problem, where we really lack domain specific trust standards - i.e. communicating with a business, what I want to know is "is this a recognized legal business entity in its jurisdiction, and what's its status to mine?" which is very different to "I need to make absolutely sure me and John Smith's communication is just between us" - but they're in the same space of problem.
> There's a whole other rant I have about this problem, where we really lack domain specific trust standards - i.e. communicating with a business, what I want to know is "is this a recognized legal business entity in its jurisdiction, and what's its status to mine?"
I have the same pain, but this seems more like a regulatory issue than a technological one. Here in Germany, (basically all) legal entities need to publish a physical address where they are reachable; it would be easy, in theory, to extend this to a reachable domain or email address, thereby giving a guarantee, at least in Germany, that you are interacting with the business you are expecting. As you said, DKIM already exists.
DKIM is an anti-spam tool, not an end-to-end encryption tool (obviously, it's not end-to-end at all, and if you're relying on it, you might as well forget about message encryption, because you've simply decided to trust your server).
I agree with the sentiment, but I question how useful this would really be. Most people nowadays unfortunately use web clients, so the keys are going to have to be stored somewhere else, since backing up a browser's local storage is no easy task. If you don't have sole access to keys, but rather the keys are controlled by the same entities that control your email, I don't think there will be any benefit.
I have similar doubts as well. Especially considering that digital signatures and encryption do not protect against impersonation attacks, like phishing via facebok.com or other similar sounding domains, in either the case of websites or email. But without widespread use you don't even have the option.
There are plenty of email users on IMAP, and they use web mail to host mail storage. The IMAP clients can do S/MIME (or PGP I suppose).
The bigger problem is trustworthy user discovery service i.e. a directory to exchange public keys. This exists at an enterprise level (active directory) but not globally.
Usually anyone that buys into the viability of PGP email will also tell me with a straight face that the MIT keyserver is completely appropriate for civilians.
Search is another problem: instead of simply being able to rely on server-side full-text search, every client needs to download all mail, decrypt it all, and then create and maintain its own local search index.
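A toy sketch of what each client ends up maintaining (the message IDs and bodies are made up, and a real client would persist the index rather than rebuild it):

    from collections import defaultdict

    def build_index(decrypted_messages: dict[str, str]) -> dict[str, set[str]]:
        # Map each lower-cased word to the set of message IDs containing it.
        index: dict[str, set[str]] = defaultdict(set)
        for msg_id, body in decrypted_messages.items():
            for word in body.lower().split():
                index[word].add(msg_id)
        return index

    index = build_index({"msg-1": "Quarterly report attached",
                         "msg-2": "Please resend the report"})
    print(sorted(index["report"]))  # ['msg-1', 'msg-2']

Multiply that by every device you read mail on, since none of them can ask the server to search for them.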
This is actually the best example for why encryption isn't a human right. Postal mail isn't encrypted, telephone calls aren't encrypted, and the UN hasn't made a declaration about that. Why is that?
It's because encryption is a red herring. The theory that encryption is going to stop government surveillance is ridiculous. Even a perfect technology is not going to override national politics. Any government oppressive enough to require surveillance of its citizens will not be stopped by encryption. They will just block it or require a backdoor, or find yet more exploits that they won't announce, in order to take advantage silently (whether it's the government directly, or one of the many 0day-for-pay players).
Either way, they will enforce their will, until the citizens reform their government. You can't tech your way out of politics. You want an end to surveillance? Then go get involved in politics! Force your government to stop surveilling its citizens! Don't wait for someone else to do the heavy lifting for you.
Postal mail isn't encrypted because it's not viable for ordinary citizens to encrypt physical mail.
However, tampering with mail, or opening someone else's, is a federal crime with harsh penalties attached. The message is clear: "postal secrecy" is highly valued.
Given encryption's increasingly critical role in taking control away from users, I've become more skeptical of such proclamations in the past few years.
Governments are against encryption, because it takes power away from them.
Yet on the other side, there's Big Tech who will use encryption for things like Secure Boot and continued locking-down of "devices" (they don't even want you to think it's a general-purpose computer) against users, ostensibly for "security". Thus I believe their reasons to oppose the government's opposition to encryption are not truly in the interests of the users, but because it gives them the power instead.
Encryption is classified as a munition because that's what it fundamentally is; everyone would probably agree that it's better to have a gun yourself, instead of one pointed at you.
I don't know of a simple solution to this problem. I'm just offering a critical alternative viewpoint to the "encryption = good" and "encryption = bad" sides.
"By offering end-to-end encryption they can give power back to the users and simultaneously simplify their regulatory burden."
This is exactly why I use e2ee in my app. I sleep better knowing that the keys to the castle are not solely in my hands, but rather with each user. They will always have a say in how their data is used because they ultimately control access.
"Technology companies currently use encryption positively to keep your bank transactions and online purchases safe and secure. Encryption has many other uses throughout everyday life, but some social media companies such as Meta are proposing to implement or already have implemented E2EE in private messaging spaces. E2EE overrides current controls in place that help to keep children safe and potentially poses a huge risk."
This line is becoming quite a pattern in UK (Tory) government rhetoric. They forcefully state wishful thinking as if it were a fact. There are no controls because there is no possibility of controls, as a matter of mathematics. But by exploiting ignorance, the Tories managed to beast parliament into an intractable "just imagine if..." clause in the Online Safety Bill... a law that provisionally exists if and only if the impossible becomes possible.
And then they sit back, fold their arms and call those "controls". Controls must be effective, otherwise they're just buttons for show that aren't wired to anything.
The reason nation states are losing ground to BigTech is because they are using 18th century power politics to fight 21st century logic. I do wish my government would go take some basic CS and cryptography classes.
> There are no controls because there is no possibility of controls, as a matter of mathematics.
This is kind of true because, for example, the one-time pad is information-theoretically secure, and anyone could choose to use a one-time pad with anyone else given prior arrangements. Or anyone could choose to use RSA for confidentiality with anyone else given a mutual desire to communicate confidentially and an authentic but not confidential channel.
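A toy illustration of the one-time pad point (the plaintext and pad below are made up; the pad has to be truly random, as long as the message, pre-shared, and never reused):

    import secrets

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    plaintext = b"meet at noon"
    pad = secrets.token_bytes(len(plaintext))  # pre-shared out of band
    ciphertext = xor(plaintext, pad)

    # Without the pad, every same-length plaintext is equally consistent
    # with the ciphertext; with it, decryption is a second XOR.
    assert xor(ciphertext, pad) == plaintext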
However, there's nothing mathematically stopping a government from punishing people who are observed to follow a protocol to create a confidential channel. Although Eben Moglen has argued that the right to speak PGP is the right to speak Navajo, a government could conceivably choose to punish people for speaking a foreign language.
One might wish (I would certainly wish) that people would be extremely upset by this, but I guess it wouldn't contravene mathematics.
In fact, there are some historic cases where populations were subject to official military censorship in wartime, which included overt government review of some of their communications, and possibly a prohibition on the use of "codes", and sometimes restrictions on the use of foreign languages. One might again wish that this would be both much less acceptable and much less feasible now than in the past, but it's not completely impossible.
> there are some historic cases where populations were subject to official military censorship in wartime,
That's fine, and those may have been legitimate needs at the time. Overcoming the Nazis meant stilling loose lips.
But are we at war? I mean officially, according to UN declarations? And if we are not at war, or in an official state of emergency, and official censorship re-emerges, we need a rethink.
So long as decent people acknowledge that we are under illegitimate censorship accordant with a tyrannical/fascist regime, then they must reposition with respect to resistance and toppling that regime.
What's problematic for me (and everybody else here, I think) is the double-think - a widespread belief that we remain in a liberal democracy while in reality de-facto censorship has snuck in.
The better answer though is that it's a misdirect: they're pointing at encryption as the bad guy, the one thing which takes place solely in the digital space and is activity-agnostic - nothing which happens with encrypted bits, by itself, can do anything in the real world.
"Pedophiles and terrorists" on the other hand, have to do a lot of things in the physical world in order to actually be pedophiles and terrorists. And the vast majority of work taking them down is infiltration - which is to say, nobody breaks encryption, they break trust - which is much easier, and has side-benefits like "generating actual evidence".
This "real world" link is missed (intentionally by politicians) by a majority of people because the Internet has become "the great solution". It scales, we can stay seated at our desks and still (claim to) solve crime.
Electronic communications are merely 'theory' to the physical world's 'reality'. It's like the phrase "no plan survives contact with the enemy".
The Internet only contains the symptoms. If you want to know where there are real problems that need solving in this arena, speak to some teachers - but no, children's services remain under-funded and under-staffed whilst we spend more and more on internet surveillance.
Social problems can not be solved by technology legislation.
Nation States are one of Big Tech's best customers.
Nation States love to give legal and regulatory incentives as 'guidelines'.
It's the average joe that loses. The only way out is privacy from both nations and corporations.
If another person knows your technology exists, it will be inexorably wrenched from your hands and controlled by the biggest bully in town. We see it every single day.
And for hiding the activities of pedophiles and terrorists.
Encryption isn't solely beneficial to human rights, it enables considerable harm to be shielded from scrutiny. There has to be a balance with the extent to which applications of encryption are permitted in society.
Cars, guns, knives, sharp sticks, rocks, language, religion, belief, opinion, cigarettes, sugar, bad parenting, divorce, marriage. Ban 'em all!
Humanity's evils pre-date technology, therefore banning technology will not remove them.
I'd also go on to say that many of the examples I've listed above have more to do with the creation of both terrorists and pedophiles, by a very wide margin, than encryption.
Problems that are hard to solve inevitably have scapegoat solutions thrown at them.
Without encryption pedophiles have access to your family photos. One innocent photograph of a toddler bathing could become the gratification of a sex criminal.
If communication isn't encrypted, this means pedos have a higher chance of getting that data, either through MITM or through some more esoteric method if encryption exists but has a backdoor. If a backdoor exists, it will be used.