In August 2013, Ladar Levison shut down Lavabit, a secure email service, after the US Government ordered him to turn over his company’s private key. The FBI had a search warrant to view the mail of a single user, probably Edward Snowden, but Lavabit had been designed so that each user’s email was individually encrypted, and the only way it could be accessed was by compromising the security of every email user on the server. Rather than comply with what he considered to be government overreach, Levison chose to shut down his business. However, he hasn’t been idle since then. He’s currently working on a project called the Dark Internet Mail Environment (DIME), which will provide an open standard for end-to-end encrypted email while also minimizing leakage of metadata. Here’s his presentation on this at Defcon earlier this year:
When an email message is sent from one domain that has adopted DIME to another, the message is signed and encrypted. When one of the domains has not adopted DIME, the system reverts to standard email, with a warning attached in the email client of the DIME user to let them know that the message is not secure.
On hearing of DIME, my first thought was: how do you filter out spam if both the content of the message and its metadata are encrypted?
One method might be for the email client to calculate signatures from the header and content of the message and send them to a central service to see if any of them are indicative of spam. While the signatures would not reveal the content of the message in full, they could potentially reveal more of the content than would be acceptable in a secure system.
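To make the idea concrete, here is a minimal sketch of that approach. The chunking scheme, hash choice, and the `KNOWN_SPAM` lookup set are all hypothetical illustrations, not anything DIME specifies; a real service would use far more robust signatures, and the point in the text stands: even hashed chunks leak information about message content.

```python
import hashlib

def message_signatures(subject: str, body: str, chunk_words: int = 5) -> list[str]:
    """Hash overlapping word chunks of a message -- a toy stand-in for the
    content signatures a central filtering service might be asked about."""
    words = (subject + " " + body).lower().split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(max(1, len(words) - chunk_words + 1))]
    return [hashlib.sha256(c.encode()).hexdigest() for c in chunks]

# Hypothetical set of signatures the central service has flagged as spam.
KNOWN_SPAM = {hashlib.sha256(b"cheap meds fast shipping no").hexdigest()}

def looks_like_spam(subject: str, body: str) -> bool:
    """Client-side check: does any signature match a known spam signature?"""
    return any(sig in KNOWN_SPAM for sig in message_signatures(subject, body))
```

Note the privacy leak the paragraph warns about: whoever operates the lookup service learns which chunk hashes your mail contains, which can reveal a surprising amount about the plaintext.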
Another alternative could be that rather than sending the signatures to a central service, a complete list of all spam signatures is sent to the client, and the testing takes place locally, with no information being sent back to the central server. However, a complete spam signature database would most likely be too large to be readily distributed to millions of individual workstations and devices and updated in real time.
Happily, the whole signature database does not need to be distributed. Remember that DIME communications are signed, and that signature includes the sending domain. The recipient only needs access to a blacklist of domains that send spam in order to filter out unwanted encrypted messages. That’s a short enough list that it could be distributed and updated in real time, especially if a peer-to-peer distribution method were used to reduce the load on the central server. In fact it would not even need to be delivered to individual clients, as the sending domain (but very little else) is visible to the DIME mail server in the receiving domain. Only one copy of the blacklist per domain would be required. This is similar to the situation for other domain signing systems such as DKIM/DMARC, except there is no further line of defense if the sending domain is not blacklisted.
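Since the sending domain is the one field the receiving DIME server can see, the filtering step reduces to a set-membership test. The following sketch assumes a hypothetical blacklist feed; the domain names are invented for illustration:

```python
# Hypothetical blacklist, e.g. synced peer-to-peer or pulled from a feed.
BLACKLISTED_DOMAINS = {"spam-factory.example", "bulk-mailer.example"}

def accept_dime_message(signing_domain: str, blacklist: set[str]) -> bool:
    """The receiving server checks the cryptographically signed sending
    domain -- the only metadata visible to it -- against the blacklist."""
    return signing_domain.lower().rstrip(".") not in blacklist
```

One copy of this set per receiving domain suffices, which is what keeps the scheme cheap compared with shipping a full content-signature database to every client.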
One way to beat this system would be to use malicious email accounts in trusted domains to send spam, either by phishing for email credentials at ISPs that support DIME, or by setting up free email accounts should any of the free email providers start supporting it. If DIME or something like it becomes ubiquitous, it is vital that service providers verify that each user is legitimate and remains so. Domain reputation becomes all-important, and a small number of bad users could send enough spam to poison the reputation of the whole domain, resulting in legitimate mail from the other users going to the spam folder. Two-factor authentication can be used to reduce the risk of phishing; however, the problem of spammer-owned email accounts is more difficult.
If someone starts offering free DIME email accounts, then some form of Turing Test more advanced than solving a CAPTCHA will be required. I discussed the effectiveness of various sorts of Turing Test in an earlier blog post. Generally, the more effective the Turing Test, the more the person taking it has to establish their identity. However, an activist opposing a repressive government might not want to provide much identity information to a secure email provider or anybody else. One solution might be to rate-limit based on the level of identification provided: a new account with no other ID gets ten messages a day, an account that provides a phone number for two-factor authentication gets 100 messages a day, and an account that makes a small payment via PayPal gets 1,000 messages a day.
Another possible attack would be for a spammer to set up a large number of disposable domains, install DIME on all of them and send out as much spam as possible before they get blocked. (We already see widespread use of disposable domains by affiliate, diet pill, and pharma spammers.) However, the computing resources required to encrypt each message would slow down the rate at which a spammer could send mail, meaning they could deliver less volume before getting blacklisted. It’s possible that a really large-scale use of disposable domains could make the blacklist too large to be manageable.
I’m sure we will see far more messaging encrypted in the future, and whether it is Levison’s solution or some other, to succeed it will have to withstand attacks from spammers. In fact, decent spam filtering could well mean the difference between success and failure for a new mail system. Let’s hope DIME gets this one right.