DOJ: Strong encryption that we don’t have access to is “unreasonable” (arstechnica.com)
306 points by nimbius on Nov 10, 2017 | 224 comments


This subject always reminds me of a talk I heard Eben Moglen give, where he said:

"In 1995, there was a debate at Harvard Law School – four of us discussing the future of public key encryption and its control. I was on the side, I suppose, of freedom. It’s where I try to be. With me at that debate was a man called Daniel Weitzner who now works in the White House making Internet policy for the Obama administration.

On the other side was the then Deputy Attorney General of the United States and a lawyer in private practice named Stewart Baker who had been chief counsel to the National Security Agency, our listeners, and who was then in private life helping businesses to deal with the listeners. He then became, later on, the deputy for policy planning in the Department of Homeland Security in the United States and has much to do with what happened in our network after 2001.

At any rate, the four of us spent two pleasant hours debating the right to encrypt and at the end there was a little dinner party at the Harvard faculty club, and at the end, after all the food had been taken away and just the port and the walnuts were left on the table, Stewart said, “All right, among us now that we are all in private, just us girls, I’ll let our hair down.”

He didn’t have much hair even then, but he let it down.

“We are not going to prosecute your client, Mr. Zimmermann," he said. “Public key encryption will become available. We fought a long, losing battle against it, but it was just a delaying tactic.” And then he looked around the room and he said, ”But nobody cares about anonymity, do they?"

And a cold chill went up my spine and I thought, all right, Stewart, and now I know you’re going to spend the next twenty years trying to eliminate anonymity in human society and I am going to try to stop you and we’ll see how it goes.

And it’s going badly."

https://www.softwarefreedom.org/events/2012/Moglen-rePublica...


It's odd because people's desire for anonymity is preventing their use of good encryption. A solid online identity and a reliable way to get data to you without third parties would be a good foundation for exchanging keys and encrypting data between parties. I'm talking fixed IP addresses as a starting point. We have a whole infrastructure to make it easy to interact with public web sites (DNS, etc.) but it's very hard to find and send a packet to your friend. Any efforts to change this would be seen by a lot of people as an attack on anonymity.

I don't want make-believe anonymity; I want to know where data is going and where it's coming from. Once I have that I can encrypt for privacy, and web sites can strip identifying information if they want to provide an anonymous forum.


Aside from widely broadcast one-time pad encrypted/authenticated messages (which provides anonymity for the receiver, not the sender -- that can be bootstrapped), deterministic rotating asymmetric key pairs (say, a non-compromised curve ECC scheme using Key Families) coupled with a relatively-high non-deterministic latency mixnet delivery system using onion/garlic routing for intermix peers goes a long way towards achieving data security, authentication and metadata anonymity for all involved parties. Discovery and routing in such a system is a very complicated problem -- and it's one yet to be solved; though, generally if one is going to such lengths, one has a rendezvous system already in place.
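
To make the "deterministic rotating asymmetric key pairs" idea a bit more concrete, here is a heavily simplified sketch (Python with the "cryptography" package; the epoch scheme, labels, and rotation period are mine, not any real Key Family protocol -- real schemes also let peers derive the public halves without holding the seed):

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    EPOCH_SECONDS = 24 * 3600  # hypothetical rotation period

    def epoch_keypair(master_seed: bytes, epoch=None):
        # Deterministically derive this epoch's X25519 key pair from a master seed,
        # so keys rotate on a schedule without ever being transmitted or stored.
        if epoch is None:
            epoch = int(time.time()) // EPOCH_SECONDS
        seed = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"rotating-key-epoch-%d" % epoch).derive(master_seed)
        private_key = X25519PrivateKey.from_private_bytes(seed)
        return private_key, private_key.public_key()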

The primary problem is that it's expensive for users -- expensive in terms of time, cost, and complexity; which means usage is low. Consequently, it also makes people targets simply for using the system, as the number of those using the system is relatively small -- something one can't really get away from easily unless one makes it look like one isn't using the technology. To do that, one then has to go through the effort of creating cover traffic -- and creating consistently good cover traffic (good enough to fool a human analyst, because one is one of thousands, not one of millions/thousands of millions) is immensely difficult, and techniques change over time with local conditions so it's hard to automate. Life gets really, really difficult when survival depends not only on keeping people out but also on keeping them from knowing anything of interest is there in the first place!

Don't forget about stylometry and inadvertent signatures; never communicate in real time, avoid absolutely everything but plain text data if at all possible, write very plainly (say, using only first few thousand most common words in your language) and use stylometry defeating tools (for example, Anonymouth -- though I haven't audited it; others have, but it's still just a thin layer to apply to other work) to prevent others from creating signatures based on the words you use and how you use them. NEVER forget that binary data you send might contain metadata fields to give you away (version numbers, encoding settings -- all seemingly innocuous but possibly unique to you!). Sending images? Make absolutely certain your camera doesn't have sensor glitches that can create a signature. ( see https://www.schneier.com/blog/archives/2006/04/digital_camer... ) Don't forget your surroundings either ( NSFW Language/Topic (4Chan helps track down targets) but incredibly illustrative - https://i.imgur.com/nLCklgZ.jpg ). Sending scans of documents? Most new printers print identifying patterns using steganographic techniques. ( see https://www.eff.org/issues/printers ). Stop and meet someone and both of you brought your mobiles? The fact that your phones traveled together, and where you went, is recorded (CO-TRAVELLER; see https://www.washingtonpost.com/world/national-security/nsa-t... ). The world is INCREDIBLY hostile to anonymity seekers.
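
As a small practical illustration of the metadata problem, here's a sketch of stripping file-level metadata from an image before sending it (assuming Python with the Pillow package; note this only drops EXIF-style fields -- it does nothing about sensor-noise fingerprints, printer dots, or stylometry):

    from PIL import Image  # assumes the Pillow package is installed

    def strip_image_metadata(src_path: str, dst_path: str) -> None:
        # Re-encode the image from raw pixel data only, discarding EXIF and
        # other embedded metadata (camera model, GPS, software version, ...).
        img = Image.open(src_path)
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)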

Relying on third parties to strip data isn't a workable anonymity solution, because you can't trust them to do so correctly, or at all. Worse, with pervasive internet monitoring (which, thanks to Snowden, we know is real), the mere fact that you've communicated with someone, or with a site storing your data, is recorded somewhere that no party authorised to participate in your conversation can wipe. Generally, unless you remain anonymous to the person you're communicating with -- at least until you choose to identify yourself within communications whose contents can later be repudiated (say, an olm/axolotl ratchet, by publishing the now-expired private key once it's no longer good for future authentication) -- you won't be anonymous to any party, period.

It's a trust no-one, verify everything type of situation; people don't deal well with that. Pervasive encryption is only the first and easiest step. If you want true (or even just reasonable) anonymity, things get very expensive, very quickly.


I don’t get it.


I think, roughly, the government can't really stop you encrypting stuff but it's fairly easy for them and a bunch of advertisers to track what you are up to online.


Seems like that point would be easier to make without a story that is clearly just a vehicle for name-dropping, but what do I know.


The government has demonstrated that they will abuse every power given to them, and even those that weren't. I would not entrust every aspect of my personal information to the very same organizations that indefinitely detain people, including American citizens, without access to a lawyer while committing acts of torture; and the ones that said the Patriot Act could never be used for domestic surveillance; that lied about being unable to unlock the phone of the last guy they tried this with to get the law changed; and that continue to engage in parallel construction, torture by proxy, and extraordinary rendition. And now they're saying that we should trust them to stop those bad guys once again. And the most chilling aspect of this request, despite its inherent absurdity, unenforceability, and threat to freedom and privacy, is that they have a very good chance of winning that power.


Everything you're saying is true, but I think there's something far less dramatic that somehow rings true for people even more. One revelation of Snowden is that the NSA would regularly 'seize' and pass around attractive nudes and other such media that were intercepted from people. [1] At other times there was something that even got its own little name 'LOVEINT' which would involve NSA workers spying on their love or romantic interests for reasons. [2]

When you talk about things like torture nobody really worries about things like that happening to them, because it mostly won't. But I think people tend to forget that these ABC agencies are made up of people. People that, like all people, are very flawed. My family tends to be of the 'I don't care - I have nothing to hide' state of mind. Referencing these events, which are really just basic consequences of human nature, sent them on a 180 fast.

[1] - http://www.businessinsider.com/edward-snowden-guardian-inter...

[2] - https://blogs.wsj.com/washwire/2013/08/23/nsa-officers-somet...


Another point in the debate about "I have nothing to hide":

Who decides what will be meaningful to hide in 20 years' time? The people you had relationships with 20 years ago -- what are they doing now? You don't know, but the government surely does, and you had connections to these people who are now possible terrorists. Good luck getting through the security check at the next airport.


> that lied about being unable to unlock the phone of the last guy they tried this with to get the law changed…

I found it very telling that they announced their inability to access the phone only a few days after the incident. It struck me as a very clear PR move that was designed to lay the groundwork to shape public opinion. My threshold for declaring that the device is inaccessible requires more time for research and effort than this.


Don't forget that the phone was unencrypted when they got it, and that it was the FBI who locked it in the first place: https://www.nytimes.com/2016/03/02/technology/apple-and-fbi-...


I have no idea how I never knew that. I thought I followed the case fairly well, but had never read about user error on the part of the FBI. For all the articles written, this was a very underreported detail.


Also, they can't even properly secure their own offensive cyber tools. How long until other actors get hold of the backdoor keys? It would basically render the whole effort of encryption moot.

They shouldn't even be asking for this power if they can't at least academically prove that they can't mess it up.


> The government has demonstrated that they will abuse every power given to them, and even those that weren't

I think this mixes up what is true and what people (myself included) wish was true.

Governments don’t have power given to them. Their default state is God-Kings ruling on personal whims.

Governments have power taken from them, either by corporations, or by religions, or by other governments — sometimes these groups even call themselves “the people” — but the restrictions are not stable equilibriums, they are constantly fought against on all sides.


Governments always gain their power from the governed, and nowhere else. This fact is always resisted by those who know it because it a) makes the people responsible for the actions of their government and b) makes the government responsible for their people.

For as long as we ignore this fact, we'll get corrupt governments. Alas, it's also a key reason that governments are corrupt - a government is only as ethical as the people it governs.


The argument here is extremely simple.

Encryption is the only way to secure information. This is true for criminals and non-criminals alike. To deny encryption is to deny security to everyone.

Presuming it's the criminals who will look to exploit these vulnerabilities, denying security is making every non-criminal susceptible to attack.

So the only question that needs to be answered is this. Do we want to protect our citizens? The only answer is yes. The only solution is encryption.

The problem with outlawing strong encryption is that it already exists. Even if strong encryption were made illegal, criminals would be the ones securing their data despite the law. To deny citizens the right to protect themselves just puts them all at risk. It's disarmament.

Under the law, all the police should need is a warrant. It's not even an exception to any rule.


> Do we want to protect our citizens? The only answer is yes.

This is not the right question. It is the one they use but is not the right one.

"Do we want our citizen able to protect themselves" is the right question. And as most government have shown, they really don't want it. They want to be in charge of the protecting.

Once you see things from their perspective their position makes more sense.


It's funny that the US is so adamant about being able to defend yourself with guns, but with software it's a debate. And it seems that often, the people for guns are against encryption, and vice versa.

From a european perspective it's so strange.


The metaphorical person on the street can wrap their head around why having a gun in a person's hand gives that person power. Everyone basically knows that the gun control debate is a debate about who has and is allowed to have power.

I'm not sure the metaphorical person on the street even knows encryption exists; if they do, it's only very vague and probably an entirely incorrect understanding of it.

As for people who are for one and not the other, even though I'm personally in favor of both the second amendment and encryption in the hands of people, it's not hard to see a meaningful difference between them that would make it reasonable to have differing opinions. Guns in the hands of the populace are a positive power to do harm; encryption in the hands of the populace is strictly defensive. Criminals can use encryption to defend things we'd rather not have defended, but that is fundamentally a different concern than criminals using guns to attack things we'd rather not have attacked.

I think the only weird case would be being in favor of gun rights but thinking the government should have back door access to encryption, on the part of someone who deeply understands both. That is to say, I'm sure you can find people who believe both those things, but I suspect the latter is mostly an un-thought-out instinctive reaction to generally trust authority figures who seem generally trustworthy to them combined with a lot of ignorance about what encryption is and how it works. Humans have an all-but-instinctual understanding of physical weapons, encryption is basically magic by comparison.


> Humans have an all-but-instinctual understanding of physical weapons, encryption is basically magic by comparison.

Fair point. Although it still feels weird to look at the US from Europe, where neither guns nor anti-encryption positions are popular.


Doesn't seem odd to me, the government has bigger and more guns at their disposal and that's the situation they want with encryption too.


They have had cases where criminals basically handed them the case by not encrypting their storage. Then they ran into a case where the criminal had encrypted all of their storage and they had zero access to the data. They are basically saying that they want to force people not to be careful with their data, so that they can expose a person's entire life with a simple court document. They don't want to have to do real police work.


Members of congress aren't pro "defend yourself with guns," they are pro "get votes of people who are pro defend yourself with guns."


Does that distinction matter?


It absolutely matters. If a politician is just paying lip-service to a principle, he's definitely not going to fight for it. He's just going to do what he wants to do, and try to give the appearance of caring about the issue.

The person who actually believes what he espouses will actually work for that goal.

Unfortunately, the vast majority of politicians are the former rather than the latter, hence the popular pejorative "RINO". Maybe there's a similar thing on the Democrat side, but I can't think of one right now.


It provides context to https://hackertimes.com/user?id=sametmax 's confusion.


Yes. That specific difference is the classic vector to corruption, bribery and selling out.


> From a european perspective it's so strange.

Agreed. To me, it seems that encryption should be uncontroversially accepted from the population's standpoint, while gun policies have valid arguments for and against.

Basically: it's really hard to shoot someone in the face with pgp.


Obligatory XKCD: https://www.xkcd.com/504/


> Even if strong encryption were made illegal, criminals will be the ones securing their data despite the law.

Well, anybody can hide their use of encryption by using steganography [1]. E.g., you just piggy-back your secret file as noise on top of a legitimate file.

[1] https://en.wikipedia.org/wiki/Steganography
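
For the curious, a toy sketch of the idea (Python with the Pillow package; this is the classic least-significant-bit trick, which is trivially detectable by statistical analysis, so treat it purely as an illustration):

    from PIL import Image  # assumes the Pillow package is installed

    def embed_lsb(cover_path: str, payload: bytes, out_path: str) -> None:
        # Hide payload bits in the least-significant bit of each RGB channel.
        img = Image.open(cover_path).convert("RGB")
        flat = [channel for pixel in img.getdata() for channel in pixel]
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        if len(bits) > len(flat):
            raise ValueError("cover image too small for payload")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | bit
        stego = Image.new("RGB", img.size)
        stego.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
        stego.save(out_path, "PNG")  # lossless format, so the hidden bits survive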


Or you write an email to your friend, saying:

    Hi Alice, I just did a billion coin-tosses, and the outcomes were:

    HTTHHTHTHHTHHTHHTHHTTTHTHTHTTTHHTHTTHT ...

    Regards, Bob


What's the point of such an email?

If you consider police officers and judges to be that stupid, you don't even need that. You can simply say: "No, my hard drive isn't encrypted, I'm just collecting white noise."

The whole point of steganography is not to raise suspicion in the first place.


What if you were to use the noise (encrypted data on a hard drive) as a one-time pad to send obviously innocent messages to your friend, messages you could produce the plaintext for on demand? With the right setup it would appear very innocent, if a little strange.
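
A minimal sketch of that setup (plain Python; it only shows the XOR one-time-pad step -- the hard parts, like never reusing pad material and being able to produce a plausible plaintext on demand, are operational, and the file path is hypothetical):

    def otp(data: bytes, pad: bytes) -> bytes:
        # XOR data against an equal-length prefix of the pad; the same call decrypts.
        if len(pad) < len(data):
            raise ValueError("pad must be at least as long as the data, and never reused")
        return bytes(d ^ p for d, p in zip(data, pad))

    # The "noise" already sitting on the drive doubles as the pad.
    pad = open("/path/to/noise.bin", "rb").read()
    ciphertext = otp(b"see you at the usual place", pad)
    assert otp(ciphertext, pad) == b"see you at the usual place"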


But how do you design a strong encryption algorithm that can be trivially unlocked once a warrant is provided?

Answer: you can’t.


That's not the point. Like a body buried in the desert, there is nothing that can force a person to reveal anything. Silence is encryption enough when it comes to secrets. Torture doesn't work either.

So it's business as usual. It's obstruction of justice. Book'em.

And of course, "I can't remember" is always the greatest defense.

This whole thing is about the government endangering the public in exchange for abusing their rights.

It's all bad. You could go as far as to argue that the government is obstructing justice and endangering the public by denying basic security.

We already know the enemy will find ways to access all the data. Encryption is not the last line of defense. It's the only line.


Can't remember is a statement that can be disproven. Pleading protection against self-incrimination (as per Miranda) is more effective, even if you have nothing incriminating.

This is what the DOJ wants to close by making mere possession of encrypted documents criminal.

(Note: not a lawyer.)


>Can't remember is a statement that can be disproven

It is? I'm not sure how you'd disprove it. Anecdotally (which I suppose actually matters in this case!) I've had a case where a (fairly long, 26 character) password I used regularly suddenly (and thus far, permanently!) went out of my head. I can remember some fragments of the password but not the whole thing.


I'd say the point is that the idea that the legal system suddenly has no way to deal with witnesses claiming forgetfulness "because encryption" is absurd.

The only novel issues encryption brings to the table involve self incrimination, because, AFAIK, IANAL, etc., the only time an encryption key is inarguably protected by the fifth amendment is where the fact that the defendant knows the key is itself incriminating evidence, because the fifth amendment only applies to self-incriminating testimony, not other self-incriminating evidence in the defendant's possession (e.g., the contents of a hard drive, encrypted or not).


If the prosecution has evidence that you recently logged in, it will all come down to whether the jury believes you.


My first thought as to how you would do this is that you would implement some kind of key bag. You never solely own your own private keys; there's a copy of them held in a central repository, where they can be requested with a warrant.

The problem with this, of course, is that it's a truly horrible idea which defeats the whole concept of a private key. Another massive problem is that you've just created the biggest target for hackers. Once the keys are out... everyone is screwed.


Dual-key encryption. You just have to trust the government not to lose their key.


> Answer: you can’t.

No, it is incredibly simple. You have a master key and the government holds this key on a secure audited system which can only be used to unlock a device once a court order is granted.

The government's security for the master key will certainly be much better than the average user's password security so this will not decrease the average user's security in the least.

You would also make the master keys expire regularly (maybe daily) so as long as a user updates their phone they will get updated with the new keys to protect against a leaked key.
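
For what it's worth, the construction usually proposed here is key wrapping: a per-device data key encrypted once for the user and once for the escrow authority. A minimal sketch (Python with the "cryptography" package; all names are illustrative and this is not any agency's actual design), which also makes the single-point-of-failure objection in the replies easy to see:

    from cryptography.fernet import Fernet

    # One symmetric data key per device, then wrapped ("enveloped") twice.
    device_data_key = Fernet.generate_key()

    user_kek = Fernet(Fernet.generate_key())    # in practice derived from the user's passcode
    escrow_kek = Fernet(Fernet.generate_key())  # the government-held "master key" in this proposal

    wrapped_for_user = user_kek.encrypt(device_data_key)
    wrapped_for_escrow = escrow_kek.encrypt(device_data_key)

    ciphertext = Fernet(device_data_key).encrypt(b"contents of the device")

    # Anyone holding EITHER key-encryption key can recover the data key and read
    # everything -- which is exactly why a leaked escrow key compromises every device.
    recovered_key = escrow_kek.decrypt(wrapped_for_escrow)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"contents of the device"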


> You would also make the master keys expire regularly (maybe daily) so as long as a user updates their phone they will get updated with the new keys to protect against a leaked key.

What would the logistics of this be? Would the government need to store all master keys to be able to decrypt an old message? How would you know you're using the right key to decrypt a message? What happens if all the old keys leak?

What about foreign communications? You can't compel foreign actors to encrypt with your algorithm. What if I'm storing foreign data which is encrypted with illegal algorithms, is that going to be illegal? If so, then goodbye hosting services in the US. If not, how are you going to differentiate between foreign data and local data?

What about the transition period? What do you do with legacy encryption? What about people who haven't received the newly updated government-sanctioned encryption yet? What about old devices that can't run your encryption algorithm, closed systems, etc?

I don't think it's as incredibly simple as you put it.


We are talking about different things. I was talking about allowing access to encrypted data on devices, which is the main issue that law enforcement has been complaining about. You seem to be talking about a backdoor for all crypto everywhere, which is very different.


> The government's security for the master key will certainly be much better than the average user's password security so this will not decrease the average user's security in the least.

Yes it will. A single user being careless with their password only exposes that user. Leaking the master key (and it will leak) exposes everyone. The target is much bigger, the payoff is much bigger for the bad guys, the risks are immense.


> Leaking the master key (and it will leak) exposes everyone

I agree that it should be assumed that the key will leak but there are plenty of ways to practically mitigate the usefulness of a leaked key.

I mentioned expiring keys already, which is obvious, but there are more sophisticated protocols that can be put in place.

> The target is much bigger, the payoff is much bigger for the bad guys

I don't think this is true. These keys would only be usable with physical access to a device. If you have physical access to the device, it's hard to imagine a scenario where the easiest route to cracking it would be penetrating a secure government facility.

Let me add that you have to think about security in relative and not absolute terms.

If you trust the government to secure a massive stockpile of NBC weapons, if you trust the government to manage a massive state security apparatus with hundreds of thousands of armed agents deployed domestically, then it is a little silly to draw the line at trusting them with your facebook feed.


> If you trust the government to secure a massive stockpile of NBC weapons [etc]

Physical security and digital security are very different. Someone stealing a bomb is still only one bomb. Securing that bomb involves fortifying a well-defined local border. Attacking it requires personal risk that is hard to parallelize.

Digital networks can be attacked at any time by any number of opponents. These attacks are usually automated, without the risk of being found by a guard with a machine gun. Stealing the escrow key database isn't merely a single bad event; it would allow access - possibly retroactively - to everything supposedly protected by those keys, which is presumably "everything".

> key

You seem to be using "key" to mean several different concepts.


> government's security for the master key will certainly be much better

It's foolish to assume this after Snowden was able to walk away with his archive of classified documents. In his case, storing that many documents within the reach of one person risked losing the entire archive, which is exactly what happened. If literally everything depends on a government-held escrow key, you've painted a target on a huge single point of failure.


This is called a key escrow, which is known for not working, mainly because the assumptions ("once a court order is granted") can't be implemented technically.


> known for not working

care to provide any evidence for that claim?

> ("once a court order is granted") can't be implemented technically

we live in the real world. if you can't trust your judiciary then your precious little algorithms aren't going to save you. (sorry to be the bearer of bad news)


The "government master key" idea is silly (for a number of reasons).

Yet, it's very possible to share a copy of every user key using methods like Shamir's secret sharing - therefore requiring P out of N entities agreeing on allowing the decryption to happen.

The secrets can be shared in advance with attorneys, civil rights groups, and government entities, allowing a democratic-ish process around decryption.
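
For readers who haven't seen it, here is a minimal sketch of Shamir's k-of-n secret sharing (plain Python over a prime field; no side-channel hardening, purely illustrative -- any k shares reconstruct the secret, any k-1 reveal nothing):

    import secrets

    PRIME = 2 ** 127 - 1  # a Mersenne prime large enough for a 16-byte secret

    def _eval_poly(coeffs, x):
        # Evaluate the polynomial (coeffs[0] is the secret) at x, mod PRIME.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc

    def split(secret: int, n: int, k: int):
        # Build a random degree-(k-1) polynomial with the secret as its constant term.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME  # needs Python 3.8+
        return secret

    shares = split(secret=1234567890, n=5, k=3)  # e.g. 5 custodians, any 3 suffice
    assert reconstruct(shares[:3]) == 1234567890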


> The "government master key" idea is silly (for a number of reasons).

all of the security experts advising law enforcement, intelligence agencies, and politicians seem to think it is pretty reasonable.

care to share any of your reasons for disagreeing with them?


What is the plan for when (not if) that master key leaks?


If you want all the investigators to be able to open things with the master keys without compromising it, it's not simple at all.


"incredibly simple"


> Encryption is the only way to secure information.

I am not sure that this argument is simple at all.

Encryption only provides theoretical security. In practice even if strong encryption can't be broken it can almost always be bypassed rather trivially if data is being accessed on a regular basis.

If data is encrypted and left cold, then it can be difficult or impossible to retrieve if a secure key is used and that key has never leaked anywhere, but that is a pretty limited use case.

For average people encryption provides very little real world security because their data is online on buggy insecure devices all the time so there are always easier ways to compromise their data than breaking crypto.

Sophisticated criminals also don't trust encryption and don't use digital devices at all.


This ignores transport encryption, which is the bulk of what protects you on a daily basis from street criminals.


Interesting example. Transport encryption provides basically no real world security at all. The fact of the matter is that there are hundreds of millions of credit cards with full PII available for sale so it is almost certain that whatever PII you are sending over your highly secure connection is already in the hands of criminals.

The only security you have is the fact that there aren't nearly enough criminals to exploit all of the info that has already been stolen so there is a very low chance that you will be a victim of identity theft, but it has nothing to do with transport layer encryption.


For random criminals, you are right, ROI is low. But Data in Transit protection is essential to prevent targeted attacks, whether it is attempt at data interception or whacking your device with an exploit to steal information off it (continuously if desired).

It may not be in everyone’s threat model, but it is definitely very important and provides real world security.


"The only security you have is the fact that there aren't nearly enough criminals to exploit all of the info that has already been stolen"

This is pretty much the case, and it includes security vulnerabilities in most systems as well. Most just are not exploited because there aren't enough criminals with enough time.


Modern encryption algorithms are unbreakable. Do you even know what you are talking about?


It’s interesting because these arguments are identical to those that could support the Second Amendment for exactly the same reasons.


Brings back memories. I'm the Daniel Weitzner mentioned by Eben Moglen. The debate was actually billed as a being about the future of privacy in the digital age. The moderator was Arthur Miller, a truly distinguished law professor at Harvard who wrote about privacy in the 1970s. Despite the broad focus on privacy in the title, the discussion ended up being all about encryption technology and policy. After 45 minutes or so of arguing about encryption, key escrow and the Clipper Chip, Arthur said in his trademark stentorian voice, "This was supposed to be a debate about privacy and all I've heard about is ENCRYPTION!" Despite this, we continued to talk about nothing but cryptography, as if its availability or lack thereof was the only question that mattered for privacy.

The Dept of Justice official referred to was Deputy Attorney General Jamie Gorelick, who is now a partner at a big DC law firm WilmerHale, where she represents Jared Kushner and Ivanka Trump.

See our paper, Keys under doormats: mandating insecurity by requiring government access to all data and communications, for more. https://doi.org/10.1093/cybsec/tyv009


It's one thing to say "reasonable minds can disagree"; I can empathize with the FBI's position, though I think they are fundamentally wrong.

However two things are striking about this speech, and similar recent (over the past few years) ones in the US and UK at least:

1 - Rosenstein is no dummy, so he must be perfectly aware of the doublespeak in his statement that they don't want to make things easier for criminals, yet companies must provide accommodation for alleged non-criminals. The pre-GWOT NSA took information assurance seriously and, at least in some cases, made encryption stronger for everyone (consider the DES S-boxes) even (perhaps) at the expense of the NSA's SIGINT efforts. I don't know if the Information Assurance Directorate is even staffed any more.

2 - The propaganda is at its most flagrant with the Sutherland Springs shooter: Apple reportedly offered its help, but the FBI ignored that until the 48 hours had passed and the phone had locked, and then excoriated Apple. Meretricious malpractice, as far as I am concerned.

Neither of which makes me in any way supportive of the FBIs position, no matter what actual merit might lie in it.


Here's my biggest complaint with this debate - people are confusing literal with metaphorical. They make the analogy of the unbreakable safe. Encryption isn't that. You can still recover the physical phone and all of the storage chips on it.

That the patterns of bits in the chips make up some unrecognizable utterance is seemingly immaterial. I could write gibberish in my journal at home if I wanted to, and I think we'd all agree it would be ridiculous for the FBI to run around screaming about unbreakable ink.


I think you're the one confusing reality with matter. Nobody ever cared about anybody's diary as a physical artifact. What matters (pun intended) is the information.

The way to fight for strong privacy isn't to run around screaming how "these people just don't GET it!". Because they will look at the framed Math PhD certificate on the wall and rightfully conclude that you're starting from wrong assumptions.

Instead, start by imagining the most perfect FBI agents you can. Then, debate them.

That debate must start with agreeing that your idealized agent does indeed have a harder job when all evidence moves from binders full of incriminating paper to an encrypted, impenetrable blob. Not accepting that truth makes you useless for your cause, because nobody who isn't already a convert will listen to you if you deny reality with ill-fitting analogies.

Only then can you make your case, the two main arguments of which should be:

- It is impossible to weaken encryption without running the risk of those weaknesses being exploited, or the keys to the backdoor, falling into the hands of bad actors.

- The ability to automate electronic surveillance potentially increases the quantity of surveillance to a point where it also takes on a different quality. Even if judicial oversight remained (which is questionable, considering FISA et al), privacy invasion was previously limited by two informal, yet important, caveats: the costs and resources required to have agents physically search, and the public visibility of such searches.


> That debate must start with agreeing that your idealized agent does indeed have a harder job when all evidence moves from binders full of incriminating paper to an encrypted, impenetrable blob. Not accepting that truth makes you useless for your cause, because nobody who isn't already a convert will listen to you if you deny reality with ill-fitting analogies.

This. 1000 times this.

I'm actually really worried that, the harder our community pushes back against seemingly reasonable requests for access, and the louder we scream about stuff like the Texas shooter's phone, the less political capital we're going to have left to spend when we need it. And we're going to need a lot of it for this fight.


The other side of this is that enshrining encryption as something that police can't compel you to help with just creates a huge loophole for hiding incriminating documents. You can go to jail for destroying evidence, why would encrypting the data and refusing to provide the password or deleting the key be any different?


> You can go to jail for destroying evidence, why would encrypting the data and refusing to provide the password or deleting the key be any different?

Specifically encrypting incriminating data after you have evidence of a crime in an effort to cover it up should be treated as the equivalent of shredding documents. (Assuming, of course, they can prove it, just as they have to prove that you had the documents in question prior to destroying them in order to prosecute you for destroying evidence.)

That's not in any way the same thing as having your data encrypted and refusing to decrypt it.


I don't really think the timeline is meaningful in this case. Having a rule where people cannot be made to decrypt files is just legalizing document shredding with an extra step.

To avoid cases where people legitimately forgot their passwords just assume that the police have video evidence of you unlocking the files just before you were arrested. You know the passphrase and the police could prove it beyond reasonable doubt in court.

You just start with your files encrypted with a strong passphrase and refuse to provide it when you get caught. This is different than routine shredding because the moment when they become inaccessible is when you refuse, not the moment you encrypted them.

If they were instead physical documents buried somewhere hidden where the police could not possibly find them without your help the court still has the ability to hold you in contempt if you don't produce them. What makes the secret knowledge of their location any different than the secret knowledge of the password?


"If they were instead physical documents buried somewhere hidden where the police could not possibly find them without your help the court still has the ability to hold you in contempt if you don't produce them"

Are you sure this is so? This sounds awfully similar to compelling you to testify against yourself. Perhaps you are confused.


Not really. If it is a foregone conclusion that the documents exist, the court can legally compel you to turn them over.


> If it is a foregone conclusion that the documents exist, the court can legally compel you to turn them over.

Are you sure? That would imply that you could be compelled to produce documents that are known to exist but were stolen from you.

It seems like a faulty premise. If they don't know where the documents are then how could they know they haven't been stolen or destroyed?

It's the same problem with encryption keys. Just because you had it yesterday doesn't mean you have it today. People actually lose or forget things, especially under stress.

That's one of the main purposes of protection against self-incrimination -- so that the government can't claim you know something that you don't and then hold you in contempt for not telling them.


> Are you sure?

Yes. A quick google for "foregone conclusion doctrine" will turn up a bunch of fairly-recent news about this.

> It's the same problem with encryption keys. Just because you had it yesterday doesn't mean you have it today. People actually lose or forget things, especially under stress.

Yes, and that's part of the problem. I'm not saying I agree with how all this works, just stating that's how it is.

There are limits, of course, if the court cannot establish that you know (or at least knew) the password/phrase/key. "I forgot" can certainly be a legitimate defense, but it of course depends on whether or not a judge believes you. If we could use "I don't remember" as an unquestioned excuse, we could get away with anything.

> That's one of the main purposes of protection against self-incrimination -- so that the government can't claim you know something that you don't and then hold you in contempt for not telling them.

That's not what we're talking about here. We're talking about things the government affirmatively knows that you either have or know. Unfortunately, if you no longer have or know that thing, the burden is on you to prove that you don't, which is difficult.


> Yes, and that's part of the problem. I'm not saying I agree with how all this works, just stating that's how it is.

You have to keep in mind that judges make rulings that conflict with the rulings of other judges all the time. It means one of them is wrong and it takes a higher court (or legislative action) to sort it out.

Pointing to lower court rulings in the news doesn't mean the issue is settled.

> If we could use "I don't remember" as an unquestioned excuse, we could get away with anything.

That is obviously nonsense. People are regularly convicted without being compelled to say or do anything. The government simply has to prove their case without the defendant's testimony.

> We're talking about things the government affirmatively knows that you either have or know. Unfortunately, of you no longer have or know that thing, the burden is on you to prove that you don't, which is difficult.

But that's the point. They should have to prove that you have it, not that you had it. And when the thing is the contents of your mind, it's impossible for them to prove that without your cooperation, and impossible for you to disprove it.

The burden of proving something in a criminal proceeding is supposed to fall on the government, not the accused.


> but were stolen from you.

You're forgetting that the courts are human and would be sympathetic in this case. If it couldn't be shown that you have access to the documents or you could show they were stolen then you would be fine.

> Just because you had it yesterday doesn't mean you have it today

Right, which is why I prefaced the discussion with the situations where the police can prove beyond a reasonable doubt that you possess the key/password. We can make it more direct by arresting you immediately after you prove on video that you're capable of decrypting the documents.

> can't claim you know something

Right, but the difference is we're talking about a case where they can prove you know something. We're firmly in foregone conclusion territory.


> You're forgetting that the courts are human and would be sympathetic in this case. If it couldn't be shown that you have access to the documents or you could show they were stolen then you would be fine.

But that's the whole problem. How are you supposed to prove that you don't have something? It's completely reasonable that someone can have stolen it from you without you being able to prove it.

They can prove that you do have it by finding it in your possession, but if they could do that then they wouldn't need you to tell them where it is. If they don't know where it is then they can't know whether you have it or not.

> Right, which is why I prefaced the discussion with the situations where the police can prove beyond a reasonable doubt that you posses the key/password.

That's just assuming the conclusion.

Proving beyond a reasonable doubt that somebody knows something is next to impossible. You can have them on video entering the correct pass phrase and it only proves that they knew it when the video was made, not that they still remember it now.


> It's completely reasonable that someone can have stolen it from you without you being able to prove it.

I agree, and if I were designing the legal theory I would make sure that the burden of proof is on the person claiming another person has knowledge.

> the correct pass phrase and it only proves that they knew it when the video was made, not that they still remember it now.

Right, which is where reasonable doubt comes into play: if the video was months ago it's completely reasonable to forget a password -- if it's two hours later they have a much tougher case to make about spontaneous amnesia.

Applying the 'you can't possibly prove knowledge under any circumstances' argument would be absurd in any other case.

"Did you know she was under 18?"

"No your honor, I forgot, it had been a few weeks since I saw her ID."


> Right, which is where reasonable doubt comes into play: if the video was months ago it's completely reasonable to forget a password -- if it's two hours later they have a much tougher case to make about spontaneous amnesia.

You're confusing less likely with unreasonable.

A pass phrase long enough not to make the whole question irrelevant is hard to remember.

You may have it in short term memory until it gets displaced by "oh crap I need to hire an attorney and a bail bondsman and call my boss and explain this to my wife" type issues. You may be able to remember it sitting in a familiar environment surrounded by your stuff but not in a jail cell without any of those cues.

It's completely reasonable to forget something you knew five minutes ago. It happens all the time.

Haven't you ever walked into a room and been unable to remember why you did? And that isn't 128 bits worth of context-free random data.

> "Did you know she was under 18?"

> "No your honor, I forgot, it had been a few weeks since I saw her ID."

I'm not sure this is making the point you want it to. The real targets of statutory rape laws are pedophiles who rape eight year olds, and in those cases it isn't a question of memory. You may not have remembered whether the child was 8 or 9 but you couldn't reasonably have thought they were above the age of consent. Which is why nobody objects to putting those pedophiles in jail, or to the laws that make it happen.

It's the cases where there could be a legitimate confusion that create exactly this problem. You can't tell if someone is one year above or below the age of consent just by looking at them, which is why those cases are extremely controversial.

How is it absurd that you could forget someone's age? Do you know the exact age of everyone you've ever been to the birthday party of? You probably knew on the day of the party.


> This is different than routine shredding because the moment when they become inaccessible is when you refuse, not the moment you encrypted them.

Not true. The moment you encrypted them, they became inaccessible without your consent.


Strictly speaking, it's the moment you delete the unencrypted originals.


> This is different than routine shredding because the moment when they become inaccessible is when you refuse, not the moment you encrypted them.

If it really had anything to do with when you refuse then you could just proactively refuse as soon as you encrypt so it happens at the same time. And it would imply that if you were killed before being asked to decrypt then the government would have access to the data because you never refused, which is obviously not true.


The refusal was specific to this situation. The data also becomes inaccessible the moment a person is incapable of producing the password like when they die.

When does encrypted data become inaccessible? When you encrypted them or when you forgot the password?


> When does encrypted data become inaccessible? When you encrypted them or when you forgot the password?

Inaccessible to which party? It becomes inaccessible to anyone without the key immediately and inaccessible to anyone with the key when they forget the key.

Compare the situation where a person commits a document to memory and then destroys it, but writes down some inscrutable clues to help them remember it. You have some gibberish you can't decipher, if you give them the gibberish they can recreate the original document, but it's only by application of information that exists only in their mind.


Encryption of the file should be treated as a separate step from deletion of the plain text. The latter is destruction of evidence in the case of a crime.


This is a tough sell when hard drive manufacturers use encryption to implement a secure instantaneous delete. You can't really argue that the files were deleted the moment they were encrypted.
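
A sketch of what that "crypto-erase" amounts to (Python with the "cryptography" package; real drives do this in firmware with a hardware media key, but the principle is the same -- destroying the key, not the act of encrypting, is what makes the data unrecoverable):

    from cryptography.fernet import Fernet

    media_key = Fernet.generate_key()
    stored = Fernet(media_key).encrypt(b"everything written to the drive")

    # Normal operation: the key exists, so the data is readable in place.
    assert Fernet(media_key).decrypt(stored) == b"everything written to the drive"

    # "Secure instantaneous delete": forget/overwrite only the key.
    media_key = None
    # The ciphertext still physically exists but is now indistinguishable from noise.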


Are you arguing you should legally have to keep a plain text copy of anything you encrypt? I mean I get your train of thought but that seems to be the conclusion


No, only that destruction of evidence could pertain to deleting the plaintext of crime-related documents.


You don't know what's in the file.

Police need reasonable cause to take action.

If all you have is an encrypted file, there's nothing to say it is incriminating. Encryption is necessary for a whole range of things.

Would you like banks to be forced to use weak encryption when processing bank-to-bank transfers?


What they want is strong but backdoored. Nobody told them that backdoors get detected and leaked or cracked. A single set of powerful keys would be a really high-value target -- as would, alternatively, the backdoor key schedule.

Next time, they should ask the DoD if they would use the proposed scheme -- or whether the critical economic infrastructure of the US warrants less protection than DoD documents.


> Nobody told them that backdoors are detected and leaked or cracked

Many experts have told them. They have told them that 'backdoored' is contradictory to 'strong'.

The impression I get - being as far away from this debate as any other non-US citizen - is that this request for "strong but accessible" "encryption" is repeated by the administration and echoed by the media consistently. Eventually, every citizen, with the limited understanding and memory that we have to spare, will be asking 'why aren't our scientists giving law enforcement what they need?' instead of asking what that actually means. In a way I find myself very fortunate to be on the side of the pond where these voices still exist, but reason temporarily prevails.


> Nobody told them that backdoors are detected and leaked or cracked.

People have been telling that to politicians over and over and over and over again. They refuse to hear it.

The problem with politics is that a lot of people have this idea of how the world should work, and they think this somehow overrules how the world actually works. In the fantasy world they live in backdoors can only ever be used by the good guys.


But can you go to jail for purposefully hiding evidence and then refusing to help find it after your arrest?



Well OK but we are mostly talking about criminal cases here.


But it does make sense as an analogy, no? In this case the contents of the safe correspond to the plaintext, the key that opens the safe is the private key, and having the safe without the key is like having the ciphertext without the private key.


It doesn't because a safe is just a wrapper for its contents, which are within the safe still in their original form, while with cryptography the contents no longer exist in their original form. A safe would be a good analogy for instance for a tamper-proof USB stick which required a password to make its contents visible to the operating system, but which stored its data unencrypted.

A better analogy for cryptography is "a journal written in code". This better captures the property of encrypted data: the contents of the journal corresponds to the plaintext, the rules to encode the writings corresponds to the cipher plus the private key, and having the journal without the rules is like having the ciphertext without the private key. Like with cryptography but unlike with a safe, to get to the contents you have to know or guess the rules; you can't drill the door like you can with a safe.


His comment makes sense to 99% of non-tech people, and if it were possible it would make sense to all. Would we want to open Bin Laden's iPhone?

He wants you to have your home with 100 locks, guard dogs and armed guards. BUT if a court orders you, you have to let the police in to check x, y and z. Now I don't think that a secret key can be somewhere and stay safe for a long time. It will be leaked or hacked. This places hundreds of millions of innocent people at extreme danger of having everything personal revealed.

So, IMO, giving the cops a master key opening all our doors "if needed" doesn't work. TSA does that, but presumably while cameras are running, and this is stuff we know it will be searched. So encryption is the best option.


It's insane to see this comment downvoted. Hey HN: this guy/gal is agreeing with you! To punish them for not participating in your willful misunderstanding of this argument is intellectually bankrupt, and toxic to your cause.


> if it was possible it would make sense to all.

That's not true; it doesn't make sense to me even if it were possible to guarantee proper handling. The risk of the government abusing its power in a completely legal way is greater than some crimes going unsolved because documents remain secret.

What matters to me is the power imbalance; with few exceptions, whatever the government can do, normal people should be able to do too.


100000000000000% wrong. Store a file from your offshore account--showing that you avoided $1.78b in taxes--at home and IRS can raid it at any time with a court order. The same applies to all your documents, papers.

The 4th amendment doesn't mean you can do every illegal thing in the world and never fear the state...simply the state cannot engage in fishing expeditions. If 5 kids are reported missing from your neighborhood, and a day later you have a backhoe digging on your backyard, be ready to answer a few questions--that may lead to other questions and warrants.

>>The risk of the government abusing it's power in a completely legal way

No such thing in a western /democratic society. After you go all the levels, you must obey. Sometimes it sucks, but...


I'm not sure why you're being downvoted. Well, actually I do know, and the answer sucks and it's frustrating.

But you're right -- in an ideal world, pretty much nobody would design a future where the worst of humanity can hide behind encryption to avoid accountability.

The problem is that, with the current set of technologies that we have right now, we either give the cops (effectively) the ability to get everything on everyone, limited only by their discretion (ha!), or we give them nothing, and we give guys like Osama Bin Laden and Richard Spencer a place to hide.

One of my biggest fears is that, if we give the voting public only these two options, then eventually they're going to side with Officer Friendly over Bin Laden and Spencer.

Ultimately, it's going to be up to us as technologists to figure out something better.


Unpopular but true comment warning:

It's kind of alarming how easily you can lump Osama Bin Laden (a guy who killed thousands of innocent Americans) in with Richard Spencer, who _says_ fucked up things. Richard Spencer (just like the Westboro Baptist Church) is probably a complete douchebag nutjob. But to so casually equate saying douchey things with the murder of thousands... that's exactly what leads to Officer Friendly getting more powers over everyday, innocent citizens.

Look at Russia and China. They're literally doing what you're suggesting by using thought crimes to justify massive surveillance and censorship of every forum and network to "protect society" from bad thoughts and damaging their "way of life".

The very rights that allowed the civil rights movement to exist, are the ones you guys are casually trying to destroy. Daring to have an opinion the majority finds revolting. Freedom of Speech is literally a protection of minorities from the majority. Those in power don't need protection.

[edit] 1 minute in, and yep, this is going down as I expected. Have a great day. =D


You're right - I picked Spencer because he's an easy target, especially on a board that generally leans to the Left. If there's some incident of racial violence that somehow traces back to him, I doubt the typical HN reader would object to the FBI getting a warrant to search his house or his car.

And I suspect that most here wouldn't mind them getting a warrant for his phone under the same circumstances, if they can just bring themselves to admit it.

> The very rights that allowed the civil rights movement to exist

I think I see what you're saying here, but encryption wasn't really around in MLK's day.

> are the ones you guys are casually trying to destroy.

Whoa there, please don't lump me in with the authoritarians, especially the new Leftist flavor that's so popular lately.

The case that I'm trying to make in this thread is that we've got to find a better balance that protects minority voices (like MLK, or Ben Shapiro, or Milo Yanawhatshisname or whoever Berkeley is rioting about this week) in a cryptographically strong way, while still allowing for some sort of accountability in extreme cases.

If we fail at this - or worse, if we can't be bothered to try - then I'm afraid we're going to wind up stuck with something crappy like key escrow, and then the thought police are really going to have a field day.


But again you've made the same error. The locks can be opened and the phone retrieved. iPhones are not indestructible, you can access any part of them if you want.


And do what with an encrypted one? They want the contents, not the phone. Frankly, they might even return it after making a copy of it /fingerprints etc.. Spend $1m on a zero day for each case? What if they run out of zero-days?


Marvel at the encrypted data that they have full access to, and feel free to attempt to brute-force decrypt it, just like any other hostile attacker. The job of the encryption system is to prevent unauthorized access, from the perspective of the owner of the data. If it fails to do that job, it's broken and should be replaced by a system that isn't.


It's hard to know where to even begin in arguing against this.

There's the freedom/privacy argument, but I guess this is debatable depending on if you view computer files as an extension of your ideas/knowledge, or an extension of your physical possessions.

Someone brought up the entire "risk of overreach and abuse" argument.

There's also the likelihood of any tools the government has being leaked and used by bad actors (as we have seen too much recently).

Oh, and the "it's technologically impossible" argument, which should be the only one you need -- but they refuse to hear that. (Are there some supposed experts who are telling the DOJ this can be done?)


> Someone brought up the entire "risk of overreach and abuse" argument.

I don't have a problem with the authorities operating within the purview of the Constitution, eg. a warrant from a court for a particular device. That, however, is not how we got to where we are today.


Consider these two points as a start:

- They solved crimes before iPhones. Encryption is not a roadblock

- In many countries guns are illegal. Yet criminals do own them. If encryption becomes illegal, criminals would still use it


> - In many countries guns are illegal. Yet criminals do own them. If encryption becomes illegal, criminals would still use it

Making guns illegal doesn't stop criminals from using them, but it does make it possible to jail someone only because they had a gun.

Outlawing encryption won't stop criminals from using encryption (just look at China), but it does make it possible to jail dissidents only because they were using encryption.

Surveillance will be easier if encryption is illegal, but surveillance would be easier if everyone was obligated to wear a GPS tracker as well. "Making surveillance easier" is not sufficient to argue it should be implemented. There need to be checks and balances. And when only the government can keep secrets -- and they will -- there are none left.


Very good point!


> - In many countries guns are illegal. Yet criminals do own them. If encryption becomes illegal, criminals would still use it

The main reason recent terror attacks in Europe used trucks and knives is that guns are really hard to buy in most countries. Even on the black market it'll be hard to get guns in many European countries. Not saying that there's no black market, but most people wouldn't have the contacts to get any. So outlawing helps a lot, and while it doesn't reduce gun crime by 100%, it probably reduces it by >90%.

While encryption is easier to get online (no physical shipping), I doubt many people have the knowledge to identify good encryption without backdoors. Most would probably fall for mechanisms planted by intelligence services.


I think their intent is not so much to make criminals stop using it, but to have companies use backdoored encryption, so they can get Apple's keys to a device with a warrant.

The problem remains that they constantly abuse their power and are able to even circumvent the warrant part once they know that those keys exist.


> It's hard to know where to even begin in arguing against this.

It's really not.

Encryption is all-or-nothing.

Either encryption works, or it fails. You can't pick and choose for whom it will work, and for whom it will fail.

So that frames the question, "Who is allowed to use encryption?"

This is a dangerous question to be asking, which is why it is hidden behind rhetoric by those who are asking it.

Encryption is speech.

Anyone can create and use a cypher. Such techniques were invented long before modern computing. Encrypted data is indistinguishable from random data, and possibly even unencrypted data.

Encryption is math. Cyphers are mathematical functions, whose derivations are public knowledge.

That brings us to the next question: "How?"

Either you control speech, or make math secret. Neither option is scalable, and neither option is moral.
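
For illustration, here's a minimal sketch of that point: a one-time-pad style XOR cipher is nothing but arithmetic, and its output looks like random bytes. The function and variable names here are just for the example, not any particular library's API.

  import os

  def xor_cipher(data: bytes, key: bytes) -> bytes:
      # A cipher is just math: XOR each byte with a key byte.
      assert len(key) >= len(data)
      return bytes(d ^ k for d, k in zip(data, key))

  key = os.urandom(32)                     # random pad, kept secret
  ct = xor_cipher(b"attack at dawn", key)  # indistinguishable from noise
  pt = xor_cipher(ct, key)                 # XOR is its own inverse
  assert pt == b"attack at dawn"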


>Oh, and the "it's technologically impossible" argument, which should be the only one you need -- but they refuse to hear that.

Things like DUAL_EC_DRBG seem to prove this claim false.


I guess if you set the bar for what you want to accomplish at barely above "nobody is allowed to encrypt anything, ever." Nobody would knowingly use such a scheme. It's also a security hole giving access to anyone with the key, which is sure to be leaked/compromised if shared with the entire DOJ and others. Also, once the key is compromised, everything in the past that was ever encrypted is now readable by anyone.


>Nobody would knowingly use such a scheme.

Actually, most people would use it because most people just don't care that much.

>which is sure to be leaked/compromised if shared with the entire DOJ and others

But this is a policy issue, not an issue with the idea of backdoored encryption. There are policies that could reduce the probability of leak to negligible levels (e.g. a secure NSA facility does all the decryption). Not to mention that only the NSA will be in a position to decrypt your communications from 5 years ago. No one else is storing such a vast quantity of data.


Gotta love the fact that the EU seems to think the exact opposite

https://www.theguardian.com/technology/2017/jun/19/eu-outlaw...


It's not as simple as that. In the EU there are plenty of political actors (the director of the Dutch intelligence service, to name just one) who are quite vocal about abolishing strong end-to-end encryption, just as there are political actors in the US who wish to grant citizens the freedom to use strong encryption unencumbered.


But apart from the UK, intelligence services in European countries tend to have less influence and aren't as popular with the general public. Many people have bad experiences with intelligence services (Germany in the 30s/40s, Eastern European countries until 1990), so there's still a lot of mistrust.

That has changed a bit with recent terror attacks (fear of terror outweighs other fears) but in general, data protection is taken much more seriously in Europe than in other regions and that doesn't only include companies but also the state.


Which is funny when you think they invented enigma in the first place.


The Enigma machine was invented by the Germans, not the Dutch.


Sorry, deutschemark being the german money before the euro, I still confuse them in my head.


When I was in college, there was a retired guy that would come in to tutor students as a way to stay busy.

He was your stereotypical brainiac type dude - quiet, lanky, glasses, soft spoken, and razor sharp. He must have been in his 50s, but you would think he had just completed upper division math, chemistry, and physics "last semester" with perfect grades to boot.

He told me a story about how once while in his Masters program, one of his colleagues figured out how to do some cool stuff with unenriched uranium.

Almost like out of a movie, he said, the government stepped in and made sure he did not publish his research.

I wonder if we'll see that type of stuff happen with cryptography or if it's already happening. I wouldn't be surprised if these theatrics were just to maintain the illusion that they don't have access to stuff.


And then the Chinese figure it out for a third of the price and have a monopoly on nuclear reactors.

Remember the time when organisations like DARPA actually promoted public research of this magnitude? Or when DoD was making certain research public?


I wish this could be hammered into the thick heads of congress: there is secure, and there is insecure. There is not a gradient.


Sorry, but that's not true. Using a short passcode for your phone is not secure, but it's more secure than using none at all. There are many shades of security, and not everything short of unbreakable is the same as no security. In the same way, old and simple locks are not secure but are still better than no locks at all.


That's... just not true.

And that kind of misrepresentation just weakens the arguments for strong encryption, because intelligent people will see them as pretty transparent misrepresentations. Have you considered that's why the arguments for strong encryption aren't going well -- that we're not actually engaging with intelligent people trying to understand the issue, we're chanting trite, shallow inaccuracies?

I mean -- "there is not a gradient"? ...what do you call changing key size?

Ed:

I'd like the people downvoting to explain how changing the keysize isn't a gradient of security. (Hint: You can't, because it is.)


I had another comment, but in response to your Ed: comment:

key size is not a measure of security. It is a measure of how /long/ we intend the key to be secure.

More explicitly: Key size does not exist on the gradient of protocol security. We know how long a key takes to break given current technology and algorithms. We choose a key size to render the time to break infeasible against our prediction of the state of the art some amount of time in the future. If there's a gradient, the gradient isn't "how secure it is", it's "how long it will remain secure".

Hence any policy that endeavours to control the "strength" of encryption through controls over key length is /necessarily/ requiring an insecure key size.

It can be put this simply: How small must the key be to allow it to be "good enough" for the DoJ? Would they accept a continuous 5 years on 10,000 GPUs? Noting of course that in 18-24 months that key size will only require 2.5 years, then 1.25 years, 7 months, 3 months...

Of course I'm sure 5 years and millions of dollars will be "unreasonable", so it would need to take less time, and cost less.


The "how long" is fits well with some other estabilished measures of security we use. For example, safes are designed to resist entry for a certain length of time. In military, defenses are often quantified by how long they should be able to hold back a presumed attack scenario.


But thanks to math, we jumped from safes that can withstand hours of attack to math that can withstand every computer on this planet for the next 1e7 years before being cracked.

The problem with what others have posited as "money-based encryption" is that the attack easily scales up with AWS, Azure, GCE, and private clouds. Even individuals can rent a large cloud for an hour on the cheap and crack it with rainbow tables or the like.

But for a real safe, I can go get a thermic lance. It nicely cuts inside most safes. Or I can use a bunch of liquid nitrogen and freeze-shatter it.


Those 1e7 numbers are highly speculative, actually. A decade or two is a more reliable prediction. (Just remember we might have big quantum computers soon.)

These vastly speed up even dumb attacks against the passwords. (Not even keys.)


Sincere question: how do you define "how secure it is" except "how long it will remain secure (under attack)"?

Edit:

You're also completely eliding that security is probabilistic -- they might just guess our key on the first try. We can only discuss it as the expected amount of computation to figure out our key on average. That expected amount has a gradient along keysize.
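
A rough sketch of that gradient, assuming a pure brute-force attacker who on average must try half the key space before succeeding (the key sizes shown are just illustrative):

  # Expected brute-force work grows exponentially with key size: on
  # average an attacker must try half the key space before succeeding.
  for bits in (40, 56, 80, 128, 256):
      print(f"{bits}-bit key: about {2.0 ** (bits - 1):.2e} expected guesses")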


A protocol is secure if, and only if, the fastest attack is an attack on the key itself. All of the recent crypto breaks (those not caused by prior key size restrictions required by government agencies) have been protocol flaws, e.g. flaws in the protocol allowed you to derive the key without having to explore the entire key space.

Any way of deriving the plaintext of encrypted data other than attacking the key itself would mean the protocol was insecure.

That said, I am coming to agree with you: in terms of trying to explain this to people who don't write crypto code, saying key size is a gradient of security is probably the most sensible framing.

I still disagree with you on the actual statement :D


He meant "how long before technology advances, due to Moore's Law or whatever, to the point where it's really cheap to brute force the key space". Not "how long it takes from the moment you attack it to the moment you break it"


We change key size because what is secure in terms of key size is literally compute bound. Key strength changes because we predict when a key will /cease being secure/.

That said most of the demands made by DoJ aren't for reduced key size, they're for variations of key /escrow/: literally breaking the security model of crypto entirely.

So maybe I could be more specific: there is no such thing as an almost secure protocol.

The key (ha!) result of this recognition is that all secure protocols have been moving to some variant of ephemeral keys. Specifically to deal with the problem of all static keys eventually becoming insecure.


> Key strength changes because we predict when a key will /cease being secure/.

Yes, we move up the gradient of security that key size represents as attacks become more powerful. The reason we didn't use the more secure keys in the first place isn't that 2048-bit keys weren't always more secure than 1024-bit keys -- it's just that we didn't (for most purposes) need to be that secure, and so we chose an appropriate spot on the gradient for our cost-benefit analysis.

Pretending that's not a gradient of security is simply dishonest.

> That said most of the demands made by DoJ aren't for reduced key size, they're for variations of key /escrow/: literally breaking the security model of crypto entirely.

That's missing the plot for the details: the DoJ wants a method by which they can break into digital safes in a manner similar to physical safes. Their proposal is key escrow, but that's partly because technologists didn't suggest a better way when the DoJ simply asked to get it done with little guidance. So they made a specific ask. And it sucks -- because they're not technologists. Everyone knows it, but the DoJ isn't inclined to let people flat out refuse.

Pretending, with transparent ruses -- like claiming there are no gradients of security -- that there aren't technical solutions is how we got to lawyers demanding technical features.

I don't disagree with you that we should use secure protocols, I'm just saying we need to hold ourselves accountable for honest and strong arguments, not ruses.

The one you end on -- that using ephemeral keys is fundamentally a stronger approach that doesn't work well with long-term taps -- is a strong argument. Much better than things like "there aren't security gradients", partly because it's actually true.


No one came up with a better "solution" than escrow because there is not one.

I've spent years of my life working on making it so people don't have to risk their information whenever it touches a computer.

Key length is a measure of how long you want the key to be secure. Also note that we tried that once in the past: DES had a deliberately crippled key space. That was still causing terrible security bugs only a few years ago.


> No one came up with a better "solution" than escrow because there is not one.

I keep seeing this statement being made whenever this topic comes up. Yet I've never seen a formal impossibility result.

It's amazing. Cryptographers are the smartest people in the world when it comes to solving most problems. (Just ask them!)

But seriously, some of the stuff they can do is like magic. Things that, intuitively, sound like they should be impossible. For example:

Zero knowledge proofs? Can do.

Oblivious transfer? Sure thing.

Fully homomorphic encryption? Coming right up!

But then the DOJ says they want some way to investigate the Texas shooter's phone without also getting access to everyone else's data. And suddenly the whole community is like "I dunno man, aren't you just asking me to 'nerd harder'? ¯\_(ツ)_/¯ lololol"

It was cute at first, but if we keep it up we're going to start burning through our credibility real soon.


> I mean -- "there is not a gradient"? ...what do you call changing key size?

It isn't that there are no levels of security, it's that you can't be at two separate levels at the same time. There is no overlap.

Mandating 512-bit RSA is useless because the government could break it but so can everybody else. Allowing 4096-bit RSA wouldn't allow the government to break it.

There is no middle ground. Mandating something like 1024-bit RSA, which is considered weak but nobody has actually broken it yet, is worse than useless. The FBI probably couldn't break it today and some hackers will probably break it tomorrow, so it would only leave people at risk without providing the government access.


>It isn't that there are no levels of security, it's that you can't be at two separate levels at the same time. There is no overlap.

What do you say to DUAL_EC_DRBG, which seems to be precisely that "separate levels" of security you claim is impossible?


DUAL_EC_DRBG is, in principle, equivalent to encrypting the same data with two separate keys, each one being able to recover the plaintext. There are no "two separate levels at the same time" here: the weakest of these two keys determines your security level. With DUAL_EC_DRBG, one of these keys is a static key which, once leaked, can break every message with a small amount of extra effort.

The same applies to the recently published DUHK attack: once the hardcoded key is known, the whole thing gets broken, showing that the security level was only as strong as the weaker key.
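
To make the "weakest key wins" point concrete, here is a hedged sketch. It is not how DUAL_EC_DRBG itself works; it just wraps one data key under both a user key and a hypothetical escrow key using the `cryptography` package's Fernet, so either key alone recovers the message.

  # pip install cryptography  (Fernet used purely as a stand-in cipher)
  from cryptography.fernet import Fernet

  user_key = Fernet.generate_key()     # the owner's key
  escrow_key = Fernet.generate_key()   # hypothetical "master"/escrow key

  # Hybrid scheme: a fresh data key encrypts the message, and that data
  # key is wrapped under BOTH the user key and the escrow key.
  data_key = Fernet.generate_key()
  ciphertext = Fernet(data_key).encrypt(b"the actual message")
  wrapped_for_user = Fernet(user_key).encrypt(data_key)
  wrapped_for_escrow = Fernet(escrow_key).encrypt(data_key)

  # Whoever holds EITHER key recovers the data key, so the message is only
  # as safe as the weaker (more exposed) of the two keys.
  recovered = Fernet(escrow_key).decrypt(wrapped_for_escrow)
  assert Fernet(recovered).decrypt(ciphertext) == b"the actual message"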


I don't see how this is a substantive reply. Yes, once your key is known you no longer have security. That's true regardless of how many keys your scheme employs.

Just like in traditional encryption scenarios, your personal key remaining secret is a part of the assumption. That there are now two secret keys doesn't alter the analysis substantially.


> Just like in traditional encryption scenarios, your personal key remaining secret is a part of the assumption. That there are now two secret keys doesn't alter the analysis substantially.

Of course it does. At best it doubles the risk of key compromise, but it's really much worse than that.

A master key isn't like a normal key. If you compromise Alice and Bob's key, you can spy on Alice and Bob, but not Alice and Carol and definitely not Carol and Dan.

The existence of a master key is a massive security risk. Alice and Bob's key is worth say $5000. The value of stealing it generally isn't worth the effort. Nobody sends Mossad to spy on every plain old Alice and Bob.

A master key for everything is worth trillions of dollars. Every government and crime family would throw everything they have at stealing it, and many of them would succeed. Spetsnaz units and foreign intelligence operatives would fall out of the sky. Crime bosses would pay multi-million dollar bribes and still turn a huge profit.

And from there it would leak.

It's a completely different level of risk. Orders of magnitude worse. And it implies a prohibition on forward secrecy. So when it leaks, Armageddon.


Let's say I grant everything you say about the danger to every Alice and Bob that a master key exists. The question is, can policies be enacted and systems put into place that provide a commensurate amount of security for the risk the backdoor key poses? It seems plausible that it can be. But this is the conversation that needs to happen, not the boneheaded claim that it's "mathematically impossible" to have a secure system with a backdoor. It's plausible that a system can be "secure" under appropriate definitions while having such a backdoor mechanism. If the analysis shows it's not possible in practice, then that's fine as well. But we have to actually do the analysis.


> The question is, can policies be enacted and systems put into place that provide a commensurate amount of security for the risk the backdoor key poses? It seems plausible that it can be.

It's plausible that the government can permanently keep a key to every lock secure against every attacker?

That is such a ridiculous claim that just skipping right to "no, that's impossible" is a completely reasonable thing to do. But we can do the analysis if you really want to.

Look at the other things the government has tried really hard to protect. Nuclear secrets? Nope.

https://en.wikipedia.org/wiki/Atomic_spies

The government has actually lost hydrogen bombs on multiple separate occasions. Not the secrets, the actual live thermonuclear weapons. The things that one of which can turn all of D.C. into radioactive glass.

What about all the sensitive information about the people with security clearances?

https://en.wikipedia.org/wiki/Office_of_Personnel_Management...

Incredibly dangerous biological materials ("arguably the most deadly disease ever to affect mankind")?

https://www.npr.org/2014/07/08/329884145/in-a-lab-store-room...

Classified information in general? The list of violations is too long to even enumerate.

And this is what happens when the thing they're supposed to be protecting doesn't have incomprehensibly large commercial value on the black market.

There is no question that they are not capable of doing this.


This is just lazy. A string of bits that only needs to be accessed behind closed doors isn't like any of those other things. What's more, that secret room can be secured arbitrarily well.


Most of those other things are a string of bits that only needs to be accessed behind closed doors. And it's not like a smallpox lab is open to the public either.

You can devise a defense against any attack you can think of, but for a trillion dollar prize someone will find an attack you didn't think of.

And that's the problem. You can say the words "secured arbitrarily well" but in practice you've created a room with the keys to the world in it and your first indication that your security was insufficient is an incalculable catastrophe.

It's like creating a button that gives anyone who presses it three wishes but destroys North America. "Don't worry, we'll put some guards around it" doesn't cut it. Some things need to just not exist.


> That there are now two secret keys doesn't alter the analysis substantially.

It doesn't, and that's the point. There are no "separate levels" of security. The attacker only has to break one of the keys, whichever is the weakest one. The "security level" of the whole system is the "security level" of its least secure part.


I wish I didn't agree with you but I do.

Security is a process, not a state. You can't say that something is "secure"; there is only more secure and less secure. What the politicians are saying is that your individual security is not as important as their responsibility in security policy.

Security + Politics = Every shade of grey conceivable and then some not yet conceived.


I don't like it either.

But as a practical matter, I think we will do more to protect privacy and security by engaging with the process and honestly addressing their concerns so we can strike a balance between conflicting societal needs than we'll do with hardline stances based on inaccuracies.

I think pretty rightly a lot of tech people got told off by the political process for misrepresenting what was possible and how technology worked in an effort to not have to obey social structures. I don't think most of us liked that (I sure didn't!), but we're not going to have everything our way (and especially not by lying or throwing tantrums). I mean, if I were a senator, I'd be thinking "So, they can secure a ledger with floating cryptographic difficulty when they want to make money, but a solution for national security is impossible? Yeah, fuck these guys." There just hasn't been the kind of open, honest discussion around the topic that would satisfy their concerns.

And the key to having some of it our way is explaining why that issue is paramount to have our way and honestly engaging in the process to make it happen. Politics is a game of compromise and negotiation -- the government is almost certainly not only willing to concede some of the things on the FBI wishlist if better alternatives are put forward, but actually is interested in doing so.

Everyone knows professional investigators ask for too much, but if no one else is putting forward honest suggestions -- what choice do politicians have?


To defend this user's point, I think a case could be made for a key escrow that requires an unlock from different organizations. RSA solved this years ago. We could establish a key escrow that adds an extra key alongside your personal key. This extra key would allow unsealing in cases where it would be needed within the law.

The extra key could be set up so that it requires X out of Y keys. Each key could be owned by different organizations, like the state govt, FBI, Courts, Nonprofit oversight committees, citizens oversight of police, and such. This could provide a balance of security and privacy, and allow in extreme circumstances a forced break of encryption.

(Ideally, it would require many orgs that are normally in opposition to agree. It would not be "state govt", "FBI", "CIA" like the old Clipper Chip.)


I as a private citizen do not want the government to have access to my files. Period. End of story. They have zero right to have access to every aspect of my life. Any "solution" that gives any government the ability to access encrypted files is not encryption, but a lie.


And that's where I have to disagree.

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." 4th Amendment, Bill of Rights, US Constitution

That's a balance between "Get off my lawn" and "I affirm I saw X illegal thing and I swear it in front of a judge, and the judge agreed."

Now, I don't trust some bureaucratic department to honor the Constitution. And from the sounds of it, neither do you. Which is why I was keeping in mind an antagonist-based key-holding scheme which would require a multitude of people to agree before the data would be unsealed. To me, that does re-enable the balance set forth in the Constitution, namely the 4th Amendment in the Bill of Rights.


I personally consider parts of the contents of my computer to actually be an extension of my mind. A daily diary for instance. Once we get to the point where we have computer chips embedded in our skull, this will be literally true.

What should be considered part of one's mind is not clear, but what is clear is that whatever the mind is, no one should be able to pry without consent.

Not commenting on what the constitution actually means, just saying what I think it should mean.


I actually think for objects like phones, the 5th amendment should apply. Phones contain way more information about a person than any other medium in history. It contains so much information that I think we should consider it an extension of the person.

At some point we will have devices actually embedded in our bodies collecting every thought. Should those not be considered part of us and thus protected under the 5th amendment?


We can also require that the hardware issuer (eg, Apple) stores an encrypted copy of the key (throwing away the key used to encrypt the original key), such that it costs $1M (or other amount) to break the encryption and reveal the key.

There's no reason it shouldn't require expense and physical breaking to gain entry, just because it's digital (and I think that this scheme gains legal protection because of such features).

By actually discussing it, instead of hiding behind lies like "there are no security gradients", we can talk about systems where it would require X amount of compute effort to break an encrypted key held in escrow for Y years at Z expense to reveal the key. (I was actually thinking about using a hash chain to timelock the key, since we have a pretty good idea of how hard it is to sequentially hash.)

I don't think most of us are against the government being able to see individual, targeted encrypted drives -- I think we object to the ability to transparently compromise all systems.
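
A rough sketch of that hash-chain timelock idea, purely illustrative: the data key is assumed to be 32 bytes, the chain length is an arbitrary example, and note that whoever creates the lock has to pay the same sequential cost once.

  import hashlib, os

  def timelock(data_key: bytes, n: int) -> tuple[bytes, bytes]:
      """Wrap a 32-byte data key so that unwrapping requires n sequential
      SHA-256 hashes of a published seed."""
      seed = os.urandom(32)
      h = seed
      for _ in range(n):                      # inherently sequential work
          h = hashlib.sha256(h).digest()
      wrapped = bytes(a ^ b for a, b in zip(data_key, h))
      return seed, wrapped

  def unlock(seed: bytes, wrapped: bytes, n: int) -> bytes:
      h = seed
      for _ in range(n):                      # must redo the whole chain
          h = hashlib.sha256(h).digest()
      return bytes(a ^ b for a, b in zip(wrapped, h))

  key = os.urandom(32)
  seed, wrapped = timelock(key, n=1_000_000)  # scale n to the target delay
  assert unlock(seed, wrapped, n=1_000_000) == key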


To be honest, I'm against the "moneyball" solution. Hardware is only going to get cheaper, faster, better. That $1m price will inexorably come down to the point that a skiddie could do it from their phone with AWS.

I still stand by the point of having a consortium of opposing interests as a combined group (or supermajority) to override an encryption. I think of it as a strong version of checks and balances.

In that case, if members are also hidden, it doesn't matter how many dollars are thrown at the problem. Unless you have peoples' willful intent, the escrow doesn't work.


Let's work through your $1M example. Assume Moore's Law continues for the foreseeable future, so the price of all computation halves every 18 months (ie, every 1.5 years).

Then in 15 years, that $1M computation still costs $1000.

In 30 years, it can be done for one dollar.

Is that good or bad? I guess it depends on your threat model and what the data is worth.
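
A quick sanity check of that arithmetic, assuming a clean halving of compute cost every 18 months:

  # $1M of computation today, with cost halving every 1.5 years.
  initial_cost = 1_000_000
  for years in (0, 15, 30):
      cost = initial_cost / 2 ** (years / 1.5)
      print(f"after {years:2d} years: ${cost:,.2f}")
  # after  0 years: $1,000,000.00
  # after 15 years: $976.56
  # after 30 years: $0.95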


At a technical level, that supposes the file you want to decrypt still exists in X years.

My actual thought was to have the secure enclave emit an encrypted copy of the key with a targeted key strength when presented with a request signed by Apple's key. It would require Apple's participation (or compromising Apple) but still require that the person spend a significant amount of money on the process.

By having it encrypt a key, you can make normal messages much stronger, such that you can't decrypt messages without the device in question (because the key can't be attacked directly, only the weakly encrypted version of the key when the secure enclave shares it). Further, because you can change how strong the SE emitted key is with each revision, you can have new phones always have a 10 year expected safety window (and even turn up the difficulty over time). In the case of a total compromise of Apple's storage, the attacker still has to spend significant funds to compromise any given phone -- so we'll only see targeted attacks. (That is, they might say, crack Bill Gates' phone, but are they really going to spend hundreds of thousands a pop to break the keys of random Starbucks workers? I'm honestly not super worried if Bill Gates has to spend a few thousand extra dollars every few years to protect his billions.)

But we're never going to get to discuss those kinds of scoping and cost-benefit tradeoffs if we don't engage in the process of shaping legislation in an open and honest way.


Oh, that's cool. Thanks for sharing.

Full disclosure: I've been working on some similar ideas for a while. I'll be presenting a high-level pitch for the general concept at the USENIX Enigma conference in January [1]. Also hoping to have a full paper to share on https://eprint.iacr.org/ sometime later this month.

> But we're never going to get to discuss those kinds of scoping and cost-benefit tradeoffs if we don't engage in the process of shaping legislation in an open and honest way.

Agreed. And I'll actually take it one step farther. I think it might make sense for tech companies to adopt a very conservative version of this approach even without a government mandate.

Then the next time the DOJ rolls around to demand somebody turn over their private key (a la Lavabit), or to demand that somebody create a custom OS for them (Apple), we can say "No, we already gave you a way in, just pay the million dollars. Now bugger off and leave me alone."

[1] https://www.usenix.org/conference/enigma2018/


Apple never has your keys to store.

That's kind of the point of end to end encryption. Note that on osx/ios that same e2e encryption protects credit card and password data.

Saying company X should store their keys is the same as saying "Company X should paint a giant target on their servers, which have to be weakly protected to appease requests from agency Y". The solution to unending breaches of user data is to not be able to decrypt it, and the only way to achieve that is to never have the keys.


This might be unpopular (and is certainly counter to the government's desires), but I am absolutely against breakable encryption of any sort. I am absolutely fine with criminals being able to use strong encryption to make data impossible to ever be read by the government.

The bit I take issue with here is the claim that breakable encryption is absolutely necessary for law enforcement to do its job. No. It makes it _easier_ for them to do their job, at the expense of everyone else's security. There are usually other ways to get a conviction besides being able to decrypt a criminal's data. And if in some instances there aren't, I'm ok with that. I value freedom and privacy more highly.


I didn't downvote you, but ultimately either someone else can get in or they can't; the fact that keys have varying sizes is tangential to this issue, since the size chosen only needs to be one that is sufficient. The fact that I don't know what that value is doesn't change the fact that the outcomes are binary (secure|insecure).


Your initial assumption seems to be wrong.

In cryptography there is one information-theoretically secure scheme: the one-time pad. But even that relies on circumstances to keep it secure. Without limitations you cannot say no one can break it, because obtaining the key material might still be possible.

That leaves us with most other schemes. They are computationally secure. This implies that the security of the system depends on the computational power of the attacker. So a system can only be secure for a class of attackers and to a certain extent.

If you apply a binary classification of secure/insecure as you proposed ("someone can get in or they can't"), most systems today, if not all, are insecure.


>most systems today, if not all, are insecure

I'd say that's a fair assessment.


The "thick heads of congress" will see how you've been proven wrong in other comments, allowing them to follow their instinct to ignore arguments from people who start with an ad hominem.

You may think you're taking a strong stance for privacy and freedom. But anyone who isn't already in your corner, and sees reality through similarly myopic lenses, will only be put off by such blatantly obvious falsehoods.

Technologists tend to follow some variation of the "law of the jungle": disagree with the Fed's policies? Create a currency that follows no policy except the one dictated by its algorithm! In this endeavor, they commit two mistakes:

(a) Confusing what is with what ought: the inability to do something, sometimes almost true (monetary policy for bitcoin), sometimes obviously false (two-key encryption), is used as an argument to shut down debate.

(b) The refusal to meaningfully engage with any argument that is not narrowly about tech. They believe any such arguments aren't "objective", and that they can sidestep it with technology. Yet this fails to see that of course every algorithm or other technology is the child of ideology by just another name.

In doing so, technologists usurp powers that aren't theirs: if the gold standard is the better monetary policy, you're supposed to convince enough people to get Rand Paul elected president. Yes, politics is deeply frustrating, because everyone is just wrong, all the time. But it's still a much better decision-making process than five techbros in China noticing their hashrate has made them king.


This has been the refrain for 35 years. Open source strong encryption is everywhere now. The horse has left the barn, farm, county, state, and is currently swimming the Pacific.

Police have other ways to fight crime. Eventually we may be forced to deploy the ultimate weapon, namely correcting the social, economic, psychological, and neurological factors that breed it in the first place.


Perhaps the DOJ should take the first step - implement a proof of concept and run all their computers, servers, and communications on it, with a trusted third party holding the keys, say the house judiciary committee.


I understand law enforcement's frustration. However, this is quite simple.

Encryption relies on keys. Either the government has everyone's keys, or they don't.

Any stockpile of encryption keys would give access to millions of people's and businesses' data. It would be a hacking target of inestimable value, targeted by criminal organizations and foreign governments using every technique imaginable.

It would be stolen. Period. And every citizen and business would suffer disastrous consequences.

We can't say this enough in this debate: making everyone's keys accessible to one entity means they absolutely will be stolen. Whether we trust a government entity's motives isn't even relevant. They do not and cannot have perfect security, and that should end the debate.

If anyone doubts that the keys would be stolen, please see:

- https://en.wikipedia.org/wiki/Office_of_Personnel_Management... - https://motherboard.vice.com/en_us/article/qkjkxv/fbi-flash-...


> It would be a hacking target of inestimable value, targeted by criminal organizations and foreign governments using every technique imaginable. It would be stolen. Period.

And we really don't have to look any further than the Equifax hack to prove this. If they think that was bad, and it's clear that nearly all of them do, then imagine it was more than just identity data... what if it was all your actual data? If politicians know that all of their email, their internet history and all of their other secrets will get out if this happens, I wonder if they'll change their minds.


Oh - politicians won't have their data subject to the backdoors, only the rest of us.


I can’t overstate how important it is for Google, Apple, and Microsoft to keep hammering at this. It’s working. The DOJ is running into this a lot now (e.g. can’t get into a drug dealer’s iPhone to see who he’s messaging). Eventually, they’ll see that the ship has sailed and encryption is just something they have to deal with.


That's an incredibly optimistic take on this problem. Google, Microsoft, EFF, et al need to win every single battle or they lose the whole war.

The DOJ just needs to find one sympathetic test case, or one sufficiently horrible incident to get the laws changed. Like the Patriot Act in the US, or the new French surveillance law that passed after the Bataclan attacks.

It's the classic asymmetry problem that makes security so hard in general. Only now, the party with all the money and power and time is also the one who only needs to win once.


What did the police do before there was the internet or phones?


I imagine they spent a lot of time solving crimes, asking questions like:

"Where did they keep all their papers and correspondence?"

"Can somebody come break into this safe we have a warrant for?"

What did the cops do before X was invented?

They didn't worry about X being used to commit or cover up criminal activities while continuing to try to do their job of keeping communities either safe or oppressed, depending on how well they related to them.

I support strong crypto, but I think implying detectives and the DoJ are just too lazy or dumb or whatever to deal with this problem is a little unfair.

Apologies if that wasn't the implication


Except in most circumstances, there wouldn't have been any papers, because nobody was recording all their conversations. Nowadays, everything is happening digitally and is stored indefinitely by default; and law enforcement feels like they deserve access to all that.

The attempts at preventing ubiquitous encryption don't seem to be focused on crimes where there would have been a paper trail if people still used paper; they are focusing on reconstructing the last year or so of someone who is either dead or uncooperative, all in hopes of finding something they can use.

If they were targeting organizations with a bureaucracy for some crime, I think they could just demand access to all documents and get them? If I remember correctly, in the Levandowski case Google's lawyers got access to a huge trove of Uber's internal emails. If a private company can get that access, it should be possible for law enforcement as well, no?


Right? Apparently police never solved crime before they were allowed full access to every aspect of your life.


To be fair, it used to be that they didn't really solve most crimes.


there has never been a point where they solved most crimes. The difference is now people just happen to be carrying more information with them in their pocket than they ever had before.


To be fair, they still don't


They did just what they do now: lobby to get more laws to either (1) gain more leverage to coerce compliance or (2) criminalize actions which are tangential to the behavior they want to coerce.

I realize your question was rhetorical.


In my nation, and as I hear in many other places, the police force was much more decentralized in the past. You had local police handling problems at the local level using local knowledge. As I hear/read it, this was a bit more costly, had the occasional problem of local corruption, quality had a bit more variance, and organized crime was a bigger problem, but the average crime was solved better and faster. There was also no mass surveillance.


That's also because criminals needed local knowledge. If I wanted to commit a crime today in another city, I could get there easily and quickly and get all the info I need online. In the past, small cities would have noticed strangers, but (luckily) you can walk through other cities nowadays without people objecting to it. For the police that means that they need to follow criminals and be able to share information nationally and internationally.


I'm no supporter of insecure cryptography, but before there was internet or phones and access to such, lots of crimes went unsolved.


Clearance rates for homicide have _dropped_ over the past 50 years:

  America’s homicide clearance rate—the percentage of solved
  crimes that lead to arrest—has fallen considerably in the
  past 50 years, from around 90% in 1965 to around 64% in
  2012, according to federal statistics.
(https://www.economist.com/news/united-states/21656725-police...) (See, also, https://www.citylab.com/equity/2017/06/police-arent-getting-...)

IMO, similar to counter-terrorism efforts (most spectacularly, 9/11), technology becomes a crutch. Frankly, I wouldn't be surprised if mass encryption leads to _improved_ clearance rates, as law enforcement becomes less complacent. Take this latest example: the FBI agent declines Apple's help because he's convinced the geeks at the FBI lab can handle it. He presumably doesn't bother actually confirming with the geeks in the lab, nor does he attempt to put Apple in touch with the lab. Just utter complacency, confident that the machine (computers, bureaucracy) will suffice.

This sort of laziness wouldn't have been tolerated in Hoover's FBI. Moreover, Hoover likely would have been more aggressive using the tools available to him--keyloggers, etc--rather than whining.


Given that we're now learning that many of the convictions won last century were based on bad evidence and corrupt behavior, I don't really put much stock into those figures.


Wouldn't that be a change in data collection more than in the absolute quality of problem-solving? Japan, for example: I remember reading that their near-perfect homicide clearance rate is actually because they'll classify it as "fell down some stairs" if they can't solve it. Particularly in the US with its history of social issues, I can easily see a ton of homicides in the 50s just never being written down.


I lost the link, but another newspaper article said that the clearance rate for homicides of whites dropped from, IIRC, 90% to 85%. So, yes, part of it is social issues. But in any event, it's an interesting fact that official clearance rates have moved in the opposite direction of advances in technology and ease of remote surveillance. Technology doesn't solve our social issues, and in fact can exacerbate them.

Regarding Japan, yes, their official rates are questionable. I wish I could find the journal article, but last year I was researching the supposed 100% clearance rate in Singapore and came across a very in-depth article[1] that discussed clearance rates in Singapore, Japan, and elsewhere in Asia and that left me with the impression that whatever the actual clearance rates, they're nonetheless _much_ better than in the U.S.

[1] Spoiler: the 100% clearance rate in Singapore was plausible, in no small part because it's a small city-state with very few homicides to begin with.


Or homicides that were attributed to some person that it was convenient to lock away and who were unable to prove their innocence.


A good example of this is sexual assault cases on campus. A lot of times they try to handle them without getting the real police involved, or attempt to downplay everything about it to keep the statistics pointed one way.


They would open letters, tap phones and watch bank accounts. All of those things can be subverted now. That said, they were just lucky back then... it doesn't mean everyone should suddenly give up their right to privacy now.


The wording of this brings up a worrisome point. What encryption methods does the DOJ currently have access to? Why are they complaining about needing access to this encryption now? Is it because other, previous encryption methods are known to be broken, or because they already have access to that data?


Probably because so many communications are moving to encrypted by default, HTTPS everywhere and WhatsApp for instance (as Brazil has found out). If the big players decide to switch to E2E then governments would need to get them to change their products. Better to head them off before it is too late.


This is exactly it. The Going Dark Problem: https://www.fbi.gov/services/operational-technology/going-da...


You can't legislate reality. You can't legislate physics, or the value of pi, or whether or not it's technically possible to create nuclear weapons. The fact is that strong public key encryption is possible with a few lines of code. No matter how hard they jump up and down, the universe is not going to put that genie back in the bottle because someone writes a law or an executive order.
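
For what it's worth, even toy "textbook RSA" really is just a few lines. The primes and message below are deliberately tiny, with no padding or key hygiene, purely to show that the core math can't be legislated out of existence:

  p, q = 61, 53
  n, phi = p * q, (p - 1) * (q - 1)
  e = 17                       # public exponent, coprime to phi
  d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

  m = 65                       # a "message" encoded as a number < n
  c = pow(m, e, n)             # encrypt with the public key (n, e)
  assert pow(c, d, n) == m     # decrypt with the private key d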


I tend to think that encryption should not be regulated, but I don't agree with your analysis either; it's too dismissive of a much more nuanced reality.

You can very easily make it so all consumer devices ship with software that only uses encryption that the government has escrowed keys for. You can require app stores to have the same requirement. The government requires all sorts of things of people who manufacture products. Of course, they can't regulate the 3d printer or CNC mill in your garage, but that doesn't make regulating mass-produced products a futile effort.

Yes, some people will get around it. They will use open-source software that they download and install themselves on devices that allow sideloading apps. But no one in the government expects perfection out of this. They expect that most criminals, most terrorists even, are not so sophisticated, not so careful, that they will avoid being ensnared.

Remember that the situation they are trying to avoid is one where EVERYONE'S texts and phone calls are by-default hopelessly inaccessible to the government, even with a warrant (or in the case of foreign targets, even with the most sophisticated HUMINT and SIGINT).


> You can't legislate [...] the value of pi

Amusingly someone (inadvertently) tried: https://en.wikipedia.org/wiki/Indiana_Pi_Bill


I think people not having access to the details of their government is unreasonable.


"law enforcement equities"

What an odd phrase.

We have a criminal justice system in this country that is adversarial and is tilted in favor of the accused. That's because our founders realized the immense power of the state could easily overrun any person it wanted to unless there were strict and tight guards on what they could do.

These lawyers, who presumably should know much more about all of this than I do, continue to make cases that strike me as "There are bad people! Because they are really bad, we need to change the game to give us more power"

But there have always been bad people. There always will be. There is no stopping that fact. It is part of being human.

I wonder if these people realize that even if they continue to get their way, the only thing they'll end up doing is moving the really bad people from the private sector to the government. I get the feeling they slept through a large part of world history.

I continue to hear arguments that sound reasonable. I continue to hear wonderfully intricate arguments. What I've yet to hear is any of these yahoos recognize exactly what kinds of trade-offs they're pitching. I get the feeling I'm watching very poor workmen, focused on the tiny job in front of them instead of the ramifications of that job. I don't think we need to argue that many of these people are wrong as much as we need to argue that many of these people are incompetent. It doesn't bode well for the future.


I'd argue that any backdoored encryption (which renders the plaintext accessible to the government and to any other entity with access to the backdoor key) is inherently irresponsible. It introduces a single point of failure that will be routinely exposed in the course of ordinary criminal investigations. If US technology companies rely on this encryption to protect their trade secrets, then it's only a matter of time before China finds a way to exfiltrate the backdoor key.


From a practical point of view, I'm not sure there are any criminal activities that have happened because of strong encryption being available. I mean, with things like the 9/11 attacks it didn't seem to make any difference. On the other hand, encryption is very useful for financial transactions and the like. It seems the benefits considerably outweigh the problems.


If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.

-- James Madison


Governments are the next on the Silicon Valley ‘disrupt’ list.

The music industry had to be dragged kicking and screaming away from their physical distribution model.

Governments will have to be dragged similarly until they accept encrypted content is something they have the same right of access to as private thoughts.


The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

- George Bernard Shaw


encryption that the DOJ has access to, isn't.


I'm under the impression that encryption should be covered by the second amendment as an 'arm'. Just like guns, encryption is there to protect us from malicious parties. If only we could get the NRA or some other, more ethical gun club to endorse it as such.


"Every situation we can't cheat our way out of is unfair."


These complaints have always felt like disinformation to me. I assume they have larger capabilities than they let on; it's just that they can't prosecute without revealing that.


The moment that the US government stops complaining about encryption is when you will know that it has found a way through it.


That would be too transparent. Also, if they really crack popular encryption, only a few people will know, to avoid any leaks. Maybe they'd complain even more to give people the feeling that they cannot crack that particular method.


Me: Strong encryption that they do have access to is "unreasonable".

Thankfully they work for us right?


Is this crazy or not?

And if it's crazy, what's the best way to argue against it?


If they want to convince me to accept some kind of back door in my encryption, they have to propose a system where it can be shown that it cannot be abused by bad actors within my government, and where there are clearly stated public rules about when it can be used.

It is possible to design such a system, where the probability of abuse is arbitrarily low [1], but I have a hard time imagining the current DOJ proposing such a thing.

[1] key escrow with access controlled by a multilevel secret sharing system that requires consensus among a diverse international group of shareholders to release the key from escrow. The shareholder group is chosen so that it includes a mix of public and private entities in a variety of jurisdictions, including anonymous shareholders, so that no entity can acquire enough power or influence to force a key to be revealed.


> key escrow with access controlled by a multilevel secret sharing system that requires consensus among a diverse international group of shareholders to release the key from escrow. The shareholder group is chosen so that it includes a mix of public and private entities in a variety of jurisdictions, including anonymous shareholders, so that no entity can acquire enough power or influence to force a key to be revealed.

Intelligence agencies will just steal the keys, and then individual actors leak them onto the black market.


The probability of that can be made arbitrarily low by proper choice of parameters for the secret sharing system, at least against realistic threats over realistic timeframes.


I think you vastly underestimate what intelligence agencies operating outside the bounds of the law are capable of doing.


What are some examples of such parameters?


That's a good question. We should probably be cooperating to find out instead of asking it to silence ideological opponents. It might be useful to know for transient key secure multiparty computation.


Definition: a "(t, n) threshold system" is a method of taking a b-bit number, s, and producing n b-bit numbers with the property that someone who has access to t or more of those n numbers can reconstruct s, but someone who has access to less than t of the numbers can learn nothing about s. The n b-bit numbers are called "shares".

Suppose you use a (5,5) threshold system to make 5 shares of a secret, and you distribute those shares to 5 shareholders. Someone who wants to get your secret without your cooperation has to convince all 5 shareholders to cooperate (or steal copies of the shares from all 5 of them).

If that does not provide a sufficient level of protection, you could instead go with a (6,6) or (7,7) or higher threshold system. The higher you go, the less likely it is that a bad actor will be able to get copies of all the shares.

A drawback of that approach is that if just one shareholder loses their share, the secret is not recoverable. That can be addressed by increasing n more than t. Instead of say, a (7,7) threshold system maybe you use a (7,10).

That's the basic idea. You set t high enough that the chances that a bad actor, even a powerful one, could subvert t different shareholders is low enough for you, and you set n above t a bit to allow for some shareholders losing their shares or being unavailable if the time ever comes when the shareholders decide that there is a legitimate reason to recover your secret.
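
A minimal sketch of such a single-level (t, n) scheme, Shamir-style, over a prime field. The parameters, field choice, and example secret are just assumptions for illustration; the multi-level variant described next is built by applying the same construction recursively to each share.

  import secrets

  PRIME = 2**127 - 1   # field size; the secret must be smaller than this

  def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
      """Split `secret` into n shares; any t reconstruct it, fewer reveal nothing."""
      coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]

      def f(x: int) -> int:
          # Evaluate the random degree-(t-1) polynomial with f(0) = secret.
          return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME

      return [(x, f(x)) for x in range(1, n + 1)]

  def recover_secret(shares: list[tuple[int, int]]) -> int:
      """Lagrange interpolation at x = 0 over the prime field."""
      total = 0
      for xj, yj in shares:
          num = den = 1
          for xm, _ in shares:
              if xm != xj:
                  num = (num * -xm) % PRIME
                  den = (den * (xj - xm)) % PRIME
          total = (total + yj * num * pow(den, -1, PRIME)) % PRIME
      return total

  # (3, 5): any 3 of the 5 shareholders can rebuild the secret.
  shares = make_shares(123456789, t=3, n=5)
  assert recover_secret(shares[:3]) == 123456789
  assert recover_secret(shares[2:]) == 123456789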

Going beyond the basic idea, you can add a second level. You take your secret, and make, say, 4 shares using a (4,4) system. Let's call these "level 1 shares". Instead of giving the level 1 shares to shareholders, we can take each level 1 share, use a separate threshold system to make shares of the level 1 share, which we call "level 2 shares", and distribute the level 2 shares to shareholders.

What this does is let us make different categories of shareholders, with different weights.

So we might make 4 level 1 shares, using a (4,4) system. Call these s1, s2, s3, and s4. We then might apply a (3,3) system to s1, and give the resulting shares to whatever agency or department or branch handles warrants in 3 separate foreign national governments.

s2 we apply a (3,6) system to, and give those 6 shares to 6 non-government civil rights organizations.

s3 we apply a (3,6) system to, and give those to 6 individuals that we trust.

s4 we apply a (2,3) system to, and give those to three commercial entities that offer shareholding as a service. We should pick entities in 3 different countries, separate from the countries we gave s1's shares to.

With that scheme, someone trying to get at our secret needs to get 3 foreign governments, 3 civil rights organizations, 3 people we trust, and 2 companies to all agree that giving up our secret is justified.

As with the single level approach you can bump the particular numbers up or down to decrease or increase the chances that someone can illegitimately get your secret.


I don't see how limiting access to a key escrow is sufficient to reduce the probability of abuse or exploitations.

Let's assume we have this "diverse group of shareholders" all with private keys and published public keys. The process for distributing and revoking those public keys adds a lot of complexity and points of failure.

Up to this point, the idea "works" but still provides a non-negligible increase in implementation complexity and decrease in security.

Once I use all those public keys to encrypt my private key to place in escrow, it is impossible to verify whether I have complied with the law and honestly provided an accurate escrow entry without actually having all those shareholders decrypt my private key and verify that it correctly decrypts my content. At that point my content is exposed anyway.

It is NOT possible to create a secure, enforceable and un-abusable key escrow system.


> diverse international group

No government would ever agree to give multiple foreign governments/NGOs the veto power over tools that are "important for national security".

More realistically, your "shareholder groups" in any plan the government might actually approve would be existing groups the government is already involved with (e.g. Equifax).


Countries are known to do this on big issues; see, e.g., NATO.



