Kevin Drum is displeased with the big new NSA scoops, writing a post entitled “Snowden Disclosures Finally Hit 12 on a Scale of 1 to 10”:
Nonetheless, this is truly information that plenty of bad guys probably didn’t know, and probably didn’t have much inkling of. It’s likely that many or most of them figured that ordinary commercial crypto provided sufficient protection, which in turn meant that it wasn’t worth the trouble to implement strong crypto, which is a bit of a pain in the ass…Now every bad guy in the world knows for a fact that commercial crypto won’t help them, and the ones with even modest smarts will switch to strong crypto techniques that remain unbreakable. It’s still a pain in the ass, but it’s not that big a pain in the ass.
For what it’s worth, this is about the point where I get off the Snowden train. It’s true that some of these disclosures are of clear public interest. In particular, I’m thinking about the details of NSA efforts to infiltrate and corrupt the standards setting groups that produce commercial crypto schemes.
But the rest of it is a lot more dubious. It’s not clear to me how disclosing NSA’s decryption breakthroughs benefits the public debate much, unlike previous disclosures that have raised serious questions about the scope and legality of NSA’s surveillance of U.S. persons. Conversely, it’s really easy to see how disclosing them harms U.S. efforts to keep up our surveillance on genuine bad guys.
I’m always annoyed by hyperbole inflation. If we’re beyond 100 percent “damage” (which I would define, at a minimum, as firing every single security agency employee and contractor and razing their buildings to the ground), then why not 40 out of 10? Or a quadrillion out of 10? The sky’s the limit! Wait, no it’s not! But never mind.
It’s possible, even likely, that in response to this story some “bad guys” (sheesh) will switch to secure forms of communication that the NSA is unable to crack. The problem is that the NSA was mounting just about the most sweeping anti-encryption effort imaginable in order to spy on a tiny minority of people, and undermining all sorts of critical internet infrastructure to do so.
We’ve seen this again and again with the security state. Instead of making even a token effort to target their surveillance at suspected bad guys, they just take as much as they can possibly get and say “trust us.” As I said previously, most of these efforts involve weakening cryptographic protocols and implementations across the entire internet and building backdoors into commercial software. People might believe the NSA won’t abuse that capability, but I think history shows that no one is to be trusted with that kind of secret power.
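To make the danger concrete, here is a minimal toy sketch in Python of what a deliberately weakened key generator could look like. It is purely illustrative — the constant, function names, and scheme are my own invention, not any documented NSA backdoor — but it captures the essential point: once a secret weakness exists, anyone who learns it can exploit it just as easily as whoever planted it.

```python
# Toy sketch only (hypothetical scheme, not any documented NSA technique).
# The library below appears to hand out random 256-bit keys, but its output
# is fully determined by a planted constant plus a coarse timestamp.
import hashlib
import time

# Assumption for illustration: a secret value known to whoever weakened the library.
PLANTED_CONSTANT = b"hypothetical-backdoor-value"


def generate_key(timestamp=None):
    """Looks like a fresh random 256-bit key, but is derived entirely
    from the planted constant and the second at which it was created."""
    if timestamp is None:
        timestamp = int(time.time())
    return hashlib.sha256(PLANTED_CONSTANT + str(timestamp).encode()).digest()


def recover_candidate_keys(approx_time, window=3600):
    """Anyone who learns the constant -- not just its creator -- can
    brute-force a one-hour window of timestamps and enumerate every
    key a victim could plausibly have generated."""
    return {
        hashlib.sha256(PLANTED_CONSTANT + str(t).encode()).digest()
        for t in range(approx_time - window, approx_time + window)
    }


if __name__ == "__main__":
    now = int(time.time())
    victim_key = generate_key(now)
    # The attacker needs only the constant and a rough idea of when the
    # key was made to recover it outright.
    print("key recovered:", victim_key in recover_candidate_keys(now + 120))
```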
Furthermore, there’s no reason in principle that the security holes the NSA is blasting everywhere will be exploited only by the NSA. A clever enough hacker, criminal, spy, or even terrorist might take advantage of NSA-created weaknesses. As Steve Randy Waldman says:
NSA faces a conflict of mission. The organization’s more famous, swashbuckling “signals intelligence” is about maintaining a digital offense. It relies on adversaries using vulnerable systems. NSA discovers (or purchases) uncorrected “exploits” in order to break into the systems on which it hopes to spy. Normally, a good-guy “white hat” hacker who discovers a vulnerability would quietly inform the provider of the exposed system so that the weakness can be eliminated as quickly and safely as possible. Eventually, if the issue is not resolved, she might inform the broad public, so people know they are at risk. Vulnerabilities that are discovered but not widely disclosed are the most dangerous, and the most valuable, to NSA for intelligence gathering purposes, but also to cyberterrorists and foreign adversaries. There are tradeoffs between the strategic advantage that comes from offensive capability and the weakness maintaining that capability necessarily introduces into domestic infrastructure. If the mission is really about protecting America from foreign threats (rather than enjoying the power of domestic surveillance), it is not at all obvious that we wouldn’t be better off nearly always hardening systems rather than holding exploits in reserve. Other countries undoubtedly tap the same backbones we do (albeit at different geographical locations and with the help of different suborned firms). Undoubtedly, passwords that nuclear-power-plant employees sloppily reuse occasionally slip unencrypted through those pipes.
So I think the tradeoff of disclosure here was definitely worth it.