ISC Handler Rob sent the team a draft RFC currently under review by the IETF that seemingly fits quite nicely in the "What could possibly go wrong?" category. Take a second and read Explicit Trusted Proxy in HTTP/2.0, then come back for further discussion.
Collect your jaw from the floor, and recognize that what's being proposed "buggers the CA concept and browser implementation enough to allow ISPs to stand up 'trusted proxies' to MITM and cache SSL content in the name of increasing performance." Following are highlights of my favorite content from this draft.
Maybe I'm reading this wrong and don't know what I'm talking about (common), but we think this draft leaves much to be desired. What do readers think? Imagine this as an industry standard in the context of recent NSA allegations and other similar concerns. Feedback and comments are invited and welcome.
Russ McRee 175 Posts ISC Handler
Feb 24th 2014
I haven't read this all carefully, but I think the draft is probably talking not about ISPs but about an enterprise wanting to use its Internet proxy to decrypt and monitor HTTPS traffic; they just didn't make that totally clear. This of course is good for security pros and is not necessarily a huge increase in risk for the user, assuming the enterprise can already monitor what they're doing at the host or network level.
Note the draft defines a "trusted proxy" as one where a dummy root CA cert from the proxy's internal CA has already been imported onto the client. Importing this cert is necessary before the client will trust the dummy certs that proxies like Blue Coat SG generate and send to clients on the fly. All of these things are already possible and are being done today -- it's just that the current protocol implementations need improvement, and I suspect that's what the linked discussions in the draft are about.
For example, what happens if the proxy finds a site with an invalid cert, because maybe there's a man-in-the-middle attacker hijacking the session? Most enterprises can't just reject all invalid certs; that would break too many sites. Ideally the proxy would notify the browser and user of the issue and let them decide whether to proceed (I know, it's not perfect), but I'm not sure this is always possible to do. Note also that the first quote in bold text in the above article refers to a gap in today's current state, not how the future state should work.
If you don't want this inspection to happen, use a client cert and the proxy will probably let the session through without decrypted inspection. That might feel safer for the user, but it can be riskier for the enterprise and its security professionals: if the organization cannot inspect outbound HTTPS, then malicious insiders, malware, and attackers on internal computers can perform outbound protocol tunneling through the proxies with little to no proxying or protocol inspection.
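To make the "generated on the fly" part concrete, here is a rough sketch of what an intercepting proxy does for each site it decrypts, assuming Python's cryptography package; the function name and the ca_cert / ca_key variables are placeholders of mine, not anything specified in the draft:

```python
# Sketch only: how an intercepting proxy mints a per-site leaf cert signed by
# its internal CA. ca_cert / ca_key are the proxy CA's certificate and private
# key, loaded elsewhere; the hostname comes from the client's request.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa


def mint_dummy_cert(hostname, ca_cert, ca_key):
    """Issue a short-lived leaf certificate for hostname, signed by the
    proxy's internal CA, so a client that trusts that CA accepts the
    intercepted session without warnings."""
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)            # chains to the imported root
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=7))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                       critical=False)
        .sign(ca_key, hashes.SHA256())           # signed with the proxy CA key
    )
    return cert, leaf_key
```

The only reason the client accepts that forged leaf is that the proxy's root was pushed to its trust store beforehand, which is exactly the precondition the "trusted proxy" definition assumes.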
Anonymous
Feb 24th 2014
CheckPoint and Websense already do HTTPS "MITM" proxying. I refuse to use my work's internet on that basis alone.
We don't need this.
Anonymous
Feb 24th 2014
Reading The Register, it would seem this is an important discussion not just limited to telcos, and that all concerned should comment.
http://www.theregister.co.uk/2014/02/24/saving_private_spying_cryptobusting_proxy_proposal_surfaces_at_ietf/
http://lauren.vortex.com/archive/001076.html
carol 10 Posts
Feb 25th 2014
Quoting Anonymous: "CheckPoint and Websense already do HTTPS 'MITM' proxying. I refuse to use my work's internet on that basis alone."
Yup, and they do a great job. How do I know? I see all of the garbage that people are clicking on that comes from compromised legitimate HTTPS sites. Even a few years ago that might be once a quarter, if that. Now it's a couple almost every day.
If you have valuable data you need to protect, whether it's customer data or intellectual property, and you're NOT doing HTTPS inspection, everything your wonderful, computer-literate, security-aware employees click on goes straight to the desktop AV for remediation. You know, every site that is at the top of the list for the search engine they use. Not a problem though, because desktop AV is so effective now, something like 95%, right? (Oh wait, that's the current "miss" rate. Sorry.) The malware is already inside your network at that point, and with multiple exploits from a single site being the norm nowadays, you're negligent if you know of this threat and don't bring it up as something that should be mitigated.
So yeah, I'm happy that you don't Internet surf at work. Do your personal stuff at home.
Anonymous
Feb 25th 2014
This post (http://hillbrad.typepad.com/blog/2014/02/trusted-proxies-and-privacy-wolves.html) helped to clarify the draft for me.
After reading it and rereading the draft, it is apparent that the draft does not intend to affect end-to-end HTTPS connections at all. This is not MITM of end-to-end HTTPS. Rather, it intends to provide for on-the-fly upgrading ONLY of HTTP traffic to HTTPS between the user and the proxy (e.g., ISP). While this is an interesting idea, it seems to be a ridiculously kludgey approach, which would cause more problems and introduce more pitfalls than it "fixes".
Even after defending the concepts of the proposal, Hill also states: "One thing this whole episode has finally convinced me of is that “opportunistic encryption” is a bad idea. I was always dubious that “increasing the cost” of dragnet surveillance was a meaningful goal (those adversaries have plenty of money and other means) and now I’m convinced that trying to do so will do more harm than good. I watched way too many extremely educated and sophisticated engineers and tech press get up-in-arms about this proxy proposal, as if the “encryption” it threatened provided any real value at all. “Opportunistic encryption” means well, but it is clearly, if unintentionally, crypto snake-oil, providing a very false sense of security to users, server operators and network engineers. For that reason, I think it should go, to make room for the stuff that actually works."
T 31 Posts
Feb 25th 2014
This is a draft, right?
T 1 Posts
Feb 25th 2014
Having recently started looking into rolling out HTTPS interception (for the reasons stated by others here) at my $DAYJOB$, I gotta say I'm not really a fan of fundamentally breaking SSL as a result. Yeah, you can push out a new CA cert to all of your Windows machines and coerce your *nix users to import it into their browsers. But it breaks IM clients that use HTTPS. It breaks Linux systems trying to download security patches and/or updates via HTTPS.
We found it broke any number of apps that use HTTPS, and they often don't provide useful errors or warnings about why they stopped working after perceiving a MITM attack on their HTTPS traffic. For instance, some Linux desktops just stopped seeing that there were any new updates available. No mention of "Oh, by the way, I can't talk to any of the repositories I use to look for updates anymore." There are just too many different apps, many with their own list of certs they trust (totally separate from what other apps or the OS might use), that break when you perform your own MITM attack (even if done with the best intentions, that's what it is).
Yeah, users will click on anything, and HTTPS links then side-step a lot of the filtering we do. But this should be viewed the same way we view anti-virus: as just another layer of security, not a silver bullet that avoids the need to educate users.
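To illustrate the failure mode (this is just a sketch of mine, not from any vendor documentation; the CA bundle path is made up), here's what it looks like for a Python client that builds its own TLS context instead of using the OS trust store:

```python
# Sketch only: an app with its own trust store breaks behind an intercepting
# proxy until the proxy's root cert is added to that app's bundle. The repo
# URL is arbitrary and the bundle path is hypothetical.
import ssl
import urllib.request

UPDATE_URL = "https://archive.ubuntu.com/"            # any HTTPS repo/update URL
PROXY_CA_BUNDLE = "/etc/pki/corp/proxy-root-ca.pem"   # hypothetical path

# Without the proxy root in this bundle, the handshake fails with
# CERTIFICATE_VERIFY_FAILED -- or the app just silently reports "no updates".
app_ctx = ssl.create_default_context(cafile=PROXY_CA_BUNDLE)

try:
    urllib.request.urlopen(UPDATE_URL, context=app_ctx, timeout=10)
    print("handshake OK: this app trusts the proxy root")
except (ssl.SSLCertVerificationError, OSError) as err:
    print("handshake failed: proxy root not in this app's trust store --", err)
```

Multiply that by every app that ships its own certificate list and you get the silent breakage described above.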
Brent 112 Posts
Feb 26th 2014
Another issue is when mutual authentication between client and host is leveraged. It is not mainstream, but I know Bluecoat can get in the way of retrieving personal certs online from Verisign because of the need for mutual authentication. I imagine this is the case for any HTTPS proxy or MITM-like implementation. It would be problematic in many other situations as well, as Brent pointed out.
How is the distribution of "trusted" certs to be implemented? Sorry, for some reason the IETF site is slow to load for me. I can see this implemented at an enterprise level with exceptions for known issues. Broader than that, and implementation becomes problematic, starting with cert distribution and going on from there.
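For anyone wondering why mutual authentication is such a hard stop for these proxies, here is a minimal sketch of the server side using Python's ssl module (the file names are placeholders): the server demands that the client prove possession of its private key during the handshake, and a proxy sitting in the middle simply does not have that key, so it can only tunnel such sessions untouched or break them.

```python
# Sketch only: a TLS server that requires a client certificate. An intercepting
# proxy cannot re-originate this leg of the connection because it cannot forge
# the client's private-key signature. File names are placeholders.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")

# Demand a client certificate issued by this CA; the handshake fails without it.
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.load_verify_locations(cafile="client-ca.pem")
```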
G.Scott H. 48 Posts
Feb 28th 2014
Users may think they don't "need" this RFC, but their network security team does need it. Yes, the users would probably rather not have their web browsing monitored, but tough. It's not always about the user; the enterprise has to defend itself (and its users) as well. Yes, proxies today CAN do MITM monitoring, but not very well, as multiple people here have pointed out. Those problems in the current state are the reasons why HTTPS needs tweaking, so that HTTPS proxying works better.
In the current state, most proxies are mostly just tunneling HTTPS (and most other non-HTTP ports and protocols) through largely without inspection, acting like a port-based firewall from 1990: "If dest port = 8443, then let it all out."
All those Linux apps breaking are exactly the problem that causes us to need an updated HTTPS RFC like this one. Right now HTTPS apps aren't expecting a MITM proxy, so the HTTPS protocol and RFC don't really give a set way to communicate and negotiate issues and exceptions. So each app is left to anticipate and perform its own error handling, which predictably fails to anticipate every possible future network environment and error. Ideally a protocol RFC would help by saying, "in this situation, the proxy shall do X and the client app should do Y." Almost nobody wants HTTPS to be decrypted for every domain; you'll want to set up some exceptions (which is also true of proxying in general: some apps won't work well with a proxy, period). It's possible your proxy admins could be doing a better job of reading the proxy logs for errors, to see what source servers or destinations need exception rules on the proxy to work.
If the enterprise doesn't decrypt and inspect HTTPS, then attackers, malware, malicious insiders, etc. could be stealing and tunneling sensitive internal data out via HTTPS to a legitimate commercial web or email server, and you might never notice.
To answer the last question above: I believe the draft presumes the clients already have root CA certs installed to trust dummy PKI certs generated by the proxy for each site. It doesn't state how those are distributed, nor does it need to. Each enterprise can use whatever tool it uses to distribute software or perform remote client administration: AD GP, login scripts, client OS install images, emailed instructions, an "error" page from the proxy with a link and instructions, MS SCCM, IBM Tivoli, etc.
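As a rough illustration of the kind of check that helps when building those exception lists, here is a small Python sketch (the "Corp Proxy CA" string is a made-up issuer name, not anything from the draft) that pulls the certificate presented for a destination and looks at the issuer to see whether the session is being decrypted and re-signed by the proxy or tunneled through:

```python
# Sketch only: look at who issued the leaf certificate we actually receive for a
# destination. A corporate-CA issuer means the proxy is re-signing the session;
# a public-CA issuer means it is being tunneled through untouched.
import socket
import ssl

def issuer_of(host, port=443):
    """Return the issuer attributes of the certificate seen for host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()      # parsed dict; chain already validated
    return dict(rdn[0] for rdn in cert["issuer"])

issuer = issuer_of("www.example.com")
if "Corp Proxy CA" in issuer.get("commonName", ""):
    print("intercepted: leaf cert was re-signed by the proxy CA")
else:
    print("tunneled or direct; issued by:", issuer.get("organizationName"))
```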
G.Scott H. 8 Posts
Mar 2nd 2014