Encryption

BREACH Compression Attack Steals SSL Secrets

msm1267 writes "A serious attack against ciphertext secrets buried inside HTTPS responses has prompted an advisory from Homeland Security. The BREACH attack is an offshoot of CRIME, which was thought dead and buried after it was disclosed in September. Released at last week's Black Hat USA 2013, BREACH enables an attacker to read encrypted messages over the Web by injecting plaintext into an HTTPS request and measuring compression changes. Researchers Angelo Prado, Neal Harris and Yoel Gluck demonstrated the attack against Outlook Web Access (OWA) at Black Hat. Once the Web application was opened and the BREACH attack launched, the attackers extracted the secret within 30 seconds. 'We are currently unaware of a practical solution to this problem,' said the CERT advisory, released one day after the Black Hat presentation."
  • by icebike ( 68054 ) on Monday August 05, 2013 @07:42PM (#44481915)

    Those guys are giving away all your exploits.

  • Let's see: gotta have a man in the middle, AND the attacker and victim have to be on the same network.
    Piece of cake!

    • by Manfre ( 631065 )

      Let's see: gotta have a man in the middle, AND the attacker and victim have to be on the same network.
      Piece of cake!

      If someone manages to wire into my network, they won't need to bother with this exploit. They'll have a lot more access.

      • by ls671 ( 1122017 )

        Wouldn't just one compromised machine on your wired network fit the bill?

        • by imikem ( 767509 )

          Not on a switched network, unless you get on a specially configured port. It has been a long time since most wired networks had shared access. Wireless, however, is generally a shared medium.

          • by skids ( 119237 )

            It's actually rather rare, even these days, for a switched network to be properly configured to protect against MAC flooding attacks. Actually, it's probably more common in enterprise setups for the WiFi to be more secure than the wired, since WPA-Enterprise is getting pretty common.

          • If you have a managed switch, sure. If you have a cheap OTS switch, then you won't even get a notification that one of the nodes is doing an ARP flooding attack on the switch...
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Yeah, no worries, 'cause the infrastructure providers and their NSA buddies aren't in the middle.

    • by DarkOx ( 621550 )

      I might be missing the obvious, but I don't see the *need* to be on the same network. A couple of nailed-up ARP entries on your next-hop router, plus a nailed-up ARP entry on a separate router with some NATs, all at your ISP, should enable your favorite three-letter agency to do this from the comfort of their Washington offices.

    • Re:Piece of Cake (Score:5, Informative)

      by phantomfive ( 622387 ) on Monday August 05, 2013 @09:48PM (#44482689) Journal
      Here's the list of requirements from CERT. All of these must be true for the attack to work. From this list, a creative person could think of many ways a website could avoid this exploit, but it's harder for the client. (A toy demonstration follows the list.)

      1. HTTPS-enabled endpoint (ideally with stream ciphers like RC4, although the attack can be made to work with adaptive padding for block ciphers).
      2. The attacker must be able to measure the size of HTTPS responses.
      3. Use of HTTP-level compression (e.g. gzip).
      4. A request parameter that is reflected in the response body.
      5. A static secret in the body (e.g. CSRF token, sessionId, VIEWSTATE, PII, etc.) that can be bootstrapped (either the first/last two characters are predictable, and/or the secret is padded with something like KnownSecretVariableName="").
      6. An otherwise static or relatively static response. Dynamic pages do not defeat the attack, but make it much more expensive.
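
      To make requirements 2 through 5 concrete, here is a toy model of the oracle in Python (the token and page text are hypothetical; real responses are much messier):

          import zlib

          SECRET = 'csrf=9f3a7c'  # hypothetical static token in the body (req. 5)

          def response_size(reflected):
              # Reqs. 3 and 4: attacker-chosen text is reflected into the same
              # compressed body as the secret; req. 2: the attacker can observe
              # the resulting length on the wire.
              body = '<html>you searched for "%s" ... %s</html>' % (reflected, SECRET)
              return len(zlib.compress(body.encode()))

          # Recover the token one character at a time: the correct extension lets
          # DEFLATE find a longer back-reference, so the output shrinks slightly.
          known = 'csrf='
          for _ in range(6):
              known += min('0123456789abcdef',
                           key=lambda c: response_size(known + c))
          print(known)  # 'csrf=9f3a7c' in this idealized model

      In practice, Huffman bit-packing ties and dynamic page content force the real attack to pad guesses and average over many requests, which is why the OWA demo still needed about 30 seconds.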

    • by pmontra ( 738736 )
      Any public wifi network will do: at restaurants, conferences, on trains, in airports, etc. Remember Firesheep? Where that worked, this will work too.
  • Perhaps Verisign can offer some form of overpriced "insurance" to make customers feel safer on the Internets. I'm sure it'll be thrown in for "free" with a "SecureSite Ultimate" package, for a mere $1500! GoDaddy will no doubt follow suit.
  • So it sounds like this could be practical on a WiFi network?

  • by mstefanro ( 1965558 ) on Monday August 05, 2013 @08:00PM (#44482017)

    This is quite an ingenious attack, but I am very surprised it has taken people so long to find it, as it is very straightforward and easy to understand conceptually. Makes you wonder "how did I not think of that".

    Although it may seem like the requirements of a successful attack are difficult to achieve, this may not be the case.
    It is usually very easy to inject some plain text into the source code of webpages.

    On Facebook:
    https://www.facebook.com/photo.php/INJECT_WHATEVER_YOU_WANT_HERE/ [facebook.com]
    If you view the source of that URL you can see the text "INJECT_WHATEVER_YOU_WANT_HERE" appears 3 times in the source code.
    By appending to the query string, on YouTube:
    https://www.youtube.com/watch?v=hLkugwOYbFw&INJECT_WHATEVER_YOU_WANT_HERE [youtube.com]
    And on Google:
    https://www.google.com/?INJECT_WHATEVER_YOU_WANT_HERE [google.com]

    That means that an attacker can extract secret information from a lot of the HTTPS pages that you're visiting.

    When I first read about this attack, the first fix that came to mind was to just append /* [random text of random size] */ to all text/html responses. But this may cause trouble: if the random padding is too large, the purpose of compression is defeated. If it is too small, workarounds may be found.
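
    A minimal sketch of that idea (using an HTML comment rather than /* */, since the responses are text/html; the length cap is arbitrary):

        import secrets

        def pad_response(body: bytes) -> bytes:
            # Append a comment of random length so the compressed size no longer
            # maps cleanly onto "guess matched" vs. "guess missed".
            n = secrets.randbelow(128)  # small enough to keep compression worthwhile
            return body + b'<!-- ' + secrets.token_hex(n).encode() + b' -->'

    As noted above, this is a trade-off rather than a fix: an attacker who can average over many samples may filter the random noise back out.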

    Maybe it is time to start thinking of algorithms that perform compression and encryption together, not separately?

    • Who said the Eastern European or Russian or Chinese or N. Korean government hackers had not already been using this?

    • by rtb61 ( 674572 )

      Of course, in a shift to a broadband world, simply drop the compression and just go with the encryption. If you are compressing to save bandwidth and then padding to secure the compression, which costs not only bandwidth but also compute cycles, simply drop the compression and cut out the point of attack.

    • by complete loony ( 663508 ) <Jeremy.Lakeman@g ... .com minus punct> on Monday August 05, 2013 @11:53PM (#44483229)
      So web servers need to disable gzip & deflate compression on any https page that might contain something sensitive? Sounds like an easy enough fix to me; see the sketch below.
      • And cookies would be protected (normally...) as they are included in the header (not compressed) rather than the body.
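
      One hypothetical way to express that fix, as a WSGI wrapper that strips Accept-Encoding for sensitive paths so nothing downstream compresses them (the path prefixes are made up):

          def no_compression(app, prefixes=('/mail', '/account')):
              # If the app never sees Accept-Encoding, a well-behaved framework
              # or server will not compress the response for these paths.
              def wrapped(environ, start_response):
                  if environ.get('PATH_INFO', '').startswith(prefixes):
                      environ.pop('HTTP_ACCEPT_ENCODING', None)
                  return app(environ, start_response)
              return wrapped

      A front-end proxy that compresses on its own would need the same treatment.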
  • FTA: "mitigations include disabling HTTP compression"

    What's the point of HTTP compression anyway? Text is a small part of the bandwidth, and most other stuff (pictures, etc.) is already kept/stored/transferred in highly compressed formats like JPEG. Trying to compress files like that does little or no good. What am I missing here?

    • by mstefanro ( 1965558 ) on Monday August 05, 2013 @08:23PM (#44482129)

      Open the Net panel of Firebug on this page and then refresh it a couple of times. Order the HTTP requests by size. You will see that the HTML of this page takes the vast majority of the bandwidth. Images are simply a "304 Not Modified", whereas the HTML is a "200 OK" of ~41KB at this time.
      So in the case of Slashdot, HTML is the bandwidth bottleneck, not images.

      • Open the Net panel of Firebug on this page and then refresh it a couple of times. Order the HTTP requests by size. You will see that the HTML of this page takes the vast majority of the bandwidth. Images are simply a "304 Not Modified", whereas the HTML is a "200 OK" of ~41KB at this time.
        So in the case of Slashdot, HTML is the bandwidth bottleneck, not images.

        In the case of Slashdot, there is no bandwidth bottleneck. It's the miles of shitty, shitty Javascript that make everything turn to shit.
        Browse the web without Javascript and with an ad blocker. It's like moving from dialup to broadband.

        • If 90% of Slashdot's traffic is used to send HTML, then that is their bottleneck,
          and it's where it is most effective to apply compression (Amdahl's law).

          • by yamum ( 893083 )

            Amdahl's law?

            • Amdahl's law [wikipedia.org]
              • by yamum ( 893083 )

                I know this. How is it related to compression?

                • I believe you missed the key phrase "where it is most effective." The first sentence of the linked article:

                  Amdahl's law, also known as Amdahl's argument,[1] is used to find the maximum expected improvement to an overall system when only part of the system is improved.

                  The reference was to the utility of compression in this case, not the mechanics of it.
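
                  A quick worked example with made-up numbers:

                      # If HTML is 90% of the bytes and gzip shrinks it 4x:
                      p, s = 0.90, 4.0
                      print(1 / ((1 - p) + p / s))  # ~3.1x overall reduction
                      # Optimizing only the other 10% (already-compressed images)
                      # could never beat 1 / 0.9, i.e. about 1.11x.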

            • Comment removed based on user account deletion
              • by mcgrew ( 92797 ) *

                Ran across your comment while metamoderating and modded you down for that shortened URL. Short URLs are not to be trusted; you could send people to goatse or even malware. It's OK in a sig because of space limitations, but completely unacceptable in a comment. If the link's not a disgusting troll, repost the actual URL of whatever you're linking and don't use stupid URL shorteners in comments. I metamod almost daily and always mod short URLs as "troll"; I'm not stupid enough to click something suspicious lik

                • Comment removed based on user account deletion
                  • by mcgrew ( 92797 ) *

                      There's no more reason to than there is to post shortened URLs in the first place. I'm not going through the trouble, and I shouldn't have to.

                    • Comment removed based on user account deletion
                    • by mcgrew ( 92797 ) *

                      How would a short URL defeat the lame filters? That seems backwards; a long URL of lowercase letters might possibly defeat the SHOUTING filter and the "not enough characters per line" filter. You have me puzzled.

        • by Anonymous Coward on Monday August 05, 2013 @09:13PM (#44482493)

          Browse the web without Javascript and with an ad blocker. It's like moving from dialup to broadband.

          While I loathe JavaScript on a professional level, I gotta say: It's time to give up the Lynx browser. There can't be that many interesting Gopher sites left!

          • Browse the web without Javascript and with an ad blocker. It's like moving from dialup to broadband.

            While I loathe JavaScript on a professional level, I gotta say: It's time to give up the Lynx browser. There can't be that many interesting Gopher sites left!

            NoScript is your friend. Allow sane shit, block tracking and advertising horseshit.
            Then slap Greasemonkey on there and run your own scripts to replace ugly/shitty scripts.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      You can start to figure out how to render the page as soon as you have the HTML (and javascript, css etc.). It's on the critical path, as the HTML is what tells you what else to download, like images. Any speed-up in transferring the HTML directly leads to lower latency on loading a webpage. Text compresses very well, so the reduction is significant. The text is much larger than you think for large pages, or even for small pages once you include javascript, css and so on. Even http headers are now being compressed.

    • by Cramer ( 69040 )

      You've not looked at many modern web "applications", have you? The amount of javascript, style sheets, and html markup is ENORMOUS. It's common for sites to save 50-75% of bandwidth by enabling compression (for sites that aren't primarily images, etc.).

  • It should be security 101 that you never send your secrets; you just send proof that you know the secret.
    • This attack could be used by an attacker to figure out your Facebook username, for example. Should Facebook avoid sending your username in pages to you? And sometimes you actually need to tell the client a secret it has to know, like an anti-CSRF token.

      • by dog77 ( 1005249 )
        For a given https connection, each side can prove to the other that they have knowledge of the authentication cookie, without sending their part of that knowledge. There are probably many ways this could be done, and I am not going to pretend I know the best way, but here is one way. Each side sends random challenges as part of the connection establishment. Each side receives the challenge and encrypts it using the public key generated at the time of the authentication cookie establishment. The challeng
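        The parent describes a public-key variant; a simpler HMAC-based sketch of the same challenge-response idea (all names hypothetical) shows why it would blunt BREACH: the proof changes with every challenge, so no static secret ever appears in the response body.

            import hmac, hashlib, secrets

            key = secrets.token_bytes(32)  # established once at login, never resent

            def challenge():
                return secrets.token_bytes(16)  # fresh random challenge per exchange

            def prove(key, chal):
                # Demonstrates knowledge of the key without transmitting it.
                return hmac.new(key, chal, hashlib.sha256).digest()

            def verify(key, chal, proof):
                return hmac.compare_digest(prove(key, chal), proof)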
      • by gl4ss ( 559668 )

        the access token is in every request going to the server... it's the same.

    • If the proof you send that you know the secret doesn't change, that proof becomes the secret.

      • by dog77 ( 1005249 )
        No it does not, because you can't use it again. At best you'd call it a one-time secret.
  • Amused by the notion that one would expect a different outcome with HTTP-layer vs. TLS-layer compression. In every way that matters it is exactly the same issue; only this time the attack analysis is limited to the response body.

    Also, I have some trouble with the assertion that "it is very common to use gzip at the HTTP level." For static assets, sure, but I expect the numbers for dynamic content to be a much different story.

    • by Covener ( 32114 )

      Also, I have some trouble with the assertion that "it is very common to use gzip at the HTTP level." For static assets, sure, but I expect the numbers for dynamic content to be a much different story.

      It's in fact very common for dynamic content.

  • The DEFLATE and gzip formats allow multiple blocks of compressed data, as well as blocks containing literals with no compression. Plus, just because the default implementation always looks for duplicate strings doesn't mean you always have to do so. While it would add a heck of a lot of complexity, it should be possible for a web server to ignore duplicates that occur in sensitive strings and output them in literal blocks, so that they don't affect the frequency data of the rest of the stream. All without requiring any changes to browser implementations. This is far from simple, but could probably be done in a generic way for well-known http headers.
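
    The literal-block idea could also be approximated with zlib's Z_FULL_FLUSH, which resets the match window; a sketch, assuming the application can mark sensitive segments:

        import zlib

        def compress_isolated(segments):
            # segments: iterable of (bytes, is_sensitive) pairs. A full flush
            # before and after each sensitive chunk empties the match window,
            # so attacker-reflected text can never cross-match a secret.
            c = zlib.compressobj(9, zlib.DEFLATED, -15)  # raw DEFLATE
            out = []
            for data, sensitive in segments:
                if sensitive:
                    out.append(c.flush(zlib.Z_FULL_FLUSH))
                out.append(c.compress(data))
                if sensitive:
                    out.append(c.flush(zlib.Z_FULL_FLUSH))
            out.append(c.flush())
            return b''.join(out)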
  • by Anonymous Coward

    Anyone else getting the feeling we're approaching security the wrong way? There will never be an end to these kinds of exploits. Worse yet, they force us to reduce performance in order to gain security.
