Encryption Security Software The Internet News

SSL Holes Found In Critical Non-Browser Software 84

Gunkerty Jeb writes "The death knell for SSL is getting louder. Researchers at the University of Texas at Austin and Stanford University have discovered that poorly designed APIs used in SSL implementations are to blame for vulnerabilities in many critical non-browser software packages. Serious security vulnerabilities were found in programs such as Amazon's EC2 Java library, Amazon's and PayPal's merchant SDKs, Trillian and AIM instant messaging software, popular integrated shopping cart software packages, Chase mobile banking software, and several Android applications and libraries. SSL connections from these programs and many others are vulnerable to a man-in-the-middle attack."
  • This again? (Score:5, Insightful)

    by Anonymous Coward on Thursday October 25, 2012 @04:26PM (#41770303)

    News Flash: People bypass inconvenient security features. Security reduced as a result.

    How does this at all lead to a "death knell" for SSL?

  • by JDG1980 ( 2438906 ) on Thursday October 25, 2012 @04:27PM (#41770311)

    The death knell for SSL is getting louder

    What does this mean? Just that vendors should be using the newer versions of SSL that were rebranded TLS? Or is there another, competing technology that is recommended instead?

    • by Anonymous Coward on Thursday October 25, 2012 @04:34PM (#41770405)

      It means that Gunkerty Jeb and Timothy didn't read TFA and are both fucking stupid.

      Summary: libraries allow you to selectively ignore part or all of the certificate chain verification, including OpenSSL, which is exactly what your fucking browser asks you to do when you visit a site with a self-signed or expired cert. TFA argues that this is the wrong behavior. TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.

      TFA doesn't understand what the layers of security are around Amazon's EC2 toolkit, either.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.

        Your session is not going to be very opaque if there's a man in the middle listening in.

      • by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Thursday October 25, 2012 @05:36PM (#41771127) Homepage

        TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.

        That allows you to have a wonderfully secure conversation with whoever is snooping. Great step forward there!

        It's important that clients verify the identity of the servers to which they connect, but they can do so in many ways. Public HTTPS does it in a particular pattern, but a self-signed certificate also works (provided you've distributed the server's public key to clients in a trusted way first). The problem with self-signed certificates on the public HTTPS web is that there are too many sites for it to be at all practical to acquire all their self-signed public certificates before connecting to any of them; that advantage of the CA system ceases to be very relevant on a closed system such as an intranet, though larger intranets can go for things like a private CA.

        Expired certificates or non-matching host certificates are a demonstration of poor deployment.

        • by Anonymous Coward

          Yup. And a lot of times people just don't give a damn.

          There are other reasons to use weak SSL - most of them rather stupid and created by not-so-bright IT policies. For example: tunneling through stupidly configured proxy servers often requires an SSL connection. It does not matter to anyone involved what certificate is used to establish such a connection; the proxy often simply wants the connection to be encrypted using SSL.

        • by Anonymous Coward

          The public CAs are fundamentally untrustworthy. Your only hope is to do like ssh: keep track of certificates that have been seen, and raise an alert if a site's certificate ever changes. Self-signed isn't worse than China-signed, Belarus-signed, Russia-signed, France-signed, Israel-signed, or any of the other supposedly "trustworthy" public CAs you so love.
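
          (For illustration, a minimal sketch of such an ssh-style trust-on-first-use store, here in Java. TofuStore and its on-disk format are hypothetical, not an existing library; note that a real client would also need a story for deliberate certificate rollovers, which is exactly the objection raised in the replies below.)

              import java.io.InputStream;
              import java.io.OutputStream;
              import java.nio.file.Files;
              import java.nio.file.Path;
              import java.security.MessageDigest;
              import java.security.cert.X509Certificate;
              import java.util.Properties;

              /* Remember each host's certificate fingerprint the first time we see it,
                 and refuse to proceed if it ever changes -- the ssh known_hosts model. */
              final class TofuStore {
                  private final Path db;
                  private final Properties seen = new Properties();

                  TofuStore(Path db) throws Exception {
                      this.db = db;
                      if (Files.exists(db)) {
                          try (InputStream in = Files.newInputStream(db)) { seen.load(in); }
                      }
                  }

                  /* True if the host is new (record and trust) or unchanged; false means alert. */
                  boolean check(String host, X509Certificate cert) throws Exception {
                      String fp = fingerprint(cert);
                      String known = seen.getProperty(host);
                      if (known == null) {                    // first use: record and trust
                          seen.setProperty(host, fp);
                          try (OutputStream out = Files.newOutputStream(db)) { seen.store(out, null); }
                          return true;
                      }
                      return known.equals(fp);                // changed fingerprint => raise an alert
                  }

                  private static String fingerprint(X509Certificate cert) throws Exception {
                      byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
                      StringBuilder sb = new StringBuilder();
                      for (byte b : digest) sb.append(String.format("%02x", b));
                      return sb.toString();
                  }
              }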

          • by Lennie ( 16154 )

            I'm sure that would work really well: NOT

            This policy does not work because one of the first websites people visit is Google, and they change their certificates on a weekly basis (yes, all servers; every week they roll out new certs).

            • by Anonymous Coward

              (yes all servers; every week they roll out new certs).

              Apparently I just hit the one server they forgot about:
              Common Name (CN) www.google.com
              Serial Number 4F:9D:96:D9:66:B0:99:2B:54:C2:95:7C:B4:15:7D:4D
              Issued On 10/26/11
              Expires On 10/1/13

              Unless "new certs" means one that was issued a year ago.

              • by Lennie ( 16154 )

                I don't know; that is what I heard in a presentation. I think it is somewhere online. I'll see if I can find it.

            • Doesn't matter, unless they roll out a new CA cert every week.
              It's the longevity, and security, of the CA cert that matters.

          • by Zibri ( 1063838 )

            I don't really like the CA model either, but your suggestion doesn't seem thought through. SSH asks you to actually verify the key fingerprint of the new host key you are connecting to; this would be quite hard for non-technical users who just want to visit their bank's website. And as others commented, it would also be a PITA with key rollovers.

            No, the real solution, I think, is being developed in the IETF DANE WG: distributing keys through DNS, secured by DNSSEC.

      • TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.

        That's because that's a nonsense idea. If the scheme you use is vulnerable to MITM, sessions using that scheme are not opaque to unintended eavesdroppers. That's what "MITM" means.

      • by vux984 ( 928602 )

        TFA also doesn't understand that sometimes you don't care that much about MITM, just that the traffic is encrypted to make the current session opaque.

        Others have already weighed in, but I have to pile on too. You need to realize this is an absurd position.

        What is the point of an 'opaque session' if you are having it with an unknown party?

        If you are willing to talk to anyone who presents themselves as the endpoint and you don't authenticate them, what does it matter if someone else can't listen in... for al

        • Re: (Score:2, Informative)

          by Anonymous Coward

          Your parent stated it badly. It's not that you aren't worried about Monkey in the Middle. It's that you aren't worried about third party identity verification to avoid MITM. If you self-sign your own certificate, then you know that it's valid. You aren't relying on a third party signer (e.g. VeriSign) to validate it. You are validating it.

          The thing is that for this to work, you need to verify the certificate. If you don't verify the certificate, then you can end up with MITM attacks. There is a mecha

          • by timster ( 32400 )

            MITM in cryptography usually stands for "man in the middle". "Monkey in the middle" is a kid's game where a group stands in a circle and tries to keep the ball away from a single kid designated the "monkey".

      • by Skapare ( 16644 )

        MitM success means opaque fail. If you want opaque, you must prevent MitM. And if the transit has no MitM opportunity, then what's the point of opaque in the first place?

    • by Anonymous Coward on Thursday October 25, 2012 @04:34PM (#41770413)

      It means that this "post" is really clickbait. And now we know why no one RTFA.

      • by wmbetts ( 1306001 ) on Thursday October 25, 2012 @04:48PM (#41770583)

        Yes, it is and it's bs that libcurl got caught in the middle. By default libcurl is secure.


      • It means that this "post" is really clickbait. And now we know why no one RTFA.

        Yes - please, nobody make any more topical comments about the "death knell" phrase or you're just going to encourage this kind of submission whoring and editors who play along. They'd love to see a long thread debating the merits of whether or not TLS is about to go extinct, and there will be trolls to fuel such an absurd thread if you allow it.

        They'll have to make do with a small number of page views on the meta bitching about

    • Unfortunately they can't leave TLS because of IE 6 and maybe IE 7(?) support. These apps use HTML from IE for functionality and need to support these older browsers for corps and people who refuse to upgrade to a modern browser.

      Another reason to also get rid of XP as you can't upgrade someone's IE from your own setup.exe program.

      Arstechnica.com had an article (an older one; I can't find it to link) which showed TLS would be ineffective by 2016 as computers become faster and through collisions will be able to hack t

  • Wouldn't a Death Krull be cooler?
  • I thought SSL in general was susceptible to man-in-the-middle attacks, so ANY app that uses it would be too. That doesn't mean a death knell. It means something like Domain Keys needs to be used to make these even more secure.
    • by Bill, Shooter of Bul ( 629286 ) on Thursday October 25, 2012 @04:38PM (#41770451) Journal

      As long as you are using a legit SSL cert (available for less than $10 annually) with decent cipher strength (again, available for less than $10 annually), man in the middle should be impossible with TLS/SSL and proper use of it by the client (don't connect and send sensitive data if the SSL cert isn't valid or the signer isn't trusted).

      • by Artraze ( 600366 ) on Thursday October 25, 2012 @05:10PM (#41770839)

        There's not really any such thing as a "legit" certificate; you're referring to a signed one. This does nothing to protect against a man-in-the-middle attack. What it does do is establish a chain of trust linking your certificate back to an authority. If that authority is trusted, then your cert can be too (to the extent you trust the authority). If, and that's a big if, we trust that _all_ trusted authorities will thoroughly vet the certificates they sign, then we can _trust_ that a MITM attack cannot occur; but realistically, "legit" certificates do nothing more than that. If, say, the US DoD (once/often? a trusted authority) decides to MITM you, they can just sign a cert and MITM you.

        The only way to actually prevent MITM is to exchange the certificate (or some verification mechanism like a hash) in some sort of trusted manner (e.g. distributing its hash with a client app).
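
        (For illustration, a minimal Java sketch of that hash-distribution idea. HashPin and PINNED are hypothetical names; the hash bytes would be baked into the client at build time, so no CA is consulted at all.)

            import java.security.MessageDigest;
            import java.util.Arrays;
            import javax.net.ssl.SSLSocket;

            /* After the TLS handshake, compare the peer certificate's SHA-256 digest
               against the one value shipped with the app. */
            final class HashPin {
                static final byte[] PINNED = { /* ...bytes distributed with the client... */ };

                static void verify(SSLSocket socket) throws Exception {
                    byte[] seen = MessageDigest.getInstance("SHA-256")
                            .digest(socket.getSession().getPeerCertificates()[0].getEncoded());
                    if (!Arrays.equals(seen, PINNED))
                        throw new SecurityException("peer certificate does not match pinned hash");
                }
            }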

      • You can get them for free from StartSSL, and most browsers/vendors will trust them.

    • by dgatwood ( 11270 ) on Thursday October 25, 2012 @05:05PM (#41770769) Homepage Journal

      The current versions of SSL/TLS are never vulnerable to man-in-the-middle attacks unless a trusted certificate authority is compromised (as long as both client and server implement RFC 5746). Whether the certificate authorities are trustworthy is another question, of course.

      This particular problem is caused by folks disabling the SSL stack's built-in chain validation and then not implementing their own. As far as I know, there are exactly two correct ways to support self-signed keys in Android: provide your own trust store that includes trust for that specific self-signed key or subclass the X509 validation class to add that specific self-signed key as an additional trusted anchor into the list of trusted anchors that it returns. Unfortunately, there's a lot of very bad advice out there, particularly on sites like Stack Overflow, telling folks to disable chain validation entirely. The result is that not only does the app trust that self-signed key, it also trusts any self-signed key.

      It doesn't help that there's no canonical source for that information from Google, so there are many, many questions on sites like Stack Overflow that all ask the same basic question in different ways and get different answers....
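
      (For illustration, a minimal sketch of the first approach in Java/Android terms: an SSLContext backed by a trust store that contains only the one self-signed certificate. The bundled "server.crt" input stream is an assumption; the calls themselves are standard JSSE.)

          import java.io.InputStream;
          import java.security.KeyStore;
          import java.security.cert.CertificateFactory;
          import javax.net.ssl.SSLContext;
          import javax.net.ssl.TrustManagerFactory;

          final class PinnedTls {
              /* Build an SSLContext that trusts exactly one certificate -- the app's
                 self-signed server cert -- instead of disabling validation entirely. */
              static SSLContext forCertificate(InputStream serverCrt) throws Exception {
                  CertificateFactory cf = CertificateFactory.getInstance("X.509");

                  KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
                  ks.load(null, null);                       // empty, in-memory trust store
                  ks.setCertificateEntry("server", cf.generateCertificate(serverCrt));

                  TrustManagerFactory tmf =
                      TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
                  tmf.init(ks);                              // trust only what's in ks

                  SSLContext ctx = SSLContext.getInstance("TLS");
                  ctx.init(null, tmf.getTrustManagers(), null);
                  return ctx;                                // ctx.getSocketFactory() for HTTPS
              }
          }

      Connections made through that context trust the one key and nothing else, which is precisely the behavior the "just disable chain validation" advice destroys.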

      Patient: Doctor, when I drill a hole in my hand, I can't scoop up water from the bucket to drink.
      Doctor: Why did you drill a hole in your hand?
      Patient: So that the acid wouldn't stay in my hand.
      Doctor (alarmed): Why did you put acid in your hand?
      Patient: Because the bucket dealer wanted too much money for a bucket.

      Yeah, it's like that.

      • by Luthair ( 847766 )
        StackOverflow is really the scourge of the Internet when you're searching for anything programming related, too many unrelated or unanswered pages appear in query results.
  • Protocol (Score:5, Insightful)

    by Synerg1y ( 2169962 ) on Thursday October 25, 2012 @04:32PM (#41770371)

    How is the wrong implementation of a protocol in a framework library a fault of the protocol?

    Either devs need to be aware that there are extra steps to validation when using an SSL library in their framework of choice, or the framework needs to be patched appropriately. But based on the concepts the article provides, it sounds like bad implementation, aka crap code, and not enough QC. Some OOP would help make the implementation easier, though...

  • by gweihir ( 88907 ) on Thursday October 25, 2012 @04:32PM (#41770373)

    This is a problem of bad APIs and of people not competent enough to select libraries with better ones. The same would happen with any other encryption protocol. Implementing and using cryptography is hard, in particular because testing will usually not show that anything is wrong, and testing is still the only thing most software "developers" have in their bag of tools to ensure correctness. As long as people without a clue decide they can implement cryptographic libraries or use them, these things will continue to happen.

    • by Dast ( 10275 ) on Thursday October 25, 2012 @04:50PM (#41770607)

      This is a problem of bad APIs and of people not competent enough to select libraries with better ones.

      While that might sound true, I think the problem is deeper than that. The issue in a lot of cases is developers having to deal with non-ideal SSL/TLS setups that they have no control over.

      It usually goes like this:

      Dev monkey gets told by PHB: "We need to make our communications secure, so implement SSL." Dev monkey adds SSL support to the app. Code seems to work. Testing (or even worse, someone in Production) comes back and says that dev monkey's SSL code doesn't work with Customer XYZ's server. Dev monkey tests things himself and finds that Customer XYZ is using a self-signed cert or an expired cert. Dev monkey tells PHB that Customer XYZ needs to fix their setup. PHB tells dev monkey that the setup cannot be changed because of ABC and that dev monkey needs to "code around the issue". Dev monkey updates app to not choke on bad certs. Code gets released, and Customer XYZ's remote worker gets p0wned by a man-in-the-middle attack. Customer XYZ blames PHB, PHB blames dev monkey. Dev monkey sighs and gets another Mountain Dew.

      • Dev monkey updates app to not choke on bad certs [...] PHB blames dev monkey

        The PHB got the blame exactly right. The Dev Monkey proved he didn't understand SSL as soon as he did a blanket "trust everything." Dev Monkey screwed the pooch.

          • In the real world, dev monkey doesn't get to do what dev monkey wants. A good step would be to ask management what to do, since the customer won't change the setup; that way monkey covers monkey's ass, and if the customer gets hacked, management can deal with it on a my-guy-told-you-so basis.

          • by Dast ( 10275 )

            Exactly. In the real world, dev monkey doesn't get to make the decisions. If dev monkey doesn't code around the problem, PHB finds a different code monkey to make the change. Not everyone gets to work for themselves or for a small startup where they can make their own decisions.

            • To be honest...

              If I was consulting for a company and I clearly outlined the risk of allowing self-signed certs and they said "we'll take it," I'd make sure I had something in writing... like an email. And I'd do it for them anyway; if they get MITM'd later, I'd refer them to the email. You can't always stop people from taking shortcuts, but making them aware of the risks is more than most people probably do.

          • Dev monkey proved they didn't understand how to fix the problem.

            The last time I came across this in the real world, I was writing a .NET application. The .NET framework offers a delegate [microsoft.com] to handle exactly this situation. I assume Java, and any C API worth its salt, offers similar functionality. These functions are intended to extend the trust chain validation. You can analyze the chain and verify that your certificate (and only your certificate) caused the error, and that the error was within p
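
            (For illustration, a rough Java analogue of that delegate -- a sketch, not the poster's actual code: delegate to the platform's default chain validation, and accept one known certificate only when the default validation rejects it. A robust version would search getTrustManagers() for the X509TrustManager instead of assuming index 0.)

                import java.security.KeyStore;
                import java.security.cert.CertificateException;
                import java.security.cert.X509Certificate;
                import javax.net.ssl.TrustManagerFactory;
                import javax.net.ssl.X509TrustManager;

                final class ExtendedTrustManager implements X509TrustManager {
                    private final X509TrustManager dflt;
                    private final X509Certificate pinned;   // the one cert allowed to fail validation

                    ExtendedTrustManager(X509Certificate pinned) throws Exception {
                        TrustManagerFactory tmf = TrustManagerFactory
                                .getInstance(TrustManagerFactory.getDefaultAlgorithm());
                        tmf.init((KeyStore) null);          // null = platform default trust store
                        this.dflt = (X509TrustManager) tmf.getTrustManagers()[0];
                        this.pinned = pinned;
                    }

                    @Override
                    public void checkServerTrusted(X509Certificate[] chain, String authType)
                            throws CertificateException {
                        try {
                            dflt.checkServerTrusted(chain, authType);  // normal validation first
                        } catch (CertificateException e) {
                            if (!chain[0].equals(pinned))              // only our known cert may fail it
                                throw e;
                        }
                    }

                    @Override
                    public void checkClientTrusted(X509Certificate[] chain, String authType)
                            throws CertificateException {
                        dflt.checkClientTrusted(chain, authType);
                    }

                    @Override
                    public X509Certificate[] getAcceptedIssuers() {
                        return dflt.getAcceptedIssuers();
                    }
                }
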
      • by Anonymous Coward

        Your dev monkey is fucking incompetent and should be slapped in the face with a smoked mackerel.

        A reasonable developer would just add XYZ's cert to the list of trusted certificates manually. If you think about it a little, it doesn't matter who tells you that a purported XYZ certificate is indeed XYZ's. It could be one of the trusted CAs selected for you by Microsoft, Mozilla or Google. Or it could be the holy trinity of XYZ's CEO, CTO and CFO, materializing in your office in person. It could even be your PH

        • by sapgau ( 413511 )

          Sure, but good luck updating XYZ's cert in the certificate store for hundreds and hundreds of clients.

          If you try to document it so that the user can update it themselves, there is a 50-50 chance they will screw up their certificate store, and then not even their banking, Gmail or shopping would work.

      • by gweihir ( 88907 )

        While this does not seem to be the issue in the OP, this is definitely a realistic scenario. Maybe I should have said that the implementation on both sides of the tunnel needs to be competently done.

  • by wmbetts ( 1306001 ) on Thursday October 25, 2012 @04:40PM (#41770483)

    The complaint about libcurl is baseless. It's said VERY CLEAR in the documentation how to use the feature. If stupid devs can't figure it out, that's hardly the fault of a library developer. I've never had an issue with it and I've used it in C, C++, and PHP.

    To repeat what I said on the mailing list: if I break my thumb with a hammer, do I blame the hammer or do I blame myself?

    As Yehezkel Horowitz pointed out on the mailing list.

    This is the quote from the FAQ:
    > Q: How do I use cURL securely?
    > A: CURLOPT_SSL_VERIFYPEER must be set to TRUE, CURLOPT_SSL_VERIFYHOST must be left to its default value or set to 2. Anything else, such as setting CURLOPT_SSL_VERIFYHOST to TRUE, will result in the SSL connection being insecure against a man-in-the-middle attacker.

    The real answer should be - cURL defaults are secure - no need for any code to use it securely.
    ==================
    In general I think the very short answer for this publication should be RTFM.

    The little bit longer answer would be -
    1. cURL is a C code library - you can't set a value to TRUE since this is not in the language syntax.
    So you have somewhere in your includes something like "#define TRUE 1" - you must be aware of this issue - this is an important part of the relations between computers/compilers/programmers.

    2. Before setting any option to cURL - you should read the very clear documentation about this option.
    ==================
    As to what we can do to make cURL even better (in order to protect unprofessional users who don't know what they are doing), we could make '1' act as '2' (verify peer identity), and add a special magic value (e.g. 27934) that will act as today's '1' (check for CN existence but don't verify it).

    I think they owe everyone at libcurl an apology.

    • Just to clarify, Yehezkel didn't say they owed everyone at libcurl an apology. I did.

    • by Anonymous Coward on Thursday October 25, 2012 @05:43PM (#41771183)

      This is the quote from the FAQ:
      > Q: How do I use cURL securely?
      > A: CURLOPT_SSL_VERIFYPEER must be set to TRUE, CURLOPT_SSL_VERIFYHOST must be left to its default value or set to 2. Anything else, such as setting CURLOPT_SSL_VERIFYHOST to TRUE, will result in the SSL connection being insecure against a man-in-the-middle attacker.

      1. cURL is a C code library - you can't set a value to TRUE since this is not in the language syntax.
      So you have somewhere in your includes something like "#define TRUE 1" - you must be aware of this issue - this is an important part of the relations between computers/compilers/programmers.

      It is good that the default is secure, but this is bad API design. There are at least two ways it can be improved:

      1) The name "CURLOPT_SSL_VERIFYHOST" implies a boolean value, so "set(CURLOPT_SSL_VERIFYHOST, TRUE)" looks like reasonable code after a quick glance. Since the option is a multiple choice option, not a boolean, it should be named something like "CURLOPT_SSL_VERIFYHOST_MODE".

      2) C has had enums since forever. The values "1" and "2" are opaque magic numbers, and flags that are this important should be set with well-named enums, not with magic numbers. Further, if the API setter function was typed with the appropriate enums, the compiler would have complained when it saw "set(CURLOPT_SSL_VERIFYHOST, TRUE)".

      Yes, the application devs used libcurl incorrectly, and yes, the above criticisms are nitpicks, but a library this important should be designed very defensively to minimize the chance that users will make dumb mistakes.
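
      (For illustration, a hypothetical sketch of such an enum-typed setter -- not libcurl's actual API. Note that plain C would still allow the implicit int-to-enum conversion, so the compile error below really requires a strongly typed language; here in Java:)

          /* Hypothetical redesign: the magic numbers become named, typed constants. */
          enum VerifyHost {
              NONE,             // no hostname check at all
              EXISTENCE_ONLY,   // today's '1': check a Common Name exists, don't match it
              FULL_MATCH        // today's '2': require the hostname to match the certificate
          }

          final class TlsOptions {
              private VerifyHost verifyHost = VerifyHost.FULL_MATCH;   // secure default

              void setVerifyHost(VerifyHost mode) { this.verifyHost = mode; }
              // setVerifyHost(true);   // does not compile: boolean is not a VerifyHost
          }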

      • by sapgau ( 413511 )

        Oh please please mod parent up!!!
        +1

        That API design is retarded.

      • While those are valid points and should be corrected, I still say it's the fault of the developers and not libcurl. Every option is documented well and the documentation is easy to find.

    • by fnj ( 64210 )

      If I break my thumb with a hammer, do I blame the hammer or do I blame myself?

      Sometimes, and sometimes, respectively. If the head rotates 90 degrees on the handle, or flies off the handle, on a low-mileage hammer that hasn't undergone any abuse, and the thumb was hit as a result of that fault, then you blame the manufacturer of the hammer and perhaps sue them. They almost certainly have insurance for this kind of thing.

      Otherwise you're well advised to blame yourself.

        • Touché. I was using that phrase in the context of libcurl, though. In this case none of those things happened ;).

    • by fatphil ( 181876 )
      > It's said VERY CLEAR in the documentation how to use the feature.

      And, following the rules of English grammar, it's VERY CLEAR that one modifies a verb with an adverb, not an adjective.

      But such a mistake isn't important, surely? I mean, we understood what you meant, so the communication worked, didn't it?
      • Forgive me for not proofreading a post on the Internet. Hopefully, the world won't come to an end, but if it does I hope grammar nazis die first.

        • by fatphil ( 181876 )
          As I predicted, my post would be a wooooosh. Now go back and think about what I posted, and how it was relevant.
    • by sapgau ( 413511 )

      Oh my god, a parameter that takes either '2' or TRUE???
      What the hell is that?

      I will stick with my strongly typed languages, thank you very much.

  • by ClayDowling ( 629804 ) on Thursday October 25, 2012 @04:53PM (#41770647) Homepage

    While libraries like cURL have excellent documentation, other libraries such as OpenSSL have terrible documentation. Assuming that the cURL developers understood how to use OpenSSL correctly, it's quite simple for me to use their library to establish a secure connection.

    What's harder is figuring out how to do it with OpenSSL. There is no obvious starting point for opening a secure connection that you can glean from reading the man pages. There are books you can buy on the subject, but that doesn't excuse the library authors from writing easy-to-understand documentation. The library itself is quite elegant: with just a few steps you have a secure connection that you can read and write just as if it were any other network connection (or, for that matter, a file on disk). But figuring out how to correctly set up and tear down a connection using that library isn't well documented at all.

    • By default libcurl is secure. It's only insecure if you mess with an option. Personally, I'm glad that option is there.

      This is the quote from the FAQ:

      > Q: How do I use cURL securely?
      > A: CURLOPT_SSL_VERIFYPEER must be set to TRUE, CURLOPT_SSL_VERIFYHOST must be left to its default value or set to 2. Anything else, such as setting CURLOPT_SSL_VERIFYHOST to TRUE, will result in the SSL connection being insecure against a man-in-the-middle attacker.
