
Finding More Than One Worm In the Apple

timothy posted about 7 months ago | from the looking-deeper dept.


davecb (6526) writes "At Guido van Rossum's urging, Mike Bland has a look at detecting and fixing the "goto fail" bug at ACM Queue. He finds the same underlying problem in both the Apple and Heartbleed bugs, and explains how to avoid suffering it again." An excerpt: "WHY DIDN'T A TEST CATCH IT? Several articles have attempted to explain why the Apple SSL vulnerability made it past whatever tests, tools, and processes Apple may have had in place, but these explanations are not sound, especially given the above demonstration to the contrary in working code. The ultimate responsibility for the failure to detect this vulnerability prior to release lies not with any individual programmer but with the culture in which the code was produced. Let's review a sample of the most prominent explanations and specify why they fall short. Adam Langley's oft-quoted blog post discusses the exact technical ramifications of the bug but pulls back on asserting that automated testing would have caught it: "A test case could have caught this, but it's difficult because it's so deep into the handshake. One needs to write a completely separate TLS stack, with lots of options for sending invalid handshakes.""


From whence the headline? (4, Insightful)

SuperKendall (25149) | about 7 months ago | (#47018369)

The "Apple" had only one bug, the Goto Fail bug - since Apple did not use OpenSSL they never had the second bug.

So why is the headline painting Apple as the source of both bugs?

Re:From whence the headline? (4, Insightful)

Anonymous Coward | about 7 months ago | (#47018621)

The "Apple" had only one bug, the Goto Fail bug - since Apple did not use OpenSSL they never had the second bug.

So why is the headline painting Apple as the source of both bugs?

Dude.. chill, it is an actual apple, as in a fruit -- it is a saying. I didn't read the headline your way at all.

Re:From whence the headline? (-1, Offtopic)

SuperKendall (25149) | about 7 months ago | (#47018745)

I didn't read the headline your way at all.

Fine, but the tie to Apple is obviously the intent. Note the headline does not come AT ALL from the original source article, so why phrase it that way if not to tie back to the computer company?

If it's no big deal then why does my response bother you enough to post a response?

Re:From whence the headline? (2, Informative)

Anonymous Coward | about 7 months ago | (#47018937)

It's exactly the original title of the article which is:

"acmqueue - Finding More Than One Worm in the Apple"

Re:From whence the headline? (0, Insightful)

Anonymous Coward | about 7 months ago | (#47019143)

This is why people hate Apple users.

Re:From whence the headline? (1)

93 Escort Wagon (326346) | about 7 months ago | (#47020983)

Dude.. chill, it is an actual apple, as in a fruit -- it is a saying. I didn't read the headline your way at all.

Actually, you are wrong. If you read the article, you'll see its main focus is on the "goto fail" bug and what the author perceives as the development shortcomings that allowed it to happen in the first place. The focus is pretty Apple-centric, mainly because he's using the "goto fail" bug as the primary evidence to support his central tenet. However I did not get the impression the author was anti-Apple.

Heartbleed is only mentioned as an afterthought because (as the article mentions) it became public knowledge some time after the author wrote the first draft of the article.

The "finding more than one worm" phrase doesn't appear to refer to Heartbleed at all - it's about (in the author's opinion) changing practices so more bugs can be caught and/or prevented.

Re:From whence the headline? (0)

Belial6 (794905) | about 7 months ago | (#47021675)

It is a saying, but only a very poor author would use it in this context. When discussing a problem with a company called 'Apple', a saying in which an "apple" is the thing containing the problem would only be used by the most incompetent of writers, unless they were deliberately making a play on words.

Re:From whence the headline? (3, Insightful)

jeffmeden (135043) | about 7 months ago | (#47018645)

They are comparing the test methods that might have caught the Apple SSL "goto fail" bug vs the Heartbleed OpenSSL bug (which was unchecked memory access). How do we know there isn't another SSL bug in either? That's right, we don't. And we won't until testing (automated or otherwise) gets better in both places. Ideally we would have a way to find (and fix) lots of worms in both.

Re:From whence the headline? (1)

jythie (914043) | about 7 months ago | (#47018927)

Though even with automated testing, we still do not 'know'. Striving for perfection is something worth doing, but believing perfection is possible is not.

Re:From whence the headline? (1)

DarwinSurvivor (1752106) | about 7 months ago | (#47020121)

We may never know that there isn't, but we very well could know that there is.

Re:From whence the headline? (3, Interesting)

radtea (464814) | about 7 months ago | (#47018981)

And we won't until testing (automated or otherwise) gets better in both places.

I'm skeptical of testing (automated or otherwise), and I think the point in TFS is well-taken: testing that would have caught this bug would have involved creating tests that virtually duplicated the system under test.

While some code is amenable to test-driven development and thorough testing, and that should be done wherever possible, the resources required to test some code effectively double the total effort required, and maintaining the tests becomes a huge headache. I've worked in heavily-tested environments and spent a significant fraction of my time "fixing" tests that weren't flagging real defects, but had simply become out-of-date or inappropriate due to changes in interfaces and design.

That's not to say that testing can't be done better, but it's clearly a hard problem, and I've yet to see it done well for the kind of code I've worked on over the past 20 years (mostly algorithmic stuff, where the "right" answer is often only properly computable by the algorithm that is supposed to be under test, although there are constraints on correct solutions that can be applied.)

So I'm arguing that a culture of professionalism, one that implements best practices such as coding standards and (possibly automated) code reviews that check for simple things like open if statements and unchecked memory access, would be lower cost and at least as effective as heavier-weight testing.

This is a static-analysis vs dynamic-analysis argument, and while I certainly agree that dynamic analysis is necessary, both these bugs would have been caught with fairly simple-minded static analyzers checking against well-known coding standards from a decade ago.

Re:From whence the headline? (2)

rasmusbr (2186518) | about 7 months ago | (#47019171)

Okay, but in this case the bug had little to do with the algorithm. The bug was triggered unconditionally for every input.

Re:From whence the headline? (3, Informative)

serviscope_minor (664417) | about 7 months ago | (#47019335)

both these bugs would have been caught with fairly simple-minded static analyzers checking against well-known coding standards from a decade ago.

Except they wouldn't. Coverity outright stated that their static analyzer would not have caught the Heartbleed bug.

Re:From whence the headline? (2)

Greyfox (87712) | about 7 months ago | (#47019409)

It's not so much that -- if your coupling is loose enough, you should be able to test the API of any component in your system. But you have to come up with that test. Programmers often have blind spots for things where "no one would ever do that." It might be OK for programmers to come up with basic tests that exercise the API and make sure it's at least marginally functioning as designed, but you also really need to throw some guys at the code who just like to break things. Paid software houses don't even do that very often, much less open source projects.

You also never see software auditing anymore. Everyone says "Oh you don't need that anymore now that we have Fortify," but fortify didn't catch this bug, did it? I did some auditing for Data General back in the '90's and found the buffer overflow in the AT&T telnet server 2 years before the same overflow was found on the Linux one. Fortify might have actually caught that one, since it was a fixed length buffer accepting user input, if anyone had ever thought to run Fortify against that program.

Re:From whence the headline? (1)

eulernet (1132389) | about 7 months ago | (#47019883)

I totally agree with you: testing is not a panacea.

Another good practice is adding bounds checking when building in debug mode (for example in "assert" statements).
When you develop your code, you use the bounds-checked version, and when in production, the checks are removed.
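
Roughly, that pattern looks like the sketch below. To be clear about assumptions: copy_payload and its arguments are hypothetical illustration only, not OpenSSL code, and this variant keeps a cheap runtime check in release builds as well. With NDEBUG defined (a typical release build) the assert compiles away:

#include <assert.h>
#include <string.h>

/* Hypothetical helper: copy a heartbeat-style payload with a debug-only
   bounds check. The assert aborts loudly in a debug build and disappears
   when NDEBUG is defined, leaving only the cheap runtime check. */
static int copy_payload(unsigned char *dst, size_t dst_len,
                        const unsigned char *src, size_t claimed_len)
{
        assert(claimed_len <= dst_len);   /* debug build: fail loudly */
        if (claimed_len > dst_len)        /* release build: fail safely */
                return -1;
        memcpy(dst, src, claimed_len);
        return 0;
}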

Also, you seem to forget that OpenSSL is full of legacy code, and legacy code is not two times harder to unit-test but a hundred times harder!
You need to cut the code into little pieces, but the goal of OpenSSL was to write the fastest possible code.

So there is a culture problem to fix before you even get to writing tests!

Re: From whence the headline? (1)

Anonymous Coward | about 7 months ago | (#47020097)

Except the TFS is about how a simple unit test could have found it. The quote from Langley is being rebutted, not reinforced.

Re:From whence the headline? (2, Interesting)

Anonymous Coward | about 7 months ago | (#47020319)

I seem to remember seeing an article on the NASA coding practice, and they do exactly what the summary suggests: every important feature is implemented twice, with two different algorithms, and they are tested against each other to ensure they produce the same result. They also do formal code reviews of every check-in (no matter how minor), and any bug found is treated as a process problem (i.e. how can we fix the process that allowed this bug in), rather than just a software problem.

As a result they produce code which is as close to perfect as anyone has ever come, and costs about 10x the industry average to develop.

Simple Unit Test to catch Apple's bug. (1)

Decameron81 (628548) | about 7 months ago | (#47020827)

At least Apple's bug could've been caught with basic unit-testing. This is the snippet of code from Apple's bug:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
        OSStatus err;
        ...

        if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
                goto fail;
        if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
                goto fail;
                goto fail;
        if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
                goto fail;
        ...

fail:
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;
}

Just implement a unit test with the following logic:

1. When SSLHashSHA1.update() is called, DO NOT return an error.
2. Expect 2 calls to SSLHashSHA1.update() and check the input parameter on each call.
3. Expect 1 call to SSLHashSHA1.final() and check the input parameters are what you'd expect.

That simple unit test would've caught this issue without any need of duplicating code.
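
A rough, self-contained sketch of that test follows. To be clear about assumptions: hash_ops, fake_update, fake_final and verify_exchange are hypothetical stand-ins that only mirror the shape of Apple's function, not the real SecureTransport API.

#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-ins for illustration only; not Apple's actual API. */
typedef struct {
        int (*update)(void *ctx, const void *data);
        int (*final)(void *ctx, void *out);
} hash_ops;

static int update_calls, final_calls;

static int fake_update(void *ctx, const void *data)
{
        (void)ctx; (void)data;
        update_calls++;
        return 0;                         /* 1. never return an error */
}

static int fake_final(void *ctx, void *out)
{
        (void)ctx; (void)out;
        final_calls++;
        return 0;
}

/* Same control flow as the buggy function, including the duplicated goto. */
static int verify_exchange(const hash_ops *h)
{
        int err;
        void *hashCtx = 0, *serverRandom = 0, *signedParams = 0, *hashOut = 0;

        if ((err = h->update(hashCtx, serverRandom)) != 0)
                goto fail;
        if ((err = h->update(hashCtx, signedParams)) != 0)
                goto fail;
                goto fail;                /* the bug: unconditional */
        if ((err = h->final(hashCtx, hashOut)) != 0)
                goto fail;
fail:
        return err;
}

int main(void)
{
        hash_ops fake = { fake_update, fake_final };
        int err = verify_exchange(&fake);

        assert(err == 0);                 /* passes: err is 0 from update() */
        assert(update_calls == 2);        /* 2. passes */
        assert(final_calls == 1);         /* 3. FAILS: final() never reached */
        puts("all expectations met");
        return 0;
}

Compiled and run, the third assertion fires because the duplicated goto jumps past final(), which is exactly the failure the bug causes in the real handshake.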

Re:Simple Unit Test to catch Apple's bug. (1)

Decameron81 (628548) | about 7 months ago | (#47020857)

PS: this is pretty obvious while unit-testing but I'll make it clear to avoid any confusion... the real implementation of SSLHashSHA1.update() and SSLHashSHA1.final() would not be called in this unit test, as that'd be outside of the scope of it.

And how deep can we test? (1)

gwolf (26339) | about 7 months ago | (#47019621)

Of course, it's obvious today that a test for behavior on inconsistent requests should have been written for OpenSSL, just as a test for each failure cause should have been written by Apple. And next week, when an off-by-one bug bites us on an integer overflow in libfoobar, people will say testing for that condition should have been trivial.

So, yes, some conditions can be found with fuzzers. Of course, fuzzers work in an erratic way, and not all bugs can be triggered by them. But maybe fuzzing our code (more importantly, our security-sensitive code) will yield better results than preparing tests only for the failure modes we are already aware of.

Then again, properly fuzzing takes quite a bit of time. It is way less fun to watch a fuzzer than to see tests making green check marks...

Re:From whence the headline? (2)

phantomfive (622387) | about 7 months ago | (#47019167)

I don't know about the headline, but the other day I ran into a bug on OSX (some commandline tools developed problems with UTF-8). "No problem," I thought, "I'll just report it." I had to create an account to report a bug, which was annoying, but then when I got to the bug reporting website, I found this error message [picpaste.com] . "LOL" I thought, "but ok, I'll email them." I told them their bug website was having trouble, and they emailed back and said, "please report that through our bug reporting tool."

Re:From whence the headline? (0)

Anonymous Coward | about 7 months ago | (#47019211)

You do get the difference between Apple and an/the apple, right?

Re:From whence the headline? (1)

Cinder6 (894572) | about 7 months ago | (#47019433)

It's an attempt to get more views, I think. I know I clicked the link when I saw it in my RSS feed because I thought, "Holy crap, they found another glaring security hole in Apple products?" Then it turned out to be somebody analyzing others' analyses.

Tests can never catch these bugs (3, Insightful)

Anonymous Coward | about 7 months ago | (#47018395)

For the same reason new viruses will always defeat anti-virus software: Each virus is tested against existing anti-virus programs and only released into the wild when it has defeated all of them.

Re:Tests can never catch these bugs (1)

Gunboat_Diplomat (3390511) | about 7 months ago | (#47018519)

For the same reason new viruses will always defeat anti-virus software: Each virus is tested against existing anti-virus programs and only released into the wild when it has defeated all of them.

Not all of them. Malware writers go for the biggest targets for the least effort, and often just test against (and even actively counter) the best-known/most-used free AV programs (Microsoft, Avast, AVG, etc.), since a large share of users run free solutions, plus the major commercial ones (Symantec, Trend, Kaspersky, F-Secure, McAfee, etc.). It is actually a good bet that if you go with a lesser-known commercial AV product you gain a significant protection advantage.

Re:Tests can never catch these bugs (0)

Anonymous Coward | about 7 months ago | (#47018717)

Lowest hanging fruit is the easiest.

Everyone knows this, this is why there is progressively less return on investments the higher the fruit is.

Like in 1995, it was easy to piggyback any malware you could dream of on pirated software because the average pirate also pirated their antivirus software. Since free Antivirus software is now available, and has been for quite a while, there is no reason to use commercial antivirus software, and thus the risk of malware goes down significantly.

The other thing that has changed since 1995 is NAT. In 1995 if you wanted to connect two computers to the same internet connection you had to run proxy software, and the machines on the inside of the network could never do anything without the proxy. Today proxies are discouraged because they get used as a malware vector.

Since the easiest fruit has been picked, malware has to keep getting harder. The current trend is to actually replace the C run time on the operating system so that the malware is completely invisible to everything, including the antivirus software. This is how hackers, bot-writers and cheaters still manage to ruin MMOFPS and MMORPG games. The anti-cheating software often isn't anti-virus/anti-malware in nature, so it isn't looking for a compromised CRT, and the anti-virus software doesn't care about preventing cheating/hacking in video games, so it's not looking for the malware being activated by the MMORPG.

Re:Tests can never catch these bugs (1)

Kimomaru (2579489) | about 7 months ago | (#47018643)

It's a shame that people put so much faith in AV programs given their poor effectiveness (http://arstechnica.com/security/2014/05/antivurus-pioneer-symantec-declares-av-dead-and-doomed-to-failure/). I think author George R.R. Martin has it right (https://www.youtube.com/watch?v=X5REM-3nWHg): keep separate machines for different purposes, one for serious work and one for messing around with. It doesn't feel like a good idea to use one machine for everything.

Re:Tests can never catch these bugs (1)

Gunboat_Diplomat (3390511) | about 7 months ago | (#47018757)

It's a shame that people put so much faith in AV programs given their poor effectiveness (http://arstechnica.com/security/2014/05/antivurus-pioneer-symantec-declares-av-dead-and-doomed-to-failure/). I think author George R.R. Martin has it right (https://www.youtube.com/watch?v=X5REM-3nWHg): keep separate machines for different purposes, one for serious work and one for messing around with. It doesn't feel like a good idea to use one machine for everything.

Symantec is mixing up stuff here to try to position themselves for the new hot, profitable APT market. For one, the context of this quote about AV being dead was a WSJ interview with the CEO, where he said it in the context of Symantec being able to increase their profit, as AV has become quite cheap and APT is getting all the nice profit margin. It was not said in the context of user need, but in the context of Symantec's profit need.

Then they mix up some statistics about targeted advanced hacker attacks (APT), which of course aren't stopped by AV, but that doesn't make the threat from traditional malware any less. All reports and research show that regardless of APT, the threat from standard malware is increasing, not decreasing (just ask those hit by Cryptolocker...).

Yes, AV is not 100%. There will be APT-type attacks that bypass it, and there will be time periods when brand-new malware bypasses it. But that last point is often overblown. Well over 90% of actual real-world infections are from known malware that would be stopped by a good AV program. Even a condom isn't 100% safe; that doesn't mean it is meaningless to use one.

Re:Tests can never catch these bugs (1)

Kimomaru (2579489) | about 7 months ago | (#47018917)

Possible, but even assuming this, the main issue is that AV in general is considered a relevant safety measure when perhaps it should not be. The assumption by itself can lead to a false sense of security. Frankly, I'd rather run multiple VMs on a machine at the very least - MS Windows for games and Debian for serious work. I don't do serious work on a Windows machine or on any Apple device for that matter - I'd rather my OSs and apps be open source and subject to community scrutiny.

Re:Tests can never catch these bugs (0)

Anonymous Coward | about 7 months ago | (#47021393)

I'd rather my OSs and apps be open source and subject to community scrutiny.

Yes, like OpenSSL, or Sendmail.

Re:Tests can never catch these bugs (1)

bluefoxlucid (723572) | about 7 months ago | (#47018801)

Actually, a static checker (Frama-C) has been shown to find the OpenSSL bug; the problem is that nobody was using it to check OpenSSL. Any parametric fuzzing would have caught the OpenSSL bug as well: give it the construction of the packet and say, "Vary the data in this fixed-length field, vary the data and size of this variable-length field." Such tests only need to account for what types of data come through the program, and can provoke strange behavior.

Test-driven development would also have caught Heartbleed. Similar to fuzzing, TDD would produce valid and invalid test inputs along with their expected results. For example, a TLS Heartbeat test would send a valid and an invalid request and check each response against an expected response. Most likely, this process would have prevented a bug like Heartbleed up front; failing that, it would have flagged it later as a regression, e.g. when a test that sends a request claiming 65000 bytes of payload but carrying only "HELLO" expects the connection to be dropped, yet gets a response back instead.

Various tests can and have caught these bugs.
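
As a minimal, self-contained sketch of that invalid-request test (record and process_heartbeat are hypothetical simplifications for illustration, not OpenSSL's actual types or API):

#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical simplification of a heartbeat handler for test purposes. */
typedef struct {
        const unsigned char *data;   /* type(1) | length(2) | payload | padding */
        size_t len;                  /* bytes actually received */
} record;

/* Returns a malloc'd echo of the payload, or NULL if the record is invalid. */
static unsigned char *process_heartbeat(const record *r, size_t *out_len)
{
        if (r->len < 3)
                return NULL;
        size_t claimed = ((size_t)r->data[1] << 8) | r->data[2];
        if (3 + claimed + 16 > r->len)          /* the length check Heartbleed lacked */
                return NULL;
        unsigned char *echo = malloc(claimed ? claimed : 1);
        if (echo)
                memcpy(echo, r->data + 3, claimed);
        *out_len = claimed;
        return echo;
}

int main(void)
{
        /* Invalid request: claims 65535 payload bytes but carries only "HELLO". */
        unsigned char wire[8] = { 0x01, 0xFF, 0xFF, 'H', 'E', 'L', 'L', 'O' };
        record bad = { wire, sizeof wire };
        size_t n = 0;

        /* A Heartbleed-style handler would return ~64 KB of leaked memory here;
           the test demands that the request be rejected. */
        assert(process_heartbeat(&bad, &n) == NULL);
        return 0;
}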

Re:Tests can never catch these bugs (2)

Laxori666 (748529) | about 7 months ago | (#47018805)

Did you actually read the article? The particular Apple bug would have been easily caught by testing. In fact, the handshake code was copy-pasted six times in the code, and only one of the copies had the bug... if the developer had thought about testing at all, that code would have been factored into one function, and even just by doing so the bug would have been less likely.

Re:Tests can never catch these bugs (0)

Anonymous Coward | about 7 months ago | (#47018993)

This is the one benefit of the "cloud" -- new malware is usually tested against AV software with cloud lookups disabled -- which means it can then be instantly blocked if the AV package's cloud lookup features allow for generic blocking via the cloud.

Sometimes the malware authors test with cloud enabled -- which just means that there are always confirmed-cloud blocks for all that tested malware prior to it hitting the public (author tests until things break, but has just submitted the "working" version to vendors, who then detect it within 15 minutes).

Same thing goes for these tests -- if you have sanity checks baked in and do a methodical fuzz testing on top, you can actually catch the majority of these bugs.

In both cases, there will always be a few that slip through, but those will be very rare.

Because... (0)

Anonymous Coward | about 7 months ago | (#47018417)

... security experts are not that fond of unit testing.

Two Code Smells (1)

SuperKendall (25149) | about 7 months ago | (#47018451)

After reading TFA, I'm not sure I like the suggested approach to the "fix" in the code by replacing the two if blocks with a common method where you pass in all sorts of parameters.

Yes, duplicate code is bad, I agree that's a "code smell" (one of the worst coding terms ever to be invented, BTW).

But just as odiferous to me is a method with a billion arguments, like the combined extracted method has. Sure, duplicate if statements smell bad, but the replacement is worse and also harder to comprehend.

I know it's in theory more testable, but at what cost when the code is more obfuscated? If the code and tests are harder to understand, are you really better off?

Re:Two Code Smells (1)

bluefoxlucid (723572) | about 7 months ago | (#47018811)

Code smell? What the fuck? What was wrong with "antipattern"?

Re:Two Code Smells (1)

SuperKendall (25149) | about 7 months ago | (#47019449)

I totally agree. It sounds better in every way and is clearer to boot.

Re:Two Code Smells (0)

Anonymous Coward | about 7 months ago | (#47021957)

An antipattern is a design idiom or coding practice that you have identified and is known bad (e.g. inner platform effect, god object). A code smell is less defined and more intuitive, based on knowledge and experience. When code has a bad code smell, you know something is not quite right, but you may not have identified the precise problem.

Re: Two Code Smells (0)

Anonymous Coward | about 7 months ago | (#47020211)

Then the next step is to bundle the parameters in a clearer fashion. Perhaps some of them can be derived from a common containing item. If not, perhaps the 4 that have the same type can be wrapped into different classes to reduce the chance of getting them in the wrong order. But I do not agree that six arguments to a method make it uglier or less desirable than repetitions of code that uses six variables.

Testing isn't Perfect (1)

Anonymous Coward | about 7 months ago | (#47018459)

If you're selling that you coulda/woulda caught all X for X that haven't happened yet, you're selling snake oil. The reality is that this computer stuff is a little harder than it looks to do properly, and if all you have to offer is marketing bullshit and a History of Art degree, maybe you should leave it to the professionals, and push for budget to do things correctly rather than just do them.

Re:Testing isn't Perfect (0)

Anonymous Coward | about 7 months ago | (#47018667)

I also think that programmers who have been around the block a few times and run into these types of problems would be more careful about their code style, especially if they've taken the time to think about how to prevent it the next time around, rather than being so overloaded they can't think. Not saying that this is the case here, just saying.

Re:Testing isn't Perfect (1)

jeffmeden (135043) | about 7 months ago | (#47018671)

If you're selling that you coulda/woulda caught all X for X that haven't happened yet, you're selling snake oil. The reality is that this computer stuff is a little harder than it looks to do properly, and if all you have to offer is marketing bullshit and a History of Art degree, maybe you should leave it to the professionals, and push for budget to do things correctly rather than just do them.

But PC-Lint has been successful at finding _every_ bug in Dr Dobbs...

Worth repeating... (4, Interesting)

QuietLagoon (813062) | about 7 months ago | (#47018461)

The ultimate responsibility for the failure to detect this vulnerability prior to release lies not with any individual programmer but with the culture in which the code was produced.

I've often said that you don't fix a software bug until you've fixed the process that allowed the bug to be created. The above quote is of a similar sentiment.

Re:Worth repeating... (0)

Anonymous Coward | about 7 months ago | (#47018789)

Try telling your boss that you won't fix a software bug until the team is no longer pressured to meet some deadline.

Re:Worth repeating... (4, Insightful)

radtea (464814) | about 7 months ago | (#47018853)

I've often said that you don't fix a software bug until you've fixed the process that allowed the bug to be created.

One of the things that struck me about the goto fail bug was that it was precisely the kind of failure that coding best practices in the '90s were designed to engineer out.

Any reasonable coding standard from that time forbade if's without braces for precisely this reason. And yeah, that's a "no true Scotsman" kind of argument (if a coding standard didn't contain such a clause it was not by my definition "reasonable") but the point still holds: software developers at the time were aware of the risk of open if statements causing exactly this kind of failure, because we had observed them in the wild, and designed coding standards to reduce their occurrence.

So to be very specific about what kind of processes and culture would have prevented this bug: a reasonable coding standard and code reviews would have caught it (much of the code review process can be automated these days), and a culture of professionalism is required to implement and maintain such things.

The canonical attribute of professionals is that we worry at least as much about failure as success. We know that failures will happen, and work to reduce them to the bare minimum while still producing working systems under budget and on time (it follows from this that we also care about scheduling and estimation.)

Amateurs look at things like coding standards and reviews and say, "Well what are the odds of that happening! I'm so good it won't ever affect my code!"

Professionals say, "The history of my field shows that certain vulnerabilities are common, and I am human and fallible, so I will put in place simple, lightweight processes to avoid serious failures even when they have low probability, because in a world where millions of lines of code are written every day, a million-to-one bug is written by someone, somewhere with each turn of the Earth, and I'd rather that it wasn't written by me."

It's very difficult to convince amateurs of this, of course, so inculcating professional culture and values is vital.

Re:Worth repeating... (1)

Eunuchswear (210685) | about 7 months ago | (#47020569)

For fucks sake, in the 70's we got rid of programming languages where it was even possible to omit the fucking 'braces' - Algol68 replacing algol 60, Fortran 77 replacing FORTRAN 66.

Then those Pascal and BCPL-reborn losers came along, and it all went to hell.

So , is there a tool that finds every occurrence? (1)

Marrow (195242) | about 7 months ago | (#47021303)

I mean, if unbraced if statements are so deadly, then why are they not outlawed unless specifically allowed by compiler directive? On your head be it.

Re:Worth repeating... (1)

Chelloveck (14643) | about 7 months ago | (#47022161)

Have you read the BSD style(9) man page? It specifically recommends omitting unnecessary braces. In an organization which follows this style guide, not only would the lack of braces not be flagged in a code review but if there were braces around a single-statement 'if' clause the reviewers might require that they be removed. Now, given that OSX is derived from BSD...

Use a space after keywords (if, while, for, return, switch). No braces are used for control statements with zero or only a single statement unless that statement is more than a single line, in which case they are permitted.

[...]

Closing and opening braces go on the same line as the else. Braces that are not necessary may be left out.

if (test)
        stmt;
else if (bar) {
        stmt;
        stmt;
} else
        stmt;

It should be noted that the "super-secure" OpenBSD platform ships with this same style guide. FWIW, I agree with you that braces should be mandatory. I think this is a supremely dumb recommendation.

Re:Worth repeating... (1)

pla (258480) | about 7 months ago | (#47018905)

I've often said that you don't fix a software bug until you've fixed the process that allowed the bug to be created. The above quote is of a similar sentiment.

Sounds great! Now just show me a program (more complex than "Hello World") with no bugs.

Yes, culture can play a large role in the frequency and severity of bugs released into production code. But humans make mistakes, simple as that. No amount of code reviews or test suites or BS like "pair programming" can ever get around that basic fact.

Or perhaps as a better example of the problem with that philosophy - Show me a program with no bugs that used OpenSSL. In that case, even the trivial "Hello World" example would have a serious bug completely out of the control of the developer.

Re:Worth repeating... (1)

drinkypoo (153816) | about 7 months ago | (#47020191)

Yes, culture can play a large role in the frequency and severity of bugs released into production code. But humans make mistakes, simple as that. No amount of code reviews or test suites or BS like "pair programming" can ever get around that basic fact.

Uh, you have gone perpendicular to the problem here. Code reviews and test suites are things we have because of that basic fact.

Re:Worth repeating... (1)

jc42 (318812) | about 7 months ago | (#47022113)

I've often said that you don't fix a software bug until you've fixed the process that allowed the bug to be created. The above quote is of a similar sentiment. Sounds great! Now just show me a program (more complex than "Hello World") with no bugs.

Of course, it has been occasionally pointed out that the canonical "Hello World" program (from the "C Bible") actually has a bug. Granted, it's not one that you're ever likely to observe in the wild, and good luck writing malware to exploit it. But most programmers, even expert C programmers, can't spot it despite being trivially obvious when pointed out. This is actually a fairly nice example of how difficult it can be to write bug-free software, and I'd wonder if it was done intentionally in that book "with malice aforethought". ;-)

Re:Worth repeating... (1)

Shatrat (855151) | about 7 months ago | (#47020301)

It's like the Einstein quote, "We cannot solve our problems with the same level of thinking that created them."

Neatness counts (3, Insightful)

mariox19 (632969) | about 7 months ago | (#47018491)

if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;

Those familiar with the C programming language will recognize that the first goto fail is bound to the if statement immediately preceding it; the second is executed unconditionally.

Sorry, but it needs to be said: this is sloppy, he-man coding. Is there a problem with using brackets? Is it your carpal tunnel syndrome? Are you charged by the keystroke?

This is how mistakes happen. For shame!

Re:Neatness counts (1)

BronsCon (927697) | about 7 months ago | (#47018551)

if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0) {
        goto fail;
}
goto fail;

Seems as though it still would/could have happened. Would it have been easier to catch? Likely. Still would have happened, though.

Re:Neatness counts (2, Insightful)

Anonymous Coward | about 7 months ago | (#47018693)


if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0) {
        goto fail;
}
goto fail;

Seems as though it still would/could have happened. Would it have been easier to catch? Likely. Still would have happened, though.

True, it COULD have happened. But that's a helluva lot more obvious.

And if you RTFA (I know....), the author really had to contrive a BS example with mismatched braces to make a case against requiring braces on all conditional code even if they're only one line.

If mismatched braces is your "proof" that a code standard that requires braces all the time doesn't help prevent the creation of bugs like this, you're really desperate.

Re:Neatness counts (0)

Anonymous Coward | about 7 months ago | (#47018631)

It's not clear to me how using brackets would have helped. The code would have failed even if there were brackets around one or the other or both of the offending statements. And it's not clear if the additional brackets would have increased the likelihood that the mistake would have been noticed.

Re:Neatness counts (0)

Anonymous Coward | about 7 months ago | (#47018829)

Braces around both gotos technically would have avoided the issue, because then the second would just be dead code (as opposed to the rest of the validation code). That being said, the duplicate goto makes no sense regardless of the presence or absence of braces, so it seems silly to argue that the source of the problem was a lack of braces.

Frankly, any kind of half-decent static or dynamic analysis should have noticed that important parts of the problematic function were dead code. The most obvious explanation for why it went unnoticed is that nobody could be bothered with it.

Re:Neatness counts (1)

jeffmeden (135043) | about 7 months ago | (#47018841)

It's not clear to me how using brackets would have helped. The code would have failed even if there were brackets around one or the other or both of the offending statements. And it's not clear if the additional brackets would have increased the likelihood that the mistake would have been noticed.

Brackets around the entire if would have caused it to work as expected; the second `goto fail;' would simply never get used. Brackets around only the first would have made the second look a lot more out of place, at least as much as two identical lines at identical indents can. Where was the indentation discipline on this anyway? Isn't that even more of a style atrocity than foregoing brackets? Nothing like intentionally making your code look like assembly spit from a debugger to make it easy to maintain.

Re:Neatness counts (1)

jones_supa (887896) | about 7 months ago | (#47019241)

if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;

Making an assignment inside the if test also makes it more ambiguous. I would have gone with:

err = SSLHashSHA1.update(&hashCtx, &signedParams);

if (err != 0) {
        /* code... */
}

/* code... */

Re:Neatness counts (0)

Anonymous Coward | about 7 months ago | (#47020361)

Not to mention that doing it that way makes it a fair bit easier to see the value of err BEFORE the if when debugging.

Lack of static and structural coverage analysis (1)

postmortem (906676) | about 7 months ago | (#47018555)

So a good test should catch this goto fail for sure, either a functional test or a unit test. Looks like neither is thorough for this library.

But more importantly, if static analysis or structural coverage analysis of the code had been done, both would have pointed out that there is something wrong with the code.

All of these testing strategies should be applied to such a critical piece of software.

Fuzz Testing. Next! (1)

VortexCortex (1117377) | about 7 months ago | (#47018567)

Automated unit test stubs with range checking and input fuzzing. Took me two weekends to build one atop Doxygen. If your build environment does not do this you're maliciously stupid.

Re:Fuzz Testing. Next! (0)

Anonymous Coward | about 7 months ago | (#47019015)

Fuzztesting doesn't work for some of these things. I'll leave it as an exercise to your ego to figure out why.

NOTE: IF fuzz testing was "The Solution", as you allude to, we would not need unit testing at all. And clearly we need unit tests.

Re:Fuzz Testing. Next! (1)

msclrhd (1211086) | about 7 months ago | (#47019275)

They are all tools that can be applied to improve the quality of the code. No one thing is "The Solution".

* Test Driven Development (TDD) is a good approach to ensure that the code you write is testable. This will not work for things like UI code, but other code will benefit.

* Unit Tests can either be developed via a TDD-like approach (easier to do), or after the code is written (harder to do).

* Automated Regression Tests (a superset of Unit Tests) provide good coverage for ensuring code works as expected without involving a large manual testing team. These will only detect the things covered by the automated tests.

* Static Code Analysis tools can pick up a lot of problem areas, but will not detect every problem. These results can be used to identify what tests need to be created to prevent future regression.

* Fuzz testing is good at providing strange data to e.g. a protocol or file format parser. These are intended to be soak tests -- e.g. "does my regular expression parser handle all these strange and possibly invalid constructs". Fuzz testing would most likely have found the Heartbleed bug (because it would have permuted the length of data to request -- see the sketch after this list). Any failures here should be converted to unit/regression tests to ensure that the problem is (a) fixed by any code changes made and (b) does not recur in the future. Fuzz testing will typically find hard-to-identify bugs (e.g. data races) that are not easy to catch with manually constructed tests or static analysis.

* Manual/ad hoc testing is important as it can uncover bugs that the developers are not aware of.

* Code and Security Reviews help identify potential issues (e.g. if you have someone knowledgeable about SQL injection, they can assess whether some code is vulnerable to that attack).

None of these is a silver bullet, but the more you have the better the code will be.
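
To make the fuzz-testing point concrete, here is a minimal length-fuzzing sketch. It is only an illustration under stated assumptions: naive_heartbeat_echo is a deliberately Heartbleed-like toy that trusts the declared payload length (it is hypothetical, not OpenSSL code), and a real harness would use libFuzzer or AFL rather than rand(). Run it under AddressSanitizer or Valgrind and the out-of-bounds read is reported almost immediately:

#include <stdlib.h>
#include <string.h>

/* Deliberately broken toy that trusts the declared payload length,
   the way Heartbleed did; hypothetical, not OpenSSL code. */
static unsigned char *naive_heartbeat_echo(const unsigned char *rec, size_t len)
{
        if (len < 3)
                return NULL;
        size_t claimed = ((size_t)rec[1] << 8) | rec[2];   /* trusted blindly */
        unsigned char *echo = malloc(claimed ? claimed : 1);
        if (echo)
                memcpy(echo, rec + 3, claimed);             /* may read far past rec */
        return echo;
}

int main(void)
{
        srand(1);                                  /* fixed seed: reproducible runs */
        for (int i = 0; i < 100000; i++) {
                size_t len = 3 + (size_t)(rand() % 64);     /* permute record length */
                unsigned char *rec = malloc(len);  /* heap buffer, so ASan/Valgrind
                                                      can report the overread */
                if (!rec)
                        break;
                for (size_t j = 0; j < len; j++)
                        rec[j] = (unsigned char)rand();
                free(naive_heartbeat_echo(rec, len));
                free(rec);
        }
        return 0;
}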

Re:Fuzz Testing. Next! (0)

Anonymous Coward | about 7 months ago | (#47019397)

The real solution is actually dry-running your code.

Despite my absolute hatred for shit like UML, the basic foundations are incredibly helpful, those being the pseudocode and sequence diagrams. DETAILED sequence diagrams. The rest of UML is literally the same apple viewed from different angles and a waste of time.
Pseudocode, basic flow. Do a good basic sequence diagram of what needs to happen based on that, anonymous "functions / entities". Detailed pseudocode with specific actions and/or detailed sequence diagram that contains the separate compartments and how they communicate. Done.
This will catch most errors. Even with programmers that are only learning. When they are taught to go through things line-by-line, it gets them actually thinking about it. And when you do the iterative sequence / pseudocode method, it is hard to fail as easily as the heartbleed bug. It was an honest, basic mistake, but one that could have been avoided.

Then when you test that, it should bring up very few errors. Unit tests, fuzz tests. And very specifically, use the pre-built libraries out there that test the absolute hell out of things and try to break them. Reinventing the wheel is often the biggest reason so many insecure things exist (ESPECIALLY PHP, it is so easy to write bad code in that due to the way the language works. Horribly).
Re-use others' code / tools for testing. It has most likely been created by large groups of people over many years and is bulletproof.
So many people want to do it on their own and it is a silly attitude and you have nothing to prove to anyone.

Re:Fuzz Testing. Next! (0)

Anonymous Coward | about 7 months ago | (#47019429)

I don't understand why everyone is harping about fuzztesting either. It's randomly poking at a black box. Pointless.
If you aren't sure what the code does and you're too lazy or incompetent to properly white box test it, then for the love of god, do a full input permutation test. Test all possible input values. If you have to, let it run until it completes or until you get a better idea.

It takes brains (1)

Pro923 (1447307) | about 7 months ago | (#47018587)

I've been in this field for 20+ years now, and I don't necessarily (in fact, I usually don't) agree with whatever the current trend is (which is probably why my karma is negative). One underlying trend has been to make software something that can be made by anyone - to remove the requirement of having a special mind that is able to think through algorithms and code. This has generally been accomplished through process and abstraction. Process - if we can describe a method well enough, then anyone should be able to follow it to its logical conclusion. Abstraction - we keep adding layers upon layers in an effort to simplify and streamline that which is a complex thing (lots of numbers in sequence to control a microprocessor and its accompanying hardware). You can probably tell that I'm not a great fan of either - though I'm really really trying to not be a negative type, and to go with the flow more. But I can't help my fundamental feeling that there is just no substitute for a smart individual with a gift for understanding the logic of code. I'm always against process because it takes the gift that I was given and neutralizes it. Personal feelings aside, I just don't think that all the process in the world is ever going to get ahead of the curve that is the battle between perfectly functional software and bugs.

Re:It takes brains (4, Insightful)

jeffmeden (135043) | about 7 months ago | (#47018755)

I've been in this field for 20+ years now, and I don't necessarily (in fact, I usually don't) agree with whatever the current trend is (which is probably why my karma is negative). One underlying trend has been to make software something that can be made by anyone - to remove the requirement of having a special mind that is able to think through algorithms and code. This has generally been accomplished through process and abstraction. Process - if we can describe a method well enough, then anyone should be able to follow it to its logical conclusion. Abstraction - we keep adding layers upon layers in an effort to simplify and streamline that which is a complex thing (lots of numbers in sequence to control a microprocessor and its accompanying hardware). You can probably tell that I'm not a great fan of either - though I'm really really trying to not be a negative type, and to go with the flow more. But I can't help my fundamental feeling that there is just no substitute for a smart individual with a gift for understanding the logic of code. I'm always against process because it takes the gift that I was given and neutralizes it. Personal feelings aside, I just don't think that all the process in the world is ever going to get ahead of the curve that is the battle between perfectly functional software and bugs.

If you make brilliant code that only you can understand, sorry to be harsh but you aren't that brilliant. We definitely need to value people who can generate and perfect algorithms, but do you think anyone would remember or value the Pythagorean Theorem if it were 40 steps long? No, he thought of a (then brilliant) way to do it simply and easily, so that one only needs to understand basic math to pull it off. This is what we need more of: a single elegant algorithm that is so short it is hard to misuse is better than 1,000 algorithms that are all so hard to understand that only the author knows exactly how they work, and that will be forgotten as soon as the particular language or application fades into the past.

Re:It takes brains (2)

Pro923 (1447307) | about 7 months ago | (#47018957)

I kinda agree... I mean, I never go out of my way to artificially complicate my code. I'm not one of those people that uses macros just for the sake of showing how clever I can make them. The problem is that - to use your example - how many people do we have now that actually learn how to derive the Pythagorean Theorem? How do we build on that? The gifted people that COULD build on it, can't - because they're sandboxed into a process, or a higher level abstraction. My kids - they were in awe of the Saturn V, the F-1 engine - all of that technology (with the slide rules!) and knowhow that we had back in the 60's (after watching Apollo 13)... I told them that I didn't think we had the capacity to do that anymore. Then I read an article recently where they were looking at the F-1 engine like it was some alien artefact. Point being, I just don't think that we value our smart people anymore, because we just stick them into the same process as every dope that has a connection to get his job. We need to find the smart people, encourage them with financial incentives, and get THEM to write the code.

Re:It takes brains (0)

Anonymous Coward | about 7 months ago | (#47019515)

Sending man to the moon was a great accomplishment, but it was an artificial, politically motivated statement. It's not a great example of efficiently using our resources.

Re:It takes brains (1)

Pro923 (1447307) | about 7 months ago | (#47020003)

Are you serious? To me it's the exact opposite. I think it's the best example there is of how - when we're truly motivated to do something incredible - we actually can.

Re:It takes brains (0)

Anonymous Coward | about 7 months ago | (#47021183)

01100110 01110101 01100011 01101011 00100000 01100001 01100010 01110011 01110100 01110010 01100001 01100011 01110100 01101001 01101111 01101110

Re:It takes brains (2)

Junta (36770) | about 7 months ago | (#47019327)

If you make brilliant code that only you can understand

There's a false dichotomy here. He said that only *some* people are qualified to create solutions to complex problems. You are saying his claim is that only *one* can understand it, implying that the problem can't possibly be too hard, and that any code that is hard to follow is hard only because the developer is terrible at coding.

As a counter to your example of the Pythagorean Theorem, what about post-graduate math and science? There are tons of things which would make 40 steps seem easy by comparison. Should society forgo those just because only some people are realistically going to be able to understand and apply that correctly?

And the 'anyone can understand it or else it shouldn't exist at all' philosophy has a very obvious consequence: there is no way we'd have cryptographic libraries at all.

I will agree that his stance against processes is a bit too harsh, but I've been around enough to know in some scenarios such a jaded perspective would be perfectly understandable. I've seen some projects that had appropriate and helpful processes that did help quality, but been witness to many many more that had ineffective process that achieved nothing but create busy work while still churning out crap code.

Re:It takes brains (0)

Anonymous Coward | about 7 months ago | (#47019509)

> If you make brilliant code that only you can understand, sorry to be harsh but you aren't that brilliant.
Maybe, but only if your colleagues reach a basic minimum level of competence. Most programmers are shit and would struggle to write a hash table without fucking it up.

Sometimes it's not the communicator, but the audience.

Re:It takes brains (0)

Anonymous Coward | about 7 months ago | (#47020385)

This is what we need more of; a single elegant algorithm that is so short it is hard to misuse is better than 1,000 algorithms that are all so hard to understand that only the author knows exactly how it works and will be forgotten as soon as the particular language or application fades into the past.

But that's not the choice we have. Occasionally, we need to choose between (i) a concise, and possibly elegant implementation of an algorithm, which is complex enough that most coders can't follow its logic; or (ii) a library that implements equivalent functionality through a dozen layers of abstraction, a bug in any of which can constitute a security hole, but which divides the work logically enough for most people to understand.

Sometimes, fundamental security-critical code needs to be written once by a gifted expert, rigorously debugged, then left the hell alone because any further tinkering is likely only to introduce bugs.

Unit tests are just one tool (1)

g01d4 (888748) | about 7 months ago | (#47018743)

FTFA:

Compiler and static-analysis warnings also could have detected the unreachable code, though false warnings might have drowned out the signal if such tools weren't already being used regularly.

I'd propose that these tools weren't being used properly, rather than turning the issue into a nail for the unit-testing hammer.

-Wall -Werror (4, Interesting)

Megane (129182) | about 7 months ago | (#47018777)

Turning on all warnings and forcing them to errors certainly would have caught the bug in Apple's SSL code. Anyone who just lets warnings fly by in C code is an idiot. Even if the warning is mildly silly, getting it out of the way lets the important warnings stand out. Sensible warnings from C compilers are the very reason we don't use lint anymore. Even then you still have to watch out, because some warnings won't appear at low optimization levels, and I recall hearing that there are a few obscure warnings not turned on by -Wall.

Also, it could have possibly been introduced by a bad merge. One of the things that putting braces on every if/for/while/etc. does is give merges more context to keep from fucking up, or at least a chance to cause brace mismatch.

As for Heartbleed, just the fact that the code wouldn't work with a compile time option to use the system malloc instead of a custom one should have been enough to raise some red flags. Because rolling your own code to do something "more efficiently" than the system libraries never introduces new problems, right?

Re:-Wall -Werror (1)

roger10-4 (3654435) | about 7 months ago | (#47019311)

Need to explicitly add -Wunreachable-code. Annoyingly, "-Wall" doesn't catch this particular error (at least on the versions of gcc I've used).

Re:-Wall -Werror (1)

Eunuchswear (210685) | about 7 months ago | (#47022029)

macro abuse.

C has a huge design bug - pragmas can't be used in macros.

Re:-Wall -Werror (1)

Qzukk (229616) | about 7 months ago | (#47021717)

One of the things that putting braces on every if/for/while/etc. does is give merges more context to keep from fucking up, or at least a chance to cause brace mismatch.

I'm not so sure. I've lost track of the number of times where patch has chosen a completely random

    }
  }
}

to wedge a new } else { in, because at the end of a block, all the braces look the same.

I put an end to that by ending blocks with } // if (foo)

Re:-Wall -Werror (2)

rabtech (223758) | about 7 months ago | (#47022201)

Turning on all warnings and forcing them to errors certainly would have caught the bug in Apple's SSL code. Anyone who just lets warnings fly by in C code is an idiot. Even if the warning is mildly silly, getting it out of the way lets the important warnings stand out. Sensible warnings from C compilers are the very reason we don't use lint anymore. Even then you still have to watch out, because some warnings won't appear at low optimization levels, and I recall hearing that there are a few obscure warnings not turned on by -Wall.

Let me quote from one of the best-tested and most widely used projects out there, SQLite, from http://www.sqlite.org/testing.... [sqlite.org]

Static analysis has not proven to be especially helpful in finding bugs in SQLite. Static analysis has found a few bugs in SQLite, but those are the exceptions. More bugs have been introduced into SQLite while trying to get it to compile without warnings than have been found by static analysis.

The bolded part has been my experience unfortunately. Static analysis is nearly useless.

An appropriate test for something like an SSL stack is a separate test harness that "fuzzes" the stack by exploring large random combinations of values, some with known good certificates and others with randomly generated (and thus broken) ones. These days one can spin up thousands of VMs, run a massive suite of billions of test cases in parallel over a few hours, then spin them down and spend a relatively small sum of money.

And yes, the test harness for something like this is probably going to exceed the # of lines of code of the actual implementation by an order of magnitude. For really important security-critical stuff like cryptography, SSL/TLS, keychain management, etc it is well worth the effort.

LOL! Really? (0)

Anonymous Coward | about 7 months ago | (#47018867)

You guys, I swear. Must be nice up there on your thrones. Captain Hindsight, where are you? Ahh .. over on slashdot posting ..

There isn't a piece of useful software EVER RELEASED IN THE HISTORY OF HISTORY that hasn't had bugs. It was a bug, they fixed it, move along, nothing more to see here ...

If you are that worried about it, go write your own OS/Compiler/Language, and while you're at it design your own hardware just to be sure. Oh ...

Reeks of a terrible article (2)

rebelwarlock (1319465) | about 7 months ago | (#47018929)

We have some lovely elements coming together right here on the slashdot blurb:

1. Stupid pun instead of a descriptive title
2. Full caps in the article excerpt
3. Trying to bring up coding "culture"
4. Assertion that it totally could have been caught beforehand, but they aren't sure exactly how.

Somehow, I don't think I'm missing much by not reading the article.

Re:Reeks of a terrible article (1)

jones_supa (887896) | about 7 months ago | (#47019355)

The young whippersnappers are to blame.

Re:Reeks of a terrible article (0)

Anonymous Coward | about 7 months ago | (#47020041)

I read it and can confirm - you have not lost anything.

Merge Conflict (3, Insightful)

znigelz (2005916) | about 7 months ago | (#47018933)

This is clearly the automatic resolution of a merge conflict by the version control software. These are such a nightmare to debug, and they happen all the time. Developers rarely check their entire change visually post-merge. This kind of thing can be found using static analysis that enforces coding standards (such as requiring braces, or proper indentation for the lexical scope), but the bugs from automatic conflict resolution can only really be reduced through better version control software. These are without question the worst and most frustrating bugs.

Re:Merge Conflict (0)

Anonymous Coward | about 7 months ago | (#47019003)

+1. I diff every checkin, and I find merge errors regularly. Not often, but regularly.

Re:Merge Conflict (0)

Anonymous Coward | about 7 months ago | (#47019065)

This is clearly the automatic resolution of a merge conflict by the versioning control software. These are such a nightmare to debug and happen all the time. Developers rarely check their entire change visually post merge. Though this can be found using static analysis that force coding standards (such as forcing the use of brackets or proper indentation for the lexical scope). Though the bugs from automatic conflict resolution can only be really improved through better versioning software. These are without question the worst and most frustrating bugs.

This is just one reason I hate automated merges. I wish I could force not using them. That would force developers to coordinate changes to common code - they damn well should be doing that anyway. I'd rather know about code conflicts early in a release cycle anyway.

Yeah, better version control software would help. But even that won't fix a fundamental flaw in automated merges: there's no way for any automated process to know whether the merged code is semantically correct. And to top it all off, the merged code will never have been subjected to unit testing.

Re:Merge Conflict (0)

Anonymous Coward | about 7 months ago | (#47019603)

If your merged code will never be subjected to unit testing (and integration and acceptance testing!), you will have a bad time. You should always run your tests automatically on checked-in code, whether that means static analysis, unit tests, integration tests, or acceptance tests.

Re:Merge Conflict (1)

phantomfive (622387) | about 7 months ago | (#47019359)

Oh yeah, you are surely right, now that I think about it. If anyone looks at this code, they will easily see which single line caused the problem:

if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;
It's a problem that could have been caught easily with dead-code static analysis. Or by someone looking at the code (maybe Apple doesn't do code review on every check-in?). Or by stepping through the code in a debugger (though I guess that's rare to do after it's already been committed). Or by a few unit tests on this function.

There are lots of ways this bug could have been avoided.

(PS. Is there no way to insert a space into Slashdot? Is there no way to insert properly indented code? That would be better than working on beta).

OpenSSL sucketh (0)

Anonymous Coward | about 7 months ago | (#47018941)

heartbleed

...

WHY DIDN'T A TEST CATCH IT?

A couple of months before this posting, I would have thought OpenSSL had done positive things for the community. I've since learned better: this is unreliable software that must be replaced as soon as viable. Maybe OpenSSL didn't always suck. But, at the time of this writing in May of 2014, it sucketh.

A couple of days ago (at the time of this writing), the OpenSSL Valhalla Rampage site documented yet another bad test in the OpenSSL code: "Sometimes bad tests can be more harmful than no tests at all" [opensslrampage.org]. The OpenSSL code is riddled with problems, including tests that don't work as intended. So there is a simple answer to the question of why a test didn't catch Heartbleed: reliable testing was not being performed. Several of the tests that did exist didn't work, and so couldn't have been relied upon anyway.

Thank goodness there is a competent team on this planet who knows how to do things right and is now taking care of the problem. (The makers of OpenSSH are ripping OpenSSL to shreds and forking it into LibReSSL, a cleaned-up replacement for the library.)

For tons more examples of how cripplingly bad OpenSSL has become, see the other articles on the OpenSSL Valhalla Rampage [opensslrampage.org] site. The site will probably mean nothing to people who don't know programming (or don't have enough experience to have been introduced to techniques like manually freeing memory that was dynamically allocated earlier). I don't quite comprehend some of the intricacies, but some I do, and those that I do are ROFL-making material. Like this amusing observation [opensslrampage.org] (which will amuse anyone who has used the C preprocessor), and many others.

Perhaps one of the best examples is the code that uses a goto statement to jump to this label:

if (0)
                {
err:

That gem comes from Flingpoo!: OpenSSL is written by monkeys [peereboom.us], yet another site ridiculing the current tragedy of OpenSSL. That site has some other wretched examples, like the #ifdef...if...#endif...else mis-construct.
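For readers who haven't seen the idiom, here is a self-contained sketch of it (hypothetical code, not the actual OpenSSL source): the error label lives inside an "if (0)" block, so it can only be reached by goto, which is exactly why it reads so badly.

#include <stdio.h>

static int some_check(void) { return 0; }  /* stand-in for a real check */

int do_something(void)
{
    int ret = 1;

    if (some_check() != 0)
        goto err;

    ret = 0;
    if (0)
        {
err:
        ret = -1;   /* only reachable via the goto above */
        }
    return ret;
}

int main(void)
{
    printf("do_something() returned %d\n", do_something());
    return 0;
}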

Cui Bono (1)

ThatsNotPudding (1045640) | about 7 months ago | (#47019071)

Who benefitted? The TLAs. Accidental, my ass.

Where's the revision history? (1)

Animats (122034) | about 7 months ago | (#47019107)

The article doesn't give the revision history of how that code got there: who put it there, when they put it there, and what the code looked like before and after. We need the names!

Use compiler warnings (1)

roger10-4 (3654435) | about 7 months ago | (#47019169)

It's remarkable how many organizations don't enable aggressive compiler warnings (or worse, ignore or disable them). One of the best practices I've learned is to turn on every warning you possibly can and use the option that treats all warnings as compiler errors. The code from Apple may have been properly unit tested. However, if this was the result of a bad automated merge, unit tests are often not repeated on the resulting code base headed for system test. The "-Wunreachable-code" warning (honored by Clang; recent GCC releases accept the flag but silently ignore it) would have caught this type of error.
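As an illustration, here is a tiny self-contained example (not Apple's code) where a duplicated "goto fail;" makes later code unreachable. Compiling it with "clang -Wunreachable-code -Werror" turns the dead statement into a hard error:

#include <stdio.h>

static int check(void) { return 0; }   /* stand-in for a real hash/verify step */

static int verify(void)
{
    int err;

    if ((err = check()) != 0)
        goto fail;
        goto fail;                      /* the accidentally duplicated line */

    puts("verified");                   /* unreachable: clang warns here */
    return 0;

fail:
    return err;
}

int main(void)
{
    return verify();
}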

Overconfidence in unit tests... (1)

Junta (36770) | about 7 months ago | (#47019179)

The article contains the same flaw as people who rabidly declare unit tests a panacea. The article basically shows that, after discovery of a bug, a unit test can retroactively be constructed that would have caught the bug, and concludes that it's therefore inexcusable that the bug got released, ignoring the fact that this is hindsight. Unit tests certainly have their utility, but practically speaking you will not be able to construct unit tests that catch every single possible scenario. This is tricky enough when trying to catch functional problems, but for security problems, where an adversary is explicitly trying to bend something beyond even what the developer conceived of in design, unit tests become even trickier. If someone had the foresight, while implementing a feature, to craft a test case that explicitly tries malicious things, they probably wouldn't have messed up the code in the first place. Of course, there is value in having the first developer with that awareness institute such a test case so that follow-up changes get checked, but I think in most of these cases the bug came with the first check-in of the function, meaning the developer just never considered the possibility at all. That means they wrote buggy code and would have written (or in fact did write) inadequate test cases as well. You can't just say 'if Apple had done unit tests, their code would have been perfect!' There are projects without unit tests that fare pretty well, and there are projects with unit tests that fail miserably in terms of quality.

I have heard people claim with a straight face that they now have '100% coverage' through unit tests and then go on to say at-will releases are therefore safe to do without any particular testing.

Re:Overconfidence in unit tests... (0)

Anonymous Coward | about 7 months ago | (#47019629)

Anyone but a retard would have written a test that verified the code did something sensible if passed invalid parameters.

Of course, anyone but a retard would have written the code to do something sensible if passed invalid parameters, so they wouldn't have needed the test in the first place.

Re:Overconfidence in unit tests... (1)

russotto (537200) | about 7 months ago | (#47022155)

The article contains the same flaw as people who rabidly declare unit tests a panacea. The article basically shows that, after discovery of a bug, a unit test can retroactively be constructed that would have caught the bug, and concludes that it's therefore inexcusable that the bug got released, ignoring the fact that this is hindsight.

The function was supposed to check various ways a key exchange message could be screwed up. The minimum set of unit tests appropriate for such a function is pretty clear -- feed it messages that are screwed up in each different way, and make sure all of them fail. And feed it a message that is not screwed up and make sure it succeeds. This won't catch everything, but it would have caught this one.

There are lots of times it's unreasonable to expect a developer to have written a unit test which would have caught the bug; this isn't one of them.
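A rough sketch of that minimal test set, assuming a hypothetical verify_key_exchange() with the same contract as Apple's SSLVerifySignedServerKeyExchange (0 on success, non-zero on failure). The toy verifier here exists only so the sketch runs on its own; a real test would link against the TLS stack under test:

#include <stdio.h>
#include <string.h>

struct key_exchange_msg {
    unsigned char client_random[32];
    unsigned char server_random[32];
    unsigned char signed_params[64];
    unsigned char signature[64];
};

/* Toy stand-in: treats the all-zero message as the only valid one. */
static int verify_key_exchange(const struct key_exchange_msg *m)
{
    static const struct key_exchange_msg good;   /* zero-initialized */
    return memcmp(m, &good, sizeof good) == 0 ? 0 : -1;
}

static struct key_exchange_msg valid_message(void)
{
    struct key_exchange_msg m;
    memset(&m, 0, sizeof m);   /* a real test would use a known-good capture */
    return m;
}

int main(void)
{
    int failures = 0;
    struct key_exchange_msg m;

    /* Corrupt each field in turn; verification must fail every time. */
    for (int field = 0; field < 4; field++) {
        m = valid_message();
        unsigned char *fields[] = { m.client_random, m.server_random,
                                    m.signed_params, m.signature };
        fields[field][0] ^= 0xff;
        if (verify_key_exchange(&m) == 0) {
            printf("FAIL: corrupted field %d was accepted\n", field);
            failures++;
        }
    }

    /* The untouched message must still verify. */
    m = valid_message();
    if (verify_key_exchange(&m) != 0) {
        printf("FAIL: valid message rejected\n");
        failures++;
    }

    return failures ? 1 : 0;
}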

"and explains how to not suffer it again" (0)

Anonymous Coward | about 7 months ago | (#47019835)

I fucking HATE split infinitives.

Nothing makes you look like a bigger imbecile than splitting an infinitive.

The culture (2)

computational super (740265) | about 7 months ago | (#47019953)

Yeah, the "culture" is "hurry up and get it done so you can get on to the next thing because if something takes more than an hour to do it's not worth doing" and it exists in every single software development organization on planet Earth. Until these things actually start costing real money to people with real power, this will continue.

Bug is somewhere else (2)

gnasher719 (869701) | about 7 months ago | (#47021865)

Ok, writing "goto fail;" twice in a row is a bug. But it's not the real bug. This code was checking whether a connection was safe, and executed a "goto fail;" statement if one of the checks failed. It also executed one "goto fail;" by accident, skipping one of the checks. But one would think that a statement "goto fail;" would make the connection fail! In that case, sure, there was a bug, but the bug should have led to all connections failing, which would have made it obvious to spot (because the code wouldn't have worked, ever).

So the real problem is a programming style where executing a "goto fail;" statement doesn't actually fail! If a function returns 0 for success and non-zero for failure, like this one, it should have been obvious to add an "assert(err != 0)" to the failure path following the fail: label. That would have _immediately_ caught the problem. There should be _one_ statement in the success case, "return 0;", and _one_ statement in the failure case, "assert(err != 0); return err;".
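A minimal sketch of the structure described above (not Apple's actual code), assuming the usual 0-for-success convention: one success return, one failure return, and an assert that makes a stray "goto fail;" blow up immediately under test.

#include <assert.h>
#include <stdio.h>

static int check_a(void) { return 0; }   /* stand-ins for the real hash/verify steps */
static int check_b(void) { return 0; }

static int verify_params(void)
{
    int err;

    if ((err = check_a()) != 0)
        goto fail;
    if ((err = check_b()) != 0)
        goto fail;

    return 0;                 /* the one and only success path */

fail:
    assert(err != 0);         /* a stray "goto fail;" with err == 0 trips this in testing */
    return err;
}

int main(void)
{
    printf("verify_params() returned %d\n", verify_params());
    return 0;
}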