
Bug Bounties Don't Help If Bugs Never Run Out

Soulskill posted about 5 months ago | from the trying-to-bail-the-ocean dept.


Bennett Haselton writes: "I was an early advocate of companies offering cash prizes to researchers who found security holes in their products, so that the vulnerabilities could be fixed before the bad guys exploited them. I still believe that prize programs can make a product safer under certain conditions. But I had naively overlooked that under an alternate set of assumptions, you might find that not only do cash prizes not make the product any safer, but that nothing makes the product any safer — you might as well not bother fixing certain security holes at all, whether they were found through a prize program or not." Read on for the rest of Bennett's thoughts.

In 2007 I wrote:

It's virtually certain that if a company like Microsoft offered $1,000 for a new IE exploit, someone would find at least one and report it to them. So the question facing Microsoft when they choose whether to make that offer, is: Would they rather have the $1,000, or the exploit? What responsible company could possibly choose "the $1,000"? Especially considering that if they don't offer the prize, and as a result that particular exploit doesn't get found by a white-hat researcher, someone else will probably find it and sell it on the black market instead?

Well, I still believe that part's true. You can visualize it even more starkly this way: A stranger approaches a company like Microsoft holding two envelopes, one containing $1,000 cash, and the other containing an IE security vulnerability which hasn't yet been discovered in the wild, and asks Microsoft to pick one envelope. It would sound short-sighted and irresponsible for Microsoft to pick the envelope containing the cash — but when Microsoft declines to offer a $1,000 cash prize for vulnerabilities, it's exactly like choosing the envelope with the $1,000. You might argue that it's "not exactly the same" because Microsoft's hypothetical $1,000 prize program would be on offer for bugs which haven't been found yet, but I'd argue that's a distinction without a difference. If Microsoft did offer a $1,000 prize program, it's virtually certain that someone would come forward with a qualifying exploit (and if nobody did, then the program would be moot anyway) — so both scenarios simply describe a choice between $1,000 and finding a new security vulnerability.

But I would argue that there are certain assumptions under which it would make sense not to offer a cash prize program — and, in keeping with my claim that this is equivalent to the envelope-choice problem, under those assumptions it actually would make sense for Microsoft to turn down the envelope containing the vulnerability, and take the cash instead. (When I say it would "make sense", I mean both from a profit-motive standpoint, and for the purposes of protecting the security of their users' computers.)

On Monday night I saw a presentation put on by Seattle's Pacific Science Center "Science Cafe" program, in which Professor Tadayoshi Kohno described how he and his team were able to defeat the security protocols of a car's embedded computer system by finding and exploiting a buffer overflow. That's scary enough, but it was more interesting how his description of the task made it sound like a foregone conclusion that they would find one — you simply sink this many person-hours into the task of looking for a buffer overflow, and eventually you'll find one that can enable a complete takeover of the car. (He confirmed to me afterwards that in his estimation, once the manufacturer had fixed that vulnerability, he figured his same team could have found another one with the same amount of effort.)

More generally, I think it's reasonable to assume that for a given product, there is a certain threshold amount of money/effort/person-hours such that if you throw that much effort at finding a new security vulnerability, you will always find one. Suppose you call this the "infinite bug threshold." Obviously the number of vulnerabilities is not really infinite — you can only do finitely many things to a product in a finite amount of time, after all — but suppose it's so close to infinite as to make no difference, because the manufacturer would never be able to fix all the vulnerabilities that could be found for that amount of effort. I'm sure that $10 million worth of effort, paid to the right people, will always find you a new security vulnerability in the Apache web server; the same is probably true for some dollar figure much lower than that, and you could call that the "infinite bug threshold". On the other hand, by definition of that threshold, the number of vulnerabilities that can be found for any amount of money below it will be finite and manageable.

(I'm hand-waving over some details here, such as the disputes over whether two different bugs are really considered "distinct," or the fact that once you've found one vulnerability, the cost of finding other closely related vulnerabilities in the same area of the product, often goes way down. But I don't think these complications negate the argument.)

Meanwhile, you have the black-market value of a given type of vulnerability in a given product. This may be the value that you could actually sell it for on the black market, or it may be the maximum amount of effort that a cyber-criminal would invest in finding a new vulnerability. If a cyber-criminal will only start looking for a particular type of vulnerability if they estimate they can find one for less than $50,000 worth of effort, then $50,000 is how much that type of vulnerability is worth to them.

Now consider the case where

infinite bug threshold > black-market value

This is the good case. It means that if the manufacturer offered a prize equal to the black-market value of an exploit, any rational security researcher who found a vulnerability, could sell it to the manufacturer rather than offering it on the black market (assuming they would find the manufacturer more reliable and pleasant to deal with than the Russian cyber-mafia). And we're below the infinite bug threshold, so by definition the manufacturer only has to pay out a finite and manageable number of those prizes, before all such vulnerabilities have been found and fixed. I've made a couple of optimistic assumptions here, such as that the manufacturer would be willing to pay prizes in the first place, and that they could correctly estimate what the black-market value of a bug would be. But at least there's hope.

On the other hand, if

infinite bug threshold < black-market value

everything gets much worse. This means that no matter how many vulnerabilities you find and fix, by the definition of the infinite bug threshold there will always be another vulnerability that a black-hat will find it worthwhile to discover and exploit.

And that's the pessimistic scenario where it doesn't really matter whether Microsoft chooses the envelope with the vulnerability or the envelope with the $1,000, if the infinite-bug-threshold happens to be below $1,000. (Let's hope it's not that low in practice! But the same analysis would apply to any higher number.) If the black-market-value of a bug is at least $1,000, so that's what the attacker is willing to spend to find one, and if that's above the infinite-bug-threshold, then you might as well not bother fixing any particular bug at that level, because the attacker can always just find another one. It doesn't even matter whether you have a prize program or not; the product is in a permanent state of unfixable vulnerability.
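The two regimes described above reduce to a single comparison. Here is a minimal sketch of that logic as a toy model; the function name and the dollar figures are illustrative assumptions, not data from the article:

```python
# Toy model of the "infinite bug threshold" vs. black-market value argument.
# All numbers are illustrative assumptions.

def bounty_program_outcome(infinite_bug_threshold, black_market_value):
    """Classify a product by the inequality described in the text.

    infinite_bug_threshold: effort (in dollars) above which a new
        vulnerability can *always* be found.
    black_market_value: what an attacker is willing to spend (or pay)
        for one new exploit.
    """
    if infinite_bug_threshold > black_market_value:
        # Below the threshold, the pool of findable bugs is finite:
        # a bounty matching the black-market value can drain it.
        return "bounty program can work"
    else:
        # Attackers can always afford to find one more bug, so fixing
        # individual bugs (bounty or not) never empties the pool.
        return "permanently exploitable"

print(bounty_program_outcome(100_000, 50_000))  # bounty program can work
print(bounty_program_outcome(30_000, 50_000))   # permanently exploitable
```

Note that the bounty amount itself never appears in the classification: which regime you are in depends only on the two quantities, which is exactly why the prize program only helps on one side of the inequality.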

At that point, the only ways to flip the direction of the inequality, to reach the state where "infinite bug threshold > black-market value", would be to decrease the black market value of the vulnerability, or increase the infinite bug threshold for your product. To decrease the black market value, you could implement more severe punishments for cyber-criminals, which makes them less willing to commit risky crimes using a security exploit. Or you could implement greater checks and balances to prevent financial fraud, which decreases the incentives for exploits. But these are society-wide changes that would not be under the control of the software manufacturer. (I'm not sure if there's anything a software company could do by themselves to lower the black-market value of a vulnerability in their product, other than voluntarily decreasing their own market share so that there are fewer computers that can be compromised using their software! Can you think of any other way?)

Raising the infinite bug threshold for the product, on the other hand, may require re-writing the software from scratch, or at least the most vulnerable components, paying stricter attention to security-conscious programming standards. Professor Kohno said after his talk that he believed that if the programmers of the car's embedded systems had followed better security coding practices, such as the principle of least privilege, then his team would not have found vulnerabilities so easily.

I still believe that cash prizes have the potential to achieve security utopia, at least with regard to the particular programs the prizes are offered for — but only where the "infinite bug threshold > black-market value" inequality holds, and only if the company is willing to offer the prizes. If the software is written in a security-conscious manner such that the infinite bug threshold is likely to be higher than the black-market value, and the manufacturer offers a vulnerability prize at least equal to the black-market value, then virtually all vulnerabilities which can be found for less than that much effort, will be reported to the manufacturer and fixed. Once that nirvana has been achieved, for an attacker to find a new exploit, the attacker would have to be (1) irrational (spending an estimated $70,000 to find a vulnerability that is only worth $50,000), and (2) evil beyond merely profit motive (using the bug for $50,000 of ill-gotten gain, instead of simply turning it in to the manufacturer for the same amount of money!). That's not logically impossible, but we would expect it to be rare.

On the other hand, for programs and classes of vulnerabilities where "infinite bug threshold < black-market value", there is literally nothing that can be done to make them secure against an attacker who has time to find the next exploit. You can have multiple lines of defense, like installing anti-virus software on your PC in case a website uses a vulnerability in Internet Explorer to try and infect your computer with a virus. But Kaspersky doesn't make anything for cars.


235 comments


Bennett Haselton (5, Insightful)

Laxori666 (748529) | about 5 months ago | (#46787851)

Bennett Bennett Bennett Bennett Bennett Bennett! I Bennett love Bennett Bennett! Nothing Bennett is Bennett more Bennett entertaining Bennett than Bennett reading Bennett what Bennett a Bennett person Bennett which Bennett I Bennett have Bennett no Bennett idea Bennett who Bennett they Bennett are Bennett has Bennett to Bennett say!

Re:Bennett Haselton (-1, Redundant)

Anonymous Coward | about 5 months ago | (#46787893)

I'm famed Slashblogger Bennet Haselton, and I approve this message.

Re:Bennett Haselton (0)

gth740k (1167237) | about 5 months ago | (#46788447)

Hmm, not sure whether to mod Troll or Insightful... leaning toward insightful though.

Bennett Haselton post (5, Funny)

kruach aum (1934852) | about 5 months ago | (#46787873)

Why do words, suddenly appear
Every time, Bennett's here?
Just like me
you long to be
free from this

Re:Bennett Haselton post (0, Redundant)

arth1 (260657) | about 5 months ago | (#46787991)

Why do words, suddenly appear
Every time, Bennett's here?
Just like me
you long to be
free from this

I wish Bennett Haselton and apk would get their own room.

There, they could engage each other in interesting discourse without bothering the rest of us.

Let's call it.... Slashdot Beta!

When did slashdot become a blog for Bennett? (1)

oneiros27 (46144) | about 5 months ago | (#46788039)

It used to be that CmdrTaco or one of the others on the slashdot staff would occasionally post an article, but in general, the standard procedure would be that someone would write something on some other website, and then Slashdot would link to them.

And sometimes they'd link to one blog over and over again, even when the posts were just rehashes of press releases (eg, coondoggie & Roland Piquepaille) rather than containing any original information or commentary, crowding out actual good articles on the topic. ... but what is Bennett's link to the site? Obviously, it's stronger than coondoggie's Network World spamming, as his writing appears inside articles rather than being posted directly by him.

It seems like Bennett might have some tech cred [wikipedia.org] , and may specifically have experience in this particular area ... but he posts on such a wide area of ... I'd say expertise, but some of it's poorly informed crap.

It almost seems like his submissions are trolling from the slashdot 'editors'.

Re:When did slashdot become a blog for Bennett? (5, Funny)

Anonymous Coward | about 5 months ago | (#46788165)

But you don't understand. Bennett discovered DIMINISHING RETURNS.

People need to know.

NEED. To. Know.

Re:When did slashdot become a blog for Bennett? (0)

Anonymous Coward | about 5 months ago | (#46788707)

Bennett posts beautiful works of diminishing returns every time he sits in front of a keyboard.

Society would be better off if he were a garbageman.

Re:When did slashdot become a blog for Bennett? (-1)

bennetthaselton (1016233) | about 5 months ago | (#46788401)

Is there a statement in this or some other article that you think is incorrect?

Re:When did slashdot become a blog for Bennett? (0)

Anonymous Coward | about 5 months ago | (#46788689)

Instead of wasting time throwing your pearls of wisdom before these unappreciative simpletons, why don't you head over and take a look at Tarsnap? [tarsnap.com]

He'll pay a bounty [tarsnap.com] for every bug you find. Even for typos! (Though small bounties are paid in Tarsnap credit.)

With an (effectively) infinite number of bugs, that's infinite money! Your amazing insights on software quality and mathematics could make you rich!

Wow (-1, Flamebait)

twistedcubic (577194) | about 5 months ago | (#46787887)

I wonder if there is anything good to read over at soylentnews.org?

Re:Wow (0)

Anonymous Coward | about 5 months ago | (#46788777)

Earth-sized exoplanet found in habitable zone. Meanwhile, you get rambling garbage from borderline "Wired level of knowledge" sophists.

http://soylentnews.org/article.pl?sid=14/04/18/0324230

Seriously though, you guys keep Bennett. He's all yours.

Like Cockroaches (1)

tmjva (226065) | about 5 months ago | (#46787901)

They're like the small kitchen cockroaches in suburbia. You never can get rid of them, so all you can do is mitigate periodically because over time they just repopulate from the outside in the wild (a.k.a. the neighbor's yard) which can be likened to "the cloud". (I use the RAID smoke, not the sticky spray.)

Re:Like Cockroaches (3, Funny)

Anonymous Coward | about 5 months ago | (#46787955)

Or it's like pretty much anything bad. Just because you cannot eliminate it completely doesn't mean you give up fighting. Murder is bad, and no matter how many we arrest, there will always be more murderers. That doesn't mean we should eliminate the police.

This "article" might actually be the dumbest thing Bennett Haselton has ever written, which puts it in legitimate contention for dumbest thing anyone has ever written.

Re:Like Cockroaches (2, Funny)

Anonymous Coward | about 5 months ago | (#46788005)

They're like the small kitchen cockroaches in suburbia. You never can get rid of them, so all you can do is mitigate periodically

I don't like Bennett Haselton's posts either, but isn't that a bit harsh?

Metaphor (1)

Sarten-X (1102295) | about 5 months ago | (#46787903)

You can't ever win a race with no finish line. Even if you maintain a constantly-increasing lead, your opponent will still eventually cover the same ground you do, so why even bother running?

Re:Metaphor (1)

Oo.et.oO (6530) | about 5 months ago | (#46788213)

i understand that you think you are using a metaphor (even if it is in fact a simile)
but either way it's false. in a race, you either finish or you don't. in software it's an ongoing process with (hopefully, but rarely) increasing quality and functionality.
small battles can and are won.
say you have 10,000,000,000 bugs in the system as a evidence.
researchers find and patch 1,000,000,000
is your system more or less likely to have an exploit found?

Re:Metaphor (1)

Oo.et.oO (6530) | about 5 months ago | (#46788257)

wtf, slashdot? "evidence"? i said "an estimate"!

Re:Metaphor (0)

Anonymous Coward | about 5 months ago | (#46788399)

You may win in a single race, but that's boring. Winning multiple races is more interesting. You can race on how many individual races you have won, which is what they do in many car and other sports. Then you can race on how many championships you have won. There's no end goal in that race.

Re:Metaphor (1)

Sarten-X (1102295) | about 5 months ago | (#46788799)

You're arguing that I can't refer to a "race with no finish line" because all races have finish lines?

For that level of pedantry, I'd expect you to know that it really was a metaphor [dailywritingtips.com] .

Re:Metaphor (4, Insightful)

lgw (121541) | about 5 months ago | (#46788753)

The notion that you can't have code without these flaws (buffer overruns, dangling pointers, etc) is just asinine. I've worked on significant codebases without any such flaws. You just have to adopt a programming style that doesn't rely on being mistake-free to avoid the issues.

Want to end the danger of buffer overruns? Stop using types where an overrun is even possible.

Want to end the danger of dangling pointers? Managed code doesn't do anything to solve this problem, and is often the worst offender since coders often stop thinking about how memory is recycled, and well-formed objects can hang around in memory for quite some time waiting on the garbage man. So you have to write code where every time you use an object you check that it hasn't been freed, and importantly hasn't been freed and then re-used for the same object! (That happens on purpose in appliance code, where slab allocation is common.)

Heck, for embedded code I simply wouldn't use dynamic allocation at all. All objects created at boot, nothing malloced, nothing freed. Everything fixed sized and only written to with macros that ensure no overruns. I wrote code that way for 5 years - we didn't even use a stack, which is just one more thing that can overflow. That style is too costly for most work, but it's possible, and for life-safety applications it's irresponsible to cheap out.

Bennett's Ego (5, Insightful)

Anonymous Coward | about 5 months ago | (#46787907)

" I was an early advocate of companies offering cash prizes to researchers who found security holes in their products, so that the vulnerabilities can be fixed before the bad guys exploited them. I still believe that prize programs can make a product safer under certain conditions. But I had naively overlooked that under an alternate set of assumptions, you might find that not only do cash prizes not make the product any safer, but that nothing makes the product any safer — you might as well not bother fixing certain security holes at all, whether they were found through a prize program or not."

Is the whole premise of this article Bennett having a conversation with himself, talking about his previous points that no one cared about? I understand slashdot is trying to start doing op-eds by having this guy write. But everything he writes is this long-winded, blowhard, arrogant, ego-massaging nonsense that no one but him cares about. Here he's writing about his previous writings and how his thoughts have changed... in a poorly-written article with no sense of a conclusion... just rambling.

Bennett is also not an information security expert... he's just a blowhard. Can we have someone really involved in information security, like Bruce Schneier, write articles for Slashdot instead of this nonsense?

let's see if work lets me post (0)

Anonymous Coward | about 5 months ago | (#46788037)

probably not.

pagerank algo encourages referencing one's previous posts.

algo changes us.

-s

Re:Bennett's Ego (0)

bennetthaselton (1016233) | about 5 months ago | (#46788205)

Is there a statement in the article that you think is incorrect?

Re:Bennett's Ego (5, Insightful)

Charliemopps (1157495) | about 5 months ago | (#46788423)

While I don't share the AC's animosity towards you, the premise of your argument is entirely wrong.

The number of bugs are not limitless, they are very much a finite thing.

The benefit to the company is not limited to closing that single bug. When someone reports one bug, you likely are learning a new method and/or way of thinking in regards to the procedure/module/whatever is involved. One "reported" bug could likely make many dozens or more other bugs readily apparent in your code.

It also teaches your organization how to avoid that bug in the future. How many bugs were in the wild, being used by blackhats for YEARS through multiple iterations of a software package before being caught?

Also, you get to find the mistake in the code and, if you're managing your code correctly, you will know who made the mistake. So you can coach if it was something that should have been caught.

And lastly, it solidifies your place in the market as a leader. People study your code intently, use it more, get more involved. The more people involved, the bigger your talent pool, the more industry respect you have, and as a result the more people will look to you as a company that cares about the stability and long term viability of your product.

Re:Bennett's Ego (1)

Jmc23 (2353706) | about 5 months ago | (#46788477)

Ah for shame. You didn't realize he lives in a reality all his own. Points from our reality do not work.

There's supposed to be some sort of silver...

Re:Bennett's Ego (0)

bennetthaselton (1016233) | about 5 months ago | (#46788627)

The number of bugs are not limitless, they are very much a finite thing.

That's true -- but I did say,

Obviously the number of vulnerabilities is not really infinite — you can only do finitely many things to a product in a finite amount of time, after all — but suppose it's so close to infinite as to make no difference, because the manufacturer would never be able to fix all the vulnerabilities that could be found for that amount of effort. I'm sure that $10 million worth of effort, paid to the right people, will always find you a new security vulnerability in the Apache web server; the same is probably true for some dollar figure much lower than that, and you could call that the "infinite bug threshold".

Do you think that statement is incorrect? That for $10 million worth of effort, you could always find a new vulnerability in Apache, no matter how many iterations of bug-fixing you've already gone through?

The benefit to the company is not limited to closing that single bug. When someone reports one bug, you likely are learning a new method and/or way of thinking in regards to the procedure/module/whatever is involved. One "reported" bug could likely make many dozens or more other bugs readily apparent in your code.

It also teaches your organization how to avoid that bug in the future. How many bugs were in the wild, being used by blackhats for YEARS through multiple iterations of a software package before being caught?

Also, you get to find the mistake in the code and, if you're managing your code correctly, you will know who made the mistake. So you can coach if it was something that should have been caught.

And lastly, it solidifies your place in the market as a leader. People study your code intently, use it more, get more involved. The more people involved, the bigger your talent pool, the more industry respect you have, and as a result the more people will look to you as a company that cares about the stability and long term viability of your product.

I think all of that is true but doesn't negate the argument. Those are all great reasons to incentivize people to find bugs in your product. But if the state of the product is such that in practice it will always be possible to find another vulnerability for $50,000 worth of effort, and the vulnerability is worth $100,000 on the black market, someone will still find another vulnerability and exploit it.

Re:Bennett's Ego (0)

Anonymous Coward | about 5 months ago | (#46788435)

Writing text longer than 200 characters for others to read does require something more than just being correct: in /. terminology it needs to be either A) Interesting, B) Informative, or C) Funny (Entertaining).

Your text on the other hand is utterly non-provocative boring fluff. When you bring up a theme that the readers have absolutely no interest in whatsoever to begin with, it really needs to add something that makes people care about it. At least you should spend some time on /. in order to begin understanding and appreciating your audience, who on average have probably been here longer than you, and thus probably have more mature opinions than you as well.

What about your piece is falsifiable? It's just opinion, not an article. The rest of us use comments for opinions like these, and we need the approval of the /. herd mentality to be heard. Just because you know someone doesn't mean you deserve to be on top of everyone else here. This just breaks the fundamental business model and functionality of /.

Sorry if this sounds harsh, but nothing about your post made me want to read any further than the first sentence, and I have never heard of you before, so that was my first impression.

To be fair: Very few articles on the internet, /. especially, are so interesting you just HAVE to spend all the time reading it, so don't assume people will find it interesting or worth their time just because it came from you. People's time and attention is precious. Spend it wisely.

Captcha: smacked

Re:Bennett's Ego (0)

khasim (1285) | about 5 months ago | (#46788573)

Is there a statement in the article that you think is incorrect?

You missed the point of the post that you are replying to. But since you asked ...

You can visualize it even more starkly this way: A stranger approaches a company like Microsoft holding two envelopes, one containing $1,000 cash, and the other containing an IE security vulnerability which hasn't yet been discovered in the wild, and asks Microsoft to pick one envelope.

That makes no sense. Why would a security-researcher offer to pay MICROSOFT for NOTHING?

Microsoft should be paying the security-researcher.

It would sound short-sighted and irresponsible for Microsoft to pick the envelope containing the cash — but when Microsoft declines to offer a $1,000 cash prize for vulnerabilities, it's exactly like choosing the envelope with the $1,000.

Wrong again.

Not PAYING $1,000 is NOT the same as getting an ADDITIONAL $1,000.

If I have $1,000 and I do not buy something for $1,000 I still have $1,000. But if someone gives me an envelope with $1,000 then I have TWO THOUSAND DOLLARS.

You might argue that it's "not exactly the same" because Microsoft's hypothetical $1,000 prize program would be on offer for bugs which haven't been found yet, but I'd argue that's a distinction without a difference.

No. It's wrong because in your example Microsoft ends up with an ADDITIONAL $1,000 from a security-researcher.

Re:Bennett's Ego (0)

Anonymous Coward | about 5 months ago | (#46788351)

If he's doing op-eds, I would think he'd also be interviewing people in those fields; plenty of stories/articles have been posted on /. alone with interviews, either by /. or from a linked source. It's like saying a sports broadcaster shouldn't be allowed to nitpick at teams and explain what they should be doing, simply because they didn't play, or because they didn't play on a major professional team. You learn from paying close attention and doing interviews, from the players to the coaches throughout a league.

Having said that, just about every op-ed I have read is dumb; it's as if these people were somehow the only ones to think of this! Not just Bennett but all the other ones as well.

This article is something that those in programming already know, and if they don't, then they're half the problem.

To sum up the entire rant...

Closed source doesn't care about 'security' they never have, the whole point in having a Bounty Program is to allow users or the zombified masses to think the software they are using is 'safe and secure'. We've already seen the effects of this with the NSA exploiting back doors and holes that researchers and hackers couldn't find, and if they did, the patches would fix one hole only to perhaps add another one.

Whether you believe that or not, it is worth thinking about and examining the industry to see if it is true or a lie.

For open source, e.g. Mozilla, if you want a variety of eyes scanning and trying to exploit the code, giving people incentives (cash) will get more attention. It's possible that if OpenSSL had had a bounty in place, the Heartbleed bug could have been prevented. We know the code is open to anyone who wishes to look at it, and open source has prided itself on trying to be more secure than proprietary software.

Re:Bennett's Ego (0)

Anonymous Coward | about 5 months ago | (#46788739)

Bennett is also not an information security expert.

He wanted to be [nytimes.com] , but Microsoft said he was "too dumb for it". I doubt that was the real reason. He sounds like a total asshat.

Overthinking the issue (0)

Anonymous Coward | about 5 months ago | (#46787919)

The value is a drop in the bucket for most companies that are widespread enough to need to do this. It's a lot better than say, going bankrupt because nobody trusts your product anymore. Even if you have to do it forever.

By this logic... (2, Insightful)

Lab Rat Jason (2495638) | about 5 months ago | (#46787921)

... we shouldn't attempt to arrest or prosecute criminals because there is always another one right behind the first?

You should be ashamed of your apathy.

Re:By this logic... (1)

bennetthaselton (1016233) | about 5 months ago | (#46788425)

There aren't infinitely many criminals; crime rates are lower than they would be if we didn't arrest or prosecute criminals, because the population of criminals is finite and in fact small enough that our policing and sentencing policies can make a dent in it. (If there really were infinitely many criminals, then indeed it would be pointless to arrest or prosecute them, but there aren't.)

Re:By this logic... (1)

Anonymous Coward | about 5 months ago | (#46788693)

There aren't infinitely many criminals

There aren't infinitely many bugs either, but that didn't stop you from making it the premise of your rant.

There aren't infinite bugs (1)

egarland (120202) | about 5 months ago | (#46787925)

If you start with the assumption that you can't make secure software, then you shouldn't make any software at all.

Re:There aren't infinite bugs (4, Interesting)

mlts (1038732) | about 5 months ago | (#46787989)

People talk about bug free code. It is a matter of won't, not a matter of can't.

Sometimes, there are products out there which can be considered "finished". Done as in no extra features needed, and there are no bugs to be found. Simple utilities like /usr/bin/yes come to mind. More complex utilities can be honed to a reasonable degree of functionality (busybox comes to mind.)

The problem isn't the fact that secure or bug free software can't be made. It is that the procedures and processes to do this require resources, and most of the computer industry runs on the "it builds, ship it!" motto [1]. Unfortunately, with how the industry works, if a firm does do the policy of "we will ship it when we are ready", a competitor releasing an early beta of a similar utility will win the race/contracts. So, it is a race to the bottom.

[1]: The exception to this rule being malware, which is probably the most bug-free code written anywhere these days. It is lean, robust, does what it is purposed to do, and is constantly updated without a fuss.

Re:There aren't infinite bugs (0)

Anonymous Coward | about 5 months ago | (#46788063)

constantly updated without a fuss.

Free automatic updates. Got to love it.

Re:There aren't infinite bugs (0)

Anonymous Coward | about 5 months ago | (#46788275)

And if software "engineers" were held legally liable for their work (or at least their employers were) this would change.

Re:There aren't infinite bugs (4, Interesting)

RabidReindeer (2625839) | about 5 months ago | (#46788279)

People talk about bug free code. It is a matter of won't, not a matter of can't.

Sometimes, there are products out there which can be considered "finished". Done as in no extra features needed, and there are no bugs to be found. Simple utilities like /usr/bin/yes come to mind. More complex utilities can be honed to a reasonable degree of functionality (busybox comes to mind.)

The problem isn't the fact that secure or bug free software can't be made. It is that the procedures and processes to do this require resources, and most of the computer industry runs on the "it builds, ship it!" motto [1]. Unfortunately, with how the industry works, if a firm does do the policy of "we will ship it when we are ready", a competitor releasing an early beta of a similar utility will win the race/contracts. So, it is a race to the bottom.

[1]: The exception to this rule being malware, which is probably the most bug-free code written anywhere these days. It is lean, robust, does what it is purposed to do, and is constantly updated without a fuss.

Once upon a time, I read somewhere (Yourdon, possibly) that the number of bugs in a software product tends to remain constant once the product has reached stability. The number for IBM's OS/MVS mainframe operating system was somewhere in the vicinity of 10,000!

It's been likened to pressing on a balloon where when you squeeze one bump in, another pops out, because the process of fixing bugs itself introduces new bugs.

And OS/MVS is about the most critical software you could put on a mainframe. You can't just Ctrl-Alt-Delete a System/370. Or power it off and back on again. Mainframes are expensive, and expected to work virtually continually. Mainframe developers were expensive as well, since after a million dollars or so of hardware and software, paying programmers handsome salaries wasn't as big an issue back then. Plus there was no offshore race to the bottom where price trumped quality at the time. In fact, there wasn't even "perma-temping" yet.

Still, with all those resources on such an important product, they could only hold the bug count constant, not drive it down to zero.

Actually speaking of OS/MVS, there's a program (IEFBR14) whose sole purpose in life is to do nothing. There have been about 6 versions of this program so far, and several of them were bug fixes. More recently, it had to be upgraded to work properly on 64-bit architecture, but some of the bugs were hardware-independent.

Re:There aren't infinite bugs (1)

Number42 (3443229) | about 5 months ago | (#46788331)

[1]: The exception to this rule being malware, which is probably the most bug-free code written anywhere these days. It is lean, robust, does what it is purposed to do, and is constantly updated without a fuss.

I contest that point. Have you ever even used MS Office?

Re:There aren't infinite bugs (5, Insightful)

SourceFrog (627014) | about 5 months ago | (#46788021)

It's retarded to assume that you can't make a product secure; there aren't "infinite" bugs, there are obviously a finite number of bugs in any piece of software, and anyone who thinks otherwise either has some strange mental illness or doesn't understand software. But the reason bug bounties mostly don't work has nothing to do with the author's wiffle-waffle; it's just simple math and the cost of labor. If I have the level of skills required to find security holes in a large piece of software like Windows or IE, chances are I can sell my labor at a minimum of $50/hour. To find a bug, I'm likely going to have to spend several days or weeks at it. If there's a $1000 bounty, that means I can spend at most 20 hours on the problem before I am literally losing money in opportunity costs. And hackers have to pay their mortgages and bills too. It's insulting to expect an experienced security expert to labor away finding your bugs for you at well below the market rate for that work, as if they were a dog getting excited over a small treat; it's patronizing. If it would cost a company like MS $100,000/annum to have a security expert on their dev team to find those same bugs, then any 'bounty' has to START at well above those effective hourly rates.
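The parent's break-even arithmetic can be written out directly; a quick sketch using the comment's own sample figures ($50/hour, $1,000 bounty):

```python
# Break-even point for bounty hunting vs. billable work.
# The $1,000 bounty and $50/hour rate are the parent comment's sample figures.
def break_even_hours(bounty_usd, hourly_rate_usd):
    """Hours of searching after which the bounty pays less than regular work."""
    return bounty_usd / hourly_rate_usd

print(break_even_hours(1_000, 50))  # 20.0 -- past this, the hunt loses money
```

Any realistic multi-day search blows well past that 20-hour ceiling, which is the comment's point about bounties needing to start near market rates.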

Re:There aren't infinite bugs (1)

bennetthaselton (1016233) | about 5 months ago | (#46788251)

Yeah, I think $1,000 is way too low, I just used it as a sample number.

I think all that matters is that the dollar number matches the black-market value. Then it doesn't matter whether most people would find the effective hourly rate "insulting"; all that matters is that anybody who does find an exploit will turn it in to the company rather than selling it on the black market or exploiting it themselves.

Re:There aren't infinite bugs (1)

Eunuchswear (210685) | about 5 months ago | (#46788385)

Are you an economist?

That'd explain it.

Re:There aren't infinite bugs (1)

Wycliffe (116160) | about 5 months ago | (#46788547)

But what is the "black market rate" for 1 million credit card numbers? $20 apiece? What is the cost to the company if they lose 1 million credit cards? This is a job for the bean counters, but in some cases it might be worth it not to pay for the bug if you think it'll cost you less than $20 million in mitigation, reputation, etc. In other cases, it might be worth a lot more than $20 million if, for instance, the loss of 1 million credit cards causes Bank of America to lose $100 million of business. I think the best strategy is probably to break it up into smaller domains so that no one can ever get 1 million credit card numbers. If we do that and the maximum they can get is 10k credit card numbers, then you've reduced both the value on the black market and the amount you should have to pay for the bug. Basically, the best way to prevent a breach is to make the amount of reward less than the amount of effort. That's the reason a house with more expensive stuff in it needs better security than a house with nothing of value, and why a jewelry store has better security than a pet store. It's also the reason you see signs that say "driver carries less than $20 in cash". Criminals are always going to go for the low-hanging fruit, which gives the most reward for the least amount of risk and effort, so reducing the reward is probably one of the best and cheapest ways to increase your security.
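The partitioning argument above can be put in rough numbers; the $20-per-card figure and the domain sizes are the commenter's hypotheticals:

```python
# Black-market haul from a single breach, before and after partitioning the
# card database into smaller domains. All figures are the parent comment's
# hypothetical numbers.
def breach_value_usd(cards_reachable, price_per_card_usd):
    return cards_reachable * price_per_card_usd

monolithic = breach_value_usd(1_000_000, 20)  # one big database
sharded = breach_value_usd(10_000, 20)        # biggest single domain after the split
print(monolithic, sharded)  # 20000000 200000
```

A 100x drop in the attacker's reward also caps what a matching bounty would have to pay, per the comment's reasoning.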

Re:There aren't infinite bugs (1)

pr0fessor (1940368) | about 5 months ago | (#46788583)

What about scarcity? Wouldn't scarcity of exploits on the black market drive the prices up, eventually putting the price over the infinite-bug threshold?

Although I agree that if there is enough money involved someone will find a way even if it is an indirect exploit to an otherwise solid application.

Re:There aren't infinite bugs (1)

JesseMcDonald (536341) | about 5 months ago | (#46788595)

Then it doesn't matter whether most people would find the effective hourly rate "insulting"; all that matters is that anybody who does find an exploit will turn it in to the company rather than selling it on the black market or exploiting it themselves.

You're assuming they can only choose one. What is there to prevent someone from exploiting the bug themselves for a while, selling it on the black market (to a discreet buyer), and still eventually turning it in to collect the bounty?

Re:There aren't infinite bugs (1)

bennetthaselton (1016233) | about 5 months ago | (#46788721)

Good point, someone else mentioned this and I'll just copy and paste what I wrote here:

Right, I forgot to mention something: To prevent double-use like this, a company should say that you don't get paid until they've fixed the bug and issued a patch for it in their software, all without the exploit ever being spotted in the wild. (If someone else finds your vulnerability and exploits it in the wild, that's just bad luck. So to incentivize researchers, Microsoft might have to increase the prize money proportionally, to make up for the fact that sometimes people won't get paid because their exploits were found by someone else.) This incentivizes people to report bugs and not release them to the black market as well.
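The "increase the prize money proportionally" adjustment reduces to a one-line expected-value calculation; the black-market value and payout probability below are illustrative numbers:

```python
# If a researcher only collects the bounty with probability p (the bug might
# be exploited in the wild before the patch ships, voiding the payout), the
# posted prize must be scaled so the *expected* payout still matches the
# black-market value. V and p are illustrative numbers.
def required_prize_usd(black_market_value_usd, payout_probability):
    return black_market_value_usd / payout_probability

V, p = 10_000, 0.8
print(required_prize_usd(V, p))  # 12500.0
```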

Re:There aren't infinite bugs (0)

Anonymous Coward | about 5 months ago | (#46788727)

It would cost MS about $400,000 per year, when you factor in benefits, office space, free soda, etc.

Which is a good segue to something the author didn't point out: In order to run a $1000 per security bug program you'd need to spend way more than $1000 per bug. You'd probably need a team of 30 or 40 full time people from multiple disciplines (dev, test, PM, PR, management), an up to date lab for testing exploits and fixes, etc. You'd probably be spending about $10,000 per hour, even on days with no valid exploits coming in. Which isn't to say, of course, that people and equipment would be sitting around idle, because 99% of the submissions would be from idiots/crazies who think being able to run "format c:" from the command line is a security bug or from grifters trying to resell known exploits.
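The parent's $10,000/hour figure is roughly consistent with its own headcount numbers; a back-of-envelope check using the comment's estimates (30-40 staff, ~$400,000 fully-loaded cost per person-year):

```python
# Back-of-envelope running cost of a bounty triage program, using the parent
# comment's figures: ~35 staff at ~$400,000 fully-loaded cost per year.
def cost_per_hour_usd(headcount, cost_per_person_year_usd, work_hours_per_year=2_000):
    return headcount * cost_per_person_year_usd / work_hours_per_year

print(cost_per_hour_usd(35, 400_000))  # 7000.0 -- same order as the $10k/hour estimate
```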

Re:There aren't infinite bugs (1)

omgwtfroflbbqwasd (916042) | about 5 months ago | (#46788333)

Counterpoint: Even the best teams are not capable of making secure software.

Case in point, the NASA shuttle avionics system. CMMI level 5 certified software development program, track record of 2 Sev-1 defects per year during development.

Timeline Analysis and Lessons Learned [nasa.gov] (see page 7/slide 6) You'll find that there were hundreds of unknown latent Sev-1 defects (potentially causing loss of payload and human life) and even ~150 defects 15 years after the program started.

The question isn't whether your team is capable or willing to fix the issue, you must acknowledge that there is nearly 100% certainty that there are unknown vulnerabilities in any software you write. The question goes back to whether a bug bounty program will ever cross the inflection point of a ROI chart.

Cost of formal verification (1)

tepples (727027) | about 5 months ago | (#46788467)

We start with the assumption that the vast majority of the market isn't willing to pay a company substantially more to ship a formal proof of a software product's security along with the product. I'm interested in your bright ideas for making such a formal proof economical.

say what? (0)

Nick (109) | about 5 months ago | (#46787931)

What a waste of time.

However.... (2)

NiteMair (309303) | about 5 months ago | (#46787981)

Paying people to find bugs and report them responsibly does give those people an incentive to not do something worse with them.

In a way, this economy takes possible would-be black hats and turns them into white hats. I suspect there are far fewer people capable of finding every last exploit than there are exploits, so if we keep those people busy and paid doing what they do best, at least they won't be doing something more nefarious.

Re:However.... (1)

bennetthaselton (1016233) | about 5 months ago | (#46788301)

Paying people to find bugs and report them responsibly does give those people an incentive to not do something worse with them.

Right, I forgot to mention something: To prevent double-use like this, a company should say that you don't get paid until they've fixed the bug and issued a patch for it in their software, all without the exploit ever being spotted in the wild. (If someone else finds your vulnerability and exploits it in the wild, that's just bad luck. So to incentivize researchers, Microsoft might have to increase the prize money proportionally, to make up for the fact that sometimes people won't get paid because their exploits were found by someone else.) This incentivizes people to report bugs and not release them to the black market as well.

Re:However.... (1)

jc42 (318812) | about 5 months ago | (#46788805)

To prevent double-use like this, a company should say that you don't get paid until they've fixed the bug and issued a patch for it in their software, all without the exploit ever being spotted in the wild.

One problem with this is that there's already a documented history of companies rejecting bug reports and not paying the bounty, then some time later including a fix in their periodic updates. It's basically the same process that causes a company's "app store" to reject a submitted tool that does a particular job, and then a few months later release their own app that does the same thing.

I know a good number of people who've been bitten by the latter, from both MS and Apple. In the case of a bug, it's a lot harder to document that this has happened, but various software guys I know express a strong suspicion that it has been done to them.

It's widely believed that corporations don't have ethics at all, only costs and income, which would easily explain this sort of fraudulent "offers" of rewards with no intent to pay. We've heard here often from lots of people who think that this is right and proper, and that corporations should only be motivated by the bottom line.

When combined with the growing penchant for treating someone who reports a security bug as a criminal "security hacker" and prosecuting people who report bugs in software products, this should reasonably make a sensible developer reluctant to take rewards programs seriously. Given an offer which could get you thanks and some money, or could land you in jail for your efforts, and no way to know beforehand which the company will do, why would you even consider letting them know your name?

(Actually, my name has appeared in numerous companies' lists of honored contributors thanks to my bug reports and patches. But I haven't sent in security-related bug reports to many companies, only to the ones I have reasons to believe I can trust.)

Wrong (4, Insightful)

Stellian (673475) | about 5 months ago | (#46788001)

There is no such thing as a single "black market value" of a security vulnerability. Both demand and supply have curves. E.g., there are security researchers who would demand, say, a million bucks before selling a bug to the CIA (because they view that action as unethical, illegal and risky career-wise), while they would gladly accept $10,000 in a responsible disclosure offer. Other color hats would go to the highest bidder. Similarly, there are large transaction costs and information asymmetries; it's not necessarily true that demand and supply meet, or that the parties can trust each other. A spy agency might rather develop in house (at a much larger cost) than shop around and raise suspicion.

In short, offering a non-trivial sum of money will always increase the costs of the average attacker and might completely shut off the low impact attacks like spam zombification, email harvesting etc., the developers of which can't invest millions in an exploit but would gladly use the free zero day+exploit just made public.

Re:Wrong (0)

Anonymous Coward | about 5 months ago | (#46788341)

Supply and Demand always have curves (only in basic economics classes are the curves actually straight lines), and yet there is almost always a market value. In your example, you are providing different markets ("good guy selling to good people" and "good guy selling to bad people[CIA]"). Information asymmetry also doesn't imply there's not a value, the party or parties are just unaware of that value. Just because you bought a shirt in India for $100 that normally sells for $5 doesn't mean $5 isn't the market price, it just means you got shafted. And naturally, different parties will pay different amounts for the same thing, some would pay higher, some would pay lower, and the value is where the seller ends up wanting to sell. (As you pointed out, to whom the seller is selling might affect the value at which he is willing to sell). In the case they can take the product to the highest bidder, it's simply a very inelastic market with a supply of 1 very unique product.

Your summary doesn't really have anything to do with there being a black market value of an exploit or not.

Content! (0)

celeb8 (682138) | about 5 months ago | (#46788035)

Content!

Security is all or nothing? (1)

tomhath (637240) | about 5 months ago | (#46788043)

Every bug fixed raises the bar slightly. Although I suppose if you're pretty sure there are infinitely many security holes in your code that are all roughly equally easy to find then you shouldn't bother fixing them - you should get another job.

tldr (5, Insightful)

Zero__Kelvin (151819) | about 5 months ago | (#46788049)

I did read far enough to realize that this person is an idiot. We need only look at the Heartbleed bug. If a bounty had been offered and resulted in an earlier fix, the number of stolen keys would be smaller, but that is almost beside the point. Once that hole is closed they might find another bug, but the likelihood that it will also leak private keys is extremely low. To use a car analogy, every car has problems. This is essentially like claiming that fixing the exploding gas tanks in a Pinto is of no use, because the car will still have other issues. Seriously?

Re:tldr (1)

bennetthaselton (1016233) | about 5 months ago | (#46788361)

The analysis only applies to similar classes of vulnerabilities. If you find a remote root exploit in Apache with $5,000 worth of effort, but it turns out there are an effectively infinite number of remote root exploits that can also be found with $5,000 worth of effort, then in fact it is pointless to fix it since that is well below the black-market value of such an exploit, and new ones will never stop being found. But it's irrelevant if there are other far less serious bugs.

Re:tldr (1)

tomhath (637240) | about 5 months ago | (#46788527)

To paraphrase:

IF you can easily find a serious security hole AND IF there are a very large number of other serious security holes AND IF there are also a very large number of less serious security holes, THEN there's no point in offering a bounty because the number of less serious security holes plus the number of more serious security holes is so large you'll never fix them all.

Yes that's true. But it doesn't take a page long monolog to say it.

However, IF your bounty turns up a security hole like Heartbleed THEN the bounty was money well spent.

Re:tldr (1)

bennetthaselton (1016233) | about 5 months ago | (#46788647)

Except this analysis is wrong, and that's what happens if you try to take shortcuts. It doesn't matter whether there is a "very large number of serious security holes"; it matters whether there is a very large number of serious security holes that can be found for a cost which is less than the black-market value of the security hole.

Yes, I'm sure the article could have been made shorter.

Re:tldr (0)

Anonymous Coward | about 5 months ago | (#46788807)

Bennett admitted he could have written less! MOD PARENT UP.

Re:tldr (1)

Zero__Kelvin (151819) | about 5 months ago | (#46788551)

The analysis is absurd, and I'm pretty surprised that you would show your face at all. The fact that your appearance didn't involve an apology speaks volumes.

Re:tldr (1)

bennetthaselton (1016233) | about 5 months ago | (#46788679)

Which part of it doesn't make sense to you? That if there are effectively infinitely many remote-root vulnerabilities that can be found for $5,000 worth of effort, and the black market value of such a vulnerability is $10,000, then finding and fixing one of those $5K-vulnerabilities will not affect the expected amount of time that it takes the black hat attacker to find one themselves when they start looking?
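Bennett's claim here is the memoryless property of a constant-rate search process: if every unit of effort independently finds a fresh, equally-findable bug with probability q, the expected time to the attacker's next find is 1/q no matter how many earlier bugs were found and fixed. A toy simulation (q is an arbitrary illustrative parameter):

```python
import random

# Toy model of the "effectively infinite bugs" scenario: each unit of search
# effort independently turns up a new vulnerability with probability q.
# Geometric waiting times are memoryless, so fixing earlier finds does not
# change the expected time to the next one.
def mean_search_time(q, trials=100_000, seed=42):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = 1
        while rng.random() >= q:  # keep searching until a bug turns up
            t += 1
        total += t
    return total / trials

print(mean_search_time(0.1))  # close to 1/q = 10, regardless of past fixes
```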

Re:tldr (2)

Zero__Kelvin (151819) | about 5 months ago | (#46788783)

"Which part of it doesn't make sense to you?"

Actually there are a few things:

  1) You wrote a long, absurd and useless analysis piece and then actually decided to broadcast your drivel in the apparently mistaken belief that it is actually well written and has a point "I just don't get"
  2) Slashdot actually published your drivel
  3) You continue to defend it, despite an overwhelming number of people pointing out how incredibly stupid it actually is

All of those things baffle me.

Re:tldr (1)

twocows (1216842) | about 5 months ago | (#46788607)

I don't think so. New ones *will* stop being found, or at least the rate of finding will slow down, especially if they're being patched. The effort required to find such exploits will also go up, which will also raise the price on the black market. Past a certain point, blackhats will likely just focus their efforts elsewhere.

Re:tldr (1)

bennetthaselton (1016233) | about 5 months ago | (#46788703)

Well yes, if eventually you run out of bugs that can be found with less than $10K worth of effort and $10K is the black-market value of the exploit, then that is the case where the black-market value is below the infinite bug threshold, and the product can be made secure, and as you said, black hats will move on to something else. That was my point :)

Re:tldr (2, Funny)

Anonymous Coward | about 5 months ago | (#46788363)

Yes, but to be honest, if your car has windscreen wipers that leave a smear, why worry about the random chance of explosion?

Re:tldr (5, Funny)

Thruen (753567) | about 5 months ago | (#46788431)

I did read far enough to realize that this person is an idiot.

So you only got to "Bennett Haselton writes:" then?

Re:tldr (1)

Rich0 (548339) | about 5 months ago | (#46788683)

Yup. If you're just going to throw up your hands and say that bug-free software is impossible, why not just intentionally write software that doesn't work at all?

My Linux kernel HAS to be broken. So, why not just edit the source and put an infinite loop at the entry point? The resulting black screen when I boot up must be just as useless as the OS I'm typing on right now, right?

Re:tldr (1)

TubeSteak (669689) | about 5 months ago | (#46788795)

This is essentially like claiming that fixing the exploding gas tanks in a Pinto is of no use, because the car will still have other issues.

No, it's like claiming that the Pinto is always going to have an exploding gas tank issue, even if you fix the current cause.

Some cars (software) have so much going on that there will always be problems, unless you use a design process (coding language) that doesn't allow it to happen.
Or do you really think that Microsoft products with millions of lines of code are someday going to be bug free?

You want to solve bugs? (0)

Anonymous Coward | about 5 months ago | (#46788089)

Make software programming an actual profession like electrical engineering. Because right now it looks more like a bunch of overgrown children moving pictures around on a screen.

It's not the infinite bugs... (1)

HaeMaker (221642) | about 5 months ago | (#46788105)

...it's the lack of accountability. The reason why Microsoft should take the cash is because they are not accountable for their bugs by contract. Finding a vulnerability costs them money, it does not make or save money. The only case that can be made for disclosing and fixing vulnerabilities is improved goodwill, but even that is tempered by the fact that what ever meager goodwill they gain by fixing the bug is probably cancelled by the loss in goodwill from having the bug in the first place.

From basic programming to advanced (1)

erroneus (253617) | about 5 months ago | (#46788115)

Like so many others, my first code was:

10 PRINT "HELLO WORLD"

We started out with some basic operations and grew from there. Unfortunately most people kept what they liked and discarded the rest. Things like data and input validation are seen as a waste of time by so many. Strings and other data which get passed to other processes in other languages (like SQL, or Windows image libraries) also warrant some inspection.

The types of vulnerabilities we find most often happen because programmers are neglecting to pay attention to some of these very basic things. Others are more complex, but if these basic issues are still going on, then it's hard to see programmers as generally professional whether they are commercial or open source writers.

It may come as a surprise to some people, but the mistakes made in coding these days are increasingly critical in nature as civilization is increasingly reliant on what is being written and run out there. Much scrutiny and soul searching should be done. (It won't happen until some really bad things happen and frankly, the truly bad things are too much of an advantage to alphabet agencies so we won't hear a push for this from government in case anyone was waiting for it.)

I hate to TL;DR, but... (2)

SecurityGuy (217807) | about 5 months ago | (#46788131)

...the notion that if you can't make software bug free, you may as well not bother is just stupid on a scale that's hard to comprehend. I skimmed as much of that article as I could stomach, but I'm done.

If we can't make cars crash proof, we may as well not make them safer.
If we can't make people immortal, we may as well stop advancing medicine.

You know what? If you can't find perfect stories, you may as well stop posting junk like this.

Crack is a hell of a drug (0)

JoeyRox (2711699) | about 5 months ago | (#46788133)

Huh?

"you might as well not bother" (1)

rebelwarlock (1319465) | about 5 months ago | (#46788141)

I'm gonna have to stop you right there, because your entire premise is retarded. If someone finds a bug in your software, and you don't bother to fix it, you are intentionally keeping the software less secure than it could be. That should be criminal, but I'd be satisfied with Ben 10 not being allowed to have a blog on slashdot anymore.

Re:"you might as well not bother" (1)

bennetthaselton (1016233) | about 5 months ago | (#46788453)

My point is that if there are (effectively) infinitely many bugs below the black-market value threshold, then the software isn't any less secure because you didn't fix a bug -- because you haven't changed the amount of effort it would take for the attacker to find their next vulnerability.

That's where you are wrong. (1)

khasim (1285) | about 5 months ago | (#46788717)

My point is that if there are (effectively) infinitely many bugs...

No need to read any further because that is an incorrect assumption.

There cannot be an infinite number of bugs (effectively or otherwise) because there is not an infinite amount of code NOR an infinite number of ways to run the finite amount of code.

From TFA:

(He confirmed to me afterwards that in his estimation, once the manufacturer had fixed that vulnerability, he figured his same team could have found another one with the same amount of effort.)

Then he was wrong as well.

There are a finite number of times that buffers are used in that code base. Therefore there are a finite number of times that buffers could be overflowed. If someone went through the code and checked each instance and ensured that an overflow situation was not possible then it would not be possible.

"Infinite" does not mean what you think it does.

WTF? (1)

mbone (558574) | about 5 months ago | (#46788143)

I don't think he understands how security works.

True for crap software (1)

flyingfsck (986395) | about 5 months ago | (#46788179)

Not true for software that was written properly, reviewed, unit tested and system tested.

There is important factor being ignored. (0)

Anonymous Coward | about 5 months ago | (#46788195)

Assume that, regardless of the infinite bug threshold, the security researchers are simultaneously both the black and white hats, and are therefore looking for and finding the bugs anyway. The purpose of the prize is then not to solicit the search for bugs but to become the purchaser of the bugs which are going to be found in any event. Not offering prizes means that infinite stream of bugs goes to the black market; offering prizes means that stream goes to the vendor.

What on earth makes someone assume there are two distinct groups, black and white who will only sell to their own respective markets? There is one large group that consists of everyone who makes a living hunting for bugs and they will sell to whichever market is most desirable. The black and white hats simply designate which market bought a given vulnerability. There might be a tiny smattering who will only sell to the white market on principle but they would have disclosed without a prize. There are probably zero or nearly zero who would refuse to take the money of the vendor today even if they've always sold to the black market in the past.

Security compiler? (2)

eyepeepackets (33477) | about 5 months ago | (#46788197)

Why not a security compiler? Seems some clever, creative hackers could work up something which would take raw code, subject it to some scrutiny and give output/feedback. Perhaps even a security switch to the standard compilers or even a security test suite. Shouldn't be that hard to do.

Re:Security compiler? (1)

swillden (191260) | about 5 months ago | (#46788775)

Why not a security compiler? Seems some clever, creative hackers could work up something which would take raw code, subject it to some scrutiny and give output/feedback. Perhaps even a security switch to the standard compilers or even a security test suite. Shouldn't be that hard to do.

Shouldn't be too hard... in the sense that solving the Halting Problem shouldn't be too difficult. I conjecture that with an appropriate set of assumptions it's possible to use Rice's Theorem to prove that security analysis is equivalent to the Halting Problem.

Of course, static analysis can catch some vulnerabilities, and can highlight potential vulnerabilities. That's what Coverity does. But I don't think any mechanical process can defeat a creative attacker.
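The reduction swillden is gesturing at can be sketched concretely: a perfect "is this program secure?" oracle would decide the Halting Problem. All names below are hypothetical illustration, not a real API:

```python
# Sketch of the classic reduction: embed an undecidable question inside a
# service so that the service is insecure exactly when the answer is "yes".
# `halts` stands in for "does program P halt on input x?", which no analyzer
# can decide in general.
def make_trap(halts):
    def service(secret, request):
        if halts:            # reachable iff the embedded program halts
            return secret    # the vulnerability: leaks the secret
        return "ok"
    return service

# A perfect is_secure() would satisfy is_secure(make_trap(h)) == (not h),
# i.e. it would decide halting -- a contradiction. Tools like Coverity are
# useful precisely because they settle for approximations instead.
print(make_trap(True)("s3cret", "ping"))   # the leak fires
print(make_trap(False)("s3cret", "ping"))  # "ok": no reachable leak
```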

Security research effort is non-deterministic (1)

Anonymous Coward | about 5 months ago | (#46788221)

Haselton's analysis (for what it is) in this post assumes that, just because a Professor said he thought it might take "about the same amount of time" for his team to find another vulnerability, that ALL security vulnerability research carries a linear, or at least deterministic, cost (in terms of man-hours, whether those hours are paid or unpaid).

This is simply not true. The reasons are as follows:

(1) Security vulnerabilities are best described by the methods or mechanisms used to find and then exploit them. Although there are some security vulnerabilities (such as buffer overflows) that can be searched for using deterministic methods that are relatively fast and sometimes even automated, there are many classes of vulnerabilities where the time investment required is completely unpredictable -- anywhere from minutes to thousands of years, assuming you have world-class security researchers working on them full-time. Not only is it not deterministic, but we can't even put any kind of reasonable bound on the amount of time it would take for someone to deliberately (or accidentally) discover such vulnerabilities.

(2) There are security vulnerability types (a "type" is classified by a close similarity in the procedure used to detect and/or exploit the vulnerability) which are either very rarely known to anyone on the planet, or completely unknown as of today, and may not be found or described at all for many years to come. These types may bring with them completely arbitrary, but interesting properties. For example, a yet-to-be-revealed class of security vulnerabilities may be trivial to detect, but require advanced degrees in physics or mathematics to exploit. Another class may require resources such as a quantum computer just to detect them, a resource which is not widely available in 2014. Since we don't yet know what the characteristics of undiscovered security vulnerability types may be, we can't predict, or even estimate, what the cost of finding them might be. It's not impossible to conceive that quantum computers (that is, either using quantum computers as a tool in detecting vulnerabilities in digital computers, or vulnerabilities in quantum computers *themselves*) might expose entire new classes of vulnerabilities that are trivial to detect and exploit, and frighteningly severe.

The "economics" of security research (where I speak of economics in terms of the amount of human and/or computer resources that have to be "spent" to find and exploit a particular vulnerability) are far, far too dynamic to start throwing around big round numbers and inequalities. That kind of reasoning only applies when a specific type of product has been found to have many vulnerabilities of the exact same type, and these vulnerabilities are being continuously found using the same techniques day after day (or however long it takes to find a new one). This parade of same-class vulnerabilities may continue for a while, but in general, once a manufacturer gets slapped in the face with 2 or 3 vulnerabilities of a particular class, the next patch or product release tends to completely close off that class, after which the cost of finding a new class of vulnerability rises to "indeterminate, and unpredictable even in principle".

Take Android rooting, for example. For a long time we were able to root Android devices rather trivially, using similar vulnerabilities such as symlink attacks and unchecked-input vulnerabilities in privileged system processes. Now both manufacturers and Google have wised up to these types of vulnerabilities, and either the bloatware devs or the manufacturers are testing for them before they release their OTAs. So far, the Motorola Droid Ultra's 4.4 firmware has not been rooted, despite the line having a fairly large user base, more than a month of exposure (several months by now, actually), and a $1500+ root bounty. That's because the attackers are using "old, tried and true" exploit types, which are now largely obsolete.

Seek help (0)

h4x0t (1245872) | about 5 months ago | (#46788245)

Bennett,
I know life can seem like an endless rat race sometimes. It is difficult to refute that logic to a rational mind, but we are not simply rational minds. We love and lose and fix endless bugs, and we shouldn't just give up.
You should consider seeking help. I know... it will be from some git psych major, but it's possible they will put your mind at ease.

I think you're working from a few false assumption (5, Insightful)

Opportunist (166417) | about 5 months ago | (#46788345)

First, bugs in a given program are not infinite in number. By definition. Because the code itself is finite. Finite code cannot have infinite bugs. Also, due to the nature of code and how it is created, patching one bug usually also takes care of many others. If you have a buffer overflow problem in your input routine, you need only patch it once, in the routine. Not everywhere that routine is being called.
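The "patch it once, in the routine" point can be sketched in Python. Since Python has no buffer overflows, this sketch uses the analogous case of SQL injection in a shared lookup routine; the function and table names are invented:

```python
import sqlite3

# Every caller goes through lookup_user, so fixing the flaw here
# fixes it everywhere that routine is called.

def lookup_user(conn, username):
    # Before the fix, the query was built by string formatting:
    #   conn.execute("SELECT id FROM users WHERE name = '%s'" % username)
    # The one-line patch: a parameterized query.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

print(lookup_user(conn, "alice"))         # [(1,)]
print(lookup_user(conn, "x' OR '1'='1"))  # [] -- injection attempt is inert
```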

I have spent a few years (closer to decades now) in IT security with a strong focus on code security. In my experience, the effort necessary to find bugs is not linear. Unless the code changes, bug hunting becomes increasingly time consuming. It would be interesting to analyze this in depth, but from a gut feeling I would say it's closer to a logarithmic curve. You find a lot of security issues early in development (there are plenty of quick wins), issues that can easily be found even in a static analysis (like the mentioned overflow bugs, or unsanitized SQL input), whereas it takes increasingly more time to hunt down elusive security bugs that rely on timing issues or race conditions, especially when interacting with specific other software.

Following this I cannot agree that you cannot "buy away" your bug problems. A sensible approach (ok, I call it sensible 'cause it's mine) is to get the static/easy bugs done in house (good devs can and will actually avoid them altogether), then hire a security analyst or two and THEN offer bug hunting rewards. You will usually only get a few to deal with before it gets quiet.

Exploiting bugs follows the same rules as the rest of the market: finding the bug and developing an exploit for it has to be cheaper than what you hope to reap from exploiting it. If you now offer a reward that's level with the expected gain (adjusted by considerations like the legality of reporting vs. using it, and the fact that you needn't actually develop the exploit), you will find someone to squeal. Because there's one more thing working in your favor: only the first one to squeal gets the money, and unless you know about a bug that I don't know about, chances are that I'll have a patch done and rolled out before you get your exploit deployed. Your interest in telling me is proportional to how quickly I react: the smaller I can make the window in which you can use the bug, the smaller your window to make money with the exploit, and the more interesting my offer to pay you to report the bug becomes.
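The comment's market argument can be put as back-of-the-envelope arithmetic. This is a toy model; every name, number, and discount factor below is invented for illustration:

```python
# Toy model: expected profit from exploiting (rather than reporting) a bug.

def expected_exploit_profit(black_market_value, exploit_dev_cost,
                            patch_window_days, exploit_lifetime_days,
                            legal_risk_discount):
    """Black-market gain minus development cost, scaled by how much of
    the exploit's useful lifetime survives the vendor's patch response,
    and discounted for legal risk."""
    usable_fraction = min(1.0, patch_window_days / exploit_lifetime_days)
    return (black_market_value - exploit_dev_cost) \
        * usable_fraction * legal_risk_discount

# A fast-patching vendor shrinks the window, so a modest bounty wins:
profit = expected_exploit_profit(5000, 1000, 10, 100, 0.5)
print(profit)          # 200.0
print(1000 >= profit)  # True: a $1,000 bounty beats exploiting
```

The point the model makes explicit: the vendor's patch speed enters the attacker's calculation directly, so faster patching lowers the bounty needed to outbid the black market.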

Re:I think you're working from a few false assumpt (1)

bennetthaselton (1016233) | about 5 months ago | (#46788587)

First, bugs in a given program are not infinite in number. By definition. Because the code itself is finite. Finite code cannot have infinite bugs.

I agree... I did write, "Obviously the amount of vulnerabilities is not really infinite — you can only do finitely many things to a product in a finite amount of time, after all — but suppose it's so close to infinite as to make no difference, because the manufacturer would never be able to fix all the vulnerabilities that could be found for that amount of effort."

Also, due to the nature of code and how it is created, patching one bug usually also takes care of many others. If you have a buffer overflow problem in your input routine, you need only patch it once, in the routine. Not everywhere that routine is being called.

Right, I also said, "I'm hand-waving over some details here, such as the disputes over whether two different bugs are really considered "distinct," or the fact that once you've found one vulnerability, the cost of finding other closely related vulnerabilities in the same area of the product, often goes way down. But I don't think these complications negate the argument."

In fact I agree with everything you said, it just sounds like you're reaching the same conclusion that I did. Once you're done with in-house bug finding, offer a prize close to the black market value of an exploit in the software. If there are finitely many bugs in that range -- as you said, "You will usually only get a few to deal with before it gets quiet" -- then the prize will sweep them up.

Perhaps I should have emphasized: You don't have to start your bug-fixing by offering a prize, you can find as many of them as possible in-house, and from outsiders who report the easy bugs for free. You could even save money by starting with a lower prize, and then ramping it up slowly. (However, this runs the risk that someone might find a valuable bug early on, but keep it secret waiting for the prize money to go up. If they do this, they run the risk that someone else will find the same bug and claim the prize money and then the original discoverer gets nothing. But somebody still might try this. So that's a downside of slowly increasing the prize money.) As long as you end by offering a prize proportional to the black-market value of the vulnerability.

So let's disband the Secret Service then. (1)

jthill (303417) | about 5 months ago | (#46788349)

Because it's widely understood that if anyone competent _really_ wants to kill the President, they're going to do it.

Right argument, wrong conclusion? (1)

Anonymous Coward | about 5 months ago | (#46788391)

I agree with much of your analysis, but I think the conclusion you draw isn't the most interesting or useful one from the available data. The better, but related, line of reasoning goes like this:

1) The more security-critical your software is to the world (protecting more dollars, as in users multiplied by the value of what the users lose when this software breaks), the higher the black-market value of finding a bug.
2) The more total bugs your software has (defect rate multiplied by LOC), the more it costs to fix a given fraction of the bugs via a bounty system (whether that fraction is half or all of the finite bug count).
3) A software company can only rationally afford a given total bug-bounty payout for a product before the entire product (money earned on making+selling the software minus bug bounty payouts) is a net loss and they might as well discontinue and withdraw the software from the market. This sets constraints on the maximum bounty the company can rationally offer, which we can then compare to the black market bug value.
4) Therefore, in approximately the same cases you state it's not worth offering a bug-bounty at all (because there will always be more bugs that are "worth it" on the black market), the best conclusion is that the company should not be selling the software to users *at all*, and users should not rationally be consuming this software, because it's a net loss for everyone involved (except the black hat hackers).
5) Therefore, in any case where your original analysis concludes that it's not in the rational best interest of a company to offer a bug bounty, it's *also* not in the best interests of the company or its users for the company to even be selling that software in the first place (or for the users to be using it). So the meta-meta-conclusion is that after an unbiased self-analysis, a rational and responsible company has two real options: offer a bug bounty that's high enough to increase security (given black-market value), or withdraw the product from the market. It's never rational to keep selling the software *and* not offer a decent bounty, or any bounty at all.
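Steps 2 and 3 above can be sketched as a comparison between the company's rational bounty budget and the cost of sweeping the market-relevant bugs. All figures and function names below are invented:

```python
# Sketch of the comment's constraint: bounty payouts beyond the product's
# margin make selling it a net loss.

def bounty_budget(product_revenue, non_bounty_costs):
    # Step 3: the most the company can rationally pay out in total.
    return max(0, product_revenue - non_bounty_costs)

def rational_to_sell(product_revenue, non_bounty_costs,
                     bugs_above_black_market_floor, per_bug_bounty):
    # Step 4: if sweeping the bugs costs more than the budget,
    # the rational move is to withdraw the product.
    needed = bugs_above_black_market_floor * per_bug_bounty
    return bounty_budget(product_revenue, non_bounty_costs) >= needed

print(rational_to_sell(1_000_000, 600_000, 50, 5_000))   # True: 250k <= 400k
print(rational_to_sell(1_000_000, 600_000, 200, 5_000))  # False: 1M > 400k
```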

duh? (1)

beernutmark (1274132) | about 5 months ago | (#46788405)

In other news, there is no point in cleaning your house because it will just get dirty again later.

An infinite number of people think Bennett is a boob (1)

Anonymous Coward | about 5 months ago | (#46788443)

If we pay the police to stop one person from punching Bennett in the nose for being a boob, what have we accomplished? There are an infinite number of people who want to punch Bennett in the nose for being a boob, and therefore trying to stop people from punching Bennett in the nose is an exercise in futility.

More to bounties than bugs (1)

wjcofkc (964165) | about 5 months ago | (#46788585)

Bug bounties don't always involve bugs. A lot of times it's paying someone to back-port software. For example, software X version 1.5 is available for and popular on... let's say an Ubuntu 12.04 based system. Version 2.0 comes out with a host of cool new features, except that it is only available for Ubuntu 13.10 based systems and the maintainers are not going to port it to 12.04. So, within the same framework as a bug bounty, community members pool money and pay someone $300 to back-port the software. I see this sort of thing happen all the time and have personally benefited from it. I also see distro maintainers offer bounties to fix bugs in their own projects, or bounties to back-port features of their latest system to the previous version. Or is he only talking about closed-source-style bounties? Overall, the article is hard to follow logically and seems to have a very narrow view of the world of software in general, and I admittedly did not finish it because of that.

Stupid Argument (1)

smutt (35184) | about 5 months ago | (#46788597)

We should stop looking for bugs because we can never find them all. Maybe we should stop prosecuting criminals because we can't seem to stop finding more. There will always be murderers, so let's make killing legal.

Inductive Fallacy (1)

swillden (191260) | about 5 months ago | (#46788645)

This analysis is based on an erroneous assumption derived from an inductive fallacy. Specifically, the author assumes that because one researcher who found one bug believes he could have found a second for roughly the same level of effort, the process could be repeated indefinitely. I'm certain that if Kohno were asked, he would deny the validity of this assumption. I'm sure he would say that his team could find a handful of similar bugs for a similar level of effort, but once the pool of low-hanging-fruit bugs was exhausted, the cost and difficulty would rise.

Core assumptions are wrong (1)

gurps_npc (621217) | about 5 months ago | (#46788653)

First, he assumed that given x effort, you could find bug #1. That is a reasonable expectation, given the state of programming today. Bugs, while not infinite, are in fact so numerous that the amount of time it takes to find them all exceeds the project life of the software.

Then he assumed that given y effort you could then find bug #2. Again a reasonable assumption.

Third assumption: that x = y. This is FALSE. For that assumption to be true, bugs would have to be found randomly, not by effort. The truth is that x is ALWAYS less than y, because it takes skill and effort to find them.

Each successive bug is more and more difficult to find, and the cost curve is exponential. This means that when you're just starting out, it APPEARS that x = y, but the further along you go, the more y exceeds x.
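The x-versus-y claim can be expressed as a toy geometric-growth model (base effort and growth rate are invented parameters, not measurements):

```python
# Toy model of exponentially growing bug-finding effort: early on the
# cost of bug 1 (x) and bug 2 (y) look similar, but the gap widens fast.

def effort_for_bug(n, base_effort=10.0, growth=1.5):
    """Hypothetical effort (in arbitrary units) to find the n-th bug."""
    return base_effort * growth ** (n - 1)

x = effort_for_bug(1)     # 10.0
y = effort_for_bug(2)     # 15.0 -- close to x at the start
late = effort_for_bug(10) # roughly 384 -- far beyond the early costs
print(x, y, late)
```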

This is a common problem, faced by mothers cleaning their house and by cops facing criminals. By the time they clean up one mess, a new one has popped up. But that does not mean you stop cleaning. Your efforts do mean something. The idea is to always be one step AHEAD of the mess, not behind it. That way you always end up with an acceptably dirty situation, rather than a virus infected/crime ridden area.

WRONG! (1)

Junior J. Junior III (192702) | about 5 months ago | (#46788671)

Security is not binary. Security is not absolute. There is ALWAYS residual risk. There is no such thing as invulnerability or immortality. Everything can be taken down. Security is not an end state. It is an ongoing process. If you do not continually improve the security of software, by addressing known vulnerabilities, performing a sane risk assessment, identifying threats, and doing what you can to mitigate them, you will regret it. The notion that implementing fixes is pointless because there will always be more vulnerabilities is wrong. Yes, there will always be vulnerabilities. Yes, security is a job that never ends. No, you can't ignore vulnerabilities once you know of them.