
Heartbleed Sparks 'Responsible' Disclosure Debate

Soulskill posted about 7 months ago | from the arguing-about-ethics dept.


bennyboy64 writes: "IT security industry experts are beginning to turn on Google and OpenSSL, questioning whether the Heartbleed bug was disclosed 'responsibly.' A number of selective leaks to Facebook, Akamai, and CloudFlare occurred prior to disclosure on April 7. A separate, informal pre-notification program run by Red Hat on behalf of OpenSSL also reached Linux and Unix operating system distributions. But router and VPN appliance makers Cisco and Juniper had no heads up. Nor did large web entities such as Amazon Web Services, Twitter, Yahoo, Tumblr and GoDaddy, just to name a few. The Sydney Morning Herald has spoken to many people who think Google should have told OpenSSL as soon as it uncovered the critical OpenSSL bug in March, not as late as April 1. The National Cyber Security Centre Finland (NCSC-FI), which reported the bug to OpenSSL on April 7, after Google, spurring the rushed public disclosure, also thinks it was handled incorrectly. Jussi Eronen of NCSC-FI said Heartbleed should have remained a secret, shared only within security circles, when OpenSSL received that second bug report, which the Finnish cyber security centre was passing on from security testing firm Codenomicon. 'This would have minimized the exposure to the vulnerability for end users,' Mr. Eronen said, adding that 'many websites would already have patched' by the time it was made public if this procedure had been followed."


No Good Solution. (5, Insightful)

jythie (914043) | about 7 months ago | (#46786737)

This really strikes me as the type of problem that will never have a good solution. There will always be competing interests and some of them will be mutually exclusive while still being valid concerns.

Re:No Good Solution. (3, Insightful)

gweihir (88907) | about 7 months ago | (#46786777)

Indeed. But there is a _standard_ solution. Doing it in various ways is far worse than picking the one accepted bad solution.

Re:No Good Solution. (3, Interesting)

Opportunist (166417) | about 7 months ago | (#46786829)

A standard means jack. As long as there is no compelling reason to disclose (like, say, avoiding a back-breaking fine or jail time), bugs like that don't get reported; they get sold.

Re:No Good Solution. (1, Troll)

gweihir (88907) | about 7 months ago | (#46786849)

That is BS. You are conflating two groups to make your nonexistent point: people who want to disclose and people who want to sell. The latter are always black hats, even if some of them pretend otherwise.

Re:No Good Solution. (1)

Anonymous Coward | about 7 months ago | (#46787015)

You are missing the third group: people who irresponsibly disclose to the public before the developers as a means of scoring some shallow PR points. That one usually involves Google researchers, typically when the target is Microsoft.

Re:No Good Solution. (1)

Anonymous Coward | about 7 months ago | (#46787535)

Doing it in various ways is far worse than picking the one accepted bad solution.

Worse for whom?

Once you recognize that there are competing interests, then what's best for you might not be best for me, and you and I are going to do different things.

Re:No Good Solution. (1)

Fnord666 (889225) | about 7 months ago | (#46787693)

Indeed. But there is a _standard_ solution.

Citation needed.

Re:No Good Solution. (1)

mwvdlee (775178) | about 7 months ago | (#46788337)

So the solution to competing interests and mutually exclusive valid concerns is to always pick only one concern/interest and always ignore the others?
Good luck with that.

Re:No Good Solution. (-1, Flamebait)

Anonymous Coward | about 7 months ago | (#46786863)

This really strikes me as the type of problem that will never have a good solution. There will always be competing interests and some of them will be mutually exclusive while still being valid concerns.

Therefore the best solution is public release, so everyone has the information at the same time. Let them compete for the patch; the awful software publishers will be the ones caught with bugs, while the good ones will be patched and secure and everyone else suffers their bad choices.

Over time the best software will prevail and only idiots will still be using Microsoft products... that's the theory. In practice there is corruption, and bad software will linger for decades.

Re:No Good Solution. (3, Interesting)

Anonymous Coward | about 7 months ago | (#46786901)

There is no right; it has already gone bad, so you've just got a lot of wrongs to choose from. So my opinions on disclosure are informed by risk minimization. Or, to borrow a term, "harm reduction."

The order in which people were informed about Heartbleed smells more like a matter of "it's about who you know" than about getting the problem fixed. If OpenSSL isn't at, or real close to, the top of the list of people you contact the first day, you're either actively working against an orderly fix, or you don't trust the OpenSSL folks with the knowledge to fix their own software and are beyond a healthy level of paranoia.

We protected 1 billion people by notifying trusted (2)

raymorris (2726007) | about 7 months ago | (#46787719)

This was handled similarly to a flaw I discovered, and I think it makes sense. Facebook, for example, has about a billion users. If you have a colleague you trust at Facebook, informing that one colleague can protect Facebook's billion users.

The risk of a leak before a fix is widely deployed depends on a) the number of people you inform and b) how trustworthy those people are to keep quiet for a couple of days. It's quite reasonable to minimize the risk of a leak by keeping it low profile for a few days, while minimizing the damage by protecting as many people as possible.

For CVE-2012-0206, developers knew that Wikimedia was the largest user. Wikipedia and related properties account for over half of the end-users that could be affected. So by letting just one person know about it ahead of time, we could protect millions of Wikipedia users. That seemed like a good trade, so we let Wikimedia have the patch 24 hours before the main distros like Red Hat put the patch out publicly and the vulnerability became well known. Nobody was harmed by hearing about it on Tuesday rather than on Monday, and all of Wikipedia's users were protected by keeping it secret for a day while Wikipedia's servers were patched.

But what if someone *is* harmed by the delay? (1)

Anonymous Brave Guy (457657) | about 7 months ago | (#46788007)

Nobody was harmed by hearing about it on Tuesday rather than on Monday

Isn't that assumption where the whole argument for notifying selected parties in advance breaks down?

If you notify OpenSSL, and they push a patch out in the normal way, then anyone on the appropriate security mailing list has the chance to apply that patch immediately. Realistically, particularly for smaller organisations, it will often be applied when their distro's mirrors pick it up, but that was typically within a couple of hours for Heartbleed, as the security and backporting guys did a great job at basically all of the main distros on this one.

As soon as you start picking and choosing who else to tell first, yes, maybe you protect some large sites, but those large sites are run by large groups of people. For one thing, they probably have full time security staff who will get the notification as soon as it's published, understand its significance, and act on it immediately. For another thing, they probably have good automated deployment systems that will systematically patch all their affected servers reliably and quickly.

(I accept that this doesn't apply to those who have products with embedded networking software, like the Cisco and Juniper cases. But they can still issue patches to close the vulnerability quickly, and the kinds of people running high-end networking hardware that is accessible from outside a firewall are also probably going to apply their patches reasonably quickly.)

On the flip side, as long as you're giving advance warning to those high profile organisations, you're leaving everyone else unprotected. In this case, it appears that at least two different parties identified the vulnerability within a few days of each other, but the vulnerability had been present for much longer. There is no guarantee that others didn't already know about it and weren't already exploiting it. In general, though it may not apply in this specific case, if some common factor prompted the two contemporaneous discoveries, it might well be the case that additional, hostile parties have found it around the same time too.

In other words, you can't possibly know that nobody was harmed by hearing about it a day later. If a hostile party got hold of the vulnerability on the first day, maybe prompted by whatever also caused the benevolent parties to discover it or by some insider information, then they had a whole day to attack everyone who wasn't blessed with the early knowledge, instead of a couple of hours. This is not a good thing.

Re:We protected 1 billion people by notifying trus (1)

sabri (584428) | about 7 months ago | (#46788031)

This was handled similarly to a flaw I discovered, and I think it makes sense. Facebook, for example, has about a billion users. If you have a colleague you trust at Facebook, informing that one colleague can protect Facebook's billion users.

Ah yes, the duckface pictures of a bunch of teens are way more important than, let's say, millions of tax returns.

Re:We protected 1 billion people by notifying trus (0)

Anonymous Coward | about 7 months ago | (#46788441)

Nobody was harmed by hearing about it on Tuesday rather than on Monday

Are you absolutely sure about that? Completely positive?
I suspect more than zero people were, if not harmed, then definitely harmed further by such a situation.

Yes, you can easily marginalize the small percentage of people who were exploitable by black hats for 371 days instead of only 370 days as the case was, but just because 1/370 is a small fraction doesn't mean it is zero. People attacked using this exploit in the 370 days prior to Google announcing it would certainly disagree with you about the number of days' notice they got, and the 1/370 of people being exploited, either again or for the first time, on that last day would also disagree with you.

Remember, just because it was first discovered by the white hats at Google just this year does NOT mean this exploit wasn't being actively exploited in the underground for hundreds of days already - because it was.

Blame Game. (4, Insightful)

jellomizer (103300) | about 7 months ago | (#46787111)

That is the biggest problem. Rather than rewarding the people who fix the problem, we try to figure out who is to blame for every freaking thing.

Oh look, a flood hit the city unexpectedly; well, let's blame the mayor for not thinking about this unexpected incident.

Or a random guy blew up something; why didn't the CIA/NSA/FBI know he was going to do it...

We spend too much time trying to pin blame and not enough time trying to solve the problem.

Blame Game (0)

Anonymous Coward | about 7 months ago | (#46787373)

This is a common and well-known Americanism related to a complex interaction between hierarchical socialism, legality, and the fact that you Westerners seem to think the best way to repent for making mistakes is to dump it all on someone else, either by means of blame or legal charges (most commonly in America in the form of suing others). Good luck changing that.

In the East, people gain face by solving problems succinctly and gracefully, without making the kind of fuss Westerners do when something goes wrong, as opposed to finding ways to make others lose face because they made a mistake.

This could probably also be summed up as a comparison of authoritarianism (the West focuses on self and power) vs. communism (the East focuses on the big picture and the community).

Re:Blame Game. (1)

Fnord666 (889225) | about 7 months ago | (#46787751)

That is the biggest problem. Other then rewarding the people who fix the problem, we try to figure out who is to blame for every freaking thing.

"Fix the problem, not the blame."
Rising Sun (1993) - Capt. John Connor (Sean Connery)

Re:Blame Game. (-1)

Anonymous Coward | about 7 months ago | (#46788233)

Just do what the Democrats do, blame Bush for everything.

Re:Blame Game. (0)

andydouble07 (2344014) | about 7 months ago | (#46788305)

Better than blaming blaming Bush for everything for everything.

Not that good (-1)

Anonymous Coward | about 7 months ago | (#46786743)

Open source software is often made freely available at no cost to downloaders and embedders. There is little incentive for these users to pay anything for it, including for support, since the main reason to adopt this software is to not pay at all. The result is that there are few resources for testing or documenting the software, and no incentive for the developers to care about usage by others or to actively develop the software beyond their own use cases.

Further aggravating the issue is the claim by activists that the software code is reviewed by millions of people as it is freely available to anyone. The fallacy of this claim resides in the lack of interest of anyone to do this. Indeed, who would review other people's code for free or for fun? Vulnerabilities such as the Heartbleed bug are always found by using and probing the software, not by reviewing the code.

OpenSSL and the Heartbleed bug are the new poster child for the failed open source movement. No one cared, no one will care. Repeat expected.

Re:Not that good (1)

gweihir (88907) | about 7 months ago | (#46786775)

Mindless propaganda and, as it happens, untrue. See for example http://developers.slashdot.org... [slashdot.org]

But I guess proponents of closed source will always use any lie that is handy, just to propagate their ideology.

Re:Not that good (3, Insightful)

Opportunist (166417) | about 7 months ago | (#46786841)

Would you stake your life on closed source software not having any bugs that we just don't know about because it's closed source and hence can NOT be reviewed sensibly?

Closed source and open source share one problem: both can and will have bugs. Open source only has the advantage that the bugs will be found and published. In closed source, NDAs usually keep you from publishing anything you might come across, ensuring that knowledge about these bugs stays within certain groups that have a special interest in not only knowing about them but abusing them.

Re:Not that good (2)

jones_supa (887896) | about 7 months ago | (#46786975)

Open source only has the advantage that they will be found and published. In closed source, usually NDAs keep you from publishing anything you might come across, ensuring that knowledge about these bugs stays within certain groups that have a special interest in not only knowing about it but abusing them.

That still doesn't automatically mean that closed source fares worse in terms of bugs found. Companies often have quite bad-ass internal quality assurance measures. They have money to put into it, and it actually produces value for them. There is an incentive to do it properly. Of course the tools and methodologies vary from company to company. But let's take Microsoft: they have very rigorous code quality standards and very thorough code audits before anything gets out of the house.

Sure, we can have lots of eyeballs scanning open source code, but there is no guarantee that any quantifiable amount of review ever happens. That's really, really bad.

Re:Not that good (2)

Opportunist (166417) | about 7 months ago | (#46787677)

Sorry, but no. Just because it produces revenue for them doesn't mean they have an incentive to do it properly. They have an incentive to do it well enough that people buy it. That does not necessarily mean the software is of high quality.

What is necessary to this end is that the software appeals to decision makers. They are rarely, if ever, the same people who are in any way qualified to assess the technical quality of code.

For reference, see SAP.

Re:Not that good (1)

Zero__Kelvin (151819) | about 7 months ago | (#46787785)

". Companies on rare occassion have quite bad-ass internal quality assurance measures. "

FTFY (Have you ever worked at any actual companies?)

Needless subject (0)

Anonymous Coward | about 7 months ago | (#46787297)

The fact that OpenSSL is open source and had a trivial mistake that exposed the entirety of the Internet's encrypted traffic to easy eavesdropping for years should be a clue to how trustworthy open source software can be.

Re:Needless subject (3, Insightful)

Opportunist (166417) | about 7 months ago | (#46787699)

The whole point of OSS is that I do not need to trust it. I can review it if I please.

Trustworthiness is only an issue with closed source, because there all I can really do is trust its maker.

False sense of security (1)

Anonymous Brave Guy (457657) | about 7 months ago | (#46788067)

The whole point of OSS is that I do not need to trust it. I can review it if I please.

But you didn't review it and find the vulnerability, did you?

And apparently, despite the significance and widespread use of this particular piece of OSS, for a long time no-one else did either, or at least no-one who's on our side did.

Your argument is based on theory. The AC's point is based on pragmatism. It's potentially an advantage that OSS can be reviewed by anyone, but a lot of the time that gives a false sense of security. What matters isn't what could happen, it's what actually does happen.

Re:False sense of security (2)

Opportunist (166417) | about 7 months ago | (#46788503)

What I really don't like about the whole statement behind it is the implied assumption that closed source offers any kind of better protection.

You know the main difference between an OSS audit and a CSS audit? With CSS, when I find something, I can't go "hey, psst, take a look at $code. Maybe you see something interesting..." to you, because someone in a badly fitting suit tells me to shut up about it.

Re:Not that good (3, Interesting)

Nemesisghost (1720424) | about 7 months ago | (#46786913)

Open source software is often made freely available at no costs to downloaders and embedders. There is little incentive for these users to pay anything for it, including for support, since the main reason to adopt this software is to not pay at all.

Well, one could hope that issues like this will prompt those selfish companies either to start developing their own software and quit relying on the freely given work of others, or to start supporting those who are building the critical software components. My personal opinion is that if a company is going to utilize a FOSS project and do self-support, it should provide some sort of resources back to the project.

Further aggravating the issue is the claim by activists that the software code is reviewed by millions of people as it is freely available to anyone. The fallacy of this claim resides in the lack of interest of anyone to do this. Indeed, who would review other people's code for free or for fun?

I happen to know several people who like reviewing and examining other people's code, especially complex code like what one would find in OpenSSL. These are the same type of people who just so happen to be the ones fixing a lot of the bugs you run into in OSS projects. It is people like that who make OSS projects succeed. I mean, Linus Torvalds wrote Linux as a hobby project and continued to review people's additions as part of that hobby (now he gets paid to do what he was doing for fun). I personally don't do it because my free-time interests lie elsewhere, but I enjoy software development enough that I would without those other distractions. So I'd say your argument is invalid.

Re:Not that good (3, Interesting)

Tom (822) | about 7 months ago | (#46786945)

Several fundamental mistakes in there.

First, OpenSSL is not typical of Free Software. Cryptography is always hard, and unlike, say, an office suite, it will often break spectacularly if a small part is wrong. While the bug is serious and all, it's not typical. The vast majority of bugs in Free Software are orders of magnitude less serious.

Second, yes, it is true that the notion that anyone can review the source code doesn't mean anyone will actually do it. However, no matter how you look at it, the number of people who actually do will always be equal to or higher than for closed source software.

Third, the major flagships of Free Software are sometimes, but not always, picked for price. When you're a Fortune 500 company, you don't need to choose Apache to save some bucks. A site license of almost any software will be a negligible part of your operating budget.

And, 3b or so, contrary to what you claim, quite a few companies contribute considerable amounts of money to Free Software projects, especially in the form of paid-for support or membership in things like the Apache Foundation. That's because they realize this is much cheaper than having to maintain comparable software on their own.

What (0)

Anonymous Coward | about 7 months ago | (#46787437)

However, no matter how you look at it, the number of people who actually do will always be equal or higher than for closed source software.

Upon what data is this assumption based? How many people have reviewed the code behind Microsoft's BitLocker vs. how many have reviewed the code for TrueCrypt, for example? The real question is how many QUALIFIED people are reviewing the code. In the case of OpenSSL it appears the answer was ONE (and they missed a trivial mistake).

Re:Not that good (1)

Anonymous Brave Guy (457657) | about 7 months ago | (#46788117)

However, no matter how you look at it, the number of people who actually do will always be equal or higher than for closed source software.

Why? I see little evidence that this is happening in general.

Most established OSS projects seem to require no more than one or two reviewers to approve a patch before it goes in, and then there is no guarantee that anyone will ever look at that code again later.

How does that guarantee that more experts will review a given piece of security code than in a proprietary, closed-source, locked-up development organisation that also has mandatory code reviews?

Tom = multiple /. sockpuppet using scum (-1)

Anonymous Coward | about 7 months ago | (#46788511)

Let's let TOM speak shall we:

"I'm having great conversations on this site with one of my alias accounts" - by Tom (822) on Monday April 07, 2014 @02:29PM (#46686259) Homepage

FROM -> http://slashdot.org/comments.p... [slashdot.org]

APK

P.S.=> Tom *tried* to libel me & failed after I destroyed him in a technical debate on hosts files... result?

Tom ended up "eating his words" here http://slashdot.org/comments.p... [slashdot.org] spiced with "the bitter taste of SELF-defeat" + HIS FOOT IN HIS MOUTH

... apk

Why free and fun? I review FOSS for a living. (3, Informative)

raymorris (2726007) | about 7 months ago | (#46787519)

> Indeed, who would review other people's code for free or for fun?

Some people do, of course. I have, specifically for security issues, because that's a major resume point in the security world - having actually found and fixed real-world security issues.

99% of the time, I'm being paid to review and improve open source code. All of those companies that use open source, including Google, have a vested interest in making sure that the code they use is good. Since it's open source, the Google techs can actually dig into the code and find issues like this, then fix it, just like they did in this case. They didn't do it for free and for fun, they did it because Google relies on OpenSSL.

My employer also relies on OSS. My job is to administer, maintain, and improve the OSS software we use. I've found and fixed security issues. Not for free and for fun, but because we want our systems to be secure, and having the source allows me to do that.

When I craft an improvement, at LEAST three people have to look at it before it's committed upstream. Typically, five or six people will comment on it and suggest improvements or state their approval before it's finalized.

Re:Not that good (1)

suutar (1860506) | about 7 months ago | (#46787859)

Indeed, who would review other people's code for free or for fun?

Well, right offhand, Coverity will [coverity.com]. They're not perfect, of course, but they're pretty good. Their system didn't flag Heartbleed, but Heartbleed showed them how they could add a new test that would have, and that test has reportedly found other possible issues [coverity.com], which are being investigated and will either be fixed or found to be false positives and used to refine the new test. Either way, not a bad thing.

WTF? (5, Insightful)

gweihir (88907) | about 7 months ago | (#46786757)

The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)

The other thing is that as soon as a patch is out, the problem needs to be disclosed immediately by the manufacturer to everybody (just saying "fixed critical security bug" is fine), as the black-hats watch patches and will start exploiting very soon after.

All this is well known. Why is this even being discussed? Are people so terminally stupid that they need to tell some "buddies"? Nobody who gives out advance warnings to anybody besides the manufacturer deserves to be in the security industry, as they either do not get it at all or do not care about security in the first place.

Re:WTF? (4, Interesting)

Tom (822) | about 7 months ago | (#46786881)

The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)

It's not about leaking. The reason I'm not alone in the security community in raging against this "responsible disclosure" bullshit is not that we fear leaks, but that we know most of these exploits are already in the wild by the time someone on the whitehat side discovers them.

Every day you delay the public announcements is another day that servers are being broken into.

Re:WTF? (4, Insightful)

Anonymous Coward | about 7 months ago | (#46786933)

If no fix is available yet, they're still being broken into - but you've just added the thousands of hackers who *didn't* know about it to the list of those exploiting it.

Re:WTF? (1)

jones_supa (887896) | about 7 months ago | (#46787093)

Exactly this.

Re:WTF? (1)

Ardyvee (2447206) | about 7 months ago | (#46788009)

Couldn't sysadmins disable the heartbeat feature as a preventive measure while the patch was prepared? Please note that I'm rather ignorant of all the things involved, but AFAIK the feature in question in this very recent case was not critical and could be disabled with minimal damage to the functioning of the service.

I agree with you, though, that the developers should be informed of it first. But I also think it depends on the issue. If you tell me that feature X in software A has a security issue and I can live without feature X while the devs fix it, I think I would rather know so I can disable it instead of waiting for a patch. Just saying.
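
For what it's worth, the mitigation described above was a compile-time option: as I recall, OpenSSL 1.0.1 wraps its heartbeat handling in an OPENSSL_NO_HEARTBEATS preprocessor guard, and configuring the build with the "no-heartbeats" option defines that macro, dropping the vulnerable code path while the rest of TLS keeps working. The tiny stand-alone C snippet below only illustrates that style of guard; it is not OpenSSL source, and the option and macro names are quoted from memory rather than re-verified.

    /* Minimal stand-alone illustration of a compile-time feature guard.
     * OpenSSL 1.0.1 (as I recall) wraps its heartbeat code in
     * "#ifndef OPENSSL_NO_HEARTBEATS", so building with that macro
     * defined removes the heartbeat code path entirely. */
    #include <stdio.h>

    int main(void)
    {
    #ifndef OPENSSL_NO_HEARTBEATS
        puts("heartbeat support compiled in (vulnerable before the fix)");
    #else
        puts("heartbeat support compiled out (heartbeat code path absent)");
    #endif
        return 0;
    }

Compile it with and without -DOPENSSL_NO_HEARTBEATS to see both behaviours; the real library works the same way, just with far more code behind the guard.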

Re:WTF? (1)

fustakrakich (1673220) | about 7 months ago | (#46787559)

That's just too bad. Public announcement is the way to fix the problem as fast as possible.

Re:WTF? (4, Interesting)

medv4380 (1604309) | about 7 months ago | (#46787649)

Not to sound like too much of a conspiracy nut, but Heartbleed did look like a deliberate exploit to some people, and still does to others. If it had been, and had been put there by someone at OpenSSL, they are the last ones you actually want to inform until you have already patched it yourself. From the timeline, that's what Google did, and then it tapped the shoulders of its closest friends so they could either patch it or disable the heartbeat feature, as CloudFlare did. I agree that OpenSSL should have been informed first, but what do you do when you suspect the proper channels are the ones who put it there in the first place?

Re:WTF? (1)

Anonymous Coward | about 7 months ago | (#46787957)

If no fix is available yet

There's always a fix.

It might involve yanking a power cord, but it will absolutely guarantee nobody is breaking in.

Re:WTF? (1)

Wycliffe (116160) | about 7 months ago | (#46786973)

It's not about leaking. The reason I'm not alone in the security community to rage against this "responsible disclosure" bullshit is not that we fear leaks, but that we know most of the exploits are already in the wild by the time someone on the whitehat side discovers it.

Every day you delay the public announcements is another day that servers are being broken into.

So are you going to take your server offline until there is a patch? Or are you going to write a patch yourself? I think giving the software vendor 2 weeks to fix the bug (1 week if it's trivial or you provide the patch) is reasonable, as 99% of people are not going to be able to do anything about it until there is a patch anyway. As soon as the patch is available, it should be publicly announced.

Re:WTF? (0)

Anonymous Coward | about 7 months ago | (#46787963)

So are you going to take your server offline until there is a patch? Or are you going to write a patch yourself?

Or, option (3): disable Heartbeat. Then again, if you're a bank and option (3) isn't available, then it makes a lot of sense to choose option (1) instead of crossing your fingers that an active exploit isn't being used against your server(s).

I think giving the software vendor 2 weeks to fix the bug (1 week if it's trivial or you provide the patch) is reasonable, as 99% of people are not going to be able to do anything about it until there is a patch anyway. As soon as the patch is available, it should be publicly announced.

What's reasonable for you may not be reasonable for me. If there's an older version to switch to or a 100% mitigation technique, why should I or lots of others wait two weeks? Because some people may not be aware of the announcement and/or they'll continue to run a vulnerable system? Well, that happens even after the patch is released, because lots of people aren't anal retentive about patching and updating, it takes time to verify any patch that's released regardless*, some software simply can't be updated, and plenty just don't care enough to do anything about it.

Really, the logic behind your argument says more about (1) responsible disclosure should be prompt and well published to minimize the delay before people can act and (2) responsible disclosure should include sufficient mitigation techniques, if possible, to address the issue. After all, a patch may be nothing more than a mitigation technique. So, why place such emphasis on one sort of patch over another?

*And this is yet another major argument for early announcement: if it takes up to two weeks to verify and deploy a mitigation technique, then by the time an "official" patch was released, a lot of companies would already have made the problem a non-issue. I mean, the whole point of giving the vendor so many weeks is precisely that there is a risk of unintended consequences, and companies who want to deploy a patch have to verify it against their own environments, which the vendor simply can't do. It's not that writing the patch takes anywhere near two weeks. So, instead of having to rush through verification by the vendor and then by the company, a possibly much simpler mitigation technique can be used and the patch can be thoroughly tested as part of the standard update cycle. After all, the end objective on the end-user side is to not have the bug actually exploited; the how and why are much more moot, just like the ideology of full disclosure for the sake of full disclosure doesn't seem to trump your pragmatic concerns about actual exploits.

Re:WTF? (1)

Tom (822) | about 7 months ago | (#46788505)

So are you going to take your server offline until there is a patch?

Depends, but yes for many non-essential services, that is indeed an option. Imagine your actual web service doesn't use SSL, but your admin backend does. It's used only by employees on the road, because internal employees access it through the internal network.

Sure you can turn that off for a week. It's a bit of trouble, but much better than leaking all your data.

Or if it's not about your web service, but about that SSL-secured VPN access to your external network? If you can live without home office for a week, you can turn that off and wait for the patch, yes.

Most importantly, who are you to decide that everyone should wait for a patch instead of giving people the opportunity to deploy such mitigating measures?

I think giving the software vendor 2 weeks to fix the bug (...) is reasonable

People don't learn.

We used to do that.

Full disclosure evolved primarily as a countermeasure because vendors took those grace periods not as a "we need to get this fixed in that time", but as a "cool, we can sit on our arses doing nothing for another two weeks".

Re:WTF? (1)

drinkypoo (153816) | about 7 months ago | (#46787295)

Every day you delay the public announcements is another day that servers are being broken into.

Yes, but it's also easier to make use of the exploit information to produce an exploit than a patch. That's why it's responsible to report the bug to the maintainers before announcing it publicly. But your argument is the reason why you don't wait indefinitely for the maintainers to kick out a patch, either.

As usual, the answer lies somewhere between extremes.

Re:WTF? (1)

Tom (822) | about 7 months ago | (#46788449)

As usual, the answer lies somewhere between extremes.

My preferred choice of being left alone or being beaten to a pulp is being left alone, not some compromise in the middle, thank you. Just because there are two opposing positions doesn't mean that the answer lies in the middle.

I've given more extensive reasoning elsewhere, but it boils down to proponents of "responsible disclosure" conveniently forgetting to consider that every delay also helps the bad guys who are in possession of the exploit. Not only can they use it for longer, they can also use it for longer against targets who don't know they are vulnerable.

Many, many companies run non-essential services that they would not hesitate to shut down for a few days if they knew that there's an exploit that endangers their internal systems. Other companies could deploy mitigating measures while waiting for the patch.

Don't pretend sysadmins are powerlessly waiting with big eyes for the almighty vendor to issue a patch.


Re:WTF? (2)

paskie (539112) | about 7 months ago | (#46786925)

"Very well known?" This is very much *not* the way how for example many security bugs in linux distributions are handled (http://oss-security.openwall.org/wiki/mailing-lists/distros). Gradual disclosure along a well-defined timeline limits damage of exposure to blackhats and at the same time allows enough reaction time to prepare and push updates to the user. So typically, once the software vendor has fixed the issue, they would notify distributions, which would be given some time to prepare and test an updated package, then the update is pushed to users at a final disclosure date.

For a bug of such severity, I'd agree that the embargo time of 7-14 days used by distros@ is way too long. But a 12-24 hour advance announcement would be quite reasonable. Large website operations typically may have suitable staffing to be able to bring a specific update for a critical bug (similar in potential damages to a service outage) online within 6-12 hours, so a next step would be passing the information from distributions to these users (e.g. via a support contract with distros@-subscribed vendor).

In this timeframe, you have a good chance to prepare updated packages for major archs and do an emergency rollout. At the same time, even if there is a leak, the leak needs to propagate to skilled blackhat developers, they need to develop an exploit and this exploit needs to get propagated to people who would deploy it in the remaining time frame.

Re:WTF? (1)

WD (96061) | about 7 months ago | (#46786939)

"High risk of leaking?" And what would the consequences of such a leak be? The affected vendors are only slightly better off than they were with how it actually turned out with Heartbleed?

When Heartbleed was disclosed, virtually no affected vendor (e.g., Ubuntu, Cisco, Juniper, etc.) had an update available. So there was a window where the vulnerability was public, but nobody had official updates from their vendor that would protect them. You are claiming that this is better than a coordinated release, where there would have been actual updates available to install?

It's not "buddies" that is being discussed here. It's the people producing the software that is affected!

Re:WTF? (1)

Ardyvee (2447206) | about 7 months ago | (#46788015)

But isn't the Heartbeat feature a part of the software that is optional and can be disabled?

Re:WTF? (1)

bill_mcgonigle (4333) | about 7 months ago | (#46786947)

There's no one-size-fits-all solution. I've made the argument for informed disclosure [bfccomputing.com] here in the past, but in this case it probably wouldn't work. The DTLS code is so small and self-contained, and the code so obvious to an auditor, that just saying there's an exploit in DTLS, or telling people to compile without heartbeat, is probably enough to give the blackhats a running start. But there are other situations where informed disclosure is better than responsible disclosure.

Did Google do the right thing here? I'm not sure, but it's not completely clear that they didn't. There are several factors that bridge the gap between theoretical ideal and what can work in every situation in the real world.

Re:WTF? (0)

Anonymous Coward | about 7 months ago | (#46787095)

This bug does represent an exposure, but on some Linux systems there is a bug-trapping app suite (rpm -qa "abrt*") which will dump a buggy app and send its dumps up to the mothership. Consider the Adobe Flash player, involved in many crash/cash transactions and never stable. I have seen Android "antivirus" apps which only need permissions for your phone list and network access. Wonder what they do? The credit card industry is boinked almost daily now and nothing changes. NSA pirating data feeds... backdoors in principal encryption schemes... The Heartbleed bug needs to be fixed, but the real work is still very much out there.

Disclosure!? Fix it, damn it! (1)

Anonymous Coward | about 7 months ago | (#46786781)

These guys are apparently competent enough to find a bug like this. The fix is damned near trivial. So "disclose" it to OpenSSL, accompanied by a patch, and let OpenSSL do the rest.
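
The "damned near trivial" nature of both the bug and the fix is easiest to see in miniature. The toy C program below is not the OpenSSL code (the struct, field, and function names are invented for illustration); it is a self-contained sketch of the bug class behind Heartbleed, assuming the usual description of the flaw: the heartbeat handler echoed back as many bytes as the peer's length field claimed, without checking that claim against the bytes actually received.

    #include <stdio.h>
    #include <string.h>

    /* A pretend heartbeat request: a claimed payload length followed by
     * the payload bytes themselves (hypothetical layout, for illustration). */
    struct hb_request {
        unsigned short claimed_len;   /* what the peer SAYS it sent      */
        unsigned char  payload[16];   /* what the peer ACTUALLY sent     */
    };

    /* Buggy echo: trusts claimed_len, so a lying peer makes us read past
     * the payload and return adjacent memory (a deliberate out-of-bounds
     * read, used here only to demonstrate the leak). */
    static size_t echo_buggy(const struct hb_request *req,
                             unsigned char *out, size_t out_cap)
    {
        size_t n = req->claimed_len;       /* no sanity check             */
        if (n > out_cap)
            n = out_cap;
        memcpy(out, req->payload, n);      /* over-reads past payload[]   */
        return n;
    }

    /* Fixed echo: never return more than was actually received. */
    static size_t echo_fixed(const struct hb_request *req,
                             size_t real_payload_len,
                             unsigned char *out, size_t out_cap)
    {
        if (req->claimed_len > real_payload_len)
            return 0;                      /* silently discard bogus requests */
        size_t n = req->claimed_len;
        if (n > out_cap)
            n = out_cap;
        memcpy(out, req->payload, n);
        return n;
    }

    int main(void)
    {
        struct {
            struct hb_request req;
            char secret[32];               /* stands in for keys, passwords */
        } mem;

        memset(&mem, 0, sizeof mem);
        strcpy(mem.secret, "private-key-material");
        memcpy(mem.req.payload, "hello", 5);   /* peer really sent 5 bytes */
        mem.req.claimed_len = 48;              /* ...but claims 48         */

        unsigned char out[64];
        printf("buggy echo returned %zu bytes (only 5 were really sent)\n",
               echo_buggy(&mem.req, out, sizeof out));
        printf("fixed echo returned %zu bytes\n",
               echo_fixed(&mem.req, 5, out, sizeof out));
        return 0;
    }

In this sketch the whole fix is the single length check at the top of echo_fixed(), which is why it's plausible that a patch could have accompanied the report to OpenSSL.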

The disclosure was fine. (0)

lasermike026 (528051) | about 7 months ago | (#46786789)

It lit a fire under everyone's butts, and it got fixed and deployed in hours. I see nothing wrong with this. Maybe it's time to crank up the heat.

As bad ideas go... (3, Insightful)

ClayDowling (629804) | about 7 months ago | (#46786793)

This notion ranks right up there. Manufacturer was told. Everybody else was then told. That's how it's supposed to work. This notion of "let's just tell our close friends and leave everybody else in the dark" is silly. You'd only wind up leaving most people open to exploit, because if you think your secret squirrel society of researchers doesn't have leaks, you're deluding yourself.

Yes, that was handled badly (0, Insightful)

Anonymous Coward | about 7 months ago | (#46786809)

There should have been a public advisory telling everybody with an OpenSSL based server to shut down the server, wait for an update, install the update and only then put the server online again. The biggest mistake was to keep vulnerable servers running for even a short while after the vulnerability was published.

Re:Yes, that was handled badly (0)

Anonymous Coward | about 7 months ago | (#46787195)

Or just disable Heartbeat, I mean that's where the vulnerability was.

Re:Yes, that was handled badly (1)

Unordained (262962) | about 7 months ago | (#46787339)

Yes. You don't have to notify people of the exact flaw and how it can be exploited, to help them protect themselves while waiting for a patch. The immediate response should have been to tell people to disable heartbeat, or barring that, shutdown their affected systems. Yes, it would suck, but since you don't know for sure that the exploit is known only to the researchers, you should assume it's in the wild, and this is the only safe thing to do in the interim. (Could this be used as a form of DoS? Sure, if sysadmins get used to wholly shutting down services anytime there's a warning from anyone, or if the partial shutdown of one service in fact makes another service less secure. TBD.)

All this discussion of disclosing to OpenSSL first, letting them patch, giving distros time to get updates ready ... ignores that the moment OpenSSL goes to fix the bug, the patches are public. Attackers waiting to see a flaw in OpenSSL would be monitoring version-control regularly, to see if any given patch looked interesting. While your distros are being quietly told to get updates ready, the attackers are analyzing the patch to see what kind of bug you fixed, knowing that, because there's radio silence, sites are vulnerable.

Making a big stink about it is the only way to make sure sites actually get updated, anyway. Distros and whatnot having updates available does not get those updates installed. We don't have auto-update on any servers. As was discussed recently, many sysadmins have to submit patches to change-control boards for approval, and if there's not a furor over the issue, there's no emergency approval.

So:
    (a) a blitz identifying only the versions affected and what to do about it,
    (b) a patch release sufficiently delayed to give end-users a chance to shutdown affected services,
    (c) a blitz about the availability of the update, which people will care about more because they've already had to take action to protect themselves, and are possibly sitting in a shutdown state.

Re:Yes, that was handled badly (0)

Anonymous Coward | about 7 months ago | (#46787409)

No, you cannot start by telling them to disable heartbeat. The heartbeat function is the vulnerable part. By revealing the exact location of the vulnerability without making sure that everybody had a chance to shut down servers beforehand, you give attackers a window of opportunity. That's why you wouldn't tell them which versions are vulnerable either. A lot of secrets have been revealed just because vulnerable servers were kept online while the admins were waiting for patches or an opportune moment to patch. Some of those secrets may have given an attacker the ability to decrypt years of highly sensitive recorded traffic. Making sure that servers with OpenSSL are offline when the news of the exact vulnerability becomes public should have been the highest priority. It wasn't, and that made things a lot worse.

Re:Yes, that was handled badly (1)

Unordained (262962) | about 7 months ago | (#46787585)

So does that mean you're suggesting the safest course would be to:
    (a) tell everyone to shutdown ALL OpenSSL-backed services, urgently.
    (b) after 1 day, tell everyone they can bring their 0.9.8 services back online.
    (c) after 1 day, tell the remainder that it's okay to come back online, with heartbeat disabled.
    (d) have the patch ready for distribution around this time.
?

I agree with the caution. TBD:
    (1) there's the risk that telling admins to shut everything down, all versions, without telling them why, will cause them to ignore the notice
    (2) while these services are shut down, what will admins do instead? Will they use insecure services because "the show must go on"?

Re:Yes, that was handled badly (1)

Anonymous Coward | about 7 months ago | (#46787875)

No need to stagger the disclosures. The important bit is to tell everyone that there is a trivially exploitable critical vulnerability in OpenSSL that leaves no trace in logs, so shut down the server and await further information. 24 hours later you tell everybody that the heartbeat handling is the vulnerable part, and this means that 0.9.8 servers can be brought back up as is, everybody else must either patch or disable heartbeat before bringing the servers back online. Have the patches ready as soon as possible. Have anybody who suggests that the show must go on and SSL servers can't be shut down even if there is a critical vulnerability fired.

Web sites? End users? (0)

Anonymous Coward | about 7 months ago | (#46786819)

And how do you differentiate between "web sites" and "end users"? Why should Facebook be treated differently than me?

Re:Web sites? End users? (0)

Anonymous Coward | about 7 months ago | (#46786889)

Really? Well, you I don't know. So you get the benefit of the doubt. You are probably fine. But Facebook? I'd kick them in the teeth if I could. (Since in the US our SCOTUS claims that Facebook is a person). So sure, I will treat Facebook differently than you.


CISSP opinion: the patch proves Google f***ed up (1)

xxxJonBoyxxx (565205) | about 7 months ago | (#46786835)

>> Google notified OpenSSL about the bug on April 1 in the US – at least 11 days after discovering it.

"OK, maybe it was caught up in legal. Suits at large corporations can take a while."

>> Google would not reveal the exact date it found the bug, but logs show it created a patch on March 21,

"On second thought, if the geeks on the ground had the authority to patch and roll to production, then why the finger to the Open Source community, Google?"

Re:CISSP opinion: the patch proves Google f***ed u (-1)

Anonymous Coward | about 7 months ago | (#46786893)

Yeah, if they knew it was a problem and patched it, then they should have submitted a patch to OpenSSL as soon as possible. I assume one of the engineers involved wanted to sell the vulnerability.

Issue? (5, Insightful)

silanea (1241518) | about 7 months ago | (#46786839)

What exactly is the issue here? Maybe I misread TFS and the linked articles, but as I understand it the chief complaint - apart from Google's delay in reporting to OpenSSL - is that some large commercial entities did not receive a notification before public disclosure. I did not dig all too deep into the whole issue, but as far as I can tell OpenSSL issued their advisory together with a patched version. What more do they expect? And why should "Cisco[,] Juniper[,] Amazon Web Services, Twitter, Yahoo, Tumblr and GoDaddy" get a heads-up on the public disclosure? I did not get a heads-up either. Neither did the dozens or so websites not named above that I use. Neither did the governmental agency I serve with. Nor the bank whose online-banking portal I use. Are we all second-class citizens? Does our security matter less simply because we provide services to fewer people, or bring lower or no value to the exchange?

A bug was reported, a fix was issued, recommendations for threat mitigation were published. There will need to be consequences for the FLOSS development model to reduce the risk for future issues of the sort, but beyond that I do not quite understand the fuss. Can someone enlighten me please?

Re:Issue? (0)

Anonymous Coward | about 7 months ago | (#46787001)

Does our security matter less simply because we provide services to fewer people

Yes. That is all.

Was that so hard to understand?

Re:Issue? (0)

Anonymous Coward | about 7 months ago | (#46787003)

but as I understand the chief complaint - apart from Google's delay in reporting to OpenSSL - is that some large commercial entities did not receive a notification before public disclosure

I don't really care about those commercial entities. For me it's all about OpenSSL knowing. The fact that they and the big distros didn't have a new package ready to roll the second this went public is everything I need to know to call this "disclosure" a clusterfuck.

Re:Issue? (0)

Anonymous Coward | about 7 months ago | (#46787221)

OpenSSL should have been the first party to receive a report of the vulnerability; that's where the patch needs to go to fix it, and they could have yanked any binaries that default to using Heartbeat until it was fixed.

This is pretty much irresponsible disclosure: notify your friends, but not the people who actually produce the software, and to hell with anybody else who integrates the software in their product. This is one of the downsides of OSS: since people can patch things like this on their own, they can patch their shit silently before notifying people that they need a patch for it.

wtf ? (3, Interesting)

Tom (822) | about 7 months ago | (#46786845)

IT security industry experts are beginning to turn on Google and OpenSSL, questioning whether the Heartbleed bug was disclosed 'responsibly.'

Are you fucking kidding me? What kind of so-called "experts" are these morons?

Newsflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly. In fact, quite a few exploits are found because - drumroll - they are actively being exploited in the wild and someone's honeypot is hit or a forensic analysis turns them up.

Unless you have really, really good reasons to assume that this bug is unknown even to people whose day-to-day business is to find these kinds of bugs, there is nothing "responsible" in delaying disclosure. So what if a few script-kiddies can now rush a script and do some shit? Every day you wait is one day less for the script kiddies, but one day more for the real criminals.

Stop living in la-la-land or in 1985. The evil people on the Internet aren't curious teenagers anymore, but large-scale organized crime. If you think they need to read advisories to find exploits, you're living under a rock.

Re:wtf ? (0)

Anonymous Coward | about 7 months ago | (#46786993)

Real criminals and governments, that is. But I repeat myself.

Re:wtf ? (2)

jones_supa (887896) | about 7 months ago | (#46786995)

Newsflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly. In fact, quite a few exploits are found because - drumroll - they are actively being exploited in the wild and someone's honeypot is hit or a forensic analysis turns them up.

It's not that black and white. You expose the vulnerability to even more crackers if you go shouting it around as was done here.

Re:wtf ? (3, Insightful)

MrL0G1C (867445) | about 7 months ago | (#46787083)

As an end-user I'm glad it was shouted about because it gave me the chance to check that any software that could affect me financially was updated or invulnerable.

So, can you tell me why I shouldn't be notified?

Re:wtf ? (1)

jones_supa (887896) | about 7 months ago | (#46787257)

Because the vulnerability was on the server side.

Re:wtf ? (0)

Anonymous Coward | about 7 months ago | (#46787333)

The vulnerability was in the server side of all servers using OpenSSL*. That includes an enormous number of home servers in addition to large commercial ones. Should their admins not be notified?

* Not to mention all the end-user products using OpenSSL

Re:wtf ? (1)

gman003 (1693318) | about 7 months ago | (#46787267)

Yes, which is why the best compromise is a private disclosure to whoever can *fix* the bug, followed by a public announcement alongside the fixed release. That limits the disclosure to the minimum necessary while the flaw is unfixed.

Re:wtf ? (1)

Tom (822) | about 7 months ago | (#46788393)

There's a black market where you can buy and sell 0-days.

Sure you give it to more people (and for free) than before. But the really dangerous people are more likely than not to already have it.

"the underground" (0)

Anonymous Coward | about 7 months ago | (#46787425)

Are you fucking kidding me? What kind of so-called "experts" are these morons?

Newflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly.

But "the underground" is not some monolithic entity. It's spread out over the entire planet and over tens (hundreds?) of thousands of people. Some may know about a particular exploit and some may not. Once you announce it however, everyone does know it.

So by allowing the vendor 1-2 weeks to issue a patch, you keep the exploit from being used by 100% of "the underground" and limit it to 'only' 0.001/1/2/5/7/10/33/whatever percent. If there's only a dozen guys in the Republic of Elbonia who know it, that's different from the entire Russian mafia and/or the Chinese cyber-army knowing.

Scale matters.

Re:"the underground" (1)

Tom (822) | about 7 months ago | (#46788371)

That is true. However, you also need to take a few other things into account. I'll not go into detail, I think everyone has enough knowledge and imagination to fill in the blanks:

  • There is an actual black market for exploits where they are bought and sold.
  • Not announcing a weakness withholds the information not just from the bad guys, but also from sysadmins, preventing mitigating measures and proper risk awareness.
  • We have over 20 years of history proving that vendors regularly move slower, or not at all, until a weakness is making headlines.
  • There have been many cases where several researchers had partial information about an exploit, and only once combined was the true impact known. For example, one researcher might know about the problem and how to exploit it, but think it can't be leveraged into a compromise. Another might know about the potential compromise, but think it can't be triggered in a real-world scenario.

Despite all the theoretical arguments seemingly in favour, security through obscurity does not work and we've known that for like forever.


are we seriously blaming google (1)

Anonymous Coward | about 7 months ago | (#46786887)

and not NSA who found the bug 4 years ago when the bug was first introduced?

Re:are we seriously blaming google (4, Insightful)

xxxJonBoyxxx (565205) | about 7 months ago | (#46786979)

>> are we seriously blaming google and not NSA who found the bug 4 years ago when the bug was first introduced?

Yes. The NSA is the US gov's lead black hat. Google's an advertising company that depends on people trusting the Internet for information and commerce. I'd expect the NSA to hoard information to assist their black-hatting, and I'd expect Google to quickly share anything they know so security vulnerabilities can be patched and people don't lose faith in the Internet*.

* = (Seriously, when people have asked me what to do about Heartbleed, I've said "don't buy anything you don't need, and try to avoid paying any bills online or doing any online checking for a week or two - then change your password as soon as you sign on.")

Re:are we seriously blaming google (0)

Anonymous Coward | about 7 months ago | (#46787029)

Indeed, the really tricky part about OSS is that all vulnerabilities are handed on a silver platter to NSA, without them ever telling us what they might have found.

I wouldn't be surprised if NSA knows more backdoors to OSS stuff than to closed source software. It takes them much less effort to analyze the source than machine language code.

One Cyberneticist's Ethics (2)

VortexCortex (1117377) | about 7 months ago | (#46787069)

Once again the evil of Information Disparity rears its ugly head. To maximize freedom and equality, entities must be able to decide and act by sensing the true state of the universe, so knowledge should be propagated at maximum speed to all; any rule to the contrary goes against the nature of the universe itself.

They who seek to manipulate the flow of information wield the oppression of enforced ignorance against others, whatever their motive for doing so. The delayed disclosure of this bug would not change the required course of action. The keys will need to be replaced anyway. We have no idea whether they were stolen or not. We don't know who else knew about this exploit. Responsible disclosure is essentially lying by omission to the world. That is evil, as it stems from the root of all evil: Information Disparity. The sooner one can patch one's systems, the better. I run my own servers. Responsible disclosure would allow others to become more aware than I am. Why should I trust them not to exploit me if I am their competitor or vocal opponent? No one should decide who should be their equals.

Fools. Don't you see? Responsible disclosure is the first step down a dangerous path whereby freely sharing important information can be outlawed. The next step is legislation to penalize the propagators of "dangerous" information, whatever that means. A few steps later, "dangerous" software and algorithms are outlawed for national security, of course. Continue down this path and soon only certain certified, government-approved individuals will be granted a license to craft certain kinds of software, and ultimately all computation and information propagation will be firmly controlled by the powerful and corrupt. For fear of them taking a mile, I would rather not give one inch. Folks are already in jail for accidentally changing a munged URL and discovering security flaws. What idiot wants to live in a world where even such "security research" done offline is made illegal? That is where Responsible Disclosure attempts to take us.

Just as I would assume others innocent unless proven guilty of harm in order to ensure freedom, even though it means some crimes will go unpunished, I would accept that some information will make our lives harder, and that some data may even give the malicious a temporary unfair advantage over us; the alternative is simply to allow even fewer potentially malicious actors an even greater unfair advantage over even more of us. I would rather know that my Windows box is vulnerable and possibly put a filter in my IDS than trust Microsoft to fix things, or excuse the NSA's purchasing of black-market exploits without disclosing them to its own citizens. I would rather know that OpenSSL may leak my information and immediately recompile it without the heartbeat option than trust strangers to do what's best for me, assuming they don't decide to do something worse instead.

There is no such thing as unique genius. Einstein, Feynman, and Hawking did not live in a vacuum; removed from society all their lives, they'd not have made their discoveries. Others invariably pick up from the same available starting points and solve the same problems. Without Edison we would still have electricity and the light bulb. Without Alexander Bell we would have had to wait an hour for the next telephone to enter the patent office. Whoever discovered this bug and came forward has no proof that others did not already know of its existence.

Just as the government fosters secrecy of patent applications and reserves the right to exclusive optioning of newly patented technology, if Google had been required to keep the exploit secret from everyone except government agencies, we might never have found out about Heartbleed in the first place. Our ignorance enforced, we would have no choice but to keep our systems vulnerable. Anyone who thinks hanging our heads in the noose of responsible disclosure is a good idea is a damned fool.

Re:One Cyberneticist's Ethics (0)

Anonymous Coward | about 7 months ago | (#46787331)

Did you seriously just call yourself a "Cyberneticist"?

Public-facing disclosure (1)

simplypeachy (706253) | about 7 months ago | (#46787107)

The real scandal is how poorly organisations are informing their users about how they are affected and what those users should do. Many big-name companies are using very specific phrasing such as "key services were not vulnerable" with no mention of secondary services... sounds like a liar's hiding place to me. There are also far too many who don't understand the problem, such as Acronis [twitter.com] and the Aus bank [theregister.co.uk]; the likes of Akamai, who can't make their minds up; some irresponsibly downplaying the whole thing; and of course the majority of the rest, who haven't said sweet FA. Caught in the middle are the poor users who can't be expected to make informed decisions about what they need to do or how exposed they are.

You thought rfc-ignorant, abuse@-ignoring fuckwits running their companies on Flash-only sites were bad? This is what happens when their incompetence starts to actually harm people's online security.

Commercial decision by Google (0)

Anonymous Coward | about 7 months ago | (#46787167)

Selective leaks to friends, screw over the competitors.

They've set the precedent now - time to sit back and watch them get burned by it in the future.

WRONG (1)

doas777 (1138627) | about 7 months ago | (#46787197)

If this hadn't been publicly disclosed, it would have just gone into the 0-day libraries which intelligence agencies around the globe have been amassing. We'd never have learned we were vulnerable, and their ability to impersonate and eavesdrop would have increased beyond any reasonably articulable expectation.

Responsible disclosure to sufficient parties to address the issue would also expose it to potential attackers, and there will always be players with need-to-know who won't be identified for notification.

Doesn't ANYONE get it??? (1)

marienf (140573) | about 7 months ago | (#46787415)

> and not as late as it did on April 1

That must have been the most expensive April Fool's joke EVER.

-f

Global release is preferable. (1)

Kremmy (793693) | about 7 months ago | (#46787493)

The only thing you do by hiding this kind of information is limit the number of heads working to fix it. I'm tired of these attempts at plugging the hole in the dam by pretending the hole isn't there until someone plugs it.

Full disclosure, nothing else (1, Interesting)

allo (1728082) | about 7 months ago | (#46787511)

Look, Google knew about it. Google is part of PRISM. Are you still wondering whether the NSA may have used Heartbleed?

Certainly semi-public state is the worst (1)

iamacat (583406) | about 7 months ago | (#46787549)

Once the discoverer of the bug has patched their own servers and the software creator has an official fix, the only ethical thing is to tell everyone at once. It is not realistic to expect a secret to be kept across a dozen independent companies with thousands of employees each. Also, why should Facebook get an unfair business advantage over Yahoo? Most users have dozens of accounts storing overlapping private information and get no benefit from just one server being patched.

Make sure a fix is available and then publish quickly so that bad actors have less time to develop exploits.

Actual Experience Against "Responsible Disclosure" (4, Interesting)

DERoss (1919496) | about 7 months ago | (#46787789)

Historically, so-called "responsible disclosure" has resulted in delayed fixes. As long as the flaw is not public and causing a drum-beat of demands for a fix and a possible loss of customers, the developer organization too often treats security vulnerabilities the same as any other bug.

Worse, those who report security vulnerabilities responsibly and later go public because the fixes are excessively delayed often find themselves branded as villains instead of heroes. Consider the case of Michael Lynn and Cisco in 2005. Lynn informed Cisco of a vulnerability in Cisco's routers. When Cisco failed to fully inform its customers of the significance of the security patch, Lynn decided to go public at the 2005 Black Hat conference in Las Vegas. Cisco pressured Lynn's employer to fire him and also filed a lawsuit against Lynn.

Then there was the 2011 case of Patrick Webster, who notified the Pillar Administration (major administrator of retirement plans in Australia) of a security vulnerability in their server. When the Pillar Administration ignored Webster, he used the vulnerability to extract personal data from about 500 accounts from his own pension plan (a client of the Pillar Administration). Webster made no use of the extracted personal data, did not disseminate the data, and did not go public. He merely sent the data to the Pillar Administration to prove the existence of the vulnerability. As a result, the Pillar Administration notified Webster's own pension plan, which in turn filed a criminal complaint against Webster. Further, his pension plan then demanded that Webster reimburse them for the cost of fixing the vulnerability and sent letters to other account holders, implying that Webster caused the security vulnerability.

For more details, see my "Shoot the Messenger or Why Internet Security Eludes Us" at http://www.rossde.com/editoria... [rossde.com] .

No. (1)

drolli (522659) | about 7 months ago | (#46787971)

If I find a bug which is critical to my employer while being paid by my employer, the first and only thing I do is assess the impact on my employer and identify the most important measures for the employer's business.

IMHO they acted correctly: protect your own systems, and then the systems with the biggest impact.

I don't trust "secret circles" (1)

WaffleMonster (969671) | about 7 months ago | (#46788083)

This is foolish. When you apply a patch to an open source project, it essentially becomes public knowledge to anyone who is paying attention at that point. The more you do this, the more eyes end up on the patches themselves. Keeping it quiet only breeds ignorance and suppresses urgency.

Only telling a select few (normally by subscription to very expensive security services) gives the media giants an advantage that it is not clear to me they have any right to or in any way deserve.

Finally, with as much money as is locked up in black/gray hat activities, we don't need to be enriching anyone for contributing to an industry run by an elite few that none of us have any reason to trust.

The behavior of the crowd at a recent Black Hat toward Mr. Alexander made it crystal clear to me that the kids have all grown up and money runs the show now. The more money there is, the more "ethics" bend toward producing additional money.
