
Project Zero Exploits 'Unexploitable' Glibc Bug

Unknown Lamer posted about a month ago | from the never-say-never dept.

Security 98

NotInHere (3654617) writes with news that Google's Project Zero has been busy at work. A month ago they reported an off-by-one error in glibc that would overwrite a single byte on the heap with NUL, and were met with skepticism about whether it could be used in an attack. To convince the skeptical glibc developers, Project Zero devised an exploit of the out-of-bounds NUL write that gains root access using the setuid binary pkexec. The bug was fixed 44 days after being reported. They even managed to defeat address space randomization on 32-bit platforms by tweaking ulimits; 64-bit systems should remain safe as long as they use address space randomization.
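For readers who want to see what this class of bug looks like, here is a minimal, hypothetical C sketch of the general pattern (not the actual glibc code): an off-by-one copy that writes the string terminator one byte past the end of a heap buffer, clobbering whatever the allocator keeps there.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical illustration of an off-by-one NUL write.
     * The buffer has room for len characters but not for the
     * terminating '\0', so the final store lands one byte past
     * the allocation, into adjacent heap memory (for example,
     * malloc metadata). */
    void copy_name(const char *src)
    {
        size_t len = strlen(src);
        char *buf = malloc(len);       /* BUG: should be malloc(len + 1) */
        if (buf == NULL)
            return;
        memcpy(buf, src, len);
        buf[len] = '\0';               /* out-of-bounds NUL write */
        /* ... use buf ... */
        free(buf);
    }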


microsofties here is your chance to party (-1)

Anonymous Coward | about a month ago | (#47761313)

enjoy it well, you only get this once in a lifetime. you could join the linux zealots though, they party 24/7

Re:microsofties here is your chance to party (-1)

Anonymous Coward | about a month ago | (#47761353)

Dear Retard,

It's called a "Shift Key". You may have heard of it... it was in all the papers.

HTH

Re:microsofties here is your chance to party (2, Funny)

Anonymous Coward | about a month ago | (#47761381)

CAN YOU HEAR ME NOW?? HELLO?

Re:microsofties here is your chance to party (-1)

Anonymous Coward | about 2 months ago | (#47763139)

i lost my shift key after I shoved my keyboard up your mom's ass and she puckered at the wrong moment

Re:microsofties here is your chance to party (0)

Anonymous Coward | about 2 months ago | (#47769993)

Dear Retard,

Posting as AC then adding your monograph is a good way to get trolled.

HTH

Re: microsofties here is your chance to party (5, Insightful)

AvitarX (172628) | about a month ago | (#47761359)

Actually, I find the arrogance of calling an obvious bug "unexploitable" disturbing.

Most ARM is 32 bit...

Re: microsofties here is your chance to party (1, Insightful)

Anonymous Coward | about a month ago | (#47761401)

The word you're looking for is 'skeptical', and then they went and fixed it when they were proven wrong. This is actually the opposite of arrogant.

Re: microsofties here is your chance to party (2, Insightful)

Ralph Wiggam (22354) | about 2 months ago | (#47761507)

The first part is arrogance. The second part is pragmatic humility.

Re: microsofties here is your chance to party (1)

thsths (31372) | about 2 months ago | (#47763757)

The first part is also pragmatic. Releasing a security fix is a lot of work, not just for the developers, but also for everybody else. So you only do that if you have reasonable suspicion that the bug is a security risk. There were good reasons to believe that was not the case here, although in the end they did not apply in every situation.

If you treat every bug as a security issue, you end up with the Google situation where only one version, the latest, is ever supported. And for libc that is not an acceptable option.

not the same thing (2)

luis_a_espinal (1810296) | about 2 months ago | (#47764121)

The first part is also pragmatic. Releasing a security fix is a lot of work, not just for the developers, but also for everybody else. So you only do that if you have reasonable suspicion that the bug is a security risk. There were good reasons to believe that was not the case here, although in the end they did not apply in every situation.

If you treat every bug as a security issue, you end up with the Google situation where only one version, the latest, is ever supported. And for libc that is not an acceptable option.

It is one thing to say we will not fix it right now because of the cost and the unlikelihood of seeing this in the wild. It is quite another to call it unexploitable. The former is pragmatism. The latter is hubris.

Re:not the same thing (0)

Anonymous Coward | about 2 months ago | (#47788569)

Read the threads again - nobody called it 'unexploitable'. The summary is sensational and misleading. There was scepticism for all of a day, after which even Florian realized that it could be exploitable.

Re: microsofties here is your chance to party (1)

Anonymous Coward | about 2 months ago | (#47761509)

You're right, I unfairly used summary words as quotes.

Re: microsofties here is your chance to party (3, Insightful)

phantomfive (622387) | about 2 months ago | (#47762125)

The word you're looking for is 'skeptical', and then they went and fixed it when they were proven wrong. This is actually the opposite of arrogant.

They should have fixed the bug as soon as they realized it was there, and not waited until someone proved it was an especially bad bug.

Re: microsofties here is your chance to party (0)

Anonymous Coward | about 2 months ago | (#47762671)

Because all bugs that are found have the same priority, right?

They probably had a backlog of other bugs to fix (or features to work on), and put this one in the Severity 5 (Trivial) bucket. Once the exploit was discovered it got moved into the Sev 1 bucket, i.e. the "Drop Everything and Fix This" bucket.

Re: microsofties here is your chance to party (1)

aybiss (876862) | about 2 months ago | (#47762941)

It's an oldschool attitude to not touch things, from back in the day where software was so flaky that chances were someone had already 'exploited' the bug to do something non-malicious.

It drives me fucking crazy, having been born pretty much into the internet age where the corrected answer can be available in *seconds*. It's pretty obvious from the description what the bug is, so saying you aren't going to fix it is, as you say, pure laziness.

Re: microsofties here is your chance to party (2)

phantomfive (622387) | about 2 months ago | (#47762981)

It's an oldschool attitude to not touch things, from back in the day where software was so flaky that chances were someone had already 'exploited' the bug to do something non-malicious.

Ah, that actually makes sense, good analysis.

It's pretty obvious from the description what the bug is, so saying you aren't going to fix it is, as you say, pure laziness.

This sort of thing worries me about glibc, and the attitude that 'bugs are no big deal' is a dangerous one that is infecting software developers all over.

Re: microsofties here is your chance to party (0)

Anonymous Coward | about 2 months ago | (#47763441)

It's an oldschool attitude to not touch things, from back in the day where software was so flaky that chances were someone had already 'exploited' the bug to do something non-malicious.

That problem is still prevalent today. Since there are a lot of people who consider the source code and the API description to be sufficient documentation, one can't assume that users can make a good distinction between what is a bug and what is a feature.
If intentional side effects aren't documented, then all side effects will be assumed to be intentional.
If intended usage isn't documented, then every possible way to use the function will be used.

Re: microsofties here is your chance to party (1)

pegdhcp (1158827) | about 2 months ago | (#47763591)

I felt very old, seeing the -almost- standard assembler practices called old school. When I was young, most CPUs had lots of undocumented instructions, usually due to overuse of Karnaugh maps. Given that the basic electronic structures are still the same, I have a strong suspicion that the position still holds true...

Re: microsofties here is your chance to party (1)

aybiss (876862) | about 2 months ago | (#47772417)

You may be correct but there's a couple of differences - firstly the processor designs are so incredibly complex now (Intel recently issued a 'microcode patch' that actually disabled some instructions on a certain batch of CPUs) that they're all optimised by computer, so it's unlikely that there's much leftover unused functionality. That brings me to the second point, in that whatever 'undocumented' behaviour is available is unlikely to be as useful as e.g. a deprecated opcode on a ZX80. Moreover, you don't go buy a ZX80 with exactly the same processor as everyone else any more. Not only do you have multiple brands to choose from for the same architecture, you probably aren't even paying attention to the exact model you are buying.

Re: microsofties here is your chance to party (1)

luis_a_espinal (1810296) | about 2 months ago | (#47764281)

It's an oldschool attitude to not touch things

It's called engineering.

...from back in the day where software was so flaky that chances were someone had already 'exploited' the bug to do something non-malicious.

It drives me fucking crazy, having been born pretty much into the internet age where the corrected answer can be available in *seconds*.

Just because we are in the era of the Interweebz, that does not mean everything is a web app whose solution can be put together in seconds. Especially something like a compiler, a shared library or an embedded system. You have to think of regression testing and crap like that, the backlog of issues that are begging to be fixed, etc, etc, etc. As a result, you do not touch things unless you truly need to, in a controlled manner.

If it is a web-based system with limited visibility, yeah, slap that fix on and test it right there, just browse the page to see that it works. A web service or a composite that other systems depend on, hmmm, first devise a functional test with SoapUI just to validate behavior before and after the change. An enterprise system with hundreds of developers and thousands of issues in backlogs, slow down, time to prioritize a bit. Something system-level, and used by millions, hold your danged horses.

I'm not saying the Glibc developers did the right thing at first - I mean, calling a bug "unexploitable" just like that, that is arrogance, not competence or prudence.

But that is a far cry from saying oh, we know what it is, we can put some code in place in seconds. Slapping in some code changes != a fix. A fix is a code change preceded by a cost analysis and followed by a regression/acceptance test, Internet or no Internet.

It's pretty obvious from the description what the bug is, so saying you aren't going to fix it is, as you say, pure laziness.

In this particular case, perhaps. In general, see my previous sentences above.

Re: microsofties here is your chance to party (1)

aybiss (876862) | about 2 months ago | (#47772435)

If you're talking about 'hold your head this way, right click on your keyboard then unplug your RAM == crash' then yes a change might be something you weigh up.

When you're looking at the code and you see *'this is logically incorrect'* then you fix it immediately. If you're smart you also create some unit tests _proving_ that it was incorrect before and is now correct.

Fuck everyone else who wants to reformat the headers of this part of the project and has it all checked out, fuck people who bitch about 'stability' in the sense of things not changing (after all, you don't HAVE to upgrade), you just fucking fix it.

That's what a programmer does - tells a computer how to *correctly* solve a problem.

effort, priority and severity. (1)

luis_a_espinal (1810296) | about 2 months ago | (#47764211)

The word you're looking for is 'skeptical', and then they went and fixed it when they were proven wrong. This is actually the opposite of arrogant.

They should have fixed the bug as soon as they realized it was there, and not waited until someone proved it was an especially bad bug.

Hmmmm, not really. You fix bugs according to the cost of fixing them, which includes regression testing to ensure you do not break something else with your fix (effort), the likelihood of the bug manifesting itself in the wild (priority), and the ramifications when it does (severity).

More systems have been broken by people "fixing" things without doing the proper analysis than by actually looking at the backlog and deciding what shall be fixed (fixed in this release), what will be fixed (fixed in this or some other release), what should be fixed (fix not bound to a release yet), what should not be needing a fix (no consequences of fixing it right now, gives room to fix more important things), what will not be fixed (not in this release), and what shall not be fixed (too risky, not worth it).

We are in the business of engineering complex systems, from inception to realization to deployment to support and decommissioning. This is how you manage the engineering of complex things.

Re:effort, priority and severity. (1)

phantomfive (622387) | about 2 months ago | (#47766469)

I know how to engineer complex things. I looked at the eventual fix, and it should have been done long ago.

Furthermore, if regression tests are important (and they are), they need a suite of automated tests so those things aren't all being done manually.

Finally, it's not like the glibc team traditionally avoids breaking things.

Re: microsofties here is your chance to party (0)

Anonymous Coward | about 2 months ago | (#47764349)

they? or you; this is FOSS after all

Re: microsofties here is your chance to party (0)

Anonymous Coward | about a month ago | (#47761455)

Has Drepper become a maintainer again?

Re: microsofties here is your chance to party (5, Informative)

Anonymous Coward | about 2 months ago | (#47761485)

Embedded stuff would typically use uClibc. Android uses Bionic libc.

Most ARM might be 32 bit but most ARM doesn't use Glibc.

Raspberry Pi, obscure NAS boxes (4, Interesting)

dutchwhizzman (817898) | about 2 months ago | (#47762853)

While you have a point, you shouldn't forget the Raspberry Pi. It is probably the most popular internet facing non-mobile ARM platform today. Literally millions of these run glibc and at least hundreds of thousands are in some way or form directly connected to the internet. While I don't believe that this bug can be exploited without first gaining RCE on the Raspberry Pi, once an attacker gets access to the rpi, this bug should allow them to escalate to root privileges.

There are quite a few people who put a full Debian (or other) distribution on their NAS server. I own a Zyxel NSA 325, and it is possible to install a full Debian release on this and some other NAS boxes. These might be a limited number of systems overall, but it's significant enough to deserve mentioning because they, too, are often internet facing.

And (some, rare) phones. (1)

Eunuchswear (210685) | about 2 months ago | (#47763483)

And 3 of my phones.

N900, N9 and Jolla all use glibc.

Re:Raspberry Pi, obscure NAS boxes (0)

Anonymous Coward | about 2 months ago | (#47763641)

While you have a point, you shouldn't forget the Raspberry Pi. It is probably the most popular internet facing non-mobile ARM platform today. Literally millions of these run glibc and at least hundreds of thousands are in some way or form directly connected to the internet.

cool story bro

Re: microsofties here is your chance to party (1)

countach (534280) | about 2 months ago | (#47762539)

Was the glibc boffin who said it looked unexploitable just expressing a casual opinion, or was he actually trying to wriggle out of fixing it? If the former, then it's not very interesting. If the latter, then yeah, it's a problem.

Re:microsofties here is your chance to party (0)

Anonymous Coward | about a month ago | (#47761369)

OpenBSD assumes all bugs are capable of being used, and they treat them as such.

Re:microsofties here is your chance to party (1)

Narcocide (102829) | about 2 months ago | (#47761473)

I think that's the definition of the difference between being "paranoid" and being "observant."

Re:microsofties here is your chance to party (4, Insightful)

Sun (104778) | about 2 months ago | (#47762317)

No.

Off-by-ones are much easier to fix than to prove safe. The number of bugs called "unexploitable" until an exploit was provided is staggering. No even mildly security-aware person will skip fixing a buffer overflow just because it is supposedly unexploitable.

Shachar

Re:microsofties here is your chance to party (2)

TheRaven64 (641858) | about 2 months ago | (#47763559)

The OpenBSD philosophy says that the difference between a bug and a vulnerability is the intelligence of the attacker. There are lots of categories of bugs (null pointer dereferences, integer overflows) that were thought to be unexploitable, right up until someone exploited them. It's the same as with cryptosystems: the fact that you can't break your encryption algorithm doesn't mean that it's secure.

Re:microsofties here is your chance to party (0)

Anonymous Coward | about 2 months ago | (#47761475)

But OpenBSD is being run by masturbating monkeys. Linus said so.

Re:microsofties here is your chance to party (0)

Anonymous Coward | about 2 months ago | (#47762213)

At least OpenBSD wasn't stupid enough to include glibc in their OS.

Honestly, when will people learn? (5, Insightful)

Anonymous Coward | about a month ago | (#47761417)

Never say never.

Unexploitable? Srsly? GAC.

An acquaintance recently posted "Six Stages of Debugging" on his g+ page. (1. That can't happen, 2. That doesn't happen on my machine, 3. That shouldn't happen, 4. Why does that happen? 5. Oh, I see, and 6. How did that ever work). Doesn't an software dev who has been working for more than about three years go straight to No. 4?

The things they don't teach you in a CS degree.

Re:Honestly, when will people learn? (4, Insightful)

Narcocide (102829) | about 2 months ago | (#47761481)

This is seriously shit your CS 100 or 200-level teacher SHOULD have taught you, if you got a CS degree. I think it may depend largely upon where/when you got your degree though. They're only all the same on paper.

Re:Honestly, when will people learn? (1)

jopsen (885607) | about 2 months ago | (#47762571)

This is seriously shit your CS 100 or 200-level teacher SHOULD have taught you, if you got a CS degree.

A CS professor shouldn't teach you to "never say never"... just ask for a formal proof :)
Especially, if you're claiming that P != NP or the like...

Re:Honestly, when will people learn? (3, Interesting)

grahamsaa (1287732) | about 2 months ago | (#47761483)

No. While it depends on your end users (end users of some products / libraries / etc are very technical, while other products draw from a much larger, less technical user base), a non-trivial number of bug reports are due to user error, or to something that you don't actually have any control over. Skipping stage 1 probably makes sense in all cases, but the rest of the stages are all valid. Sometimes you never get past stage 2 because the answer is "oh, right, because my machine isn't infected with something" or "because I didn't mis-configure the application".

Re:Honestly, when will people learn? (3, Insightful)

JazzXP (770338) | about 2 months ago | (#47761891)

Yes, but according to your clients, it's still your fault.

Re:Honestly, when will people learn? (3, Insightful)

Anubis IV (1279820) | about 2 months ago | (#47761943)

Sure, which is why you have proper logging that allows you to point them in the right direction. At least a few times a year, I have to advise users to get in touch with their IT department to fix their corrupted Arial font file or some other such nonsense since it's causing problems for our app (and probably a number of others as well). Where the fault lies is a tangential discussion, however. What matters here is that Step 2 is actually valuable at times, since it can assist you in answering #4 by narrowing down the possible causes.

Re:Honestly, when will people learn? (3, Interesting)

katterjohn (726348) | about 2 months ago | (#47761879)

While I don't feel buffer overflows are something to ignore, from what I see the developer never actually said "unexploitable."

From the "skeptical glibc developer" link:

> if not maybe the one byte overflow is still exploitable.

Hmm. How likely is that? It overflows in to malloc metadata, and the
glibc malloc hardening should catch that these days.

Re:Honestly, when will people learn? (2, Insightful)

Anonymous Coward | about 2 months ago | (#47762711)

The things they don't teach you in a CS degree.
Actually they *do* teach you that in a CS degree, and also how to fix it. FTFY. Also, they don't put the word 'an' before a word beginning with a consonant.

Re:Honestly, when will people learn? (1)

Anonymous Coward | about 2 months ago | (#47767147)

Also, they don't put the word 'an' before a word beginning with a consonant.

Not even if the word is "hour"?

Re:Honestly, when will people learn? (0)

Anonymous Coward | about 2 months ago | (#47769681)

The things they don't teach you in a CS degree.
Actually they *do* teach you that in a CS degree, and also how to fix it. FTFY. Also, they don't put the word 'an' before a word beginning with a consonant.

That was supposed to be "... _any_ software dev...". I bet you could have figured that out if you hadn't been so excited about flaming me for my mistake. Get over yourself. Typos happen. Who gives a flying fsck?

And no, they don't necessarily teach that in a CS degree. I've worked with MIT grads who couldn't program in C, because the MIT CS degree doesn't require it; they assume you already know it. Every Uni's degree program is different.

Re:Honestly, when will people learn? (1)

Wrath0fb0b (302444) | about 2 months ago | (#47765693)

An acquaintance recently posted "Six Stages of Debugging" on his g+ page. (1. That can't happen, 2. That doesn't happen on my machine, 3. That shouldn't happen, 4. Why does that happen? 5. Oh, I see, and 6. How did that ever work). Doesn't an software dev who has been working for more than about three years go straight to No. 4?

Absolutely true for debugging. But there are a few steps you missed.

Somewhere near 3-4: OK, how bad would it be if that happened? Does it recover without user intervention (i.e. the service crashes and cron restarts it)? Does it recover with user intervention ("did you turn it off and back on?")? Does it lose user data (oh poop)?

The question here (which is altogether not trivial) is exactly this: "how bad would it be if we wrote an extra '\0' somewhere"? And what geohot did was answer that in the most productive way possible - by actually showing with a real example that the impact is major and permanent. If you aren't explicitly doing assessment of the impact of your bugs for schedule/priorities then you must be doing it implicitly somehow because most projects have more bugs than coders/time.

There's another step you missed, happens probably at step 10 or 11 and probably not by the developer that fixes the bug -- given the impact and the risk of the fix, when/how should this be deployed? Should it be backported to the stable releases? Do we have to ping everyone downstream? Is this so bad we should post on /. telling everyone to pull the emergency fix ASAP or else zombie Putin will kill Natalie Portman?

Again, if you aren't doing this step explicitly, it's either happening implicitly or else you are just letting it land whenever/however.

meanwhile.... (-1)

Anonymous Coward | about a month ago | (#47761435)

Meanwhile, most languages now have buffer overrun protection built into the language.

Re:meanwhile.... (0)

Anonymous Coward | about 2 months ago | (#47761489)

Meanwhile, sloppy programming in any language results in unintended side effects.

C Needs Bounds Checking (5, Informative)

Sanians (2738917) | about 2 months ago | (#47762223)

Meanwhile, sloppy programming in any language results in unintended side effects.

Yes, but the lack of bounds checking in C is kind of crazy. The compiler is now going out of its way to delete error-checking code simply because it runs into "undefined behavior," but no matter how obvious a bounds violation is, the compiler won't even mention it. Go ahead and try it. Create an array, index it with an immediate value of negative one, and compile. It won't complain at all. ...but god-forbid you accidentally write code that depends upon signed overflow to function correctly, because that's something the compiler needs to notice and do something about, namely, it needs to remove your overflow detection code because obviously you've memorized the C standards in their entirety and you're infallible, and there's no chance whatsoever that anyone ever thought that "undefined behavior" might mean "it'll just do whatever the platform the code was compiled for happens to do" rather than "it can do anything at all, no matter how little sense it makes."
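A minimal sketch of the two complaints above (purely illustrative; exact behaviour depends on compiler version and flags, as later replies point out):

    #include <limits.h>

    int arr[4];

    /* A constant negative index: clearly out of bounds, yet many compilers
     * are silent at default warning levels (GCC does warn once -Wall /
     * -Warray-bounds and optimization are enabled). */
    void negative_index(void)
    {
        arr[-1] = 42;
    }

    /* A signed-overflow check of the kind optimizers may delete: signed
     * overflow is undefined behavior, so the compiler is allowed to assume
     * "a + b" never wraps and fold the test away. */
    int add_checked(int a, int b)
    {
        if (b > 0 && a + b < a)        /* intended wrap-around detection */
            return INT_MAX;            /* saturate instead of overflowing */
        return a + b;
    }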

Due to just how well GCC optimizes code, bounds checking wouldn't be a huge detriment to program execution speed. In some cases the compiler could verify at compile time that bounds violations will not occur. At other times, it could find more logical ways to check, like if there's a "for (int i = 0; i < some_variable; i++)" used to index an array, the compiler would know that simply checking "some_variable" against the bounds of the array before executing the loop is sufficient. I've looked at the code GCC generates, and optimizations like these are well within its abilities. The end result is that bounds checking wouldn't hinder execution speeds as much as everyone thinks. A compare and a conditional jump isn't a whole lot of code to begin with, and with the compiler determining that a lot of those tests aren't even necessary, it simply wouldn't be a big deal.
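Written out by hand, the hoisted check described above might look something like this (bounds_error is a hypothetical trap routine, used only to make the idea concrete):

    #define ARRAY_LEN 64

    static int array[ARRAY_LEN];

    extern void bounds_error(void);    /* hypothetical abort/trap handler */

    void fill(int some_variable)
    {
        /* One check before the loop covers every array[i] access below,
         * instead of a compare-and-branch on each iteration. */
        if (some_variable < 0 || some_variable > ARRAY_LEN)
            bounds_error();
        for (int i = 0; i < some_variable; i++)
            array[i] = i;
    }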

...but let's assume it was. Assume bounds checking would reduce program execution speeds by 10%. How often do you worry about network services you run being exploitable, vs. worrying that they won't execute quickly enough? Personally, I never worry about code not executing quickly enough. I might wish it were faster, but worry? Hell no. On the other hand, I don't even keep an SSH server running, despite how convenient it might be to access my computer when I am away from home, because I fear it might be exploitable. I'd prefer more secure software, and if I'm then not happy with the speed at which that software executes, I'll just get a faster computer. After all, our software is clearly slower today than it was 20 years ago. I can put DOS on my PC and run the software from that era at incredible speeds, but I don't because I like the features I get from a modern OS, even if those features mean that my software isn't as fast as it could be. Bounds checking to prevent a frequent and often exploitable programming mistake is just another feature, and it's about time we had it.

..and like everything else the compiler does, bounds checking could always be a compile-time option. Those obsessed with speed could turn it off, but I'm pretty certain that if the option existed, anyone who even thought about turning it off would quickly decide that doing so would be stupid. Maybe for some non-networked applications that have already been well-tested with the option enabled and where execution speed is a serious factor, it might make sense to turn it off, but when it comes to network services and web browsers and the like, no sane person would ever disable the bounds checking when compiling those applications because everyone believes security is more important than speed.

Re:C Needs Bounds Checking (1)

Anonymous Coward | about 2 months ago | (#47762507)

like if there's a "for (int i = 0; i < some_variable; i++)" used to index an array

How is the compiler going to know the size of a runtime-allocated array? Your idea only works if bound sizes are defined at compile time which is hardly going to be even a majority of cases.

Re:C Needs Bounds Checking (3, Interesting)

Sanians (2738917) | about 2 months ago | (#47763155)

Your idea only works if bound sizes are defined at compile time which is hardly going to be even a majority of cases.

Use your imagination...

I was imagining a special type of pointer, but one compatible with ordinary pointers. Kind of how C99 added the "complex" data type for complex numbers, but you can assign to them from ordinary non-complex numbers. A future version of C could add a type of pointer that includes a limit, and a future version of malloc() could return this new type of pointer, and for compatibility, the compiler can just downgrade it to an ordinary pointer any time it is assigned to an ordinary pointer, so that old code continues to work with the new malloc() return value, and new code can continue to call old code that only accepts ordinary pointers. Of course, we won't call them "new" and "ordinary," we'll call them "safe" and "dangerous" when, after several years, we grow tired of hearing of yet another buffer overflow exploit discovered in some old code that hasn't yet been updated to use the new type of pointer.

...or I'm sure there's many other possibilities. This isn't an impossible thing to do.

Re:C Needs Bounds Checking (1)

reikae (80981) | about 2 months ago | (#47763227)

I'd like to hear from someone who knows their stuff better than I do. Is this sanians_imaginary_ptr feasible in C and how would it technically work? Without sacrificing optimisations C allows, low-level access and things like that.

Re:C Needs Bounds Checking (0)

Anonymous Coward | about 2 months ago | (#47763333)

Make it syntactic sugar.
In the language, it's an artifact, in the compiled code, you expand it to its real shape. A bit like how references work in languages like C# or Java. It's actually pretty simple.

Re:C Needs Bounds Checking (0)

Anonymous Coward | about 2 months ago | (#47763485)

Well, first of all, the idea of automatically downgrading one pointer type to another sounds like something that goes against anything sane, and also seems pointless. If the old code doesn't support it then there is no point in using bounds checking at all, and for new code it is better to use the bounds checking for everything.
Typically you would just use a struct with the pointer and the size and typedef it to sanians_imaginary_ptr.
Then you write a wrapper for malloc that writes the size into the struct together with the pointer, and a macro for statically allocating a fixed-size array together with the struct.
On top of that you want to make a bunch of macros for all the bounds-checking functions defined in the normative Annex K of the C11 standard. (Those are bounds-checking versions of array-handling library functions, but since they require the size as an argument you need the macro as syntactic glue.)

As for directly accessing the array, overloading it with bounds-checking functionality will really have an impact on performance, and bounds checking should be moved out of the code that does the work in that case. If you really want that you should probably go with C++ instead of C, but to be honest I don't see much point in it. You'd be better off going with a scripting language if you can afford that kind of performance loss.
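A rough sketch of the struct-plus-size approach described above (the names sanians_imaginary_ptr, checked_malloc and checked_get are hypothetical, chosen only to make the idea concrete):

    #include <stdio.h>
    #include <stdlib.h>

    /* A "fat pointer": the raw pointer travels together with its size. */
    typedef struct {
        char  *ptr;
        size_t size;
    } sanians_imaginary_ptr;

    /* malloc wrapper that records the allocation size alongside the pointer. */
    static sanians_imaginary_ptr checked_malloc(size_t size)
    {
        sanians_imaginary_ptr p;
        p.ptr  = malloc(size);
        p.size = (p.ptr != NULL) ? size : 0;
        return p;
    }

    /* Bounds-checked element access: aborts instead of corrupting memory. */
    static char checked_get(sanians_imaginary_ptr p, size_t i)
    {
        if (i >= p.size) {
            fprintf(stderr, "out-of-bounds read at index %zu\n", i);
            abort();
        }
        return p.ptr[i];
    }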

Re:C Needs Bounds Checking (2)

TheRaven64 (641858) | about 2 months ago | (#47763563)

It is possible, but for good performance it needs hardware support. We've implemented hardware-enforced bounds checking for C code using our processor [bericpu.org] . If you only care about accidental bugs and not about a malicious attacker, and don't use threads (or are happy to bound every pointer store with a transactional region), and don't mind that the semantics of C are subtly broken in the kinds of permitted pointer operations, then Intel's Memory Protection Extensions will do the same thing.

Re:C Needs Bounds Checking (0)

Anonymous Coward | about 2 months ago | (#47764023)

The stdlib free function only takes a pointer to previously malloc'd (or realloc'd) storage as its single parameter, so presumably the allocated size is available at some level.

Re:C Needs Bounds Checking (1)

epyT-R (613989) | about 2 months ago | (#47762913)

Nah. C just needs competent programmers who know something about how a computer works. While your attitude towards security is admirable, your attitude of "we'll just get faster computers" is the cause of all these bloated stacks we have nowadays...stacks that STILL aren't secure, and they were written with managed code languages no less!

Some C compilers already have bounds checking (2)

Sits (117492) | about 2 months ago | (#47762927)

You can already ask some compilers to do what you are asking - it's just often not on in shipped builds.

At compilation time, warnings can be generated for out-of-bounds accesses that can be determined statically. Clang has -fsanitize=bounds [llvm.org], GCC has -Warray-bounds [gnu.org].

As an Anonymous Coward pointed out, it can be hard to detect runtime allocation overruns at compilation time. For these, something like Clang's AddressSanitizer [llvm.org] (GCC has added it too [google.com]) will help, but at a cost in both time (a slowdown factor of about 2) and space, which is why you're unlikely to find it enabled on your precompiled SSH server binary. It's true there are cheaper checks (such as GCC's FORTIFY_SOURCE [redhat.com]) that are less thorough/more specialized and are often enabled by distros.
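As a concrete illustration of those options (hedged: exact diagnostics vary between compiler versions), a tiny program that overruns a heap buffer by one byte, with the kind of invocations being discussed given in the comments:

    /* oob.c - deliberately writes one byte past a heap allocation.
     *
     * Static checking (typically only catches bounds visible at compile time):
     *     gcc -O2 -Wall -Warray-bounds oob.c
     * Runtime checking with AddressSanitizer (roughly a 2x slowdown):
     *     gcc -g -fsanitize=address oob.c && ./a.out
     */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(8);
        if (buf == NULL)
            return 1;
        memset(buf, 'A', 8);
        buf[8] = '\0';                 /* one byte past the allocation */
        free(buf);
        return 0;
    }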

Re:C Needs Bounds Checking (5, Informative)

Dutch Gun (899105) | about 2 months ago | (#47763071)

Personally, I never worry about code not executing quickly enough.

You know, people say stuff like that all the time, but all it proves is that you're not a programmer who develops speed-critical applications. Guess what? There are lots of people who are. Game programmers (me). Simulations programmers. OS / kernel developers. There are some situations where fast is never fast enough. You're thinking like a desktop developer who writes business applications that are probably not that demanding of the CPU. Get a faster processor? I wish! Not possible for console developers, or when you're running software in data centers with thousands of machines. Those are real problems, and they require highly optimized code, not more hardware. Most programmers have no idea how much the constant push for efficiency colors everything we do.

Just the other day I was looking at a test case where a complicated pathfinding scenario pegs my 8-core CPU when a lot of units are on-screen at once. That's not some theoretical problem, and telling users they need some uber-machine to play the game is a non-starter. I either need to ensure my game design avoids those scenarios or I'll need to further optimize the pathfinding systems to allow for more units in the game.

That being said, I agree with your complaint about C's fundamental insecurity, but it's not as simple as adding a compiler switch. For the most common and checkable types of bounds problems, or library functions that can cause problems, Microsoft's C/C++ compiler already does what you've suggested to a degree (not as certain about GCC). The big problem with bounds checking in C is that arrays are simple pointers to memory. The compiler doesn't always know how big that free space is, because there's no type or size associated with it. It's possible in some cases to do bounds-checking, but not in many others. It's a fundamental difficulty with the language, and it's impossible for the compiler to check all those bounds without help from the language or the programmer.
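A small example of why the compiler so often has nothing to check against: the moment an array is passed to a function it decays to a bare pointer, and the size information is gone unless the programmer passes it separately.

    #include <stdio.h>

    void takes_pointer(int *p)
    {
        /* Inside the callee, p is just an address: sizeof(p) is the size of
         * a pointer, not of the caller's array, so no bound is known here. */
        printf("callee sees %zu bytes\n", sizeof(p));
    }

    int main(void)
    {
        int values[16];
        printf("caller sees %zu bytes\n", sizeof(values));  /* 64 on most systems */
        takes_pointer(values);    /* the array decays to int*, its length is lost */
        return 0;
    }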

Re:C Needs Bounds Checking (1)

pjt33 (739471) | about 2 months ago | (#47763307)

The compiler doesn't always know how big that free space is, because there's no type or size associated with it. It's possible in some cases to do bounds-checking, but not in many others. It's a fundamental difficulty with the language, and it's impossible for the compiler to check all those bounds without help from the language or the programmer.

That's not quite true: the compiler could arrange to pass around more than just the raw pointer (or in extremis could maintain a duplicate of the malloc table and work out the bounds given the pointer), but the performance hit would be considerably more than for direct checking.

Re:C Needs Bounds Checking (2)

Dutch Gun (899105) | about 2 months ago | (#47770217)

I'm not sure how well you know C, but... you can't turn a pointer into something more than a raw memory pointer. This would flat-out destroy all sorts of code that relies on that behavior, both in C and C++, and not necessarily badly-written code. The behavior of memory pointers is part of the language contract, and you can't change it without breaking the language. Systems programmers with large, legacy codebases would never risk turning on such an intrusive feature because of the simple fact that it would break compatibility, nor would they wish to pay a global penalty to apply protection against some very specific vulnerabilities.

In my own code (which is C++, not C, but the point still applies), I'm actually performing my own low-level memory management in order to improve efficiency - it's pretty much standard practice in the videogame industry, at least for large projects. Anything the compiler tried to do in terms of mucking around with allocations, pointers, or arrays could very well break code, and wouldn't help in any case. For instance, it's pretty common to allocate a big block of memory and to pass off small chunks of it to structures instead of performing an OS-level heap allocation for each one. In this scenario, a pointer has no explicit type at all, and any inferred type can be malleable to such a degree that it could never really be analyzed either at compile time or runtime.
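For readers unfamiliar with the pattern, a stripped-down sketch of that kind of bump/arena allocator (purely illustrative, not the poster's actual code):

    #include <stddef.h>
    #include <stdint.h>

    /* A tiny bump allocator: grab one big block up front, then hand out
     * aligned chunks of it with plain pointer arithmetic. The chunks carry
     * no type or per-chunk metadata, so there is nothing generic for a
     * compiler to bounds-check. */
    typedef struct {
        uint8_t *base;      /* start of the big block */
        size_t   used;      /* bytes handed out so far */
        size_t   capacity;  /* total size of the block */
    } arena;

    static void *arena_alloc(arena *a, size_t size)
    {
        size_t aligned = (size + 15u) & ~(size_t)15u;    /* 16-byte alignment */
        if (aligned < size || a->used + aligned > a->capacity)
            return NULL;                                 /* arena exhausted */
        void *p = a->base + a->used;
        a->used += aligned;
        return p;
    }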

When discussing a "no-brainer" feature like this, I tend to assume that other people (compiler writers) have already thought of this idea and rejected it for very pragmatic reasons. I'd imagine no one wishes more than C programmers that they could flip a magic switch and have a lot more protection, but I just don't see how it could realistically happen.

Re:C Needs Bounds Checking (0)

Anonymous Coward | about 2 months ago | (#47765045)

A program that runs slower than desired may still be very useable. On the other hand a program that has bugs may be misleading, useless or even dangerous.

Re:C Needs Bounds Checking (1)

gumbi west (610122) | about 2 months ago | (#47770923)

Well, right tool for the job. I think that for servers, and for clients that connect to untrusted servers, C is probably not the right tool. For example, I'd rather sshd were written in a language that checks out-of-bounds conditions, and I'd rather have it be slow than insecure.

Re:C Needs Bounds Checking (0)

Anonymous Coward | about 2 months ago | (#47765863)

Why the hell are you using a SIGNED variable for an array index?

AC

Re:C Needs Bounds Checking (1)

Yunzil (181064) | about 2 months ago | (#47766653)

Go ahead and try it. Create an array, index it with an immediate value of negative one, and compile. It won't complain at all

It complains with -Wall -O2.

99 reasons (0)

Anonymous Coward | about a month ago | (#47761449)

99 reasons not to use a 32-bit OS; even Windows 32-bit has issues not seen on the 64-bit version, thanks to its more advanced address space randomization...
Yup, time to dump 32-bit...

Summary is completely exaggerated (5, Informative)

Anonymous Coward | about 2 months ago | (#47761679)

I read through the thread and at no point was the bug considered "Unexploitable". Even skepticism is too strong of a word to use. The only doubt that was raised was asking "How likely is that?"

Re:Summary is completely exaggerated (4, Informative)

NotInHere (3654617) | about 2 months ago | (#47762603)

I chose the word scepticism, and I still think it fits. I agree that the word "unexploitable" was a bit exaggerated, but that was added by Unknown Lamer.

Florian Weimer [sourceware.org] said:

My assessment is "not exploitable" because it's a NUL byte written into malloc metadata. But Tavis disagrees. He is usually right. And that's why I'm not really sure.

It's however true that he corrected himself a bit later the same day:

>> if not maybe the one byte overflow is still exploitable.
>
> Hmm. How likely is that? It overflows in to malloc metadata, and the
> glibc malloc hardening should catch that these days.

Not necessarily on 32-bit architectures, so I agree with Tavis now, and
we need a CVE.

Re:Summary is completely exaggerated (1)

Cramer (69040) | about 2 months ago | (#47768869)

And to be perfectly fair, the issue hinges on glibc's completely idiotic insistence on free()ing everything at exit() instead of just f'ing exiting. The kernel knows exactly what to return to the free pool and does not depend on, or require, the application to return the memory it requested.

"Unexploitable" sudo bug pre-1.6.3p6 (5, Interesting)

Anonymous Coward | about 2 months ago | (#47762061)

Reminds me of this overflow bug [seclists.org] which was fixed in sudo 1.6.3p6. It writes a single NUL byte past the end of a buffer, calls syslog(), and then restores the original overwritten byte. Seems unexploitable, right?

Wrong. Here's the detailed writeup [phrack.org] of the exploit. It requires some jiggering with the parameters to get it to work on a particular system, but a local root exploit doesn't need to work every time; you just need it to work once and you own the system.
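The pattern being described is roughly the following (a hypothetical reconstruction in C, not the actual sudo source): the stray NUL only exists for the duration of the syslog() call and is then papered over, which was still enough to exploit.

    #include <stddef.h>
    #include <syslog.h>

    /* Hypothetical sketch of the sudo-style bug: terminate the string one
     * byte past the end of buf, log it, then put the clobbered byte back.
     * The heap is corrupted only while syslog() runs, yet that brief window
     * was enough for the published exploit. */
    void log_truncated(char *buf, size_t buflen)
    {
        char saved = buf[buflen];      /* byte just past the buffer */
        buf[buflen] = '\0';            /* off-by-one NUL write */
        syslog(LOG_INFO, "%s", buf);
        buf[buflen] = saved;           /* restore the overwritten byte */
    }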

Re:"Unexploitable" sudo bug pre-1.6.3p6 (3, Informative)

NotInHere (3654617) | about 2 months ago | (#47762627)

I've read a bit through the threads and think that the reason it took so long was because they decided to remove a feature [openwall.com] to fix the problem:

I believe the current plan is to completely remove the transliteration
module support, as it hasn't worked for 10+ years.

The git commit message states the same. There were really some problems in that function: https://sourceware.org/ml/libc... [sourceware.org]

Amazing to use such a crude programming language (2)

aberglas (991072) | about 2 months ago | (#47762073)

One where a slight slip anywhere in millions of lines of code can produce random memory corruption with unpredictable consequences. Who would have believed that anybody would even dream of using a language with constructs such as ptr++? And we are surprised to find bugs...

Re:Amazing to use such a crude programming languag (0)

Anonymous Coward | about 2 months ago | (#47762467)

Protip: Your fancy "modern" language is written in this "crude" language.

Re:Amazing to use such a crude programming languag (0)

Anonymous Coward | about 2 months ago | (#47762655)

Not necessarily. The lambda calculus languages predate the procedural languages and object languages that are in wide use today. They were less efficient on older hardware so they never gained currency (there's also the theory that requiring an understanding of recursion instead of sequential programming was a barrier to adoption). But LISP can be used to create the C++ compiler, just as C++ can be used to create the LISP compiler.

Re:Amazing to use such a crude programming languag (1)

epyT-R (613989) | about 2 months ago | (#47762945)

At some point, you'll have to break that high level language down to opcodes the cpu can understand, that means breaking high level logic down to many simple steps, which is what procedural languages are for. You can force the programmer to write these steps one at a time in assembly, 'script' the generation of assembly in C, or have a runtime and/or VM do it at a cost of speed and footprint, but there is no magical way to skip generating that list of procedures.

Re:Amazing to use such a crude programming languag (0)

Anonymous Coward | about 2 months ago | (#47762689)

Nope. My fancy modern language is written in said fancy modern language.

Re:Amazing to use such a crude programming languag (0)

Anonymous Coward | about 2 months ago | (#47763403)

Which one is that?
Haskell, Python, PHP, Perl all end up in either compilers or interpreters written in C.

Re:Amazing to use such a crude programming languag (1)

Urkki (668283) | about 2 months ago | (#47763045)

Protip: Your fancy "modern" language is written in this "crude" language.

Even if a compiler for a "fancy" safe language were written in a "crude" unsafe language, it would still be just one program to verify for ptr++ kinds of bugs. Additionally, a compiler is a classical input -> output, non-interactive kind of program, which lends itself very well to running under verification tools like valgrind, which increases confidence that, at least for any given input, it will not do nasty things.

Re:Amazing to use such a crude programming languag (0)

Anonymous Coward | about 2 months ago | (#47763423)

This is a C library.
Even if you wrote all your programs in python, the runtime library at some point will be implemented in C or assembler.
So you would have the same problem.

Re:Amazing to use such a crude programming languag (1)

Urkki (668283) | about 2 months ago | (#47764285)

Nothing prevents writing runtime libraries in safer languages than C; even C++11 would be a lot better (unless abused, but that applies to C too). And assembler is used very little these days, because there are many relevant CPUs on the market (ARM variants, x86, x64).

Re:Amazing to use such a crude programming languag (1)

Wootery (1087023) | about 2 months ago | (#47766975)

Indeed. And yet, in Java, it's impossible for me to accidentally shoot myself in the face with pointer arithmetic.

I use C++, and like it in its way, but you don't have much of a point.

Re:Amazing to use such a crude programming languag (0)

Anonymous Coward | about 2 months ago | (#47762583)

My name is Joe Dangerous.

Re:Amazing to use such a crude programming languag (1)

TheRaven64 (641858) | about 2 months ago | (#47763569)

What high-level language does not depend on the C standard library and so would be suitable for implementing the C standard library?

Re:Amazing to use such a crude programming languag (1)

Wootery (1087023) | about 2 months ago | (#47766951)

Of course, the C standard library itself is hardly [wikipedia.org] a shining example of secure library design.

Re:Amazing to use such a crude programming languag (1)

Yunzil (181064) | about 2 months ago | (#47766741)

Yeah, they should have just invented Python in 1950.

It's safe to ignore anyway... (0)

Anonymous Coward | about 2 months ago | (#47762085)

We're talking about Google here. They use binary blobs and all sorts of non-free software and licensing. Plus they're generally creepy. So whatever they claim or even prove doesn't matter anyway. They don't exist. LALALA

Deja vu from 1998 (0)

Anonymous Coward | about 2 months ago | (#47762753)

Anyone remember the poisoned NUL byte: http://www.ouah.org/nullbyte.html

Address space randomization does not help. (5, Interesting)

Animats (122034) | about 2 months ago | (#47763103)

64-bit systems should remain safe if they are using address space randomization.

Nah. It just takes more crashes before the exploit achieves penetration.

(Address space randomization is a terrible idea. It's a desperation measure and an excuse for not fixing problems. In exchange for making penetration slightly harder, you give up repeatable crash bug behavior.)

Re:Address space randomization does not help. (1)

igomaniac (409731) | about 2 months ago | (#47763253)

1) If you make exploitation less likely than an asteroid hitting the earth, then for all practical purposes you can say it is prevented.
2) 'Repeatable crash bug behavior' doesn't matter; it will be repeatable if it is run under valgrind/AddressSanitizer or via a debugger, which is really all that matters to a developer. An end user couldn't care less about repeatable crashes and would prefer that the program occasionally/usually kept running.

Re:Address space randomization does not help. (0)

Anonymous Coward | about 2 months ago | (#47763257)

Address space randomization is a last-resort defense for when all else fails. If you think it means you can write unsafe programs and let the ASR catch it, then you are still a freaking idiot. A seat belt can harm you too, and not using one is more likely to have no effect at all... until that one day. In short: you are wrong (regardless of whether you rely on ASR or not).

Re:Address space randomization does not help. (1)

jones_supa (887896) | about 2 months ago | (#47763429)

Yes, ASLR somewhat works but is an afterthought. The ultimate solution would be to stop using computers which mix data and code adjacently, in other words get rid of the whole von Neumann computer architecture.

Re:Address space randomization does not help. (1)

tlhIngan (30335) | about 2 months ago | (#47765353)

Yes, ASLR somewhat works but is an afterthought. The ultimate solution would be to stop using computers which mix data and code adjacently, in other words get rid of the whole von Neumann computer architecture.

There are plenty of processors that are Harvard architecture out there (separate data/instruction memory). Though modern architectures do have a bit of Harvard in them (the separated instruction and data caches). And memory segmentation and permissions do help split code and data into separate areas.

The problem is that von Neumann makes computers extremely useful because you're able to treat code as data, so you can do fancy things like load a program off disk into memory and execute it, or load a program from a network device using any programmable protocol and run it. This only works because the OS treats the code text as data temporarily to load it off storage (local or otherwise) and then into memory. (After all, loading a program into memory consists of reading the executable off disk like you'd read a regular data file into memory, and then you need to run that code.) Heck, modern paging systems in an OS rely on it - reloading a memory page from disk doesn't care if it's code or data - the OS just sets up a new memory page to hold the contents, finds the location on disk, tells the disk driver to populate that memory with data, and on completion re-executes the failed instruction (or performs the pre-fetch).

Harvard architecture machines need to have a way to load their program information and pre-load data into memory, which is why traditionally they only run fixed program code (like DSP). Or have a von Neumann machine load the code into instruction RAM. (They're great for streaming and signals where the code doesn't change, but you're constantly passing data through the system)

Linux security is a joke.. (-1)

Anonymous Coward | about 2 months ago | (#47763411)

Open source + C = endless number of security holes.

Anybody with some skills can make some patches that cause very difficult to detect security holes to the system. That's why Linux is BAD choice for servers etc.

Actually Linux might be slightly better for development and desktop use where security is not super important.

That's why most bugs _are_ security bugs (0)

Anonymous Coward | about 2 months ago | (#47763883)

This clearly shows that any bug is potentially a security bug. It is just that some are a lot more obviously exploitable than others. A memory leak used to fill the heap so that it meets the stack?!

Three important facts to learn from this:
1. ulimit and setuid/setgid don't mix well. This one we need to fix in the general case, adding some extra limits to the ulimits(!) so that they cannot be too small. setuid/setgid already ignores a lot of crap from the linker.

2. complicated libraries are for desktop bling and have no place in tooling and plumbing. If it is setuid/setgid, it cannot link to complicated crap like full glib; we need a *safe subset* of these libraries, properly lobotomized to be safe to use in sensitive security scenarios like setuid/setgid.

3. *any* bug in a setuid/setgid binary is a security bug.

pkexec?? (1)

putaro (235078) | about 2 months ago | (#47764017)

Sorry, old Unix guy here. My first reaction was "What the F is pkexec and why is it running setuid?"

Yet another way to execute arbitrary privileged executables is yet another potential security hole. This dumb thing is apparently part of the "Free Desktop" but it's depended on by all kinds of stuff including the fricking RedHat power management. What's wrong with plain old sudo?

thanks redhat (0)

Anonymous Coward | about 2 months ago | (#47764361)

Hmm. How likely is that? It overflows in to malloc metadata, and the
glibc malloc hardening should catch that these days.

--
Florian Weimer / Red Hat Product Security

Brilliant work (1)

jgotts (2785) | about 2 months ago | (#47769777)

Don't make excuses for not fixing bugs. When you find bugs, fix them.

All software is buggy by definition because the entire stack from the moving charge carriers to the behavior of the person using the computer cannot be mathematically proven to be correct.

No matter what measures you as the hardware or software creator take, there will be bugs.

Don't make people angry at you or ridicule their bug reports because that's a major incentive for them to make you look foolish.
