Red Hat Introduces NX Software Support For Linux

abertoll writes "In this story at ZDNet, Red Hat has apparently added NX support to Linux. NX security technology is a hardware attempt at stopping malicious code." (We recently posted about Transmeta's announcement that its chips will incorporate the NX bit as well.)
  • by PiGuy ( 531424 ) <squirrelNO@SPAMwpi.edu> on Saturday June 05, 2004 @11:13PM (#9348122) Homepage
    What I fail to understand is the difference between this 'no execute' bit and the 'executable' bit in standard 386 protected mode. Does the 'executable' bit not cause an exception if the PC proceeds to pages without it set? Even then, protected mode also has a 'read-only' bit - isn't this set for code pages? And if not, why not?
    • by tepples ( 727027 ) * <tepplesNO@SPAMgmail.com> on Saturday June 05, 2004 @11:18PM (#9348141) Homepage Journal

      Standard 386 protected mode controls execute permission per segment, where CS (the code segment) is executable and DS (the data segment) is writable. However, many 32-bit operating systems use a so-called "tiny" (flat) memory model, setting CS = DS, and the 386 allows turning off read and write privileges per page but not execute privileges (if you can read a page in an executable segment, you can execute from it).

      However, true W^X (shorthand for "no segment is both writable and executable") support won't work for applications that depend on self-modifying code, such as JIT-compiling virtual machines for Java and .NET platforms.

      • by forkazoo ( 138186 ) <<wrosecrans> <at> <gmail.com>> on Saturday June 05, 2004 @11:59PM (#9348310) Homepage
        However, true W^X (shorthand for "no segment is both writable and executable") support won't work for applications that depend on self-modifying code, such as JIT-compiling virtual machines for Java and .NET platforms.

        data char* temp = new data char[len];
        executable char* code = new executable char[len];
        int function() = code;

        compile(javasrc, temp);
        copy(temp, code);
        function();

        From what I've heard, allocations will default to non-executable, but there will be some sort of API that allows executable space to be allocated on every OS that deals with NX bits. You will probably also see WinXP and the like with the ability to "Run this program in compatibility mode..." until the developer updates to deal with the tweaks made in the updates.
        • compile(javasrc, temp);
          copy(temp, code);
          function();

          And watch as NX::copy() has a huge overhead from going into kernel space and back.

        • From what I've heard, allocations will default to non-executable, but there will be some sort of API that allows executable space to be allocated on every OS that deals with NX bits.

          Fortunately, Unix already has an API for that: both mmap and mprotect have the PROT_EXEC flag (a minimal sketch follows below). There may be a few apps that get into trouble for not using it where it's needed, since on x86 it has effectively always been set up until now.
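
          For illustration, here is a minimal sketch of that mmap/mprotect approach: allocate the buffer writable, emit code into it, then flip the page to read+execute so no page is ever writable and executable at once. It assumes a POSIX system and an x86 target for the machine-code bytes; treat it as a sketch, not a hardened JIT.

          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>

          int main(void)
          {
              /* x86 machine code for: mov eax, 42; ret */
              static const unsigned char stub[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
              size_t len = 4096;  /* one page */

              /* Step 1: map the buffer writable (and NOT executable). */
              unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (buf == MAP_FAILED)
                  return 1;

              /* Step 2: "compile" into the writable page. */
              memcpy(buf, stub, sizeof stub);

              /* Step 3: drop write, gain execute - the W^X flip. */
              if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0)
                  return 1;

              /* Step 4: call the freshly generated code. */
              int (*fn)(void) = (int (*)(void))buf;
              printf("%d\n", fn());  /* prints 42 */

              munmap(buf, len);
              return 0;
          }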

      • NX is page-table level. It is very hard to make use of segment-based protection in operating systems using 32-bit flat paging modes (like all of the modern ones). Solar Designer produced some patches that try to do this, at least for a non-exec stack, but they require magic kernel-side exception fixups.

      • However, true W^X (shorthand for "no segment is both writable and executable") support won't work for applications that depend on self-modifying code, such as JIT-compiling virtual machines for Java and .NET platforms.

        I have heard this is a serious problem for LISP as well. I hope, for the sake of these platforms, that W^X, which seems to be the future for most operating systems, will have some sort of loophole for the neat and useful language features that aren't compatible with it.

    • the i386 has no hardware support for an "execute" bit. It just has a read bit and a write bit. If you have read access to a page then you can execute that code. The "NX" bit is the implementation of the "execute" bit, except when it's /set/ it prevents execution as opposed to the expected reverse, which is why it's called "NX" not "X" :)
      • People, do yourselves a favor and read the Intel specs. Please? There is, in fact, a bit for defining code segments. These code segments can be marked as read-only or execute-only. The problem (as I managed to wrangle out of people the LAST time this thing was posted) is that a data block can also be executed without exception. The NX flag merely prevents data pages from ever being executed as code.

        • by awkScooby ( 741257 ) on Sunday June 06, 2004 @04:20AM (#9349022)
          People, do yourselves a favor and read the Intel specs. Please? There is in fact, a bit for defining code segments.

          Linux, Windows, BSD, etc. don't use segments, but instead use paging. Intel has dragged their feet on adding NX support because the feature "already exists", but the reality is that hardly anyone uses segments.

          Ok, technically everyone uses segments -- they just create a single segment which covers all of the memory space. The GDT (Global Descriptor Table) must be configured when you switch to protected mode. Paging is optional.

          The NX flag prevents a page (typically 4 KB) from being executed. With all stack pages marked NX, buffer overflow attacks won't be able to remotely execute arbitrary injected code. I assume that an exception will be generated when an attempt is made to execute from an NX page, which will probably cause the running program to halt. So, remote exploits turn into DoS attacks.

          Buffer overflow attacks have been known about for decades, and solutions such as NX have been known for quite some time too. As has been mentioned elsewhere on /., this does not remove the responsibility of developers to write good, secure code. But, as history has shown, they will probably continue with the long standing practice of writing insecure code.

          NX will block buffer overflow attacks that execute injected code. NX will not be able to determine whether a program you choose to execute is good or evil. Viruses existed and managed to propagate back in the days before the Internet or even networking were in common use. NX won't solve all security problems, but it is a good tool to help reduce the possibility of remote exploits.

          The NX flag isn't new, it's just new to the x86 world. Kudos to AMD for being the first to add this to the x86!

    • The bits you're referring to are the execute permission in segment descriptors.

      The NX bit operates at page level - within segments. It is bit 63 of the page-translation-table entry, and is only available in PAE mode. It is enabled by the NXE bit of the EFER ("Extended Feature Enable Register"), and it applies to all execution rings.
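
      For reference, a small sketch of those bit positions as C constants. The values follow the commonly documented AMD64 layout (EFER is MSR 0xC0000080, NXE is bit 11) and are given here as an illustration rather than an authoritative definition; the helper is hypothetical.

      #include <stdint.h>

      #define MSR_EFER      0xC0000080u    /* Extended Feature Enable Register */
      #define EFER_NXE      (1ull << 11)   /* No-Execute Enable */

      #define PTE_PRESENT   (1ull << 0)
      #define PTE_WRITABLE  (1ull << 1)
      #define PTE_USER      (1ull << 2)
      #define PTE_NX        (1ull << 63)   /* only honoured in PAE/long mode with EFER.NXE set */

      /* Hypothetical helper: build a data-only page-table entry - readable/writable, never executable. */
      static inline uint64_t make_data_pte(uint64_t phys_addr)
      {
          return (phys_addr & 0x000FFFFFFFFFF000ull) | PTE_PRESENT | PTE_WRITABLE | PTE_NX;
      }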
    • This is just an attempt by the hardware developers to patch problems made by the software developers. Ultimately in the end, we will lose performance because they are adding overhead to general processing. The software developers (or companies) should be held responsible. It's no different than trying to patch a patch a patch, etc...

      This is why Linux is so efficient; bugs are corrected in the kernel and recompiled for the new releases. It's a much better solution than adding code bloat or processor o
      • I beg to differ.

        All modern architectures implement all three protection bits (read, write, execute). It should have been implemented a long, long time ago, and you definitely cannot emulate it perfectly in software.
        I don't know why it wasn't implemented from the beginning, or at least when the 386 was released, but it was sorely missed by everyone working on improving the security of an OS. I guess Intel didn't think that this architecture would survive into the 21st century.

        So adding the NX is a long
      • Ultimately in the end, we will lose performance because they are adding overhead to general processing.

        Point taken, but if NX cuts down on the worm/virus/virus notice email we get because of infected Windoze systems, it'll be a performance boost for us UNIX users.

      • by fatphil ( 181876 ) on Sunday June 06, 2004 @05:48AM (#9349247) Homepage
        As a long-time happy Linux user, but also a kernel author (not x86 though, C80), I can't share your positive attitude towards Linux on this issue.

        Linus sloppily decided to avoid _almost all_ of the protection mechanisms that the 386 makes available to the system. That's why you can smash the stack for fun and profit. He chose to let CS access the same pages as DS (and SS, ES, FS, GS), when he could have allocated some linear addresses as code-only, and others as data-only. After that you simply need to ensure that no CS was ever given access outside the executable range, and no other segment was given access in the executable range.
        And you can ensure this - as you, the kernel, are entirely in charge of setting up user-space descriptors.

        To do so would have added a bit more complexity to the memory management (with lower-case letters) part of the kernel, but would have prevented all stack smashing and heap smashing attacks.

        Linux is not _technically_ a good OS at all. It's simply _practically_ (for people like me) a good OS.

        Tanenbaum is still right. (And when Tanenbaum says "run 20% slower" he means "take up 0.6% of the CPU rather than 0.5% of it, thus giving apps 99.4% of the CPU rather than 99.5%." But that's another rant.)

        FP.
      • This is just an attempt by the hardware developers to patch problems made by the software developers.

        Well, this is pretty low-level hardware operation-type stuff we are talking about. If you call this feature a "patch" that shouldn't exist, then you could say the same thing about processors knowing how to do almost anything at all... Those lazy developers don't really NEED a subtraction function on the processor. Those software guys don't NEED a BIOS to interface with the hardware. Those software guys

  • by celeritas_2 ( 750289 ) <ranmyaku@gmail.com> on Saturday June 05, 2004 @11:15PM (#9348125)
    I personally can't wait until some great evil makes a virus harnessing NX to, say.....block the execution of MSIE.....widespread luser panic is always fun
  • by zoloto ( 586738 ) on Saturday June 05, 2004 @11:15PM (#9348130)
    And I always wanted processor support for the Evil Bit. Dang.
  • Remember kids... (Score:2, Insightful)

    by Anonymous Coward
    ... NX support is not an excuse to write potentially unsafe code.
    • by Moraelin ( 679338 )
      That is a great lesson, no doubt. One that more people would do well to keep in mind.

      However, bugs happen when writing code.

      Worse bugs happen when someone modifies code they don't understand. Some code depends on non-explicit assumptions, such as an array size being already checked somewhere else, or some buffer being already initialized somewhere else. The maintenance programmer sees the code as if through one of those cardboard tubes from toilet paper rolls, so he/she can easily miss such dependencies. When
  • Darn. (Score:1, Offtopic)

    by sploo22 ( 748838 )
    I noticed Slashdot was down for a few minutes just prior to posting this. I'll assume they were upgrading their servers.

    So does this mean I'm out of luck with all those shellcodes I keep posting in my comments?
  • by Timber_Z ( 777048 ) on Saturday June 05, 2004 @11:17PM (#9348139)
    Windows has supported that for years.

    Why just yesterday it stopped executing for no particular reason.
  • There you go (Score:4, Insightful)

    by Anonymous Coward on Saturday June 05, 2004 @11:21PM (#9348157)
    ... all those fellow /.'ers who cried out loud "we don't want no DRM" when they first read the titles of the stories about NX support in upcoming procs, without even bothering to understand WTH NX is for, and kept and kept writing idiotic comments about how evil Windows must be because it now supports NX (which they seriously thought was some form of ah-so-evil DRM feature)

    See, NX is a good thing, now even Linux has support for it :) I am happy that you will now have an opportunity to open your minds to this fine new technology.

    Cheers.
    • One step at a time (Score:1, Interesting)

      by SoSueMe ( 263478 )
      This, to me, seems like just one more slow, inexorable step towards "Trusted Computing".
      • "Trusted Computing" has a lot of useful applications, such as in creating large distributed computing networks and online voting, using "trusted third parties" (TTPs). That is, provided it has support for multiple TTPs which are selected by the end user.
      • Then you must have no clue what it is.

        It stops you from accidentally executing your data (e.g. buffer overflow onto stack).

        OpenBSD has it. It's a security enhancement. Even if you're running Windows, you don't want buffer overflows. It's good. It's not DRM.
      • It has little or nothing to do with "Trusted Computing". The OS is free to set or clear the NX bit as it sees fit, including at the command of the user. "Trusted computing" is more like having a ring -1 and the OS being powerless to do anything about it.

    • Re:There you go (Score:1, Offtopic)

      by Jeff DeMaagd ( 2015 )
      Well, hey, now maybe we can hope for some other distribution to include this, hopefully one that doesn't suddenly yank their maintenance support out from under you only sixteen months after introducing a product?
    • I am happy that you will now have an opportunity to open your minds to this fine new technology.

      Yea, right, open my mind. Haven't you ever heard of cognitive dissonance? It means I can hold two contradictory thoughts in my head and not be bothered. So Microsoft is evil for including NX, and Linux is awesome for including NX. What do you have to say to that?
  • by xmas2003 ( 739875 ) on Saturday June 05, 2004 @11:23PM (#9348163) Homepage
    I just hope that with all the overclockers out there, they don't add support for the Halt and Catch Fire Instruction [ic.ac.uk] ;-)

    Seriously, the NX stuff is a "good" thing to add to slow down malicious code - the only thing better would be a HULK Instruction [komar.org] which would SMASH Puny Human malicious code ... ;-)

  • A cross between... (Score:5, Insightful)

    by 3) profit!!! ( 773340 ) on Saturday June 05, 2004 @11:29PM (#9348185) Homepage
    This "NX" stuff to separate data and instructions is sort of like crossing current CPUs' Von Neumann architecture [wikipedia.org] with a Harvard architecture [wikipedia.org] type of chip, where the storage is actually separate from the executable code.
  • Fine No Execute (Score:5, Insightful)

    by oldstrat ( 87076 ) on Saturday June 05, 2004 @11:29PM (#9348186) Journal
    This is all well and good, but it is certainly not a panacea.
    No execute means that somewhere, somehow there will be an override and the day the override is used the viruses will follow by tricking the user (and explaining why this is needed), and bingo, it's in.

    And of course I could be completely wrong in that this no execute bit does not exist on older processors and that in itself is going to cause problems. Intel has xbit on newer processors, but what about AMD, VIA, whoever else? Is this part of the Intel half of the WinTel duopoly?

    I think it's probably a good idea, but I'm suspicious.
    • AMD has No Execute on Athlon 64 processors, so it's certainly not an Intel specific thing. As the Slashdot blurb mentions, Transmeta recently added it as well. But no, older processors do not have No Execute on it.
      • Re:Fine No Execute (Score:5, Interesting)

        by explorer ( 42481 ) on Saturday June 05, 2004 @11:57PM (#9348305)
        Right, all AMD K8-class processors have the NX-bit already. And despite the Intel-centric spin on the ZDNet article, the fact is that Intel has only announced that support for it is coming in future Intel parts. Unlike AMD, it doesn't appear you can buy any CPUs from Intel that support the NX bit today.

        In other words, Intel is playing catch-up.

        And note the comment in Ingo's linux-kernel posting that refers to the "existing NX support in the 64-bit x86_64 kernels ... written by Andi Kleen". I.e. NX-bit support was already available to AMD64 owners running 64-bit linux kernels.
    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Saturday June 05, 2004 @11:46PM (#9348260)
      Comment removed based on user account deletion
      • Pushing parameters onto the stack before calling a function sounded rather kludgy to me when I first learnt about it years ago.

        Why don't people use different stacks for return addresses and parameters/variables?

        That way one reduces the chances of "running arbitrary code of the attacker's choice". In the event of a bug, the attacker is more likely to only be able to "overwrite/specify arbitrary parameters/variables for existing functions". Which seems orders of magnitude safer.
        • You mean like FORTH?

          It's not really a bad idea, but I'm not at all sure how easy it would be to implement with the current compilers & code-base. I suspect quite difficult. The NX bit is probably transparent on systems that don't have the capability, which a dual stack system wouldn't be. (OTOH, a dual-stack system wouldn't need to depend on new CPUs.)
          • You mean like FORTH?

            Exactly like FORTH! The fact that FORTH runs on nearly anything shows that hardware support isn't required, but having explicit support for a data stack might be interesting for performance.

            Although it would be a bit of work, with a modified kernel syscall and gcc, it could be implemented on existing hardware.
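
            As a toy illustration of the two-stack idea (entirely made up for this discussion, not taken from any real FORTH): a tiny interpreter that keeps return addresses on one stack and parameters on another, so overflowing the data stack cannot clobber a return address.

            #include <stdio.h>

            enum op { OP_PUSH, OP_ADD, OP_CALL, OP_RET, OP_PRINT, OP_HALT };

            int program[] = {
                OP_PUSH, 2, OP_PUSH, 40, OP_CALL, 8, OP_HALT,  /* main: push 2, 40; call sub at 8 */
                0,                                             /* padding */
                OP_ADD, OP_PRINT, OP_RET                       /* sub: add, print, return */
            };

            int main(void)
            {
                int data[64], dsp = 0;   /* data stack: parameters and results  */
                int rets[64], rsp = 0;   /* return stack: return addresses only */
                int pc = 0;

                for (;;) {
                    switch (program[pc++]) {
                    case OP_PUSH:  data[dsp++] = program[pc++];            break;
                    case OP_ADD:   dsp--; data[dsp - 1] += data[dsp];      break;
                    case OP_CALL:  rets[rsp++] = pc + 1; pc = program[pc]; break;
                    case OP_RET:   pc = rets[--rsp];                       break;
                    case OP_PRINT: printf("%d\n", data[--dsp]);            break;
                    case OP_HALT:  return 0;
                    }
                }
            }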

          • Well the gentoo guys probably won't mind recompiling everything ;).

            As for FORTH - I heard it's still vulnerable to buffer overflows.

            Plus if you're not careful the fact that typically code=data in FORTH just creates the same problem in the next level. Joe Programmer may not know to use different "dictionaries" or whatever they call those, to isolate things.

            A while back I crashed a forth webserver on my first try (zhttp). Sent a single quote to a http basic-auth password prompt... It really crashed too. Do
            • I didn't mean use FORTH as the language. That's just slightly better than using assembler. I meant "Is this the kind of system you mean?". Actually, if one wanted to implement this, it would take more than two stacks for security. One stack would be solely for executable addresses, one stack for integers, and one for floats. (One could probably get away with mixing characters and integers, but string references would need to be to executable addresses...still, each character retrieved from the string
              • Well having a separate address stack may make it easier for the CPU to figure out program flow. So the performance hit might not be that bad.

                As for the other stuff, I think just a single general parameter stack may be good enough - let the programmers figure out what they want to pass to routines and how they want to do it.

                Shouldn't be too difficult to detect if the stacks collide. Start and current pointers for each of the two stacks, or something similar. Load in the pointers for each context switch.

                Th
      • It gets substantially better when the system also messes with your memory layout. There's also a patch to randomize the locations where libraries and sections of the binary are loaded. If you don't know where the code is, and you don't know where the strings are, and you can't insert any code that will be executed, you have a very small chance of exploiting the program, and you only get one chance before the program crashes. And next time, the libraries will be loaded differently.
    • Re:Fine No Execute (Score:5, Informative)

      by 0racle ( 667029 ) on Sunday June 06, 2004 @12:29AM (#9348434)
      NX is not a new thing, and neither Intel nor AMD did it first. SPARCs, UltraSPARCs and Alphas have had this for some time, and it wouldn't surprise me if it's in the Power chips as well.

      As far as it not being on older processors, I assume you mean older ia32's. Surprisingly, this was brought up in an MS TechNet event I was at on Thursday. I don't know all the details, but the presenter said it was in older chips, at least back to the original Pentium if I remember, but with the way ia32 chips do paging, it was never implemented in the OSes until recently. I can only assume the Athlon 64, Opteron and Itanium do this differently, but don't quote me on that.

      Personally, I'm just wondering exactly what ia32 chips will Linux and OpenBSD use NX on.
    • Re:Fine No Execute (Score:4, Informative)

      by kasperd ( 592156 ) on Sunday June 06, 2004 @04:11AM (#9348993) Homepage Journal
      somehow there will be an override and the day the override is used the virus

      First of all you shouldn't expect the NX bit to do any good against a virus. A worm OTOH might be stopped by the NX bit. OK, I'll assume you mean the worm would use a way to override it. If it could be disabled per executable, like exec-shield on Linux, you could only exploit vulnerable programs that have the security feature disabled. So if the vulnerable service is running with the security feature enabled, you cannot disable it unless you already control the machine. So it doesn't help a worm gaining control.

      What are the chances the vulnerable service would run with this security feature disabled? Not large, because you would only disable it if the service didn't work otherwise. And the number of programs breaking, in the case of Linux, is not large. Fedora Core 1 has exec-shield, which does a best effort at implementing this without specific hardware support. Arjan van de Ven explains [google.com] that hardly any program broke. Ingo Molnar explains [google.com] in a bit more detail that the X module loader was the only program breaking. (Some other programs broke for other reasons.) So when it is only one program breaking, you fix it, rather than starting to disable this security feature.

      However as Linus has explained, there are ways to exploit a vulnurable service in spite of NX. This specific attack relies on using /bin/sh, which means it wouldn't work against Windows. But anybody who knows as much about Windows as Linus knows about Linux would surely be able to come up with a similar attack against a Windows service. For example there is probably a function you can call to change the protection bits on a memory range. So you first fill code in the buffer, which cannot be executed at the moment, the return address you overwrite with a pointer to this function call, and you provide it parameters specifying to make the buffer executable. The return address from the function call then just needs to point to the buffer.
  • How would Just-in-time compilers and interpreters work? If I understand this correctly, you can't write data to executable areas of memory, but then how do you run instructions that are written to memory!?!? Could someone explain?
    • Um. Well, obviously there will be APIs to mark data regions as executable or to allocate executable data regions. The latter would be better, because then you could better ensure that overflows from non-executable data regions won't spill into executable data regions.
    • Here you go... (Score:3, Informative)

      by SoSueMe ( 263478 )
      Some legitimate programs, such as Java compilers that perform just-in-time code generation, execute instructions within data areas -- and will have to be rewritten for Service Pack 2. But the most common exploiters of x86 architecture's porous program and data boundaries are applications (called, as a matter of fact, exploits) that perform buffer overrun attacks -- one-two punches that first flood a program's input area with more data than it's designed to handle, then deliver a poisonous executable payload
      • Re:Here you go... (Score:5, Informative)

        by m_pll ( 527654 ) on Sunday June 06, 2004 @02:50AM (#9348798)
        Some legitimate programs, such as Java compilers that perform just-in-time code generation, execute instructions within data areas -- and will have to be rewritten for Service Pack 2.

        Of course, if those programs were written correctly in the first place they wouldn't need to be fixed to work on NX platforms.

        Win32 has always had PAGE_EXECUTE flag [microsoft.com], and if you wanted to execute dynamically generated code you were supposed to include this flag when allocating memory [microsoft.com] (or use VirtualProtect afterwards).

        Most people didn't bother with PAGE_EXECUTE because it wasn't enforced on x86. But technically it's always been required.
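
        For illustration, a minimal Win32 sketch of doing it "correctly" as described above: allocate PAGE_READWRITE memory, emit the code, then switch it to PAGE_EXECUTE_READ with VirtualProtect before calling it. The machine-code bytes assume an x86 target; treat this as a sketch rather than production JIT code.

        #include <stdio.h>
        #include <string.h>
        #include <windows.h>

        int main(void)
        {
            /* x86 machine code for: mov eax, 42; ret */
            static const unsigned char stub[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

            /* Allocate writable (not executable) memory. */
            void *buf = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (buf == NULL)
                return 1;

            memcpy(buf, stub, sizeof stub);

            /* Flip the page to executable, and flush the instruction cache to be safe. */
            DWORD old;
            if (!VirtualProtect(buf, 4096, PAGE_EXECUTE_READ, &old))
                return 1;
            FlushInstructionCache(GetCurrentProcess(), buf, 4096);

            int (*fn)(void) = (int (*)(void))buf;
            printf("%d\n", fn());  /* prints 42 */

            VirtualFree(buf, 0, MEM_RELEASE);
            return 0;
        }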

    • by Anonymous Coward
      The JIT Compiler writes its output into a writeable data segment. Then the JIT controller makes an operating system call to change the mode of that segment from "writeable data" to "unwriteable code". Then the JIT interpreter runs the "unwriteable code".
      • And now s/JIT [\w]+/malicious code/g. Where's the protection? IMHO this should be set on a case-by-case basis (a'la chpax), so that you *know* you get executable data.
        • There are a lot of people who don't seem to understand this. A buffer overflow usually happens when a person hacks into A LEGITIMATE RUNNING NETWORK PROGRAM THAT HAS A BUG. For example, a bug like this was just found in the CVS server. There are likely dozens of exploits like these that the general public doesn't even know about, that people are using to spy on other computer users. It is a serious security issue that can even plague the computer savvy.

          On the other hand, if a person doesn't know the risks

  • by dekeji ( 784080 ) on Saturday June 05, 2004 @11:49PM (#9348271)
    Calling it a "technology" I suppose detracts from the fact that the lack of an executable bit in x86 page tables is a deficiency. You see, this "feature" has been around since, oh, the middle of the last century, and many processors other than x86 have supported it without even considering it worth mentioning.
    • Mod Parent Up (Score:2, Insightful)

      by Anonymous Coward
      Yes, I sincerely agree. Unfortunately this usage error of the word is now so widespread, I fear nothing can be done anymore.

      Looks like only the wise understand the distinctions among "tool" and "feature" and "technique" and "technology", but the rest of the people who gather their world knowledge from buzzword-driven press articles will keep thinking that Visual Basic is a "technology", as well as Java.

      Actually it would be interesting to discuss what the scopes of these 3-4 concepts should be in the area
    • A friend of mine with a lot of experience with x86 assembly claims the architecture already supports non-executable memory areas anyway. I wonder if he can find a reference.. maybe it's just not fine-grained enough (i.e. per-page) to be useful?
  • I'm Captain Jonathon Archer of the starship, Red Hat Enterprise, NX-01 class security. ;-)

  • by l0ungeb0y ( 442022 ) on Saturday June 05, 2004 @11:52PM (#9348284) Homepage Journal
    "AMD's Athlon 64 and Opteron processors have had NX since their debut, though the extra bit won't do anything on a Windows XP system until you obtain and install Service Pack 2. Intel is expected to add NX (or XD) to the next generation of its 90-nanometer-process Pentium 4 "Prescott" CPUs -- bundling the security enhancement with a larger 2MB Level 2 cache and perhaps a faster 1066MHz front-side bus -- in the fourth quarter of this year."

    This year has truly been AMD's year to guide the microprocessor market. Remember not so far back when everything AMD did was a response to Intel? This year it's been Intel responding to AMD. I hope this trend continues, as it shows that the so-called WIntel stranglehold is starting to crack and that it is possible for the competition to assume a leading role in the market. Now hopefully IBM has something in the works for its PPC/Power lines, as they've been working closely with AMD and this processor feature is something that every networked system could use.

    • At this point, it doesn't really matter, because they're all going to screw us over with Trusted Computing soon enough.

    • Remember not so far back when everything AMD did was a response to Intel?

      Including the stupid stuff, like switching to "slot" processor/mobo interfaces...

      Well, on the plus side, AMD learned very well from their mistakes... After that, they've stuck with "Socket A" this whole time, while Intel continued the madness, switching to a different socket every month, it seems.

      Then again, AMD64 seems to have put them back in the mad mindset again, having 3 different sockets for their parallel chip lines. Hope th

  • by Anonymous Coward on Sunday June 06, 2004 @12:03AM (#9348324)
    This new patch is to support NX in 32-bit processors or 64-bit processors running in 32-bit mode.

    The 2.6.6 kernel already included an NX patch for x86_64. Details are in the "Non-Exec stack patches" LKML thread here [seclists.org].
  • Now it is time for you, young grasshopper, to learn as well.
    translation:
    Malicious code executing itself via a buffer overflow is actually one of the lesser evils in the virus world. Most users will gladly allow anything to run on their box, especially if it does something cool (time, weather, cutesy things, etc), and with everyone being root on Windows boxes, this means the program can do whatever the hell it wants and windows won't say anything/much.
    The NX bit is great, especially for servers, where generally the only kind of attack is a buffer overflow. Like I said, the processor has learned well, but the users must learn also.
  • by The_Bagman ( 43871 ) on Sunday June 06, 2004 @12:28AM (#9348428)
    This is basically an "execute / no-execute" bit in the page-table entries. It means the OS can mark portions of an application's virtual address space as non-executable - such as pages in the heap or the stack. It'll help against buffer-overflow attacks that put new assembly code in the stack and return into it. It won't help against buffer-overflow attacks that return into existing code (e.g., to do a system call). It won't help against worms that take advantage of meta-character expansion vulnerabilities. It won't help against scripting flaws (such as javascript, active-x, or visual-basic/outlook vulnerabilities). It won't help against weaknesses in the OS itself.

    Think of this as raising the bar. Of course, the "clever" attackers will still find flaws, and still write code for the script kiddies to use to exploit them.
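
    To make that distinction concrete, here is the classic bug shape being discussed (illustrative only, not from the thread). With an executable stack, an attacker who overflows "name" can both overwrite the saved return address and supply the code it jumps to; with NX the injected bytes can no longer run, but redirecting the return address into existing code (return-into-libc) is untouched.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable function: NX does not fix the missing bounds check,
       it only stops the overflowed stack bytes from being executed. */
    void greet(const char *input)
    {
        char name[64];
        strcpy(name, input);   /* overflow here overwrites the saved return address */
        printf("hello, %s\n", name);
    }

    int main(int argc, char **argv)
    {
        greet(argc > 1 ? argv[1] : "world");
        return 0;
    }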
  • http://zdnet.com.com/2100-1104-5227102.html [com.com]:

    In addition to the NX work, Intel this year released prototype wireless network support--albeit nearly a year after full-fledged support was available in Windows.

    Don't they mean that Linux had new wireless network support this year? Or was Intel the wireless support contributor for Linux? Either way I think the sentence is in error. Though I'm probably just being pedantic for raising it.


  • ...of the Evil Bit?

  • Exciting news... (Score:4, Informative)

    by rice_burners_suck ( 243660 ) on Sunday June 06, 2004 @01:31AM (#9348647)
    I'm glad support for this is finding its way into Linux. I think OpenBSD has had this for a while now, as part of ProPolice... I'm not sure about that though.

    From what I've read, it certainly makes sense to break a few apps for this functionality, as you can always run them in a build without it. Things should be a lot safer, as crap like buffer overruns from carefully formatted input strings can no longer contain executable code.

    I think this should be available for individual programs to set the NX bit on memory pages that should only contain data, so, for example, when you download a file, it is impossible to execute it (say, while in memory) until you save it and explicitly set the execute bit. In other words, there is a completely non-executable path for all untrusted code from its inception until the user explicitly makes it run. Now, when some Joe Luser clicks an email attachment virus made for Linux, if this ever happens, it will be very difficult for him to make it run, and hence, it won't. Add to that the protections inherent in all Linux systems (multiuser permissions, heterogeneous configurations, etc.), and it's very unlikely that Linux users will experience the kind of crap that Windows users have to put up with on a daily basis, even if Linux somehow gains a huge market share on the desktop.

    These are exciting times.

    • This is not DRM and does nothing to stop downloaded code from running.

      The NX bit is used to mark parts of memory of a running program. Certainly it will mark anything allocated from the heap, such as the memory used to store a piece of data downloaded from the net and then written to the disk. However, it does not have any magic "sticky" property that stays with the data. If the downloading program thinks that data is a program that should be run, rest assured that it will have the capability of saving it
  • It might be a bit 'off topic', but the drawback of NX is that self-modifying code is no longer supported.

    Although, I don't know if SM code is supported on Linux (under Windows you had to use VirtualProtect with PAGE_READWRITE). Anyway, it's a bit of an 'outdated' technique - lots of cache-miss issues, and Intel was against SM code - although I used it a lot a long time ago. So it shouldn't be a real issue.

    The question is: can NX be disabled by the (root) user under Linux, as it will be under WinXP SP2?
    • by Animats ( 122034 ) on Sunday June 06, 2004 @03:10AM (#9348845) Homepage
      Modern x86 CPUs go to incredible lengths to support self-modifying code. PowerPCs, by comparison, don't support it at all; they have separate instruction and data caches. If you modify code in a PowerPC, you have to flush the instruction cache or it won't work. There are system calls for this under the MacOS. And nobody notices. In fact, the PPC 601 didn't even have the instruction cache flush instruction. For some years, Linux for PPCs had to flush the cache by preventing interrupts and loading a big block of junk data to invalidate the entire data cache. About the only time this is done is during fork/exec sequences.

      There's some history of self-modifying code from the 16-bit DOS world, but it's probably time to kill that off.

      It's been a long time since self-modifying code improved performance. Today, self-modifying code on an x86 machine works something like this.

      • The processor is going along, fetching ahead perhaps ten instructions, and executing as many as possible simultaneously. Ten to twenty instructions may be in the pipelines. The retirement unit is running ten to twenty cycles behind the execution units, committing results back into cached memory and registers once all possibility of trouble has passed.
      • Trouble usually comes in the form of mispredicted branches, which are handled reasonably efficiently using cached bits that record which way the branch went the last few times. Less common is an exception, like a floating point overflow, which looks like a forced branch. Least common is a modified instruction.
      • The superscalar x86 machines (Pentium Pro/2/3/4 and later) check for modified instructions. Storing into an instruction immediately ahead will be handled properly. People on the Pentium Pro team sweat blood over making this work right. And it does work. But not rapidly. The retirement unit views the instruction as an "operand" for collision detection purposes, so a change to that "operand" invalidates all the results that depend on it.
      • The CPU then has to deal with the mess. Retirement stops. Instruction fetching stops. The pipelines are flushed. The functional units are idled. Instruction fetching is backed up to the modified instruction and restarted. The CPU pipelines refill with the new program. After a few tens of cycles have been lost, instruction execution is moving forward again.
      • In AMD land, it's even worse. AMD's approach to superscalar CPU design involves expanding instructions into a RISC-like fixed length form at cache load time. Storing into an instruction not only requires flushing the CPU, but the whole block of instructions has to be reparsed.

      So, in general, self-modifying code is not going to help performance. Generating blocks of code and then making them executable is fine, but changing code you're about to execute went out with "ALTER paragraph-name TO PROCEED THROUGH paragraph-name" in COBOL.

    • by Alan Cox ( 27532 ) on Sunday June 06, 2004 @05:21AM (#9349173) Homepage
      Under Linux at least you can ask for executable mapped pages. This is what the fixed X loader does for x86 now. Most non-x86 processors have execute bits on page table entries, and POSIX/SUSv2 therefore has a PROT_EXEC bit in mmap so you can say "I want to run this"
    • More confusion with DRM.

      Yes Windows supports this with a call to indicate that NX should be turned off on allocated pages. They added this because they wanted Windows to work on non-Intel processors at one time. Linux has a similar ability or it would be impossible to make exec work on such processors. The problem is that apparently programs don't bother calling this when they want memory for code because it is not necessary on Intel.

      For compatibility with such programs it will probably be necessary to hav
  • Considering the current Linux architecture, I really don't see a problem with what's proposed... especially if the chances of breaking things are almost nil. It doesn't seem a far stretch given the way things currently run in Linux anyway, and developers can work around any problems that may arise. This will help to ensure that Linux remains one of the most promising operating systems available, even more so than it is already.

    I saw mention in the linked article that Microsoft plans NX support in th
  • PaX (Score:3, Insightful)

    by XNormal ( 8617 ) on Sunday June 06, 2004 @02:52AM (#9348806) Homepage
    The PaX [grsecurity.net] patch effectively implements this feature on older x86 processors that don't have hardware NX support. It takes advantage of the fact that data and code have separate TLBs (page-table caches).

    It comes with a pretty high performance overhead, though: a page fault will occur on every TLB miss, whereas normally entries are just loaded from the page table in main memory.
  • How does this affect C++ compilers that generate vtables of class function pointers? Does that mean we'll all need new compilers (or updated run time libraries), not just an API call that we'd only use if our code actually needs to execute from mallocated memory?
  • by RAMMS+EIN ( 578166 ) on Sunday June 06, 2004 @03:04AM (#9348832) Homepage Journal
    Why do we need a per-page NX bit if the write and execute permissions are already set for the segment?

    Even on the 286 (running in protected mode), code segments are executable, but cannot be writeable, and non-code segments can be writeable, but not executable. I think that's basically what you want - non-executable data, and non-modifiable code (of course, the code needs to be written to memory once, but you can make it non-writeable before starting execution).

    So how come we also need an NX bit on pages (knowing that pages can only be accessed if there is a segment that references them)? Do our operating systems simply ignore the security that the segment permissions provide, and if so, why? Why is per-page control so much better than per-segment control?
  • by Anonymous Coward
    Grsecurity/PaX users have had this on ALL the platforms for a couple of years already.

    Grsecurity/PaX has had a few hundred more security enhancements over the stuff the articles here are now talking about. So what's the fuss? Hah.

    Btw, the development of Grsecurity (which is the best [most secure, most effective, easiest] way to make the Linux platform secure) has already stopped and the project will officially die tomorrow due to the lack of sponsors.
  • When is that to be integrated into the CPU, and supported by all OS's?

    Just so I know when to stop buying hardware and hoard older equipment that isn't crippled.....
  • The IA32 CPU architecture defines 4 protection rings, with ring 0 being the most privileged and ring 3 being the least privileged. This type of protection is not used in modern operating systems, though, because it involves segmentation. What is used instead is the page descriptor's R/W bit and the user/supervisor bit.

    Instead of having just R/W and user/supervisor bits, the page descriptor could have separate ring information for each type of access (read/write/execute), as well as the ring level of the page.
