Don't Forget That Worms Happen Everywhere

friday2k writes "Securityfocus has a nice column on Worms and their origin in 1988. It explains what everybody should never forget. We have dealt with *NIX worms (Sadmind, li0n, ...) and they will come back again. Maybe then the MS fanatics will laugh and say: didn't we always tell you Open Source is insecure (too?) ..."

  • The major issue is not whether Linux can have worms. The major issue is that Microsoft products seem to be of very low quality. Extremely poor security is only one aspect of that.

    No Linux email programs or word-processing programs have the authority to take over the entire operating system. Microsoft products sometimes do.

    Many of the security bugs in Microsoft products seem to come from sloppy programming. The open source world would have a difficult time being as sloppy.

    The popular Linux programs give a general impression of quality, and of sincerely wanting to do a good job. Microsoft programs give the general impression (to me) that Microsoft wants to give as little as possible to the customer, so that the customer will feel motivated to upgrade.
    • HOWEVER, it's not fair to snicker if the 'other' operating system gets struck by a worm. There were many Unix-based worms also; remember the buffer overflow hole BIND had?

      So what happens if the BSD TCP stack is found to have such an overflow error? A worm exploiting it could automatically infect just about ALL the systems I can think of; who doesn't use BSD's stack today?

    • No Linux email programs or word-processing programs have the authority to take over the entire operating system.
      Really? Great! I'm going to email you a new version of vim. Make sure you run it as root. Don't worry, it won't have the authority to take over the entire operating system.
      • Yeah, right: run as root *and* run untrusted code. Sounds like a typical Windows user executing an email attachment to me. Informative my ass... more like typical M$ thinking.

        This is why we create user accounts, and why we run suspicious code under an unprivileged account in the first place. Are you going to send the source code with that vim? How are you going to hide the exploit? Geez... I'll bet you're one of those people accessing Slashdot through IE, right?

  • Difference (Score:3, Insightful)

    by The Ape With No Name ( 213531 ) on Wednesday August 15, 2001 @02:52PM (#2116677) Homepage
    A *nix sysadmin is less likely to let a machine go unpatched, in the best of all possible worlds.
    An NT/2000 sysadmin is a secretary who reboots when the internet thingy stops hoogjamajigging, in the best of all possible worlds.
    Seriously, in tracking down a couple of thousand hosts on campus that had Code Red, I have never run into such righteous indignation over a simple lecture on basic system maintenance like patching. Of course, many of these users/sysadmins were dumbasses who installed Win2K Server because they could, not because they had to. Three machines in one room were being used as everyday workstations and weren't supposed to be offering services for any particular use by the office. Mind you, the services were still offered. Hit the average Code Red machine with your web browser and you will see the default webpage.
    • Let's not be so mean to the secretary, shall we? "Patching" MS is a pain that often breaks unrelated services. Caution is not always folly or sloppiness; sometimes it's forced by inferior software. Upgrading Debian is a two-command operation that produces far fewer headaches:

      apt-get update
      apt-get upgrade

      Another great difference that should be accounted for is the ease of learning how to run Linux. Oh sure, it looks harder, but the information is available and it's SO MUCH EASIER to really know what you are doing than it is to trust a particular vendor. Grief, it's hard to keep a single MS box running. The cloud of BS that MS keeps its users under is awful and we should be nicer to those suffering there.

  • by webmaven ( 27463 ) <webmaven@nOsPAM.cox.net> on Wednesday August 15, 2001 @03:02PM (#2118882) Homepage
    I think that the real reason MS systems were hit so hard by Code Red and its descendants is that there is a real difference in the culture of the respective developer communities.

    There is no reason why all those home systems and corporate desktops should have IIS running in the first place. There is also no reason (generally) for a home linux system to be running, say, BIND or wu-ftpd.

    So why does Microsoft encourage the installation of unnecessary software on its systems, and why doesn't it make it easier to not install those services in the first place?

    It comes down to culture. Unix-like operating systems are minimalist and modular, because the development communities appreciate elegant code (not necessarily elegant interfaces).

    Whereas Microsoft prizes a DWIM (Do What I Mean) approach, which encourages adding functionality 'just in case', as Microsoft seems to think that actually asking a user to install a component is a failure on their part.

    In the long run, elegant, minimalistic code is easier to understand, and therefore easier to secure (examples are Sendmail vs. qmail, or BIND vs. djbdns).
    • Actually, I particularly enjoy having BIND running locally. Since I fired it up:

      1) I haven't had outages because my @home DNS servers have gone to lunch, and

      2) I've gotten rid of a lot of junk after setting up some bogus entries for doubleclick.{net|com} and x11.com.

      I agree that there's no reason for most home users to have a BIND system visible to the net at large, but there are some pretty good reasons for one if it can be located behind your firewall.
    • "Unix-like operating systems are minimalist and modular"

      It would have been interesting to hear you make that same statement back in 1992, when I first started working with Linux and having 16 megs of RAM to run X11 was considered a luxury.

      You know Windows 2000 comes with a telnet server? It's installed, but not started by default.

      Can you say the same about most Unix distributions? No.

      Furthermore, Red Hat for the longest time installed a whole load of services by default. My Solaris install at home has sendmail running by default. Do I need sendmail? No.

      I think you'd like to believe what you are saying. But I really don't find a whole lot of evidence to support it as fact.
    • by alispguru ( 72689 ) <bob,bane&me,com> on Wednesday August 15, 2001 @04:46PM (#2142263) Journal
      In the long run, elegant, minimalistic code is easier to understand, and therefore easier to secure (examples are Sendmail vs. qmail, or BIND vs. djbdns).
      That's the first (and hopefully only) time I ever hope to see the words "elegant", "minimalistic", and "Sendmail" together in the same sentence.
      • That's the first (and hopefully only) time I ever hope to see the words "elegant", "minimalistic", and "Sendmail" together in the same sentence.
        I guess I wasn't clear that Sendmail and BIND weren't minimalistic, and that qmail and djbdns were (at least by comparison), and therefore more secure.
  • Good eye, spotter.

    Who should we send the wormsign spotting bonus to?

    Dammit, where are those carryalls??!?!?!

    InigoMontoya(tm)

  • Take a look at the SANS Institute's "Ten Most Critical Internet Security Threats" here [sans.org].

    Notice that the level of representation of MS products is quite low. Consider that the Open Source Community's conventional wisdom is that closed source leads to insecurity. I am risking the almighty flame when I say so, but here it is: Monoclonal OS prevalence is the issue, not open source versus closed source.

    What I am saying is that the OS with the greatest market share attracts the hackers the most because they get the most "bang for the buck."

    But two conclusions can be drawn about this observation, one good, one bad:

    The good: the move towards an "OS ecosystem" of various flavors of OS is the healthiest for the Internet, because if something like Code Red were to reappear, only a minority portion of the pie chart of OS prevalence would succumb, as opposed to the majority slice. I use the biological analogies "monoclonal" and "ecosystem" because you can say the same thing about crop resistance to insect/bacterial/fungal/viral pests: the greater the genetic similarity of the crops, the greater the risk of one solitary biological pest taking out all of the Midwest as opposed to one cornfield.

    The bad: Microsoft, having the greatest exposure to exploits now, is getting the most experience with dealing with exploits. Dealing with them at a business, PR, and technical level. The more you fight a war, the better you get at it, and Microsoft will only get better and better at it, the general public will only grow more and more confident with their fight, and fewer and fewer exploits will be discovered. Other OSes haven't yet borne the brunt of the kind of hacker attention that fosters this kind of improvement, unfortunately for us all, who live in the ecosystem of the Internet.
    • The bad: Microsoft, having the greatest exposure to exploits now, is getting the most experience with dealing with exploits. Dealing with them at a business, PR, and technical level.
      I read this and think of the 'ping of death' or WinNuke attacks that plagued Windows in early '97. As I recall, there were two or three relatively similar vulnerabilities in the TCP/IP stack or Winsock and maybe related software; one was widely exploited to lock up and BSOD machines. MS suffered a little over that, and maybe it drove some of us to Linux, but in the long run it didn't make much difference one way or the other.
      Dealing with them at a business, PR, and technical level.
      Obviously they haven't quite gotten the hang of them at a technical level. WinNuke wasn't the only one; weren't there a few FrontPage vulnerabilities back in '97 or '98 as well? But then the OSS community hasn't either; we continually have our share of buffer overflow pain and other security problems.

      The worm that takes everyone offline will exploit multiple holes in multiple operating systems and network services. It may very well operate in a stealth mode, trying to stay under the radar for as long as possible instead of defacing web sites and leaving obvious back doors. It may make a coordinated search of the IP space as described in a recent article.

      We are cursed to live in interesting times...

      • Actually, as I recall, one of the really popular 'ping of death' attacks affected Linux as well. Teardrop, I think it was called. You sent some sort of fragmented packet at the machine and it just got lost trying to deal with it.

        The sad thing is, these were fixed almost immediately in all the respective OSes, but it took quite a while for people to apply the patches.
    • Also, there's a world of difference between 1999 and 2001 in security terms - as CodeRed illustrates. At peak, an IIS box here would have been broken every three minutes per IP (so if it owned a Class C subnet, every 0.7 seconds).

      New Linux boxes hitting the net aren't arriving with known superuser vulnerabilities (except one in Samba, difficult to exploit, not installed by default, configured unusably by default even if installed, and you'd have to be a bean-head to expose SMB to the Internet anyway; I get SMB probes several times per hour per IP during the quiet periods); new Win2k boxes hitting the net are arriving with known superuser vulnerabilities.

      The more you fight a war, the better you get at it, and Microsoft will only get better and better at it, the general public will only grow more and more confident with their fight, and fewer and fewer exploits will be discovered.

      You left off a qualifier: ``by Microsoft.'' Crackers will continue to find exploits, and one day, one of them will release the worm-to-end-all-worms for IIS. I favour one which installs Linux, copies across the existing services, and sets up shop as a P2P server for its children to download from. Wouldn't it be fun to see all of the penguins popping up on the screens in a Windows server farm? (-:

  • by BortQ ( 468164 ) on Wednesday August 15, 2001 @03:03PM (#2123666) Homepage Journal
    If you patched your systems on a quarterly basis, you would not have been vulnerable to a single one of the Linux worms.

    I'm waiting for the time when a worm comes out that exploits a vulnerability that has yet to be 'discovered'.

    All that has to happen is for a worm writer to be the first person to find a vulnerability. Then (assuming that this person is malicious) their worm would have a tremendous advantage. They would be guaranteed that every single server running that particular OS would be open to attack. If they took the time to write a really nasty worm (say it's set to replicate itself 10 times and then try to erase everything it can reach on the networks it has access to, except itself) this would quite assuredly bring a large proportion of the Internet to a grinding halt.

    And you know it's got to happen some day...

    • You watch. And if the service with the hole is non-critical, you turn it off.

      For instance:
      Code Red looked specifically for default.ida, which invoked Index Server. So, shut down Index Server if you don't need it. If you do, rename or delete default.ida and hope and watch until a patch comes out.
      • Shutting down the index server and renaming default.ida would result in no benefit.

        The problem was with the Index Server ISAPI filter, and you had to either delete that, or just remove its .ida and .idq mappings from your IIS website.

        There are many of us who didn't have problems with Code Red specifically because we had made these changes last year before there was a known problem, patch, exploit, etc.

        Microsoft has also learned from that mistake, and supposedly IIS6 in XP doesn't install this crap by default.
  • My two cents... (Score:3, Interesting)

    by pi_rules ( 123171 ) on Wednesday August 15, 2001 @04:17PM (#2125123)
    Summary: IIS alone is providing holes for the MS platform at a rate that exceeds -every- popular *nix-based product right now.

    Do I have any numbers for this? Nope... I'll leave that for somebody else to dig up. I'm a BugTraq reader, and I'm amazed at the sheer number of serious IIS exploits that have recently been coming out. I haven't seen anything new in the past few weeks, which is good, but take a look at the sheer number of buffer overflows alone that have been found in IIS lately. I bet it's more than, or really close to, the total number of buffer overflows found in things like sendmail, BIND, Apache, and even telnetd in the same time span.

    As a programmer I'm appalled here by IIS. Buffer overflows are an old problem, but they keep coming back up. IIS is a new product, most likely written entirely in C++, which should make string handling much simpler than in its C counterparts. These IIS holes keep coming due to either laziness, incompetence, or indifference on the part of the MS coders. These aren't obscure, either. You request a long URL and you overflow a buffer? C'mon here. The URL is coming from untrusted users -always-. Access point #1 into the system isn't even being looked at for possible holes... over and over.
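
    For anyone who hasn't seen how trivial this class of mistake is, here's a minimal C sketch of the pattern being described (purely hypothetical illustration code, nothing from IIS): a fixed-size buffer filled from an attacker-controlled URL with no length check, next to the bounded version.

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: copies an attacker-supplied URL into a 256-byte buffer.
       A request longer than 255 bytes overruns the buffer and can smash
       whatever sits next to it on the stack. */
    void handle_request_unsafe(const char *url)
    {
        char path[256];
        strcpy(path, url);            /* no bounds check: classic overflow */
        printf("serving %s\n", path);
    }

    /* Safer: reject anything that doesn't fit, and always null-terminate. */
    int handle_request_safe(const char *url)
    {
        char path[256];
        if (strlen(url) >= sizeof(path))
            return -1;                /* oversized request: refuse it */
        strncpy(path, url, sizeof(path) - 1);
        path[sizeof(path) - 1] = '\0';
        printf("serving %s\n", path);
        return 0;
    }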

    One would think (read: hope) that MS has a slew of people poring over all areas of IIS for possible buffer overflows right now. Maybe they'll actually fix some before they're found? Doubtful... given their track record of reactive security.

    Justin Buist

  • The author writes

    Excepting the Morris worm, before which nobody cared much about Internet security, all of these worms have one thing in common: the exploited holes were discovered months before the worm, and official patches for the affected packages were widely available.

    This was true for the Morris worm as well. Both the sendmail and fingerd issues exploited by the worm were fairly well known at the time. If I recall correctly, part of the reason that Morris wrote the worm was his frustration over the continued presence of these security holes, and paradoxically, part of the reason that he released it prematurely was that one of the holes had suddenly gotten extra attention.
  • by ragnar ( 3268 ) on Wednesday August 15, 2001 @02:51PM (#2127480) Homepage
    I'll say it yet again, since this is just another way of dredging up the Code Red issue. The problem isn't the platform, it is the administration of the platform. If Unix can be counted on to be mismanaged then an exploit will surely surface. In short, if the Unix world ever finds itself in the state of the Windows NT world, where boxes aren't administered and patched, we too will be nailed. Is anyone surprised? No. Okay, let's let this tired topic die already.
    • The problem isn't the platform, it is the administration of the platform.

      Tony-A's answer was succinct, but I'd like to add that you're ignoring both the frequency and the quality of vulnerabilities on each system. More of the Unix holes are mere DoSes and/or extremely difficult to exploit than is the case for Windows, and when an exploitable hole is more than a DoS it often either requires local access and/or only gives you the privs of the user running the service (e.g. `apache' or `nobody') rather than open slather.

      Those are big differences and largely independent of administration.

  • When speaking about Code Red? Just because the networks have stopped talking about it doesn't mean it's gone away.

    I don't know about anyone else, but I'm still getting hundreds of CodeRed attacks every week.

  • The idea of *nix worms is far easier to digest, since those who wrote the software with said vulnerability aren't living in huge mansions and driving fast cars. They tried their hardest, and weren't profiting as much from demonstrably insecure software.

    The OS argument always seems to be about quality, but I'm also interested in the esoteric aspects of it: if you're gonna get rich off something, then it had better damn well work; if you do it out of the kindness of your heart and/or scientific curiosity and research, well... worms will always exist, but I'd rather the software I didn't have to pay for be guilty than the software I did.
  • Get Real (Score:2, Interesting)

    by MakinWaves ( 251435 )

    Yes of course we remember the *nix worms. Here's another thing to remember. *nix will never be the veritable screen door of security holes that M$ products are. I find "Whistler" to be aptly named.

    I wonder what would happen if IT professionals were paid $1 per machine for each security update. Guess TCO with M$ products would go through the roof eh? One particular week this year would have netted me $600.

  • by cansecofan22 ( 62618 ) on Wednesday August 15, 2001 @02:49PM (#2136844) Homepage
    It doesn't matter if it is a DoS attack or a worm or any other kind of attack. No matter what OS you are supporting and using, if you as an admin don't have the proper service packs and updates installed, then your OS will be a victim sooner or later. Having competent people running the shop is where it is all at. If you look at the latest worms, Red Hat's and MS's, they could BOTH have been avoided by updating software.

    Sorry about the spelling, I really need to get a spell checker plugin for /. posts!
    • Personally, I think most security problems are a factor of how little documentation you get/read with new PCs. I'm not quick to bash admins (some are ignorant and lazy but that includes every category of people) as this worm is more @home based than .com based.

      Home users get a PC with the promise of easy to use blah blah and a handful of killer apps. It doesn't matter much if its Redhat or MS, if you don't understand the security aspects of being on-line you shouldn't be running a server.

      This worm is pretty benign: no deleted system files or content, just a big fat backdoor. It's all over the media, but I'm really curious whether the average @home user got any real message out of this. Maybe they just know to download the patch because it's on CNET, and run IIS with one security patch. Ideally, the message should be to get ALL the patches if you're planning on running IIS, and to subscribe to MS's security list. From what I've read in the media, it's probably the former.
    • No matter what OS you are supporting and using, if you as an admin don't have the proper service packs and updates installed, then your OS will be a victim sooner or later.

      "Sooner or later" is effectively a LIE because whether it's sooner or it's later makes a huge difference in securityville. You're also ignoring the ``quality'' of the intrusion (such as carte blanche versus mere DoS).

      Me for later, much later. While I could do even better, I use Mandrake 8.0 for production work. It's a bit bleeding edge in some ways - and I pay for that - but it comes with two massive advantages over many Linux distros: it installs reasonably securely unless you tell it not to (warns you when you install world-visible services and if you choose a "high security" install even disables those), and it can automagically update itself. Debian users in particular have long had these comforts.

      All Linuces have at least five huge additional advantages over Windows:

      1. There are significantly fewer holes to start with, because (among other reasons) they are generally implementation mistakes rather than systemic design flaws; and
      2. If a hole opens, the damage that can be done is less because you don't automatically get ring-zero (better than administrator/root) privs; and
      3. Patches tend to come out sooner and often involve no more than restarting a single service rather than downing the whole machine; and
      4. Tricks like chrooting the whole service, and/or using the immutable bit (chattr +i) plus running a kernel incapable of removing it (via a patch or capabilities) and a chattr program/syscall that rings bells and flashes lights instead of changing the attributes, and/or one-way capabilities patches, are simple to do (see the sketch after this list); and
      5. Most distros arrive with secure remote administration, so dealing with a widespread attack (successful or not) is much easier; and (-:
      6. For Win 9x/ME in particular :-) a distinction is actually made between the superuser and mere mortals.

      Yes, administration makes a big difference, but all OSes are a loooooong way from interchangeable when it comes to vulnerability.
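
      As a rough sketch of point 4 above (and of not handing services root in the first place), here's what confining a daemon to a chroot jail and dropping privileges looks like in C. The jail path and the uid/gid are made-up example values.

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/types.h>
      #include <unistd.h>

      int main(void)
      {
          /* Hypothetical jail directory and unprivileged uid/gid for the service. */
          const char *jail = "/var/jail/mydaemon";
          uid_t unpriv_uid = 1001;
          gid_t unpriv_gid = 1001;

          /* Must start as root for chroot(); confine first, then drop root. */
          if (chroot(jail) != 0 || chdir("/") != 0) {
              perror("chroot");
              return EXIT_FAILURE;
          }
          if (setgid(unpriv_gid) != 0 || setuid(unpriv_uid) != 0) {
              perror("drop privileges");
              return EXIT_FAILURE;
          }

          /* From here on, even a compromised service only sees the jail and
             runs without root, which limits what a worm can do with a hole. */
          printf("running confined as uid %d\n", (int)getuid());
          return EXIT_SUCCESS;
      }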

  • by HiredMan ( 5546 ) on Wednesday August 15, 2001 @02:58PM (#2138503) Journal
    On November 2, 1988 the "Morris Worm" was unleashed on the net. It jumped from college to college (that was most of the net then) and, because of a bug in the code, would reproduce itself within the machine until it ran the machine into the ground as it tried to infect others.


    Imagine Code Red in which almost all servers are NT/IIS and there is no web, no central authority, no "experts"...
    It caused the Inet as it was to cease to function. People had to pull their boxes off-line to keep from getting repeatedly infected.


    The confusion and panic that followed led to the creation of CERT and was the start of most of the big, early Inet security organizations that exist today.


    <old codger>
    You young whippersnappers don't know from worms. We used to create worms on punch cards and you had to mail them around to get infected! Those were the days!
    </old codger>


    I suddenly feel old and have to go lie down....


    =tkk

    • We used to create worms on punch cards and you had to mail them around to get infected!

      Actually there WAS a game in the mid-'70s that reproduced itself on UNIVACs via tapes that were sent around; details here [fourmilab.ch].

    • Of course the Morris worm was not the first worm. The Christmas Tree Exec hit in December 1987, bringing down both BITNET and IBM's internal network almost a year before the Morris worm. The Christmas Tree Exec was not self-spreading, however; it relied on users executing the email attachment, very similar to I Love You and its successors. I believe George Santayana had some comments on this problem.
  • This is slightly off topic, but I've been thinking about it for a while. What if someone made a worm that behaved like an unintelligent life form? It would send some random (but predetermined) instructions to the processor, then make some judgement about whether it has more RAM than other instances of the program in order to survive. If it does, it would spawn more instances that are like itself, but altered slightly in the random-instruction portion. Eventually, one may randomly "figure out" how to copy itself to another computer on the network.

    I realize it would take millions of generations before this happened, but once it did, it might become a very robust worm, and one that eats a lot of memory. All it would take is a few dedicated computers and some incredible Darwinian selection methods for it to occur.
  • by SpanishInquisition ( 127269 ) on Wednesday August 15, 2001 @02:49PM (#2138881) Homepage Journal
    We just have to claim that Linux worms:
    • are faster
    • are more portable
    • use fewer resources
    • can be more easily modified since you have access to the source
    • aren't tied to a single vendor


    That should make the point of the superiority of Linux worms over Windows worms and end all the FUD.

  • It can happen (Score:5, Insightful)

    by huh_ ( 53063 ) on Wednesday August 15, 2001 @02:47PM (#2139209)
    You all say that Unix admins know more, or that open source programs have patches out faster, but what about all those people who know little about Linux and install it anyway? They can just as easily leave their computers unpatched, running 24/7 on some cable provider. More and more people are trying out Linux; it doesn't mean all of them are smart. So of course the same thing can happen.
    • Red Hat Network was Red Hat's answer to apt-get in Debian. I am not going to argue that all people should install Debian; it's not a total newbie distribution.

      However, a nightly apt-get against security.debian.org is a VERY good way to patch your system for holes. Debian is really good about releasing quick fixes to their packages.

      Red Hat Network may or may not be good about keeping your system completely up to date. I don't know, because I am not willing to shell out a monthly amount of money for keeping my free system up to date.

      Really, I don't think MOST people are willing to pay for this sadly necessary exercise in security. By charging for this functionality, Red Hat is reducing the security of a large portion of the installed Linux servers. It is simply going to create a bad rep for the whole Linux community when worms start to work their way around Linux servers using old vulnerabilities. Users with systems that automatically patch themselves will sleep fairly soundly (of course, there is a 24-hour window between patch runs; in the meantime, someone MIGHT have found an exploit and created a worm that utilizes it).

      I realize they are in the money-making business. However, they are also representatives for Linux. I think they need to be gently prodded to either make Red Hat Network a one-time fee, or totally free. .NET has not yet made people used to paying for software subscriptions.

      Oh - and I DO know that patching alone is not enough. You also need to use secure services, and as few services as possible, with explicit firewall rules controlling who can access those services, plus a good security policy altogether (most important).
    • Yes, *nix presents at least as much of a target as Win boxes, if not more, since the services running on a default install are likely to include daemons like ftp and telnet. However, it is also really easy to run a Perl script like Bastille [bastille-linux.org] to tighten security fast and with little technical know-how. Try that on an NT box.
    • Re:It can happen (Score:5, Insightful)

      by Rick the Red ( 307103 ) <Rick.The.Red@nOsPaM.gmail.com> on Wednesday August 15, 2001 @03:30PM (#2157529) Journal
      You're absolutely right, which is why it's just as important for Linux distributions to come locked down tight as it is for Windows distributions to come locked down tight. Microsoft isn't listening; are RedHat and the others?

      Also, Microsoft is supposed to be open to XP configuration changes by the hardware vendors. Does that extend to default security settings? If so, we can only hope that PC Magazine and the rest will rate new computers on how secure they are out-of-the-box. Are Dell, Compaq, Gateway, and the others listening? Is the computer press listening? If I know Dells come secure but Gateways ship Microsoft-default-wide-open, I'll recommend Dell to my friends and family. If I know Debian comes secure but RedHat installs wide open I'll recommend Debian. But only if I know, and I'll only know if the press does their job and tells me.

      This is a social problem, not a technical problem, and it requires a social solution. That means that everyone in the society must play their part -- the companies, the press, and the consumers. If Microsoft won't be a good citizen, bad on them. But why should they be a good citizen if their enemies are not, and especially if their friends are not?

      • First, I'll wager there are just as many or more Red Hat boxes with Apache run by someone who does not even know it's there. I know, because I ran one that way. The boogeymen did not come and get me for the month or two I had it that way. Why? Because Red Hat 6.2 had far fewer holes by rational design than MS trash, which is driven by marketroids.

        Second, they have tightened things up. 7.1 comes with a graphically configurable firewall, and bugs you about it on install. That's a big step from the "Everything" install of long ago. It may not be as tight as Debian, and really I must recommend Debian too, but it's not nice to FUD unless you are sure of what you say.

        All of the Linux distros are doing good things for teaching their users security. It's in the design and philosophy of free and open software to teach users. If man pages, online help and Slashdot are not enough, you can always fall back on the stone-age dead-tree instructions.

      • This is a social problem, not a technical problem, and it requires a social solution.

        While I agree that there is a social element to this problem, I think that there is definitely a technical solution: firewalls.
        Personally, I would never attach a computer to the internet unless it was a firewall, or was protected by a firewall. It does not have to be a hardware solution (although that is preferable, and those black-box firewall devices are ideal for home use), PCs can run personal firewall code as well.

        Being behind a firewall is no guarantee that you won't get 0wned, and is no substitute for secure-by-default operating systems, but it is an important part of securing your system.
      • The various "home" versions of Windows (the 9x series) have all been fairly secure out of the box. They have no remotely accessible or exploitable services. It's only when users do stupid stuff like enable file sharing or run strange executables that problems develop. I don't see how you'd tighten them up more than that without OS-level security policies. And since the home version of XP is going to be similarly single-user, that's out too.

        About the best the OEMs are willing to do is bundle Norton Antivirus and maybe a software firewall.
  • Code Red (Score:5, Funny)

    by briggsb ( 217215 ) on Wednesday August 15, 2001 @02:47PM (#2139211)
    Talked about his experience as a worm in the interview here [bbspot.com]. It has some advice for newer worms and viruses.
  • If the big Windows worm attacks Whitehouse.gov, does that mean that the big Linux worm, whenever it arrives, will attack Whitehouse.com?

    Talk about biting the hand that feeds you!

  • The question is not whether or not worms can be written for one system or another, but whether or not the system explicitly defines primitives that directly enable the functioning of said worms.

    In the case of the internet mail worm, the function of the worm was based on unanticipated behaviors of both the worm code (the author had intended the worm to limit its speed of propagation) and the internet mail system (the author was exploiting a bug in the mail transfer agent). Clearly, this sort of situation, while a threat to security, is easily remedied once the exploit is known. The remedy can even be implemented with little or no effect on daily operations, since the erroneous behavior of the program will not have been used as part of any applications.

    In the case of the various Outlook worms, however, the situation is reversed. The worms rely on explicit features of the Outlook suite for their functioning. These same features have been incorporated into all sorts of applications built upon the Outlook suite, which means that in order to disable the worm, many production applications must be modified or discarded.

    This is a design issue, at its heart. There are some cultural effects involved (e.g. the MS assumption of a monoclonal computing environment leads to the expectation, and exploitation, of features that would not be reliably present in a heterogeneous environment), but the central problem is the explicit decision by Outlook program managers to include features that were inherently insecure. (Consider that, while Sun may have a similar monoclonal outlook to Microsoft's, Java was designed for both security and the provision of a wide and reliable feature set.)

    The question is not "can worms be written for systems other than Microsoft's?" -- to which the answer must always be 'yes', even if only because we can't rule out the possibility entirely -- but, rather, "is it easier or harder to write worms for Microsoft systems than for other systems?" The answer is, pretty clearly, that Microsoft's design decisions make worms far easier to implement on MS platforms than on other platforms.

  • except (Score:5, Insightful)

    by linuxpng ( 314861 ) on Wednesday August 15, 2001 @02:43PM (#2140386)
    Don't most UNIX admins need to know something about the OS other than the size of the install base, and therefore actually patch their security holes in a reasonable amount of time? Let's not forget the issue is NOT Microsoft's security hole. All OSes have those; it's that the user base is not up to date on installing the security fixes. We just hope everyone who bashes MS will patch their own holes come Unix worm time.
    • There are REALLY important issues that interact with this one.

      1) A box should come with only the absolutely necessary web services running. Anything else should require the admin to turn the service on manually. This would prevent about 90% of all worm cracks.

      2) The providers of a distro have a responsibility to ensure that security updates get to all people affected, not just those who subscribe to mailing lists. They have a responsibility to ensure that fixes are easy to get and easy to apply. Debian probably has the best security model in this regard thanks to apt-get.

      Microsoft fails on all fronts. They ship NT Server and Windows 2000 Server with IIS enabled by default. They do not push out publicity about worms that impact their systems; they make a low-key effort to acknowledge that they have a problem only when they have a fix.

      Red Hat has also been particularly poor in this regard in the past; more recent installs seem not to enable Internet server software by default, and to include warnings when you enable things.

      Not only is Microsoft software buggier and less secure than just about any other software, they also fail to support their users when security fails. For this the blame goes squarely on the shoulders of a giant that banks $1 billion per month and values avoiding bad publicity over helping its users.
  • The Point Is (Score:3, Informative)

    by Catskul ( 323619 ) on Wednesday August 15, 2001 @03:03PM (#2141508) Homepage
    I think there are two real points to the fact that *NIX systems are more secure. First of all, UNIX is more mature than MS software, so it has already been through the more trivial problems with holes. The second point is that because of Open Source, customers get to choose which parts of the software get the most development. Security gets attention when those affected by bad security get to decide.
  • Securityfocus has a nice column on Worms and their origin in 1988.

    Okay, if worms appeared in 1988, then what the hell ate all the dead bodies in the thousands of years before that?
  • Blame the language (Score:3, Interesting)

    by Tom7 ( 102298 ) on Wednesday August 15, 2001 @05:31PM (#2144257) Homepage Journal
    Yes, worms can happen everywhere. That's because practically all network software is written in C (or its perverse descendant, C++).

    If we were coding our network software in a secure ("safe") language (one without buffer-overflow "capabilities") such as Java or O'Caml (or even scripting languages like Python, to an extent), we would greatly reduce our security risk. Given that these languages also typically increase productivity, it seems like a clear win to me...

    Microsoft realizes the toll that C and C++ take on stability and security; they've recently hired a lot of famous programming-language folks to work on new language technologies. Microsoft knows that large projects written in languages without sophisticated modularity constructs (i.e. C, C++) tend to get out of hand quickly. They're working to fix this! They're even working on technologies to improve the stability of device drivers through language technology (see the Vault project, for instance).

    However, C has always been the UNIX platform's language. Will UNIX stay in the 60s as even Microsoft moves on? If so, I say it will be the "wormy" operating system family of the 21st century...
  • by Curien ( 267780 ) on Wednesday August 15, 2001 @03:51PM (#2144605)
    I have read a lot of posts in this discussion (and similar discussions in the past) talking about how *nix is better than NT. Then some of the more level-headed among us pipe up and remind us that no OS is truly secure, and that the difference lies not with the system itself but with the system administrators. Thus, it follows that *nix admins are better than NT admins.

    I most heartily disagree. Sure, there are *some* *nix admins that mop the floor with NT admins... but the opposite is also true.

    I think we are all forgetting exactly what an "admin" is. An admin is *not* any JoeBlow@aol.com that stands up a web server! A system administrator is an IT professional who researches his work and prides himself on keeping his machines running smoothly.

    If you think about it a little, I believe that you'll agree that the major cause of the whole Code Red problem is not the NT admins out there, but rather the JoeBlow@aol.com's who really don't know what they're doing. Ignorance, people... ignorance is our enemy! Not Bill Gates, not MS, not closed source! It's ignorance and apathy.
  • by kisrael ( 134664 ) on Wednesday August 15, 2001 @02:58PM (#2144762) Homepage
    I'm not a very close observer of any of these things, but it seems like the recently noticed telnetd exploit has really screwed over more sites than Code Red, which seems more of a bandwidth hog. I mean, a years-old simple string buffer overflow giving root access on so many Linux boxes is inexcusable for people trying to "sell" Linux on its general security and reliability...
    • Win2k comes with a telnet server, no? Sniff, sniff, ewwwww, what's that smell? Did someone step in MS again?
    • Considering telnet is essentially a security hole that you could drive a Ben-Hur chariot race through (user and root passwords passed in plaintext? yum!), and has been recognized as such since... well, forever, by Unix admins, and isn't even installed by default on recent Red Hat releases, I'd say there are deeper problems than "telnetd has an exploit." Installing telnetd on a Unix machine is about the same as shipping Windows boxes with Back Orifice and Code Red already installed.

    • This is probably irrelevant but I'm going to spout on, basically because the telnetd exploit does nothing to my boxen. Putting aside the exploit, telnet is completely insecure from the ground up. Ever su into a box over telnet? Guess what, you're not the only one with your password now. For those of you who haven't switched to SSH yet, you were asking for it. This just gives you another reason to switch.
    • by The Troll Catcher ( 220464 ) on Wednesday August 15, 2001 @03:28PM (#2157602)
      Of course, the very fact that you're running telnetd at all means you don't give two craps about security. Do you have ANY IDEA how easy it is to sniff passwords from telnet? I tell you, it's scary. When someone rooted a box here a while back, I looked through the sniffer log and found working root passwords for a number of HP-UX machines here...
      • by Chops ( 168851 )
        I highly recommend showing people how insecure telnet is -- in a dorm, for example, pop up ethereal on one machine and log in over telnet from a machine in a different room. Follow TCP stream, and point to your real password displayed on the screen. This is more effective than lecturing people about TCP/IP and ethernet, and I've only had one guy start asking dismaying questions about how to sniff other people's passwords.
        Change your password after, of course. Now if only there were an equivalent way to get people to use PGP...
  • They ALL Suck (Score:3, Informative)

    by Detritus ( 11846 ) on Thursday August 16, 2001 @05:29AM (#2154965) Homepage
    Debating whether Windows, Linux, BSD or UNIX is more secure is a waste of time. From a security point of view, they all suck. It's just a matter of degree.

    Windows (NT/2000) has some good security features in the kernel, the problem is that they are not properly used by the operating system as distributed by Microsoft. Locking things down would break too much stuff.

    UNIX/Linux has an archaic security model that hasn't changed in decades.

    Both operating systems suffer from being implemented in C, an unsafe language. It is possible to write secure code in C, but most people have neither the expertise nor the time to do it correctly.
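
    As a small, hypothetical sketch of what "doing it correctly" in C involves: every read and every copy carries an explicit bound, and truncation is detected and rejected rather than ignored. The /var/www path below is just an example.

    #include <stdio.h>
    #include <string.h>

    /* Build "/var/www/" + requested name into a fixed buffer, refusing
       anything that would not fit. snprintf() never overruns the buffer,
       and its return value tells us whether truncation would have occurred. */
    int build_path(char *out, size_t outlen, const char *name)
    {
        int n = snprintf(out, outlen, "/var/www/%s", name);
        if (n < 0 || (size_t)n >= outlen)
            return -1;                /* would not fit: reject, don't truncate */
        return 0;
    }

    int main(void)
    {
        char line[128];
        char path[256];

        /* fgets() is bounded, unlike gets(); strip the trailing newline. */
        if (fgets(line, sizeof(line), stdin) == NULL)
            return 1;
        line[strcspn(line, "\n")] = '\0';

        if (build_path(path, sizeof(path), line) != 0) {
            fprintf(stderr, "request too long\n");
            return 1;
        }
        printf("would open: %s\n", path);
        return 0;
    }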

E = MC ** 2 +- 3db
