
Server Consolidation Guide via Virtualization 26

sunshineluv7 writes to tell us TechTarget is running a good overview of 'why, when, and how to use virtualization technologies to consolidate server workloads.' The summary provides links to several podcasts and other articles relating real-world experience with how to use virtualization to best meet your needs. From the summary: "Advances in 64-bit computing are just one reason that IT managers are taking a hard look at virtualization technologies outside the confines of the traditional data center, says Jan Stafford, senior editor of SearchServerVirtualization.com."

  • A "good" overview? (Score:5, Insightful)

    by lucabrasi999 ( 585141 ) on Wednesday August 09, 2006 @05:57PM (#15876958) Journal
    That submission is what constitutes a "good overview" these days? Maybe it is, if you are the person trying to drive traffic to TechTarget.com sites....
  • 64bit? (Score:1, Insightful)

    by rf600r ( 236081 )
    What does 64-bit have to do with driving virtualization? Are people really that ignorant about 64-bit processors and what they mean and don't mean? Seriously, how are these two technologies correlated?
    • Re:64bit? (Score:2, Insightful)

      Finally enough addressable RAM that you can have more than one virtual server with a decent amount of RAM.
      • Re:64bit? (Score:3, Interesting)

        by afidel ( 530433 )
        With PAE you could already give each virtual server 4GB to play with, up to 64GB total with Windows 2003 Enterprise or 128GB with Datacenter. Linux 2.6 allows up to 64GB through the HIGHMEM_64G flag, all on standard x86 of P2 or later vintage (PPro had rudimentary PAE, but implementing it was very hackish).
        • Re:64bit? (Score:3, Informative)

          by demon ( 1039 )
          2k3 Datacenter can't support 128 GB on i386; it's not possible, as PAE only adds an extra 4 address bits (going from 32 to 36 bits of physical address space). Also, there are still user process limitations that make it impossible for apps like, say, database servers to address more than 3 GB (not 4 GB; it's a limitation due to kernel address space mappings in a process). x86_64 wipes that out easily, so for healthy-sized virtualization environments, it's definitely the preferred environment (and you can sti
          • That's not what this [microsoft.com] table from MS says along with several other references to PAE on Microsoft's site.
            • Feel free to do the math. The short answer: (2**36)/(2**30) = 2**6 = 64, so 64 GB. Either that table is wrong, or they're using some other technique.

              Also feel free to check that PSE-36/PAE gives 36 bits of physical address.
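
              A quick sanity check of that arithmetic, as a throwaway Python snippet (the names are just for illustration):

                # 36 physical address bits (PAE/PSE-36), expressed in gigabytes
                PHYS_ADDR_BITS = 36
                addressable_bytes = 2 ** PHYS_ADDR_BITS   # 68,719,476,736 bytes
                gigabyte = 2 ** 30
                print(addressable_bytes // gigabyte)      # prints 64 -> a 64 GB ceiling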
  • DR (Score:5, Insightful)

    by afidel ( 530433 ) on Wednesday August 09, 2006 @06:05PM (#15876995)
    Disaster recovery and test environments are the two biggest reasons I can see for using virtualization. Having the ability to pick up your system and plop it on any old box makes things so much easier. In theory, HALs should have made this possible years ago, but they never really lived up to their promise. As to virtualization making management easier, bollocks. Some of the tools bundled with good virtualization products like ESX might make management somewhat easier, but you still need additional good tools to make management bearable for large numbers of servers/virtual servers.
    • by sheldon ( 2322 )
      We use VMWare heavily at my company.

      Disaster recovery, ability to move virtuals to new hardware... these are great positives.

      The negatives are performance, performance and performance.
      • If you use VMWare heavily, I'm sure you're running ESX, but I'll ask you anyway:

        Can you (or anyone else) tell me the recommended way to back up your virtual machines with VMWare Server? All of the documentation I've found talks about ESX Server. They give you 2 choices: 1) Run backup software *inside* of the VM and back it up like any other machine, or 2) Back up the VM files directly. In the case of ESX, you use the Perl API to set up a redo log, but AFAIK that's not possible with Server. Without tha

        • I think you need to look at ESX version 3; they have gotten snapshots working.
        • by tadheckaman ( 578425 ) <tad@heckama n . com> on Wednesday August 09, 2006 @11:10PM (#15878285) Homepage
          Place the server in undo mode/snapshot mode, and then just back up the vmdk. When it's placed into undo/snap mode, the vmdk becomes read-only and changes are written to a separate file. Then all you need to do is copy that vmdk and, when done, commit the undo/snap. When restoring the backup, the system is brought online as if it had lost power. On ESX it's a snap to do, and Vizioncore makes software that does this for you (ESXRanger); however, I leave VMware Server as an exercise for the reader. As I don't have any need for this, I haven't looked into actually scripting it in VMware Server. But the idea is the same, and I bet that it's possible.
          Doing a quick search on the forums, it sounds like vmware-cmd is the tool to use, or write a script to talk to VMware's SDK.
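
          A rough Python sketch of that snapshot-copy-commit loop. To be clear about the assumptions: it relies on vmware-cmd exposing subcommands along the lines of ESX 3's createsnapshot/removesnapshots, which a given VMware Server build may not (check vmware-cmd's usage output), and the paths are made up for illustration.

            # Sketch: snapshot the VM, copy the now-quiet .vmdk files, then commit.
            import glob, os, shutil, subprocess

            VMX = "/vmfs/volumes/datastore1/db01/db01.vmx"   # hypothetical VM config
            DEST = "/backup/db01"                            # hypothetical backup target

            def vmware_cmd(*args):
                # vmware-cmd ships with ESX/VMware Server; subcommand support varies by product.
                subprocess.check_call(["vmware-cmd", VMX] + list(args))

            # Freeze the base disk; new writes go to a delta/redo file.
            vmware_cmd("createsnapshot", "backup", "nightly copy", "0", "0")
            try:
                os.makedirs(DEST, exist_ok=True)
                for f in glob.glob(os.path.join(os.path.dirname(VMX), "*.vmdk")) + [VMX]:
                    shutil.copy2(f, DEST)                    # copy disks and config
            finally:
                vmware_cmd("removesnapshots")                # fold the delta back into the base disk

          Restoring the copy then behaves as described above: the guest comes up as if it had lost power.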
        • Well... at work we use VMWare ESX, but our disk is on Hitachi SAN... and so backups are performed by doing shadow copies at the disk level.

          At home I use Virtual Server, and I've found running backups inside works.

          The other method I've used is to pause the VM and then copy the files to a different location. On my little home machine with 4 virtuals and about 80 gigs of data this takes around 4 hours to complete. I have a script that does it for me, so each machine is maybe only offline for 30-45 minutes.

          Th
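
          A minimal sketch of that kind of pause-and-copy script, assuming only vmware-cmd's stock suspend/start subcommands and a made-up directory layout (not the actual script mentioned above):

            # Sketch: suspend the guest, copy its whole directory, resume, report downtime.
            import os, shutil, subprocess, time

            VMX = "/vm/mail01/mail01.vmx"    # hypothetical guest
            DEST = "/backup/mail01"          # hypothetical target; copytree requires it not to exist yet

            started = time.time()
            subprocess.check_call(["vmware-cmd", VMX, "suspend"])
            try:
                shutil.copytree(os.path.dirname(VMX), DEST)   # disks, config, nvram, logs
            finally:
                subprocess.check_call(["vmware-cmd", VMX, "start"])
            print("offline for %.0f seconds" % (time.time() - started))

          Downtime per guest is just the time spent copying that guest's directory, which is where the 30-45 minutes per machine above comes from.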
  • by Anonymous Coward
    When they talk "virtualization", do they mean running virtual domain services or virtual IPs, allowing one computer to handle multiple web services, or do they mean running multiple virtual machines on one box?

    I'm not too sanguine about the virtual machine approach. I suppose the only reason they'd be doing it that way is so that one server could seem like it's being multiple computers, so later the tasks can be split up if loads become high, or so that it looks absolutely identical to a two-machine setup that's being replaced.
    • If I understand this correctly, instead of running four or five big applications on one physical computer, you give each application a virtual machine to run in on the same server. If one goes barf, the others are not affected. One article said a company went from three server farms to one server farm by running 225 VMs on 15 computers (or 15 VMs per computer).
      • It's not so much one VM going bad, but that your application is totally self-contained on that VM, so you can move it (live, as with VMware ESX) to another hardware device with no worries about changing DNS, IPs, odd dependencies in /usr/lib, etc.
    • I suppose the only reason they'd be doing it that way is so that one server could seem like it's being multiple computers, so later the tasks can be split up if loads become high, or so that it looks absolutely identical to a two-machine setup that's being replaced

      Bingo - the room full of NT machines, each with a separate task, that replaced one Sun machine can be replaced again by a single machine of your choice. Having a single machine to do nothing but DHCP and DNS for only sixty workstations always seemed like a waste - especially when you still had to reboot it once a week to cope with memory leaks that would crash it sometime between day 10 and 30. Now you can run that as a virtual machine on whatever environment you like and the memory leak will be contained.

      • Having a single machine to do nothing but DHCP and DNS for only sixty workstations always seemed like a waste - especially when you still had to reboot it once a week to cope with memory leaks that would crash it sometime between day 10 and 30. Now you can run that as a virtual machine on whatever environment you like and the memory leak will be contained.

        More importantly, you can run two such instances, so one is always running while you're rebooting the other one.

    • RTFA - it's about virtual machines, not virtual-domain web servers (which by now are old technology and an obvious win). Yes, virtualization does take some extra resources, and you need a disciplined approach to administration to use them successfully in a production environment, but production environments already needed disciplined administration and enough resources - the assertion of the virtual-machine people is that it's actually easier than maintaining multiple boxes, especially given the extremely
