
2 boxes vs 1

mayhem87 Member Posts: 73 ■■□□□□□□□□
I really want to start learning VMware and am looking at setting up a lab. However, I'm having a hard time deciding whether I should build one powerful desktop to run nested VMs on, or build two physical boxes with ESXi.

The cost is going to be around the same, so I'm really trying to figure out which would be more beneficial. This will be for lab purposes only and more than likely powered off or suspended when not in use.

Does anyone have any advice?

Comments

  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Build one decent computer and save on power. 16GB of RAM, an i5/i7 and an SSD or two - this is a good combo for labbing ESXi.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    I'm doing the same, and one thing I realized real quick is that if the lab is running ESXi on the metal then there needs to be more than two boxes. You'll need the two ESXi servers but also at least one storage server and a management computer. Do you intend to run the file servers on a computer you already have?

    I don't have any real advice to offer, just that I realized that running ESXi on the metal will likely mean a minimum of four computers in a lab network. I've been warned about the network usage in an on-the-metal ESXi lab; do you have gigabit Ethernet on your computers?

    I'm debating which path to take as well. I'm leaning towards a single-box solution to avoid hardware compatibility headaches. With ESXi running in a virtual environment, the issues of having the right NIC, drive controller, or even the right keyboard all disappear.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • netsysllc Member Posts: 479 ■■■■□□□□□□
    As long as you have CPU cores, RAM and spindles you are good. If you can do that with one computer then that is a good way to go. If you have the budget to set up iSCSI on some sort of SAN, then go with two boxes so you can practice with migrations and imports.
  • mayhem87 Member Posts: 73 ■■□□□□□□□□
    Thanks for the quick replies. I guess I should include some more info about my current setup.

    At the moment I have a Synology NAS (can do iSCSI and NFS) where I can dedicate 2 bays just to VMs. I was planning on putting SSDs in them. As for the network, there are currently some open ports on a managed gigabit switch that I can dedicate to these computers. While I do have another computer that I can use for management/daily purposes, I would also like the lab available via VPN, since I get some down time at work during the later hours.
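    For reference, once an ESXi host is up, mounting an NFS export from the Synology as a datastore is a one-liner from the ESXi shell. A rough sketch only - the NAS IP, export path and datastore name here are made up:

        esxcli storage nfs add --host=192.168.1.20 --share=/volume1/vmlab --volume-name=syn-nfs01
        esxcli storage nfs list

    The second command just confirms the mount; iSCSI from the Synology works too, it just takes a few more steps on the ESXi side.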

    As for the specs of the boxes:
    Nested VM box:
    i7 3770
    32GB RAM
    vs
    2 physical boxes:
    i5 2400
    16GB RAM
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    For a SAN, you can run Starwind's free iSCSI SAN, which installs on a Windows VM and works without a problem. You can do migrations and imports and whatever else to your heart's content. You really don't need more than 1 box.
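    For the ESXi side of the iSCSI setup, something along these lines should do it from the ESXi shell - a sketch only, and the adapter name (vmhba33) and target IP are placeholders, so check what your host actually calls the software initiator:

        esxcli iscsi software set --enabled=true
        esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
        esxcli storage core adapter rescan --adapter=vmhba33

    After the rescan the StarWind LUN should show up and can be formatted as a VMFS datastore from the client.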
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    You can buy a server (Dell, HP, etc.) or two off eBay pretty cheap.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • mayhem87 Member Posts: 73 ■■□□□□□□□□
    Think I'm going to go with nested, and also use the computer for GNS3.
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    Essendon wrote: »
    Build one decent computer and save on power. 16GB of RAM, an i5/i7 and an SSD or two - this is a good combo for labbing ESXi.

    I've seen a spec list like this many times; the SSD seems to be a critical component for speed in a nested VM lab. One question though: if going with a multiple-box lab with ESXi on the metal, are SSDs just as critical?
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    MacGuffin wrote: »
    I've seen a spec list like this many times; the SSD seems to be a critical component for speed in a nested VM lab. One question though: if going with a multiple-box lab with ESXi on the metal, are SSDs just as critical?

    Usually the two biggest performance bottlenecks are RAM and IOPS. SSDs have ~2,000 IOPS while a 15k SAS drive is around 180 IOPS.
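    As a rough rule of thumb (the per-VM figure here is just an assumption for illustration): if a mostly idle lab VM averages around 25-30 IOPS, a single 15k spindle at ~180 IOPS runs out of steam at roughly half a dozen VMs, while a single SSD at ~2,000 IOPS still has headroom for dozens.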
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    dave330i wrote: »
    Usually the two biggest performance bottlenecks are RAM and IOPS. SSDs have ~2,000 IOPS while a 15k SAS drive is around 180 IOPS.

    Right - since SSDs don't have to move a read head across a spinning platter, seek times are essentially zero. That means the number of operations they can perform in a given time can be much higher.

    What I'm considering is getting three or four small servers: two would be ESXi boxes, one a file server, and any beyond that might switch roles between file server and ESXi box depending on need. I can get these servers with an SSD or HD, but there is a difference in price, performance, and size. What kind of performance boost could I expect with an SSD? Is that performance boost worth the money and/or loss in drive space?

    I realize much of what I am asking is subjective, but I have to start somewhere. I suppose I could get one with an HD and another with an SSD and test them out myself.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    The performance difference is quite significant - you'll have your VMs booting up in like 15 seconds. You don't need to assign too much disk to your VMs anyway; just go with a gig or so more than what the OS requires.

    While I agree that you learn more by researching, I reckon you should keep it simple and just get a desktop machine that runs an i5/i7 with 16GB or more of RAM and an SSD or two. Run nested ESXi and a Starwind iSCSI SAN in a VM on the host ESXi and you're good to go. You can create any number of vNICs on your nested ESXi VMs and play with vSwitches and dvSwitches as much as you want. That way you're not strapped by the number of pNICs you have on your physical machine. The space and power savings are a no-brainer too.
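    For what it's worth, on an ESXi 5.1+ host the main tweak for a nested ESXi VM is exposing hardware virtualization to the guest. A minimal sketch of the relevant .vmx lines (or tick "Expose hardware assisted virtualization to the guest OS" in the web client):

        guestOS = "vmkernel5"
        vhv.enable = "TRUE"

    On a 5.0 host the equivalent is the host-wide vhv.allow = "TRUE" in /etc/vmware/config instead.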
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • MentholMoose Member Posts: 1,525 ■■■■■■■■□□
    If you are going with a physical lab, the ESXi machines don't necessarily even need hard drives. There's not much need to lab with local datastores, and you can install ESXi on a USB stick. Many newer servers include internal USB ports for this purpose. You can then put additional, bigger, and/or faster/SSD drives in your SAN/NAS machine to store VMs.
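    One caveat with USB boot: the host keeps its scratch/log area on a ramdisk, so once a shared datastore exists it's common to point scratch there and reboot. Roughly, from the ESXi shell (the datastore and folder names are just examples):

        mkdir /vmfs/volumes/nas-datastore/.locker-esx01
        vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/nas-datastore/.locker-esx01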
    MentholMoose
    MCSA 2003, LFCS, LFCE (expired), VCP6-DCV
  • nhan.ng Member Posts: 184
    I'm gonna go with 2 boxes plus a storage box. This is how most companies have their setup. You can play with the networking portion to see how it affects your VM performance. There's a lot more that goes into keeping everything running smoothly than just ESXi itself - troubleshooting network performance, failover setup, etc.
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    Essendon wrote: »
    The performance difference is quite significant - you'll have your VMs booting up in like 15 seconds. You don't need to assign too much disk to your VMs anyway; just go with a gig or so more than what the OS requires.

    While I agree that you learn more by researching, I reckon you should keep it simple and just get a desktop machine that runs an i5/i7 with 16GB or more of RAM and an SSD or two. Run nested ESXi and a Starwind iSCSI SAN in a VM on the host ESXi and you're good to go. You can create any number of vNICs on your nested ESXi VMs and play with vSwitches and dvSwitches as much as you want. That way you're not strapped by the number of pNICs you have on your physical machine. The space and power savings are a no-brainer too.

    Two SSDs? Is the idea to RAID them? Spread the load over two disks (not RAID just two independent volumes)? More space? Dual booting? All the above?

    Other than my confusion on the need or desire for dual SSDs I agree with your points.
    If you are going with a physical lab, the ESXi machines don't necessarily even need hard drives. There's not much need to lab with local datastores, and you can install ESXi on a USB stick. Many newer servers include internal USB ports for this purpose. You can then put additional, bigger, and/or faster/SSD drives in your SAN/NAS machine to store VMs.

    Using USB sticks as the only persistent storage is intriguing in many ways. In my search I have not seen any servers in my price range that lack storage. As pointed out in other threads I've started I'm reluctant to build a server for many reasons.

    How does booting from a USB stick compare to HD and SSD when it comes to performance? I assume it lies somewhere in between. Anything else I should know about stripping out the SATA drives before I grab my screwdriver?

    One idea that just crossed my mind is that I could order two, three, or more servers with identical specs and move all the drives into the file server. This could be a nice way to keep my costs low and performance high. Preconfigured systems tend to cost less than customized systems, even if that means removing one drive from one system and adding an identical drive to another.
    nhan.ng wrote: »
    I'm gonna go with 2 boxes plus a storage box. This is how most companies have their setup. You can play with the networking portion to see how it affects your VM performance. There's a lot more that goes into keeping everything running smoothly than just ESXi itself - troubleshooting network performance, failover setup, etc.

    I'm tending to agree here. There's more to managing a virtual machine network than just setting up ESXi; there's hardware to manage as well. In a completely virtual environment something is lost.

    I'm not sure how the cost difference between the two setups works out. I'm looking at two or three small servers for about $2500, each with 2GB or 4GB RAM, a dual-core i5 or so, and a small HD. Add in a display, a KVM switch of some sort, an Ethernet switch, USB sticks and maybe some other stuff, and it adds up to the $3000 range. On the other hand I could go with a single desktop computer (quad-core i7, 16GB RAM, an SSD in the 200 - 500GB range, display, keyboard, and other stuff) and it also adds up to be in the $3000 range. A laptop would be about the same price as well, but I'd lose a bit on processor speed, drive space, screen size, and maybe other areas, and gain in portability, power consumption, noise, and convenience. With some sacrifice in performance I could probably bring the price of any option down to about $2000, but I don't believe I'd want to go any lower than that.

    I'm thinking about a new laptop, if only because this project gives me an excuse to replace my current one, which is starting to have issues - it's just plain getting worn out.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    MacGuffin wrote: »
    Two SSDs? Is the idea to RAID them? Spread the load over two disks (not RAID just two independent volumes)? More space? Dual booting? All the above?


    By two I meant spread the load around - two independent volumes. Just more space. A lappie is not a bad idea either. RAM and disk IOPS are really what you need to take into consideration; any solution would do.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • mayhem87 Member Posts: 73 ■■□□□□□□□□
    @Macguffin
    you can build i5's for cheap

    Here's what I built on Newegg:
    i5 3450s (quad core) $200
    G Skill 16GB Ram $95
    ASrock H77 Mobo $90
    Case w/ 420W PS $80

    After shipping it came to around $475. I was going to multiply this by 2 for two desktops, so $950.

    or

    The one powerful desktop
    i7 3770s (quad core + HT) $320
    G Skill 16GB Ram $220
    ASrock H77 Mobo $90
    Case w/ 420W PS $80

    Plus shipping = $719.95.

    Obviously you're still going to need some HDDs or SSDs somewhere in the mix.
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    Essendon wrote: »
    By two I meant spread the load around - two independent volumes. Just more space.

    That's what I thought you meant, just wanted to be clear on it.
    Essendon wrote: »
    A lappie is not a bad idea either. RAM and disk IOPS are really what you need to take into consideration; any solution would do.

    The laptops I'm looking at are higher-end and so will have the 16GB RAM, quad-core i7, and SSD that so many recommend. The really pricey part is the SSD storage. I can keep the laptop price under $3000 with a 256GB SSD; anything bigger and I can easily go over $4000. With a desktop I can keep it under $2000 if I don't get the SSD, but after seeing some demonstrations and benchmarks I don't believe I'll be very happy with spinning disks.

    My current laptop has a 500GB spinning disk. If I do without the dual-boot partition and don't move over my music library, I should be happy with 256GB on a new laptop for both ESXi labbing and my everyday computing.

    I'm still wondering, though: what kind of performance hit would I see from using HDs when running multiple ESXi servers? I'm guessing that since the storage would live on a separate file server in most cases, the ESXi servers won't see the performance drop directly. Would the gigabit Ethernet lag mask any performance lost with HDs over SSDs? I'm thinking I could also make up for some of the loss by RAID-mirroring the drives in the file server. If I strip the drives from the computers running ESXi then I should have plenty of HDs to stack in the file server for a RAID.

    Thanks to everyone for the help here. Lots of good stuff but still plenty to think about.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Your vCenter will need x64 Windows anyway, so you might as well go for 2008 R2 and install the free iSCSI target - quick enough for labbing. Single-server setups tend to have the disk as the bottleneck, so an SSD is a good, needed choice.
    My own knowledge base made public: http://open902.com :p
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Oh, and for disks: I run an Adaptec 5805 with 2 arrays. 4x 300GB 15k SAS in RAID 10, which gets decent IOPS. The second array uses 1TB 7200rpm spindles in RAID 10, and I am struggling with I/O when running nested VMs. So yes, storage will be the bottleneck, and I am not sure that SSDs are THAT much quicker. It all depends on how many VMs you intend to run and what those VMs will be doing. I got a SLIGHT performance gain by using eager-zeroed thick disks, but that means you run out of storage quickly, since presumably your SSDs aren't the biggest, most expensive ones.

    I also run a FusionIO card and that thing flies but surely way above budget :)
    My own knowledge base made public: http://open902.com :p
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    mayhem87 wrote: »
    @Macguffin
    you can build i5's for cheap

    I've explained this in other threads, and I don't want to drive this thread too far off topic, so I'll simply say that building a computer has been considered but is likely not a good option for me.
    mayhem87 wrote: »
    Obviously you're still going to need some HDDs or SSDs somewhere in the mix.

    I'll need other stuff too. I won't bore you with the details, but a lot of my computer equipment is just plain getting old. I used to have a stack of spare mice, keyboards, and so on, but a lot of that stuff is now nearly worn out or just plain obsolete. Unless these computers you specced out have PCI slots, VGA and PS/2 ports, I'll have to consider the price of a new display, keyboard, mouse and perhaps a few other things. That's just part of the reason why I believe building a computer would be a poor choice for me.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    jibbajabba wrote: »
    Your vCenter will need x64 Windows anyway, so you might as well go for 2008 R2 and install the free iSCSI target - quick enough for labbing. Single-server setups tend to have the disk as the bottleneck, so an SSD is a good, needed choice.

    Really? I need x64 Windows for the ESXi management software? I could have sworn I saw it run on x86 Windows XP. I did plan on running Windows Server 2008 for some stuff in time-limited evaluation mode. I've got a stack of WinXP computers around here too that I can use for some things. Did I mention my stack of old hardware? :D

    jibbajabba wrote: »
    Oh, and for disks: I run an Adaptec 5805 with 2 arrays. 4x 300GB 15k SAS in RAID 10, which gets decent IOPS. The second array uses 1TB 7200rpm spindles in RAID 10, and I am struggling with I/O when running nested VMs. So yes, storage will be the bottleneck, and I am not sure that SSDs are THAT much quicker. It all depends on how many VMs you intend to run and what those VMs will be doing. I got a SLIGHT performance gain by using eager-zeroed thick disks, but that means you run out of storage quickly, since presumably your SSDs aren't the biggest, most expensive ones.

    I also run a FusionIO card and that thing flies but surely way above budget :)

    Some of the servers I was looking at did have 15k HDs as standard equipment. I should be able to RAID the drives or otherwise spread the load among the drives somehow. Your experience makes me feel better about that option.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • Forsaken_GA Member Posts: 4,024
    If you are going with a physical lab, the ESXi machines don't necessarily even need hard drives. There's not much need to lab with local datastores, and you can install ESXi on a USB stick. Many newer servers include internal USB ports for this purpose. You can then put additional, bigger, and/or faster/SSD drives in your SAN/NAS machine to store VMs.

    This is what I did. I just replaced my big noisy-ass DL385s with a pair of custom-built boxes: AMD FX6100s on ASUS boards with 16 gigs of RAM and additional NIC cards. I already had a storage NAS (Synology 1511+), so all I needed were boxes to provide proc and mem. Built both boxes for about 900 bucks and installed ESXi 5 on USB thumb drives.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    MacGuffin wrote: »
    Really? I need x64 Windows for the ESXi management software? I could have sworn I saw it run on x86 Windows XP.


    The client, maybe, but not the server - that requires 64-bit and enough oomph for SQL Express.

    VMware KB: Minimum system requirements for installing vCenter Server

    With v4 you can get away with 32-bit, but v5 requires 64-bit.
    My own knowledge base made public: http://open902.com :p
  • MentholMoose Member Posts: 1,525 ■■■■■■■■□□
    MacGuffin wrote: »
    Two SSDs? Is the idea to RAID them? Spread the load over two disks (not RAID just two independent volumes)? More space? Dual booting? All the above?
    Besides those possibilities, another is price. Currently there is a huge jump between 256 GB and 512 GB SSDs, so two 256 GB SSDs should be significantly cheaper than one 512 GB SSD. Now is a great time to buy a 256 GB SSD as there have been many deals lately. I've seen 256GB Crucial M4 and Samsung 830 SSDs (both are well regarded) going for $200 or less (I couldn't resist picking one up).
    MacGuffin wrote: »
    How does booting from a USB stick compare to HD and SSD when it comes to performance? I assume it lies somewhere in between. Anything else I should know about stripping out the SATA drives before I grab my screwdriver?
    Performance of what? The disk the hypervisor runs from does not need to be fast, unless you will be rebooting it constantly and really need fast boot times. And booting ESXi from a USB stick is not all that slow anyway. If you mean performance of VMs running on a USB stick, ESXi won't let you do this, and if there is some hack to allow it I assume performance would be poor. But like I said, labbing with VMs on a local datastore is not useful anyway.
    MacGuffin wrote: »
    I'm not sure how the cost difference between the two set ups work out. I'm looking at a two or three small servers for about $2500, each with 2GB or 4GB RAM, a dual core i5 or so, and a small HD.
    I'd aim a bit higher. 2 GB RAM is just too low to be useful, and 4 GB is better but still limiting. I recommend at least 8 GB RAM. Have you checked out Dell Outlet? Refurb Dell R210 II servers with specs like that (Core i3, 2 GB RAM, 500 GB SATA disk drive) are under $700, and a guaranteed compatible 8 GB RAM kit is $85 from Crucial. The R210 II is a nice server (I have access to some at work) and on the VMware HCL.
    MentholMoose
    MCSA 2003, LFCS, LFCE (expired), VCP6-DCV
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    jibbajabba wrote: »
    The client, maybe, but not the server - that requires 64-bit and enough oomph for SQL Express.

    VMware KB: Minimum system requirements for installing vCenter Server

    With v4 you can get away with 32-bit, but v5 requires 64-bit.

    From that link...
    The vCenter Server 5.0 system can be a physical machine or virtual machine.

    If it can be a virtual machine then I should be good with running it on the same computer as ESXi. Or am I assuming too much? This could be a problem - like a $600 problem.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • Forsaken_GA Member Posts: 4,024
    jibbajabba wrote: »
    The client, maybe, but not the server - that requires 64-bit and enough oomph for SQL Express.

    VMware KB: Minimum system requirements for installing vCenter Server

    With v4 you can get away with 32-bit, but v5 requires 64-bit.

    Forgetting about the vCenter 5 Linux appliance, are we?
  • Forsaken_GA Member Posts: 4,024
    MacGuffin wrote: »

    If it can be a virtual machine then I should be good with running it on the same computer as ESXi. Or am I assuming too much? This could be a problem - like a $600 problem.

    You can certainly run vCenter as a VM; I do it myself since it's just a lab. If you're only going to run ESXi on a single box, then vCenter is overkill - you can save yourself the effort and just connect to the ESXi host directly to manage it. If you are going to deploy more than one host with ESXi, though, you will want vCenter.

    As an aside, vCenter 4.1 also requires 64-bit Windows; vCenter 4 can run on either 32-bit or 64-bit. vCenter 5 has a Linux appliance available that you can deploy as a VM if you don't want to deal with Windows licensing or whatever. I personally have vCenter installed on 2008R2 x64 because I figured I might as well use my TechNet licenses for something.
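    If you do try the Linux appliance, the OVA can be deployed from the vSphere Client (File > Deploy OVF Template) or scripted with ovftool, roughly like this - the host, datastore, network and file names are placeholders:

        ovftool --acceptAllEulas --name=vcsa01 --datastore=datastore1 --network="VM Network" VMware-vCenter-Server-Appliance.ova vi://root@esx01/

    After it boots, the rest of the setup (EULA, embedded DB, starting the service) happens in the appliance's web console on port 5480.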
  • MacGuffin Member Posts: 241 ■■■□□□□□□□
    Besides those possibilities, another is price. Currently there is a huge jump between 256 GB and 512 GB SSDs, so two 256 GB SSDs should be significantly cheaper than one 512 GB SSD. Now is a great time to buy a 256 GB SSD as there have been many deals lately. I've seen 256GB Crucial M4 and Samsung 830 SSDs (both are well regarded) going for $200 or less (I couldn't resist picking one up).

    Agreed. I saw that price difference when playing with the configurations on the computers I was considering. Most of the systems don't offer dual SSD but do have an option for one SSD and one HD. I can order them that way and move the drives around as needed to optimize performance.

    Performance of what? The disk the hypervisor runs from does not need to be fast, unless you will be rebooting it constantly and really need fast boot times. And booting ESXi from a USB stick is not all that slow anyway. If you mean performance of VMs running on a USB stick, ESXi won't let you do this, and if there is some hack to allow it I assume performance would be poor. But like I said, labbing with VMs on a local datastore is not useful anyway.

    There's boot times, just like you said. I was assuming there would be some swapping to the drive. Maybe swapping does not happen often enough to matter, or it's sent off to a file share. Thinking about it more, I believe you. The flash drive is likely used almost like a read-only drive anyway, so there's no real need to be concerned with its speed.
    I'd aim a bit higher. 2 GB RAM is just too low to be useful, and 4 GB is better but still limiting. I recommend at least 8 GB RAM. Have you checked out Dell Outlet? Refurb Dell R210 II servers with specs like that (Core i3, 2 GB RAM, 500 GB SATA disk drive) are under $700, and a guaranteed compatible 8 GB RAM kit is $85 from Crucial. The R210 II is a nice server (I have access to some at work) and on the VMware HCL.

    Really? 8GB for a computer with ESXi on the metal running two, three, maybe four VMs at a time, all of them not really doing anything? I believe I was looking at that very same computer or something very similar. I was also looking at another brand that was a bit more decked out for about $1200. Three of the low-end systems or two of the higher ones means a grand total of about $2500, which is about where I started in my previous estimate. The addition of the networking stuff to hook it all together adds to the cost, and other things like displays and keyboards mean a total system cost of about $3000. Even if I need to add to this cost estimate for more RAM, I should be able to stay under budget.

    OK, I'll have to look at this tomorrow. I took my sleeping pills and weird stuff is going on now. The couch cushions are giving me a mean look. The fan keeps pacing back and forth across the room, making me nervous. I think the DVD player wants to play Nintendo, or maybe eat the controllers. I think it's time to lie down now that the kleenex box is waving good doggy at me.
    MacGuffin - A plot device, an item or person that exists only to produce conflict among the characters within the story.
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    You can certainly run vCenter as a VM; I do it myself since it's just a lab. If you're only going to run ESXi on a single box, then vCenter is overkill - you can save yourself the effort and just connect to the ESXi host directly to manage it. If you are going to deploy more than one host with ESXi, though, you will want vCenter.

    If the OP is planning on getting the VCP, he'll need to set up vCenter to practice.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Forsaken_GA Member Posts: 4,024
    dave330i wrote: »
    If the OP is planning on getting the VCP, he'll need to set up vCenter to practice.

    Well certainly, but VCP was not a stated aim of the OP, just learning VMware. I set up my VMware cluster without any intention of ever pursuing a VCP; with the trend toward virtualization, I figured that as a network engineer it would be a good idea to get some experience with the implementation, so I had a clue about the network aspects of running VMware. I now use VMware for many other things, as I basically have a full enterprise server infrastructure supported on my VMware cluster, but I still have no intention of going for a VCP hehe