2008 r2 lab hardware recommendations
w^rl0rd
Member Posts: 329
So it looks like my organization is finally moving to 2008 r2 so I'm going
to need to update my skills. I've got a very nice 2003 lab full of x86 hardware
but as we all know, a 2008 lab will require x64 hardware which I'm guessing
is going to be expensive.
I'm thinking I'd like to just buy one very robust machine and host a bunch of
2008 r2 VM's using ESX or Hyper-V. For those of you with 2008 r2 labs, how did
you do it on the cheap?
Comments
-
ptilsen Member Posts: 2,835 ■■■■■■■■■■Almost all x86 processors made in the last five years are x64. That is not a concern you should have.
Any modern multi-core processor should be fine. I'd recommend the following hardware if you want a full, dedicated lab:
6-core, 8-core, or 4-core with HT
16GB RAM
120GB SSD or fast drives in RAID10
Those are IDEAL specs, and you can get away with about half that if you need to.
You should do Hyper-V, not ESX. Not because ESX is an inferior platform (quite the contrary), but because Hyper-V is a significant portion of 70-643 and moonlights on 70-640 and 70-647.
If you are really ambitious, build a lab with three nodes like this:
One iSCSI or Fibre Channel server (any platform or hardware)
Two hosts, each with:
32GB SSD or cheap hard drive
4-core CPU or better
8GB RAM or better
Three or more NICs
Two or more physical switches, or a switch with VLAN support
This will allow you to fully lab Hyper-V with Cluster Shared Volumes and MPIO, along with pretty much everything on every exam, all within a single environment.
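For the storage node in that list, any cheap Linux box will do. A minimal sketch using the tgt userspace iSCSI target, assuming tgtd is already running as root and /dev/sdb is a spare disk (the IQN and device name are illustrative, not anything the exams require):

```shell
# Create an iSCSI target and export a spare disk as LUN 1.
# The IQN and /dev/sdb are placeholders for your own lab values.
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2011-06.lab.local:cluster.lun1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/sdb
# Allow initiators to connect (tighten to the two hosts' IPs in practice):
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
```

Both Hyper-V hosts then connect with the built-in Microsoft iSCSI Initiator, and the LUN can be brought into the cluster as a shared volume.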
The first setup I described will cost about $700-800. The second will cost $1200 to $2000.
You can achieve most, probably all, of the certifications without either setup, but a good lab goes a long way. It will be very hard to achieve some of the objectives if you don't have either real experience with them or a good lab. -
Tackle Member Posts: 534I bought a Dell PowerEdge T110 and loaded ESXi 4.1 on a USB drive that I plug into the mobo to boot. That's probably the cheapest of the Dell servers; it can support up to 16GB of RAM. I got the one with the quad-core Xeon.
I had it connected up to a SAN, which was an old Pentium 4 PC with 2GB of RAM.
You're going to need a good disk subsystem if you want to have more than a couple of VMs running at once. I noticed that 3 VMs running off 1 hard drive is pushing it; 1 or 2 VMs per hard drive is ideal, unless you have a SAN or a NAS.
$500 or so for the PowerEdge, and a couple hundred to get everything else (disks, SAN, extra NICs). -
ptilsen Member Posts: 2,835 ■■■■■■■■■■Actually, the used server approach can be a good one. It's more likely to come with a decent disk system or extra NICs to set up an iSCSI environment. Two-generation-old servers are pretty cheap and usually come with dual quad-core processors.
I still have to recommend using Hyper-V unless you've already had significant exposure. It is different enough from VMware that you can probably study for and correctly guess 80% of the questions on it, but between that 20% and the other storage questions, you can fail partly because of those objectives. -
kriscamaro68 Member Posts: 1,186 ■■■■■■■□□□With something like this you need to take power costs into account if you will be using it a lot. I'm personally using an Athlon 6000+ with 4GB of RAM for two of my OSes, and a Q6700 with 16GB of RAM running four OSes. I also have an Athlon 3500+ and an HP workstation with two dual-core CPUs and 8GB of RAM, but it's power hungry so I haven't set it up yet. I run a single HD/SSD for each VM. You can get 60GB SSDs for pretty good deals if you look for them. Anyway, it suits my needs with the old equipment I have laying around. -
Hypntick Member Posts: 1,451 ■■■■■■□□□□I'm going to second this one here. I picked up a PowerEdge 1950 on eBay a while back, and it's served me well so far. Obviously there are things I can't lab due to lack of infrastructure, DirectAccess for example, but I think it's kinda nice to have around.
I can see power costs becoming an issue if you have multiple servers; however, with just one and a single PSU hooked up, it's not bad at all. Mine is powered up 3-4 hours daily with no noticeable increase in energy costs.
WGU BS:IT Completed June 30th 2012.
WGU MS:ISA Completed October 30th 2013. -
ptilsen Member Posts: 2,835 ■■■■■■■■■■Power costs are pretty insignificant unless you are running for 5+ hours a day. I used to run a set of servers comparable to what I described 24/7, and it was maybe $100/month. That will vary a lot based on region and the actual hardware, but you won't realistically run servers 24/7. You run them only when you lab. Even if you run the hosts 24/7, they won't use that much power at idle.
-
bdub Member Posts: 154For myself I did it on the cheap by building a new gaming/lab rig.
i5 2500k @ 4.5GHz
16GB of RAM
120GB OCZ SSD
2x 500GB SATA in RAID 0
2x 2TB SATA in RAID 1 (originally 2x 1TB drives, but I upgraded to 2TB when one of the 1TB drives started failing)
GTX 560 Ti
I run 2008 as my desktop OS with Hyper-V for my lab. The only real downside was when I wanted to lab VMM, which required me to join my host/desktop to my lab domain (VMM needs to be installed under a domain account).
Cheaper than building a dedicated lab, since it serves more than one purpose and will still be useful after I'm done with the MCITP. The 16GB of RAM is plenty, and by running my most-used VMs off the RAID 0 array and the others off the RAID 1, I can run plenty of VMs without any performance issues.
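For anyone labbing the same VMM scenario, the domain join can also be scripted from an elevated command prompt; a sketch, where the domain name and account are placeholders for your own lab domain:

```shell
:: Join this machine to the lab domain so VMM can be installed under
:: a domain account. lab.local and LAB\Administrator are placeholders.
netdom join %COMPUTERNAME% /Domain:lab.local /UserD:LAB\Administrator /PasswordD:* /Reboot
```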
This has gotten me through 640, 642 and 643, currently working on 647. -
lordxar Member Posts: 14 ■□□□□□□□□□One thing to watch out for: make sure your server has the hardware virtualization settings (Intel VT-x or AMD-V) available in the BIOS. I bought a nice 64-bit server off eBay only to find out those settings were not there. Unfortunately, that means I can only run a host and no VMs; Hyper-V will not start, and VMware cannot run 64-bit guests, unless those settings are enabled.
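Before buying used hardware, you can at least confirm the CPU advertises the extensions by booting any Linux live CD and checking /proc/cpuinfo (the BIOS toggle is a separate hurdle, but a missing flag rules the box out entirely). A quick sketch; the check_virt helper is just an illustrative name:

```shell
#!/bin/sh
# check_virt FILE: report whether a cpuinfo dump lists the Intel VT-x
# ("vmx") or AMD-V ("svm") CPU flag -- the feature Hyper-V needs and
# VMware needs for 64-bit guests.
check_virt() {
    if grep -q -w -E 'vmx|svm' "$1"; then
        echo "virtualization flag present (vmx/svm)"
    else
        echo "no vmx/svm flag found"
    fi
}

# On a live Linux boot, check the real CPU:
[ -r /proc/cpuinfo ] && check_virt /proc/cpuinfo || true
```

Even with the flag present, the BIOS setting still has to be enabled before Hyper-V or ESX will run guests.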