cyberguypr wrote: » Where did you read about the 12 concurrent VMs? That sounds insanely high. Even in the MOC they have you shut down unused VMs. Keep in mind that you can run some VMs with the bare minimum memory assigned, they'll just be slow. Also, some roles can coexist and do not really require a separate box.
the_Grinch wrote: » Would this do the trick? Newegg.com - ASUS Maximus IV Gene-Z LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard
Essendon wrote: » Go with a desktop if money is an issue. Cheaper RAM, cheaper HDDs. Not to mention a desktop can be fixed by just about any repair shop down the road. Get an i7, max out the RAM (get like 24GB on a decent motherboard if you can afford it), and you'll be laughing. Chuck in SSDs if you can and you are good to go; if SSDs are too expensive, regular hard drives will do. Most of these exams only need 3-5 VMs; the only lab I can remember needing 7-8 VMs was the RMS one. But like they say, the more the merrier!
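(A quick sanity check on those RAM numbers — a minimal Python sketch; the host overhead and per-VM allocation are assumptions for illustration, not figures from the thread.)

```python
# Back-of-envelope RAM budget for a lab box. The per-VM allocation and
# host overhead below are illustrative assumptions, not thread figures.
HOST_OS_GB = 2   # assumed RAM reserved for the host OS
VM_RAM_GB = 2    # assumed average allocation per Server 2008 R2 VM

def max_concurrent_vms(total_ram_gb: int) -> int:
    """VMs that fit in RAM without swapping, under the assumptions above."""
    return int((total_ram_gb - HOST_OS_GB) // VM_RAM_GB)

for total in (8, 16, 24):
    print(f"{total}GB total -> ~{max_concurrent_vms(total)} comfortable VMs")
# 8GB -> ~3 VMs, 16GB -> ~7, 24GB -> ~11 (covers even a 7-8 VM lab)
```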
demonfurbie wrote: » Don't forget to look at gaming hardware. I make a lot of servers using gaming hardware over server boards.
jamesbrown wrote: » Can you recommend a better motherboard for me?
Krunchi wrote: » Newegg.com - ASRock P67 EXTREME4 (B3) LGA 1155 Intel P67 SATA 6Gb/s USB 3.0 ATX Intel Motherboard I'm using this motherboard with the Intel Sandy Bridge i7-2600K and 16GB of RAM. I have run up to six 2008 R2 VMs and two Win 7 VMs without any issues. As for 12 VMs, I don't think you'll run that many at once. I'm taking the 640 cert next week, and the most VMs I have run at once was eight, and that was overkill. If you want my full PC build, let me know; it is so smoking fast it's unreal. I have never locked it up or bogged it down, and I've been trying. A few tips for labbing that work for me: use a separate drive for your VMs, use a VHD to dual-boot into 2008 R2 and run your VMs there, and defrag your VM drive once a week.
jamesbrown wrote: » Can you give me your full PC build?
Krunchi wrote: » Here you go: Newegg.com - Once You Know, You Newegg Like the post a few up said, you can't go wrong with gaming hardware; it is made to perform, and it does. You can go cheaper on a few things, but you need to watch the cooling; these things run pretty hot if you don't pick the right hardware. This system runs nice and cool, is very quiet, and can handle anything you throw at it. If you need some more help, just ask. The two 1TB drives can be set up in RAID 0 for more speed; the 500GB drive is my VM drive. Here is the link for how to set up a VHD if you want to go that route: How to install Windows Server 2008 R2 with Hyper-V and Windows 7 on the same partition, from Colin Smith - White Papers, Webcasts and Case Studies - ZDNet
jamesbrown wrote: » You spent a lot of money on your toy. I didn't buy some of the stuff, but I'm at $960.
Krunchi wrote: » That was just the basics; all the extras made my wife scream and yell at me. Stuff like the case, power supply, video card, NIC, sound card, card reader, and the extra drives can be changed or removed. The processor can be changed, but I would stick with Intel; the i5-2500K is great and a little cheaper. Stay away from AMD right now; Intel has them beat on performance by a lot at the moment. As for the memory, you could change the brand and save maybe 20 bucks, but I highly advise sticking with the Corsair RAM on the list; I have bought two sets of them for two computers and have had zero issues. As for the heatsink and the thermal paste, they are must-haves; don't go cheap there. You can shoot me a private message if you want more help, or keep it going here. Good luck building that new computer and the labs!
jamesbrown wrote: » Can I just use a different power supply? The one you bought is sold out.
MentholMoose wrote: » Yes, PSUs are generally interchangeable. Do you plan on playing games on this PC you're planning to build? If you don't have (or plan to ever install) a gaming video card, you don't need an expensive PSU... $40-50 from a name brand is fine. I don't recommend skimping on the PSU (e.g. no-name under $20) since I've seen cheap PSUs die and kill other components too many times. If you are just building a lab machine that needs to run VMs (e.g. in VMware Workstation, VirtualBox, or Hyper-V), you can build a machine with good quality components for about $500. For labbing, the three things you need to worry about, in order of importance, are storage, RAM, and CPU (basically the opposite of a gaming rig).

Storage - A huge hard drive is unnecessary... a 1TB drive could store 100 VMs each with a 10GB virtual disk, but you might be able to run 10 of them simultaneously, if you're lucky (5 is more realistic, or even fewer if you are doing anything disk-intensive). Get more, smaller drives.

RAM - I did my SA, EA, and EDA7 on a box with 8GB of RAM, but 16GB seems to be the sweet spot right now... about $100 will get you good quality DDR3 from a name brand (lifetime warranty). That's what I paid for 8GB of DDR2 when I built my lab machine in early 2009.

CPU - For labbing, what matters is simply the core count. The exception would be ultra-budget CPUs meant for netbooks and nettops (e.g. Intel Atom), which aren't really suitable for labbing. If you want to build a machine specifically for labbing, I recommend finding the cheapest quad- or hex-core CPU with virtualization extensions (Intel VT-x, AMD-V) you can and building a machine around it. AMD has sub-$100 quad-core CPUs ($140 gets you hex-core), so for a lab machine that is what I'd recommend. A $200+ CPU is great for gaming, video editing, CAD, and similar use cases, but for labbing it is not necessary to spend that much. If you have a lot of VMs running, a $140 hex-core CPU is likely to outperform a $300 quad-core.

Chances are, however, that when labbing you will run out of disk performance long before hitting any CPU bottleneck. At my last job I had a cluster of eight XenServer hosts running Windows XP VMs... each server only had eight cores (two quad-core Opterons) and 64GB of RAM, but one of those hosts could run 100+ VMs because the storage could handle it. That environment, however, was STILL limited by storage... if I actually tried to spin up 800 VMs in that cluster, I'm sure the storage would have died. I sized the VMs so there would be about 40 VMs per host max (typically 20-30).
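(MentholMoose's storage point in back-of-envelope form — a minimal Python sketch. The 1TB drive and 10GB virtual disks come from the post; the IOPS figures are assumptions for illustration.)

```python
# Capacity vs. performance: a 1TB drive *stores* ~100 10GB VMs, but the
# disk's random I/O budget caps how many can *run*. IOPS are assumptions.
DRIVE_GB = 1000   # 1TB drive, from the post's example
VDISK_GB = 10     # per-VM virtual disk, from the post's example
DRIVE_IOPS = 80   # assumed: single 7200rpm SATA drive
VM_IOPS = 15      # assumed: steady-state I/O of one lightly used lab VM

print(f"Stored:   ~{DRIVE_GB // VDISK_GB} VMs")    # ~100, capacity-bound
print(f"Runnable: ~{DRIVE_IOPS // VM_IOPS} VMs")   # ~5, performance-bound
```

Which is why "get more, smaller drives" works: each extra spindle adds IOPS, while one huge drive only adds capacity you can't actually use at once.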