Need help to build virtualization lab

jace5869 Member Posts: 15
So I am looking at getting a used server off eBay to learn some virtualization. I need to test out ESXi for work purposes, but I'm also looking to lab with Hyper-V for certification purposes.

I see some great prices on servers with 48 GB and 72 GB of RAM for various Dell PowerEdge C1100 and HP ProLiant servers. I'm thinking 72 GB of RAM is the way to go, but would upgrading it any further help much?

Would I want to go with dual-, triple-, or quad-channel memory?

How many VMs could I get running (respectably fast) with 72 GB of RAM?

The servers I'm looking at and trying to decide between are below. I'm looking at similar configurations, not just these:

I'm having trouble deciding on the CPU as well. The L5520 seems pretty power-efficient, but the X5650 seems like a beast, and of course it has two additional cores. Anyone have a preference?

I know most people want power efficiency, and that is a concern, but I also won't be running this 24/7. I'm going to put together a Lenovo TS140 for personal use (it will probably be a Hyper-V box).


Intel L5520 servers

Dell PowerEdge C1100 CS24 Ty 1U 2X Xeon QC L5520 2.26GHz 4XTRAYS 72GB DDR3 | eBay

HP Proliant DL160 G6 1U 2X Xeon QC L5520 2.26GHz 4XTRAYS 72GB DDR3 000491532004 | eBay


Intel X5650 servers

Dell PowerEdge C1100 CS24 Ty 1U 2X Xeon Hex Core X5650 2.66GHz 4XTRAYS 72GB DDR3 | eBay

Dell PowerEdge C1100 1U 2X X5650 Xeon 2.66GHz Six Core CPU's 48GB Mem 4X 250GB | eBay

HP Proliant DL160 G6 1U 2X Xeon Hex Core X5650 2.66GHz 4XTRAYS 72GB DDR3 8844200713 | eBay




Also, I guess I would use iSCSI for storage and not actually store anything on the internal drives except the VMs themselves, correct? It would probably be a good idea to RAID together some 60 or 120 GB SSDs for the VM datastore?

Thanks!

Comments

  • cruwl Member Posts: 341
    Most likely your workloads won't be that intensive. Your biggest bottleneck with any of these will be your disk I/O.

    My previous lab was running 6-10 Windows 2008 servers on 16 GB of RAM. The CPU and RAM on any of these will most likely be just fine.

    You can create a Windows Server 2008 box and use it as an iSCSI provider for the rest of the VMs if you want.

    SSDs will help with boot times, as disk will most likely be your biggest bottleneck and source of slowdowns, but regular disks will work just fine since it is a lab. It all depends on how much money you want to spend. Personally I would lean toward one of the first two.
  • Asif Dasl Member Posts: 2,116
    Yeah, I would just get one of the Intel L5520 servers; 72 GB of RAM will be plenty. Dual X5650s are about 50% more powerful than dual L5520s, but I doubt you will need that power for what you are doing in a lab.

    I would get as big an SSD as you can afford - like a 250 or 500 GB SSD - rather than RAID them. I think you future-proof it a little by getting a bigger drive, and it should be able to handle anything you throw at it. I've got a couple of Samsung 840 EVO 500 GB drives which fly along; I'd highly recommend them.
  • jace5869 Member Posts: 15
    50% faster? I'm a sucker for speed. haha..

    Do most people just RAID HDDs, or get a big SSD on the onboard RAID to use for the VM OS installs and then use iSCSI for data? Kind of confused on that part.

    I think the first two would do just fine, but I guess I need to weigh the pros and cons of each. How much extra power would each CPU pull?

    Would I need to buy any type of RAID controller for these?
  • Asif Dasl Member Posts: 2,116
    I did a little Googling and there is no onboard RAID on either, AFAIK. Email the seller to know for sure.

    I should have linked to the reference for 60% faster - dual L5520 (intel spec) vs dual X5650 (intel spec)
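As a sanity check on that figure, here is a naive cores-times-clock comparison. It ignores IPC, turbo boost, and hyper-threading, so it lands higher than benchmark-based comparisons (which the thread puts nearer 50-60%); treat it only as a rough upper bound, not a real benchmark:

```python
# Naive aggregate-throughput heuristic: sockets * cores * base clock.
# Ignores IPC, turbo, and hyper-threading, so it overstates the
# real-world gap that benchmark comparisons put nearer 50-60%.
def aggregate_ghz(sockets, cores, ghz):
    return sockets * cores * ghz

l5520 = aggregate_ghz(2, 4, 2.26)   # dual quad-core L5520
x5650 = aggregate_ghz(2, 6, 2.66)   # dual hex-core X5650

print(f"Dual L5520: {l5520:.2f} GHz aggregate")
print(f"Dual X5650: {x5650:.2f} GHz aggregate")
print(f"Naive uplift: {100 * (x5650 / l5520 - 1):.0f}%")
```

The naive number comes out well above 60% because extra cores rarely scale linearly in practice, which is why the benchmark figure is the one to trust.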

    You usually RAID to increase your IO or for redundancy - if redundancy doesn't matter to you for the lab then just get a single SSD.
  • jibbajabba Member Posts: 4,317
    jace5869 wrote: »
    How many VM's could I get running (respectably fast) with 72g of RAM?

    How long is a piece of string :D ?

    Really depends ...

    As it stands, my nested box has 72GB as well and as you can see, I run it at full blast :

    [screenshot of the host's resource usage]

    And even here, some VMs are stupidly over-specced (and could be lowered WAAYYY below their current config - just lazy :D)

    And the CPU is only high because three virtual hosts are currently booting up.

    DC | 8GB RAM
    NFS | 8GB RAM
    Router | 4GB RAM
    SQL | 8GB RAM
    Tools | 4GB RAM
    vCenter | 8GB RAM
    vMA | 600MB
    CentOS 6 | 4GB RAM
    Win8.1 | 6GB RAM
    vcd-esxi-01 | 12GB RAM
    vcd-esxi-02 | 12GB RAM
    vcd-vcc-01 | 4GB RAM
    vcd-vcc-02 | 4GB RAM
    vcd-vcs-01 | 8GB RAM
    vcd-vsm-01 | 8GB RAM
    virtual-esxi-01 | 8GB RAM
    virtual-esxi-02 | 8GB RAM
    virtual-esxi-03 | 8GB RAM
    esxi6-01 | 12GB RAM
    esxi6-02 | 12GB RAM
    vcs6-01 | 8GB RAM
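Totting up the allocations above gives far more than the 72 GB of physical RAM; ESXi's memory overcommit (page sharing, ballooning) is what makes a lab like this workable. A quick sketch, with the numbers copied from the list and vMA's 600 MB rounded to 0.6 GB:

```python
# Per-VM RAM allocations (GB) from the lab listing above.
vms = {
    "DC": 8, "NFS": 8, "Router": 4, "SQL": 8, "Tools": 4,
    "vCenter": 8, "vMA": 0.6, "CentOS 6": 4, "Win8.1": 6,
    "vcd-esxi-01": 12, "vcd-esxi-02": 12, "vcd-vcc-01": 4,
    "vcd-vcc-02": 4, "vcd-vcs-01": 8, "vcd-vsm-01": 8,
    "virtual-esxi-01": 8, "virtual-esxi-02": 8, "virtual-esxi-03": 8,
    "esxi6-01": 12, "esxi6-02": 12, "vcs6-01": 8,
}
host_ram = 72  # GB of physical RAM in the host
allocated = sum(vms.values())

print(f"Allocated: {allocated:.1f} GB across {len(vms)} VMs")
print(f"Overcommit: {allocated / host_ram:.1f}x on a {host_ram} GB host")
```

Over 2x overcommit, which squares with the comment that many of these VMs are over-specced and mostly idle.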
    My own knowledge base made public: http://open902.com :p
  • Asif Dasl Member Posts: 2,116
    jibbajabba wrote: »
    And even here, some VMs are stupidly highly over-specced (and can be lowered WAAYYY below their current config - just lazy :D)

    DC | 8GB RAM <--- this one right here! lol
    Nooo, ya don't say!

    Edit - Jibba I thought you had a SuperMicro setup, no?
  • jibbajabba Member Posts: 4,317
    Asif Dasl wrote: »
    Nooo, ya don't say!

    Edit - Jibba I thought you had a SuperMicro setup, no?

    lol yeah - happens when you use templates and keep forgetting to change the resources haha ..

    I sold the Supermicro stuff to my current company as they didn't have a lab at all - it draws 6 amps, so not for the un-rich :p

    So from that:

    [photos of the old Supermicro setup]

    To that:

    [photo of the new setup]

    MUCH cheaper :)
  • JoJoCal19 Mod Posts: 2,835
    Jibba, do you find that you can do everything on the nested setup that you could do on the physical setup?
    Have: CISSP, CISM, CISA, CRISC, eJPT, GCIA, GSEC, CCSP, CCSK, AWS CSAA, AWS CCP, OCI Foundations Associate, ITIL-F, MS Cyber Security - USF, BSBA - UF, MSISA - WGU
    Currently Working On: Python, OSCP Prep
    Next Up:​ OSCP
    Studying:​ Code Academy (Python), Bash Scripting, Virtual Hacking Lab Coursework
  • jibbajabba Member Posts: 4,317
    Everything. The only limitation AT THE MOMENT is VLANs. The nested vCloud infrastructure uses some routed setups and I need to replace my physical switch, so as a result I cannot use VLANs properly. The easy 'fix' for the time being is having the routed VMs and the virtual router on the same host. This way you avoid the VLAN (or lack thereof) limitation.

    But the physical host has 6 NICs, so once I get my Cisco SG-300 back I will be back in business. It's a lab .. so it doesn't matter anyway.

    In fact, until last night I even had a Hyper-V setup nested on that thing :)
  • tstrip007 Member Posts: 308
    Fault tolerance is still limited to x86 guests, isn't it? Not a big deal, just sayin'...
  • Shdwmage Member Posts: 374
    I bought the HP ProLiant DL160. It works great. I've been using it for a few months now. I popped a couple of SSDs in there as well as some spindle drives.
    --
    “Hey! Listen!” ~ Navi
    2013: [x] MCTS 70-680
    2014: [x] 22-801 [x] 22-802 [x] CIW Web Foundation Associate
    2015 Goals: [] 70-410
  • jibbajabba Member Posts: 4,317
    tstrip007 wrote: »
    Fault tolerance is limited to x86 guests still isn't it? Not a big deal, just sayin...

    True, not an issue for labs, methinks ... You can still test the "feature" in a lab, and let's face it - given the 1 vCPU limitation, all you'll probably do with FT is configure it for the exam rather than use it in production :p
  • JoJoCal19 Mod Posts: 2,835
    Good to know, thanks jibbajabba. While it's not in my certification plans at the moment, I've had an interest in virtualization, so I plan on getting familiar with the VMware stuff.
  • kriscamaro68 Member Posts: 1,186
    Asif Dasl wrote: »
    I did a little Google and there is no onboard RAID on either AFAIK. Email the seller to know for sure.

    I should have linked to the reference of 60% faster - Dual L5520 (intel spec) V's Dual X5650 (intel spec)

    You usually RAID to increase your IO or for redundancy - if redundancy doesn't matter to you for the lab then just get a single SSD.

    They do not have RAID unless the seller specifically says so. I have bought both Dell C1100s and HP DL160s from that seller and can tell you they do not have RAID. With that said, if you want to install Windows on there, you can install the OS on one drive and then, in the OS, make a storage pool with the other three drives for both speed and size.

    Also, when it comes to CPU, get the L5520. It will cost less to run, and you really won't see the benefit in speed unless you are running CPU-intensive apps in your VMs. Right now I have two HP DL160 G6s in a failover cluster. Prior to that I had two Dell C1100s in a failover cluster. They all worked great and used 90 watts apiece with four SSDs. They were all the same config, with two L5520s and 72 GB of RAM.
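To put that 90 W figure in perspective, here is a back-of-envelope running-cost estimate. The $0.12/kWh electricity rate is an assumed example, not anyone's actual tariff; substitute your own:

```python
# Rough annual electricity cost for one host drawing ~90 W.
# The $0.12/kWh rate is an assumed illustrative figure.
watts = 90
rate_per_kwh = 0.12            # USD per kWh, assumed
hours_per_year = 24 * 365      # 8760 hours

kwh = watts / 1000 * hours_per_year          # kWh if run 24/7
print(f"24/7: {kwh:.0f} kWh, about ${kwh * rate_per_kwh:.0f}/year")

# Running it only ~40 hours a week (evenings/weekends) cuts that a lot:
kwh_part = watts / 1000 * 40 * 52
print(f"40 h/week: {kwh_part:.0f} kWh, about ${kwh_part * rate_per_kwh:.0f}/year")
```

Which backs up the point in the thread: if the box isn't running 24/7, the L5520 vs X5650 power difference matters even less.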
  • kriscamaro68 Member Posts: 1,186
    Asif Dasl wrote: »
    Kris - Why did you change from Dell C1100's to HP DL160's if everything was working?

    I sold the Dells to get some money to finish a rifle build I was doing. I ended up going with the HPs because, when I bought the lab setup again, the HPs were cheaper for more or less the same thing. The Dells come with Intel NICs whereas the HPs come with an HP NIC. I am not sure what chip the HP NICs use, as they just show up as HP NICs in Device Manager. Either way, they are both compatible with Server 2012/Hyper-V and both on VMware's HCL.
  • jibbajabba Member Posts: 4,317
    I think the biggest difference really is the maximum RAM .. the C1100 does "only" 192GB whereas the G6 can take up to 288GB ..

    Obviously irrelevant for a lab ..
  • jace5869 Member Posts: 15
    I think I'm going to go with the HP ProLiant DL160 G6 with 72GB of RAM and L5520 processor.

    Probably going to go with:
    HP Proliant DL160 G6 1U 2X Xeon QC L5520 2.26GHz 4XTRAYS 72GB DDR3 000491532004 | eBay

    but got interested in the following when I saw the processor:
    HP Proliant DL160 G6 1U 2X Xeon Hex Core L5639 2.13GHz 4XTRAYS 72GB DDR3 | eBay

    Would upgrading to two hex-cores be reasonable for $300 more? I'm guessing no, but wanted to ask.


    I need some recommendations on a decent but not too expensive RAID controller. Something HP- or Dell-branded maybe?

    I know some people recommended a single 480 GB SSD to store the VMs on, but what would be the downsides of getting two 240s and RAID 0-ing them? I'm looking to buy the SSD(s) when they go on sale, so this will help me decide what to look for.
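On the RAID 0 question: the main downside is that the array dies if either drive dies, so the chance of losing the whole datastore roughly doubles versus a single drive. A back-of-envelope sketch, where the 5% annual failure rate is an assumed illustrative number, not a manufacturer spec:

```python
# RAID 0 stripes data across drives with no redundancy: losing either
# drive loses the whole array. Assume an illustrative 5% annual
# failure probability per SSD (not a real spec figure).
p_drive = 0.05

p_single = p_drive                   # one 480 GB SSD
p_raid0 = 1 - (1 - p_drive) ** 2     # either of two 240 GB SSDs failing

print(f"Single SSD:  {p_single:.1%} chance of losing the datastore per year")
print(f"RAID 0 pair: {p_raid0:.1%} chance of losing the datastore per year")
```

The trade-off is that RAID 0 gives you roughly double the sequential throughput, which is why it's a defensible choice for a lab datastore you back up anyway.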

    As far as storage... what should I get for iSCSI? I'm assuming this is like NFS, basically local drives that are actually on the network... right?

    I'm moving into a new house soon and I'm trying to balance building my lab with decking out my new place with a NAS box (probably a TS140) and running some VMs on it for media streaming and backup. I want to move everything off my work machines and isolate it there.

    I'm guessing with the TS140 I can run Hyper-V, build an array with the onboard RAID, and pass it to the VMs easily enough? Speed-wise it should be decent for streaming (maybe some transcoding) and backups?
  • jibbajabba Member Posts: 4,317
    I always run out of RAM before I run out of CPU, so personally I wouldn't be able to justify the premium. I've got two E5520s in my rig and I usually only hit 100% when things boot up. Once I am at 100% RAM (72GB as well), my CPU sits at about 30-40% - if even.

    Not sure who suggested a single 480GB SSD - I did post a picture of a single SSD, but that is only because (a) I don't have room for two and (b) I back up the VMs. You can certainly use two 240s instead, and I suppose you aren't at any less risk of losing data (given that you still lose the lot when one SSD fails, same as me losing the lot when my only 480 fails).

    Bear in mind though - unless the RAID controller is on the HCL, there is a chance that your ESXi install sees two individual drives. Take the MicroServer's onboard fake RAID, for example: you can create a mirror, but ESXi will still see two individual drives.

    As for NFS / iSCSI .. both are IP, yes, but iSCSI is block storage - once it's formatted in ESXi you won't be able to read the filesystem with anything but ESXi. NFS is transparent, so you could even mount the NFS share on Linux / Windows and get access to the VM files (backup?).

    In labs I usually use both iSCSI and NFS - but that is simply to play with both, one being the backups / ISO / template LUN and one the VM LUN.

    TS140 - again - you need to check the HCL to see whether the onboard RAID is supported. As for speed - that depends on the RAID level. If ESXi doesn't support the RAID card, then you can always just pass a disk from each LUN to the OS and use the OS's way of mirroring (dynamic disks / LVM etc.).
  • jace5869 Member Posts: 15
    Regarding the TS140, I was thinking of using just Hyper-V on it. I think it is less picky about RAID controllers, but that could be a misunderstanding on my part.

    So, for iSCSI: I will not want that for my main storage providing streaming and backups, then. NFS will be better for that, as I could still use it for data and backups.


    I'm just going to keep a lookout for different sized SSDs. I'm trying to find a compatible RAID controller in case I want to RAID them for better I/O.
  • jibbajabba Member Posts: 4,317
    Oh, Hyper-V is indeed more forgiving. I am not 100% sure, but I think even the free Hyper-V has full driver support for all sorts of onboard RAID. Even if not, I am sure you can find the appropriate drivers on the vendor's webpage. You won't have that option with ESXi .. rarely anyway...
  • The Nihilist Member Posts: 7
    I also went with a DL160 equipped with dual CPUs - hex-core X5650s - from eBay and stuck 92GB of RAM in it for good measure.

    One bonus was that it also came with an iLO license, so remote KVM/virtual media and power on/off were an unexpected extra :)
  • jace5869 Member Posts: 15
    Looking for a switch for this lab / new build / house...

    Was looking at the new Cisco SG300 series switches, but also stumbled upon a D-Link that looks just as good.


    D-Link DGS-1210-28
    D-Link 28-Port Web Smart Gigabit Ethernet Switch - Lifetime Warranty (DGS-1210-28) - Newegg.com

    or

    Cisco SG300-20 (SRW2016-K9-NA)
    Cisco SG300-20 (SRW2016-K9-NA) 20-port Gigabit Managed Switch - Newegg.com

    I doubt I will ever need PoE, but I am looking to work with some VLANs - especially to segregate my lab from my family's network.
    I do want overkill, to an extent, and I want it managed. I will be selling my Netgear AC1750 to my aunt, so I will be remodeling the network completely. I am planning on having drops put in at some point in the house.

    What about ZyXEL? Or Netgear? Anything comparable? I'm open to suggestions.
  • jibbajabba Member Posts: 4,317
    I bought the SG-300 28 without PoE. Best switch for the money hands down.
  • jace5869 Member Posts: 15
    Thanks jibbajabba, you've been extremely helpful!
  • jace5869 Member Posts: 15
    I made an offer on an HP ProLiant BL460c G6 for $399. It seems like a steal, and I should be able to upgrade the RAM for little money and be good to go. Does this already have hardware RAID? And then just add an SSD for VM storage?

    HP PROLIANT BL460c G6, 2x SIX CORE X5650 2.66 GHz, 48GB RAM, 2x 72GB 15K SAS

    Will probably use this for heavy labbing and then get a smaller more efficient server for home media streaming like a single L5520 and 24GB of RAM.


    or... would the dl160 g6 with the l5520 be better?
  • jibbajabba Member Posts: 4,317
    A blade? Uh-oh ... You do realise this is a blade, which requires a blade chassis (c3000/c7000) to work? Kudos if you have the power requirements outside a datacenter.

    You can have these with hardware RAID, 2.5" disks, micro SD card or USB.
  • Shdwmage Member Posts: 374
    jibbajabba wrote: »
    A blade? UhOh.

    I agree with this.
  • jace5869 Member Posts: 15
    Yeaaah... someone got excited when they were too tired to shop. I guess I'm going back out shopping.


    I think I will stick with the DL160 G6 for around ~$500 and get an SSD for it. I might spring for a single-L5520 Precision server to use as my personal streamer/backup box. That might be better than a TS140, right?