VMware Lab

MrAgent Member Posts: 1,310
For all those who are building labs and wondering what would make a good one, I found this the other day and thought I would share it.

Another VMware home lab | Neil Koch

Comments

  • gabypr Member Posts: 136
    Nice guide, Mr. Agent. Small, quiet, and powerful is what most people look for today. Here in PR, where electricity is almost $0.30 per kilowatt-hour, a setup like this would be nice to have.
  • kriscamaro68 Member Posts: 1,186
    Here is my lab:
    1x-http://www.ebay.com/itm/Dell-Poweredge-C1100-1U-2X-XEON-QC-L5520-2-26GHZ-NO-HDD-72GB-DDR3-Tested-/251269644666?pt=COMP_EN_Servers&hash=item3a80d6817a

    1x-http://www.amazon.com/SanDisk-Solid-State-Consumption-SDSSDP-064G-G25/dp/B007ZWLRSU/ref=sr_1_1?s=electronics&ie=UTF8&qid=1372781767&sr=1-1&keywords=60gb+ssd

    3x-http://www.amazon.com/Samsung-MZ-7TD250BW-Series-Solid-2-5-Inch/dp/B009NHAEXE/ref=sr_1_1?ie=UTF8&qid=1372781684&sr=8-1&keywords=samsung+840

    With this setup you get 16 cores, 72GB of RAM, 1x 60GB SSD for the host OS, and 3x 250GB SSDs for VMs.

    I set this up with Server 2012 on the 60GB SSD and then set up a storage pool on the other three SSDs. This netted me 700MB/s reads and mid-600MB/s writes (a rough way to sanity-check numbers like these is sketched after the price list below). This allows for both a Hyper-V setup and a VMware setup on the same computer. I also tested this with a Kill A Watt: average idle consumption was in the 90-watt range, I never saw it peak above 120 watts, and 90% of the time it stayed around 90 watts. This server is also only slightly louder than a desktop setup with fans all over, and quieter than a high-end video card blowing during a good gaming session.

    I am very happy with this setup and would recommend this to anyone. Plus this server is on the HCL for ESXi.

    1x Server: $445
    3x 250GB SSD: $522
    1x 60GB SSD: $60
    Total: $1,027
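
    If you want to sanity-check sequential throughput numbers like those yourself, a rough Python sketch along these lines works on any box with Python installed. The test path and file sizes below are placeholders, not part of the setup above; point it at a file on the pool and adjust to taste. The read pass can be flattered by the OS cache, so treat the figures as ballpark only.

    ```python
    # Rough sequential-throughput check for a storage pool (a sketch, not a
    # proper benchmark). TEST_PATH is a placeholder; point it at the pool.
    import os
    import time

    TEST_PATH = r"D:\bench.tmp"     # hypothetical mount point of the pool
    BLOCK = 4 * 1024 * 1024         # 4 MiB per write
    BLOCKS = 512                    # ~2 GiB total

    def write_test() -> float:
        buf = os.urandom(BLOCK)
        start = time.perf_counter()
        with open(TEST_PATH, "wb", buffering=0) as f:
            for _ in range(BLOCKS):
                f.write(buf)
            os.fsync(f.fileno())    # flush to disk before stopping the clock
        return (BLOCK * BLOCKS / 2**20) / (time.perf_counter() - start)

    def read_test() -> float:
        start = time.perf_counter()
        with open(TEST_PATH, "rb", buffering=0) as f:
            while f.read(BLOCK):
                pass
        return (BLOCK * BLOCKS / 2**20) / (time.perf_counter() - start)

    if __name__ == "__main__":
        print(f"write: {write_test():.0f} MB/s")
        print(f"read:  {read_test():.0f} MB/s")
        os.remove(TEST_PATH)
    ```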
  • MrAgent Member Posts: 1,310
    My VMware lab machine is actually much different than what I posted.
    I have an Intel Core i7-970
    32GB RAM
    4x 1TB storage
    1x Fusion-io ioDrive2 Duo 1.2TB

    I don't remember the motherboard, but it does PCI passthrough, which is what I really needed when doing my labs.
  • cyberguypr Mod Posts: 6,928
    I've been running the SH67H3 with an i7-2600 processor, 16GB RAM, 2 SSDs, and 1 SATA drive for a year and a half now and it has been a solid, quiet lady. Love that machine. I keep flipping it between vSphere and Hyper-V as needed. Never complains about anything. Highly recommended.
  • gabypr Member Posts: 136
    I decided to buy a custom Toshiba laptop for labbing, studying, and other stuff, with these specs:

    Intel® Core™ i7-3630QM Processor (6M Cache, up to 3.40 GHz) with Intel® Turbo Boost Technology
    16GB DDR3 1600MHz SDRAM
    256GB SSD
    1TB Hard Drive (I replaced this with a Samsung 840 512GB SSD)
    2GB GDDR3 NVIDIA® GeForce® GT 630M with NVIDIA® Optimus™ Technology
    Blu-ray Disc™ RE with SuperMulti DVD±R/RW Double Layer drive*
    17.3" FHD TruBrite® LED Backlit Display (1920 x 1080)
    SRS Premium Sound 3D®, harman/kardon® Quad Speakers
    2x USB 3.0 ports, 2x USB 3.0 ports with Sleep and Charge
    Glossy Black LED Backlit Tile keyboard
    WiFi® Wireless networking (802.11b/g/n) with Bluetooth® version 4.0

    I can run various VMs with Hyper-V without problems; I'm very happy with it.
  • emerald_octane Member Posts: 613
    I have an extra server box at work that I'm using: iSCSI storage, 32GB of memory, 2x quad-core Xeons. Works a treat!
  • TheProf Users Awaiting Email Confirmation Posts: 331
    I built two whiteboxes with an AMD 8-core CPU and 32GB of RAM each... they run pretty well. The only thing they don't support for the moment is FT, but it's enough to get through the VCP and even the VCAP. I also use a Synology DS412+ for storage, along with SSDs and an HP V1910 gigabit switch.
  • jibbajabba Member Posts: 4,317
    That's mine
    jibbajabba wrote: »
    I always had full-blown servers, which were doing my head in. You could usually hear them buzzing in my garage from down the street. I looked for a solution with multiple servers that wouldn't cost me £$£$ in electricity.

    First of all, I know nested setups work, but I don't like em :)

    Over the last year I "collected" a few MicroServers. I love these little things, so I used them as the sole hardware for my lab.

    Here's the lab from "left to right" (currently vSphere until I get my DCA / IAAS, then changing to Citrix):

    1. N40L - Running vSphere 5.1 (Standalone)
    Boot from USB
    16GB of RAM (KVR1333D3E9SK2/16G)
    Adaptec 3805
    4x 2TB SATA 3.5" (3x 2TB RAID 5 + hot-swap)

    In the 3.5" optical bay, using a 4x 2.5" enclosure:

    3x 300GB 10k SAS 2.5" (2x 300GB RAID 1 + hot-swap)
    1x 750GB SATA/SSD hybrid 2.5" (standalone for host cache, ISOs, patches)

    Hosting the VMs required to run the whole lot:
    1. DC (2008R2 core)
    2. SQL (2008R2 + SQL 2008R2 Standard)
    3. FreeNAS (Presenting local SATA storage as NFS / iSCSI)
    4. WEB (CentOS 6 - Dev Web)
    5. Veeam (2008R2 + Veeam B&R Free)

    2. N40L - Running Server 2008R2 with the vCenter Server 5.1 suite, PowerCLI, etc.
    Booting from SSD (128GB)
    8GB of RAM

    3. N40L - Running vSphere 5.1
    HA / DRS Cluster (shared Storage coming from FreeNAS)
    Boot from USB
    16GB of RAM (KVR1333D3E9SK2/16G)

    4. N40L - Running vSphere 5.1
    HA / DRS Cluster (shared Storage coming from FreeNAS)
    Boot from USB
    16GB of RAM (KVR1333D3E9SK2/16G)

    I've got an N36L as well, but that is (probably) dead; if it gets fixed, it becomes a dedicated storage server.

    As for performance: good enough. Memory is always the bottleneck, the CPU never really is. The standalone box usually sits at around 50% when running updates of some sort, but apart from that it is a lot lower; the maximum I have seen is 90% when I am "doing stuff".

    As for memory though, those four essential VMs total 14GB, but the Veeam and web servers are only on when needed, and I am sure I could even lower the SQL server's RAM (currently 6GB).

    But it's quite high maintenance and has actually been running out of oomph lately... I just got my old Supermicro chassis out of the garage, so I will probably build a new single / nested lab... We'll see.
  • blargoe Member Posts: 4,174
    I did just fine with a Lenovo laptop with a quad-core Core i7, 16GB RAM, and a single 512GB SSD running VMware Workstation. If I hadn't been running an iSCSI/NFS storage server for my shared datastores on the same laptop, and hadn't needed the laptop to do actual work, 256GB would have been plenty.
  • Essendon Member Posts: 4,546
    I have an HP DL380 G5 workhorse with 32GB RAM, dual quad-core Xeons, and about 1.5TB of SAS storage. I have been running a nested ESXi lab on it for about a year and a half now. I only use one power supply. Works a treat, no performance problems ever.

    Best thing about this setup? It cost me just $150.
  • kj0 Member Posts: 767
    I'm running nested inside VMware Workstation. It's been a little flaky the last couple of days, but I think that's the whole machine, due to an updated AV client.

    Anyway, here's my system:

    Gigabyte GA-Z68XP-UD4
    i7-2700K
    16GB G.Skill Ripjaws
    I've got quite a number of drives in my machine, but my VMs sit on 2x 128GB SSDs in RAID 0 on an LSI RAID card.
    I'm running 9 drives, 4 of which are SSDs; however, I had to put in a 550-watt PSU when my 650 died, which may be at the limit of providing enough power.

    I currently have a spare box set up with 2x 160GB drives in RAID 0 and 2x 120GB IDE drives in RAID 0 to act as a SAN. I was running FreeNAS, but it doesn't do iSCSI very well, so I'm looking for a new way of running it.
  • jibbajabba Member Posts: 4,317
    Maybe this should be a sticky... questions about labs come up all the time.
  • Essendon Member Posts: 4,546
    I vote sticky too. Good idea jibba!
  • JBrown Member Posts: 308
    Essendon, just to be clear, is that what you used for your VCAP5-DCA / DCD labs?

    I think kriscamaro68's setup is quite nice. I am actually considering the following for the VCAP-DCA / VMware View exams:

    3x Dell PowerEdge C1100 1U 2x Xeon QC L5520 2.26GHz No HDD 72GB DDR3 Tested | eBay
    with vSphere 5.1 USB flash boot
    1x Synology 413 with
    4x Samsung 840 250GB

    I hope it can handle the VCAP and View stuff.


    Essendon wrote: »
    I have an HP DL380 G5 workhorse with 32GB RAM, dual quad-core Xeons, and about 1.5TB of SAS storage. I have been running a nested ESXi lab on it for about a year and a half now. I only use one power supply. Works a treat, no performance problems ever.

    Best thing about this setup? It cost me just $150.
  • Essendon Member Posts: 4,546
    Yes, that's what I used for both VCAPs. It does the job quite nicely, though there were a couple of niggles here and there. For example, I couldn't turn on FT, couldn't play with EVC (no biggie, this one), and adding extra NICs to nested ESXi hosts sometimes required two reboots. I believe this wasn't a problem with the physical machine itself, but an issue with nesting ESXi; you may run into these with any nested setup.

    I reckon one of the best things about a nested setup is the ability to create as many ESXi hosts (including NICs, datastores, and VMs) as your hardware will permit and lab it up to your heart's content. Shared storage works without issues too, just go with the software iSCSI adapter (a quick sketch for checking what ended up registered in vCenter is below). I had four nested ESXi hosts (with ESXi running on the physical machine too), a few VMs, and vApps. I set up SRM too, using an EMC storage appliance, and it worked without issue. Played with multiple vCenters, Linked Mode, set up multiple clusters, moved hosts in and out, the whole works really. Never missed a beat.

    Can't beat a nested setup really. Everything is in the one box, and it keeps electricity costs down too. The noise from the fans ain't too bad either. Can't complain!
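
    For anyone building a similar nested lab: a quick way to confirm that all the nested hosts and their datastores (including the software-iSCSI ones) are showing up properly is to query vCenter. The sketch below uses pyVmomi; the vCenter address and credentials are placeholders, and the unverified SSL context is a lab-only shortcut.

    ```python
    # Sketch: list every ESXi host registered in vCenter and its datastores.
    # Assumes pyVmomi is installed (pip install pyvmomi); the hostname and
    # credentials below are placeholders, not anyone's real lab details.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()        # lab only: skip cert checks
    si = SmartConnect(host="vcenter.lab.local",   # placeholder vCenter address
                      user="administrator@vsphere.local",
                      pwd="changeme",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            datastores = ", ".join(ds.name for ds in host.datastore)
            print(f"{host.name}: {datastores}")
        view.Destroy()
    finally:
        Disconnect(si)
    ```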
  • QHalo Member Posts: 1,488
    I have a laptop like blargoe's, with modified NetApp simulators for nested storage. Works great.
  • santaowns Member Posts: 366
    JBrown wrote: »
    Essendon, just to be clear, is that what you used for your VCAP5-DCA / DCD labs?

    I think kriscamaro68's setup is quite nice. I am actually considering the following for the VCAP-DCA / VMware View exams:

    3x Dell PowerEdge C1100 1U 2x Xeon QC L5520 2.26GHz No HDD 72GB DDR3 Tested | eBay
    with vSphere 5.1 USB flash boot
    1x Synology 413 with
    4x Samsung 840 250GB

    I hope it can handle the VCAP and View stuff.

    JBrown, maybe you should consider the Dell F1D; it's basically the same server as the C1100, except around $300 cheaper, and it typically includes hard drives.
  • JBrown Member Posts: 308
    MrAgent wrote: »
    My VMware lab machine is actually much different than what I posted.
    I have an Intel Core i7-970
    32GB RAM
    4x 1TB storage
    1x Fusion-io ioDrive2 Duo 1.2TB

    I don't remember the motherboard, but it does PCI passthrough, which is what I really needed when doing my labs.

    ioDrive2 is like $15K, isn't it overkill for an i7-970 desktop? How did you manage to get your hands on a beauty like that one?
  • jibbajabba Member Posts: 4,317
    JBrown wrote: »
    ioDrive2 is like $15K,

    If you're lucky :)
    JBrown wrote: »
    How did you manage to get your hands on a beauty like that one

    All you need is a chequebook :p

    I was working at an MSP, and one big customer used four Fusion-io drives in RAID 0 in their SQL boxes to get the required I/O. Naturally they had multiple boxes for redundancy... each quad-socket Nehalem server had a value of $120k (2U Supermicro) because of that (they had 8)...
  • MrAgent Member Posts: 1,310
    I work for Fusion-io currently. I have a few of them at home for various uses. The nicest one I have is an ioDrive2 Duo 2.4TB.
  • whatthehell Member Posts: 920
    This is what I just bought. I still need to build it, and I'm planning to use a QNAP 2-bay NAS for storage (2x 2TB WD Red drives).

    CPU: Intel Xeon E3-1230 “Sandy Bridge” – 3.2GHz, 4 Cores, 8 Threads, 8MB (Newegg)
    Motherboard: Supermicro X9SCM-F – Intel C204, Dual GigE, IPMI w/Virtual Media, 2x SATA-3, 4x SATA-2 (Newegg)
    RAM: 32GB (4 x 8GB) Kingston 240 PIN DDR3 SDRAM ECC Unbuffered 1600 (PC3 12800) Server Memory Model (Amazon)
    Disk: Lexar Echo ZX 16GB (Newegg)
    Case: LIAN LI PC-V351B Black Aluminum MicroATX Desktop Computer Case (Newegg)
    Fans: 2 x Scythe SY1225SL12L 120mm “Slipstream” Case Fan (Newegg)
    Power: Seasonic 400W 80 Plus Gold Fanless ATX12V/EPS12V Power Supply (Amazon)

    Yes, this is the Wahl Network white box... thank you, Senior Wahl, for posting!
  • QHalo Member Posts: 1,488
    That motherboard has a NIC that's not on the HCL and requires a custom driver injected into the ESXi installation.

    [H]ard|Forum - View Single Post - Intel 82579LM and 82579V
    Intel 82579LM and 82579V - Page 10 - [H]ard|Forum
    Enabling Intel NIC (82579LM) on Intel S1200BT under ESXi 5.1 | Virtual Drive

    I'm sure you're aware, but if not, now you are. :)
  • santaowns Member Posts: 366
    I run the Dell F1D I suggested. I just picked up an HP DL380 G4 and am going to pair it with an MSA60 or MSA70 to be my "SAN".
  • jibbajabba Member Posts: 4,317
    santaowns wrote: »
    HP DL380 G4

    The CPU might not have VT though; G5 and higher almost certainly do... They are usually 64-bit but without the required VT. Even if you put a CPU with VT in it, the BIOS won't support it. HP used to sell a dual-core CPU kit which had VT, but because of the BIOS revision you wouldn't be able to install it afterwards, and you had to buy it straight from HP.

    But again, it has been ages since I touched a G4 (binned a few recently), so I might have the wrong end of the stick after all.
  • santaowns Member Posts: 366
    Good to know, but I won't be running VMware on it; probably a free Linux-based SAN-type program. The Dell I have is a 1U and it's running out of space fast, so I will be using the HP strictly for storage.
  • datgirl Member Posts: 62
    MrAgent wrote: »
    For all those who are building labs and wondering what would make a good one, I found this the other day and thought I would share it.

    Another VMware home lab | Neil Koch
    I like his mention of nested virtualization, and the link to Building the Ultimate vSphere Lab – Part 1: The Story.
  • gbdavidx Member Posts: 840
    datgirl wrote: »
    I like his mention of nested virtualization, and the link to Building the Ultimate vSphere Lab – Part 1: The Story.

    Is this possible?

    So far I have one Windows Server 2012 VM in VMware Workstation 9 (2012).

    I have it running on VMnet0 (bridged), with the external connection set to automatic bridging.

    I am trying to assign it a static IP address, but whenever I try to do this it fails and then I can't connect to the internet.

    Do I need a Hyper-V server within a server for my lab environment?
  • whatthehell Member Posts: 920
    QHalo wrote: »
    That motherboard has a NIC that's not on the HCL and requires a custom driver injected into the ESXi installation.

    [H]ard|Forum - View Single Post - Intel 82579LM and 82579V
    Intel 82579LM and 82579V - Page 10 - [H]ard|Forum
    Enabling Intel NIC (82579LM) on Intel S1200BT under ESXi 5.1 | Virtual Drive

    I'm sure you're aware, but if not, now you are. :)

    Thanks for posting! Once I get around to building the box, I will definitely use the provided info!
  • JBrown Member Posts: 308
    I decided to go with one server and nest the setup. I ordered one server with the config listed below, plus a 512GB 7.2k RPM drive, for $420, and 2x 250GB Samsung 840 drives. But the vSphere host was dropping its network connection every 1-3 hours, and I had to hard-boot it a couple of times. The 512GB Seagate, which is running all the VMs for now, was hitting 41 degrees Celsius according to syslog (a rough sketch for pulling temperature readings out of the log is below). I am guessing it could not handle the hot weather we have had in New York for the last few days.

    I just installed the SSD drives, so I have not had time to test their speed yet.
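
    In case it helps anyone chasing the same heat issue, here is a rough Python sketch for scanning an exported syslog and flagging any drive-temperature lines above a threshold. The log path and the pattern it matches are assumptions; the exact wording of the messages depends on what is logging the temperature, so adjust the regex to your own output.

    ```python
    # Sketch: flag syslog lines reporting a temperature at or above a threshold.
    # The default path and the message format the regex expects are assumptions;
    # tweak both for whatever your host actually logs.
    import re
    import sys

    THRESHOLD_C = 40
    # Matches e.g. "temperature 41 C" or "Temp: 41C"; adjust to your log format.
    TEMP_RE = re.compile(r"(\d{2,3})\s*(?:degrees\s*)?C\b", re.IGNORECASE)

    def scan(path: str) -> None:
        with open(path, errors="replace") as log:
            for line in log:
                if "temp" not in line.lower():
                    continue
                match = TEMP_RE.search(line)
                if match and int(match.group(1)) >= THRESHOLD_C:
                    print(line.rstrip())

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else "syslog.log")
    ```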