
ESXi 5

Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
Anyone built a box for ESXi 5?

Comments

  • cyberguypr Mod Posts: 6,928 Mod
    Do you mean for labbing?
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    Yes. I am probably going to put one together in a few months.
  • jmritenour Member Posts: 565
    I've built a 4-node cluster at work, all Dell R710s with 64GB of RAM and dual 6-core Intel CPUs. So far I'm not seeing any major differences from ESXi 4.1, but to be fair, I haven't done much more than build it, set up vCenter, and configure HA at this point. I'll be migrating some physical machines to this cluster in the next few weeks, so we'll see.

    As for labbing, I haven't got any further than building an ESXi 4.1 VM in Workstation and upgrading it to 5 just to see it in action, since I know the requests will be coming from customers sooner or later. Really groaning at this point that you can upgrade directly from ESX 3.5 to ESXi 5 without wiping and starting anew - that could've saved me quite a bit of time this past summer. I have plans to repurpose one or more of my boxes at home for ESXi 5 in order to get as much hands-on with it as possible before I take the VCP 5 exam in January/February, but the box I had planned to start with does not have VT capability. I thought the processor in it was newer, but I was mistaken. Not a game breaker, but I had planned to run my vCenter as a VM on the first node, and since I won't be able to run a 64-bit OS on it, I need to find something else.
    "Start by doing what is necessary, then do what is possible; suddenly, you are doing the impossible." - St. Francis of Assisi
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    Do you think it would be easier/better to buy a box for ESXi and then just use my existing parts for a storage server?
  • jmritenour Member Posts: 565
    Bl8ckr0uter wrote: »
    Do you think it would be easier/better to buy a box for ESXi and then just use my existing parts for a storage server?

    Probably so. That's actually one of the things I'm thinking about doing myself - buying/building a cheap whitebox for ESXi and installing Openfiler on the boxes I was going to use, for a cheap iSCSI storage array.
    "Start by doing what is necessary, then do what is possible; suddenly, you are doing the impossible." - St. Francis of Assisi
  • cyberguypr Mod Posts: 6,928 Mod
    I've been putting together a box for 2 months now. I just can't commit to it. I keep changing specs on a weekly basis. One day I want a Supermicro board and the next day something else. I currently have 4.1 running on an old HP DC7600 which is slow, but works.

    Let's see if a few of us can exchange build ideas to get motivated.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Every ESXi box we build is based on Supermicro boards, ATP RAM with Chipkill, Xeon 56xx-series CPUs, LSI RAID cards (as array details are shown in ESXi; otherwise we'd use Adaptec) and Seagate SAS drives, with the OS itself installed on an ATP flash drive on the internal header.

    What specs are you looking at, and for what budget?

    What I did for a small lab is use an HP MicroServer with 4x2TB drives, an Adaptec 3805, 8GB of RAM and a 2GB flash drive for ESXi.

    It runs a few VMs just fine: a 2008 R2 box acting as an iSCSI target (for backups) and as a PS3 media and iTunes server, a Red Hat LAMP server, a Server Core AD server, and a server with all sorts of ESX administration tools (PowerCLI etc.) and other templates I create for later use.

    So it really depends on what you want to do with it. If you want to lab, for example, you need to be careful, as some features require very specific hardware - Fault Tolerance and the vSphere Storage Appliance, for example. Make sure you either check the HCL or Google whether people have got it working on the hardware you intend to use.

    I also use another MicroServer with a similar spec as a cluster-in-a-box for demonstration purposes: ESXi installed, then four VMs - two nested ESXi servers, one 2008 R2 VM running vCenter, and one CentOS VM presenting iSCSI and NFS shares. The vCenter box is also AD / DNS / DHCP for good measure.

    So I can just pick up the box and show people a fully fledged ESXi cluster without the need for any other hardware or network :)
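
    For reference, getting nested ESXi VMs running on an ESXi 5.0 host generally comes down to something like the following. It's a community tweak rather than anything officially supported, and the .vmx setting shown is just an example to verify against your own setup:

      # On the physical ESXi 5.0 host, allow nested hardware virtualisation
      echo 'vhv.allow = "TRUE"' >> /etc/vmware/config

      # In each nested ESXi VM's .vmx file, set the guest OS type so the VM
      # is presented the right virtual hardware:
      #   guestOS = "vmkernel5"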
    My own knowledge base made public: http://open902.com :p
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    I am thinking my budget will be around 600 or so. I have a case already. Mind you, I am not looking for something to do a VCP on; I am just looking to run a few VMs (probably 3-4 Linux server environments, a small AD environment and maybe a Unix server, all of these for pen testing practice and learning security).

    It doesn't need to be super fast.
  • QHalo Member Posts: 1,488
    Supermicro X8SIL-F
    Intel 3440
    8GB of RAM
    16GB USB flash drive
    Lian Li case
    Rosewill power supply


    Total cost = $682.94. I have two of these machines; well, I have 3430s instead, but the rest is exactly the same. I have a couple of Intel PRO/1000 PT dual-port cards in them as well that I got off eBay.
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    QHalo wrote: »
    Supermicro X8SIL-F
    Intel 3440
    8GB of RAM
    16GB USB flash drive
    Lian Li case
    Rosewill power supply


    Total cost = $682.94. I have two of these machines; well, I have 3430s instead, but the rest is exactly the same. I have a couple of Intel PRO/1000 PT dual-port cards in them as well that I got off eBay.

    +Rep

    Looks really good. I think I would need to up the RAM though (and of course include some HDDs). How many VMs do you run?

    *You must spread rep around before....*
  • QHalo Member Posts: 1,488
    Right now one is running my Dynamips Ubuntu machine, 1 Windows 2003 ACS server, 2 Windows XP machines, and a Win2k8 box for vCenter. I can do all that on one host and they do not choke at all. Grab another 8GB of RAM and max it out to 16GB. I'm about to pick up another 8GB because I'm going to run ACS 5.2 in a VM and I've dedicated 4GB to the Ubuntu machine for running routers.
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    How many NICs?
  • QHalo Member Posts: 1,488
    The board has 2 gigabit NICs and an IPMI management interface (like iLO/DRAC/RSA II). I added another 2 gigabit ports via the Intel PRO/1000 PTs, for 4 ports total. I got those mostly for messing around with failover in VMware; it's not a necessary expense.
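
    If you want to play with the failover side from the command line, something along these lines should work on ESXi 5 (untested here; the vSwitch and vmnic names are just examples):

      # add a second uplink to the standard vSwitch
      esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
      # make vmnic0 active and vmnic2 standby so a pulled cable triggers failover
      esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0 --standby-uplinks=vmnic2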
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    I might need a few for VLAN tagging and such. I do want this traffic in its own VLAN (away from my production stuff).
  • QHalo Member Posts: 1,488
    Unless you want to physically separate the traffic, you could run all your services across those two NICs through one vSwitch and have the physical switch port carry the tagged frames as a trunk. You'd obviously need a switch that supports tagging. I have an HP 1810G that does LACP and works just fine. Just create port groups with the proper VLAN IDs on the vSwitch. There are several creative ways to skin this thing.

    VMware KB: Sample configuration of virtual switch VLAN tagging (VST Mode)
    http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1004048
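
    For reference, creating a tagged port group on a standard vSwitch from the ESXi shell looks roughly like this (the port group name and VLAN ID are just examples; the physical switch port needs to trunk that VLAN):

      # create a port group on the existing standard vSwitch
      esxcli network vswitch standard portgroup add --portgroup-name=Lab-VLAN20 --vswitch-name=vSwitch0
      # tag the port group's traffic with VLAN 20 (VST mode, as in the KB above)
      esxcli network vswitch standard portgroup set --portgroup-name=Lab-VLAN20 --vlan-id=20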
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    Good information. Gives me something to think about. I thought the vSwitch was only in ESX (which I know is now gone).
  • QHalo Member Posts: 1,488
    The vSwitch is available in any version; it's the standard virtual switch. You're probably thinking of the vSphere Distributed Switch, which requires vCenter and the proper licensing. Free ESXi should do all you need it to do.
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    Excellent. Now just need to gather funds...
  • azjag Member Posts: 579 ■■■■■■■□□□
    Hopefully I'm not too late to join the thread. Two weekends ago we migrated our entire production environment over to ESXi 5. It is running on 6 HP DL980s with 8 x 8-core procs and 1TB of RAM each. It was a busy weekend, especially having to cold migrate the servers, reboot to install VMware Tools, and move the NICs to a distributed switch. As far as a performance increase from just upgrading to ESXi 5 goes, we have not noticed any. I'll let you know when we upgrade the test/dev/sandbox environment, using the old production hardware, to ESXi 5. Now all that is left is to upgrade the virtual machine hardware version from 7 to 8. Anybody have an upgrade path for that?
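
    (A rough sketch of one possible path, from memory and unverified, so check it first and back the VMs up - it has to be done per VM, powered off; the VM ID below is a placeholder. The vSphere Client can also do it per powered-off VM from the GUI.)

      vim-cmd vmsvc/getallvms       # note the ID of the VM to upgrade
      vim-cmd vmsvc/power.off 42    # 42 is a placeholder VM ID
      vim-cmd vmsvc/upgrade 42      # bump the virtual hardware to the newest version the host supports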
    Currently Studying:
    VMware Certified Advanced Professional 5 – Data Center Administration (VCAP5-DCA) (Passed)
    VMware Certified Advanced Professional 5 – Data Center Design (VCAP5-DCD)
  • kerx Member Posts: 38 ■■□□□□□□□□
    I'm really glad you opened a topic like this. I've been continuously putting together a shopping list for a new machine that I could use for VMware labbing. I have also been considering a good storage server, since I'm running low on space on my laptop and have so much data spread across all my other machines that I would like to consolidate.

    From the answers I've seen here, it sounds like having a separate ESXi server and Storage server is a better idea, when it comes to building a lab.

    I have about 4TB worth of data I need to consolidate, and I'm trying to find a cost-effective solution. I'd like to use the storage server for ESXi labbing as well. Any suggestions on DIY storage solutions? Something cost-effective but reliable, since I'll be using it to back up family photos, work data, etc. as well as for VMware labbing.
  • MentholMoose Member Posts: 1,525 ■■■■■■■■□□
    I have a few server-grade lab hosts (Supermicro motherboard, Xeon CPU, ECC RAM), but in hindsight I think it was overkill to use server-grade stuff. IPMI sounded useful but I only use it for remote power control, which isn't often needed (and I could easily use WOL instead anyway), and for remotely installing a hypervisor (XenServer or ESXi), which also isn't necessary often. I have found it annoying to use server motherboards, though, because they seem to be quite particular about what RAM they will take. If the requirements say dual rank, don't buy single rank... if they say unbuffered, don't buy registered... if 1.6v, don't buy 1.8v... and so on. It's not really a problem when you first build it (basically just pay more for the exact RAM you need), but later on you might want to swap motherboards or move the RAM to another machine... and it doesn't work... and the manufacturer will tell you to go away since it's not on the HCL. I got it all on eBay and clearance sales so at least it was cheap. As far as servers go I do like Supermicro, since their remote access program (IPMIView) runs okay on Linux (no stupid ActiveX browser junk required).

    If I were building a new lab machine today, I'd go with a standard desktop that supports DDR3 RAM. 16GB of desktop RAM is $80 and will simply work with any desktop motherboard (sub-$100 is no problem) that supports that much RAM. Compared to ESXi 4.1, ESXi 5.0 has better out-of-the-box support for unsupported/desktop NICs, which is the main compatibility concern (I guess they were tired of forum posts about cryptic errors halting the install on unsupported gear). For the CPU, I'd spend $100-150 and just ensure that it has 4 cores and works with the common hypervisors out there (so it needs 64-bit, DEP, and VT-x / AMD-V) in case you want to switch.
    MentholMoose
    MCSA 2003, LFCS, LFCE (expired), VCP6-DCV
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    You wouldn't want a box that supports RAID or anything?
  • MentholMoose Member Posts: 1,525 ■■■■■■■■□□
    For running lab VMs on local storage I prefer SSDs. To use RAID with ESXi you need a supported hardware RAID card, which is generally expensive, and you can't use a cheap software RAID card. SSDs are continuously dropping in price, so you can get one big enough to last for a while, then get another in 6-12 months for much cheaper. If you need a lot of disk space for something, you can mix and match SSDs and regular disks. If you do want a RAID card, you can still use a desktop motherboard.
    MentholMoose
    MCSA 2003, LFCS, LFCE (expired), VCP6-DCV
  • Bl8ckr0uter Inactive Imported Users Posts: 5,031 ■■■■■■■■□□
    Interesting. Good advice. What do you use for iSCSI? Mind you, I am a super noob on storage and I literally know next to nothing about it, but I am mostly interested in learning it from the network engineering perspective (if that makes sense).

    This is looking like it would fit the bill:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16813128520
  • MentholMoose Member Posts: 1,525 ■■■■■■■■□□
    Bl8ckr0uter wrote: »
    Interesting. Good advice. What do you use for iSCSI? Mind you, I am a super noob on storage and I literally know next to nothing about it, but I am mostly interested in learning it from the network engineering perspective (if that makes sense).
    If you are just getting started with network-based storage, you might want to go with NFS. It's easier to understand (no need to care about initiators, targets, IQNs, etc.), you can get good performance, and it's easy to set up. Install any common Linux distribution (CentOS, Fedora, Ubuntu, etc.), edit /etc/exports by following one of the many tutorials out there (an NFS share is configured with literally one line of text, so not too intimidating), configure iptables if enabled, and you are good to go.
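
    To give an idea of how little is involved, here is a minimal sketch - the IP addresses, export path and datastore name are made up, so adjust them to your own network:

      # on the Linux box: export a directory to the lab subnet (one line in /etc/exports)
      echo '/srv/vmstore 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports
      exportfs -ra   # reload the export table (make sure the NFS service is running)

      # on the ESXi 5 host: mount that export as an NFS datastore
      esxcli storage nfs add --host=192.168.1.50 --share=/srv/vmstore --volume-name=nfs_lab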

    There are many storage-focused *nix distributions out there that can do iSCSI or NFS... Openfiler, Open-E, FreeNAS, Nexenta, to name a few. I've used some of these and I think it's a good option. My current "lab SAN", however, is a whitebox server running Solaris 11 Express. At a previous job I worked a lot with a Solaris-based SAN that leveraged ZFS, and while I didn't care for it (long story, but basically if you so much as glanced in its direction it would crash, taking a variety of production systems with it), I did like the feature set of ZFS and Solaris. ZFS has some nice performance-enhancing features that work with commodity hardware, such as RAM acceleration ("ARC" in ZFS) and SSD acceleration ("L2ARC"), that otherwise would require very expensive, specialized, hardware.

    A RAID array with even a few standard disks will normally provide great sequential performance, but with virtualization what matters is random performance. To improve this with a standard RAID card you simply have to add more disks. With ZFS, the ARC uses system RAM to boost random performance, and the L2ARC lets you add an SSD for further gains... and it actually works - you can use dtrace to "see" the I/O handled by ARC/L2ARC. Besides the nice ZFS stuff, Solaris has an iSCSI target called COMSTAR which seems to work well, whereas I have seen problems with other iSCSI targets (IET has had problems, e.g. VMware KB: SCSI Reservation Conflicts when using OpenFiler iSCSI Storage Devices).
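
    For the curious, the ZFS/COMSTAR side is only a handful of commands. A rough sketch - the pool name, device names and volume size are placeholders, so check them against your own system before running anything:

      # pool of two mirrored disks with an SSD added as L2ARC cache
      zpool create tank mirror c7t2d0 c7t3d0 cache c7t4d0
      # carve out a 200GB zvol to present to ESXi
      zfs create -V 200G tank/esxlun0
      # enable the COMSTAR framework and the iSCSI target service
      svcadm enable stmf
      svcadm enable -r svc:/network/iscsi/target:default
      # register the zvol as a LUN, make it visible to initiators, and create a target
      stmfadm create-lu /dev/zvol/rdsk/tank/esxlun0
      stmfadm add-view <GUID printed by create-lu>
      itadm create-target
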
    As for the motherboard you linked, it looks good to me. Of course I cannot guarantee it will work, but I did a quick search and it looks like the Realtek 8111E NIC works on ESXi 5.0.
    Realtek onboard NIC support in vSphere 5 | ESX Virtualization

    And here's someone who used a Z68-based gaming motherboard for ESXi 5.0.
    http://tinkertry.com/vzilla/
    MentholMoose
    MCSA 2003, LFCS, LFCE (expired), VCP6-DCV
  • mishy Member Posts: 209 ■■■□□□□□□□
    jibbajabba wrote: »

    What I did for a small lab is use an HP MicroServer with 4x2TB drives, an Adaptec 3805, 8GB of RAM and a 2GB flash drive for ESXi.

    It runs a few VMs just fine: a 2008 R2 box acting as an iSCSI target (for backups) and as a PS3 media and iTunes server, a Red Hat LAMP server, a Server Core AD server, and a server with all sorts of ESX administration tools (PowerCLI etc.) and other templates I create for later use.

    Sorry to hijack the thread, but how did you manage to get the HP MicroServer to detect more than 1 SATA drive? Mine is only seeing one even though I have 3 drives in the machine. I have also tried changing the BIOS from AHCI to IDE but it is still the same.

    I am using ESXi 5 on an HP MicroServer, running from USB.

    Thanks
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    mishy wrote: »
    Sorry to hijack the thread, but how did you manage to get the HP MicroServer to detect more than 1 SATA drive? Mine is only seeing one even though I have 3 drives in the machine. I have also tried changing the BIOS from AHCI to IDE but it is still the same.

    I am using ESXi 5 on an HP MicroServer, running from USB.

    Thanks

    It won't do it out of the box, I am afraid. As mentioned, I am using an Adaptec 3805. ESXi boots from USB and the 4x2TB drives are connected via the supplied multilane fanout cable to the Adaptec card. You don't have to use a 3805 (it's only really useful if you intend to use more than 4 disks); a 2405 or an Areca 12xx-series card would do too. The advantage of the MicroServer is that the backplane uses a proper multilane cable (it's a server after all), so you can simply unplug the cable from the motherboard and plug it straight into the RAID card and it just works. There are other RAID cards you could use from HP too - you just have to make sure it is a low-profile card.

    You can get 16TB of raw storage into this little beauty using a Supermicro caddy that fits 4x1TB 2.5" drives (it fits fine in the DVD drive slot) plus 4x3TB on the internal ports. All 8 drives can then be connected to an 8-port RAID card (e.g. Adaptec 3805 / 5805) - but that's around $2k-ish all in :)
    My own knowledge base made public: http://open902.com :p
  • mishy Member Posts: 209 ■■■□□□□□□□
    jibbajabba wrote: »
    It won't do it out of the box, I am afraid. As mentioned, I am using an Adaptec 3805. ESXi boots from USB and the 4x2TB drives are connected via the supplied multilane fanout cable to the Adaptec card. You don't have to use a 3805 (it's only really useful if you intend to use more than 4 disks); a 2405 or an Areca 12xx-series card would do too.

    I have done a quick search on the net and the Adaptec 3805 cards seem to be going for around £300. I am only looking for something cheap, so is there anything you can recommend? I plan to have a maximum of 4 SATA drives: 1 or 2 for my VMs and 2 for storage.

    Thanks.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    mishy wrote: »
    I have done a quick search on the net and the Adaptec 3805 cards seem to be going for around £300. I am only looking for something cheap, so is there anything you can recommend? I plan to have a maximum of 4 SATA drives: 1 or 2 for my VMs and 2 for storage.

    Thanks.

    Like I say, a 2405 would do too, or an Areca 1210 or 1220, and people have reported that the HP Smart Array 212 and 410 controllers work as well. I've only ever worked with Adaptec though, so I can't vouch for any other card. Bottom line: as long as the card is PCIe, low profile, comes with a low-profile bracket, and has multilane fanout connectors, you'll be fine. A 2405 is half that price on eBay. Oh, and steer clear of LSI - apparently the RAID BIOS doesn't show up on POST.

    Bear in mind, this is a cheap server, but it is still a server, so it comes with all the associated price tags when it comes to hardware :)

    What is great though is the optional IPMI card (£60).

    I am more than happy to answer more questions, but we'd better leave it to another thread. I opened one here in regard to this box a while ago; just search my name and post in there :)
    My own knowledge base made public: http://open902.com :p