Just wanted to say...

Deathmage Banned Posts: 2,496
I just wanted to say a special thanks to Essendon, dave330i, jibbajabba, kj0, tomtom1, and azjag for all of the support, and to everyone else I might have forgotten, for your feedback these past few months through my VCP training, then the exam, and now into my first fully deployed ESXi 5.5 cluster of my own at my place of work.

Your support of best practices and feedback on my questions allowed me to make the ESXi 5.5 cluster deployment about 98% issue-free. The Dell Remote Support techs I worked with said it was the best-designed and best-configured ESXi host/vCenter and infrastructure deployment they had seen in a very long time. It really helps to plan everything out to the last detail; I was nearly paranoid about every little one. I'm quite sure my tail-end CCNA study helped with the network design too; in fact, now that I don't have to worry about the project, maybe I can finally sit the damn exam!

All in all, I successfully P2V'd 25 servers over 4 weeks with no downtime for end users, with the final migration from local array storage to the EqualLogic last night (though probably at the cost of some sleep: 90-hour weeks and lots of overnighters), and it has yielded roughly a 10x increase in performance, stability, and reliability across the whole server/network/storage infrastructure.

Again, thanks for all your help; it truly meant a lot to me.

Who knows, maybe VCAP5-DCA/DCD will be much easier to obtain in the coming months, and if I have questions I know where to turn.


Plus, to make things even better, I got this minor thing this morning; picked it up just because.



~You all know my name.

Comments

  • techfiend Member Posts: 1,481 ■■■■□□□□□□
    Good job man. 25 P2Vs sound like a lot of fun, but 4 weeks to do it (alone?) sounds like hell. How many physical machines did you consolidate to?
    2018 AWS Solutions Architect - Associate (Apr) 2017 VCAP6-DCV Deploy (Oct) 2016 Storage+ (Jan)
    2015 Start WGU (Feb) Net+ (Feb) Sec+ (Mar) Project+ (Apr) Other WGU (Jun) CCENT (Jul) CCNA (Aug) CCNA Security (Aug) MCP 2012 (Sep) MCSA 2012 (Oct) Linux+ (Nov) Capstone/BS (Nov) VCP6-DCV (Dec) ITILF (Dec)
  • Deathmage Banned Posts: 2,496
    techfiend wrote: »
    Good job man. 25 P2Vs sound like a lot of fun, but 4 weeks to do it (alone?) sounds like hell. How many physical machines did you consolidate to?

    Two brand-new Dell R720xd's with an EqualLogic 4100s backend. We have an R710 that was a physical box; in about 6 months I was promised money to turn that into a third host, but right now the company can fit on one host fine, so during normal operation the load is split between the two.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    That's invaluable experience Trev! Keep up the great work and as always pay great attention to detail. It's a hallmark of good engineers.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • kj0 Member Posts: 767
    No worries. This is what these forums and others are for, as well as your local VMUG community.

    Keep up the good work. It will be great to see you as a VCDX designing with hundreds of servers!
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Well done Trev. I'm more surprised that people nowadays manage to P2V anything without downtime; damn ghost NICs usually mean at least a temporary disconnection from the network.
    My own knowledge base made public: http://open902.com :p
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    It's your hard work. I just try to point you in the right direction.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • techfiend Member Posts: 1,481 ■■■■□□□□□□
    25 to 2 is quite a consolidation. I'm about done assisting with an 8-to-2 consolidation onto R710's. They didn't want to purchase a SAN, and my clustering suggestion was denied, so it's not nearly as challenging as yours. If we were working on it full time, it'd probably take the two of us 2-3 weeks to complete; instead, it's turned into almost 3 months now.
    2018 AWS Solutions Architect - Associate (Apr) 2017 VCAP6-DCV Deploy (Oct) 2016 Storage+ (Jan)
    2015 Start WGU (Feb) Net+ (Feb) Sec+ (Mar) Project+ (Apr) Other WGU (Jun) CCENT (Jul) CCNA (Aug) CCNA Security (Aug) MCP 2012 (Sep) MCSA 2012 (Oct) Linux+ (Nov) Capstone/BS (Nov) VCP6-DCV (Dec) ITILF (Dec)
  • Deathmage Banned Posts: 2,496
    techfiend wrote: »
    25 to 2 is quite a consolidation. I'm about done assisting with an 8-to-2 consolidation onto R710's. They didn't want to purchase a SAN, and my clustering suggestion was denied, so it's not nearly as challenging as yours. If we were working on it full time, it'd probably take the two of us 2-3 weeks to complete; instead, it's turned into almost 3 months now.

    I spent about one full Saturday planning it out on paper, then built the configs at home in my home lab, so when the hardware arrived it was quick. The switches took 25 minutes each, the ESXi host configs took 35 minutes each at most, and vCenter I had installed during my first two weeks there on a desktop, which I later P2V'd into the cluster, and voila. All my Cat5e cabling was done beforehand, with labels and color-coded cables for each zone: Production, vMotion, and iSCSI. Every little detail was planned, and I only knew how to do it from the home lab.

    I spent more time waiting for P2V to happen than I actually spent on configuration/planning.

    25 to 2 is fine honestly; each server has 128 GB of RAM. I keep the low-I/O server(s) on one host and the high-I/O server(s) on the other, but both hosts have bonded triple-gigabit connections spread across three quad-port NIC cards to balance the load across NIC controllers, which in my mind equates to higher throughput, well, at least until the southbridge becomes the bottleneck! Lastly, the servers are on their own /26 subnet to keep broadcasts down, which has helped with performance across the wire (rough numbers sketched below).

    All in all, it's really helpful to have a replica production home lab to break ****. It really helps!
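
    For anyone curious about the numbers behind the /26 and the 3 x 1 GbE teams, here is a minimal sketch, assuming Python's standard ipaddress module; the subnet prefix and link counts below are illustrative placeholders, not the real addressing from this deployment:

    ```python
    import ipaddress

    # Hypothetical server subnet carved out as a /26 -- illustrative prefix only,
    # not the real addressing from this deployment.
    server_net = ipaddress.ip_network("192.168.10.0/26")
    usable = server_net.num_addresses - 2  # minus network and broadcast addresses
    print(f"{server_net}: {usable} usable addresses, "
          f"broadcast domain capped at {server_net.num_addresses} endpoints")

    # Rough ceiling for a 3 x 1 GbE team spread across separate quad-port NICs.
    # Teaming/LACP hashes per flow, so a single stream still tops out at one link.
    link_gbps, links = 1.0, 3
    print(f"Team aggregate: ~{link_gbps * links:.0f} Gbps, "
          f"per-flow ceiling: ~{link_gbps:.0f} Gbps")
    ```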
  • techfiend Member Posts: 1,481 ■■■■□□□□□□
    It all depends on what the 25 servers are doing and on the hardware; are they 2 x 12 cores? Why separate high and low I/O? Wouldn't it be more effective to balance them, considering the link is the bottleneck? NIC bonding is good, but the returns diminish a lot. I'd love to get hold of 10 GbE when using a SAN, but $$$. Do you have them set up as a failover cluster?

    There are 12 servers at work; 4 were already virtual on an R710. It's 6 on each R710 now, and the load is mostly balanced on internal storage; at times they are a little sluggish, but it's tolerable. Each is replicated back to the other host. We double-team NICs (reliability over speed) between the two hosts on their own switch, and the backup is also on that switch. Then we double-team NICs up to the large switches. Crossovers between the two would be nice to eliminate the switch as a single point of failure, but then all the (hourly/daily/weekly) backup traffic would be on the main network. I can only wish we had the resources to do a lot more, but it works as is, and I'd say the designer did a pretty good job within the constraints; I don't know if I could have done better. Hopefully I'll get a chance to prove it some day. (Rough replication-window math sketched below.)
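
    A back-of-the-envelope sketch of why 10 GbE is so tempting for that backup/replication traffic, in Python; the nightly data size and the efficiency factor are made-up assumptions, not figures from either environment:

    ```python
    # Back-of-the-envelope replication window: a single replication stream usually
    # rides one physical link even on a teamed pair, so compare per-link speeds.
    # Both the data size and the efficiency factor are illustrative assumptions.
    data_gb = 500        # hypothetical nightly replication delta, in gigabytes
    efficiency = 0.85    # assume ~85% usable throughput after protocol overhead

    for label, link_gbps in [("1 GbE (one link of the team)", 1), ("10 GbE", 10)]:
        seconds = (data_gb * 8) / (link_gbps * efficiency)
        print(f"{label}: ~{seconds / 60:.0f} minutes to move {data_gb} GB")
    ```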
    2018 AWS Solutions Architect - Associate (Apr) 2017 VCAP6-DCV Deploy (Oct) 2016 Storage+ (Jan)
    2015 Start WGU (Feb) Net+ (Feb) Sec+ (Mar) Project+ (Apr) Other WGU (Jun) CCENT (Jul) CCNA (Aug) CCNA Security (Aug) MCP 2012 (Sep) MCSA 2012 (Oct) Linux+ (Nov) Capstone/BS (Nov) VCP6-DCV (Dec) ITILF (Dec)