vCenter Deployment: what is considered best-practice?
Deathmage
Banned Posts: 2,496
Hi guys, I'm curious what is considered best practice for vCenter in an ESXi cluster: installing vCenter as a VM on an ESXi host, or installing it on a physical server separate from the VM cluster?
I'm basically doing this in preparation for the Stanly class in August. Below is my hardware setup in a 48U rack in my basement; where would be the ideal location?
Equipment:
(2) Dell R610s, dual hyperthreaded 2.2 GHz Xeon CPUs with 32 GB of RAM and a RAID 5 array of 300 GB VelociRaptors. (6 instances of 2008 R2 on them)
(2) Dell PowerEdge 2950 Gen IIIs, dual 2.0 GHz (non-hyperthreaded) Xeon CPUs with 16 GB each and a RAID 5 array of 250 GB 7.2k drives. (4 instances of 2008 R2 between them)
(2) Cisco 2600 routers to logically separate the servers onto separate VLANs.
(2) Cisco 2950 switches to connect the servers on each VLAN.
(1) Cisco 1721 edge router to join the 2600s to the backbone home network.
(1) HP ProCurve 2910al-48G-PoE+ switch being used as the MDF, with VLANs for wireless, the core network, and the Cisco lab.
(2) Broadcom NetXtreme NICs in my gaming PC, connected to both 2950 switches (management), with a Bigfoot 2100 gaming NIC as my primary.
(2) SonicWall TZ 210s in HA.
(1) Cisco 1142 Aironet AP.
(1) Cable connection, 101 Mbit, with a static IP.
This lab kind of touches my CCENT/CCNA, MCSA, SonicWall, and VCP training all at once, and as of right now it's all working. So now that you guys have an understanding of my setup, where would be the ideal spot for vCenter? (I do have a few i7-2600K-based gaming rigs too.)
Comments
-
Bloogen Member Posts: 180 ■■■□□□□□□□
Just create it on an ESXi host as a VM. That is the best practice and makes the most sense for your lab based on the hardware you have available.
-
emerald_octane Member Posts: 613
Keep in mind that even for the VCP your lab doesn't need to be overly complex. You can configure all your physical hosts to be part of an ESXi cluster with real networking et al., OR you can create an entire lab on one physical machine and just use nested ESXi. It's more about your goals than anything.
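For anyone following the nested-ESXi suggestion: the commonly cited recipe (a sketch to verify against VMware's documentation for your host version, not gospel) is to expose hardware virtualization to the guest via the VM's .vmx file on an ESXi 5.1+ host:

```
# .vmx settings often quoted for a nested ESXi guest (ESXi 5.1+ outer host).
vhv.enable = "TRUE"            # pass VT-x/AMD-V through to the guest
guestOS = "vmkernel5"          # identify the guest OS as ESXi 5.x
ethernet0.virtualDev = "e1000" # a NIC type the ESXi installer recognizes
```

The nested host also needs promiscuous mode allowed on the outer vSwitch's port group if its own VMs are to reach the network.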
-
Deathmage Banned Posts: 2,496
emerald_octane wrote: »Keep in mind that even for the VCP your lab doesn't need to be overly complex. You can configure all your physical hosts to be part of an ESXi cluster with real networking et al., OR you can create an entire lab on one physical machine and just use nested ESXi. It's more about your goals than anything.
This is good to know! I'm sure the lab is overkill; I just want to be as close to a real network as possible so I can get a firm understanding of VMware as it ties into a production Windows network.
Bloogen wrote: »Just create it on an ESXi host as a VM. That is the best practice and makes the most sense for your lab based on the hardware you have available.
Thanks! I'll just pop vCenter on a normal 2008 R2 server instance; I can't imagine I need to allocate much horsepower to the VM.
-
jibbajabba Member Posts: 4,317 ■■■■■■■■□□
The only tricky bit when running vCenter as a VM is trying to hunt it down when it breaks. Imagine you have vCenter as a VM and DRS enabled. Your vCenter VM will move around; now imagine something breaks with the VM: you cannot connect to it anymore and you need to reboot it (it is Windows, after all).
Now imagine having a cluster with 32 hosts: it can take a while to log into each and every one of them trying to find the VM in order to bounce it.
We, for example, have a production cluster with customer VMs etc. and a management cluster with all the infrastructure VMs like vCenter and the lot, using different storage as well.
On top of that, the management cluster has a rule configured to make sure the vCenter VM is only allowed on two particular hosts. That way you limit your hunting.
Initially we had a rule to keep vCenter always on the same host, but that host failed one day and vCenter didn't come up, so we still had to hunt for it; hence using two hosts, limiting the time required to find the VM.
My own knowledge base made public: http://open902.com
-
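The pinning jibbajabba describes can be illustrated with a toy model (the host names and inventory below are invented, not anything from a real vSphere API): restricting vCenter to a known subset of hosts directly bounds how many hosts you have to log into before you find it.

```python
# Toy model: why a DRS VM-host rule shortens the hunt for a downed vCenter VM.
# All host names and the inventory structure here are made up for illustration.

def find_vm(vm_name, hosts, inventory):
    """Check candidate hosts in order; return (host, hosts_checked)."""
    for checks, host in enumerate(hosts, start=1):
        if vm_name in inventory.get(host, []):
            return host, checks
    return None, len(hosts)

# Fake cluster inventory: host -> VMs currently registered on it.
inventory = {
    "esx01": ["web01", "db01"],
    "esx02": ["app01"],
    "esx03": ["vcenter", "app02"],
    "esx04": ["web02"],
}

all_hosts = ["esx01", "esx02", "esx03", "esx04"]
pinned_hosts = ["esx02", "esx03"]  # the two hosts a "should run on" rule allows

host, checks = find_vm("vcenter", all_hosts, inventory)
print(f"no rule: found on {host} after checking {checks} hosts")

host, checks = find_vm("vcenter", pinned_hosts, inventory)
print(f"pinned:  found on {host} after checking {checks} hosts")
```

With a whole cluster the worst case is every host; with a two-host rule it is two logins, which is the point of jibbajabba's setup.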
Deathmage Banned Posts: 2,496
jibbajabba wrote: »The only tricky bit when running vCenter as a VM is trying to hunt it down when it breaks. Imagine you have vCenter as a VM and DRS enabled. Your vCenter VM will move around; now imagine something breaks with the VM: you cannot connect to it anymore and you need to reboot it (it is Windows, after all).
Now imagine having a cluster with 32 hosts: it can take a while to log into each and every one of them trying to find the VM in order to bounce it.
We, for example, have a production cluster with customer VMs etc. and a management cluster with all the infrastructure VMs like vCenter and the lot, using different storage as well.
On top of that, the management cluster has a rule configured to make sure the vCenter VM is only allowed on two particular hosts. That way you limit your hunting.
Initially we had a rule to keep vCenter always on the same host, but that host failed one day and vCenter didn't come up, so we still had to hunt for it; hence using two hosts, limiting the time required to find the VM.
Thanks for the useful scenario; I'll keep this in mind. Thanks!
-
Priston Member Posts: 999 ■■■■□□□□□□
We have 2 clusters at work: the vCenter for the production cluster is on the management cluster, and the vCenter for the management cluster is on the production cluster.
A.A.S. in Networking Technologies
A+, Network+, CCNA
-
jibbajabba Member Posts: 4,317 ■■■■■■■■□□
Where's your SQL server?
My own knowledge base made public: http://open902.com