stryder144 wrote: » The Official Cert Guide (pp. 32-33) lists them in the following order: 1. vCenter, 2. VUM, 3. ESXi, 4. VMs. Is this accurate?
jibbajabba wrote: » It is. Another source is the official upgrade sequence KB: VMware KB: Update sequence for vSphere 5.5 and its compatible VMware products. VUM is a component of vCenter, so it will be upgraded alongside vCenter, and then you move down the chain.
GSXRules wrote: » Assuming VUM is installed on the same server as vCenter.
tomtom1 wrote: » Correct, always start with vCenter.
Bloogen wrote: » 1. Because admission control was not enabled, not enough resources were reserved for failover. Specifically, there was not enough memory to support the virtual machines on the remaining hosts. To avoid this, admission control should be enabled to tolerate a host failure of at least 1, or to reserve 25% of cluster resources (assuming a 4-node cluster). Additional physical resources (RAM) will likely need to be added to keep all VMs running under this constraint, given the scenario's current overcommitment.

2. When "Host failures a cluster tolerates" = 1, the cluster will not allow VMs to power on past the point where a single node failure would overcommit memory/CPU resources. The issue here is that one of our hosts has a much larger amount of memory than the others (an "unbalanced cluster"). HA calculates a slot size based on the largest VM reservation in the cluster to determine the resources needed if a node fails. If no resource reservations are made, it falls back on a default slot size, which can result in very large VMs not "fitting" into one of the smaller nodes in a failover scenario. To overcome this you can:
- Use memory/CPU reservations so HA can accurately calculate the slot size.
- Separate similar hosts/VMs into their own cluster.
- Use "percentage of cluster resources reserved" instead of host failures.
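The slot math described above can be sketched in a few lines. All numbers here are hypothetical (a made-up 4-node unbalanced cluster), and real HA also folds per-VM memory overhead into the slot size, which this sketch omits:

```python
# Hypothetical unbalanced cluster: RAM per host in MB, plus per-VM
# memory reservations. Illustrative numbers, not from the thread.
hosts_mb = [65536, 32768, 32768, 32768]
vm_reservations_mb = [4096, 4096, 8192, 32768]  # one very large VM

# HA derives the memory slot size from the largest reservation in the
# cluster (real HA also adds per-VM memory overhead; omitted here).
slot_size = max(vm_reservations_mb)

slots_per_host = [h // slot_size for h in hosts_mb]
total_slots = sum(slots_per_host)

# Worst case: the host contributing the most slots fails.
failover_slots = total_slots - max(slots_per_host)

print(f"slot size: {slot_size} MB")                       # 32768 MB
print(f"slots per host: {slots_per_host}")                # [2, 1, 1, 1]
print(f"slots after worst-case failure: {failover_slots}")  # 3
print(f"all {len(vm_reservations_mb)} VMs restartable? "
      f"{failover_slots >= len(vm_reservations_mb)}")       # False
```

The single 32 GB reservation inflates the slot size for the whole cluster, so the three surviving 32 GB hosts offer only one slot each: exactly the unbalanced-cluster problem the reservations/percentage-based/separate-cluster fixes address.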
Asif Dasl wrote: First off, what a great thread!! My guess, and it is a guess, is that because you disabled admission control there weren't enough resources reserved, so not all of the VMs could be powered on - and because admission control was disabled, you never received an error that they could not be powered on. The solution is to enable admission control so that you get that warning.
Asif Dasl wrote: My second guess is that the host on the left failed and there were not enough resources for all of the VMs to run on the remaining hosts. The solution is to increase the RAM substantially on at least one of the other hosts, though preferably all hosts would have the same configuration.
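A rough back-of-the-envelope check of that guess, ignoring slots entirely and just comparing raw memory (the host sizes and VM footprints below are invented for illustration):

```python
# Hypothetical cluster: RAM per host in GB, and configured VM memory in GB.
hosts_gb = [64, 32, 32, 32]
vm_memory_gb = [8, 8, 16, 32, 16, 24]

# Worst case: the largest host fails; does the VM memory still fit?
remaining_gb = sum(hosts_gb) - max(hosts_gb)
needed_gb = sum(vm_memory_gb)

print(f"needed: {needed_gb} GB, remaining: {remaining_gb} GB")  # 104 vs 96
print(f"all VMs can run after the failure: {needed_gb <= remaining_gb}")  # False
```

With these numbers the cluster is 8 GB short after losing its biggest host, which is why adding RAM to the other hosts (or sizing all hosts identically) fixes it.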
Essendon wrote: » Question 3: 1. A company new to virtualization has hired you as their go-to person for the project of virtualizing their environment. The first question they ask is whether vCenter should be virtual or physical. Why would you recommend a virtual vCenter? Come up with at least 5 solid reasons. 2. The company is concerned that putting vCenter in a cluster with DRS enabled may make it harder to find among all the VMs if there's, say, a power outage in the datacenter and you need to power on vCenter before other VMs. How can you ensure you can find the vCenter VM in such a scenario (remember DRS is enabled in the cluster)?
Essendon wrote: » @tomtom1 - I'd be careful with that too! Thing is, if there's a VM with, say, a 32GB RAM reservation and you set the slot size to too small a value, the VM will occupy multiple slots and DRS may need to move things around to fit it in and reduce resource fragmentation. I know it doesn't "affect" anything as such, just putting it out there.
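To put a number on that multi-slot point: with a fixed slot size (the kind of override das.slotMemInMB provides; the values below are just an example), a big reservation spans many slots:

```python
from math import ceil

# Illustrative only: a 32 GB memory reservation with an admin-fixed
# slot size of 1 GB (e.g. via a das.slotMemInMB-style override).
reservation_mb = 32768
fixed_slot_mb = 1024

# The VM consumes one slot per slot-size worth of reservation.
slots_needed = ceil(reservation_mb / fixed_slot_mb)
print(f"this VM occupies {slots_needed} slots")  # 32 slots
```

Those 32 slots must all be found on a single host at restart time, which is why an undersized fixed slot can leave capacity fragmented across hosts even though the total slot count looks healthy.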
Essendon wrote: » It quite surprises me that VMware admins and system managers alike struggle to understand the difference between vCPUs and pCPUs.
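The distinction is easy to make concrete with a consolidation-ratio check; the host and VM sizing below is invented for illustration:

```python
# Hypothetical host: 2 sockets x 8 cores = 16 physical cores (pCPUs).
pcpus = 2 * 8

# vCPUs assigned across the VMs on that host (illustrative sizing).
vcpus_per_vm = [4, 4, 2, 2, 8, 1, 1, 2]
vcpus = sum(vcpus_per_vm)

# vCPUs are scheduled onto pCPUs; the ratio shows the overcommitment.
ratio = vcpus / pcpus
print(f"{vcpus} vCPUs on {pcpus} pCPUs = {ratio:.1f}:1")  # 24 vCPUs, 1.5:1
```

A vCPU is only a scheduling entity, not a dedicated core, so a ratio above 1:1 is normal - the point is knowing the ratio exists and watching it (and metrics like CPU ready time) rather than assuming vCPU = pCPU.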