blargoe wrote: » I have run into this with some SharePoint VMs that have been in this environment for 5-6 years... the machines needed more vCPU, but the OS did not support additional sockets. Fortunately, VMware doesn't care how many sockets and cores per socket you use. To the vmkernel, a core is a core. I do the opposite, because with hot-add you can edit the number of vCPUs on the fly in a running VM, but not the number of cores per socket. So I start with one socket and two cores per socket by default. If I think it is a VM that may need to scale up, I might go with 2 sockets / 2 cores per socket, or possibly 1 socket / 4 cores per socket. Especially for Windows 2008 VMs, where there are more limitations with socket count in the OS.
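For anyone who wants to see the two knobs side by side, here's a minimal pyVmomi sketch; the vCenter address, credentials, and VM name are placeholders, and the 4 vCPU / 2 cores-per-socket values are just an example split, not a recommendation. The only point is that the total vCPU count (numCPUs) and the presented topology (numCoresPerSocket) are separate fields on the same reconfigure spec.

```python
# Hedged pyVmomi sketch: reconfigure a VM's vCPU count and cores-per-socket split.
# Host/user/VM name below are placeholders; the VM must be powered off (or have
# CPU hot-add enabled) for the vCPU count change to take effect.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk a container view to find the VM by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "exch01")

# 4 vCPUs presented to the guest as 2 sockets x 2 cores.
spec = vim.vm.ConfigSpec(numCPUs=4, numCoresPerSocket=2)
vm.ReconfigVM_Task(spec)

Disconnect(si)
```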
Lexluethar wrote: » Funny, Death, I was just researching this myself. I've always followed the 'physical world' in terms of going with sockets vs cores. Mostly I've never gone above 2 sockets because we don't have more than 2 physical sockets on a host in our environment. What I've learned this week, though, is that it really doesn't matter - at least that's what the KB articles I've found from VMware say. The sockets-vs-cores setting was introduced as a licensing workaround (that, and older OSes may not recognize more than 4 cores). I'm still not sure which to do - I'm now leaning towards just giving as many sockets as possible and leaving cores at 1. For one, that is VMware's default. Secondly, from what I've read this allows the VMware CPU scheduler to properly schedule CPU cycles.
dave330i wrote: » Always use sockets unless there's a licensing issue.
Deathmage wrote: » I only ask because we have an Exchange box having issues, and I think my predecessor didn't grasp the idea of vCPUs. The Exchange box has 16 vCPUs and it's having performance issues... I would think maybe 2 vCPUs for 2000+ users, or at a stretch go to 4 vCPUs, NOT 16!!!!!!! I'm pretty sure our CPU scheduler is ******** a brick.
Essendon wrote: » I don't completely agree here. Within your host's NUMA node sizing, it doesn't matter whether you have 2 sockets and 4 cores or 4 sockets and 2 cores. I agree with the licensing bit; most products go with sockets for licensing. So if your product's licensed for 2 sockets only and your machine needs, say, 16 cores - go with 2 sockets and 8 cores. Again, think of NUMA and vNUMA.

@Trev - I'd go with 1 vCPU to begin with. Now this is a complete generalization without knowing what apps are going to run and how much workload is going to be put on your hosts. Don't forget - performance isn't only about vCPU misconfiguration; it can have a lot to do with storage and/or network. So do your investigations before dropping vCPUs dramatically.

@blargoe - Enabling CPU hot-add incurs a fair bit of overhead depending on the size of the machine. If you're doing it for every VM, you're doing it wrong to begin with. Say you have 200 VMs in a cluster, each with 8 vCPUs and hot-add enabled - you'll likely see gigs and gigs of unnecessary overhead. In addition, I've found that once people get wind of the idea of hot-adding CPUs to a VM, you'll see more and more VMs end up oversized.

@Trev again - Exchange and SQL design are slightly furry beasts; they are not ordinary apps. VMware has sizing guides for Exchange, HIGHLY recommend you look 'em up before you go 1/4 or 4/1 - not so simple, dude!

Guys - performance isn't just about cores - think of the larger picture:
- host design
- cluster design
- network design
- storage design
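To put a rough number on the "fit inside a NUMA node" point, here's a back-of-the-envelope check in plain Python. The host figures (2 sockets x 8 cores, 128 GB RAM) are made-up example numbers, and a NUMA node is approximated as one physical socket with an even share of the RAM, which glosses over how a real host actually lays out its nodes.

```python
# Rough "does this VM fit in one NUMA node?" check (example numbers only).
# A NUMA node is approximated here as one socket plus an even slice of host RAM.
HOST_SOCKETS = 2
CORES_PER_SOCKET = 8
HOST_RAM_GB = 128

def fits_one_numa_node(vm_vcpus: int, vm_ram_gb: int) -> bool:
    node_cores = CORES_PER_SOCKET
    node_ram_gb = HOST_RAM_GB / HOST_SOCKETS
    return vm_vcpus <= node_cores and vm_ram_gb <= node_ram_gb

print(fits_one_numa_node(8, 48))    # True  - memory accesses can stay node-local
print(fits_one_numa_node(16, 48))   # False - the VM spans nodes, vNUMA matters
```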
slinuxuzer wrote: » The general rule of thumb is to try to keep your vCPU:pCPU consolidation ratio at 4:1 or under.
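That ratio is easy to sanity-check by just adding everything up. The sketch below is plain Python with cluster numbers loosely modeled on this thread (a few dual-socket 14-core hosts, thirty 4-vCPU VMs plus one 16-vCPU VM); swap in your own inventory counts.

```python
# Quick vCPU:pCPU consolidation ratio check (illustrative numbers only).
hosts = [{"sockets": 2, "cores_per_socket": 14}] * 3   # three dual-socket 14-core hosts
vm_vcpus = [4] * 30 + [16]                             # thirty 4-vCPU VMs + one 16-vCPU VM

pcpu_cores = sum(h["sockets"] * h["cores_per_socket"] for h in hosts)  # 84 physical cores
total_vcpus = sum(vm_vcpus)                                            # 136 vCPUs
print(f"{total_vcpus}:{pcpu_cores} -> {total_vcpus / pcpu_cores:.1f}:1")
# 136:84 -> 1.6:1, comfortably under the 4:1 rule of thumb
```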
slinuxuzer wrote: » I am for starting with a single socket and single core. You can actually drive down performance for some applications by assigning a second core. For instance, if you have a single-threaded application that will never be able to use that second core, you will still be required to schedule that second vCPU, and that takes overhead.
blargoe wrote: » Looks like I have some things to re-think in my environment based on the discussion in this thread. That's why I love this place. I am pretty much set up based on the understanding I had of the way things worked 4 years ago, and haven't really changed much of anything other than a couple of version upgrades since then. Looks like I need to ask for some time to do another deep dive.

Is it a true statement that vNUMA doesn't kick in until you go past 8 vCPUs? And that in general, if you can fit all of your memory accesses inside a single NUMA node, that would be optimal? I guess I just don't have that many VMs that are big enough to cross that threshold.

I still have quite a bit of Windows 2008 that was deployed with Standard or Enterprise edition, which do have CPU licensing limitations built in. In Windows Server 2012 R2 this limitation doesn't exist in the OS, and when covered with a Datacenter license on the host, I don't see a reason not to follow dave330i's recommendation of only increasing socket count, except for an application or virtual appliance licensing requirement (I'm not familiar with the licensing model of RHEL or other Enterprise Linux distributions).

I wasn't aware of a significant overhead issue with hot-add, to be honest. I don't have it turned on everywhere, but I do have it enabled for certain groups of VMs that are prone to application changes/additions where I can predict memory will need to be increased. I haven't seen any documentation/articles suggesting not to turn it on. Looks like I have some research to do.
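For that kind of deep dive, a quick inventory dump makes the outliers obvious. Here's a small, read-only pyVmomi audit sketch (vCenter address and credentials are placeholders) that lists each VM's vCPU count, socket/core split, and hot-add flags.

```python
# Hedged pyVmomi audit sketch: list vCPU count, socket/core split, and hot-add
# flags for every VM. Connection details below are placeholders; nothing is changed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    hw = vm.config.hardware
    sockets = hw.numCPU // (hw.numCoresPerSocket or 1)
    print(f"{vm.name}: {hw.numCPU} vCPU "
          f"({sockets} sockets x {hw.numCoresPerSocket or 1} cores), "
          f"cpuHotAdd={vm.config.cpuHotAddEnabled}, "
          f"memHotAdd={vm.config.memoryHotAddEnabled}")

Disconnect(si)
```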
Deathmage wrote: » Well, these Xeons have 14 cores per socket and have HT, and I just had vROps tell me to increase a VM to 10 vCPUs; we'll see if this helps. It was previously set to 4 sockets and 6 vCPUs; I changed it to 1 socket and 10 vCPUs, and so far the VM is way happier...
Deathmage wrote: » Well here, maybe you guys can make sense of this then. In the production cluster where the Exchange box is, all the other VMs have 4 vCPUs (2 sockets, 2 cores each), but the hosts can't sustain the load. Moreover, most of the VMs only use around 400 MHz at any given time, so why have that much processor power needing to be handled by the CPU scheduler? Prior to me coming onboard they never knew about vROps; the past week it has been running has been alarming. They score a whopping 6 out of 100.

This is actually what I was thinking of: the Exchange box is using cores, but the way it's configured it's literally hogging up 8 cores per Xeon E5-2697 v3, on top of the other, geez, 30 VMs with 4 vCPUs each. The cluster just can't sustain the load - that poor CPU scheduler.

Yes, the CPUs aren't the only problem here. They do have storage issues: their SAN only has 2% free space out of 70 TB, with 40 TB overprovisioned. They really needed this new VNX SAN; the 2007-era CLARiiON was showing its age.

Thanks for the feedback so far, guys. I've got a feeling that if we were all in a room someplace we could talk for hours. That's exactly what I've been thinking, and it's probably taxing the vmkernel. The Exchange box, as shown above, doesn't even use the MHz of one of the Xeon's cores; I think the max I saw was 1200 MHz. But even if it only needs one core, since it's set to 4 sockets and 4 vCPUs it still has to be co-scheduled, and the wait for that many resources seems like a performance hit and a waste of CPU cycles.
Essendon wrote: »
- Say a host's got 2 sockets with 8 cores each; you have a total of 16 cores. How many total vCPUs (add up vCPUs from all VMs) do you have in that cluster? I suggest 4:1 for most environments to begin with, unless otherwise needed. You can go 6:1 or even 8:1 (for a mostly single-vCPU workload cluster, though there aren't too many of those these days) before you really start to stretch the limit. So what ratio do you have? Remember there are multiple hosts in the cluster.
- What DRS levels do you have, and what do the other hosts look like? Has DRS tried to move VMs around? I've seen people leave DRS off (not having enough knowledge) and then wonder why their hosts and/or VMs are underperforming.
- 40 TB overprovisioned!! Jeez.. that may be the issue all along. Remember it's not only about the disk being overprovisioned; it can also be about what the FA ports are doing. You need to do a thorough review, to be honest; don't go with trial and error. This isn't a home lab!