vCPU Design Considerations

Deathmage Banned Posts: 2,496
Hey guys,

Anyone know of a rough design consideration for vCPUs in a datacenter? I like to always start with 2 vCPUs and work my way up, but I was curious how others determine vCPU counts based on a VM's demand levels.

Comments

  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Depends on how many cores per ESXi host. With modern 12+ core-per-socket processors, 2 vCPUs is a good starting point.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Lexluethar Member Posts: 516
    Funny, Death, I was just researching this myself. I've always followed the 'physical world' in terms of going with sockets vs cores. Mostly I've never gone above 2 sockets, because we don't have more than 2 physical sockets on a host in our environment.

    What I've learned this week, though, is that it really doesn't matter - at least according to the KB articles I've found from VMware. The sockets-vs-cores setting was introduced as a licensing workaround (that, and older OSes may not recognize more than a certain number of sockets).

    I'm still not sure which to do - I'm now leaning towards just giving as many sockets as possible and leaving cores per socket at 1. For one, that is VMware's default. Secondly, from what I've read, this allows the VMware CPU scheduler to properly schedule CPU cycles.
  • Deathmage Banned Posts: 2,496
    I only ask because we have an Exchange box having issues, and I think my predecessor didn't grasp the idea of vCPUs. The Exchange box has 16 vCPUs and it's having performance issues...

    I would think maybe 2 vCPUs for 2,000+ users, or at a stretch go to 4 vCPUs - NOT 16!

    I'm pretty sure our CPU Scheduler is ******** a brick.
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    Lexluethar wrote: »
    What I've learned this week, though, is that it really doesn't matter - at least according to the KB articles I've found from VMware. The sockets-vs-cores setting was introduced as a licensing workaround (that, and older OSes may not recognize more than a certain number of sockets).

    I have run into this with some SharePoint VMs that have been in this environment for 5-6 years... the machines needed more vCPU, but the OS did not support additional sockets. Fortunately, VMware doesn't care how many sockets and cores per socket you use. To the vmkernel, a core is a core.
    Lexluethar wrote: »
    I'm still not sure which to do - I'm now leaning towards just giving as many sockets as possible and leaving cores per socket at 1. For one, that is VMware's default. Secondly, from what I've read, this allows the VMware CPU scheduler to properly schedule CPU cycles.

    I do the opposite, because with hot-add, you can edit the number of vCPUs on the fly in a running VM, but not the number of cores per socket.

    So I start with one socket and two cores per socket by default. If I think it is a VM that may need to scale up, I might go with 2 sockets/2 cores per socket, or possibly 1 socket/4 cores per socket. Especially for Windows 2008 VMs, where there are more limitations with socket count in the OS.
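    If anyone wants to audit what's already deployed before changing defaults, something along these lines will dump each VM's socket/core layout and whether CPU hot-add is on. This is just a rough pyVmomi sketch - the vCenter address and credentials are placeholders, not anything from this thread:

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim
        import ssl

        # Placeholder connection details -- swap in your own vCenter and account.
        ctx = ssl._create_unverified_context()   # lab-style: skips cert verification
        si = SmartConnect(host="vcenter.example.com", user="audit@vsphere.local",
                          pwd="password", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            vms = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in vms.view:
                cfg = vm.config
                if cfg is None:      # skip orphaned/inaccessible VMs
                    continue
                vcpus = cfg.hardware.numCPU
                cores = cfg.hardware.numCoresPerSocket or 1
                sockets = vcpus // cores
                hot_add = "on" if cfg.cpuHotAddEnabled else "off"
                print(f"{vm.name}: {vcpus} vCPU = {sockets} socket(s) x {cores} core(s), "
                      f"CPU hot-add {hot_add}")
        finally:
            Disconnect(si)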
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • Deathmage Banned Posts: 2,496
    blargoe wrote: »
    I have run into this with some SharePoint VMs that have been in this environment for 5-6 years... the machines needed more vCPU, but the OS did not support additional sockets. Fortunately, VMware doesn't care how many sockets and cores per socket you use. To the vmkernel, a core is a core.



    I do the opposite, because with hot-add, you can edit the number of vCPUs on the fly in a running VM, but not the number of cores per socket.

    So I start with one socket and two cores per socket by default. If I think it is a VM that may need to scale up, I might go with 2 sockets/2 cores per socket, or possibly 1 socket/4 cores per socket. Especially for Windows 2008 VMs, where there are more limitations with socket count in the OS.

    Would you make an Exchange box a 4/4 for 2008 R2, or a 1/4?

    I mean, you just never think about CPU cycles, and this is now making me ponder CPU socket and virtual core design, lol...
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Lexluethar wrote: »
    Funny, Death, I was just researching this myself. I've always followed the 'physical world' in terms of going with sockets vs cores. Mostly I've never gone above 2 sockets, because we don't have more than 2 physical sockets on a host in our environment.

    What I've learned this week, though, is that it really doesn't matter - at least according to the KB articles I've found from VMware. The sockets-vs-cores setting was introduced as a licensing workaround (that, and older OSes may not recognize more than a certain number of sockets).

    I'm still not sure which to do - I'm now leaning towards just giving as many sockets as possible and leaving cores per socket at 1. For one, that is VMware's default. Secondly, from what I've read, this allows the VMware CPU scheduler to properly schedule CPU cycles.


    There is a performance difference between socket and core.

    Does corespersocket Affect Performance? - VMware vSphere Blog - VMware Blogs

    Always use sockets unless there's a licensing issue.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    dave330i wrote: »
    Always use sockets unless there's a licensing issue.
    I don't completely agree here. Within your host's NUMA node sizing, it doesn't matter whether you have 2 sockets and 4 cores or 4 sockets and 2 cores. I agree with the licensing bit; most products go with sockets for licensing. So if your product's licensed for 2 sockets only and your machine needs, say, 16 cores - go with 2 sockets and 8 cores per socket. Again, think of NUMA and vNUMA.
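    The "fits in one NUMA node" check itself is simple arithmetic. A rough Python sketch, assuming one NUMA node per physical socket (sub-NUMA clustering / cluster-on-die changes this) and using made-up host numbers:

        def fits_in_one_numa_node(vm_vcpus, vm_ram_gb,
                                  host_cores_per_socket, host_ram_gb, host_sockets):
            """True if the VM's vCPUs and vRAM can live inside a single NUMA node."""
            ram_per_node_gb = host_ram_gb / host_sockets
            return (vm_vcpus <= host_cores_per_socket and
                    vm_ram_gb <= ram_per_node_gb)

        # Hypothetical host: dual 14-core Xeon E5-2697 v3, 256 GB RAM (example figure)
        print(fits_in_one_numa_node(16, 64, host_cores_per_socket=14,
                                    host_ram_gb=256, host_sockets=2))  # False: 16 vCPUs > 14 cores
        print(fits_in_one_numa_node(8, 64, host_cores_per_socket=14,
                                    host_ram_gb=256, host_sockets=2))  # True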

    @Trev - I'd go with 1 vCPU to begin with. Now this is a complete generalization without knowing what apps are going to run and how much workload is going to be put on your hosts. Don't forget - performance isn't only about vCPU misconfiguration; it can have a lot to do with storage and/or network. So do your investigation before dropping vCPU counts dramatically.

    @blargoe - Enabling CPU hot-add incurs a fair bit of overhead, depending on the size of the machine. If you're doing it for every VM, you're doing it wrong to begin with. Say you have 200 VMs in a cluster, each with 8 vCPUs and hot-add enabled - you'll likely see gigs and gigs of unnecessary overhead. In addition, I've found that if people get wind of the idea that CPUs can be hot-added to a VM, you'll see more and more VMs end up oversized.

    @Trev again - Exchange and SQL design are slightly furry beasts; they are not ordinary apps. VMware have sizing guides for Exchange - HIGHLY recommend you look 'em up before you go 1/4 or 4/1 - not so simple, dude!

    Guys - performance isn't just about cores - think of the larger picture:

    - host design
    - cluster design
    - network design
    - storage design
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Being a little pedantic, the thread's title should have been VM design considerations, not vCPU design considerations. You don't do vCPU designs ;)
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • DPG Member Posts: 780 ■■■■■□□□□□
    Deathmage wrote: »
    I only ask because we have an Exchange box having issues, and I think my predecessor didn't grasp the idea of vCPUs. The Exchange box has 16 vCPUs and it's having performance issues...

    I would think maybe 2 vCPUs for 2,000+ users, or at a stretch go to 4 vCPUs - NOT 16!

    I'm pretty sure our CPU Scheduler is ******** a brick.

    The 16 vCPUs aren't going to impact performance unless there is contention with other VMs on the same host. Which Exchange roles does the VM have running? I run into memory-hog implementations of Exchange much more often than ones with CPU issues.
  • kj0 Member Posts: 767
    Even if you don't have hot-add enabled, oversizing the vCPUs can create overhead as well.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • Lexluethar Member Posts: 516
    As someone said, I would NOT enable hot-add on all of your VMs - only on the ones where downtime is not an option and an under-performing server is a huge issue. What I've read regarding hot-add is that having it enabled causes a fair amount of overhead. My understanding is that when hot-add is enabled, the hypervisor has to assume that at any point you may add all available CPUs to that VM, so VMware has to soft-allocate those resources to the VM. Again, is that practical? I don't know. I've just read really wonky things about earlier versions of hot-add, and the KB articles I've found regarding it in 5.5 (because I've played with this idea) say it causes some overhead, so use it sparingly.

    As for the vCPU thing, I'm still not sure, man. Okay, you said just use sockets to allow the scheduler to do its thing, but the scheduler is doing the EXACT same thing with cores as well - those threads are handled in the same fashion. The two big differences I've heard of are licensing considerations and NUMA awareness. If you have an application that is NUMA-aware, you can use multiple cores per socket and the application will perform better without relying on the CPU scheduler.
  • TheProf Users Awaiting Email Confirmation Posts: 331 ■■■■□□□□□□
    Essendon wrote: »
    I don't completely agree here. Within your host's NUMA node sizing, it doesn't matter whether you have 2 sockets and 4 cores or 4 sockets and 2 cores. I agree with the licensing bit; most products go with sockets for licensing. So if your product's licensed for 2 sockets only and your machine needs, say, 16 cores - go with 2 sockets and 8 cores per socket. Again, think of NUMA and vNUMA.

    @Trev - I'd go with 1 vCPU to begin with. Now this is a complete generalization without knowing what apps are going to run and how much workload is going to be put on your hosts. Don't forget - performance isn't only about vCPU misconfiguration; it can have a lot to do with storage and/or network. So do your investigation before dropping vCPU counts dramatically.

    @blargoe - Enabling CPU hot-add incurs a fair bit of overhead, depending on the size of the machine. If you're doing it for every VM, you're doing it wrong to begin with. Say you have 200 VMs in a cluster, each with 8 vCPUs and hot-add enabled - you'll likely see gigs and gigs of unnecessary overhead. In addition, I've found that if people get wind of the idea that CPUs can be hot-added to a VM, you'll see more and more VMs end up oversized.

    @Trev again - Exchange and SQL design are slightly furry beasts; they are not ordinary apps. VMware have sizing guides for Exchange - HIGHLY recommend you look 'em up before you go 1/4 or 4/1 - not so simple, dude!

    Guys - performance isn't just about cores - think of the larger picture:

    - host design
    - cluster design
    - network design
    - storage design

    I agree!

    In fact I always start with 1 vCPU and work my way up (assuming we're talking about VDI).
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Essendon wrote: »
    I don't completely agree here. Within your host's NUMA node sizing, it doesn't matter whether you have 2 sockets and 4 cores or 4 sockets and 2 cores. I agree with the licensing bit; most products go with sockets for licensing. So if your product's licensed for 2 sockets only and your machine needs, say, 16 cores - go with 2 sockets and 8 cores per socket. Again, think of NUMA and vNUMA.

    @Trev - I'd go with 1 vCPU to begin with. Now this is a complete generalization without knowing what apps are going to run and how much workload is going to be put on your hosts. Don't forget - performance isn't only about vCPU misconfiguration; it can have a lot to do with storage and/or network. So do your investigation before dropping vCPU counts dramatically.

    @blargoe - Enabling CPU hot-add incurs a fair bit of overhead, depending on the size of the machine. If you're doing it for every VM, you're doing it wrong to begin with. Say you have 200 VMs in a cluster, each with 8 vCPUs and hot-add enabled - you'll likely see gigs and gigs of unnecessary overhead. In addition, I've found that if people get wind of the idea that CPUs can be hot-added to a VM, you'll see more and more VMs end up oversized.

    @Trev again - Exchange and SQL design are slightly furry beasts; they are not ordinary apps. VMware have sizing guides for Exchange - HIGHLY recommend you look 'em up before you go 1/4 or 4/1 - not so simple, dude!

    Guys - performance isn't just about cores - think of the larger picture:

    - host design
    - cluster design
    - network design
    - storage design

    You'll be hard pressed to find a modern app that only needs a single CPU.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    @Dave - Plenty floating around in the environment I look after (dozens of vCenters, ~800 hosts, god-knows-how-many VMs).
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • slinuxuzer Member Posts: 665 ■■■■□□□□□□
    I am for starting with a single socket and a single core. You can actually drive down performance for some applications by assigning a second core. For instance, if you have a single-threaded application that will never be able to use that second core, the second vCPU still has to be scheduled, and that takes overhead. We proved this out in the VMware Optimize and Scale course - fewer operations per minute (OPM).

    Also, over-allocating vCPUs causes a similar problem. Once you start driving your hosts beyond a 4:1 vCPU-to-pCPU consolidation ratio, you will start driving CPU ready times up. At that point it isn't a gigahertz problem; it becomes a problem of how long it takes the vCPUs that have work to do to get scheduled on the underlying resource - they will be in a longer line with vCPUs that don't have work to do.

    The general rule of thumb is to try to keep your vCPU-to-pCPU consolidation ratio at 4:1 or under.

    vCPU hot-add: turning this on actually disables vNUMA.

    Also, I would have to go back and read up on some things, but the general recommendation is to try to make your VM's socket layout mirror the underlying host. There is overhead involved with having a VM that has more sockets than the host it is running on; it's basically a conversion that has to take place before hitting the hardware.

    There are also some design factors that come into play with monster VMs and sizing them with one or more sockets. Basically, each socket is assigned a memory bank, and you could see some memory improvements by allowing a VM access to only one memory bank - avoiding remote memory calls and traversing the QPI link between NUMA nodes.
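    To see where a cluster actually sits against that 4:1 guideline, something like this pyVmomi sketch adds up the vCPUs of powered-on VMs per cluster and divides by physical cores. The vCenter details are placeholders, not anything from this thread:

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim
        import ssl

        ctx = ssl._create_unverified_context()   # lab-style: skips cert verification
        si = SmartConnect(host="vcenter.example.com", user="audit@vsphere.local",
                          pwd="password", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            clusters = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.ClusterComputeResource], True)
            for cluster in clusters.view:
                # physical cores across all hosts in the cluster (HT threads not counted)
                pcores = sum(h.hardware.cpuInfo.numCpuCores for h in cluster.host)
                # vCPUs of powered-on VMs only
                vcpus = sum(vm.config.hardware.numCPU
                            for h in cluster.host for vm in h.vm
                            if vm.config is not None and
                            vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn)
                ratio = vcpus / pcores if pcores else 0
                print(f"{cluster.name}: {vcpus} vCPU on {pcores} physical cores = {ratio:.1f}:1")
        finally:
            Disconnect(si)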
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    slinuxuzer wrote: »
    The general rule of thumb is to try to keep your vCPU-to-pCPU consolidation ratio at 4:1 or under.

    The 4:1 is for 1 vCPU VMs. You'll have to lower the ratio for multi-vCPU VMs, or you'll start running into CPU ready issues.
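    For anyone checking this in the vSphere client, the usual rule-of-thumb conversion from the CPU ready "summation" counter (milliseconds per sample) to the percentage you'd see in esxtop is roughly the following - a small sketch, with the 20-second interval being the vCenter real-time chart default:

        def cpu_ready_percent(ready_ms, interval_s=20, num_vcpus=1):
            """Convert a CPU ready summation value (ms) to a per-vCPU percentage.

            interval_s is the chart sample interval: 20 s for real-time charts,
            longer for rolled-up historical stats.
            """
            return (ready_ms / (interval_s * 1000.0)) * 100.0 / num_vcpus

        # Example: 1600 ms of ready time in a 20 s real-time sample on a 4 vCPU VM
        print(f"{cpu_ready_percent(1600, interval_s=20, num_vcpus=4):.1f}% ready per vCPU")  # 2.0%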

    @Essendon - My experience is 2 or more CPUs lately.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Deathmage Banned Posts: 2,496
    Well here, maybe you guys can make sense of this then. In the production cluster where the Exchange box is, all the other VMs have 4 vCPUs (2 sockets x 2 cores), but the hosts can't sustain the CPU load. Moreover, most of the VMs only use about 400 MHz at all times, so why have that much processor power for the CPU scheduler to churn through?




    Prior to me coming onboard, they never knew about vROps; the past week it's been running, the results have been alarming. They score a whopping 6 out of 100.


    This is actually what I was thinking of. See, the Exchange box is using cores, but the way it's configured, it's literally hogging up 8 cores per Xeon E5-2697 v3, on top of the other - geez - 30 VMs with 4 vCPUs each. The cluster just can't sustain the load; that poor CPU scheduler.

    Yes, the CPUs aren't the only problem here. They do have storage issues; their SAN only has 2% free space out of 70 TB, with 40 TB overprovisioned. They really needed this new VNX SAN; the 2007-era CLARiiON was showing its age.



    Thanks for the feedback so far, guys. I've got a feeling that if we were all in a room someplace we could talk for hours.

    slinuxuzer wrote: »
    I am for starting with a single socket and a single core. You can actually drive down performance for some applications by assigning a second core. For instance, if you have a single-threaded application that will never be able to use that second core, the second vCPU still has to be scheduled, and that takes overhead.

    That's exactly what I've been thinking, and it's probably taxing the vmkernel. The Exchange box, as shown above, doesn't even use the MHz of one of the Xeon's cores - I think the max I saw was 1200 MHz. But even if it only needs one core, since it has 4 sockets with 4 cores each, all 16 vCPUs still have to be scheduled, and the wait time for those kinds of resources just seems like a performance hit and a waste of CPU cycles.
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    I wonder if your storage issues are compounding your vCPU issues; if the kernel is waiting on the storage driver, all 8 vCPUs are going to be waiting, and the guest OS will see high "System" CPU time.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    Looks like I have some things to re-think in my environment based on the discussion on this thread. That's why I love this place.

    I am pretty much set up based on the understanding I had of the way things worked 4 years ago, and haven't really changed much of anything other than a couple of version upgrades since then. Looks like I need to ask for some time to do another deep dive again.

    Is it a true statement that vNUMA doesn't kick in until you go past 8 vCPUs? And that in general, if you can fit all of your memory accesses inside a single NUMA node, that would be optimal? I guess I just don't have that many VMs that are big enough to cross that threshold.

    I still have quite a bit of Windows 2008 that was deployed with Standard or Enterprise edition, which do have CPU licensing limitations built in. In Windows Server 2012 R2 this limitation doesn't exist in the OS, and when covered with a Datacenter license on the host, I don't see a reason not to follow dave330i's recommendation of only increasing socket count except for an application or virtual appliance licensing requirement (I'm not familiar with the licensing model of RHEL or other Enterprise Linux distributions).

    I wasn't aware of a significant overhead issue with hot-add to be honest. I don't have it turned on everywhere, but I do have it enabled for certain groups of VMs that are prone to application changes/additions that I can predict will need to have memory increased. I haven't seen any documentation/articles suggesting not to turn it on. Looks like I have some research to do.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • Deathmage Banned Posts: 2,496
    blargoe wrote: »
    Looks like I have some things to re-think in my environment based on the discussion on this thread. That's why I love this place.

    I am pretty much set up based on the understanding I had of the way things worked 4 years ago, and haven't really changed much of anything other than a couple of version upgrades since then. Looks like I need to ask for some time to do another deep dive again.

    Is it a true statement that vNUMA doesn't kick in until you go past 8 vCPUs? And that in general, if you can fit all of your memory accesses inside a single NUMA node, that would be optimal? I guess I just don't have that many VMs that are big enough to cross that threshold.

    I still have quite a bit of Windows 2008 that was deployed with Standard or Enterprise edition, which do have CPU licensing limitations built in. In Windows Server 2012 R2 this limitation doesn't exist in the OS, and when covered with a Datacenter license on the host, I don't see a reason not to follow dave330i's recommendation of only increasing socket count except for an application or virtual appliance licensing requirement (I'm not familiar with the licensing model of RHEL or other Enterprise Linux distributions).

    I wasn't aware of a significant overhead issue with hot-add to be honest. I don't have it turned on everywhere, but I do have it enabled for certain groups of VMs that are prone to application changes/additions that I can predict will need to have memory increased. I haven't seen any documentation/articles suggesting not to turn it on. Looks like I have some research to do.


    Well, these Xeons have 14 cores per socket plus HT, and I just had vROps tell me to increase a VM to 10 vCPUs; we'll see if this helps. It was previously set to 4 sockets and 6 vCPUs, and I changed it to 1 socket and 10 vCPUs, and so far the VM is way happier...
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    @Blargoe - vNUMA does kick in once you go past 8 vCPUs by default. It can be adjusted. We had to do it for Exchange 2013 servers.

    Hot-plug does increase overhead. The bigger problem is that newly added CPUs land on NUMA node 0 unless you vMotion or power-cycle the VM.

    A lot of the older designs do need to be revisited due to new technologies in hardware & software.
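    For reference, the knob usually cited for this is the per-VM advanced setting numa.vcpu.min (default 9, i.e. vNUMA is exposed to guests with more than 8 vCPUs). A hedged pyVmomi sketch of lowering it - the VM name, the value and the vCenter details are all placeholders, and the new topology only shows up after a full power cycle:

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim
        import ssl

        ctx = ssl._create_unverified_context()   # lab-style: skips cert verification
        si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                          pwd="password", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            vm = next(v for v in view.view if v.name == "exchange01")  # placeholder VM name

            # Expose vNUMA to this guest at 4+ vCPUs instead of the default 9+.
            spec = vim.vm.ConfigSpec(extraConfig=[
                vim.option.OptionValue(key="numa.vcpu.min", value="4")])
            task = vm.ReconfigVM_Task(spec=spec)
            print("Reconfigure task submitted:", task.info.key)
        finally:
            Disconnect(si)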
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Deathmage wrote: »
    Well, these Xeons have 14 cores per socket plus HT, and I just had vROps tell me to increase a VM to 10 vCPUs; we'll see if this helps. It was previously set to 4 sockets and 6 vCPUs, and I changed it to 1 socket and 10 vCPUs, and so far the VM is way happier...
    Be careful about what vROps suggests. It's not so black and white. It uses something called policies, which dictate the nature of the recommendations it'll generate. You must base your policies on how your environment's designed - do you overcommit on RAM or CPU, or neither - have you checked these settings?
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Deathmage wrote: »
    Well, these Xeons have 14 cores per socket plus HT, and I just had vROps tell me to increase a VM to 10 vCPUs; we'll see if this helps. It was previously set to 4 sockets and 6 vCPUs, and I changed it to 1 socket and 10 vCPUs, and so far the VM is way happier...
    Get the terminology correct too ;) For instance, 2 cores and 2 sockets = a machine with 4 vCPUs. Run up esxtop, switch to the memory view and see how much memory's being fetched from a remote NUMA node. Curious - what's the NUMA node size on this hardware, 8?
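    From memory (double-check against your esxtop build), the relevant NUMA fields in the memory view are NLMEM/NRMEM (local vs. remote MB) and N%L (percent local), and that last one is just a ratio - a tiny sketch:

        def numa_locality_percent(local_mb, remote_mb):
            """Roughly what esxtop reports as N%L: share of the VM's memory on its home node."""
            total = local_mb + remote_mb
            return 100.0 * local_mb / total if total else 100.0

        # Example: 6 GB local, 2 GB remote -> 75% local, i.e. a quarter of the VM's
        # memory is being fetched across the QPI link from the other NUMA node.
        print(f"{numa_locality_percent(6144, 2048):.0f}% local")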
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Deathmage wrote: »
    Well here, maybe you guys can make sense of this then. In the production cluster where the Exchange box is, all the other VMs have 4 vCPUs (2 sockets x 2 cores), but the hosts can't sustain the CPU load. Moreover, most of the VMs only use about 400 MHz at all times, so why have that much processor power for the CPU scheduler to churn through?

    Prior to me coming onboard, they never knew about vROps; the past week it's been running, the results have been alarming. They score a whopping 6 out of 100.

    This is actually what I was thinking of. See, the Exchange box is using cores, but the way it's configured, it's literally hogging up 8 cores per Xeon E5-2697 v3, on top of the other - geez - 30 VMs with 4 vCPUs each. The cluster just can't sustain the load; that poor CPU scheduler.

    Yes, the CPUs aren't the only problem here. They do have storage issues; their SAN only has 2% free space out of 70 TB, with 40 TB overprovisioned. They really needed this new VNX SAN; the 2007-era CLARiiON was showing its age.

    Thanks for the feedback so far, guys. I've got a feeling that if we were all in a room someplace we could talk for hours.

    That's exactly what I've been thinking, and it's probably taxing the vmkernel. The Exchange box, as shown above, doesn't even use the MHz of one of the Xeon's cores - I think the max I saw was 1200 MHz. But even if it only needs one core, since it has 4 sockets with 4 cores each, all 16 vCPUs still have to be scheduled, and the wait time for those kinds of resources just seems like a performance hit and a waste of CPU cycles.

    - Say a host's got 2 sockets with 8 cores each; you have a total of 16 cores. How many total vCPUs (add up the vCPUs from all VMs) do you have in that cluster? I suggest 4:1 for most environments to begin with, unless otherwise needed. You can go 6:1 or even 8:1 (for a mostly single-vCPU workload cluster, though there aren't too many of those these days) before you really start to stretch the limit. So what ratio do you have? Remember there are multiple hosts in the cluster.

    - What DRS level do you have, and what do the other hosts look like? Has DRS tried to move VMs around? I've seen people leaving DRS off (not having enough knowledge) and then wondering why their hosts and/or VMs are underperforming.

    - 40 TB overprovisioned!! Jeez... that may be the issue all along. Remember it's not only about the disks being overprovisioned; it can also be about what the FA ports are doing.

    You need to do a thorough review, to be honest; don't go with trial and error. This isn't a home lab!
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Deathmage Banned Posts: 2,496
    Essendon wrote: »
    - Say a host's got 2 sockets with 8 cores each; you have a total of 16 cores. How many total vCPUs (add up the vCPUs from all VMs) do you have in that cluster? I suggest 4:1 for most environments to begin with, unless otherwise needed. You can go 6:1 or even 8:1 (for a mostly single-vCPU workload cluster, though there aren't too many of those these days) before you really start to stretch the limit. So what ratio do you have? Remember there are multiple hosts in the cluster.

    - What DRS level do you have, and what do the other hosts look like? Has DRS tried to move VMs around? I've seen people leaving DRS off (not having enough knowledge) and then wondering why their hosts and/or VMs are underperforming.

    - 40 TB overprovisioned!! Jeez... that may be the issue all along. Remember it's not only about the disks being overprovisioned; it can also be about what the FA ports are doing.

    You need to do a thorough review, to be honest; don't go with trial and error. This isn't a home lab!

    It's not a home lab, nor in any shape or form like my last clusters either; I'm far from Kansas now. This cluster has over 45 hosts and a few thousand VMs. :)

    Well, I would say the majority of these VMs are 2:2 and some are 4:4. These hosts have 56 logical processors after HT (they run dual 14-core Xeons) and 1 TB of RAM per host. Yes, there are actually many clusters in this DC, with different zonings - it's like a VMware nerd **** on my brain.

    DRS is at level 3 for the most part, but it's still a bit too aggressive; we might go to level 2 at some point - there are talks.

    Well, the arrays were literally screaming prior to the new array. It wasn't even a backend IOPS issue; it was purely a space issue.

    Right now I'm going over the Cisco configs, NetFlow, EMC logs, esxtop, and the VM configurations, and using vROps for an overall baseline. It will take me a few weeks, no joke, to look over them all one by one. I've already got a long list of issues I'm seeing.

    Update: a little more info for anyone else that uses this system - they run EPIC, and the ratio here is 4:4 or 6:6. Can anyone vouch for EPIC and say whether these things really need a 6:6 layout rather than a 6:1 or 4:1?