How many VMs per host's resources?
Overdash
Member Posts: 61 ■■□□□□□□□□
I know that this is a common question and that the trick is to look at the utilization/performance of each VM on the host; however, I am not sure what to look for or how to go about it.
I have a 2.66 GHz quad-core CPU (Core i7) with Hyper-Threading, 16 GB of DDR3 memory (1333 MHz), and a plain old 7200 RPM hard drive.
I have a 2TB NAS as an iSCSI target, and I have 8 Server 2008 R2 VMs running various services in my domain. They don't really serve more than three clients, and the roles I have installed are DC, Exchange, SharePoint, WDS, DHCP, DNS, Web, etc.
How many VMs do you figure I could run in total? Right now I have every one of them at 2 GB of RAM and 1 vCPU each.
I appreciate any help or links to articles.
Thank you,
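As a rough back-of-the-envelope check on the memory side of this question, here is a minimal Python sketch; the hypervisor overhead figure is an illustrative assumption, not a measured value:

```python
# Rough memory budget for the host described above (figures illustrative).
host_ram_gb = 16
hypervisor_overhead_gb = 1.5   # assumed ESXi/VMkernel footprint; check your own host
vm_ram_gb = 2                  # each guest is currently allocated 2 GB

usable = host_ram_gb - hypervisor_overhead_gb
max_vms_no_overcommit = int(usable // vm_ram_gb)
print(f"{usable:.1f} GB usable -> ~{max_vms_no_overcommit} VMs at {vm_ram_gb} GB each")
# ESXi can overcommit memory (ballooning, page sharing, swap), so the real
# ceiling can be higher, at a performance cost once guests use their full RAM.
```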
Comments
-
NinjaBoy Member Posts: 968
For not more than 3 clients, you could get away with just running 1 server. However, I guess it depends on what you actually want to achieve.
-
powerfool Member Posts: 1,666 ■■■■■■■■□□
Well, I think with your low number of clients, you should be fine. Your first hit will likely be RAM, as you only have 16GB, but disk I/O is probably a major concern as well. You have a NAS that likely doesn't have advanced features like cache, in-memory parity calculation, or ILM... and it also likely has a low spindle count.
2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro -
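To put the spindle-count point in numbers, here is a hedged sketch of the usual rule-of-thumb IOPS estimate; the per-disk figure and workload mix are generic assumptions, not specs for this particular NAS:

```python
# Rule-of-thumb IOPS estimate for a small two-bay NAS (illustrative numbers).
disk_iops = 75            # ballpark for a single 7200 RPM SATA spindle
spindles = 2              # e.g. a two-bay unit
raid1_write_penalty = 2   # RAID 1 commits every write to both disks
read_fraction = 0.7       # assumed 70/30 read/write mix

raw_iops = disk_iops * spindles
effective_iops = raw_iops / (read_fraction + (1 - read_fraction) * raid1_write_penalty)
print(f"~{raw_iops} raw IOPS, ~{effective_iops:.0f} effective IOPS at a 70/30 mix")
# Spread across 8 Windows VMs, that is only a dozen or so IOPS each, which is
# why disk I/O tends to be the first wall in a setup like this.
```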
Overdash Member Posts: 61 ■■□□□□□□□□
Quote (powerfool): Well, I think with your low number of clients, you should be fine. Your first hit will likely be RAM, as you only have 16GB, but disk I/O is probably a major concern as well. You have a NAS that likely doesn't have advanced features like cache, in-memory parity calculation, or ILM... and it also likely has a low spindle count.
Maybe not; I am working with an Iomega ix2-200 Cloud Edition. I was also thinking of upgrading my ESXi server's HDD to a 30GB SSD. But with the current specs, do you think I could run more than eight VMs? How many vCPUs do you think I could allocate as well?
Thank you!
+REP -
blargoe Member Posts: 4,174 ■■■■■■■■■□
I know "it depends" is kind of a lame answer, but it would be difficult to project exactly how many VMs you could run. I can tell you that Exchange, SharePoint, and SQL Server (which will be installed for SharePoint) are somewhat memory-intensive and perform better when they have memory available to use for caching. However, for a small environment, the requirement will not be as high.
Memory or disk will be your limiting factor. If you consolidate AD/DNS/DHCP/WINS, have a standalone SharePoint server, a single-role Exchange server, and a couple of other servers (you mentioned WDS, web), there's a good chance you will reach or surpass the upper limit of acceptable performance.
When you say "three clients", are you saying three users, or three customers, each of which has some number of users?
IT guy since 12/00
Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
Working on: RHCE/Ansible
Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands... -
blargoe Member Posts: 4,174 ■■■■■■■■■□
You also asked about vCPUs. The best practice regarding the number of vCPUs per VM is to allocate as few per VM as possible, because ESX reserves extra memory in the VMkernel for each vCPU and its virtual memory. The more you allocate, the more memory overhead it will claim, meaning less memory available for user processing.
Your host machine is a single pCPU with 4 cores? You should be able to overallocate (a vCPU for however many VMs you create), but keep in mind that you will run into contention from time to time; for example, if all the VMs are downloading Windows Updates at the same time, the svchost processes in those VMs will eat up processor time.
IT guy since 12/00
Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
Working on: RHCE/Ansible
Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands... -
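To make the overhead argument concrete, here is a small sketch; the per-VM and per-vCPU costs below are made-up placeholders standing in for VMware's published per-VM overhead tables, which vary with vRAM and vCPU count:

```python
# Illustrative VMkernel overhead model (placeholder values, not VMware's tables).
BASE_OVERHEAD_MB = 100      # assumed fixed cost per powered-on VM
PER_VCPU_OVERHEAD_MB = 30   # assumed additional cost per vCPU

def vm_overhead_mb(vcpus: int) -> int:
    """Estimated VMkernel memory overhead for one VM."""
    return BASE_OVERHEAD_MB + PER_VCPU_OVERHEAD_MB * vcpus

for vcpus in (1, 2, 4):
    total_mb = 8 * vm_overhead_mb(vcpus)
    print(f"8 VMs x {vcpus} vCPU: ~{total_mb} MB of host RAM gone to overhead")
# The trend is the point: every extra vCPU costs host RAM before the guest
# does any work, hence the 'as few vCPUs as possible' rule.
```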
powerfool Member Posts: 1,666 ■■■■■■■■□□
Yep, as stated... "it depends" isn't what you are looking for... but it is the right answer. How are resources consumed currently, and what are you looking to add? As far as vCPUs... a single Core i7 CPU has 4 cores and hyper-threading, presenting itself as 8 available processors. I would imagine that you could easily do 16 vCPUs as long as all 16 aren't being hammered at once... but on the other end, folks have done MUCH higher densities. I have seen numerous reports of 10 vCPUs per core... and I am sure others have gone higher. But, it depends on the load.
Honestly, I have to imagine that RAM is going to be your first hit, though. I had a set of servers running dual Xeon 5500 CPUs with 32 gigs of RAM, and CPU wasn't a factor even when virtualizing tier-1 applications. RAM was our limiting factor, but we had virtualized everything that we wanted. Rather than adding servers to the cluster, we would have increased the RAM as necessary, first doubling to 64GB and then upping to 96GB, before adding hosts. A dual Xeon 5500 setup is not exactly the same as a single Core i7, but you also have half as much RAM.
Further, considering disk I/O, you have more to worry about than just the unit itself... you have the connectivity to worry about. How many interfaces... switching backplane capacity... port-channel configuration... jumbo frames... multipathing... etc.
If this is a production workload, how are you backing up? Many utilize SAN snapshots and replication for these sorts of tasks... not something I would imagine is available in NAS-type storage... even that which is iSCSI-capable.
2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro -
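The density figures above reduce to simple ratios; a quick sketch of the arithmetic (the 10:1 per-core figure is the anecdotal upper bound quoted in the post, not a recommendation):

```python
# vCPU overcommit ratios for a 4-core, hyper-threaded Core i7.
physical_cores = 4
logical_processors = physical_cores * 2  # hyper-threading

for total_vcpus in (8, 16, 40):
    per_core = total_vcpus / physical_cores
    per_lp = total_vcpus / logical_processors
    print(f"{total_vcpus} vCPUs -> {per_core:.0f}:1 per core, {per_lp:.0f}:1 per logical processor")
# 16 vCPUs is 4:1 per core; the 10:1-per-core reports would mean ~40 vCPUs on
# this host, workable only if most guests sit idle most of the time.
```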
powerfool Member Posts: 1,666 ■■■■■■■■□□
Quote (Overdash): Wow, thanks for the advice! I wonder if I should buy another i7 and a dual-socket mobo with more memory banks. Then I could buy the exact same memory and double it to 32GB! Bet I wouldn't be worried about performance then! As for infrastructure, I have jumbo frames enabled on my Iomega NAS and Cisco 3560 switch. I have two network cards (+1 built in) installed on my ESXi 5 server, making it highly available on the network; all LAN speed is 1Gbps. My NAS is RAID 1, so I have that level of redundancy, but I have not found a backup solution as of yet. I have a 1TB external drive that I want to use for backups once I find a good backup solution. Do you know of any? Thanks, (+REP)
Well, I don't think that you should really have much of an issue with your CPU, so I wouldn't go out and spend that money on a second CPU and a new motherboard (I am actually looking to get a dual Core i7 machine going for my home workstation, though, and I would run either Windows Server 2008 R2 or Windows 7 x64 Ultimate with VMware Workstation on top of it). Your issue isn't CPU... it is going to be RAM and/or disk I/O, if you even have any issue at all. You haven't explained the purpose of this setup (e.g. production workload, personal lab, personal business use, etc.). If you get to the point where resources are scarce, add more memory and disk I/O as required.
You are at a performance disadvantage right now with the RAID 1 setup, but it does give necessary redundancy. You could purchase a second, third, etc. NAS device, as you can have more than one in your setup and load different VMs on different units. This would also give additional controller and NIC performance on the storage side.
As far as backing up your data, again, it depends on your requirements. You could use DPM or another more traditional product to do backups of the guests, or you could just copy the VMDK files and such to other storage. Given your setup, I cannot imagine that you have vSphere and VCB (renamed; I don't know what it is called now... it was VMware Consolidated Backup).
EDIT: If you reach the point where you need more CPU, I would just build an additional system... that way you have more operational capability... and if one machine dies, at least you still have the other to service requests, even if it is at a slower pace.
2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro -
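As a minimal sketch of the "just copy the VMDK files" option mentioned above (all paths are hypothetical examples, and the VMs should be powered off or snapshotted first, since copying the disks of running guests produces inconsistent images):

```python
# Naive cold-backup sketch: copy each VM folder from the datastore to an
# external drive. Paths below are hypothetical, not real mount points.
import shutil
from pathlib import Path

datastore = Path("/vmfs/volumes/iscsi-datastore")   # assumed iSCSI datastore mount
backup_root = Path("/mnt/usb-backup/vm-backups")    # assumed external 1TB drive

for vm_dir in datastore.iterdir():
    if vm_dir.is_dir():
        dest = backup_root / vm_dir.name
        if dest.exists():
            shutil.rmtree(dest)   # copytree will not overwrite a stale copy
        shutil.copytree(vm_dir, dest)
        print(f"Backed up {vm_dir.name} -> {dest}")
# Purpose-built tools (DPM and the like) handle consistency and incrementals;
# this only illustrates the brute-force approach.
```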
blargoe Member Posts: 4,174 ■■■■■■■■■□
Quote (Overdash): Hello, thanks for the info on both posts (+REP). This is for my home domain, and the services are used primarily by me. Do you suggest having 2 vCPUs and 4GB of memory for a consolidated AD/DNS/DHCP server? I am the only real user in this domain, so the load isn't bad at all. Thank you!
If you're just using this for your home domain to play around with virtualization and/or the technology that you're installing inside the VMs, what you have might be fine for labbing.
IT guy since 12/00
Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
Working on: RHCE/Ansible
Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands... -
powerfool Member Posts: 1,666 ■■■■■■■■□□
Quote (blargoe): If you're just using this for your home domain to play around with virtualization and/or the technology that you're installing inside the VMs, what you have might be fine for labbing.
I would concur.
If you actually need higher performance for a short duration, maybe set up something on Amazon's EC2 and S3... fairly cheap for short-term use.
2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro -
elTorito Member Posts: 102
Even with the DNS and DHCP roles consolidated onto one domain controller, you don't need 2 vCPUs and 4 GB of RAM for that single VM, especially not in a small lab environment. You could get away with 1 vCPU and 1.5 GB. Easy.
WIP: CISSP, MCSE Server Infrastructure
Casual reading: CCNP, Windows Sysinternals Administrator's Reference, Network Warrior -
jibbajabba Member Posts: 4,317 ■■■■■■■■□□
Quote (Overdash): I have two network cards (+1 built in) installed on my ESXi 5 server, making it highly available on the network; all LAN speed is 1Gbps.
Highly available only, obviously, when you connect those NICs to a switch stack using port channels (where each switch is connected to a different fuse / breaker / power feed), and not to a single 3560.
My own knowledge base made public: http://open902.com