ESXi to SAN 'Load Balancing' Optimization
Deathmage
Hey guys,
So I remember from my book that 'Route based on the originating virtual port ID' is pretty much the status quo for iSCSI links, since you want one NIC as Active and one as Standby for HA.
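For reference, something like this should show and set the failover order on a standard vSwitch port group from the CLI (the port group and vmnic names are just examples, and the options are worth double-checking against your ESXi build):

    # show the current teaming/failover policy on the iSCSI port group
    esxcli network vswitch standard portgroup policy failover get -p "iSCSI-A"

    # pin one uplink Active and one Standby (names are placeholders)
    esxcli network vswitch standard portgroup policy failover set -p "iSCSI-A" -a vmnic2 -s vmnic3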
But I'm wondering if anyone here has ever done an EtherChannel or generic IEEE LACP bond from an iSCSI/Fibre Channel OOB switch to the ESXi hosts, and then a bond to the SAN. I'm curious whether that could benefit the back-end performance of the SAN and the ESXi hosts, or if the gain would be minimal.
I already use 1Gb, jumbo frames, and Round Robin. I use 1Gb because this network doesn't even come close to saturating a 1Gb link, so 10Gb would have been complete overkill.
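For anyone curious, jumbo frames can be sanity-checked end to end with something like this (the SAN group IP is obviously a placeholder):

    # confirm MTU 9000 on the vSwitch and the iSCSI vmkernel ports
    esxcli network vswitch standard list
    esxcli network ip interface list

    # end-to-end jumbo frame test to the SAN (don't fragment; 8972-byte payload plus headers = 9000)
    vmkping -d -s 8972 <SAN group IP>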
I know this EqualLogic supports multipathing, so that is an option. I'm just not sure if multipathing will actually help with performance or if it's really just 'fun' technology.
This is more of an advanced topic, but I've been looking into 'Advanced Settings', and I'm curious if anyone has ever tangled with the 'Disk.DiskMaxIOSize' value. I can see this being useful for a few of our VMs that have high IOPS, like SQL, but I'm more curious whether this is a global value or if it can be applied on a per-VM basis.
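For reference, here's how the value can be viewed and changed from the CLI; it lives under the host's advanced settings (the 4096 below is just an example value):

    # show the current value (size is in KB, default is 32767)
    esxcli system settings advanced list -o /Disk/DiskMaxIOSize

    # change it on this host (example value only)
    esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096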
Comments
jibbajabba
Etherchannels = IP Hash ...
VMware KB: Cannot use iSCSI or NFS over an EtherChannel bound switch
Although again, that isn't 6.x
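If anyone wants to see what load balancing policy a host is actually using before touching anything, this should show it (the vSwitch name is just an example):

    # shows the Load Balancing setting (e.g. iphash for EtherChannel setups) plus active/standby uplinks
    esxcli network vswitch standard policy failover get -v vSwitch1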
Some more stuff:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869

My own knowledge base made public: http://open902.com
Lexluethar
We use the default for our EQL (route based on originating virtual port), but we use the EQL-provided PSP for our round robin policy.
If you aren't saturating your links, I guess RR is more for show than anything else. I think (think) the default policy is 'use last active path', which isn't load balancing at all - more of a failback policy.
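For what it's worth, the PSP side of this can be checked and changed along these lines (the SATP name below is what EqualLogic volumes usually claim on stock ESXi, but verify against your own host):

    # list installed path selection plugins (the Dell MEM / EQL PSP shows up here if installed)
    esxcli storage nmp psp list

    # make Round Robin the default PSP for the SATP handling the array (applies to newly claimed devices)
    esxcli storage nmp satp set -s VMW_SATP_EQL -P VMW_PSP_RR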
joelsfood
Are you only using a single NIC with failover for iSCSI? If you have two NICs on the iSCSI network, why not use both and use MPIO? Multipathing definitely increases performance, assuming the array has more performance to give. It will work even better with the EqualLogic PSP installed.
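Rough sketch of the port-binding side of MPIO with the software iSCSI adapter (the vmhba number and vmk names are placeholders; check them with 'esxcli iscsi adapter list' and 'esxcli network ip interface list'):

    # bind both iSCSI vmkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal add -A vmhba33 -n vmk2

    # rescan and check that each device now shows multiple active paths
    esxcli storage core adapter rescan -A vmhba33
    esxcli storage nmp device list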