Hello all!
I'm having trouble with the speed of my iSCSI traffic and I'm hoping somebody can point me in the right direction. OK, so this is my lab:
Left-vSphere Host - vSphere 5.5
Asus Z87-PRO
Intel i7-4770T
32 GB DDR3
500 GB SSD (Win 7) & 500 GB SSD (vSphere 5.5)
1 Onboard NIC & 2 HP NC364T Quad Port NICs - 9 NICs in total.
1 Distributed Switch (with Management, vMotion, iSCSI Port Binding & FT Logging)
Right-vSphere Host - vSphere 5.5
Same as above.
Server 1 - Domain Controller/iSCSI Target
Asus M4A785-M
AMD Phenom II x6 1075T
6 GB DDR2
1 TB HDD (Server 2012) & 500 GB SSD (iSCSI Target)
5 NICs in total (1 onboard, 1 Quad Port - individual IPs for multiple iSCSI paths, no NIC teaming here)
Server 2 - vCenter Server 5.5/iSCSI Target
Same as Server 1.
The network switch is an HP v1910-48G - no LACP running.
Now to my problem...
I've got an SSD in the Left-vSphere host, and when I create a VM on it (FreeNAS/Openfiler/Server 2012) and benchmark that VM with CrystalDiskMark, I get 450-500 MB/s running straight off the SSD. Happy days... no problem here.
But when I create an iSCSI target on that VM and place another VM inside that newly created iSCSI target, I only get about 100-150 MB/s going over the network - on the same drive and datastore that does 450-500 MB/s locally.
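For reference, here's the rough per-path maths I'm working from (just a quick sketch in Python - the 10% protocol overhead figure is an assumption, not something I've measured):

# Rough throughput ceiling per GigE iSCSI path; the overhead figure is a guess.
LINE_RATE_MB_S = 1000.0 / 8       # 1 Gbit/s line rate = 125 MB/s theoretical
OVERHEAD = 0.10                   # assumed TCP/iSCSI overhead
per_path = LINE_RATE_MB_S * (1 - OVERHEAD)

print("Practical ceiling per GigE path: ~%.0f MB/s" % per_path)
for paths in (1, 2, 3, 4):
    print("%d path(s): ~%.0f MB/s aggregate, if the I/O really spreads across them"
          % (paths, paths * per_path))

So one GigE path tops out somewhere around 112 MB/s, which is in the same ballpark as the numbers I'm seeing over iSCSI.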
I have set up iSCSI multipathing correctly (as shown in this YouTube link - using Round Robin), and the performance tab shows the traffic nicely distributed across the 9 NICs, but it's still nowhere near the top speed of the drive.
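In case it helps, here's a quick sketch of how I'd check the Round Robin settings from the ESXi shell, using the host's bundled Python and the stock esxcli commands (the naa.xxxx device ID is just a placeholder):

# Check which path selection policy each device uses and how Round Robin is tuned.
import subprocess

DEVICE = "naa.xxxx"   # placeholder - the real LUN identifier goes here

def run(cmd):
    print("$ " + " ".join(cmd))
    print(subprocess.check_output(cmd).decode())

# Every iSCSI LUN should show VMW_PSP_RR as its Path Selection Policy.
run(["esxcli", "storage", "nmp", "device", "list"])

# Round Robin tuning for one LUN - by default ESXi only switches paths every 1000 IOPS.
run(["esxcli", "storage", "nmp", "psp", "roundrobin", "deviceconfig", "get",
     "--device=" + DEVICE])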
And if I create an iSCSI target on either the Domain Controller or the vCenter Server, the same thing happens. Why does my performance over the network suck so much? I'm running jumbo frames throughout, with multiple GigE NICs and multiple iSCSI paths, and the switch reports every connected port at under 5% utilisation while a Storage vMotion is running.
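And here's a similar sketch of how I'd confirm that jumbo frames really pass end to end, just wrapping vmkping (the target IP is a placeholder):

# 8972 bytes = 9000-byte MTU minus 28 bytes of IP/ICMP headers; -d means don't fragment.
import subprocess

TARGET = "192.168.1.50"   # placeholder - one of the iSCSI target portal IPs

result = subprocess.call(["vmkping", "-d", "-s", "8972", TARGET])
print("Jumbo frames pass end to end" if result == 0
      else "Jumbo frames are NOT passing - check the MTU on the vSwitch, vmkernel ports and physical switch")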
Any ideas on how to improve performance? What am I doing wrong, when these guys are getting 380 MB/s?
Thanks in advance!