So, the concept of iSCSI isn't that difficult to understand. That being said, I hate it.
I have a home lab (theoretically pictured below) that consists of 3 Dell PE 2950s, 1 Cisco 3550, and 1 D-Link gigabit switch.
The 3 PE servers are 2 ESXi hosts and my iSCSI target. I am using StarWind for the iSCSI target software, and it does the job, but very slowly! On the server side I have my img file built and the target configured and attached to that img file.
I configured a new VMkernel (vmk) port on its own vSwitch to use the 2nd NIC (the 1st is management; nothing else is operational yet other than a vCenter VM). Then I dynamically discover and mount the iSCSI target on the storage adapter, and add new storage on LUN 0 of my 1 TB iSCSI target under the "Storage" tab. All good. The storage shows up, it's usable, seems fine.
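For reference, the setup steps above roughly correspond to this `esxcli` sequence on the ESXi host. This is a sketch only: the vSwitch/portgroup names, the IP addresses, the uplink (`vmnic1`), and the software iSCSI adapter name (`vmhba33`) are all assumptions for illustration; yours will differ.

```shell
# Dedicated vSwitch + portgroup for iSCSI on the 2nd NIC (names/IPs are examples)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI

# VMkernel port on that portgroup with a static IP on the storage subnet
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.2.10 --netmask=255.255.255.0 --type=static

# Enable the software iSCSI adapter and point dynamic discovery at the target
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.2.5

# Rescan so the LUN shows up under the storage adapter
esxcli storage core adapter rescan --adapter=vmhba33
```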
The vmk I configured is on a gigabit port connected to an unmanaged D-Link gigabit switch, which also has a connection from the 2nd NIC on the iSCSI SAN server. Pings traverse the gigabit link as they should. But when I attempt to upload a file from the local hard drive of the iSCSI SAN server to the iSCSI datastore via vCenter, the traffic seems to try to use BOTH the management and iSCSI network ports, and it is slow.
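A couple of hedged diagnostic steps that may help pin down which interface the iSCSI traffic is actually taking (again, `vmk1`, `vmhba33`, and the target IP are assumed example names):

```shell
# Verify the storage subnet is reachable specifically via the iSCSI vmk port
vmkping -I vmk1 192.168.2.5

# List which VMkernel ports are bound to the software iSCSI adapter;
# if nothing is bound, ESXi picks the path by routing table, which can
# land traffic on the management vmk
esxcli iscsi networkportal list --adapter=vmhba33

# Optionally bind the dedicated iSCSI vmk port to the adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
```

Note that binding requires the vmk portgroup to have a single active uplink; this is a sketch of the usual port-binding check, not a guaranteed fix for the slowness described.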