iSCSI transfer: is this normal speed?
Hi,
I'm using an MD3000i iSCSI SAN connected to the host using MPIO and two network adapters.
6 HDDs in RAID 1+0, 450 GB 15K.
I'm copying a large file (60 GB) from one LUN to another (no other activity on the SAN) and my transfer rate is 42-50 MB/sec. I expected something more, so I wonder: is it a fault in my configuration, or is this standard?
For some reason my MPIO is working in active/passive mode. I tried to change it to round robin in the device properties but I'm getting an error: "The parameter is incorrect. The round robin policy attempts to evenly distribute incoming requests to all processing paths."
Could you advise me please?
Comments
-
dynamik: What is the speed of the connection? Gigabit will give you a theoretical maximum of 1000 Mb / 8 = 125 MB/sec. I'm sure there's overhead that will affect your throughput, but that does seem to be on the low side.
Just for future reference, there is a storage forum as well: http://techexams.net/forums/viewforum.php?f=72 -
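The arithmetic in the reply above can be checked with a quick sketch. The ~10% protocol-overhead figure below is an assumption; the actual loss depends on MTU, iSCSI settings, and workload:

```python
# Rough gigabit iSCSI throughput check.
# Assumption: ~10% combined protocol overhead (Ethernet/IP/TCP/iSCSI
# headers); the real figure varies with MTU and workload.
link_mbit = 1000                        # gigabit link
raw_mb_per_s = link_mbit / 8            # 125 MB/s theoretical wire speed
overhead = 0.10                         # assumed protocol overhead
usable = raw_mb_per_s * (1 - overhead)  # ~112.5 MB/s realistic ceiling
observed = 50                           # best case reported in the thread, MB/s

print(f"wire speed : {raw_mb_per_s:.1f} MB/s")
print(f"usable est.: {usable:.1f} MB/s")
print(f"observed   : {observed} MB/s ({observed / usable:.0%} of estimate)")
```

Even allowing generous overhead, 42-50 MB/s is well under half of what a single gigabit path should deliver.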
PiotrIr: Thanks for your reply. The speed is 1 Gb, so it should be OK. I also tried to follow best practices in the NIC and switch configuration:
Switch:
1. Disable unicast storm control
2. Set jumbo frames to 9000
3. Enable flow control
4. Leave speed on auto-negotiation
5. Turn off the Spanning Tree Protocol
NICs:
- Enable flow control (Rx & Tx)
- Set jumbo frames to 9000
- Disable TCP/IPv6
- Unbind File and Print Sharing
- Clear "Register this connection's addresses in DNS"
- Disable WINS
The only difference I found from the best practices is that I'm using Cat 5e cable instead of Cat 6. Do you think that could make a difference like this? -
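As a side note on the jumbo-frames item in the list above, here is a rough sketch of why a 9000-byte MTU helps. The header sizes are standard Ethernet/IPv4/TCP values; iSCSI PDU overhead is ignored:

```python
# Why jumbo frames help: per-frame overhead is fixed, so a bigger MTU
# carries more payload per frame.  Header sizes are standard
# Ethernet/IPv4/TCP values (no options, no iSCSI headers).
ETH_WIRE = 8 + 14 + 4 + 12   # preamble+SFD, MAC header, FCS, inter-frame gap
IP_TCP = 20 + 20             # IPv4 + TCP headers

def goodput_fraction(mtu: int) -> float:
    """Fraction of on-wire bytes that are application payload."""
    return (mtu - IP_TCP) / (mtu + ETH_WIRE)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {goodput_fraction(mtu):.1%} efficient")
```

The gain from jumbo frames alone is only a few percent, so they cannot explain a 50 MB/s shortfall by themselves (their bigger win is reduced CPU load per transferred byte).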
HeroPsycho: (quoting PiotrIr's switch and NIC configuration above)
I'd hard-set your switch ports to gigabit full for sure. Good luck to all! -
PiotrIr: In the best practices from Dell and Microsoft, it says it should be set to auto.
-
astorrs: (quoting PiotrIr above) As for your performance problem, how are the two LUNs striped across the disks in the MD3000i? Do they share any or all of the disks? What RAID level is in use on the source/target disks? -
PiotrIr: In this case it is the same storage group, in RAID 1+0: 6 x 450 GB 15K SAS HDDs.
-
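Since both LUNs live in the same 6-disk RAID 1+0 group, a back-of-envelope sketch shows why the copy rate sits far below the raw spindle bandwidth. The per-disk streaming figure is an assumption for a 15K SAS drive, not a measured value:

```python
# Back-of-envelope for a LUN-to-LUN copy inside one 6-disk RAID 1+0 group.
# Per-spindle streaming rate is an ASSUMPTION for a 15K SAS disk; real
# results are further reduced by head seeks between source and target.
disks = 6
per_disk_mb_s = 70                    # assumed sequential MB/s per spindle
aggregate = disks * per_disk_mb_s     # raw spindle bandwidth

# Each copied byte costs 3 bytes of disk I/O on the same spindles:
# one read from a mirror member, plus a write to BOTH mirror members.
copy_ceiling = aggregate / 3          # ceiling before seek penalties

print(f"raw spindle bandwidth  : {aggregate} MB/s")
print(f"copy ceiling (no seeks): {copy_ceiling:.0f} MB/s")
```

With the heads constantly seeking between the source and target LUN regions on the same spindles, landing in the 40-50 MB/s range is plausible.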
TechJunky: Do NOT set full GB on the ports of your switch. I have seen this so many times, and it causes so many slow-network problems it's not funny. Set them to auto-negotiate. This is very common with Cisco equipment. I have seen very few switches that actually work properly hard-set to full gigabit; QLogic is one of the few.
Also, remember to use separate switches if possible for your iSCSI network. A lot of people like to VLAN their iSCSI off from their production network and reuse the same switch. Though it will work, make sure you have enough RAM on the switching device first, or you could see a network slowdown as well. Remember, even though they are VLANed, the traffic on both networks can still be affected if you don't have a fast enough processor/memory on the switch.
Hope some of that information helps. -
HeroPsycho: (quoting PiotrIr above) Do you have a link for this? I've never seen that before when dealing with iSCSI. -
PiotrIr: I tested it and the transfer was 107 MB/s, so the network looks OK. I'm just disappointed with RAID 1+0 on the SAN; I expected something more. But if that is standard, I'm happy enough.
Anyway, what about MPIO, which is working in active/passive mode? Is it possible to set both network adapters for iSCSI to load-balance, or is that standard as well?
Many thanks for your help and best regards.
TechJunky: Thanks for your reply. Yes, it is already set to auto-negotiation. However, my cluster NICs are set to 1 Gb full duplex following best practices. After your post I'm not sure I did the right thing. Could you advise?
HeroPsycho: link for you:
http://download.microsoft.com/download/a/e/9/ae91dea1-66d9-417c-ade4-92d824b871af/uguide.doc -
HeroPsycho: (quoting the link above) I must be blind, but I don't see anywhere in the doc where it says to set the port speed to auto-negotiate. -
PiotrIr: Page 27:
"• Use non blocking switches and set the negotiated speed on the switches. " -
HeroPsycho: Not to belabor the point, but that doesn't say to me to set it to auto-negotiate. That says to me to manually set the speed.
If your switch vendor doesn't work with a manual setting, so be it, but it's certainly not a Microsoft best practice to set it to auto-negotiate. -
PiotrIr: If you still don't believe me:
http://www.dell.com/downloads/global/solutions/public/white_papers/IP-SAN-BestPractice-WP.pdf
"It is recommend you use auto-negotiation only, since gigabit ethernet networks are designed to always have autonegotiation enabled." -
HeroPsycho: Hey, if Dell says that for their switches, I'm cool with that, and I have absolutely no argument there. I don't know Dell switches; I just knew what Microsoft said.
-
PiotrIr: But Microsoft says "set the negotiated speed", so exactly the same...
And Dell says "since gigabit ethernet networks are designed to always have autonegotiation enabled": not Dell switches, but the network...
HeroPsycho: "Set the negotiated speed" doesn't equal auto-negotiation. It's saying set the speed to a fixed value. Otherwise it would say "set the speed to auto-negotiate", or "auto", or something to that effect.
Like I said, I don't want to belabor the point here. I know the Microsoft best practice is to hard-set it, and Dell's apparently differs. Differing best practices from two companies happen from time to time.
I just IM'ed a storage contact of mine who's done a bunch of SAN work; we'll see what he recommends, if that helps... -
astorrs: I see where everyone is getting confused (or at least I think I do).
First off, 1000Base-T/TX was designed from the ground up to support (and essentially require) auto-negotiation. As such, with gigabit Ethernet links (that you want to run at gigabit speeds) it is best practice to leave them set to auto-negotiate; the majority of vendors do not support a fixed speed of 1000 Mb, and only support fixing the speed (and duplex) for backwards compatibility with 10 Mb and 100 Mb networks.
With that said, best practice for configuring MSCS and NLB clusters is still to manually fix the heartbeat NICs to 10 Mb half-duplex; this remains true for gigabit NICs as well.
For iSCSI, unless you are attempting to use it with 100 Mb NICs (not a good idea, and not supported, but technically possible), you will want to leave it set to auto-negotiate per the 1000Base-T standard.
My 2¢ -
PiotrIr: Many thanks for your clear answer.
What about the public NIC in the cluster? Should it be set to auto as well?
Best Regards -
HeroPsycho: He actually said to leave the NICs on full auto, but hard-set the switch ports to prevent a negotiate-down.
-
HeroPsycho: (quoting my earlier post about IM'ing a storage contact) I meant that guy. -
PiotrIr: Hmm, now I'm really confused.
1. All best practices: make sure you have the same negotiation settings on the NIC and the switch.
2. Cluster best practices: set speed and duplex for all NICs, on the adapter and on the switch.
3. Gigabit network best practices: set NICs and switch to auto-negotiation.
4. SAN engineer best practices: leave the NICs on full auto, but hard-set the switch ports to prevent a negotiate-down.
All these best practices are incompatible. So what should I do with the cluster network (let's say iSCSI will be 1000 Mb auto and private 10 Mb half duplex on a crossover cable, but public: I really don't know)? Downgrade to 100 Mb? -
PiotrIr: One additional issue.
The NLB cluster I'm going to set up uses the private network for heartbeat and for domain and SQL communication as well. If I'm not able to set speed and duplex, what should I do? As far as I know, auto-negotiation is not recommended at all for heartbeat. -