
iSCSI transfer: is this normal speed?

PiotrIr Member Posts: 236
Hi,

I'm using an MD3000i iSCSI SAN connected to the host using MPIO and two network adapters.

6 HDDs in RAID 1+0, 450GB 15K.
I'm copying a large file (60GB) from one LUN to another (no other activity on the SAN) and the transfer rate is 42-50 MB/sec. I expected something more, so I wonder whether this is a fault in my configuration or just normal?

For some reason my MPIO is working in active/passive mode. I tried to change it to round robin in the device properties but I get an error: "The parameter is incorrect. The round robin policy attempts to evenly distribute incoming requests to all processing".

Could you advise me, please?

Comments

  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    What is the speed of the connection? Gigabit will give you a theoretical maximum of 1000Mb/8 = 125MB/sec. I'm sure there's overhead that will affect your throughput, but that does seem to be on the low side.

    Just for future reference, there is a storage forum as well: http://techexams.net/forums/viewforum.php?f=72
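    As a point of reference, here is that arithmetic written out in a minimal Python sketch; the overhead fraction is an assumed ballpark figure, not a measurement of this particular setup:

        # Rough throughput estimate for a single gigabit iSCSI link.
        # The overhead fraction below is an assumed ballpark, not a measured value.
        link_speed_mbps = 1000                        # gigabit Ethernet, megabits per second
        raw_mb_per_sec = link_speed_mbps / 8          # 125 MB/sec theoretical ceiling

        overhead_fraction = 0.10                      # assume ~10% lost to TCP/IP + iSCSI framing
        usable_mb_per_sec = raw_mb_per_sec * (1 - overhead_fraction)

        print(f"Theoretical ceiling: {raw_mb_per_sec:.0f} MB/sec")
        print(f"Realistic single-link estimate: ~{usable_mb_per_sec:.0f} MB/sec")
        # An observed 42-50 MB/sec is well below either figure, so the link
        # itself is probably not the only bottleneck.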
  • PiotrIr Member Posts: 236
    Thanks for your reply. The speed is 1Gb so it should be OK. I also tried to follow the best practices for NIC and switch configuration:

    Switch:
    1. Disable unicast storm control
    2. Set jumbo frames to 9000
    3. Enable flow control
    4. Speed auto-negotiation
    5. Turn off the Spanning Tree Algorithm
    NICs:
    - Enable flow control Rx & Tx
    - Set jumbo frames to 9000
    - Disable TCP IPv6
    - Unbind File and Print Sharing
    - Clear "register this connection's address in DNS"
    - Disable WINS

    The only difference I found from the best practices is that I'm using Cat 5e cable instead of Cat 6. Do you think that could make a difference like this? (A quick end-to-end jumbo-frame check is sketched below.)
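    One way to sanity-check that the 9000-byte jumbo frames above actually survive end to end is a do-not-fragment ping sized just under the jumbo MTU. A minimal Python sketch using the Windows ping flags; the SAN port address below is a placeholder, not taken from this thread:

        # Send a do-not-fragment ping at 9000 - 28 = 8972 bytes of payload
        # (jumbo MTU minus IP + ICMP headers). Windows ping syntax:
        #   -f  set the Don't Fragment flag
        #   -l  payload size in bytes
        #   -n  number of echo requests
        import subprocess

        SAN_PORT_IP = "192.168.130.101"   # placeholder - use your MD3000i iSCSI port IP
        PAYLOAD = 9000 - 28               # jumbo MTU minus IP/ICMP header overhead

        result = subprocess.run(
            ["ping", "-f", "-l", str(PAYLOAD), "-n", "4", SAN_PORT_IP],
            capture_output=True, text=True,
        )
        print(result.stdout)
        # "Packet needs to be fragmented but DF set" means something in the path
        # (NIC, switch port, or SAN port) is not really passing 9000-byte frames.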
  • HeroPsycho Inactive Imported Users Posts: 1,940
    PiotrIr wrote:
    Thanks for your reply. The speed is 1Gb so it should be OK. I also tried to follow the best practices for NIC and switch configuration: [...]

    I'd hard set your switch ports to gigabit full for sure.
    Good luck to all!
  • PiotrIr Member Posts: 236
    In the best practices from Dell and Microsoft it says it should be set to auto.
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    PiotrIr wrote:
    In the best practices from Dell and Microsoft it says it should be set to auto.
    Correct, 1000Base-T/TX should always be set to Auto.

    As for your performance problem, how are the two LUNs striped across the disks in the MD3000i? Do they share any/all of the disks? What RAID level is in use on the source/target disks?
  • PiotrIr Member Posts: 236
    In this case it is the same storage group in RAID 1+0: 6x 450GB 15K SAS HDDs.
  • TechJunky Member Posts: 881
    Do NOT set full GB on the ports of your switch. I have seen this so many times, and it causes so many slow network connection problems it's not funny. Set them to auto-negotiate. This is very common with Cisco equipment. I have seen very few switches that actually work properly when hard set to full GB, Qlogic being one of the few.

    Also, remember to use separate switches if possible for your iSCSI network. A lot of people like to VLAN their iSCSI off from their production network and reuse the same switch. Though it will work, make sure you have enough RAM on the switching device first, or you could see a network slowdown as well. Remember, even though they are VLAN'd, the traffic on both networks can still be affected if you don't have a fast enough processor/memory on the switch.

    Hope some of that information helps.
  • HeroPsycho Inactive Imported Users Posts: 1,940
    PiotrIr wrote:
    In the best practices from Dell and Microsoft it says it should be set to auto.

    Do you have a link for this? I've never seen that before when dealing with iSCSI.
    Good luck to all!
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    PiotrIr wrote:
    In this case it is the same storage group in RAID 1+0: 6x 450GB 15K SAS HDDs.
    Given that your source and target disks are one and the same, you're going to see a performance hit, since the heads will need to both read and write to the same disks and will spend a lot of time seeking. Can you do a quick comparison by copying data from the source LUN to the local disk on one of the servers and seeing what the performance over iSCSI is there? At least that way we can try to rule out a few things. (A quick timing sketch follows below.)
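    If you want a number rather than eyeballing the Explorer copy dialog, a minimal Python timing sketch along those lines; both paths are placeholders, and the test file should be much larger than server RAM so Windows caching doesn't flatter the result:

        # Time a large sequential copy and report MB/sec, so a LUN-to-local run
        # and a LUN-to-LUN run can be compared with the same measurement.
        import os
        import shutil
        import time

        SOURCE = r"E:\testdata\bigfile.bin"   # file on the source LUN (placeholder path)
        TARGET = r"C:\temp\bigfile.bin"       # local disk target (placeholder path)

        size_mb = os.path.getsize(SOURCE) / (1024 * 1024)

        start = time.time()
        shutil.copyfile(SOURCE, TARGET)
        elapsed = time.time() - start

        print(f"Copied {size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/sec")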
  • PiotrIr Member Posts: 236
    I tested it and the transfer was 107 MB/s, so that looks OK. I'm just a bit disappointed with RAID 1+0 on the SAN – I expected something more… but if that is standard, I'm happy enough.

    Anyway, what about MPIO working in active/passive mode? Is it possible to set both network adapters for iSCSI to load balance, or is that standard as well?

    Many thanks for your help and best regards.

    TechJunky: Thanks for your reply. Yes, it is already set to auto-negotiation. However, my cluster NICs are set to 1GB with full duplex, following best practices. After your post I'm not sure I did the right thing. Could you advise?

    HeroPsycho: link for you:
    http://download.microsoft.com/download/a/e/9/ae91dea1-66d9-417c-ade4-92d824b871af/uguide.doc
  • HeroPsycho Inactive Imported Users Posts: 1,940

    I must be blind, but I don't see anywhere in the doc that says to set the port speed to autonegotiate.
    Good luck to all!
  • PiotrIr Member Posts: 236
    Page 27

    "• Use non blocking switches and set the negotiated speed on the switches. "
  • HeroPsycho Inactive Imported Users Posts: 1,940
    Not to belabor the point, but that doesn't say to me to set it to autonegotiate. That says to me to manually set the speed.

    If your switch vendor doesn't work on a manual setting, so be it, but it's certainly not a Microsoft best practice to set it to autonegotiate.
    Good luck to all!
  • PiotrIr Member Posts: 236
    If you still don't believe me :)

    http://www.dell.com/downloads/global/solutions/public/white_papers/IP-SAN-BestPractice-WP.pdf

    "It is recommended you use auto-negotiation only, since gigabit ethernet networks are designed to always have autonegotiation enabled."
  • HeroPsycho Inactive Imported Users Posts: 1,940
    Hey, if Dell says that for their switches, I'm cool with that, and I have absolutely no arguments there. I don't know Dell switches. I just knew what Microsoft said.
    Good luck to all!
  • PiotrIr Member Posts: 236
    But Microsoft says "set the negotiated speed", so exactly the same...

    And Dell says "since gigabit ethernet networks are designed to always have autonegotiation enabled" – so not just Dell switches, but the network...
  • HeroPsycho Inactive Imported Users Posts: 1,940
    "Set the negotiated speed" doesn't equal auto-negotiation. It's saying set the speed to a finite speed. Otherwise it would say "set the speed to auto-negotiate", or "auto", or something to that effect.

    Like I said, I don't want to belabor the point here. I know Microsoft's best practice is to hard set it, and Dell's apparently differs. Differing best practices from two companies happens from time to time.

    I just IM'ed a storage contact of mine who's done a bunch of SAN work to see what he recommends, if that helps...
    Good luck to all!
  • PiotrIr Member Posts: 236
    Please let me know the answer. Many thanks.
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    I see where everyone is getting confused (or at least I think I do).

    First off, 1000Base-T/TX were designed from the ground up to support (and essentially require) auto-negotiation. As such, with gigabit Ethernet links (that you want to run at gigabit speeds) it is best practice to leave them set to auto-negotiate; the majority of vendors do not support a fixed speed of 1000Mb at all – they only support fixing the speed (and duplex) for backwards compatibility with 10Mb and 100Mb networks.

    With that said, best practice for configuring MSCS and NLB clusters is still to manually fix the heartbeat NICs to 10Mb/half-duplex - this remains true for gigabit NICs as well.

    For iSCSI, unless you are attempting to use it with 100Mb NICs (not a good idea or supported, but technically possible) you will want to leave it to auto-negotiate per the 1000Base-T standard.

    My 2¢
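    As a quick follow-up, one way to confirm what the links actually negotiated after being left on auto is to query the adapters with the built-in wmic tool. A minimal Python sketch (assumes wmic is available on the host; adapter names will differ, and adapters without a link simply report no speed):

        # List network adapters and their negotiated speed (reported by WMI in
        # bits per second), to verify the iSCSI NICs really came up at 1 Gb/s.
        import subprocess

        output = subprocess.run(
            ["wmic", "nic", "get", "Name,Speed"],
            capture_output=True, text=True,
        ).stdout

        for line in output.splitlines()[1:]:
            line = line.strip()
            if not line:
                continue
            # wmic pads columns with spaces; speed is the last whitespace-separated field
            name, _, speed = line.rpartition(" ")
            if speed.isdigit():
                print(f"{name.strip()}: {int(speed) // 1000000} Mb/s")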
  • PiotrIr Member Posts: 236
    Many thanks for your clear answer.

    What about the public NIC in the cluster? Should it be set to auto as well?

    Best Regards
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    PiotrIr wrote:
    What about the public NIC in the cluster? Should it be set to auto as well?
    What speed NIC? ;)
  • PiotrIr Member Posts: 236
    1 GB
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    PiotrIr wrote:
    1 GB
    As such, with gigabit Ethernet links (that you want to run at gigabit speeds) it is best practice to leave them set to auto-negotiate...
  • HeroPsycho Inactive Imported Users Posts: 1,940
    He actually said leave the NIC's on full auto, but hard set the switch ports to prevent a negotiate down.
    Good luck to all!
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    HeroPsycho wrote:
    He actually said leave the NIC's on full auto, but hard set the switch ports to prevent a negotiate down.
    If by "he" you mean "me", no I meant leave them both on auto for gigabit links you want to run at gigabit speeds. :)
  • HeroPsycho Inactive Imported Users Posts: 1,940
    HeroPsycho wrote:
    I just IM'ed a storage contact of mine who's done a bunch of SAN work to see what he recommends, if that helps...

    I meant that guy. :?
    Good luck to all!
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    HeroPsycho wrote:
    I meant that guy. :?
    Who? But he's on 1st...
  • PiotrIr Member Posts: 236
    Hmm, now I'm really confused.

    1. In all best practices – make sure you have the same negotiation settings on the NIC and the switch.
    2. Cluster best practices – set the speed and duplex of all NICs on both the adapter and the switch.
    3. 1GB network best practices – set NICs and switch to auto-negotiation.
    4. SAN engineer best practices – leave the NICs on full auto, but hard set the switch ports to prevent a negotiate-down.

    All these best practices are incompatible. So what should I do with the cluster network (let's say iSCSI will be 1000MB auto and private 10MB half duplex on a crossover cable, but public – I really don't know)? Downgrade to 100MB?
  • PiotrIr Member Posts: 236
    One additional issue.

    The NLB cluster I'm going to set up uses the private network for the heartbeat and for domain and SQL communication as well. If I'm not able to set the speed and duplex, what should I do? As far as I know, auto-negotiation is not recommended at all for the heartbeat.
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    PiotrIr wrote:
    Hmm, now I'm really confused.

    1. In all best practices – make sure you have the same negotiation settings on the NIC and the switch.
    2. Cluster best practices – set the speed and duplex of all NICs on both the adapter and the switch.
    3. 1GB network best practices – set NICs and switch to auto-negotiation.
    4. SAN engineer best practices – leave the NICs on full auto, but hard set the switch ports to prevent a negotiate-down.

    All these best practices are incompatible. So what should I do with the cluster network (let's say iSCSI will be 1000MB auto and private 10MB half duplex on a crossover cable, but public – I really don't know)? Downgrade to 100MB?
    You had the first two basically correct. For the public network, since it's gigabit, leave it at auto. Problems with auto-negotiation and 1000Base-T are almost unheard of, since it was written into the standard from the beginning (and all equipment must fully support it to be compliant with the standard), vs. older standards where it was tacked on later and problems between different vendors were relatively common. For cluster private networks, fix the speed and duplex to 10/half since that's all you require and it's the lowest common denominator (it's also the usual fallback for most networking equipment if all other methods of negotiating with the other end fail, since it was the original spec).
    PiotrIr wrote:
    One additional issue.

    The NLB cluster I'm going to set up uses the private network for the heartbeat and for domain and SQL communication as well. If I'm not able to set the speed and duplex, what should I do? As far as I know, auto-negotiation is not recommended at all for the heartbeat.
    Now this one is way outside of best practice. There should be nothing on the private network but cluster/NLB heartbeat packets. Add additional NICs or implement VLANs on your public NICs.