iSCSI transfer – is this normal speed?

Comments

  • PiotrIr Member Posts: 236
    Thanks for this reply, it gives me hope. :)

    What about the NLB cluster? Should I set it to auto as well? I'm just afraid the heartbeat shared with the SQL and domain network may cause some problems due to auto-negotiation. Could you tell me "no issue with this, set it to auto" and I will close the topic?
  • HeroPsycho Inactive Imported Users Posts: 1,940
    If it were me, I wouldn't take any chances on the private network with gigabit at all. You absolutely do not need gigabit for heartbeat traffic. Manually set it to 100/Full. Even that is far more bandwidth than you need for that.
    PiotrIr wrote:
    Hmm, now I'm really confused.

    1. General best practices – make sure you have the same negotiation settings on the NIC and the switch.
    2. Cluster best practices – set all NICs' speed and duplex manually on the adapter and the switch.
    3. 1 Gb network best practices – set the NICs and the switch to auto-negotiation.
    4. SAN engineer best practices – leave the NICs on full auto, but hard-set the switch ports to prevent a negotiate-down.

    All these best practices are incompatible. So what should I do with the cluster network (let's say iSCSI will be 1 Gb auto and the private network 10 Mb half duplex on a crossover cable, but for the public network – I really don't know)? Downgrade it to 100 Mb?

    Sometimes best practices differ from vendor to vendor, and technology to technology. Find the one that works best for you, or the one you're most comfortable with, and document why to CYA. If it causes problems, choose the next most logical choice given the problems, and document the issue, too.

    Don't forget to share with the public. Makes a great blog/whitepaper. ;)
    Good luck to all!
  • PiotrIr Member Posts: 236
    The problem is that the heartbeat in the NLB cluster is also the communication network (SQL and domain). In addition, I take backups over this network, so I can't really change it to 100 Mb. I will try it at 1 Gb and hope it will work.

    Many thanks for your help, asstors and HeroPsycho.

    Best Regards
  • HeroPsycho Inactive Imported Users Posts: 1,940
    NLB isn't as important. MSCS is though.
    Good luck to all!
  • dynamik Banned Posts: 12,312
    How many machines are you dealing with? Is adding additional NICs and another switch that big of a deal?
  • PiotrIr Member Posts: 236
    Unfortunately, it is. I provide full redundancy for my network, so I already have 4 switches (for four servers). I'm not able to add more NICs (some of my servers have 10 – the maximum allowed).
  • HeroPsycho Inactive Imported Users Posts: 1,940
    If you're putting 10 NICs in a server, unless it's something like a host for virtual machines or a wicked firewall/router box, there's a 99.99% chance you didn't design it right.
    Good luck to all!
  • PiotrIr Member Posts: 236
    It is a cluster host for virtual machines: 2x iSCSI, 2x heartbeat, 2 in a team for Public, 2 in a team for Hyper-V DMZ1, 2 in a team for Hyper-V DMZ2.
  • aherron Registered Users Posts: 1
    Have you got jumbo frames enabled?

    I've been having similar problems with this, and I'm sure it's not just NIC and switch speed settings.

    Enabling jumbo frames on the NIC (set to 9014) and enabling jumbo frame support on your switch will dramatically improve performance. I have dabbled with enabling and disabling TOE settings, and have found that I get greater transfer speeds with TOE disabled, which is counterintuitive, as TOE is supposed to increase throughput. (A quick way to verify jumbo frames end-to-end is sketched at the bottom of the thread.)

    Anyway just a thought.
  • it_consultant Member Posts: 1,903
    Besides all the NIC configuration talk, has anyone here seriously ever worked on an iSCSI SAN over 1 Gb links that was particularly fast? The whole reason for using iSCSI is to avoid spending a boatload of money on fabric switching. (See the rough throughput ceiling worked out below.)
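
For the original "is this normal speed?" question, a rough ceiling for a single 1 GbE iSCSI path can be worked out as below. This is a back-of-the-envelope sketch; the ~10% protocol overhead figure is an assumed rule of thumb, not a measurement from this setup.

```python
# Rough throughput ceiling for iSCSI over one 1 GbE link.
# The 10% overhead (Ethernet + TCP/IP + iSCSI headers) is an assumed
# rule-of-thumb figure, not a measured value from this thread.
line_rate_bps = 1_000_000_000          # 1 Gb/s raw line rate
protocol_overhead = 0.10
ceiling_mb_per_s = line_rate_bps / 8 / 1_000_000 * (1 - protocol_overhead)
print(f"~{ceiling_mb_per_s:.0f} MB/s practical ceiling per 1 GbE path")
```

Sustained transfers well below roughly 110 MB/s usually point at negotiation, MTU, TOE, or disk bottlenecks rather than the link itself.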
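
Following up on the jumbo-frames suggestion above, here is a minimal sketch for confirming that jumbo frames actually pass end-to-end. It assumes a Windows host (ping's -f and -l flags) and a hypothetical target address; adjust both for your environment.

```python
# Minimal jumbo-frame check: ping the iSCSI target with a large payload and
# the "don't fragment" bit set. TARGET is a hypothetical address; PAYLOAD
# assumes a 9000-byte MTU (9000 minus 20 bytes IP and 8 bytes ICMP header).
import subprocess

TARGET = "192.168.10.20"
PAYLOAD = 8972

# Windows ping: -f = don't fragment, -l = payload size in bytes.
result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), TARGET],
    capture_output=True,
    text=True,
)
print(result.stdout)
# Replies mean every NIC and switch port in the path accepts jumbo frames;
# "Packet needs to be fragmented but DF set" means something is still at 1500.
```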