
Dell PowerConnect 5424 iSCSI Optimization

RobertKaucher Member Posts: 4,299
I'm working on the VMware/SAN system for my company and I did some performance testing. We have a PC 5424 switch and a PV NX3000. I created a single iSCSI disk, attached it to a VM, and I am using SQLIO to do the I/O testing. The results are poor, which is typical for a single-path, cheap SATA system.
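For reference, the kind of run I'm doing looks roughly like this (the file path and parameter values here are illustrative, not my exact test):

*********************
REM 8 KB random reads, 2 threads, 8 outstanding I/Os per thread,
REM 120 seconds, no OS buffering, latency histogram
sqlio -kR -t2 -s120 -o8 -frandom -b8 -BN -LS E:\iscsitest\testfile.dat
*********************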

My question is this: since the back end carries nothing but iSCSI traffic, does it make any sense to enable the switch's iSCSI optimization features at all?

Comments

  • bertieb Member Posts: 1,031
    Yes, that switch and storage unit both support jumbo frames. Work on the basis of best practices: every little helps.

    Configuring a PowerConnect 5424 or 5448 Switch for use with an iSCSI storage system - The Dell TechCenter
  • RobertKaucher Member Posts: 4,299
    bertieb wrote: »
    Yes, that switch and storage unit both support jumbo frames. Work on the basis of best practices: every little helps.

    Configuring a PowerConnect 5424 or 5448 Switch for use with an iSCSI storage system - The Dell TechCenter


    Actually, that guide was exactly what I was looking at just before posting this question. This is the part I mean:

    "And again, since this is a back-end iSCSI network, you really have no need to prioritize iSCSI (there is no other traffic) and you can remove this stuff too."

    *********************
    console(config)# no iscsi enable
    console(config)# no iscsi target port 860
    console(config)# no iscsi target port 3260
    *********************



    I wanted confirmation from people I trust, though. Am I understanding correctly, and do you agree with the statement above and with my own conclusion?
  • bertieb Member Posts: 1,031
    Yep, looking through the configuration manual for that switch, those iSCSI awareness commands set a QoS policy, and since you aren't using the switch for anything else you may as well disable it. I've also noticed that some storage vendors recommend turning this feature off for certain appliances on these switches anyway.

    So after reading the document (and, more importantly, your initial post properly :) ) I agree with the Dell TechCenter guide for your situation, which I think implies turning OFF the iSCSI optimisation features.

    If you are still in the testing phase it would be worth benchmarking with this feature turned on and off to see what (if any) difference it makes; I'm not overly familiar with that model of switch. A quick toggle for an A/B run is sketched below.
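    Something like this would flip it between runs (the enable side just mirrors the disable commands from the guide; verify against the 5424 CLI reference):

    *********************
    console(config)# iscsi enable       <- run a SQLIO pass with optimization on
    console(config)# no iscsi enable    <- run the same pass again with it off
    *********************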

    You definitely want to follow the part about jumbo frames for the iSCSI ports and for the storage device, though.
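    From memory, the switch side of the jumbo frames section boils down to something like this (check the exact syntax against the guide before pasting; on the 54xx series the jumbo frame setting only takes effect after a save and reload):

    *********************
    console(config)# interface range ethernet all
    console(config-if)# flowcontrol on
    console(config-if)# exit
    console(config)# port jumbo-frame
    console(config)# exit
    console# copy running-config startup-config
    console# reload
    *********************

    Remember the MTU has to match end to end: the ESX vmkernel ports and the storage device need jumbo frames enabled as well.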
  • RobertKaucher Member Posts: 4,299
    Rep! Thanks for the help.
  • bertieb Member Posts: 1,031
    RobertKaucher wrote: »
    Rep! Thanks for the help.

    Awesome. I'd best get back to the VCP4 study now; I've got the exam in the morning :)
  • Claymoore Member Posts: 1,637
    I wasn't sure whether to respond to this post or your other post, but I will advise you against using jumbo frames.

    I think iSCSI is a good storage protocol, but it breaks down at the I/O extremes. At the small end, with many tiny I/O requests, iSCSI suffers from the overhead of TCP headers and ethernet frames. At the high end, with large data transfers, iSCSI suffers from the wire-speed limits of ethernet compared to fibre channel. The rest of the time it works great.

    The best way to handle the small I/O requests - where the TCP segment headers and ethernet frame additions can be more data than the iSCSI request itself - is to offload some of the processing to hardware. TCP Offload Engine (TOE) NICs handle the TCP processing, and iSCSI HBAs can handle some of the iSCSI processing without having to rely on host processor interrupts that can slow everything down.
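    If your initiator is Windows, TCP Chimney Offload is the easy thing to check first. This is the Server 2008-era netsh syntax, and whether offload actually engages depends on the NIC and driver:

    *********************
    C:\> netsh int tcp show global
    C:\> netsh int tcp set global chimney=enabled
    *********************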

    The best way to handle the large data transfers is to get a larger pipe. Multiple Connections per Session (MC/S) or MPIO in the host initiator is the easiest way, but EtherChannel or even 10Gb ethernet are possibilities. A round-robin example on ESX is sketched below.
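    On ESX 4, for example, switching a LUN's path selection policy to round robin looks roughly like this (the naa identifier is a placeholder, and this is vSphere 4-era esxcli syntax, so check it against your build):

    *********************
    # esxcli nmp device list
    # esxcli nmp device setpolicy --device naa.<your-lun-id> --psp VMW_PSP_RR
    *********************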

    Enabling jumbo frames is troublesome. Jumbo frames must be enabled and supported through the entire path, otherwise you face fragmentation of the frames or devices dropping them altogether. I hope a member with more networking knowledge will correct me if I am wrong, but when you start dealing with larger frames, the error-correcting algorithms of TCP begin to break down. If TCP error-checking fails, you must rely on the error-checking capabilities of the iSCSI protocol to prevent corruption. I have seen what bad TCP error-handling can do to iSCSI PDUs and it is not pretty. Your best bet is to add more physical links and iSCSI paths to enable more throughput.
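    If you do go jumbo anyway, at least verify that the whole path honours it before trusting it. A don't-fragment ping of 8972 bytes (a 9000-byte MTU minus 28 bytes of IP/ICMP headers) should succeed from every device; the address here is a placeholder:

    *********************
    C:\> ping -f -l 8972 192.168.50.10     (Windows initiator)
    # vmkping -d -s 8972 192.168.50.10     (ESX console)
    *********************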

    Earlier versions of ESX did not support MC/S or MPIO load-balancing, only link fail-over. I got around this a couple of years ago by relying on the MS iSCSI initiator inside the guests, using ESX only to provide multiple physical NICs and virtual networks instead of having ESX handle iSCSI and offer virtual storage to the guests. Your SAN will need to be able to present multiple iSCSI targets on different physical NICs and IP addresses, but multiple links with MC/S gave me great performance for both SQL and Exchange.
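    The guest side of that is scriptable with the MS initiator's command-line tool if you prefer. A rough sketch, with placeholder portal addresses and target IQN (QLoginTarget logs in with default settings; the MC/S and MPIO specifics are honestly easier to set up in the GUI):

    *********************
    C:\> iscsicli AddTargetPortal 192.168.50.10 3260
    C:\> iscsicli AddTargetPortal 192.168.51.10 3260
    C:\> iscsicli ListTargets
    C:\> iscsicli QLoginTarget iqn.2001-05.com.example:storage.target0
    *********************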