FC vs iSCSI

cnfuzzd Member Posts: 208
Hi All

So apparently the Scale Computing storage "SAN" (NFS) we inherited is choking on IOPS. I need to price out a short-term cheap solution and a long-term expensive solution. I am recommending a new tier-one storage solution. We use VMware across three HP DL380 G7 servers. Should I push for FC or iSCSI?

Thanks!

John
__________________________________________

Work In Progress: BSCI, Sharepoint

Comments

  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Are you currently using FC or iSCSI? That usually decides which tech you go with.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • cnfuzzd Member Posts: 208
    We are currently using NFS. :(
    __________________________________________

    Work In Progress: BSCI, Sharepoint
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    cnfuzzd wrote: »
    We are currently using NFS. :(

    My bad. I was mixing terms. Since you're using NFS, odds are you don't have any Fibre Channel in your network, which means iSCSI will be the cheaper solution.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • EV42TMAN Member Posts: 256
    The cheap solution is iSCSI. For the expensive solution, have you thought about using SAS?
    Current Certification Exam: ???
    Future Certifications: CCNP Route Switch, CCNA Datacenter, random vendor training.
  • it_consultant Member Posts: 1,903
    There are a couple of "right" ways of going about this, but none of them are going to be very cheap. If your NFS mount is choking on I/O then I would hesitate to recommend 1Gb iSCSI, which means we are talking 10Gb with jumbo frames, plus a SAN, an Ethernet switch, and iSCSI-offload-capable NICs. This isn't cheap. Alternatively, you could invest in an FC storage solution, which means you will need FC HBAs, at least one FC switch, and an FC-capable storage array. There are some storage systems that support native FCoE; honestly, you won't save much money doing that, but the cabling will be somewhat less expensive. (Rough per-link throughput numbers are sketched below.)

    I like the HP small-business SAN since you can get them with FC and 10Gb iSCSI HBAs for about $10K without disk. A Brocade VDX switch is about $7K, and 10Gb adapters from Intel or Brocade are $700-1,200 depending on whether you get them with optics. We use 5m twinax cables, which work fine.
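    To put numbers on the 1Gb vs. 10Gb question, here is a minimal back-of-envelope sketch in Python; the ~90% wire-efficiency figure and the block sizes are illustrative assumptions, not measurements:

        # Rough ceiling on IOPS for a single iSCSI link at a given block size,
        # before MPIO. EFFICIENCY is an assumed allowance for protocol overhead
        # with jumbo frames; real numbers depend on the array and workload.
        LINKS_GBPS = {"1GbE": 1.0, "10GbE": 10.0}
        EFFICIENCY = 0.90
        BLOCK_SIZES_KB = [4, 8, 32, 64]

        for name, gbps in LINKS_GBPS.items():
            usable_bytes_per_sec = gbps * 1e9 / 8 * EFFICIENCY
            for kb in BLOCK_SIZES_KB:
                iops = usable_bytes_per_sec / (kb * 1024)
                print(f"{name} @ {kb:>2} KB blocks: ~{iops:,.0f} IOPS ceiling")

    At small blocks the wire is not the first bottleneck, but at 32-64 KB I/O a single 1GbE link runs out of headroom quickly, which is why 10Gb (or multiple 1Gb paths with MPIO) comes up.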
  • PurpleIT Member Posts: 327
    EV42TMAN wrote: »
    The cheap solution is iSCSI. For the expensive solution, have you thought about using SAS?

    I have a SAS solution and love it! It is a really great mid-point between a high-end SAN and direct-attached storage: less administration than the SAN, more flexibility than the DAS, and it plays very nicely as a CSV with Hyper-V.

    I still have some older FC in place (used mostly for backups to my aging tape drives that I did not want to replace just yet), but when that goes away I will stay with SAS and be quite happy.
    WGU - BS IT: ND&M | Start Date: 12/1/12, End Date 5/7/2013
    What next, what next...
  • bigdogz Member Posts: 881 ■■■■■■■■□□
    If you like HP just remember to run a multipath solution. I am just speaking from experience. This goes all the way down to having redundancy on power.
    It may be like having a belt and suspenders, but there is nothing like having your pants down to your ankles.

    Good Luck!!!
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    it_consultant wrote: »
    There are a couple of "right" ways of going about this, but none of them are going to be very cheap. If your NFS mount is choking on I/O then I would hesitate to recommend 1Gb iSCSI

    It could be that the NFS box doesn't have enough spindles to support the load. It could be backed by a bunch of 7200 RPM SATA drives running RAID 5/6. We need more info from the OP to determine 1Gb vs. 10Gb iSCSI.
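    A crude sketch of that spindle math in Python; the per-disk IOPS figure and the read/write mix are assumptions for illustration only:

        # Effective random IOPS of a RAID group once the RAID write penalty is
        # applied (roughly 4 backend IOs per frontend write for RAID 5, 6 for RAID 6).
        def effective_iops(disks, iops_per_disk, write_penalty, write_fraction):
            raw = disks * iops_per_disk
            return raw / (write_fraction * write_penalty + (1 - write_fraction))

        # e.g. 8 x 7200 RPM SATA drives (~75 IOPS each) in RAID 6 with a 50% write mix:
        print(round(effective_iops(8, 75, write_penalty=6, write_fraction=0.5)))  # ~171

    A handful of 7200 RPM spindles behind parity RAID disappears fast under a write-heavy mix, regardless of whether the front end is NFS, iSCSI or FC.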
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • it_consultant Member Posts: 1,903
    I agree, to a point. With the number of servers the OP is using, all he really needs on the switching side is a BR 6610 with 2 packs of 10Gb licenses, 6-8 twinax cables (2 ports per server plus 2 ports of uplink to other switches), and three HBAs. That is well under $10K on the switching and HBA side. If the OP were to grow, they could stack in more 6610s with 40Gb stacking cables, or if they experience rapid growth they could stack in the 6650. I only say Brocade because it is the line I know; you can easily do this with HP, Force10, or Cisco. I don't necessarily recommend a TRILL fabric because it is expensive, and it sounds like the OP isn't 'there' yet with that need.

    It isn't that 1Gb is dead, IMHO; it is just that if you are going to make the investment, I would future-proof a little bit.
  • phoeneous Member Posts: 2,333 ■■■■■■■□□□
    PurpleIT wrote: »
    I have a SAS solution and love it! It is a really great mid-point between a high-end SAN and direct-attached storage: less administration than the SAN, more flexibility than the DAS, and it plays very nicely as a CSV with Hyper-V.

    I still have some older FC in place (used mostly for backups to my aging tape drives that I did not want to replace just yet), but when that goes away I will stay with SAS and be quite happy.

    What are you using?
  • powerfool Member Posts: 1,666 ■■■■■■■■□□
    iSCSI is easy enough for me to recommend: it works great; you just need to select the right vendor and implement a good design. Lots of folks will rant and rave over FC... I don't see the point. I have had great experiences with iSCSI performance using the Cisco Catalyst 3750 series (the older gigabit 3750G and newer 3750X) with jumbo frames, backed by EqualLogic and NetApp. You will not get by with single gigabit interfaces, however. Since you are talking about VMware: when I designed hosts for a solution five years ago, we had a total of 11 interfaces on each host - 4x gigabit for storage, 4x gigabit for the general network, 2x gigabit for management/vMotion, and 1x 100Mb remote management for the hardware (DRAC/iLO). You will need to implement MPIO.

    If you can, look into getting 2x 10Gb interfaces and running everything over them, but you will need the switching infrastructure to handle it. Backplane throughput really matters with this sort of requirement.
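    As a quick tally of what that consolidation buys (the port counts mirror the layout above; everything else is illustrative):

        # Aggregate bandwidth of the legacy multi-NIC layout vs. 2 x 10GbE.
        legacy_ports = {"storage": 4, "general network": 4, "mgmt/vMotion": 2}  # 1 Gb each
        legacy_total_gb = sum(legacy_ports.values())   # ~10 Gb/s across 10 ports
        converged_gb = 2 * 10                          # 20 Gb/s across 2 ports

        print(f"legacy   : {legacy_total_gb} Gb/s over {sum(legacy_ports.values())} ports (plus out-of-band iLO/DRAC)")
        print(f"2x 10GbE : {converged_gb} Gb/s over 2 ports")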

    As far as vendors go, there are lots of iSCSI-centric vendors that have optimized their solutions for this capability. Honestly, I wouldn't even look at EMC for iSCSI... they are primarily an FC vendor and treat iSCSI as an afterthought. What was appealing to me about EqualLogic is that they are essentially clustered systems: as you add each unit, you add additional redundant controllers with additional interfaces, improving performance. With a lot of the bigger vendors, NetApp included, you just keep adding disks, but the controller remains the limit. LeftHand is another iSCSI vendor that is great and has a capability called Network RAID, or nRAID. Essentially, the units are clustered in a RAID-like solution.

    Best wishes.
    2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
    2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro
  • cnfuzzd Member Posts: 208
    Quick update:

    After reviewing our actual server load and performance stats, we went back to the MSP and the vendor to demand more of an explanation. We were running a 3:1 write/read ratio, with no huge transactional app to justify it. It has come to light that there is a distinct possibility our file system block size has been set "incorrectly", causing unnecessary writes. Waiting to hear back for a conclusion.
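    For illustration only (these sizes are hypothetical, not ours): if the filesystem block size is much larger than the typical application write, every small write can turn into a read-modify-write of the full block, which inflates the write side of the ratio.

        # Hypothetical write amplification from a mismatched block size.
        app_write_kb = 4     # typical application write
        fs_block_kb = 64     # (mis-set) filesystem/array block size

        amplification = fs_block_kb / app_write_kb
        print(f"~{amplification:.0f}x the bytes written per application write")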

    John
    __________________________________________

    Work In Progress: BSCI, Sharepoint
  • it_consultant Member Posts: 1,903
    The reason people rant and rave about FC is that the performance is better and SAN fabrics self-heal faster than any Ethernet network. By its very nature Ethernet is 'lossy', which is OK because the upper layers just ask for the missing data again and keep on trucking. In the FC world the very idea of a missing packet is heart-attack inducing. The only way to get FC-style performance and reliability without using FC is to deploy an Ethernet fabric.
  • PurpleIT Member Posts: 327
    phoeneous wrote: »
    What are you using?

    I have a Dell MD3200 with an additional MD1200. IIRC, it can take up to 4 MD1200s for something close to 200 spindles, and up to 8 connected systems (or 4 with redundancy). I have a relatively small system - 24 drives, 33 TB of raw space, two physical servers running about a dozen VMs - and this thing doesn't even begin to break a sweat.
    WGU - BS IT: ND&M | Start Date: 12/1/12, End Date 5/7/2013
    What next, what next...
  • jmritenour Member Posts: 565
    Personally, I think FC is a scam. It requires a whole separate fabric, and good HBAs are expensive. I'd rather invest the money in 10Gb infrastructure. The one thing I will give it is that multipathing is generally easier to set up with FC, and FC as a whole doesn't require as much configuration, but that doesn't offset the cost involved and the pain of having to maintain a separate infrastructure.
    "Start by doing what is necessary, then do what is possible; suddenly, you are doing the impossible." - St. Francis of Assisi
  • powerfool Member Posts: 1,666 ■■■■■■■■□□
    I agree; FC isn't fantastic enough to make it worth the expense (direct and labor) in any but a few situations. It is simply a distraction for most organizations, which could be investing their resources elsewhere. People get nearly religious about it, but there is no way I will be convinced that iSCSI isn't capable... it proves itself in real and large enterprise environments every single day. 10G Ethernet will only make this situation better and will likely lead to a second wave of virtualization, allowing for greater consolidation and simplification.

    In regards to MPIO... TCP multipathing is a new development that will be a great benefit to iSCSI and other IP-based storage infrastructures.
    2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
    2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro
  • it_consultant Member Posts: 1,903
    powerfool wrote: »
    I agree; FC isn't fantastic enough to make it worth the expense (direct and labor) in any but a few situations. It is simply a distraction for most organizations, which could be investing their resources elsewhere. People get nearly religious about it, but there is no way I will be convinced that iSCSI isn't capable... it proves itself in real and large enterprise environments every single day. 10G Ethernet will only make this situation better and will likely lead to a second wave of virtualization, allowing for greater consolidation and simplification.

    In regards to MPIO... TCP multipathing is a new development that will be a great benefit to iSCSI and other IP-based storage infrastructures.

    iSCSI isn't capable in the same way that multipathed FC is. iSCSI with jumbo frames over a lossless Ethernet fabric might be as capable. The main difference between FC and iSCSI is the way an FC fabric handles load balancing and inter-switch links. It is so good at handling huge traffic that the Cisco Nexus and Brocade VDX (which are Ethernet switches) are forks of their respective FC operating systems, not the other way around. There is a reason everyone is talking about FCoE like it is the second coming: it allows for all the benefits of FC while cabling for only one type of network. Converged cards are not much more expensive than traditional 10G Ethernet cards - with DCB and native FC on Ethernet fabric switches, you can run FC-only storage arrays on your FCoE network. You will still need some FC cabling... approximately four links, depending on your storage array.

    As an aside, storage guys are religious, to our benefit. In the FC world it is completely unacceptable to drop a packet under any circumstance. In the FC world having only one link to another switch is a crime - the more the merrier. They have never heard of STP, and while we are all cheering TRILL they don't understand what the big deal is; they have had the equivalent forever. They are shocked that we (on the Ethernet side) can only have one active link to an adjacent switch. Now we are getting Ethernet switches which can operate under the same scrutiny.
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    jmritenour wrote: »
    Personally, I think FC is a scam. It requires a whole separate fabric, and good HBAs are expensive. I'd rather invest the money in 10Gb infrastructure. The one thing I will give it is that multipathing is generally easier to set up with FC, and FC as a whole doesn't require as much configuration, but that doesn't offset the cost involved and the pain of having to maintain a separate infrastructure.

    FC is expensive, but it's harsh to call it a scam. If the environment requires the lowest possible latency then FC is the only choice.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • powerfool Member Posts: 1,666 ■■■■■■■■□□
    it_consultant wrote: »
    iSCSI isn't capable in the same way that multipathed FC is. iSCSI with jumbo frames over a lossless Ethernet fabric might be as capable. The main difference between FC and iSCSI is the way an FC fabric handles load balancing and inter-switch links. It is so good at handling huge traffic that the Cisco Nexus and Brocade VDX (which are Ethernet switches) are forks of their respective FC operating systems, not the other way around. There is a reason everyone is talking about FCoE like it is the second coming: it allows for all the benefits of FC while cabling for only one type of network. Converged cards are not much more expensive than traditional 10G Ethernet cards - with DCB and native FC on Ethernet fabric switches, you can run FC-only storage arrays on your FCoE network. You will still need some FC cabling... approximately four links, depending on your storage array.

    As an aside, storage guys are religious, to our benefit. In the FC world it is completely unacceptable to drop a packet under any circumstance. In the FC world having only one link to another switch is a crime - the more the merrier. They have never heard of STP, and while we are all cheering TRILL they don't understand what the big deal is; they have had the equivalent forever. They are shocked that we (on the Ethernet side) can only have one active link to an adjacent switch. Now we are getting Ethernet switches which can operate under the same scrutiny.

    Sorry, I am not convinced. iSCSI works, every day. iSCSI is a compromise, but it is definitely good enough for all but the most demanding situations. I understand the differences between them, and it isn't worth the cost for most folks. My comment on the religious nature is that it isn't a good thing. It is blind faith in a technology that certainly is superior, but it costs too much for the additional benefits... and still has disadvantages outside of cost. Folks who have a properly designed iSCSI fabric generally aren't missing anything, and recommending FC in those situations is not putting the best solution forward, because cost should be a factor in all situations. Losing a packet isn't a big deal for iSCSI, because it recovers gracefully... just like other TCP/IP-based communications.
    2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
    2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro
  • dales Member Posts: 225
    Just as a side note, I presume you mean Scale Computing, who do node-based storage (HC3 - Hyperconverged, No Sweat Virtualization System Built for SMB | Scale Computing). If the storage is choking on I/O, how many more IOPS do you need? You can just add nodes to the cluster and expand capacity and IOPS with each addition, which might be cheaper than chucking the whole lot in the bin and starting again. It's worth trying to figure out why it's choking, too - is it to do with legacy AV scans or other things that could be easily sorted before putting the chequebook in front of someone?

    There is no single right protocol for storage, really; NFS, iSCSI and FC (plus the others) all cope admirably when designed correctly.
    Kind Regards
    Dale Scriven

    Twitter:dscriven
    Blog: vhorizon.co.uk
  • jmritenour Member Posts: 565
    dave330i wrote: »
    FC is expensive, but it's harsh to call it a scam. If the environment requires the lowest possible latency then FC is the only choice.

    Maybe "scam" is a little overboard, but I've worked with a lot of different environments, and a lot of different storage platforms, and I'd say that kind of latency is needed only in the most demanding situations and applications. For 99%, FC is like swatting a fly with a sledgehammer.
    "Start by doing what is necessary, then do what is possible; suddenly, you are doing the impossible." - St. Francis of Assisi
  • it_consultant Member Posts: 1,903
    powerfool wrote: »
    Sorry, I am not convinced. iSCSI works, every day. iSCSI is a compromise, but it is definitely good enough for all but the most demanding situations. I understand the differences between them, and it isn't worth the cost for most folks. My comment on the religious nature is that it isn't a good thing. It is blind faith in a technology that certainly is superior, but it costs too much for the additional benefits... and still has disadvantages outside of cost. Folks who have a properly designed iSCSI fabric generally aren't missing anything, and recommending FC in those situations is not putting the best solution forward, because cost should be a factor in all situations. Losing a packet isn't a big deal for iSCSI, because it recovers gracefully... just like other TCP/IP-based communications.

    In the OP's situation I recommend iSCSI because the need is not there to justify FC, FCoE, or iSCSI over an Ethernet fabric. To say that FC is a scam because it is expensive is not really fair, because FC does some key things much better than iSCSI. It does those things so much better that, in effect, we are porting that technology into Ethernet. People who are on FC are going to FCoE, which lowers the price significantly (a converged 10Gb Ethernet / 16Gb FC card is $1,200, whereas a regular FC HBA can run you $2K), simplifies your cabling needs, and, depending on your switches, lets you run your legacy FC storage array and use FCoE at the same time.

    I have a friend who makes good money 'fixing' iSCSI implementations that suffer from performance problems. For $300 an hour he tells them to buy a high-quality Intel NIC with iSCSI HBA drivers, turn on jumbo frames, etc. In other words, to fix things that are never an issue in FC. FCoE is going to look extremely attractive to people as Cat 7 cabling and 10Gb copper NICs with CNA capabilities become more commonly available. I don't think we are going to see a wide-scale move from FC to iSCSI.
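    As an aside, a quick way to sanity-check one of those 'fixes' - whether jumbo frames actually pass end to end - is a do-not-fragment ping sized for a 9000-byte MTU. A minimal sketch, wrapping the standard Linux ping flags in Python (the portal address is a placeholder):

        # Send a non-fragmentable ping sized for MTU 9000 (9000 - 20 IP - 8 ICMP = 8972 bytes).
        # If any hop or endpoint is not configured for jumbo frames, this fails.
        import subprocess

        TARGET = "192.0.2.10"  # placeholder iSCSI portal address

        result = subprocess.run(
            ["ping", "-M", "do", "-s", "8972", "-c", "3", TARGET],
            capture_output=True, text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print("Jumbo frames are NOT passing end to end on this path.")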