
FCoE, iSCSI, NFS for VMware

DevilWAH Member Posts: 2,997
Hi,

We have an old (end-of-support old) Brocade FC SAN which we are thinking of replacing. So our options are to replace it with a new FC setup, or to migrate away from pure FC using the existing Ethernet.

So I looked at our current usage on our ESXi hosts. We run about 7 hosts with about 90 guests on them, and looking at the basic read/write stats we never really see any host using more than about 30-40MB/s of storage bandwidth. This is not a surprise, as many of our servers are bespoke systems with few users actually using them at any one time. Looking at our Brocade 4Gb FC switches, even "trunk" link utilisation is never more than about 3-5%.

So this made us think: could we get away with using the Ethernet and moving to a different SAN technology? The current storage (NetApp) already has FC and 10Gb adapters, with the 10Gb connected directly into the core switches. The ESXi hosts don't have 10Gb at the moment, but if we upgrade them this would also go straight into the core devices.

So everything would have 10Gb connections into a pair of Cisco 6506Es.

As we are not heavy users of storage, I am guessing 10Gb Ethernet would be fine for running iSCSI or NFS and would equal or exceed the 4Gb FC we have now (a 4Gb FC link tops out at roughly 400MB/s usable, while 10GbE gives around 1.2GB/s, so our observed per-host throughput barely scratches either). I should note that the way the core is set up would mean about 40Gb of dedicated bandwidth for storage, since all the 10Gb links for NetApp and ESXi storage terminate on a single distributed line card in the 6506.

But moving to iSCSI or NFS means we give up all the FC we currently have, and moving back to it in the future would be an extra cost.

Am I right in thinking that we could run FCoE without any new equipment? I.e., by putting FCoE adapters into the ESXi hosts and the NetApp, we could run FCoE directly through the core 6506s at layer 2 (a single non-routed VLAN), with no need for additional switching hardware? While FCoE switches improve performance and reliability with priority flow control, fabric services and lossless transport, and allow conversion between native FC and FCoE, am I correct in thinking that they are not a requirement? Or are they needed for the equivalent of zoning on FC switches?

So we could add FCoE adapters to all our servers, run through the core switches for now, and then have the option to upgrade to an FCoE-switched network at a later date if we needed to.

So if you were starting out building a SAN that did not need to be high performance, had about 500 users to support on basic applications, and the budget was quite tight (we can't justify purchasing six FC switches at the moment, which is what we would need for redundancy across the three server rooms), what would you go for:

iSCSI, NFS, or FCoE?

Cheers
  • If you can't explain it simply, you don't understand it well enough. Albert Einstein
  • An arrow can only be shot by pulling it backward. So when life is dragging you back with difficulties, it means it's going to launch you into something great. So just focus and keep aiming.

Comments

  • astorrs Member Posts: 3,139
    To use FCoE you would want switches that can handle zoning (e.g., Nexus 5K). So take that off the table.

    No point in carrying your FC investment forward as you have very little as it is. I would suggest putting 10Gb NICs in the servers and using NFS with VMware. Your storage is NetApp and that's supported (even recommended) by them - heck, they pioneered the idea. Are you running a recent version of ONTAP? How about your vSphere version? What model servers are these?
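
    If you do go NFS, mounting a NetApp export as a datastore is a one-liner per host from the CLI. The filer address and export path below are just placeholders, substitute your own:

        # ESXi 5.x - mount an NFS export as a datastore (address and path are examples only)
        esxcli storage nfs add --host=192.168.50.10 --share=/vol/vm_ds1 --volume-name=nfs_ds1
        # confirm the datastore is mounted
        esxcli storage nfs list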
  • DevilWAH Member Posts: 2,997
    Latest ONTAP and vSphere 5.1. Not going to bother with 5.5 but will be going to 6 once it's out.

    Been testing on the NetApp over 2 x 1Gb for iSCSI and NFS. iSCSI actually gave better performance, and Exchange is not supported on NFS storage presented through vSphere.

    While I know NetApp does deal better with NFS, the filers exceed what we need for performance with both NFS and iSCSI, so rather than running both I think we will just use iSCSI. Also, NFS is not offloaded to the NICs, whereas iSCSI can be, which reduces CPU utilisation.

    The 10Gb cards I am ordering for the servers support iSCSI and FCoE offloading, so should we want to integrate the FC estate in some way we can mix and match a bit.
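
    For reference, enabling the software iSCSI initiator and pointing it at the filer is only a few commands on 5.1. The adapter name and target address below are placeholders (with the offload cards it would be a dependent hardware adapter rather than the software vmhba):

        # ESXi 5.x - enable the software iSCSI initiator
        esxcli iscsi software set --enabled=true
        # find the adapter name, add the filer as a send-target, then rescan
        esxcli iscsi adapter list
        esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.60.10
        esxcli storage core adapter rescan --adapter=vmhba33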
    • If you can't explain it simply, you don't understand it well enough. Albert Einstein
    • An arrow can only be shot by pulling it backward. So when life is dragging you back with difficulties, it means it's going to launch you into something great. So just focus and keep aiming.
  • down77 Member Posts: 1,009
    I won't get into the "which protocol is better" argument, as that is largely based on personal preference more than anything else. If properly configured, ALL of them perform relatively well.

    A few questions to think about:
    - What are the business requirements for the applications (data classification, performance, SLAs, etc.)?
    - What are the technical requirements for the overall environment, not just vSphere?
    - How much FC connectivity is required today (how many ports end to end, across all fabrics)?
    - What is the cost of migrating to another solution (NFS, iSCSI, FCoE, etc.) vs upgrading the fabric?
    - What is your 3-5 year strategic plan for data center connectivity?

    Since you mentioned you are already looking at replacing the Brocade due to age, I would not take astorrs' advice to take FCoE off the table (no disrespect, astorrs). As a consultant I have done a large number of legacy Brocade and Cisco migrations to either Brocade VDX or Cisco Nexus based solutions to support FC/FCoE. In this case I would consider a Nexus-based solution as a *potential* replacement, since it can handle your 10Gb aggregation, FC, FCoE, iSCSI, NFS, etc., and it can integrate into your current network environment. A Nexus 5596 can actually be price-competitive against either a 6708 or 6716 line card in your 6506E, depending on the pricing from your Cisco partner. There are a few caveats, such as whether you need FICON (IBM mainframe), integrated FCIP, Inter-VSAN Routing (not a concern since you are Brocade today), or legacy device compatibility.
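
    To give a feel for what that looks like, FCoE on a Nexus 5K is essentially an FCoE VLAN mapped to a VSAN plus a virtual Fibre Channel interface bound to the 10Gb port, and zoning is done exactly as on a native FC fabric. All the VLAN/VSAN numbers and WWPNs below are purely illustrative:

        feature fcoe
        ! map an FCoE VLAN to a VSAN
        vlan 1002
          fcoe vsan 10
        vsan database
          vsan 10
        ! trunk the FCoE VLAN (plus your data VLANs) on the 10Gb port facing the host CNA
        interface Ethernet1/10
          switchport mode trunk
          switchport trunk allowed vlan 1002
        ! virtual FC interface bound to that port
        interface vfc110
          bind interface Ethernet1/10
          no shutdown
        vsan database
          vsan 10 interface vfc110
        ! zoning works the same way it does on the existing Brocade fabric
        zone name esx01_netapp vsan 10
          member pwwn 20:00:00:25:b5:01:aa:01
          member pwwn 50:0a:09:83:9d:53:43:54
        zoneset name fabric_a vsan 10
          member esx01_netapp
        zoneset activate name fabric_a vsan 10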

    Again, this is just some food for thought.
    CCIE Sec: Starting Nov 11