SAN vs. NAS

JohnnyBiggles Member Posts: 273
Can anyone explain, in layman's terms, the difference between a SAN and a NAS (if any)?

Comments

  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    NAS
    You can connect to it using simple things like USB and FireWire, and mostly it presents storage via Windows shares (CIFS) or Linux shares (NFS).
    The filesystem is managed by the NAS operating system (often proprietary), and the data held is usually file based.

    You sometimes don't even know what filesystem the NAS uses; all you usually get is a web interface where you can create shares, FTP sites, etc.

    SAN
    Enterprise class - usually accessible via specific network protocols only, such as iSCSI or Fibre Channel.
    The filesystem is managed by the server connecting to it, and the data on the SAN is block based (the server determines the filesystem: vSphere uses VMFS, Windows uses NTFS, and so on). Whatever server connects to a SAN sees a "block device". You would not be able to access the data from the SAN itself.
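
    A rough sketch of the difference in Python (the paths are placeholders; reading a raw device needs root and an attached LUN):

        import os

        # SAN / block level: the host sees a raw block device and brings its
        # own filesystem; you can seek to any byte offset and read raw sectors.
        fd = os.open("/dev/sdb", os.O_RDONLY)   # hypothetical SAN LUN
        os.lseek(fd, 4096, os.SEEK_SET)         # jump straight to byte 4096
        sector = os.read(fd, 512)               # one raw 512-byte sector
        os.close(fd)

        # NAS / file level: the appliance owns the filesystem; a client only
        # ever names files on a mounted share (e.g. an NFS or CIFS mount).
        with open("/mnt/nas_share/report.txt", "rb") as f:
            data = f.read()
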
    My own knowledge base made public: http://open902.com :p
  • pram Member Posts: 171
    They're not similar things. A NAS is typically a filer DEVICE, like a NetApp or an EMC Celerra. They can be attached to a network or fabric through numerous methods (optical, InfiniBand, Ethernet) and serve data with network file protocols (like NFS/CIFS) or block-based protocols as stated (Fibre Channel, iSCSI).

    A SAN is, like the acronym implies, a network of storage devices. These are typically Fibre Channel and are connected with fiber-optic switches (from Brocade or whatever).

    So you'd add your NAS (which is a device) to your SAN (which is an abstract networking concept).
  • pram Member Posts: 171
    I suppose it should also be stated that there are NAS filers out there that can't function as block storage devices, and thus can't be part of a SAN.

    I've also seen a distinction drawn between 'NAS' and 'FAS' (fabric-attached storage), but for the most part that's fairly pedantic IMO. Basically, almost every NAS sold at the enterprise level has the capability to serve data as a block device and simply needs the addition of an FC HBA to do so. The filers that can't are usually consumer level or just really old.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    pram wrote: »
    A SAN is, like the acronym implies, a network of storage devices.

    I gave up that fight. Every time I correct people (that a SAN isn't technically the device itself) it turns into a massive discussion - so I am just going with the flow :p
    My own knowledge base made public: http://open902.com :p
  • it_consultant Member Posts: 1,903
    If you can mount the storage via iSCSI, FC, or FCoE, you have a "SAN". You will know that you have one mounted because, instead of showing up as a network share, it appears to Windows (if you're an MS shop) as just another drive. I have a couple of servers that "boot from SAN", which means the server actually boots to the BIOS on our Fibre Channel card and drive C: is actually a LUN (logical unit number) on our Hitachi SAN.

    We also have a "NAS head" which is, in essence, a bolt-on controller for the Hitachi AMS 2300 that provides SMB and NFS shares.

    The line is getting a little blurred in some areas. Linux can mount an NFS share over Ethernet, which will show up as just another mounted filesystem, much as an iSCSI or FC drive's would. This is why a lot of VMware datastores are NFS mounts: they are cheap and easy, and you can get cheap NAS devices that support NFS.
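
    A quick way to see this on a live box is a minimal Python sketch (it assumes the third-party psutil package; the set of network filesystem types is illustrative, not exhaustive):

        import psutil

        # Filesystem types that indicate a NAS-style (file-level) mount.
        NETWORK_FS = {"nfs", "nfs4", "cifs", "smbfs"}

        # all=True also includes network and pseudo filesystems (proc, sysfs, ...).
        for part in psutil.disk_partitions(all=True):
            if part.fstype in NETWORK_FS:
                kind = "NAS share (file level)"
            else:
                kind = "local or SAN-backed volume (block level)"
            print(f"{part.device:30} {part.fstype:10} {kind}")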
  • EV42TMAN Member Posts: 256
    Look at it this way: you can go to Best Buy or Newegg and buy a Seagate/Western Digital/Synology/etc. NAS device starting at $200.
    You plug it into your switch and can access it by IP address to create file shares.

    To build a SAN you need to build, or go to your favorite OEM and purchase, a storage server plus the networking equipment needed to upgrade your network. So you're roughly looking at $5,000 to $6,000 to set up a redundant low-end SAN infrastructure.
    Current Certification Exam: ???
    Future Certifications: CCNP Route Switch, CCNA Datacenter, random vendor training.
  • JohnnyBiggles Member Posts: 273
    Thanks, folks. This helped clear things up a bit. Any further input is also appreciated.
  • JohnnyBiggles Member Posts: 273
    jibbajabba wrote: »
    I gave up that fight. Every time I correct people (that a SAN isn't technically the device itself) it turns into a massive discussion - so I am just going with the flow :p
    I suppose this played a part in my confusion, seeing that people around me refer to a large networked storage unit as a SAN (as a device or unit - an EqualLogic-type unit, specifically).
  • pram Member Posts: 171
    LAN and SAN are similar concepts. Fibre Channel is similar to IEEE 802 networking: it has its own OSI-esque layers, protocols, addressing, frames, etc. The typical FC parlance for storage devices on the SAN is a 'RAID' or a 'JBOD' (just a bunch of disks).

    For something to be a SAN it requires stuff to be in a fabric, so calling a Hitachi disk array a SAN doesn't really make much sense. It means Storage Area Network; there should be no ambiguity.
  • it_consultant Member Posts: 1,903
    pram wrote: »
    LAN and SAN are similar concepts. Fibre Channel is similar to IEEE 802 networking: it has its own OSI-esque layers, protocols, addressing, frames, etc. The typical FC parlance for storage devices on the SAN is a 'RAID' or a 'JBOD' (just a bunch of disks).

    For something to be a SAN it requires stuff to be in a fabric, so calling a Hitachi disk array a SAN doesn't really make much sense. It means Storage Area Network; there should be no ambiguity.

    There are some factual errors here. Fibre Channel is similar to Ethernet in that you plug ports in and generally things work OK, but it is completely different in the protocol realm - from the MTU size to zoning, there isn't much that is similar between SAN switches and Ethernet switches. Ethernet switches don't (generally; some specialized switches do) have ISL ports and don't use tokening to determine ISL priority. Only a few Ethernet switches support TRILL, which is important in SAN networking. Our SAN engineer was speechless when he found out that on the Ethernet side there was such a thing as a "switching loop". Spanning tree makes no sense to him: why would you block an ISL port? Imagine setting up an Ethernet switch where you have to tell the switch what MAC address to expect on each port. The MAC is the closest thing to a World Wide Name that Ethernet has.

    You don't need a fiber fabric to utilize a SAN. At the most basic level you can use iSCSI, a SAN protocol that rides right on your Ethernet backbone - whether or not that Ethernet is in a "fabric" configuration. You can also run FCoE (Fibre Channel over Ethernet), which gives you the big FC MTU and fabric-like behavior on an Ethernet network. Grab the manual for the Brocade VDX 6730 converged network switch for more on FCoE and the Ethernet "fabric".
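
    Because iSCSI is just SCSI carried over TCP/IP, an iSCSI target portal is reachable like any other network service. A minimal Python sketch (the hostname is a placeholder; 3260 is the standard iSCSI portal port):

        import socket

        def portal_reachable(host: str, port: int = 3260, timeout: float = 2.0) -> bool:
            """Return True if something answers on the iSCSI portal port."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        print(portal_reachable("san.example.com"))  # hypothetical portal address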

    The Hitachi AMS 2300 and 2100 are most certainly storage area networks. Bolting a "NAS" device onto the controllers does not turn it into a NAS only; it simply offers the traditional sharing protocols people want. We run a Fibre Channel fabric to our AMS SAN(s) - of which we have 4. The SANs are connected over the FC network with trunked 8Gb LR ISLs.

    Generally it is this: if you can mount a drive directly into your OS as if it were on internal cabling - whether over iSCSI or Fibre Channel - the device you are connecting to is a SAN. There is one caveat, which is NFS: it is generally available on NAS devices (since they run Linux) and will let you mount your network share like an internal drive.
  • Akaricloud Member Posts: 938
    The main difference that you see from an end-user standpoint is block-level vs. file-level storage.

    With a SAN, the devices within have no idea what data is stored on them, and you need to attach them to a server in order to create file-level storage. Essentially all you have is a storage network that more or less presents your raw storage to devices that can make use of it.

    A NAS, on the other hand, handles the storage end to end. It obviously still uses blocks internally, but instead of presenting them directly, it presents and manages your data as logical shares. The NAS is aware of the file structure and the files stored on it.


    When trying to connect a client computer to a network share, the difference is quite obvious. With a NAS you can simply connect directly to a share on it. In a SAN environment you would have (or want to have) a file server connected to your SAN, and then connect your client computer to a share on that server. This makes a NAS sound like the better deal, but once you start looking into what they're actually being used for, you start to see why SANs often make sense.
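
    To make that concrete, a sketch from a Windows client's point of view (both UNC paths are hypothetical):

        # NAS: the client talks straight to the appliance's SMB service.
        with open(r"\\nas01\projects\readme.txt") as f:
            print(f.read())

        # SAN: the client never touches the SAN itself. A file server owns
        # the LUN, formats it (e.g. NTFS), and re-exports it over SMB; the
        # client just sees another share on that server.
        with open(r"\\fileserver01\projects\readme.txt") as f:
            print(f.read())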

    Obviously this is a very simplistic look, but I thought it might help your understanding.
  • pram Member Posts: 171
    it_consultant wrote: »
    There are some factual errors here. Fibre Channel is similar to Ethernet in that you plug ports in and generally things work OK, but it is completely different in the protocol realm - from the MTU size to zoning, there isn't much that is similar between SAN switches and Ethernet switches. Ethernet switches don't (generally; some specialized switches do) have ISL ports and don't use tokening to determine ISL priority. Only a few Ethernet switches support TRILL, which is important in SAN networking. Our SAN engineer was speechless when he found out that on the Ethernet side there was such a thing as a "switching loop". Spanning tree makes no sense to him: why would you block an ISL port? Imagine setting up an Ethernet switch where you have to tell the switch what MAC address to expect on each port. The MAC is the closest thing to a World Wide Name that Ethernet has.
    Uhh, it wasn't a direct comparison to Ethernet.
    it_consultant wrote: »
    You don't need a fiber fabric to utilize a SAN. At the most basic level you can use iSCSI, a SAN protocol that rides right on your Ethernet backbone - whether or not that Ethernet is in a "fabric" configuration. You can also run FCoE (Fibre Channel over Ethernet), which gives you the big FC MTU and fabric-like behavior on an Ethernet network. Grab the manual for the Brocade VDX 6730 converged network switch for more on FCoE and the Ethernet "fabric".
    Yes, there are of course other ways to implement a SAN, but FC is the most ubiquitous. FCoE is still FC, and it's still a fabric. I don't see much point in trying to differentiate the two simply because it can be implemented on vanilla networking gear.
    it_consultant wrote: »
    The Hitachi AMS 2300 and 2100 are most certainly storage area networks. Bolting a "NAS" device onto the controllers does not turn it into a NAS only; it simply offers the traditional sharing protocols people want. We run a Fibre Channel fabric to our AMS SAN(s) - of which we have 4. The SANs are connected over the FC network with trunked 8Gb LR ISLs.
    That doesn't make any sense; the fabric is the SAN. The 4 AMS units and the switches comprise the SAN. You don't have 4 SANs, you have 1.
  • it_consultant Member Posts: 1,903
    You can also tell the difference by price: a really kick-@$$ NAS will run you about $1K. One of our Hitachi controllers cost... about $20,000. That is without the fiber, fiber switches, patch panels, optics, and drive shelves.
  • it_consultant Member Posts: 1,903
    pram wrote: »
    Uhh, it wasn't a direct comparison to Ethernet.

    Yes, there are of course other ways to implement a SAN, but FC is the most ubiquitous. FCoE is still FC, and it's still a fabric. I don't see much point in trying to differentiate the two simply because it can be implemented on vanilla networking gear.

    That doesn't make any sense; the fabric is the SAN. The 4 AMS units and the switches comprise the SAN. You don't have 4 SANs, you have 1.

    You have me in one place: I should have said we have 8 SAN controllers (redundant controllers) and 2 SANs - we have 2 FC fabrics. FC is ubiquitous, but it is not the difference between a SAN and a NAS. HP will sell you a small-business "SAN" solely with 1/10Gb iSCSI HBAs.
  • pram Member Posts: 171
    I can also implement a network over X.25 or AppleTalk; that doesn't mean it's helpful to bring up in a high-level description of a thing. You can implement a SAN with iSCSI, sure, it's true. An HP box with disks, a RAID controller, and an iSCSI HBA is not a SAN, though it could be part of one.
  • Claymoore Member Posts: 1,637
    Pram, please see post 5
    pram wrote: »
    Yes, there are of course other ways to implement a SAN, but FC is the most ubiquitous. FCoE is still FC, and it's still a fabric. I don't see much point in trying to differentiate the two simply because it can be implemented on vanilla networking gear.

    Besides, everyone knows InfiniBand is the wave of the future!
  • tenrou Member Posts: 108
    On a basic level of basicness: a NAS is a device that works with protocols at the file level, and a SAN is a device that works with protocols at the block level.

    The line is getting very blurry though. As has been mentioned, a NetApp and an EMC Celerra are considered 'filers', but NetApp supports both iSCSI and FC, which are block protocols. The Celerra also supports iSCSI. Then you have the VNX, which is basically a Celerra and a CLARiiON squashed together to support all protocols.

    I wouldn't worry about NAS vs. SAN as long as the device supports the protocols you are looking for.