NetApp vs EMC

royal
So, I've been wanting to get more into storage lately. I by no means want to become an all-out storage guy, but I love Exchange and would love to know how to design storage solutions: creating an aggregate, designing snapshots, SnapManager, SnapDrive, failover to a DR site, SAN replication, recovering logs through snapshots to replay when a log is corrupted, and so on.

Most of what I know about NetApp comes from a one-week internal training class.

I do have access to NetApp University, but I was curious what you guys think of NetApp vs. EMC?
“For success, attitude is equally as important as ability.” - Harry F. Banks

Comments

  • Claymoore
    I have never used NetApp, but I can talk about my experiences with EMC and an old HP XP512. Maybe someone else (HeroPsycho?) can chime in with their NetApp experiences and we can get a good comparison.

    Background:
    Old SAN: HP XP 512 at both our primary and DR datacenters. For those of you not familiar with HP arrays, this is a rebranded Hitachi.
    New SAN: EMC NS82, an integrated Clariion/Celerra with 10GigE; an NS42 at the DR site.

    We are switching our whole infrastructure from 2 Gb Fibre Channel to iSCSI for cost reasons. We would have had to buy new switches, as well as blades for existing switches, in order to stay with FC. A couple of Cisco 4948s (one of them 10GbE) and some TOE NICs were definitely cheaper than a couple of new 9222s, which is good because we will have to buy more - LOTS more - disk space to support iSCSI (more on that in a bit).

    First, don't try to run iSCSI on old hardware. My company won't pay to replace or upgrade anything until the previous item is a smoking pile of rubble, so switching our 5-year-old Exchange server (which I have been trying to replace for 2 years) from FC to iSCSI was a bad idea. We had to track down PCI-X TCP Offload Engine NICs that aren't even made anymore, but even that wasn't good enough. The poor performance of that NIC led to Exchange database corruption and a couple of days of little or no mail while I worked 20 hours a day mounting message stores in the Recovery Storage Group and moving mailboxes to new stores on the old SAN to filter out the corrupted items. On the plus side, suddenly there was money available to buy a new Exchange server with PCI-e NICs.

    Second, get NICs with at least a TCP Offload Engine, if not a full iSCSI HBA. The PCI-e TOE NICs in our SQL cluster haven't had any problems at all - until the storage processors in the EMC failed over during maintenance. That brief interruption triggered a failover in our cluster, but since the other server couldn't access the drives either, the cluster just choked. Eventually we had to physically power cycle the servers.

    Finally, you will need LOTS of disk space to support iSCSI array-based replication - at least with EMC. EMC has a secret formula to calculate the space requirements for iSCSI replication that we didn't find out about until AFTER we bought the new SAN. The formula is based on the size of the LUN, the percentage of change, and the number of 'presented' snaps (2 are required for replication, while additional snaps could be presented for backups or analysis). Because of this, one of our file systems is 3 times the size of the LUN - and that amount of space must be on both arrays. Our 600 GB LUN now takes up a combined 3.6 TB of space between our primary and DR arrays.
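    For what it's worth, a rough sketch of the arithmetic behind those figures (the 3x multiplier and the two-array requirement are as described above; this is just the back-of-the-envelope math, not EMC's actual formula):

    ```python
    # Rough check of the space figures above (values from the post)
    lun_gb = 600          # size of the replicated LUN
    fs_multiplier = 3     # file system ended up ~3x the LUN size
    arrays = 2            # same space required on both primary and DR arrays

    per_array_gb = lun_gb * fs_multiplier       # 1800 GB on each array
    total_tb = per_array_gb * arrays / 1000.0   # combined footprint
    print(total_tb)  # → 3.6
    ```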

    We currently are not replicating Exchange data and I have no intention of doing so. We already have approval to purchase hardware this year and the software in January to move to Exchange 2007. I would personally rather implement CCR with Exchange than rely on array-based replication with EMC right now.

    Also, I would really like to build the Exchange environment on top of Server 2008 instead of Server 2003. Server 2008 has an entirely re-written TCP/IP stack, and one of the new features allows a NIC to use any available processor or processor core rather than being bound to one processor. Between that and the fact that iSCSI is 'natively' supported instead of being an add-on, the performance and reliability of our iSCSI SAN should improve.
  • astorrs
    Wow Claymoore, that's truly a nightmare story. I assume you guys were using Celerra Replicator to do the replication? I'm not sure how you got those LUN requirements unless massive changes are constant and the WAN connection is minimal. Are you sure you're not using clones instead of snapshots? (Sorry, I'm just trying to see how this 1.8TB from 600GB came to be. :))

    royal, I'll try to give you a post covering my experiences with both companies and their product lines shortly.
  • Claymoore
    astorrs wrote:
    Wow Claymoore, that's truly a nightmare story. I assume you guys were using Celerra Replicator to do the replication? I'm not sure how you got those LUN requirements unless massive changes are constant and the WAN connection is minimal. Are you sure you're not using clones instead of snapshots? (Sorry, I'm just trying to see how this 1.8TB from 600GB came to be. :))

    royal, I'll try to give you a post covering my experiences with both companies and their product lines shortly.

    Here is 'The Formula'. There is some debate about using sparse or dense LUNs when you snap for replication, but we are converting to sparse in order to save space. This should get us about half of our space back. EMC has a nice spreadsheet and presentation that compares the space usage when deciding whether to use sparse or dense LUNs.

    http://knowledgebase.emc.com/emcice/documentDisplay.do;jsessionid=61A8E7D5F357D7A537D1A7964E5F3F08?docType=1006&clusterName=DefaultCluster&resultType=5002&groupId=1&page=&docProp=$solution_id&docPropValue=emc142610&passedTitle=null


    Question: How to use the formula to size the source and destination file systems for iSCSI LUN replication.
    Question: How to determine the minimum file system space needed to support one LUN for iSCSI Snap.
    Question: Where can I find information on Celerra Virtual Provisioning?
    Environment: EMC SW: NAS Code 5.5
    Environment: Product: Celerra File Server (CFS)
    Environment: Features: Celerra Virtual Provisioning
    Fix: Using an example of 1 x 10GB LUN, 1 snap per day, 10% change per day, keeping 5 snaps and having 1 snapshot copy mounted for use on a second server, use this formula:

    (LUN_size * 2) + [(no_of_snaps - 1) * (LUN_size * change_rate)] + (n * LUN_size) = minimum file-system space needed to support one LUN.
    (LUN_size * 2) - This would be (10GB x 2) = 20GB. The 2 multiplier can be misleading. RM/SE suggests 2 x LUN size because it covers the worst-case scenario of building a LUN (10GB) and then migrating another 10GB LUN into it. The first snap needs to maintain this 10GB and allow a new 10GB to be written over it, and will reserve a second 10GB (hence 20GB). In reality, this depends on the amount of data present in the file system at the time of the first snap. For example, if you migrate 5GB into the 10GB LUN, then this multiplier is really 1.5. If it is a pristine LUN that is being added to at the rate of 1GB per day, then the multiplier is 1.1.

    [(no_of_snaps - 1) * (LUN_size * change_rate)] - Number of snaps -1 accounts for the first snap in the first argument of the equation. In the example, the equation = [(5-1)*(10*0.1)] = 4GB.

    (n * LUN_size) - This is the number of snaps mounted. Remember that iSCSI Snaps can be mounted Read/Write. When using dense provisioning, the Celerra will reserve 100% of the capacity of the primary LUN when mounting a snap. In a worst case scenario where the secondary server replaces every block in the LUN, there is sufficient capacity to accommodate this: (1*10) = 10GB. If 2 snap copies mounted at the same time are needed, this number would be 20GB.
    The worst case scenario is that the 10GB LUN needs 34GB of space in the file system.
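    A minimal sketch of 'The Formula' in Python, for anyone who wants to plug in their own numbers (the function and variable names are mine, not EMC's; this assumes dense provisioning as described above):

    ```python
    def min_fs_space_gb(lun_size, num_snaps, change_rate, mounted_snaps):
        """Minimum file-system space (GB) needed to support one replicated
        iSCSI LUN, per the sizing formula quoted above."""
        initial_reserve = 2 * lun_size                    # worst case: migrating a full LUN in
        snap_overhead = (num_snaps - 1) * lun_size * change_rate
        mounted_reserve = mounted_snaps * lun_size        # 100% of the LUN reserved per mounted snap
        return initial_reserve + snap_overhead + mounted_reserve

    # The KB example: 10 GB LUN, 5 snaps kept, 10% daily change, 1 snap mounted
    print(min_fs_space_gb(10, 5, 0.10, 1))  # → 34.0
    ```

    That reproduces the 34GB worst case from the example; mounting a second snap copy would push it to 44GB.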

    Celerra Virtual Provisioning, introduced in NAS code version 5.5, can ease this burden somewhat. The primary LUN, or the secondary LUN, or both can be configured to only consume space as data is written to it. A nice compromise may be to virtually provision the secondary (promoted) snap, especially if this is only for a backup and there will not be a lot of writes generated by the secondary system.
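    To illustrate what that buys you in the last term of the formula, here is a sketch under the compromise described above, where only the mounted (promoted) snap is virtually provisioned (again, names are mine):

    ```python
    def mounted_snap_reserve_gb(lun_size, mounted_snaps, sparse=False, written_gb=0.0):
        """Space consumed by promoted/mounted snaps: dense provisioning reserves
        100% of the LUN per mounted snap up front, while sparse (virtual)
        provisioning only consumes what the secondary server actually writes."""
        if sparse:
            return written_gb
        return mounted_snaps * lun_size

    print(mounted_snap_reserve_gb(10, 1))                               # dense: 10
    print(mounted_snap_reserve_gb(10, 1, sparse=True, written_gb=1.0))  # backup host writes 1 GB: 1.0
    ```

    For a mounted snap that is only read for backups, the sparse reservation stays near zero instead of a full extra LUN's worth of space.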

    Please see the Celerra white paper entitled Celerra Virtual Provisioning located at Products > Hardware/Platforms > Celerra Family > White Papers in powerlink.emc.com for more information.
  • astorrs
    Yeah, that's what I figured. I guess, given the way the Celerra stores data and performs snapshots, it took thin provisioning for it to achieve the same sparse snapshots that are common in most FC/iSCSI arrays. 3x the disk space is brutal if we're talking high-performance drives. :)
  • Claymoore
    It's good to know that this change puts us back in line with the industry norm. Our prior experience with our HP XP 512 was that replication was 1:1 so finding out that there was this much overhead was a bit of a shock.
  • astorrs
    Claymoore wrote:
    It's good to know that this change puts us back in line with the industry norm. Our prior experience with our HP XP 512 was that replication was 1:1 so finding out that there was this much overhead was a bit of a shock.
    I realize you're not, but I don't think it's really fair to compare a HP XP 512, which is basically an HDS Lightning 9900, with the Celerra though. They're targeted at different markets and one is a modular NAS array while the other is a monolithic storage array. It would be like comparing the CLARiiON CX4 to a DMX-4 - again they are not exactly competitive products - the DMX will destroy the CLARiiON in everything except management complexity. :)