This is going to be a bit of a rant, but hopefully I can help some other EMC customers out there.
If you have a Celerra or integrated NS SAN with a 10Gb network link, you are probably going to have to replace the NIC. (EMC Primus documentation to follow)
History:
We purchased an NS82 (Clariion CX380 with an integrated Celerra) in January.  This model has no fibre connectivity (other than for a tape drive), so everything is connected through CIFS/NFS or iSCSI.  Each datamover has one 10Gb link and six 1Gb links, so we use the 10Gb link as our primary iSCSI path, with four of the 1Gb links bundled into an etherchannel as a secondary path; each path runs through a separate Cisco 4948 switch.  Due to a series of unfortunate events connecting our HP-UX servers to the new SAN, I didn't get around to cutting over our Exchange 2003 server until July, and that's when the real fun began.
Problem:
The message stores were corrupted sometime after being moved to the new SAN.  I loaded the corrupted stores into the Recovery Storage Group one by one and moved mailboxes to new stores on the new SAN.  The stores were corrupted again almost immediately.  We involved Microsoft, who said the corruption we experienced is caused by hardware 100% of the time (and they were later proven right), so the mailboxes were moved back to stores on the old HP XP512 SAN.  This entire process lasted almost a week with limited or no mail function, so as the Exchange admin I took a beating during the first week of August.  The old server had only three expansion slots, two of which were filled with FC HBAs, so it had just one PCI-X TOE NIC connected to the new SAN over the 10Gb path.  That server was five years old and I had been trying to replace it, and now there was suddenly enough money in the budget for a new Exchange server.  I built a new server with two PCIe TOE NICs, hooked it up to the iSCSI SAN over both paths using the MCS load balancing policy of least queue depth, and prepared to move the mailboxes over to new message stores.
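For anyone unfamiliar with the setting, "least queue depth" just means each new command goes out over whichever connection currently has the fewest commands outstanding.  Here is a toy sketch of the idea - an illustration only, not EMC's or Microsoft's code, and the path names are just placeholders for our two paths:

```python
import random

# Outstanding I/Os per iSCSI connection; the names stand in for our two paths.
connections = {"10Gb path": 0, "etherchannel path": 0}

def pick_connection():
    # Least queue depth: choose the connection with the fewest outstanding commands.
    return min(connections, key=connections.get)

def issue_io():
    conn = pick_connection()
    connections[conn] += 1      # command is now outstanding on that connection
    return conn

def complete_io(conn):
    connections[conn] -= 1      # command finished, queue depth drops

# Issue a burst of commands and complete a random few, just to show how the
# policy spreads new commands toward whichever queue is currently shortest.
in_flight = [issue_io() for _ in range(8)]
for conn in random.sample(in_flight, 4):
    complete_io(conn)
print(connections)
```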
I set up test mailboxes in each store - everything was fine.
I moved the IT message store in September and we tested for a week - all good.
I moved 3 of the remaining stores in one day - still good.
We kicked off a backup at 9:06 - the event log showed message store corruption at 9:07.
I moved all the mailboxes back to the old server the next day.
EMC has been saying everything was fine all along, but now I am sure the problem is theirs.  An EMC engineer told me that they do not support MCS load balancing - just failover - and that this was causing the problem.  I asked for the EMC documentation on this (which was never provided, because it isn't true) and gave him a link to the MS iSCSI User Guide, which says it is supported.  Besides, the original server had only one connection, so neither MPIO nor MCS was involved during the first corruption.  EMC tried to blame our backup software - HP Data Protector - except it backed up fine over FC and would even back up a single message store without error.  EMC wasn't being very helpful until they found out we were bringing in Hitachi to investigate a competitive upgrade, and now we have a PM and a project team assigned to resolve this issue.
I found that by copying one of the large corrupted message stores to a different location on the same drive, I could generate enough I/O to recreate the error, so now I had a way to test.  The team includes an engineer who actually knows what he is doing, and we started testing this week.  What we found is that the corruption occurs when data is moving over the 10Gb link - either alone or in a load balanced configuration - but not when the data moves over the etherchannel (4Gb) path by itself.  He captured some network traffic, I sent him the errors from the event log, and that data was passed on to another team.  We got a response the next day.
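For anyone who wants to try the same thing, the copy test boils down to something like this rough sketch - the paths are made up, so point it at whatever large file lives on your iSCSI volume:

```python
import hashlib
import shutil
import sys

# Hypothetical paths on the iSCSI-backed volume (E: in this sketch).
SRC = r"E:\exchsrvr\mdbdata\priv1.edb"      # large file to copy (a message store here)
DST = r"E:\copytest\priv1_copy.edb"         # destination on the same drive

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a file in 1 MB chunks so the whole store is never held in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def run_copy_test():
    # Generate sustained I/O over the iSCSI path by copying the large file...
    shutil.copyfile(SRC, DST)
    # ...then compare checksums.  A mismatch means the data was silently
    # corrupted somewhere between the host and the array.
    if sha256_of(SRC) != sha256_of(DST):
        print("CORRUPTION: copy does not match source")
        return 1
    print("Copy verified OK - also check the event log for iScsiPrt errors")
    return 0

if __name__ == "__main__":
    sys.exit(run_copy_test())
```

The hash comparison is just belt and braces; the iScsiPrt errors piling up in the event log while the copy ran were what I was actually watching for.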
The 10Gb Neterion NIC in the datamover is passing corrupted TCP segments up the stack, where they may or may not be caught.  iSCSI has an error checking routine in the protocol (CRC digests on its PDUs), so it sees the errors and rejects the PDUs, resulting in iScsiPrt errors: Event ID 7 (initiator could not send a PDU) and Event ID 29 (target rejected a PDU).  The CRC check in Celerra Replicator will also flag the errors, but we only run iSCSI over the 10Gb link.  CIFS and NFS traffic isn't checked, so the corrupted data gets written to disk.
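To make that error checking routine concrete: iSCSI can protect each PDU with a CRC-32C digest, so a single flipped bit changes the digest, the receiver rejects the PDU, and you get the Event ID 7/29 noise instead of silent corruption.  CIFS and NFS writes have no equivalent end-to-end check.  A rough illustration in Python - the payload and the flipped bit are made up:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), the checksum iSCSI uses for its digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Pretend this is the data segment of an iSCSI PDU leaving the target.
payload = bytes(range(256)) * 16            # 4 KB of made-up data
sent_digest = crc32c(payload)

# The bad NIC flips a bit somewhere in flight.
corrupted = bytearray(payload)
corrupted[1000] ^= 0x04
received_digest = crc32c(bytes(corrupted))

# The initiator recomputes the digest and rejects the PDU on a mismatch,
# which is what surfaces as the iScsiPrt Event ID 7/29 errors.
# A CIFS or NFS write has no such check, so the flipped bit lands on disk.
print("digest match:", sent_digest == received_digest)   # -> False
```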
The NIC not only passed the bad traffic - it is the source of the corruption!
Solution:
The permanent fix is to replace the NIC, and EMC is working on a plan to address this with the affected customers.  In the meantime you can use an alternate network path and avoid the 10Gb path.  You can also ask to be patched to the latest NAS code, but that has a downside: the NIC will still generate or pass the corrupted TCP segments, but now the Celerra will process all the TCP traffic itself rather than relying on the 10Gb NIC's TCP Offload Engine, resulting in increased processor usage and decreased performance while it handles all the retransmit requests for the corrupt segments.
Putting a bad NIC in a SAN is a big deal.  The fact that this model was released a year ago, that we purchased it in January, and that the problem wasn't revealed to us until October is an even bigger deal.  Not to mention the time wasted and the reputation damage incurred while dealing with the fallout from EMC trying to save a few dollars by buying a batch of NICs from some Shanghai street market.  I hope the money saved was worth it to you, EMC, because it certainly wasn't to us.