
Migrate storage on iSCSI LUN with MPIO to a VM

meestaplunk Member Posts: 7
I have a physical server that needs to be retired. It's a heavily used file server that stores patient data (small text files, small images) for an EMR and is constantly hit with requests to retrieve those files. I wanted to migrate it to a VM, but now that I've seen how the environment is configured, I'm not sure that's the right move.

Here is a diagram of the setup:



The file server and SAN NICs are 1 GbE.
The VM host is Hyper-V on Windows Server 2012 R2, also with 1 GbE NICs.
The SAN is a Dell EqualLogic, 24 x 600 GB 10K disks in RAID 50.
The file server connects to a single LUN that holds two disks; about 5 TB of data in total.
The file server has two NICs for storage, configured with MPIO.
This file server is the only server using the SAN for storage.
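
Before deciding, I'll probably sanity-check the current sessions and paths from the file server itself. A minimal sketch with the inbox tools (assuming the box is 2012-era for the iSCSI cmdlets; mpclaim covers the same ground on older builds):

    Get-IscsiTargetPortal    # which SAN portal IPs the initiator is logged into
    Get-IscsiSession         # active sessions - with MPIO I'd expect one per storage NIC
    mpclaim -s -d            # MPIO-claimed disks and their load-balance policy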

Potential Options

1. Migrate the data from the LUN into a VHDX stored on the SAN (on a newly created, separate LUN), and create a new VM with that VHDX. Doing this would temporarily require more storage than is available, but I could schedule downtime and use another server for temporary storage while migrating. (A rough PowerShell sketch of all three options follows this list.)

2. Connect Hyper-V to the current LUN and pass the disk through to the VM. The current host does not have two NICs available for storage, so I would lose MPIO.

3. Create a new external switch bound to one NIC that connects directly to the SAN, and attach the new file server VM to it as a "storage network".
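
For my own notes, a rough Hyper-V PowerShell sketch of what each option looks like on the 2012 R2 host (every name, path, size and disk number below is a placeholder, not the real environment):

    # Option 1: new VHDX on a new LUN (mounted on the host as D:\ here), attached to a new VM
    New-VHD -Path 'D:\VMs\FS01\Data.vhdx' -SizeBytes 6TB -Dynamic
    New-VM -Name 'FS01' -Generation 2 -MemoryStartupBytes 8GB -SwitchName 'LAN' -Path 'D:\VMs'
    Add-VMHardDiskDrive -VMName 'FS01' -ControllerType SCSI -Path 'D:\VMs\FS01\Data.vhdx'

    # Option 2: pass the existing LUN through to the VM (disk has to be offline on the host first;
    # with only one storage NIC on the host, MPIO is gone)
    Set-Disk -Number 4 -IsOffline $true
    Add-VMHardDiskDrive -VMName 'FS01' -ControllerType SCSI -DiskNumber 4

    # Option 3: dedicate a host NIC to a storage vSwitch and run the iSCSI initiator inside the guest
    New-VMSwitch -Name 'Storage' -NetAdapterName 'NIC2' -AllowManagementOS $false
    Add-VMNetworkAdapter -VMName 'FS01' -SwitchName 'Storage'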

Ultimately I'd like the file server to be clustered (supporting option 1). The client just wants off the current hardware ASAP and does not want to buy new hardware, though we can have that conversation if it's the right thing to do.
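
If the clustered end-state does happen (two file server VMs that can both see shared storage, either a shared VHDX or guest iSCSI straight to the EQL), the Failover Clustering side is roughly this; names and IPs are made up:

    New-Cluster -Name 'FS-CLU' -Node 'FS01','FS02' -StaticAddress 192.168.1.50
    Add-ClusterFileServerRole -Name 'FILES' -Storage 'Cluster Disk 1' -StaticAddress 192.168.1.51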

I appreciate everyone's input, thanks! :D

Comments

  • jibbajabba Member Posts: 4,317
    Whilst option #1 sounds like the best way forward, this is Windows. The migration would probably take ages, especially with tons of small files. Just calculating the totals will probably throw Windows into a diarrhea fest.
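
    If you just want the file count and total size without letting Explorer grind through it, robocopy in list-only mode is a lot kinder - a rough example, paths are placeholders:

        robocopy 'D:\Shares' 'C:\dummy' /L /E /NFL /NDL /NJH /BYTES /R:0 /W:0

    Nothing is copied with /L; the summary at the end gives you the file count and total bytes.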

    I bet the best solution is to image the data and restore it into a VM, for example using Acronis.
    My own knowledge base made public: http://open902.com :p
  • Deathmage Banned Posts: 2,496
    First off, I honestly think you need an iSCSI/vMotion stacked switch; let me explain...

    One benefit of keeping the spec Dell (since it's a Dell EQL; you can use whatever you like, though) is that the switching cache on a Dell Force10 N3024/N3048 is built specifically for the EqualLogic SANs. We recently did a file/SQL server P2V over an N3048 stack with a single 1 Gbit link to an EQL 4100s and it took roughly 12 hours for 7.7 TB of SQL data at 9000 MTU.
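
    If you go jumbo frames, prove 9000 MTU works end to end before the copy starts. A quick sanity check from the Windows side - the SAN IP below is just an example:

        ping -f -l 8972 10.10.10.10    # -f = don't fragment, 8972 = 9000 minus IP/ICMP headers

    If that fails, something in the path isn't actually set to 9000.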

    I mean, how much free space do you have on the SAN? Could you do a 1-to-1 copy of the data? In either case, pulling the data off the SAN, processing it in the VMware Converter tool, and then sending it back to the SAN is going to be painfully slow without an iSCSI switching fabric; people downplay switching cache sometimes.

    Another question for you: do you have another server you could install ESXi onto that has space, or enough disk slots, for more hard drives? My logic is this...

    1. Do an Acronis image of the server you can use as an ESXi husk.

    2. Install ESXi on the server you took the image of and provision the local array (presuming it has a crap ton of free space, enough for your Hyper-V VMs).

    3. Install the VMware Converter tool on the file server's Hyper-V VMs individually, P2V them to the server you built in step 2, and perform VMware clean-up (or you can wait until later, but it will need to be done at some point).

    4. Once the file server is offloaded of its local Hyper-V VMs, I'd convert it to a second ESXi host and reformat the SAN's arrays (unless you have enough space for both the VMware and Hyper-V data, in which case you might as well keep a backup just in case).

    5. Make a vCenter VM with Server 2008 R2, then add the host from step 2 and the file server ESXi host to the vCenter cluster.

    6. Presuming you have an iSCSI switch at this stage and both servers have a NIC on the iSCSI VLAN connected to the SAN (at least 4 NICs per host as a minimum; if you can get 8, that would be better, and you might want to make a bonded or teamed connection for the production VLAN), I'd then do a svMotion or cold migration of the VMs stored locally on the ESXi host from step 2 over to the SAN array (see the PowerCLI sketch after this list).

    7. After the data is moved to the SAN, I'd make sure you do VMware clean-up and then balance the load across both ESXi hosts.
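
    Roughly what steps 5 through 7 look like in PowerCLI if you script it - server names, cluster name and datastore are placeholders, and -DiskStorageFormat takes care of the eager-zeroed point in the note below:

        # run from the vCenter VM made in step 5
        Connect-VIServer vcenter01
        New-Cluster -Name 'Prod' -Location (Get-Datacenter 'DC1')
        Add-VMHost -Name 'esx01' -Location (Get-Cluster 'Prod') -User root -Password 'xxxx' -Force
        Add-VMHost -Name 'esx02' -Location (Get-Cluster 'Prod') -User root -Password 'xxxx' -Force

        # step 6: storage-migrate a VM from local disk to the EQL datastore,
        # converting it to eager zeroed thick in the same move
        Move-VM -VM 'FS01' -Datastore (Get-Datastore 'EQL-LUN1') -DiskStorageFormat EagerZeroedThick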

    Note: EQL SANs don't get the performance benefits from splitting up the arrays that other SANs do, so two LUNs are fine. Also remember that when you do a VMware Converter P2V, the default disk provisioning will be thick provisioned lazy zeroed; if you have a SQL or file server you will need to inflate it after the conversion to go to eager zeroed.

    Now, this is what I would do, but right here you would need at least $3000 for two iSCSI switches (you always want redundancy on the iSCSI traffic; if you lose a switch while data is transferring, your data will be shredded into a bazillion pieces). Question: do you have good backups? ;) Plus about $1200 for drives for the host in step 2.

    I know the process above seems steep, but moving data off a SAN being used by Hyper-V, processing it, and then moving it back to the SAN is going to be a slow and painful process without MTU switching cache. It would honestly be faster to do an adaptation of what I did above. I can already imagine that if you stuck with the no-switch config this would take a solid week. I'm leaning toward a migration window of a solid weekend, and that's from 5:30pm Friday until 11pm Sunday (yes, I also mean the nights of your weekend; IT people don't need sleep when weekly production $$$ uptime is in the mix).

    Now, I know there is this awesome-sauce NIC made by QLogic that I saw at a VMUG recently that can completely bypass the server's southbridge (or comparable chipset), and the speeds were insane; plus it's basically a self-contained L2 switch on the NIC, so you really don't need a switch at all. It might be something to look into if you want to keep going without the iSCSI switch design.

    The QLogic NIC is very similar to a gaming NIC some may remember from about 5 years ago (which I still use for gaming) called the Bigfoot 2100. The 2100 had a 900 MHz processor with 512 MB of RAM; it completely bypassed the Windows Winsock network stack and let you throttle an internet connection, say 30% for uTorrent, 40% for online gaming like World of Warcraft, and 20% for general Internet access, and it all worked fine. It's an awesome card; too bad they killed off the PCIe x1 version. You can still find the Killer NIC in some high-end PCs/laptops. If you ever have an option to snag one, get it, you will love your online latency. ;)

    Hope this helps; I'm sure the other gurus will have other ideas. :)



    EDIT: One thing I forgot to mention: hopefully you have a few hosts, because with the suggestions above you end up with one host holding a ton of drives, and you could re-use those drives in that server, or a different server, for on-site DR backups with Acronis/Veeam/Dell AppAssure. This could be a selling point for your customer and gives the hard drives a purpose.

    I did a very similar proposal for the initial stages of our P2V, and now those drives make up a 9 TB RAID 50 array for a Dell AppAssure server on an R510. RAID 50 is being used because I like being super-**** about safeguarding backups, at the cost of write performance. Always plan ahead and think of the migration from many angles, including what the system looks like after the migration is finished.
  • joelsfood Member Posts: 1,027
    Despite loving everything virtual, I would probably go with an old-school robocopy. Mirror the source tree to a new VM, make sure you carry the permissions over, and keep the copy running on an ongoing basis. Once the initial sync is done, choose an outage window, export the share settings on the old server, import them on the new one, change the DNS records (and probably the LanmanServer registry entries to add the old name and new name to the new server), and make the shift. Power off the old server, let users use the new VM, and keep the old LUNs around for a bit.
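
    A rough sketch of that flow - every path, server name and log location below is a placeholder:

        # initial seed, then re-run as often as you like before cutover; /MIR only moves the deltas
        robocopy '\\OLDFS\D$\Shares' '\\NEWFS\D$\Shares' /MIR /COPYALL /DCOPY:T /R:1 /W:1 /MT:32 /LOG:C:\Temp\sync.log

        # cutover: carry the share definitions over, then let the old name answer on the new box
        reg export 'HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares' C:\Temp\shares.reg
        # (import shares.reg on the new server and restart the Server service)
        reg add 'HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' /v OptionalNames /t REG_MULTI_SZ /d OLDFS /f
        reg add 'HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' /v DisableStrictNameChecking /t REG_DWORD /d 1 /f

    In a domain you may also need an SPN/CNAME for the old name, but that's the general shape of it.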

    That's how I'd go anyway, based on the no-new-hardware directive. Preferably, I would go with an iSCSI switch stack, as mentioned, with at least two iSCSI/VMK ports on the host, etc.
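
    And if the host does end up with a second storage-facing NIC, getting MPIO back on the Hyper-V side is roughly this (target IQN and portal IP are made up):

        Install-WindowsFeature Multipath-IO
        Enable-MSDSMAutomaticClaim -BusType iSCSI
        New-IscsiTargetPortal -TargetPortalAddress 10.10.10.10
        # repeat once per storage NIC with -InitiatorPortalAddress to get both paths
        Connect-IscsiTarget -NodeAddress 'iqn.2001-05.com.equallogic:example-lun1' -IsMultipathEnabled $true -IsPersistent $true

    Two cables into the stack and the host has its paths back.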