Server migration process
We are about to start migrating from Server 2003 to Server 2012 at a 24/7 shop that is slowest at night and on weekends. We plan on having two servers with four VMs on each. This is the current admin's plan: on a weekend, remotely create a new VM on the main server, install 2012, move the old file server's files over to the new VM, and shut down the old file server VM, then repeat this process for the three other VMs. Doing the file server first makes no sense to me, and you automatically lose a lot of drive space to provisioning that could instead be added later with a bit of work. After everything is up and running, move on to the backup server for replication, although we might see if hot failover will work with the licensing.
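If it helps, here is a rough Python sketch of the copy-and-verify step before the old file server VM gets shut down. The UNC paths are made-up placeholders, and the real move would probably use robocopy or similar; this just shows the idea of checking the copy before decommissioning anything:

```python
# Rough sketch: copy the old file share to the new VM, then list anything
# missing or different before the old file-server VM is shut down.
# The UNC paths are hypothetical placeholders, not real share names.
import filecmp
import shutil

SRC = r"\\old-fs\share"
DST = r"\\new-fs\share"

shutil.copytree(SRC, DST, dirs_exist_ok=True)  # needs Python 3.8+

def report(cmp):
    """Recursively print files missing or differing on the new server."""
    for name in cmp.left_only:
        print("missing on new VM:", name)
    for name in cmp.diff_files:
        print("content differs:", name)
    for sub in cmp.subdirs.values():
        report(sub)

report(filecmp.dircmp(SRC, DST))
```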
My plan: on a weekday evening while we are both there, switch the main server's function for one VM over to the current replicated secondary server, install 2012 in a new VM on the main server (starting probably with AD/DHCP/DNS and leaving the file server for last), then move the required files from the secondary back to the new main VM, and move on to the next VM the following night.
Which one makes more sense, and are there any suggestions missing from either plan?
Also, there are 6x 500GB drives in each server, currently set up as two RAID 5 arrays because the RAID controller has two ports. That loses 500GB that could be useful in the near future, but gains some protection against one of the ports going out; the current admin set it up this way. I suggested one RAID 5 array to get that 500GB back plus a small performance bump, but I'm unsure, and maybe the two RAID 5s are better for the protection. I've also thought about using JBOD on the hardware controller and then setting up a RAIDZ pool for added disk-failure protection, but that adds some complexity, and I'm not sure how it would work between ESXi and BSD. What would you suggest?
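For context, here's the back-of-the-napkin usable-capacity math for six 500GB drives under each layout I'm weighing (rough numbers only; real arrays lose a bit more to formatting and metadata):

```python
# Usable capacity for six 500 GB drives under the layouts discussed.
# Rough arithmetic only; real arrays lose a little more to overhead.
DRIVES, SIZE_GB = 6, 500

two_raid5 = 2 * (3 - 1) * SIZE_GB    # two 3-drive RAID 5 sets: 2000 GB
one_raid5 = (DRIVES - 1) * SIZE_GB   # one 6-drive RAID 5:      2500 GB
raid6     = (DRIVES - 2) * SIZE_GB   # RAID 6:                  2000 GB
raid10    = DRIVES // 2 * SIZE_GB    # RAID 10:                 1500 GB
raidz2    = (DRIVES - 2) * SIZE_GB   # 6-drive RAIDZ2:          2000 GB

print(two_raid5, one_raid5, raid6, raid10, raidz2)
```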
Comments
techfiend Member Posts: 1,481
Why would you do the file server first? While there is a modest amount of hard drive space for the next few years, the big issue is space being in the wrong spots. Currently the C: drive is out of space on three of the four VMs, and the file server is down to 2 GB free on the shared drive, while over 500GB sits in partitions that don't get used. The current admin set this up back then; I keep stressing to him about provisioning disk space responsibly, but I don't think it's getting through. Worst case, I'll have to fix the partitions once he's gone if he messes this all up again.
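As a quick way to spot the stranded space, something like this sketch (the drive letters are just examples) reports free space per volume on each VM:

```python
# Quick audit of free space per volume, to catch provisioning problems
# before space gets stranded in the wrong partitions.
# The drive letters below are examples, not the actual layout.
import shutil

for drive in ("C:\\", "D:\\", "E:\\"):
    try:
        usage = shutil.disk_usage(drive)
    except OSError:
        continue  # volume not present on this VM
    free_gb = usage.free / 1024**3
    total_gb = usage.total / 1024**3
    print(f"{drive} {free_gb:.1f} GB free of {total_gb:.1f} GB")
```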
No Exchange or Office 365; there's a crucial SQL database, IIS-hosted websites, AD, DNS, DHCP, and WSUS. Not a whole lot, really.
discount81 Member Posts: 213
The only suggestion I would give is that RAID 5 is a really bad choice for a production server, unless it's RAID 5 with a hot spare.
I generally go for RAID 6 or RAID 10. Sure, you lose some disk space, but I've had two disks fail on the same day, within a couple of hours of each other and before I could replace the first one; if it had been RAID 5, my day would not have been fun.
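For a rough sense of the exposure, here's a back-of-envelope estimate of a second drive dying mid-rebuild; the failure rate and rebuild window are pure assumptions, not measured values:

```python
# Back-of-envelope odds of a second drive dying during a RAID 5 rebuild.
# AFR and rebuild window are assumptions for illustration only.
AFR = 0.05            # assumed 5% annual failure rate per drive
REBUILD_DAYS = 1.0    # assumed time to swap the drive and rebuild
SURVIVORS = 5         # remaining drives in a 6-drive RAID 5

p_one_drive = AFR * REBUILD_DAYS / 365
p_second_failure = 1 - (1 - p_one_drive) ** SURVIVORS
print(f"~{p_second_failure:.4%} chance of a second failure mid-rebuild")
```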
techfiend Member Posts: 1,481
I agree, and there is no hot spare. I've had two HDDs go out at home within two days of each other; luckily nothing crucial was on them, and the freezer trick worked to get the few things I wanted off of them. The dual RAID 5 setup is very strange too, but it's all his now; it looks like I'll be completely left out of the migration process.
I can only hope he does a better job provisioning disk space this time; otherwise I'll be spending days fixing his mistakes.