VM's and physical Windows machines in same Windows cluster?
Essendon
Member Posts: 4,546 ■■■■■■■■■■
Awright, this is kinda urgent too. I have the following situation:
Current environment - two VMs on ESXi 5.0 that have been presented a total of 33 RDMs, all physical-mode (pRDMs). These two VMs form a Windows 2008 R2 failover cluster servicing a number of SQL databases that reside on the pRDMs.
Due to some performance issues, the client wants to move off the virtual cluster to a physical cluster. Let's not get into the pros and cons of both, please stick to the scenario. They intend to spin up two physical Windows servers (HP blades), add them to the Windows cluster, and present the RDMs to the physicals. They'll then leave the cluster running with 4 nodes and, once satisfied all's well, evict the virtual cluster nodes from the cluster.
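Roughly what the join-then-evict steps could look like from one of the existing nodes - just a sketch, assuming the FailoverClusters PowerShell module is available and using made-up names (SQLCLUS01 for the cluster, PHYS01/PHYS02 for the blades, VMNODE01/VMNODE02 for the current VMs):

    Import-Module FailoverClusters

    # Join the two new physical blades to the existing cluster
    Add-ClusterNode -Cluster SQLCLUS01 -Name PHYS01
    Add-ClusterNode -Cluster SQLCLUS01 -Name PHYS02

    # Test failing a SQL group over to a physical node (group name is a placeholder)
    Move-ClusterGroup -Cluster SQLCLUS01 -Name "SQL Server (MSSQLSERVER)" -Node PHYS01

    # Once satisfied everything runs happily on the physicals, evict the virtual nodes
    Remove-ClusterNode -Cluster SQLCLUS01 -Name VMNODE01
    Remove-ClusterNode -Cluster SQLCLUS01 -Name VMNODE02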
Do you guys see a problem with this? Keep in mind these are pRDMs we are talking about, not VMFS volumes, so there are no signature worries. Of course I'll take a copy of all the RDMs before I attempt this.
I'd really appreciate quick input guys! Thanks.
Comments
-
kj0 Member Posts: 767
What's the lead time on this? And are they moving the same number of RDMs to the same number of disks?
-
Essendon Member Posts: 4,546 ■■■■■■■■■■
They want this done by Wednesday next week. Yep, they'll keep the same number.
-
jibbajabba Member Posts: 4,317 ■■■■■■■■□□
As they are RDMs, specifically pRDMs, this should just work. Do you have resources for a test?
Can you just attach an RDM to a VM, format it with NTFS, throw some files on it, and attach it to a spare physical blade or server for testing?
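If you can get a spare LUN presented, something like this in PowerCLI should attach it to a test VM as a physical-mode RDM - a rough sketch only, with the vCenter, VM name and naa device ID all placeholders:

    # Attach a spare LUN to a test VM as a physical-mode RDM (placeholder names)
    Connect-VIServer -Server vcenter.example.local
    $vm = Get-VM -Name "RDMTEST01"
    New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"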
-
blargoe Member Posts: 4,174 ■■■■■■■■■□
Not related to VMware itself, but make sure the edition of SQL Server will support having 4 nodes (however temporary that will be). If they are using Standard edition, whoever is managing the SQL Server installations will have to remove the SQL instance from one VM node before installing it on one of the new physical ones.
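A quick way to confirm which edition is in play before planning the node shuffle - this assumes the SQL PowerShell module (Invoke-Sqlcmd) is available somewhere, and the instance name is a placeholder:

    # Check the SQL Server edition and version (instance name is a placeholder)
    Invoke-Sqlcmd -ServerInstance "SQLNETNAME01" -Query "SELECT SERVERPROPERTY('Edition') AS Edition, SERVERPROPERTY('ProductVersion') AS Version"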
-
Essendon Member Posts: 4,546 ■■■■■■■■■■
Cheers gents for confirming this should work. No, I don't have spare resources to try this out first. I'll keep your idea in mind, blargoe. Thanks!
-
Essendon Member Posts: 4,546 ■■■■■■■■■■
Update - I brought the cluster down and took a copy of every pRDM to be safe. Next, I presented the pRDMs to the physical machines, and the disks showed up in Disk Management. All well so far. The physicals weren't part of the cluster at this stage, and the cluster was still down.
Next, I tried to bring the cluster back up and, sure enough, it couldn't re-establish quorum. Why? Because the cluster went: "Oh, my quorum and other disks are now presented to hosts that aren't part of my cluster yet. I'm unhappy with that and I ain't coming up yet."
Fix? Made the physicals part of the cluster, then all was well. So now the cluster has 4 nodes. All good!
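For anyone retracing this, a quick sanity check of the end state once the physicals have joined - just a sketch with a placeholder cluster name:

    Import-Module FailoverClusters

    # Should now list all four nodes as Up
    Get-ClusterNode -Cluster SQLCLUS01

    # Quorum and the pRDM-backed disk resources should all be Online
    Get-ClusterQuorum -Cluster SQLCLUS01
    Get-ClusterResource -Cluster SQLCLUS01 | Where-Object { $_.ResourceType -like "*Disk*" }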