royal wrote: Here is the actual Clustering documentation that you will need to learn how to create clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster. http://www.microsoft.com/downloads/details.aspx?FamilyID=a5bbb021-0760-48f3-a53b-0351fc3337a1&DisplayLang=en
penberth wrote: royal wrote: Here is the actual Clustering documentation that you will need to learn how to create clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster. http://www.microsoft.com/downloads/details.aspx?FamilyID=a5bbb021-0760-48f3-a53b-0351fc3337a1&DisplayLang=en This is the same document that I used when setting up my 2-node cluster at work. Mine was for file shares with an EMC back end. This document worked great.
royal wrote: Here is the "latest" documentation on how to install the MSDTC (not required in Exchange 2007): http://technet.microsoft.com/en-us/library/bb124059.aspx
mr2nut wrote: Cool, I've downloaded that and will be reading it along the way, cheers. So do I need to set up the 2-node cluster first, then install Exchange after?
dynamik wrote: VMs just let you get by with less physical hardware. They're just more convenient if you have a sufficiently powerful machine. If you have a couple of physical machines lying around, they'll work too. It really doesn't matter; I was just curious what your setup was like. Also, you don't need to use RAID. If the guides mention it, it's probably just because that's a best practice. You don't need it in a lab. The data will reside on a shared storage device. As I mentioned earlier, I think you can use SCSI, iSCSI, and FC. You can't use SMB/CIFS/NFS (NAS).
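One practical gotcha there: the cluster needs the shared storage to show up in Windows as a local block disk, not a network share. If you're ever unsure how Windows is classifying a drive, here's a quick Python sketch using the Win32 GetDriveType API (the drive letters are just made-up examples; run it on the node itself):

```
# How Windows classifies a drive: block storage (direct SCSI/iSCSI/FC) shows up
# as a fixed local disk, while a NAS share shows up as a remote drive.
# Windows-only; the drive letters below are made-up examples.
import ctypes

DRIVE_TYPES = {
    2: "removable",
    3: "fixed local disk (block-level -- usable for cluster storage)",
    4: "remote network share (NAS -- not usable for cluster storage)",
    5: "cd-rom",
}

for letter in ("C", "E", "Z"):
    kind = ctypes.windll.kernel32.GetDriveTypeW(letter + ":\\")
    print(letter + ":", DRIVE_TYPES.get(kind, "other (code %d)" % kind))
```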
royal wrote: I agree. With iSCSI in production, you always want that traffic going over its own dedicated network. Ideally this would be gigabit, and 10-gigabit iSCSI is here (or if it's not here just yet, it will be soon). But yeah, for labs, there's no sense in that unless it's a pretty big test environment that mimics most or all of your production. For a small VMware lab, run it over your regular NICs. Nothing bad will happen.
mr2nut wrote: Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not sure I can picture what you're suggesting. Would be much appreciated.
dynamik wrote: Ideally, you'd want to put the iSCSI traffic on its own network/NICs. You shouldn't have a lot of congestion on your heartbeat network. Looks fine for a lab, though. The ability to add hardware, such as additional NICs, is another nice feature of VMs. You should start experimenting with them if you get a chance. You'll likely find them useful in your studies.
dynamik wrote: mr2nut wrote: Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not sure I can picture what you're suggesting. Would be much appreciated. Leave it exactly as you have it, add a third NIC to each machine, draw a line straight from one to the other, and move the heartbeat to that one.
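If it helps to picture what that heartbeat link actually carries: each node just sends a tiny "I'm alive" packet to its peer and flags the peer as down when packets stop arriving. Here's a toy Python sketch of the idea (this is NOT the real MSCS protocol, and the 10.10.10.x addresses are made up for the crossover link; swap the two addresses when running it on the other node):

```
# Toy heartbeat between two cluster nodes -- an illustration only, not MSCS.
# Assumes the crossover-cable NICs are 10.10.10.1 (this node) and 10.10.10.2 (peer).
import socket
import time

LOCAL = ("10.10.10.1", 3343)   # bind to the heartbeat NIC so traffic stays on the crossover
PEER = ("10.10.10.2", 3343)    # UDP 3343 happens to be the real cluster service port
TIMEOUT = 5.0                  # seconds of silence before we declare the peer down

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LOCAL)
sock.settimeout(1.0)

last_heard = time.time()
while True:
    sock.sendto(b"alive", PEER)        # announce ourselves every second
    try:
        sock.recvfrom(64)              # wait briefly for the peer's announcement
        last_heard = time.time()
    except OSError:                    # timeout or transient network error
        pass
    if time.time() - last_heard > TIMEOUT:
        print("peer missed heartbeats -- this is where failover would kick in")
    time.sleep(1.0)
```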
mr2nut wrote: Ah, I see, so put a crossover cable between the two nodes with a third NIC, right?
mr2nut wrote: Sounds like you've done this quite a bit to know this much, so I assume it's best practice in the real world to have three NICs in each server?
contentpros wrote: I agree with dynamik that shared storage is going to be your toughest hurdle. Well, it may not be the toughest, but it will probably be the most expensive. There are a number of options for shared storage. The cheapest is going to be directly connected SCSI. You can find some decent and fairly inexpensive options from companies like Promise; most of these will be SCSI-connected and use IDE or SATA drives within the enclosure, which keeps your costs lower. I work at a shop that is primarily HP, so we make a lot of use of the MSA enclosures, and these are easy to get in SCSI or fibre channel. You also have a number of options with EMC and NetApp, as both have iSCSI and fibre channel offerings. We have had good luck with the NetApp line (haven't had that much personal experience with EMC). Another nice feature with NetApp is their SnapMirror feature, which we use for quick restores as well as for syncing with our NetApp at our warm backup location.

The cluster configuration is not hard, just confusing the first time you have a go at it. I prefer to use crossovers for the heartbeat and keep it private, as the heartbeat can create a fair amount of chatter. Also, if you are going to keep your heartbeat on a private range, I would recommend keeping that range far away from any ranges you currently have in use (just in case you decide to change the heartbeat config later).

A little planning can go a long way in making your cluster install easier. Identify your addresses ahead of time; if memory serves me correctly, you will need 4 IPs for the publics (2 for the physical machines and 2 for the virtual, or cluster, instances) as well as your IP range for the heartbeats.

If you are going to run an antivirus package on the Exchange cluster, check whether whatever you are running is cluster-aware. I know that most of the Norton/Symantec corporate packages are, but IIRC you will have to create a separate resource within the cluster, and it will have some dependencies (you will have to check your vendor documentation).

Partitioning for the shared storage can be simple or complex. Simple is 2 partitions: the first is your basic data store, where you are going to house your Exchange data, and the second is the quorum, a small partition that the cluster uses for holding its state information. I believe the MS recommendation is something small, like a couple hundred megs, but we always keep our quorum partitions at 1GB, which is just personal preference. Take a few minutes to map out how you are going to store your data: everything on one partition, or logs separate, etc.

You may also want to have a separate account created and ready to roll as the service account for any cluster-related services. For the most part, the cluster configuration is by the book. Good luck and have fun!
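To put some numbers on the planning advice above: identify the four public addresses and the heartbeat range before you start, and make sure the heartbeat range can't collide with anything you already route. A quick sanity check along these lines (plain Python; all the ranges and names are made-up examples) is cheap insurance:

```
# Sketch of "plan your addresses ahead of time" -- every range below is an example.
import ipaddress

# Ranges already in use in production (assumed for illustration).
in_use = [
    ipaddress.ip_network("192.168.1.0/24"),    # main LAN
    ipaddress.ip_network("192.168.10.0/24"),   # server VLAN
]

# Proposed private heartbeat range -- deliberately far from anything above.
heartbeat = ipaddress.ip_network("10.10.10.0/24")

for net in in_use:
    if heartbeat.overlaps(net):
        raise SystemExit("heartbeat range %s collides with %s -- pick another" % (heartbeat, net))

# The four public IPs: two physical nodes plus two virtual (cluster) instances.
publics = {
    "node1": "192.168.1.11",        # physical machine 1
    "node2": "192.168.1.12",        # physical machine 2
    "cluster": "192.168.1.13",      # cluster instance
    "exchange-vs": "192.168.1.14",  # Exchange virtual server
}
assert len(set(publics.values())) == len(publics), "public IPs must be unique"
print("address plan looks sane:", publics)
```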
mr2nut wrote: So when you say directly connected SCSI, do you mean there is a separate SCSI controller from the OS SCSI controller in each server, and you connect these together with a SCSI cable? If I've grasped this right, do you install everything you need on the first node, and then it will replicate to the second node itself? Like I say, I've never even touched clustering before, but I've looked into it lately, mainly due to a client requesting it, and I'm determined to learn it now; I hate things beating me!
royal wrote: Of course. Why would you have external storage if the data still sits on the server's internal disks? That would make no sense!
mr2nut wrote: royal wrote: Of course. Why would you have external storage if the data still sits on the server's internal disks? That would make no sense! lol, like I say, it's all very new to me. I've only just begun looking into this technology. It was a bit of a daft question, but hey, we all gotta learn.