
Setting up an Exchange 2003 cluster

I'm going to be part of an Exchange 2003 cluster installation soon, and I was wondering if anybody here has done one before. I've got quite a bit of theory on how to implement this, but if somebody could point me to a step-by-step guide on how to deploy it properly, that would be great!

From what I've seen so far...

* 2 network cards in each server: one for heartbeat/cluster communication, the other for standard network traffic.

* Windows Server 2003 Enterprise Edition AND Exchange Server 2003 Enterprise Edition

* ForestPrep has to be run if Exchange hasn't been deployed in the forest before?
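
(For reference, the prep steps look roughly like this; the drive letter is just an example, ForestPrep needs Schema Admin + Enterprise Admin rights and DomainPrep needs Domain Admin rights. Correct me if I've got that wrong.)

    rem Extend the schema and create the forest-level Exchange objects
    D:\setup\i386\setup.exe /forestprep

    rem Prepare each domain that will host Exchange servers or mail-enabled users
    D:\setup\i386\setup.exe /domainprep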

Comments

  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    http://technet.microsoft.com/en-us/library/bb123612(EXCHG.65).aspx

    Shared storage is probably going to be your biggest hurdle to overcome.
  • royal Member Posts: 3,352 ■■■■□□□□□□
    Here is the actual Clustering documentation that you will need to learn how to create Clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster.

    http://www.microsoft.com/downloads/details.aspx?FamilyID=a5bbb021-0760-48f3-a53b-0351fc3337a1&DisplayLang=en

    Here's the Technet Library Version:
    http://technet.microsoft.com/en-us/library/cc778252.aspx

    Here is the "latest" documentation on how to install the MSDTC (not required in Exchange 2007):
    http://technet.microsoft.com/en-us/library/bb124059.aspx
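
    (For reference, the MSDTC resource can also be created with cluster.exe; this is just a rough from-memory sketch, and the resource, group, and disk names below are examples, so follow the doc above for the supported steps.)

        rem Create the MSDTC resource in the cluster group that owns a shared disk
        cluster res "MSDTC" /create /group:"Cluster Group" /type:"Distributed Transaction Coordinator"

        rem MSDTC should depend on a physical disk and the network name resource
        cluster res "MSDTC" /adddep:"Disk Q:"
        cluster res "MSDTC" /adddep:"Cluster Name"

        rem Bring it online
        cluster res "MSDTC" /online
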
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • penberth Member Posts: 46 ■■□□□□□□□□
    royal wrote:
    Here is the actual Clustering documentation that you will need to learn how to create Clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster.

    http://www.microsoft.com/downloads/details.aspx?FamilyID=a5bbb021-0760-48f3-a53b-0351fc3337a1&DisplayLang=en


    This is the same document that I used when setting up my 2-node cluster at work. Mine was for file shares with an EMC back end. This document worked great.
  • royal Member Posts: 3,352 ■■■■□□□□□□
    penberth wrote:
    royal wrote:
    Here is the actual Clustering documentation that you will need to learn how to create Clusters. You can then refer to dynamik's link for installing Exchange on top of a cluster.

    http://www.microsoft.com/downloads/details.aspx?FamilyID=a5bbb021-0760-48f3-a53b-0351fc3337a1&DisplayLang=en


    This is the same document that I used when setting up my 2-node cluster at work. Mine was for file shares with an EMC back end. This document worked great.

    Yep, it's the best clustering doc out there.
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • mr2nut Member Posts: 269
    Cool, I've downloaded that and will be reading it along the way, cheers. So do I need to set up the 2-node cluster first, then install Exchange after?
  • bertieb Member Posts: 1,031 ■■■■■■□□□□
    royal wrote:
    Here is the "latest" documentation on how to install the MSDTC (not required in Exchange 2007):
    http://technet.microsoft.com/en-us/library/bb124059.aspx

    Thanks for pointing that one out, Royal. I've previously had a few small issues installing/configuring MSDTC on clusters (mostly from following one of the MS links contained in that one), so I'll try this process on the next one I build to see if it helps smooth things out a bit.
    The trouble with quotes on the internet is that you can never tell if they are genuine - Abraham Lincoln
  • bertieb Member Posts: 1,031 ■■■■■■□□□□
    mr2nut wrote:
    Cool, I've downloaded that and will be reading it along the way, cheers. So do I need to set up the 2-node cluster first, then install Exchange after?

    Yep. As Royal says, refer to his links for installing the cluster, then refer to Dynamik's link for the actual Exchange 2003 install.
    The trouble with quotes on the internet is that you can never tell if they are genuine - Abraham Lincoln
  • mr2nut Member Posts: 269
    Cheers, these docs are great.

    One thing that confuses me a bit...

    You have two identical servers with Enterprise 2003 installed, with two network adapters each. I would simply go for a crossover cable for the heartbeat. Then the other two network cards, carrying the normal traffic for the users, would need to be on a different subnet, correct? I understand that bit, but the RAID setup confuses me slightly.

    You have the OS on a SCSI RAID controller in each machine; that bit I understand. But does the cluster data itself have to live in an external RAID box (like a NAS box with two network adapters) so both servers can connect to it? I would assume the cluster storage has to be external to both servers, in case the server holding it went down, which would obviously defeat the point of clustering :)

    I know that may sound a bit confusing, but that's because I am confused. I don't know how else to explain what I'm getting at. Maybe a diagram of a 2-node cluster would help?
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    Are you doing this with VMs or physical machines? Like I said, shared storage can be difficult for home lab work. You might want to set up iSCSI with something like Open Filer and connect both machines to that.

    You can't use NAS. I believe you can use SCSI, iSCSI, or FC, but I'm not 100% sure on that.
  • mr2nut Member Posts: 269
    I'd like to do it with physical machines, but would VMs be a lot cheaper? Also, what are the benefits of each method?

    I'm still a bit lost about the whole 2-node cluster thing. Does the same data reside on each server on a separate RAID inside the server, or do the servers both have to look at an external storage device via the cluster network cards?
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    VMs just let you get by with less physical hardware. They're just more convenient if you have a sufficiently powerful machine. If you have a couple of physical machines lying around, they'll work too. It really doesn't matter; I was just curious what your setup was like.

    Also, you don't need to use RAID. If the guides mention it, it's probably just because that's a best practice. You don't need it in a lab. The data will reside on a shared storage device. As I mentioned earlier, I think you can use SCSI, iSCSI, and FC. You can't use SMB/CIFS/NFS (NAS).
  • mr2nut Member Posts: 269
    dynamik wrote:
    VMs just let you get by with less physical hardware. They're just more convenient if you have a sufficiently powerful machine. If you have a couple of physical machines lying around, they'll work too. It really doesn't matter; I was just curious what your setup was like.

    Also, you don't need to use RAID. If the guides mention it, it's probably just because that's a best practice. You don't need it in a lab. The data will reside on a shared storage device. As I mentioned earlier, I think you can use SCSI, iSCSI, and FC. You can't use SMB/CIFS/NFS (NAS).

    Ahh, so you can't use a RAID NAS box because they don't use SCSI, right? So the main thing for getting clusters running is that it HAS to be a SCSI controller with the drive(s) on it, preferably RAID but it can be a single drive? That's cleared things up a bit now.

    Here's a quick diagram of what I THINK should be going on. Right or wrong?

    [attached diagram: mr2nut's proposed 2-node cluster layout]
  • royal Member Posts: 3,352 ■■■■□□□□□□
    Looks good.

    With Server 2003 clustering I would do the following:
    In the binding order of the NIC properties, make sure the public NIC is on top.
    In the cluster properties, for the heartbeat, make sure the private NIC is at the top of the network priority order and is set to be used only for internal cluster (heartbeat) communication. Make sure the public NIC is set to mixed (it can be set to public only, but I always do mixed for fault tolerance: if something bad happens to a heartbeat NIC, heartbeats can then temporarily go over the public network). In Server 2008, your public NIC is forced to be mixed.
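
    (If you like doing this from the command line, cluster.exe can show and set the network roles too. This is a sketch from memory, so double-check the property values; "Private" and "Public" are whatever you named your cluster networks, and as I recall Role=1 is internal/heartbeat only while Role=3 is mixed.)

        rem List the cluster networks and their current roles
        cluster network

        rem Heartbeat network: internal cluster communications only
        cluster network "Private" /prop Role=1

        rem Public network: mixed (client access plus internal)
        cluster network "Public" /prop Role=3
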
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    Ideally, you'd want to put the iSCSI traffic on its own network/NICs; you don't want a lot of congestion on your heartbeat network. It looks fine for a lab, though. The ability to add hardware, such as additional NICs, is another nice feature of VMs. You should start experimenting with them if you get a chance. You'll likely find them useful in your studies.
  • royal Member Posts: 3,352 ■■■■□□□□□□
    I agree. With iSCSI in production, you always want that traffic going over its own dedicated network. Ideally this would be gigabit, but 10-gigabit iSCSI is here (or if it's not here just yet, it will be soon). But yeah, for labs there's no sense in that, unless maybe it's a pretty big test environment that mimics most or all of your production. For a small VMware lab, run it over your regular NICs; nothing bad will happen.
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • mr2nut Member Posts: 269
    royal wrote:
    I agree. With iSCSI in production, you always want that traffic going over its own dedicated network. Ideally this would be gigabit, but 10-gigabit iSCSI is here (or if it's not here just yet, it will be soon). But yeah, for labs there's no sense in that, unless maybe it's a pretty big test environment that mimics most or all of your production. For a small VMware lab, run it over your regular NICs; nothing bad will happen.

    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. It would be much appreciated.
  • contentpros Member Posts: 115 ■■■■□□□□□□
    I agree with dynamik that shared storage is going to be your toughest hurdle. Well, it may not be the toughest, but it will probably be the most expensive one.

    There are a number of options for shared storage. The cheapest is going to be a directly connected SCSI option. You can find some decent and fairly inexpensive enclosures from companies like Promise; most of these are SCSI-connected and use IDE or SATA drives inside the enclosure, which keeps your costs lower. I work at a shop that is primarily HP, so we make a lot of use of the MSA enclosures, and these are easy to get in SCSI or Fibre Channel. You also have a number of options with EMC and NetApp, as both have iSCSI and Fibre Channel offerings. We have had good luck with the NetApp line (haven't had that much personal experience with EMC). Another nice feature of the NetApp is their SnapMirror feature, which we use for quick restores as well as syncing with the NetApp at our warm backup location.

    The cluster configuration is not hard, just confusing the first time you have a go at it. I prefer to use crossovers for the heartbeat and keep it private, as the heartbeat can create a fair amount of chatter. Also, if you are going to keep your heartbeat on a private range, I would recommend keeping that range far away from any ranges you currently have in use (just in case you decide to change the heartbeat config later).

    A little planning can go a long way toward making your cluster install easier. Identify your addresses ahead of time; if memory serves me correctly, you will need 4 public IPs (2 for the physical machines and 2 for the virtual/cluster instances) as well as your IP range for the heartbeats.
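
    Something like this (all names and addresses below are made up) is enough to jot down before you start:

        Public LAN (e.g. 192.168.1.0/24):
          192.168.1.11   NODE1      physical server 1
          192.168.1.12   NODE2      physical server 2
          192.168.1.20   CLUSTER1   cluster name / administration IP
          192.168.1.21   EXCHVS1    Exchange virtual server
        Heartbeat (e.g. 10.10.10.0/24 on the crossover):
          10.10.10.1     NODE1 private NIC
          10.10.10.2     NODE2 private NIC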

    If you are going to run an antivirus package on the Exchange cluster, check whether whatever you are running is cluster-aware. I know that most of the Norton/Symantec corporate packages are, but IIRC you will have to create a separate resource within the cluster and it will have some dependencies (you will have to check your vendor documentation).

    Partitioning the shared storage can be simple or complex. Simple is 2 partitions: the first is your basic data store, where you are going to house your Exchange data, and the second is the quorum, a small partition the cluster uses to hold its state information. I believe the MS recommendation is something small, like a couple of hundred megs, but we always keep the quorum partition at 1 GB, which is our personal preference. Take a few minutes to map out how you are going to store your data: everything on one partition, or separate partitions for logs, etc.
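
    As a made-up example of the simple layout on the shared storage:

        Q:  quorum (cluster state)        ~1 GB
        E:  Exchange databases
        F:  Exchange transaction logs     optional separate partition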

    You may also want to have a separate account created and ready to roll as the service account for any cluster-related services.

    For the most part the cluster configuration is by the book. Good luck and have fun!
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    mr2nut wrote:
    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. It would be much appreciated.

    Leave it exactly as you have it, add a 3rd NIC to each machine, draw a line straight from one to the other, and move the heartbeat to that one.
  • mr2nut Member Posts: 269
    dynamik wrote:
    Ideally, you'd want to put the iSCSI traffic on its own network/NICs; you don't want a lot of congestion on your heartbeat network. It looks fine for a lab, though. The ability to add hardware, such as additional NICs, is another nice feature of VMs. You should start experimenting with them if you get a chance. You'll likely find them useful in your studies.

    I've had a go with MS Virtual Server recently, installing Server 2k3 on one VM and XP Pro on another for a test environment. I've heard VMware is better though? As we're a Microsoft partner we have Enterprise Server and Exchange discs, so I could try using Virtual PC, right?

    We normally use HP ML350 servers for SBS installations; reckon these would be fine purely for a small Exchange environment? Around 50 users.

    Also, would this work as an iSCSI device between the two nodes?

    http://www.bizrate.co.uk/harddrives/oid594052202.html
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    Probably. You can also use Open Filer, which is a free, open-source NAS/SAN appliance. If performance doesn't matter, you can install it in a VM.
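
    If you go that route on Server 2003, you'd install the Microsoft iSCSI Software Initiator on both nodes and point it at the target. Roughly like this with the iscsicli command line (the portal IP and IQN below are placeholders, and the initiator GUI does the same job more comfortably):

        rem Register the storage box as a target portal
        iscsicli AddTargetPortal 192.168.2.50 3260

        rem See which targets it exposes
        iscsicli ListTargets

        rem Quick login to a target; repeat on both nodes
        iscsicli QLoginTarget iqn.2006-01.com.openfiler:cluster-disk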
  • mr2nut Member Posts: 269
    dynamik wrote:
    mr2nut wrote:
    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. It would be much appreciated.

    Leave it exactly as you have it, add a 3rd NIC to each machine, draw a line straight from one to the other, and move the heartbeat to that one.

    Ah I see, so put a crossover cable between the two nodes on a third NIC, right? Sounds like you've done this quite a bit to know this much, so I assume it's best practice in the real world to have 3 NICs in each server?
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    mr2nut wrote:
    Ah I see, so put a crossover cable between the two nodes on a third NIC, right?

    I actually found a good picture on Wikipedia: http://en.wikipedia.org/wiki/Image:Cluster_Scheme_New.JPG

    The cabling doesn't really matter; it could be straight-through cables connected to a switch. You just want that traffic on its own network.
    mr2nut wrote:
    Sounds like you've done this quite a bit to know this much, so I assume it's best practice in the real world to have 3 NICs in each server?

    Ah, how easy it is to deceive people over the internet. Not so much. Royal's the expert. He's out doing it while I'm here talking about it. I just know a few best practices and a few other tricks.

    Honestly, a real server would probably have something like two 4-port NICs with one port on each card for storage, public, and private traffic, and each NIC would likely be connected to a different switch. That protects against failure of a switch, cable, or NIC.
  • mr2nut Member Posts: 269
    contentpros wrote:
    I agree with dynamik that shared storage is going to be your toughest hurdle. [...]

    Took a while to read all that! Cheers.

    So when you say directly connected SCSI, do you mean there is a separate SCSI controller (besides the OS SCSI controller) in each server, and you connect these together with a SCSI cable? If I've grasped this right, do you install everything you need on the first node and then it replicates to the second node itself? Like you say, I've never even touched clustering before, but I've looked into it lately, mainly due to a client requesting it, and I'm determined to learn it now; I hate things beating me!
  • royal Member Posts: 3,352 ■■■■□□□□□□
    mr2nut wrote:
    dynamik wrote:
    mr2nut wrote:
    Would it be possible at all to edit my tacky picture to show exactly what you mean, please? I'm not too sure I can picture what you're suggesting. It would be much appreciated.

    Leave it exactly as you have it, add a 3rd NIC to each machine, draw a line straight from one to the other, and move the heartbeat to that one.

    Ah I see, so put a crossover cable between the two nodes on a third NIC, right? Sounds like you've done this quite a bit to know this much, so I assume it's best practice in the real world to have 3 NICs in each server?

    If you're doing a cluster, ALWAYS have at least 2 NICs: one for corporate communication (top of the binding order) and one for the heartbeat. If you're doing iSCSI, you would add a 3rd NIC for iSCSI communication, as you want to separate the iSCSI traffic. So no iSCSI = 2 NICs minimum, and iSCSI = 3 NICs minimum.

    Now, if you want more information on storage that doesn't really pertain to what you're doing, read on. If you're doing DAS on a shared storage cabinet, you'll just want a SAS controller card in your server with SAS cables. A lot of SAS controller cards have multiple ports, so you can essentially have multiple SAS cabinets on one controller card. SAS connections can typically support 100 or so disks, but you'll never really do that; for instance, most of the SMB storage shelves have at most 24 disks in a cabinet, which one SAS controller card can easily handle. So let's say your Exchange cluster design calls for shared storage that requires 100 disks spread across 8 cabinets: you would need 8 single-port SAS controller cards, or 4 controller cards with 2 ports each, etc.

    When you start moving into the enterprise storage space, you'll almost always be doing either iSCSI or Fibre Channel, which are actual SAN technologies. Fibre Channel requires a Host Bus Adapter in each server; if you have a bunch of servers, this can be costly, so a lot of companies have been moving to iSCSI as it can be less expensive overall. Note that Fibre Channel is both a disk technology and a way to connect: you can use Fibre Channel disks or SAS disks in a shelf but connect to it via either iSCSI or Fibre Channel. It really depends on what the controllers and shelves support. For instance, some NetApp SANs let you use SAS disks or Fibre Channel disks and connect via either iSCSI or Fibre Channel.
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • royal Member Posts: 3,352 ■■■■□□□□□□
    mr2nut wrote:
    So when you say directly connected SCSI, do you mean there is a separate SCSI controller (besides the OS SCSI controller) in each server, and you connect these together with a SCSI cable? If I've grasped this right, do you install everything you need on the first node and then it replicates to the second node itself? Like you say, I've never even touched clustering before, but I've looked into it lately, mainly due to a client requesting it, and I'm determined to learn it now; I hate things beating me!

    I partially answered this in my last post so I'll answer the rest in this post.

    When working with shared storage, you typically want your soon-to-be active node to be the only one running. You connect it to the shared storage, put a file on it, and shut it down. Then boot up the second node, add the shared storage, and see the file; that verifies the shared storage is working.

    Now you can install clustering on your first node, and that service takes control of the disks. Then you can bring up the second node, and it won't mess up or corrupt the disks, because it won't have access to them while the cluster service on the other node owns them. You can now join the second node to the cluster, and you're all set.
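
    Once both nodes are joined, a quick sanity check from either node might look like this (group and node names are examples, and the cluster.exe syntax is from memory):

        rem Both nodes should show a status of Up
        cluster node

        rem See which node currently owns each group
        cluster group

        rem Test failover by moving the default group across and back
        cluster group "Cluster Group" /moveto:NODE2
        cluster group "Cluster Group" /moveto:NODE1
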
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • mr2nut Member Posts: 269
    The bit that confused me was when somebody said you can do SCSI to SCSI using a lead from one node to the other; that led me to believe the data sat INSIDE one of the actual servers, but then that would defeat the object of it still being available if that server went down, right? So you always have to have the data on an external storage device that is self-powered and not part of either server, correct?
  • royal Member Posts: 3,352 ■■■■□□□□□□
    Of course. Why would you have external storage if the data still sat on the server's internal disks? That would make no sense!
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • mr2nut Member Posts: 269
    royal wrote:
    Of course. Why would you have external storage if the data still sat on the server's internal disks? That would make no sense!

    lol, like I say, it's all very new to me. I've only just begun looking into this technology. It was a bit of a daft question, but hey, we all gotta learn.
  • royal Member Posts: 3,352 ■■■■□□□□□□
    mr2nut wrote:
    royal wrote:
    Of course. Why would you have external storage if the data still sat on the server's internal disks? That would make no sense!

    lol, like I say, it's all very new to me. I've only just begun looking into this technology. It was a bit of a daft question, but hey, we all gotta learn.

    We all start somewhere. Hope the info in this thread helped you get started. Feel free to ask any other questions, even if you think they're stupid. :)
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • mr2nut Member Posts: 269
    Cheers man, really appreciate it. What would we do without friendly forums like this, eh?