Clustering and NLB and DFS

itdaddy Member Posts: 2,089 ■■■■□□□□□□
Hey dudes

Here's my issue: we have NLB and clustering with Exchange 2003.

NLB is for HTTP and, say, FTP servers,
and clustering is for Exchange 2003 EE. And I hear you have
to have a shared physical drive. Does it have to be SCSI RAID,
or can it be a simple disk? One HDD that they share?

I am totally confused. Someone help my dumbness!
Thanks everyone in advance...

DFS servers: so a DFS server groups share links?
What for? And you can write your scripts with the UNC path of the DFS server,
and it will seek out the other shares, to help with scripting?
I have heard something like this?

So, in a nutshell: what if we didn't have DFS or
clustering or NLB? What advantage do these technologies give us?
I have read the white papers on these, but understanding them
is on the tip of my tongue and I can't quite see it.

Sorry, these must be dumb questions, but I just can't see it.


  • royal Member Posts: 3,352 ■■■■□□□□□□
    NLB is for distributing traffic. For example, you have a single NLB IP address; that address could distribute traffic to 3 different web servers, depending on the configuration of the NLB cluster. This is used more for stateless servers whose data does not change often. Each server has its own hard drives, and you have to synchronize the data manually, or through special software, to keep the data up to date across all your NLB servers.
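
    To make the idea concrete, here is a minimal sketch of one possible distribution policy (round-robin) behind a single virtual address. The server names are hypothetical, and real NLB supports other policies (affinity, weighting); this only shows the "one IP, many servers" concept.

```python
# Hypothetical sketch: one NLB virtual address spreading requests
# across several web servers using a simple round-robin policy.
from itertools import cycle

web_servers = ["web1", "web2", "web3"]  # hypothetical host names
next_server = cycle(web_servers)        # endless round-robin iterator

def route_request() -> str:
    """Return the server that handles the next incoming request."""
    return next(next_server)

# Three consecutive requests land on three different servers.
assignments = [route_request() for _ in range(3)]
print(assignments)  # ['web1', 'web2', 'web3']
```

    Each server in the rotation would still need its content synchronized out of band, exactly as described above.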

    Server clustering is used more for failover. Your server cluster nodes are all connected to an external storage device through SCSI or Fibre Channel. When a server dies, its services fail over to another node. It is used for things like databases, where information is constantly changing. For example, your database is stored on a SAN array volume D. You have 2 cluster nodes. SQL Server is installed on both servers, and SQL Server is cluster-aware. One server is the current owner of D, and if that server goes down, the other server becomes the new owner.
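
    The ownership hand-off described above can be sketched in a few lines. This is a toy model, not the Windows cluster service: node names and the "D:" volume are hypothetical, and the only behavior shown is that when the current owner fails, a surviving node takes ownership of the shared volume.

```python
# Toy model of cluster failover: two nodes share one volume, and
# ownership of the volume moves when the current owner fails.

class Cluster:
    def __init__(self, nodes, volume):
        self.nodes = list(nodes)        # e.g. ["node1", "node2"]
        self.volume = volume            # shared disk, e.g. "D:"
        self.owner = self.nodes[0]      # current owner of the volume
        self.alive = {n: True for n in self.nodes}

    def node_failed(self, node):
        self.alive[node] = False
        if node == self.owner:
            # Fail over: the first surviving node takes ownership.
            survivors = [n for n in self.nodes if self.alive[n]]
            self.owner = survivors[0] if survivors else None

cluster = Cluster(["node1", "node2"], "D:")
cluster.node_failed("node1")
print(cluster.owner)  # node2
```

    The key point is that only one node "owns" the shared storage at a time, which is why clustering suits constantly changing data where NLB-style copies would drift apart.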

    A good way to think of how you can mix NLB and clustering is to imagine a collection of web servers acting as a front-end that pulls data from a database back-end. The web servers use NLB to distribute the load across each other, and all of them pull information from a highly available SQL Server cluster.

    DFS is a different technology. Basically, DFS lets you set up virtual paths that end in a real physical folder. So, for instance, instead of \\server\share, you can use a domain-based root such as \\domain\root (which can be linked to several servers if you want), and then set up virtual folders under it for administrative simplicity. So you can have \\domain\Chicago\Marketing, and all of that could be virtual. Now add \\domain\Chicago\Marketing\Portfolio. The Portfolio part is the real folder name, and it points the user to one or more servers. All the servers that host Portfolio can be synchronized with each other so they contain the same information. There are many different configurations possible with DFS.

    You can still configure mapped drives to point to your DFS shares, because they just use normal UNC paths. The nice thing about DFS is that it is site-aware. If you have a path such as \\domain\Marketing, without a city in it, and a user connecting from Chicago opens the Marketing folder, DFS will point that user to the Chicago server that holds the Marketing folder. If a user from Miami connects to the same share, and there is a Miami server that holds the Marketing folder, that user will be sent to the Miami server.
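
    The site-aware lookup can be sketched as a small table: one logical UNC path, several real folder targets, and the client steered to a target in its own site. All the server and domain names here are hypothetical, and the fallback rule (pick any target when no same-site one exists) is a simplification of what DFS actually does.

```python
# Sketch of DFS site-aware referral: one logical path maps to several
# real folder targets, and a client is sent to a target in its own site.
TARGETS = {
    # logical DFS path -> {site name: real server share}
    r"\\domain\Marketing": {
        "Chicago": r"\\chi-fs1\Marketing",
        "Miami":   r"\\mia-fs1\Marketing",
    },
}

def resolve(dfs_path: str, client_site: str) -> str:
    """Return the real share a client in client_site should use."""
    targets = TARGETS[dfs_path]
    # Prefer a target in the client's own site; else fall back to any one.
    return targets.get(client_site, next(iter(targets.values())))

print(resolve(r"\\domain\Marketing", "Chicago"))  # \\chi-fs1\Marketing
print(resolve(r"\\domain\Marketing", "Miami"))    # \\mia-fs1\Marketing
```

    The client only ever types the logical path; which physical server answers is the namespace's decision.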

    It all depends on what you want to do with your infrastructure. You can use DFS with a single file server per site, and if that file server goes down, clients are sent to another site, where the data could be different and access could be a lot slower. Of course, you can configure the shares that DFS points to so they are hosted by server clusters, giving you a very nice file infrastructure: clustering provides the high availability, and in the rare chance the whole cluster goes down, DFS can send clients to another site temporarily (DFS can also be configured to fail back to the user's home site).

    Hope this helps.
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • itdaddy Member Posts: 2,089 ■■■■□□□□□□

    Dude, you rock! You are a rocket scientist and an expert instructor!
    Well done; exactly what I was looking for, a perfect, detailed explanation.
    Let us archive this bad boy! Thanks so much. "I see," said the blind man,
    "I see!"
    Thanks so much for your time. Wow!
  • itdaddy Member Posts: 2,089 ■■■■□□□□□□

    So a poor man's cluster would be, say, this?

    Two servers that happen to be DC1 and DC2 (domain controllers),
    running 2003 EE with Exchange 2003 EE, of course.

    One NIC goes to the LAN; the other goes to another computer
    housing a SCSI RAID 1 array (2 mirrored drives). This is for lack
    of an external storage device.

    So the 2nd NIC goes to this storage PC.

    The other server, DC2, has the same thing:
    1 NIC goes to the LAN and the 2nd goes to the storage PC
    with the SCSI RAID array. That storage PC, of course, has
    2 NICs, one to each of DC1 and DC2.

    Will this work?
  • royal Member Posts: 3,352 ■■■■□□□□□□
    You can use something like a PCI card to connect to external SCSI storage or Fibre Channel storage. SCSI works in a chain: you connect the servers along the chain and terminate it at the end of the chain.

    As for NICs, you can use 1 NIC or 2 NICs. If you use 1 NIC, then the heartbeat and LAN traffic are sent across the same path. It is better to use 2 NICs: 1 NIC for the heartbeat (dedicated node-to-node communication) and the other NIC for LAN communication. You can even get 2 dual-port NIC cards for an even higher level of fault tolerance: on each card, one port is for heartbeat and one port is for LAN communication. One card will be doing the heartbeat and the other will be doing LAN communication, and if the heartbeat port on NIC 1 dies, NIC 2's dedicated heartbeat port takes over.
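
    The redundant-heartbeat idea above can be sketched as follows: the passive node declares its peer dead only when every heartbeat path has gone quiet, which is exactly why losing one NIC port does not trigger a false failover. This is a simplified model with hypothetical path names, not the actual cluster heartbeat protocol.

```python
# Sketch of redundant heartbeats: the peer is considered dead only
# when ALL heartbeat paths (NIC ports) have been silent past a timeout.
import time

class HeartbeatMonitor:
    def __init__(self, paths, timeout=5.0):
        # Last time we heard from the peer on each path (NIC port).
        self.last_seen = {p: time.monotonic() for p in paths}
        self.timeout = timeout

    def beat(self, path):
        """Record a heartbeat received on one path."""
        self.last_seen[path] = time.monotonic()

    def peer_is_dead(self, now=None):
        if now is None:
            now = time.monotonic()
        # Dead only if every redundant path is silent past the timeout.
        return all(now - t > self.timeout for t in self.last_seen.values())

mon = HeartbeatMonitor(["nic1-hb", "nic2-hb"], timeout=5.0)
mon.beat("nic2-hb")  # NIC 1's port died, but NIC 2 still carries beats
print(mon.peer_is_dead())  # False
```

    With a single shared NIC, one cable fault silences the only path and forces a failover; with two dedicated ports, both must fail before the peer is presumed dead.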
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • itdaddy Member Posts: 2,089 ■■■■□□□□□□
    So you bless my idea, right?

    Let me get this straight:

    2 NICs per MX server;
    3 NICs in the external storage PC with SCSI RAID 1 (2 HDDs).

    The two NICs on the external storage PC go to each MX server,
    while the 3rd NIC is for the heartbeat of the network.

    The 2nd NIC in each MX server goes to the external storage,
    and the first NIC carries the heartbeat for each MX server.

    Running Exchange Enterprise Edition on both MX servers
    (the 180-day edition, of course) and running Server 2003 on the external
    storage PC with the SCSI RAID 1 drives, probably 80-gig drives.

    What do ya think?
    And the data store will be on the external storage PC, with the SCSI RAID
    array working as an extra drive.

    Will SATA RAID 1 work if I want that on my external storage computer,
    versus SCSI RAID 1?
    I have seen some decently priced SATA drives and a RAID 1 controller.