NIC configuration for NLB and failover clusters
What NIC configuration should I use for an IIS NLB cluster and a SQL failover cluster? My goal is to avoid a single point of failure on any connection, including the switches.
Scenario:
2 servers in an NLB cluster (domain members)
2 servers in a SQL and file failover cluster (both domain controllers)
The web content for the IIS servers will be stored on the failover cluster (I don't want to use DFSR replication), accessed via \\fileclustername\folder
The IIS servers will connect to the SQL cluster IP (see the sketch after this list).
My problem is how to set up the NIC addressing.
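To make the two access paths concrete (web content from the file-cluster share, data via the SQL cluster's virtual address from the plan below), here is a minimal Python sketch. It uses the third-party pyodbc module; the database name, the legacy "SQL Server" ODBC driver, and Windows authentication are assumptions for illustration, not part of the plan:

```python
# Sketch of how the web tier consumes the two clustered back-end resources.
# Database name and ODBC driver below are placeholder assumptions.
import os
import pyodbc

# Web content lives on the failover cluster's file share, addressed by the
# cluster name so a node failover is transparent to the web tier.
wwwroot = r"\\fileclustername\folder"
print("share reachable:", os.path.isdir(wwwroot))

# Likewise, connect to SQL via the cluster's virtual IP (or network name),
# never an individual node address.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=172.16.100.50;"
    "DATABASE=WebAppDb;Trusted_Connection=yes"  # placeholder database
)
print(conn.cursor().execute("SELECT @@SERVERNAME").fetchone()[0])
```

The design point is simply that the IIS nodes only ever reference cluster names and addresses, so either back-end node can fail without the web tier noticing.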
Idea:
NLB cluster
NLB NODE 1
Public1 & Public2 = PublicNICteam
IP 192.168.100.50
MASK: 255.255.255.0
GATEWAY: 192.168.100.254
DNS: N/A
Heartbeat1
IP 10.10.100.10
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
Heartbeat2
IP 10.10.110.10
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
Private1 & Private2 = PrivateNICteam
IP 172.16.100.50
MASK: 255.255.255.0
GATEWAY: N/A
DNS: 172.16.100.10
172.16.100.20
NLB NODE 2
Public1 & Public2 = PublicNICteam
IP 192.168.100.60
MASK: 255.255.255.0
GATEWAY: 192.168.100.254
DNS: N/A
Heartbeat1
IP 10.10.100.20
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
Heartbeat2
IP 10.10.110.20
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
Private1 & Private2 = PrivateNICteam
IP 172.16.100.60
MASK: 255.255.255.0
GATEWAY: N/A
DNS: 172.16.100.10
172.16.100.20
Public NLB Cluster IP (on the two PublicNICteams)
192.168.100.100
MASK: 255.255.255.0
Failover cluster
SQL F/O NODE 1
Private1 & Private2 = PrivateNICteam
IP 172.16.100.10
MASK: 255.255.255.0
GATEWAY: N/A
DNS: 172.16.100.10
Heartbeat1
IP 10.10.120.10
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
Heartbeat2
IP 10.10.130.10
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
SQL F/O NODE 2
Private1 & Private2 = PrivateNICteam
IP 172.16.100.20
MASK: 255.255.255.0
GATEWAY: N/A
DNS: 172.16.100.10
Heartbeat1
IP 10.10.120.20
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
Heartbeat2
IP 10.10.130.20
MASK: 255.255.255.0
GATEWAY: N/A
DNS: N/A
SQL F/O Cluster IP (on the two PrivateNICteams)
172.16.100.50
MASK: 255.255.255.0
Does it make any sense?
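Before anyone answers that: one cheap way to sanity-check an addressing plan like this is to run it through a small script that looks for duplicate addresses and subnet mismatches. A throwaway sketch using only the Python standard library (the tuples below just transcribe the plan above):

```python
# Sanity-check the proposed addressing plan for duplicates and subnet
# mismatches before touching any NIC. Standard library only.
import ipaddress
from collections import Counter

# (interface, address, intended /24) transcribed from the plan above
plan = [
    ("NLB1 PublicNICteam",  "192.168.100.50",  "192.168.100.0/24"),
    ("NLB1 Heartbeat1",     "10.10.100.10",    "10.10.100.0/24"),
    ("NLB1 Heartbeat2",     "10.10.110.10",    "10.10.110.0/24"),
    ("NLB1 PrivateNICteam", "172.16.100.50",   "172.16.100.0/24"),
    ("NLB2 PublicNICteam",  "192.168.100.60",  "192.168.100.0/24"),
    ("NLB2 Heartbeat1",     "10.10.100.20",    "10.10.100.0/24"),
    ("NLB2 Heartbeat2",     "10.10.110.20",    "10.10.110.0/24"),
    ("NLB2 PrivateNICteam", "172.16.100.60",   "172.16.100.0/24"),
    ("NLB cluster IP",      "192.168.100.100", "192.168.100.0/24"),
    ("SQL1 PrivateNICteam", "172.16.100.10",   "172.16.100.0/24"),
    ("SQL1 Heartbeat1",     "10.10.120.10",    "10.10.120.0/24"),
    ("SQL1 Heartbeat2",     "10.10.130.10",    "10.10.130.0/24"),
    ("SQL2 PrivateNICteam", "172.16.100.20",   "172.16.100.0/24"),
    ("SQL2 Heartbeat1",     "10.10.120.20",    "10.10.120.0/24"),
    ("SQL2 Heartbeat2",     "10.10.130.20",    "10.10.130.0/24"),
    ("SQL cluster IP",      "172.16.100.50",   "172.16.100.0/24"),
]

# Flag any address assigned more than once (e.g. a node IP colliding
# with a cluster IP).
counts = Counter(ip for _, ip, _ in plan)
for name, ip, _ in plan:
    if counts[ip] > 1:
        print(f"DUPLICATE: {ip} ({name})")

# Flag any address that is not inside the subnet it is supposed to be on.
for name, ip, net in plan:
    if ipaddress.ip_address(ip) not in ipaddress.ip_network(net):
        print(f"WRONG SUBNET: {name} {ip} not in {net}")
```

For what it's worth, run against the plan as written, the duplicate check flags 172.16.100.50: it is assigned both to NLB node 1's PrivateNICteam and to the SQL failover cluster IP, so one of those two would need a different address.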
Comments
- astorrs: What is the point of Private1 & Private2 in the NLB cluster? You already have a couple of public interfaces and a pair of heartbeats...?
- bertieb: astorrs wrote: What is the point of Private1 & Private2 in the NLB cluster? You already have a couple of public interfaces and a pair of heartbeats...?
Out-of-band management/backup type network perhaps?
- bertieb: Also, and I know it's slightly off topic, but running cluster nodes as domain controllers is generally frowned upon as bad practice and is typically not a recommended or supported configuration by Microsoft (especially with Exchange; I believe running a clustered SQL 2005/2008 instance on DCs is also unsupported now).
http://support.microsoft.com/default.aspx?scid=kb;EN-US;q281662
You'll be OK with SQL in that configuration for a test environment, but be wary of the drawbacks if this is a production system.
I think astorrs posted somewhere about some possible issues you may have with NLB and NIC teaming, but I can't find it.
Can you please clarify astorrs' question about the private networks? I'm also struggling to visualise your setup. You also have a lot of heartbeats in the true cluster; are you just using up spare NICs? Typically I'd configure one heartbeat, then configure clustering to use that LAN as the primary network for cluster communications and the public network as a fallback (i.e. mixed mode). Though if you've got the NICs, then great.
- PiotrIr: Many thanks for your help.
I really appreciate it, especially as this is my first cluster in a production environment. I may have made this too complex, but my goal is to avoid every single point of failure, including switches and NICs. Teaming, or using two separate heartbeat networks, provides failover across the switches. The question is whether I should do this, and how to simplify the solution as much as possible.
Private1 & Private2 is a teamed network dedicated to communication with the SQL cluster and with the shared folder on the file cluster (one cluster, two roles) holding the wwwroot folder (unfortunately I have to keep it on a shared folder, but that is another story). One NIC from the team is connected to one switch, the second to another switch.
Public1 & Public2 is a teamed network dedicated to Internet traffic (again, one NIC from the team on each switch).
The two separate heartbeat networks are for NLB cluster communication (each connected to a different switch).
I thought it was a good idea to separate the SQL and shared-storage traffic from the Internet-facing cluster traffic, but if I'm wrong please let me know. As for teaming: is there any way to provide switch redundancy and NIC redundancy without it? How do you do this in practice? (See the probe sketch after this comment.)
About the DC:
This solution will be a little more complicated than the scenario above. In addition, there will be a second failover cluster connected to the same SAN as the SQL and file cluster. It will be a Hyper-V host for 5 virtual machines (a small Exchange server, a management server, an IIS web server for non-critical sites, and two less important machines). I understand your recommendation is not to run a DC on the SQL cluster nodes. In the Microsoft documentation I have read that all servers in a failover cluster should have the same domain role (member or DC). I could move the DCs from the SQL cluster to the Hyper-V cluster, but unfortunately I'm not able to set up two additional servers as DCs due to budget limitations. As far as I know, it is also not a good idea to run a DC on the IIS servers. What should I do with AD? And why might a DC on the cluster nodes cause problems?
Please advise what this solution should look like.
Once again, many thanks and best regards.
- astorrs: Quick follow-up question before I reply ;) ...
Are the IIS servers going to be directly on the Internet or behind a firewall (and if so, what type)?
- PiotrIr: Yes, I'm going to use a firewall and NAT the external IP to the IIS servers, obviously opening only ports 80 and 443.
Two Juniper Networks SSG 140 firewalls in a cluster.
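Once that NAT is in place, a quick probe from outside can confirm that only 80 and 443 are actually exposed. In the sketch below the public address is a placeholder from the TEST-NET documentation range, and the extra ports are just examples of services that should stay filtered:

```python
# External check of the firewall NAT: only 80/443 should answer.
import socket

EXTERNAL_IP = "203.0.113.10"  # placeholder public IP, not a real address
for port in (80, 443, 445, 1433, 3389):
    try:
        with socket.create_connection((EXTERNAL_IP, port), timeout=2):
            print(f"port {port}: OPEN")
    except OSError:
        print(f"port {port}: filtered/closed")
```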