Setting up a backup server
grayfox587
Users Awaiting Email Confirmation Posts: 48 ■■□□□□□□□□
Well, at work I'm getting the chance to turn our Novell server into a Windows 2003 backup server in case our main server crashes. I've been doing research and asking around, and I'm not really sure what to do or how to go about it. What I've discovered so far is what appears to be load balancing, but it can be used as a backup. If anyone is a pro at this and can point me in the right direction and tell me anything specific I need to do, it would be greatly appreciated.
They want the server to be completely the same, but I don't see how I can copy the drives without shutting the server down. Anyway, any advice is truly appreciated; I need some.
Sorry, let me be a little more clear: what I mean by a backup server is that if the main server fails, the other server takes over and provides access to the clients.
Comments
-
RTmarc Member Posts: 1,082 ■■■□□□□□□□
grayfox587 wrote: » Well, at work I'm getting the chance to turn our Novell server into a Windows 2003 backup server in case our main server crashes. [...]
What services? Exchange, SQL, domain, etc.?
Microsoft Clustering is the first place I would look. -
undomiel Member Posts: 2,818
If it's just a backup for authentication, then you would just make it a DC and GC. If it's serving files, then you may want to look into using DFS, so that changes are kept in sync and availability is increased (a rough DFS sketch follows below). If it's an application, as mentioned previously, then clustering is what you want to look into.
Jumping on the IT blogging band wagon -- http://www.jefferyland.com/
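A minimal sketch of that DFS setup on Server 2003, assuming hypothetical names (domain example.local, servers SERVER1 and SERVER2, shares files and users); dfsutil and dfscmd come from the Support Tools, and the exact flags may vary by version:

    rem Create a domain-based DFS root on the primary server
    rem (example.local, SERVER1, SERVER2 and the share names are hypothetical)
    dfsutil /AddFtRoot /Server:SERVER1 /Share:files
    rem Add the backup server as a second root target
    dfsutil /AddFtRoot /Server:SERVER2 /Share:files
    rem Create a link under the root, then add an alternate target on the backup server
    dfscmd /map \\example.local\files\users \\SERVER1\users "User home directories"
    dfscmd /add \\example.local\files\users \\SERVER2\users

Replication between the two targets (FRS on Server 2003) still has to be enabled separately, typically through the Distributed File System snap-in, which is where the caveat about highly dynamic data comes in.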
-
RobertKaucher Member Posts: 4,299 ■■■■■■■■■■
grayfox587 wrote: » Sorry, let me be a little more clear: what I mean by a backup server is that if the main server fails, the other server takes over and provides access to the clients.
But what does the primary server do? What does it provide access to?
Without knowing exactly what you mean by a "backup server" it's hard for us to give any advice. If this is a SQL Server or Exchange server that you're trying to create a backup for, you're going to be best served by looking at clustering, but you will also need some sort of shared storage, like iSCSI (a rough sketch of what failover looks like follows below). I'd also suggest you find out exactly what your bosses expect. They said they want the servers to be exactly alike, but is that possible? What do they expect to happen when the primary server goes down? How much time do they expect to pass between failure and recovery? You really need to find out their expectations and then match the technology to that.
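Just to illustrate the clustering route: once a Microsoft cluster is built, Server 2003's cluster.exe can show group ownership and fail a group over by hand. A minimal sketch, assuming hypothetical group and node names (check cluster /? for the exact syntax on your build):

    rem List the resource groups and which node currently owns each one
    cluster group
    rem Manually fail a group over to the other node (names are hypothetical)
    cluster group "SQL Group" /move:NODE2

-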
RTmarc Member Posts: 1,082 ■■■□□□□□□□
Bottom line, we need to know what you mean by "backup server" before we can give you better advice.
-
grayfox587 Users Awaiting Email Confirmation Posts: 48 ■■□□□□□□□□
Well, their server does everything: authentication, DHCP, DNS, file serving, and much more. :P
But the server crashes every once in a while and everyone loses connectivity, so they want a backup server to kick in and let people keep doing their work. I'll try to get more info. -
Peibol Member Posts: 32 ■■□□□□□□□□
I think what he's trying to say is this:
Server1 (the one actually at work) and then Server2 (the new one).
He wants Server2 to be a clone (exactly the same as Server1), so if Server1 goes down, Server2 comes into action, and since it's the same, nothing would have changed; kind of like when you have two hard drives in RAID. That's what I think he's trying to say by backup, or maybe I'm wrong. -
undomiel Member Posts: 2,818
It depends on what the "much more" is. If all you need are authentication, DHCP, DNS, and a file server, that isn't too difficult. A DC + GC would take care of your authentication; split your DHCP scopes across the two servers for DHCP; AD-integrated DNS would take care of itself on its own, just don't forget to put the backup server's IP in as the secondary DNS in the DHCP options (a rough sketch follows below); and finally, the file server can be taken care of by DFS.
Jumping on the IT blogging band wagon -- http://www.jefferyland.com/
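A minimal sketch of that DHCP options piece, assuming hypothetical addresses (primary DC at 192.168.1.10, backup at 192.168.1.11, a 192.168.1.0 scope):

    rem Hand both DNS servers out to clients via option 006 (addresses are hypothetical)
    netsh dhcp server scope 192.168.1.0 set optionvalue 006 IPADDRESS 192.168.1.10 192.168.1.11

Clients will then fall back to the second address on their own if the primary DNS server stops responding.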
-
RobertKaucher Member Posts: 4,299 ■■■■■■■■■■
Peibol wrote: » He wants Server2 to be a clone (exactly the same as Server1), so if Server1 goes down, Server2 comes into action [...]
We get that much, but the devil is in the details. Different services will be made redundant in different ways.
As far as DNS, DHCP, and Active Directory go, this will be pretty easy. Just make it a Domain Controller and GC. For DHCP, create a scope on the "backup server" using the 80/20 rule: 80% of your available scope should go on the primary and 20% of the available addresses on the backup server (a rough netsh sketch follows below).
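A minimal sketch of that split with netsh, assuming a hypothetical 192.168.1.0/24 scope with a pool of .100 through .199; each server defines the same scope but excludes the other's portion:

    rem On the primary: full range, but exclude the backup server's 20%
    netsh dhcp server add scope 192.168.1.0 255.255.255.0 "Office LAN"
    netsh dhcp server scope 192.168.1.0 add iprange 192.168.1.100 192.168.1.199
    netsh dhcp server scope 192.168.1.0 add excluderange 192.168.1.180 192.168.1.199

    rem On the backup: same scope definition, but exclude the primary's 80%
    netsh dhcp server add scope 192.168.1.0 255.255.255.0 "Office LAN"
    netsh dhcp server scope 192.168.1.0 add iprange 192.168.1.100 192.168.1.199
    netsh dhcp server scope 192.168.1.0 add excluderange 192.168.1.100 192.168.1.179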
As far as file and print services go, this is where things will get sticky... Even DFS might not help you in this. How much does the data change? DFS is not well suited for highly dynamic data. What can you tell us about the amount of data and how often it changes? -
grayfox587 Users Awaiting Email Confirmation Posts: 48 ■■□□□□□□□□
RobertKaucher wrote: » We get that much, but the devil is in the details. Different services will be made redundant in different ways. [...]
With the 80/20 rule, wouldn't that only handle 20% of users when the main server fails? But yeah, that's what I've seen while doing research: map out a different scope and such. Is there any specific software I need that recognizes when the main server fails? -
dynamik Banned Posts: 12,312 ■■■■■■■■■□
RobertKaucher wrote: » Different services will be made redundant in different ways.
Exactly.
Even within a single application, such as Exchange, there are different ways to make each type of role highly available. You need to provide us with a detailed list of everything that server is doing.
How much time does it have to be back up and running in, and how much data loss is tolerable? If you're only large enough to have a single server, I can almost guarantee that your budget is too small to provide instantaneous fail-over with no data loss.
The 20% in the 80/20 rule is only to provide new addresses while the 80% server is down. Leases are valid for 8 days by default (see the sketch below). However, I don't see any reason not to go 50/50, and that's how I set mine up.
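A small sketch of checking and setting that lease time with netsh; option 051 is the lease duration in seconds, and the scope address is hypothetical:

    rem Show the options currently set on the scope, including the lease time
    netsh dhcp server scope 192.168.1.0 show optionvalue
    rem Set the lease explicitly to the 8-day default (8 * 86400 = 691200 seconds)
    netsh dhcp server scope 192.168.1.0 set optionvalue 051 DWORD 691200

-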
RobertKaucher Member Posts: 4,299 ■■■■■■■■■■
grayfox587 wrote: » With the 80/20 rule, wouldn't that only handle 20% of users when the main server fails? [...]
No, it would be able to handle as many users as would fit into 20% of your scope. The usual value for a DHCP lease is 7 or 8 days. So let's look at the situation:
Let's imagine a 192.168.1.0/24 network. If you have a DHCP scope with 100 addresses and you have 80 devices, then your primary server will have 80 addresses and your secondary will have 20. When the primary server goes down, not every user on the network will need an address; only a few will require one. Most of the devices will have between 2 and 8 days before they renew their addresses. What clients will notice most will be DNS. Let's imagine a worst-case scenario where 10% of the devices require an address immediately. That would only be 8 addresses and would leave you another 12 free addresses to lease over the next few days. Also, if you find the primary server is going to be down for more than just a few days, you can easily expand the scope to overlap what was on the primary server (see the sketch below).
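Expanding the backup server's 20% to cover the whole pool during a long outage is just a matter of removing its exclusion range; a minimal sketch, using the hypothetical addresses from the earlier example:

    rem On the backup server: release the primary's 80% of the pool for leasing
    netsh dhcp server scope 192.168.1.0 delete excluderange 192.168.1.100 192.168.1.179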
80/20 is not my recommendation; it is Microsoft’s as a general best practice. http://technet.microsoft.com/en-us/library/cc780311(WS.10).aspx
You can tweak the numbers to fit your environment better. -
RTmarc Member Posts: 1,082 ■■■□□□□□□□
dynamik wrote: » The 20% in the 80/20 rule is only to provide new addresses while the 80% server is down. [...] However, I don't see any reason not to go 50/50, and that's how I set mine up.
Correct, and I agree with the 50/50 rule. That's always how I've done it. No reason to limit yourself to 80/20.
dynamik wrote: » If you're only large enough to have a single server, I can almost guarantee that your budget is too small to provide instantaneous fail-over with no data loss.
This is dead-on. When you start talking about Exchange and SQL (SharePoint, OCS, etc.) having redundancy, you bring in shared storage. That's a completely separate animal and adds a lot to the expenses. -
RobertKaucher Member Posts: 4,299 ■■■■■■■■■■
RTmarc wrote: » Correct, and I agree with the 50/50 rule. That's always how I've done it. No reason to limit yourself to 80/20.
I agree with the 50/50 suggestion. I'm not sure why MS suggests 80/20 so often.
But the total of both pools should be larger than (not equal to) the total number of addresses required by devices, IMO.
Actually, I think I realized why MS suggests only 80/20. I bet they assume the DHCP servers are actually on different subnets...