
Enterprise Level Backup Solutions

tbgree00 Member Posts: 553 ■■■■□□□□□□
I just started a new job and one of my responsibilities is designing a new backup plan. We have a serious amount of data and some tight windows to back it up in. We currently do the following:

We use Veeam to back up the virtual machines at the block level, around 2 TB. The weekly full backup to a 3 TB USB 3.0 drive takes roughly 12-18 hours and the daily differential takes around an hour with no impact on the production network. This is acceptable for now.

We also use Backup Exec to back up data from a few physical servers. This takes a very long time and runs over the production network, so we limit how much data we send that way.

In the project I just started planning, I will be moving a significant amount of data (~1.5 TB) out of a virtual machine and onto an iSCSI target, so Veeam can't back it up and Backup Exec will have to handle it.

Essentially I need to develop a strategy for completing a full weekly backup and daily differentials of 2 TB+ of data from the iSCSI SAN within a roughly 8-hour window, Friday night to Saturday morning. I have a budget to buy equipment but don't know if I really need to. My initial research suggests that disk-to-disk-to-LTO5-tape will possibly do it. We have an old SAN that can be used as the disk target.
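For what it's worth, the window math can be sanity-checked with some quick arithmetic (my assumptions: decimal terabytes as vendors quote them, LTO-5's published ~140 MB/s native transfer rate, and zero start/stop overhead, so this is a best case):

```python
# Back-of-the-envelope check on the backup window. Assumes decimal
# terabytes and LTO-5's published native transfer rate (~140 MB/s),
# and ignores start/stop overhead, so treat this as a best case.

def required_throughput_mb_s(data_tb: float, window_hours: float) -> float:
    """Sustained MB/s needed to move data_tb terabytes in window_hours."""
    data_mb = data_tb * 1_000_000           # decimal TB -> MB
    return data_mb / (window_hours * 3600)  # divide by seconds in the window

needed = required_throughput_mb_s(2.0, 8.0)
print(f"required: {needed:.1f} MB/s vs LTO-5 native: 140 MB/s")
```

Roughly 70 MB/s sustained is under LTO-5's native rate, so on paper the window is feasible as long as the disk stage and network can actually feed the drive that fast.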

My experience has always been with in-place backup routines and less than 300 GB of data, so I'm in way over my head. I know there are companies with much more data, so I wanted to reach out and see what the community does for backups. My boss and I know the targets are aggressive, but they are what they are.
I finally started that blog - www.thomgreene.com

Comments

  • it_consultant Member Posts: 1,903

    I think you need to look seriously at things like enterprise-level de-duplication. Doing Veeam (I use Vizioncore) for image-based backups of ESX is essential for disaster recovery, but you always need to worry about versioning, even if everything is on a "redundant" SAN. By far the most common problem is "I overwrote a file I needed from two weeks ago." You could use MS shadow copy for that or continue to use Backup Exec.

    Also, be aware of log file growth in SQL and Exchange. Exchange log files are traditionally cleared when the backup of the store completes. If the backup application is not Exchange- and SQL-aware, your logs will grow until they take up the whole disk.
  • qcomer Member Posts: 142
    We use MS Shadow Copy for shared volumes but then also use Avamar on top of that for our real backups. Good software so far - we came over from Backup Exec with a tape library and it's much better now.
  • RTmarc Member Posts: 1,082 ■■■□□□□□□□
    The only problem with Avamar is that it is insanely expensive. Otherwise, I love it.
  • tenrou Member Posts: 108
    De-dupe is where you want to be going here. De-dupe at the backup target alone isn't going to offer you much assistance, but the newer enterprise backup suites (Simpana 9, NetBackup, Tivoli) offer de-dupe at the agent, before the data is transferred to the media server. This could be very useful for reducing your transfer across the network. Alternatively, you could look at synthetic fulls, where you only take incrementals/differentials and the backup software then builds a new full from previous fulls plus the latest incrementals. It takes time on the backup server, but it will again reduce the amount of data you need to transfer during the window.
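    As a toy illustration of why de-dupe cuts transfer so much: split the stream into chunks, hash each chunk, and store or send only the unique ones. The chunk size and sample data below are invented for the sketch; real products use variable-size chunking and far more engineering.

```python
# A minimal sketch of block-level de-duplication: fixed-size chunks,
# one sha256 hash per chunk, duplicates collapse onto a single entry.
# Chunk size and sample data are invented for illustration.
import hashlib
import random

CHUNK = 4096  # fixed 4 KiB chunks, for the sketch only

def dedupe(data: bytes) -> dict[bytes, bytes]:
    """Map sha256 digest -> chunk; repeated chunks are stored once."""
    store: dict[bytes, bytes] = {}
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        store[hashlib.sha256(chunk).digest()] = chunk
    return store

# Two "weekly fulls" that differ by one chunk store at ~half the raw size.
week1 = random.Random(0).randbytes(1024 * 1024)  # 1 MiB of pseudo-random data
week2 = week1[:-CHUNK] + b"x" * CHUNK            # same data, one chunk changed
store = dedupe(week1 + week2)
print(f"raw: {len(week1) + len(week2)} B, deduped: {len(store) * CHUNK} B")
```

    Agent-side de-dupe does the hashing at the source and ships only chunks the media server hasn't seen, which is why it helps the network transfer and not just the storage.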
  • tbgree00 Member Posts: 553 ■■■■□□□□□□
    We use Shadow Copy for versioning on our file shares and that's a lifesaver. I didn't think to mention that. Backup Exec currently only runs to back up email and SQL data, for the log file clearing like you mentioned. I'm trying to wrap my mind around how to clear those logs. Would it be possible to run a PowerShell script or something similar to clear the logs once we've verified the backup?

    Our plan has evolved into looking into Universal Application Recovery in Veeam. They are marketing it toward SQL and Exchange users according to a webcast I was in yesterday. I'm still not sure if it will keep the log file size down and keep it from crashing the VM.
  • it_consultant Member Posts: 1,903
    You can't do a point-in-time recovery in SQL or Exchange unless you have the logs, which is why they are only flushed automatically during a backup. What actually happens is that all those logs are copied, and when the OS gets verification that they were copied successfully, they are flushed. When you go to do a recovery, SQL and Exchange "replay" the log files (since the log files are a running history of the changes to the database) in order to recover the proper information.

    If you get rid of Backup Exec, you will still need to do traditional full backups of your SQL and Exchange servers. Exchange 2010 and 2007 SP3 support the Windows backup tool as a means to do a full backup of the active database. In SQL you can schedule a full backup with the SQL tools. In SQL you can also set up customized log retention, which most of the DBAs I know do - they set it for longer than the backup cycle. That's how important the log files are.
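    A toy sketch of that replay idea: restore the last full backup, then re-apply logged changes up to the chosen moment. The function name and the key/value record format here are invented for illustration; the real engines replay page-level log records.

```python
# Toy point-in-time recovery: start from the full backup image and
# replay (timestamp, key, value) log records up to the chosen moment.
# Names and record format are invented for this sketch.

def restore_to_point(full_backup: dict, log: list, until: int) -> dict:
    """Replay log records with timestamp <= until onto the full backup."""
    db = dict(full_backup)  # start from the restored full backup
    for ts, key, value in log:
        if ts > until:      # stop at the requested point in time
            break
        db[key] = value     # re-apply the logged change
    return db

full = {"mailbox_a": "v1"}
log = [(1, "mailbox_a", "v2"), (2, "mailbox_b", "v1"), (3, "mailbox_a", "v3")]
print(restore_to_point(full, log, until=2))
# -> {'mailbox_a': 'v2', 'mailbox_b': 'v1'}
```

    This is also why clearing the logs outside of a verified backup is dangerous: without them, the newest state you can recover is the full backup itself.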