
Veeam Backup Storage

QHalo Member Posts: 1,488
This is a question for all those using Veeam.

-When you spec how much backup disk you need to store the backups, how are you going about that?
-Also, what kind of storage are you using?
-What's the general rule of thumb you all go with?
-Are any of your customers/or you for that matter, not backing up to tape and using replication instead to keep multiple copies?

I found a 'formula' that Gostev posted on the Veeam forums, but it seems like little more than an educated best guess. I'm working on a design for a new environment, and figuring out how much disk I'll need is rather complex due to deduplication, compression, etc.
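For what it's worth, sizing formulas like this generally boil down to compressed full backups plus compressed incrementals over the retention window. A minimal sketch of that arithmetic in Python - every ratio below is an assumption you'd replace with measured figures from your own environment:

```python
def estimate_backup_storage(source_gb, compression=0.5, change_rate=0.05,
                            retention_days=14, fulls_kept=2):
    """Rough repository sizing: compressed fulls plus compressed
    daily incrementals over the retention window.
    All ratios are placeholder guesses, not Veeam's real behavior."""
    full = source_gb * compression                        # one compressed full
    incremental = source_gb * change_rate * compression   # one daily increment
    return fulls_kept * full + retention_days * incremental

# e.g. 27 TB of source data with the placeholder ratios above
print(round(estimate_backup_storage(27000) / 1000, 1), "TB")
```

The point is less the exact numbers than seeing which inputs (change rate, retention, fulls kept) dominate the result.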

Comments

  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Been there, done that, bought the t-shirt. I have seen the formula, but it certainly isn't accurate, as it depends heavily on the data on the server - you can't say for sure what percentage it will compress by. How much storage you require is a bit of a 'how long is a piece of string' question, as it also depends on how many backups you need to keep - what your retention is, and so on.

    I noticed that a system drive compresses by around 40-50%. You can save even more space if you add multiple VMs to the same backup job, utilizing deduplication.

    I just checked: a VM with 100GB initially produced an 80GB full backup (the VBK file, that is), with approx. 2GB per incremental backup.

    If you go for replication then you obviously get no compression, as the replica is a 1:1 copy.

    As for storage - we use iSCSI.

    Our ESX SANs are iSCSI SANs from Dell (EqualLogic) - all LUNs are mounted on the Veeam server, utilizing SAN-based backups, and the backups themselves are stored on a 2008 R2 Storage Server (also via iSCSI).

    So I am afraid I cannot give you a precise answer. All I can suggest is to do a test install, back up a VM, get proper figures, and scale up from there.
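That last suggestion - measure a pilot backup and scale up - is just a ratio calculation. A hypothetical Python helper (the retention default is an assumption, and it assumes the pilot VM is representative of the rest):

```python
def scale_from_pilot(pilot_source_gb, pilot_full_gb, pilot_inc_gb,
                     total_source_gb, retention_days=14):
    """Extrapolate repository size from one measured test backup.
    Assumes the pilot VM's compression and change rate are typical."""
    compression = pilot_full_gb / pilot_source_gb     # measured full-backup ratio
    change = pilot_inc_gb / pilot_source_gb           # measured daily change ratio
    full = total_source_gb * compression
    incrementals = total_source_gb * change * retention_days
    return full + incrementals

# the figures above: 100GB VM -> 80GB full, ~2GB incrementals, 27TB total source
print(round(scale_from_pilot(100, 80, 2, 27000) / 1000, 1), "TB")
```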
    My own knowledge base made public: http://open902.com :p
  • QHalo Member Posts: 1,488
    Thanks for the post. Basically my design setup would be something like this.

    We will have two active datacenters, each with a Veeam backup server for the VMs at its site. We have 9 remote sites; they will back up their VMs locally and then replicate them to Site A or B - I will split them up by geography. Sites A and B will replicate their backups to the other site for redundancy. This means that even if I lose a primary site and a remote site, I could restore both from the other primary.

    I'm looking at EqualLogic as well. Do you have a separate EqualLogic for storing your backups? We're not a huge shop - about 150 real VMs putting out only about 4,500 random IOPS - so I'm looking at putting the disks that need faster IOPS on a 6510x with 10k SAS drives in RAID 50, with the rest going on 7.2k NL-SAS drives in a 6510e, also in RAID 50. That's around 45TB of raw storage across the arrays, and it should provide around 3,500 IOPS off the 6510x and 2,200 IOPS off the 6510e. The DPACK I ran on our environment showed around 27TB of used disk space across my physicals and VMs.

    I'm thinking that another 6500-series array with just 7.2k 1TB drives would provide plenty of room for the two main sites.
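When sanity-checking raw-versus-usable figures like those: RAID 50 stripes across RAID 5 spans, so one disk per span is lost to parity. A rough sketch (the disk counts in the example are hypothetical, and it ignores hot spares, vendor reserve, and filesystem overhead):

```python
def raid50_usable_tb(disks, disk_tb, spans):
    """Usable capacity of a RAID 50 set: one parity disk per RAID 5 span.
    Ignores hot spares, vendor reserve, and filesystem overhead."""
    assert disks % spans == 0 and disks // spans >= 3, "each span needs >= 3 disks"
    return (disks - spans) * disk_tb

# hypothetical layout: 24 x 2TB NL-SAS split into two RAID 5 spans
print(raid50_usable_tb(24, 2.0, 2), "TB usable")
```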
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    No, we only have the 4000 series for our ESX cluster, and the backups are stored on local storage. Our local storage box is a 2U system with 8x 2TB SATA in RAID 60. The server itself runs Server 2008 R2 with an iSCSI target that presents the LUN to the Veeam server.

    Another system I implemented was a DR setup using Veeam. The customer in question had a fully fledged vSphere cluster in the primary site, with Veeam running as a VM. Veeam then replicated the VMs across the WAN to a big standalone host with local storage. If the primary site were to slide into the ocean, he could simply power the VMs on at the secondary site.

    When you go shopping, make sure you compare the features of the 4000 and 6000 series. You might not need the added (software) benefits of the 6000 - we almost got dragged into them ourselves. The main benefit is really the number of EQLs you can add to a single group (plus replication). If you are a small shop, look at the 4000 series and a "normal" server for backup.

    Also, call Dell directly and ask for proper discounts. For instance, while our very first 4000-series SAN cost £24k through a supplier, in the end we got the same SAN directly from Dell for just under £11k. It is REALLY scary what sort of margin they have on these things.

    Also make sure you get support. It might seem expensive at first, but spares are very expensive due to the special firmware on the disks - we were quoted £1,750 for a single 600GB 15k SAS disk.
    My own knowledge base made public: http://open902.com :p