
Separate VLAN for host backups to Veeam

Deathmage Banned Posts: 2,496
Hey guys,

I'm thinking of building a separate VLAN fabric between my VMware cluster (the hosts running the VMs) and our Veeam backup server, which has a local 20-bay RAID 10 array.

My thinking is that in order to have an RPO of 20 minutes I need to be doing incremental backups every 10 to 15 minutes, and that will put a ton of traffic on the normal VLANs for the servers.
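A quick back-of-the-envelope check on that interval (the job duration below is just an assumed figure for illustration):

```python
# Back-of-the-envelope RPO check: worst-case data loss is roughly the
# backup interval plus the time the incremental job takes to finish.
# Both numbers are assumptions for illustration.
interval_min = 15       # incrementals every 15 minutes
job_duration_min = 5    # assumed time for one incremental to complete

worst_case_rpo_min = interval_min + job_duration_min
print(f"Worst-case RPO ~= {worst_case_rpo_min} minutes")
```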

My thinking is also that enabling baby giant or jumbo frames on this VLAN would shorten the backup windows.
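A rough sketch of the framing-overhead math behind that (standard Ethernet/IPv4/TCP header sizes assumed, preamble and inter-frame gap ignored):

```python
# Rough framing-overhead comparison for standard vs jumbo frames.
# Header sizes are the usual Ethernet/IPv4/TCP values; preamble and
# inter-frame gap are ignored to keep the estimate simple.
ETH_HEADER_FCS = 18   # 14-byte Ethernet header + 4-byte FCS
IP_TCP_HEADERS = 40   # 20-byte IPv4 + 20-byte TCP

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that are actual payload for a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + ETH_HEADER_FCS
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{payload_efficiency(mtu) * 100:.1f}% of bits on the wire are backup data")
```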

Curious if anyone has ever implemented something like this and whether it's a good idea?

Comments

  • techfiend Member Posts: 1,481 ■■■■□□□□□□
    When I started, Veeam backups were performed on a separate LAN with unmanaged switches; they still are, and it works well. It's a pretty light load though, rarely 25 GB a day in incrementals, although the synthetic fulls are near a TB. Since it's such a light load, I'm thinking about pushing to do it on the same network and team the NICs. The size of your incrementals and how busy your LAN is are the key factors.
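    For scale, roughly how long those volumes would take on a single 1 Gbps link (ideal case, no protocol overhead):

    ```python
    # Ideal-case transfer times on a single 1 Gbps link for the daily
    # incrementals vs. a synthetic full (sizes from the post above).
    link_gbps = 1.0

    for label, size_gb in (("daily incrementals", 25), ("synthetic full", 1000)):
        seconds = size_gb * 8 / link_gbps
        print(f"{label}: {size_gb} GB ~ {seconds / 60:.0f} minutes at {link_gbps:.0f} Gbps")
    ```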
    2018 AWS Solutions Architect - Associate (Apr) 2017 VCAP6-DCV Deploy (Oct) 2016 Storage+ (Jan)
    2015 Start WGU (Feb) Net+ (Feb) Sec+ (Mar) Project+ (Apr) Other WGU (Jun) CCENT (Jul) CCNA (Aug) CCNA Security (Aug) MCP 2012 (Sep) MCSA 2012 (Oct) Linux+ (Nov) Capstone/BS (Nov) VCP6-DCV (Dec) ITILF (Dec)
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    I am not a network guy, but even with separate VLANs, wouldn't you still share the same backplane, potentially affecting the other VLANs one way or another? Or would you implement QoS? Also, changing the MTU would need to be done on every switch from end to end, and some switches need reloading.
    My own knowledge base made public: http://open902.com :p
  • Deathmage Banned Posts: 2,496
    jibbajabba wrote: »
    I am not a network guy, but even with separate VLANs, wouldn't you still share the same backplane, potentially affecting the other VLANs one way or another? Or would you implement QoS? Also, changing the MTU would need to be done on every switch from end to end, and some switches need reloading.

    Potentially, yes, it could affect the backplane, but it's highly unlikely. The backplane on our switches is 384 Gbps.

    As for jumbo frames, these Dell switches don't require reboots, but I normally reboot anyway after a switch config change. But yup, jumbo frames do need to be set up end-to-end.
  • d4nz1g Member Posts: 464
    You will need to implement QoS policies on your switches, end to end.

    Without QoS, your backup traffic will compete directly with your production traffic, and you don't want that to happen; backup is resource-greedy traffic.
  • networker050184 Mod Posts: 11,962
    Deathmage wrote: »
    Potentially, yes, it could affect the backplane, but it's highly unlikely. The backplane on our switches is 384 Gbps.

    I'd be less concerned with the backplane of the switch and more concerned with the individual ports/shared ASICs. Depending on your model, four or more ports could share a single ASIC. One port blasting backups could potentially affect them all. VLANs will do nothing for you in this instance. I personally prefer an out-of-band/separate fabric from the production network, like techfiend. QoS, as d4nz1g pointed out, is also an option but requires much more expertise to implement and support.
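    A toy sketch of that idea (the 4-ports-per-ASIC grouping and the port numbers below are assumptions for illustration; check the hardware architecture guide for your actual model):

    ```python
    # Toy sketch: does the backup uplink share a port-group/ASIC with
    # production uplinks? The grouping of 4 ports per ASIC and the port
    # numbers are assumptions, not the mapping of any specific switch.
    PORTS_PER_ASIC = 4

    def asic_group(port: int) -> int:
        """Return the (assumed) ASIC group a front-panel port belongs to."""
        return (port - 1) // PORTS_PER_ASIC

    production_uplinks = [1, 2]   # hypothetical production ports
    backup_uplink = 3             # hypothetical backup port

    if asic_group(backup_uplink) in {asic_group(p) for p in production_uplinks}:
        print("Backup port shares an ASIC group with production - a backup burst can affect those ports.")
    else:
        print("Backup uplink sits on its own ASIC group.")
    ```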
    An expert is a man who has made all the mistakes which can be made.
  • d4nz1g Member Posts: 464
    networker050184 wrote: »
    I personally prefer an out-of-band/separate fabric from the production network, like techfiend.

    This.

    In medium/large networks, it is pretty common to see dedicated switches and host interfaces used exclusively for backup solutions.
    Even with QoS, a few issues can still arise, for example Head of Line Blocking (HOLB), high drop rates due to overwhelmed buffers, increased delay due to queuing, etc.
  • Deathmage Banned Posts: 2,496
    Well this is enlightening, glad I asked the question. :)

    The basis of this OP is that this topic has been covered a few times in depth in my prep for the CompTIA Storage+ exam I'm taking next Saturday, in the CBT Nuggets/Pluralsight videos as well as Nigel's book, and it has been enlightening, because to an engineer untrained in datacenter data flows, backups have traditionally just gone over the same fabric as production. But I guess this is why we all study and get certified, huh, to learn. ;)

    We do have a bunch of Dell 2824s that were replaced by these newer Dell Force10 switches; for just backup traffic I think they will work fine. My only concern is their backplanes, which are 48 Gbps, only a factor of 1 over the aggregate port capacity, and I like going with a factor of 2.5 or higher. But the question is: will small 15-to-20-minute incremental backups really have enough growth potential that a factor of 1 over a single 1 Gbps port speed would make a difference to the speed of the backup? I suppose I could do NIC teaming with, say, 2 uplinks from host to switch and from switch to the backup appliance.
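    A rough sanity check on that factor (assuming the 2824 is 24 x 1 GbE ports with a 48 Gbps switching fabric and counting full duplex; treat those figures as assumptions):

    ```python
    # Back-of-the-envelope fabric headroom check (assumed figures:
    # 24 x 1 GbE ports, 48 Gbps switching fabric, full duplex counted).
    ports = 24
    port_speed_gbps = 1.0
    fabric_gbps = 48.0

    aggregate_full_duplex_gbps = ports * port_speed_gbps * 2   # 48 Gbps of port capacity
    fabric_factor = fabric_gbps / aggregate_full_duplex_gbps   # 1.0 = bare line rate, no headroom

    print(f"Fabric factor: {fabric_factor:.1f}x (I'd prefer 2.5x or higher)")
    ```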

    We're talking probably less than 100 MB of file changes in 15 minutes, but those changes are made to SQL, and with that kind of data (and maybe Excel documents) 20 minutes of changes is HUGE. My co-workers think SQL is more important than other data like Excel files and documents, which I don't disagree with, but they don't want a 20-minute RPO on those files, which I do disagree with; they call it, and I quote, 'non-essential data'. But I think about all of the data, since my job is System Administrator, while all they think about is developer land, i.e. SQL and our ERP.

    I'm really looking at ways to improve overall network performance. I glanced over the CCNA: Data Center exam blueprints, and the QoS I'm looking at seems to be covered in that material, but I'm honestly 10+ months from starting it; MCSA 2012 is next, plus the VCP6 refresh. So, as mentioned above, a separate switch would probably be more ideal for now given my lack of expertise with QoS.
  • d4nz1g Member Posts: 464
    Deathmage,

    If you draw out your network topology, you can see where the network is oversubscribed and then size the backup properly.

    You could also run the backup VLAN on different trunk links so your production links would not be congested (the backplane will still be shared).

    BTW, 100 MB of traffic equals 800 Mbits, almost a gigabit of backup traffic bursting every 15 minutes (correct me if my logic is off).
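    As a rough sketch (ideal case, no protocol overhead, assuming a dedicated 1 Gbps link), that burst would only take a fraction of a second to move:

    ```python
    # Quick unit check: how big is the burst and how long does it take
    # on a dedicated 1 Gbps link? (Ideal case, no protocol overhead;
    # the 100 MB change rate is the estimate from the thread.)
    change_mb = 100                  # MB changed per 15-minute window
    change_mbit = change_mb * 8      # 800 Mbit
    link_gbps = 1.0

    transfer_seconds = change_mbit / (link_gbps * 1000)
    print(f"{change_mb} MB = {change_mbit} Mbit, ~{transfer_seconds:.1f} s at {link_gbps:.0f} Gbps")
    ```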
  • Deathmage Banned Posts: 2,496
    d4nz1g wrote: »
    Deathmage,

    If you draw out your network topology, you can see where the network is oversubscribed and then size the backup properly.

    You could also run the backup VLAN on different trunk links so your production links would not be congested (the backplane will still be shared).

    BTW, 100 MB of traffic equals 800 Mbits, almost a gigabit of backup traffic bursting every 15 minutes (correct me if my logic is off).

    As for the links or trunks, if this is purely an L2 function, why would I need to involve trunks at all? The VMware hosts and the backup appliance have 4 spare NIC slots each. Correct me if I'm wrong, but it would simply be an L2 switching process, with each end device having something like a bonded gigabit connection; I can't think of a reason why I'd need to route the VLAN. I mean, unless I'm misreading something. :)
  • d4nz1g Member Posts: 464
    I mean a separate link dedicated to the backup VLAN, so you don't mess with your production traffic :)

    By trunk, I intended to say "a link between the server switch and the backup switch"
  • Deathmage Banned Posts: 2,496
    d4nz1g wrote: »
    I mean a separate link dedicated to the backup VLAN, so you don't mess with your production traffic :)

    By trunk, I intended to say "a link between the server switch and the backup switch"


    ahhh I see I was like O.o

    Well, with the backup traffic I wouldn't really want the backup data to even touch the production data, so we're in agreement there. Basically the backup appliance would have 3 NICs on it, two bonded and the other connected to the production network for management. The backup data itself would just flow over its own fabric.

    Does that make sense?
  • d4nz1g Member Posts: 464
    Do you mean a separate physical fabric? Do you have gear available for the backup network?

    If yes, then do it!

    If it is not possible, you could use the production fabric BUT with dedicated links carrying only the backup vlan, got it?
  • Deathmage Banned Posts: 2,496
    d4nz1g wrote: »
    Do you mean a separate physical fabric? Do you have gear available for the backup network?

    If yes, then do it!

    If it is not possible, you could use the production fabric BUT with dedicated links carrying only the backup vlan, got it?


    Indeedio!!!!

    Thanks everyone for your suggestions, learned a ton. :D
  • techfiend Member Posts: 1,481 ■■■■□□□□□□
    With dual NICs on the backup server, is a separate LAN preferred over a NIC team?

    Or maybe, on the same network, use a switch-independent team: one NIC goes to a small switch for backup that's connected to the main switches, and the other goes to the main network. Maybe some metric trickery can get it to back up over the small switch while using the main switches for everything else. I'm guessing this could probably be done with managed switches, but I'm dealing with unmanaged ones.
  • Deathmage Banned Posts: 2,496
    I recently haggled on annual software renewals and wiggled in 3 quad-port NICs, one each for the VMware hosts and the backup server.

    I'm also using the older Dell 2824 switches as a dedicated fabric.

    I had thoughts of putting it on the N3024 iSCSI and vMotion switch, but from a security perspective I didn't want the network-facing backup server to have direct access into the SAN switch, even if on its own VLAN.

    This way I can bond two NICs per VMware host and three on the backup server for aggregated throughput. I'd prefer the bottleneck to be the southbridge on the RAID controller rather than the NICs, which are easily fixed if provisioned.
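    A rough way to see where that bottleneck lands (the NIC counts are from above; the array write speed is purely an assumed placeholder, not a measured number):

    ```python
    # Rough throughput budget for the backup server: bonded 1 GbE NICs
    # vs. the local RAID 10 array. The array write speed below is an
    # assumed placeholder for the 20-bay array, not a measurement.
    bonded_nics = 3
    nic_gbps = 1.0
    assumed_raid10_write_mb_s = 600

    network_mb_s = bonded_nics * nic_gbps * 1000 / 8   # ~375 MB/s ceiling
    bottleneck = "NIC team" if network_mb_s < assumed_raid10_write_mb_s else "RAID array"
    print(f"Network ceiling ~{network_mb_s:.0f} MB/s; likely bottleneck: {bottleneck}")
    ```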