Storage Tips

DustyRack Member Posts: 16 ■□□□□□□□□□
Storage Area Network (SAN) design. When configuring storage for your virtualised environments, how do you set it up? One virtual disk per VM? One virtual disk for the OS and one for data? Group similar apps together? Does anyone have any good resources/tips? I don't have any experience with storage, I just wanted to know how others go about doing this. I've tried to read up on best practices, but lots of people have different opinions.

Thanks.

Comments

  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    We have tiers of storage depending on the workload. Tier 1 storage is backed by FC disk, Tier 2 by NL (near-line) disk. Something like Exchange/SQL goes on Tier 1, small apps go on Tier 2 storage. We have OS and data split into separate VMDKs, and dedicated swap space per host. Our hosts are blades, so they have boot-from-SAN LUNs, one per host. When you group similar apps together you'll likely get decent performance, but you need to make sure you don't put too much together, as you can only have so much I/O. It's about requirements/constraints, really.
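
    To put a number on the "only so much I/O" point, here's a rough back-of-the-envelope check (purely illustrative - the VM names, per-disk IOPS figures and spindle count are made up) that the VMs you group on one LUN stay inside its IOPS budget:

    # Rough sanity check: do the VMs placed on a LUN fit inside its IOPS budget?
    # The per-disk figures are common sizing rules of thumb, not measurements -
    # get real peaks from esxtop / vCenter performance charts.
    IOPS_PER_DISK = {"15k_fc": 180, "10k_sas": 140, "7.2k_nl": 80}

    def lun_iops_budget(disk_type, disk_count):
        """Raw backend IOPS of the RAID group behind a LUN (ignores RAID penalty and cache)."""
        return IOPS_PER_DISK[disk_type] * disk_count

    # Hypothetical peak IOPS of the VMs you plan to co-locate on one Tier 1 LUN
    vms_on_lun = {"exchange01": 900, "sql01": 1200, "app03": 150}

    budget = lun_iops_budget("15k_fc", 16)   # e.g. 16 x 15k FC spindles -> 2880 IOPS
    demand = sum(vms_on_lun.values())        # 2250 IOPS

    print(f"LUN budget ~{budget} IOPS, combined VM peak ~{demand} IOPS")
    if demand > 0.8 * budget:                # keep ~20% headroom
        print("Too much I/O for one LUN - spread the VMs out or add spindles")
    else:
        print("Fits, with some headroom to spare")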

    It may be too much for your needs right now, but the vSphere Design 2nd edition book by Scott Lowe is an excellent read for the type of questions you asked.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • emerald_octane Member Posts: 613
    It's up to you, really. Certain things can be carried over from the physical environment, but not always. Your storage vendor (for your NAS, and definitely for your SAN) will have best practices for setting up and presenting the disks to your hardware, but you can modify these to fit unique environments. For instance, you may want to employ a certain technology that dictates exactly how you attach your disks to ESXi. If you have no such requirements, you can get away with whatever is easiest.

    In terms of the VM itself, that can be determined by the application. For instance, a SQL Server often has the OS on one spindle and the database + logs on another. You can create two separate virtual disks to attach to your Windows server, then put the VMDKs on separate spindles. Or you can attach a LUN directly to the VM as a raw device mapping. Or you can put them all on the same spindle. It's really up to you!
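
    If you want to script the "two disks on two datastores" layout rather than click through the GUI, here's a rough pyVmomi sketch (pip install pyvmomi). Everything environment-specific is a placeholder: the vCenter address, credentials, the VM name "sql01", the datastore name "Tier1-Data", the size, and the assumption that SCSI unit 1 is free and the VM's folder already exists on that datastore.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only - validate certificates in production
    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the VM by name
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "sql01")

    # Reuse the VM's existing virtual SCSI controller
    scsi = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualSCSIController))

    # Describe a new 200 GB thin disk that lives on the "Tier1-Data" datastore
    disk = vim.vm.device.VirtualDisk()
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk.backing.fileName = "[Tier1-Data] sql01/sql01_data.vmdk"
    disk.backing.diskMode = "persistent"
    disk.backing.thinProvisioned = True
    disk.capacityInKB = 200 * 1024 * 1024
    disk.controllerKey = scsi.key
    disk.unitNumber = 1                      # 0 is usually the OS disk, 7 is reserved

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    change.device = disk

    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
    Disconnect(si)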
  • DustyRack Member Posts: 16 ■□□□□□□□□□
    Thank you for your replies. My environment is very small (10 VMs). We're using iSCSI between the SAN and the hosts.
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    Definitely take a look at your SAN vendor's knowledge base and see if they have a best practices guide for their SAN hardware with VMware. Different vendors have different recommendations. NetApp, for example, pushes splitting OS VMDKs from data VMDKs onto different LUNs because they offer deduplication. Other vendors do not offer that and may not advocate splitting in that manner.
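
    Purely to illustrate the dedup angle (the block-overlap figure below is invented, not a NetApp number), this is the arithmetic behind grouping similar OS VMDKs on one deduplicated volume:

    # Toy numbers: if most OS-disk blocks are identical across VMs, a deduplicated
    # volume stores the common blocks once. Real savings depend entirely on your data.
    vm_count = 10
    os_vmdk_gb = 40
    common_fraction = 0.9     # assumed share of blocks common to every OS disk

    raw_gb = vm_count * os_vmdk_gb
    deduped_gb = os_vmdk_gb * common_fraction + vm_count * os_vmdk_gb * (1 - common_fraction)
    print(f"Raw: {raw_gb} GB, after dedup: ~{deduped_gb:.0f} GB")   # Raw: 400 GB, ~76 GB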

    I tend to use tiers of storage, non-production usually on SATA or NL-SAS, Production maybe on SAS, and if needed a higher tier that has access to Flash.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • elTorito Member Posts: 102
    I'd say for a small environment with 'only' 10 VMs, don't over-analyze. Keep it simple, and only apply the common best practices. For a small SAN with a relatively small number of spindles - likely a single pool of disks with a single RAID level - it doesn't really make sense to go crazy micromanaging where the database files go and where the transaction logs go (do separate them from the OS, though, for backup/restore/persistence purposes).

    If you do have the luxury of multiple RAID groups or pools to play with, it comes down to analyzing workloads. For example, it's generally a bad idea to put an OLTP database application (lots of small-block, random I/O) on the same physical disks as an application that mainly does large-block sequential I/O. The seek time caused by the random I/O will absolutely kill your sequential traffic. You can get away with it only if the peak activity of the two workloads occurs at different times of day (for example, end-user access during working hours and backups at night).
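
    A rough back-of-the-envelope of that effect, using invented figures for a single 7.2k spindle (about 12 ms per random I/O and ~120 MB/s streaming):

    # How random I/O "steals" time from a sequential stream sharing the spindle.
    seq_mb_per_s = 120.0        # streaming rate with no interference (assumed)
    ms_per_random_io = 12.0     # seek + rotational latency + transfer (assumed)

    for random_iops in (0, 20, 40, 60, 80):
        busy = min(1.0, random_iops * ms_per_random_io / 1000.0)
        left = seq_mb_per_s * (1.0 - busy)
        print(f"{random_iops:3d} random IOPS -> ~{left:5.1f} MB/s sequential left")
    # At ~60 random IOPS the sequential stream has already lost roughly 70% of its throughput.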

    For larger environments with mission-critical applications and/or servers, it makes more sense to use tiered storage (with SSDs thrown in as a secondary cache), as well as varying RAID levels to accommodate different workloads: RAID 10 for workloads that are primarily small-block write I/O, RAID 5 for mixed workloads where capacity is more of a concern than pure performance.
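
    The RAID choice usually comes down to the standard write-penalty arithmetic; the 2000 IOPS / 70%-write workload below is just an example:

    # Backend IOPS = reads + writes x write penalty (2 for RAID 10, 4 for RAID 5, 6 for RAID 6)
    RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

    def backend_iops(frontend_iops, write_pct, raid):
        reads = frontend_iops * (1 - write_pct)
        writes = frontend_iops * write_pct
        return reads + writes * RAID_WRITE_PENALTY[raid]

    # A write-heavy workload: 2000 front-end IOPS at 70% writes
    for raid in ("raid10", "raid5"):
        print(raid, "needs roughly", round(backend_iops(2000, 0.7, raid)), "backend IOPS")
    # raid10 ~3400 vs raid5 ~6200, which is why RAID 10 wins for small-block
    # write-heavy workloads and RAID 5 only when capacity matters more.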
    WIP: CISSP, MCSE Server Infrastructure
    Casual reading:
    CCNP, Windows Sysinternals Administrator's Reference, Network Warrior


  • Blessmewithgrace1 Registered Users Posts: 10 ■□□□□□□□□□
    Great replies here!
  • DirtySouth Member Posts: 314 ■□□□□□□□□□
    This is a great question, but not easy to answer because there are so many variables. Like others have said, since you have a very small environment, keep it simple. One factor that will impact all of this is whether your SAN does any kind of tiering. In other words, does it have mixed disk types (flash, SATA, SAS, NL-SAS, etc.)? If so, you can probably just carve up the necessary LUNs/volumes however you want. If it does NOT have mixed disk types and auto-tiering, you may want to be more selective. In this case, isolate high-I/O VMs on faster disks or on dedicated disks. Hope that helps.
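
    Without auto-tiering, the manual version of "isolate the high-I/O VMs" is basically a sort-and-place exercise. The VM names, IOPS figures and pool budget below are all made up:

    # Sort VMs by measured peak IOPS and fill the fast datastore first.
    vm_peak_iops = {"sql01": 1200, "exch01": 900, "fileserver": 300,
                    "web01": 120, "print01": 30, "test01": 20}
    fast_budget = 2400               # fast (SAS/flash) pool, headroom already deducted

    fast_tier, slow_tier, used = [], [], 0
    for vm, iops in sorted(vm_peak_iops.items(), key=lambda kv: kv[1], reverse=True):
        if used + iops <= fast_budget:
            fast_tier.append(vm)
            used += iops
        else:
            slow_tier.append(vm)

    print("Fast tier:", fast_tier)   # ['sql01', 'exch01', 'fileserver']
    print("Slow tier:", slow_tier)   # ['web01', 'print01', 'test01']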
  • lsud00d Member Posts: 1,571
    blargoe wrote:
    I tend to use tiers of storage, non-production usually on SATA or NL-SAS, Production maybe on SAS, and if needed a higher tier that has access to Flash.

    This is how we operate, minus the Flash.

    Tier 1 is SAS (Exchange, SQL, high I/O applications)
    Tier 2 is SATA (Essentially everything else)

    6 SANs comprise 3 storage pools on-site, and they provide storage for ~150 VMs. The DR site is set up in a similar fashion, but with less storage space since not all VMs need to be replicated.