Exchange 2007 design question..

Well, here's where I'm at: I will probably end up with around 1,000 users eventually (how long that will take I have no idea, but probably a few years).

Here's the hardware layout I'm thinking of:

OS: RAID 1 (2 x 72 GB 15k drives)
Transaction logs: RAID 1 (2 x 72 GB 15k drives)
DB: RAID 5 (4 x 146 GB 10k drives)

So, here's where I am bummed/confused/unsure...

I know that I will need space for the RSG in case of emergency, and I want to give each user a 200 MB mailbox. Based on the Microsoft storage calculator, that comes out to roughly a 250 GB minimum for the amount of data required, including 20% overhead, whitespace, and so on.
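
If it helps to spell out that math, here's the back-of-the-envelope version as a small PowerShell sketch (just the figures above; the real calculator presumably folds in more factors, which is how it lands closer to 250 GB):

    # Rough mailbox database sizing: 1000 users x 200 MB quota + 20% overhead
    $users    = 1000
    $quotaMB  = 200
    $overhead = 0.20

    $rawGB   = ($users * $quotaMB) / 1024     # ~195 GB of raw mailbox data
    $totalGB = $rawGB * (1 + $overhead)       # ~234 GB with overhead/whitespace
    "{0:N0} GB raw, {1:N0} GB with overhead" -f $rawGB, $totalGB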

So it seems like the 146 GB drives are almost a "have to" rather than a "do I want to"...

So, I was under the impression that I will need roughly 1 IOPS per user, plus a 20% buffer for gotcha moments. From what I have read (and I am by no means qualified to speak on this), a rough estimate for a 10k drive is around 250 IOPS. There are obviously tons of variables, but those are the numbers I have come across in the wild, and it is about a 20% reduction compared to 15k drives.

So, if I am reading any of this right, in a four-drive array I will have about 1,000 IOPS available with the 146 GB drives, but around 1,200-1,300 with the 72 GB drives. I am obviously leaning toward the pessimistic side in my head, so I may be erring on the side of caution, but this seems like a deviation from the recommended IOPS per user.
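
Spelling that arithmetic out in PowerShell (per-spindle figures are the same ballpark numbers as above, and this ignores any RAID write penalty):

    # Raw spindle IOPS for each option (ballpark per-disk figures, ignoring RAID penalties)
    $iops10k = 250                    # rough estimate for a 10k spindle
    $iops15k = 312                    # roughly 20-25% more for a 15k spindle

    $raid10k = 4 * $iops10k           # ~1000 IOPS from 4 x 146 GB 10k disks
    $raid15k = 4 * $iops15k           # ~1250 IOPS from 4 x 72 GB 15k disks

    $needed  = 1000 * 1 * 1.2         # ~1 IOPS per user for 1000 users, plus a 20% buffer
    "Need ~$needed IOPS; 10k array ~$raid10k, 15k array ~$raid15k"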

I don't know whether I am failing to account for the fact that my transaction logs are on separate spindles (the explanation on the TechNet site made it sound like I should only be counting DB IOPS in my math, not the transaction log disk reads/writes).

The reason I am limited here is that I cannot buy a second box for the Mailbox role, as I would prefer, so I have to fit all of these drives into an 8-slot DL380.

Royal, if you are around and could untangle the mess of numbers in my head, I would be greatly appreciative.

Thanks so much for any and all advice!
MCSE tests left: 294, 297 |

Comments

  • royal Member Posts: 3,352
    I'm not a good person to talk with on IOPS. I haven't dug deep enough into the whole IOPS thing to be the guy to talk to.

    What I would do, is the following since you're using DAS:
    OS - 2 disks RAID 1
    Logs - 2 disks RAID 1 for the write performance
    DB - 4 disks RAID 5; if space allows, do RAID 10 instead for better write performance (since Exchange 2007 improves read performance through caching) and for better recoverability.

    Calculate your storage requirements and make sure that, above and beyond those requirements, you have 110% of the database size left free for maintenance/recovery tasks such as ESEUTIL and the RSG. Typically, in an environment with a SAN, I'd dedicate separate spindles and create a dedicated LUN for the maintenance tasks, with the RSG on the same LUN, since best practices state that you should always delete the RSG when it's not in use. If you were to perform these tasks on a database that lives on another LUN, you could use eseutil -t to direct the temporary files to another volume.
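
    For example (paths here are hypothetical), an offline defrag that sends its temporary database to another volume would look something like this from the command line:

        # Offline defrag of a dismounted database; /t points the temporary database at another volume
        # (hypothetical paths; the temp volume needs roughly 110% of the .edb size free)
        eseutil /d E:\SG1\Mailbox1.edb /tF:\Restore\Tempdfrg.edb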

    For database creation, the databases are all going to have to be on the same RAID array. Because of that, you can't really do much to improve performance by placing them on new LUNs, since the separate LUNs will still be on the same RAID array using the same spindles. All you can really do is get 15K RPM SAS disks, which is what I would highly recommend doing.

    One of the things I hate about putting all the databases on the same spindles is that those disks are going to be hammered, not to mention the backup process is going to want to use those same spindles as well.

    You could just create one big LUN, since the databases will all be on the same array anyway, but I'd still recommend creating separate volumes so you can split the backups across different days instead of running one big backup. It will also help with restore times if something happens to a specific database file, since you won't have to restore one huge database file all at once.

    Hope that helps.
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • CoryS Member Posts: 208
    If I had the luxury of a SAN... boy, would I be a happy camper. I think I will just have to do the four drives in RAID 5; hopefully budgets will allow for a separate box in the semi-near future so I can migrate my DBs to it.

    Your post helps reinforce the train of thought I was on; much appreciated. You've always been a reliable voice around here, and you have my gratitude.
    MCSE tests left: 294, 297 |
  • HeroPsycho Inactive Imported Users Posts: 1,940
    You might want to consider a few things differently:

    A. Put the Tlogs on your OS drive. While it's better to have Tlogs on a separate drive from the OS pagefile, the I/O patterns of tlogs are quite similar to the pagefile's, so if you have to choose one or the other, it's better to put the tlogs on the OS drive than on the DB drive.

    B. Have you considered using LCR? For the cost of a few more disks, an extra gig of RAM, and about 20% more CPU than you'd need without it enabled, you gain quite a bit of DR flexibility that can potentially reduce recovery time.
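
    If you go that route, LCR is enabled per storage group from the Exchange Management Shell; a minimal sketch (server, storage group, database names, and paths are all placeholders) looks roughly like this:

        # Hypothetical names/paths: enable LCR for storage group SG1 on server MBX01
        # 1. Set where the passive copy of the database file will live
        Enable-DatabaseCopy -Identity "MBX01\SG1\DB1" -CopyEdbFilePath "E:\LCR\SG1\DB1.edb"

        # 2. Enable continuous replication for the storage group (log and system file copy paths)
        Enable-StorageGroupCopy -Identity "MBX01\SG1" `
            -CopyLogFolderPath "E:\LCR\SG1" -CopySystemFolderPath "E:\LCR\SG1"

        # Check replication health afterwards
        Get-StorageGroupCopyStatus -Identity "MBX01\SG1"
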
    Good luck to all!
  • CoryS Member Posts: 208
    Well, I am limited to 8 drives in total for this box by design. I think that with the four 146 GB drives I should have more than enough space for my DBs, and if I switched to, say, a six-drive RAID 1+0 with the 146 GB drives, I think I would be limiting myself in the future by leaving only a RAID 1 72 GB set for my transaction logs, OS, and pagefile, which I don't think I want to do.

    I think what will be happening, however, is that we will set up a separate box at a different site to mirror this one as an SCR target.
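
    From what I've read, SCR (in SP1) is also enabled per storage group with Enable-StorageGroupCopy pointed at the standby box; roughly something like this, with placeholder names:

        # Hypothetical names: replicate SG1 from MBX01 to standby server DR-MBX01 (SP1 required)
        Enable-StorageGroupCopy -Identity "MBX01\SG1" -StandbyMachine "DR-MBX01"

        # Check the SCR copy status for the standby machine
        Get-StorageGroupCopyStatus -Identity "MBX01\SG1" -StandbyMachine "DR-MBX01"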

    Thanks for your reply!
    MCSE tests left: 294, 297 |
  • HeroPsycho Inactive Imported Users Posts: 1,940
    2x RAID1 for OS/Exchange binaries
    2x RAID1 for Tlogs
    4x drives for DBs in RAID 5. While RAID 10 would be faster, Exchange 2007 is more efficient as far as disk I/O is concerned, so RAID 5 is acceptable. However, don't forget that RAID 5 carries a write penalty when you're calculating disk I/O.
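
    To put a rough number on that penalty (the per-spindle IOPS and read/write mix below are assumptions, not measurements): each host write on RAID 5 costs roughly four disk I/Os (read data, read parity, write data, write parity), so the usable IOPS land well below the raw spindle total.

        # RAID 5 write penalty sketch (assumed figures)
        $disks       = 4
        $iopsPerDisk = 250          # ballpark 10k spindle
        $readRatio   = 0.67         # hypothetical ~2:1 read:write mix

        $rawIops       = $disks * $iopsPerDisk                           # 1000 raw IOPS
        $effectiveIops = $rawIops / ($readRatio + 4 * (1 - $readRatio))  # ~500 host IOPS
        "{0:N0} raw IOPS -> {1:N0} effective host IOPS on RAID 5" -f $rawIops, $effectiveIops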

    I just reviewed what you were suggesting, and it looks like you placed this stuff right.

    As for RSG, don't forget that you can have multiple storage groups. You absolutely should break up the 1000 users into multiple storage groups (recommended one db per SG). This would cut down on how much space you would need in the event you do need to use an RSG, as you would be working with a smaller dataset instead of one big one, which reduces the amount of free space required to leverage the RSG.

    You should break people up into SG's according to people who will more frequently receive the same messages to leverage single instance store, while also making sure you don't make some DB's far bigger than others. Also, the VIP's in your org who need faster recovery times in the event of a DR should be on smaller DB's to allow quicker restores.
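
    If it helps, creating an additional storage group and database from the Exchange Management Shell looks roughly like this (server name, SG/DB names, and paths are just placeholders):

        # Hypothetical names/paths: second storage group with its own database on MBX01
        New-StorageGroup -Server "MBX01" -Name "SG2" `
            -LogFolderPath "L:\Logs\SG2" -SystemFolderPath "L:\Logs\SG2"

        New-MailboxDatabase -StorageGroup "MBX01\SG2" -Name "DB2" -EdbFilePath "E:\DB\SG2\DB2.edb"

        Mount-Database -Identity "MBX01\SG2\DB2"

        # Exchange 2007 moves users between databases with Move-Mailbox
        Move-Mailbox -Identity "jdoe" -TargetDatabase "MBX01\SG2\DB2"
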
    Good luck to all!
  • royal Member Posts: 3,352
    HeroPsycho wrote:
    2x RAID1 for OS/Exchange binaries
    2x RAID1 for Tlogs
    4x drives for DBs in RAID 5. While RAID 10 would be faster, Exchange 2007 is more efficient as far as disk I/O is concerned, so RAID 5 is acceptable. However, don't forget that RAID 5 carries a write penalty when you're calculating disk I/O.

    I just reviewed what you were suggesting, and it looks like you placed this stuff right.

    As for RSG, don't forget that you can have multiple storage groups. You absolutely should break up the 1000 users into multiple storage groups (recommended one db per SG). This would cut down on how much space you would need in the event you do need to use an RSG, as you would be working with a smaller dataset instead of one big one, which reduces the amount of free space required to leverage the RSG.

    You should break people up into SG's according to people who will more frequently receive the same messages to leverage single instance store, while also making sure you don't make some DB's far bigger than others. Also, the VIP's in your org who need faster recovery times in the event of a DR should be on smaller DB's to allow quicker restores.

    So basically, you agree with me. :)
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • HeroPsycho Inactive Imported Users Posts: 1,940
    Yep.

    Sorry, I skimmed through when replying.
    Good luck to all!
  • CoryS Member Posts: 208
    Yeah, that's pretty much where I was heading. I will be following best practice and splitting this design into two storage groups with 100 GB limits on the DBs. Thanks for confirming my design, guys. It's nice to be able to pick other people's brains.
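
    For anyone curious, the 200 MB-per-user cap can be set at the database level so the mailboxes inherit it; a rough sketch (the database identity and exact warning/prohibit thresholds are just placeholders):

        # Hypothetical identity and thresholds: per-database quotas around a 200 MB mailbox limit
        Set-MailboxDatabase -Identity "MBX01\SG1\DB1" `
            -IssueWarningQuota 180MB `
            -ProhibitSendQuota 200MB `
            -ProhibitSendReceiveQuota 220MB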

    Have a great weekend!
    MCSE tests left: 294, 297 |
  • royal Member Posts: 3,352
    CoryS, you could also try running Jetstress to simulate your disk type, RAID array, and number of users, and use the results to validate your storage architecture.
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • CoryS Member Posts: 208
    Good call! I was wondering if there were any recommended programs for this. Thanks for throwing that out there, I will most certainly do that.
    MCSE tests left: 294, 297 |
  • CoryS Member Posts: 208
    So here are the results from a slapped-together test. I have a test server with a newer Penryn 3.0 GHz CPU, 3 GB of RAM, a RAID 1 mirrored OS drive, and a RAID 5 of five 10k 146 GB drives for the DB. Just out of curiosity I installed the 32-bit version of Exchange (this is in my test lab). Watching Perfmon, everything looks nominal except for what appears to be a huge bottleneck on my drives, with an average disk queue length of around 14. This is SP1 of the 32-bit version, and since this is a fairly large RAID array (400+ GB) and a slapped-together test on a non-64-bit OS, I am curious whether I will see any gains on the real box, where I will actually be using fewer disks in the RAID but running the 64-bit version with much more RAM. I know it should cache more, meaning fewer reads, but I was wondering if anyone had any input on their setup or their use of this tool.

    To be quite honest, it scares me a bit, although I am sure our load will not be as extreme as what this test is emulating (I left everything at the defaults other than changing the paths for my DB and logs). Any input from you is more than welcome!

    *Edit* I ran a second test using one of the preconfigured databases (I created one for my test mailboxes); the queue length is now around 2-3, which is still a bit over the limit described in best practices.

    *2nd Edit* It looks like I will need to parse the data a bit more thoroughly; the report is very descriptive about the IOPS you are actually getting versus the amount it targets. Very handy. I think the workload this thing simulates is pretty aggressive. I noticed that earlier versions of this tool seemed to be more configurable in the GUI, but I think there are still command-line options you can set; I will have to look into it more or get some advice.
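
    In case it helps anyone else, the counters I'm watching can also be captured from the command line while the test runs; a rough example (counter list, interval, sample count, and output path are just what I picked):

        # Log the key physical disk counters every 15 seconds for an hour while Jetstress runs
        typeperf "\PhysicalDisk(*)\Avg. Disk Queue Length" `
                 "\PhysicalDisk(*)\Disk Transfers/sec" `
                 "\PhysicalDisk(*)\Avg. Disk sec/Read" `
                 "\PhysicalDisk(*)\Avg. Disk sec/Write" `
                 -si 15 -sc 240 -o C:\Perflogs\jetstress-disk.csv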

    Overall I think this tool is very handy; thanks again to Royal for recommending it.
    MCSE tests left: 294, 297 |
  • HeroPsycho Inactive Imported Users Posts: 1,940
    CoryS,

    32-bit is not a good test for this. Moving to 64-bit greatly reduces disk I/O load. That and support for far more memory were the principal reasons Microsoft moved to 64-bit only for Exchange 2007. From what I've read from developers within Microsoft, the Jet database, used in both Exchange and Active Directory, gained up to 60% in disk I/O efficiency, although I wouldn't necessarily expect you to see that much in the real world. It should still improve things greatly, however.
    Good luck to all!
  • royal Member Posts: 3,352
    Jetstress is only for simulation. The 32-bit vs. 64-bit versions just correspond to the host OS you're going to run Jetstress on. Jetstress wouldn't simulate a 32-bit Exchange box in production anyway, because 32-bit isn't supported in a production environment except for the management tools and export-mailbox/import-mailbox.
    “For success, attitude is equally as important as ability.” - Harry F. Banks
  • CoryS Member Posts: 208
    Fair enough. I have yet to get my hands on a system with the 64-bit version installed. I half assumed it would be comparing apples to oranges, but I had to give it a shot anyway :)

    Have a nice day!
    MCSE tests left: 294, 297 |