Openfiler in a production environment???

JBrown Member Posts: 308
I have a beefed-up standalone server that was taken off production (not powerful enough for our current needs) with 1.5TB RAID 5, 8 GB ECC RAM, and 2 Xeon 2.4 GHz CPUs. Since we cannot afford to move to a SAN at the moment, I was thinking of installing some SAN/NAS software on it and using it as a datastore for our ESXi 3.5 hosts, with VMotion and some other options. At the moment each ESXi host is using a local datastore to store its VMs.

So my question basically is: would you implement Openfiler or any other freeware in a production environment as an iSCSI-based SAN/NAS substitute?

How big is the difference between NetApp/EMC software running on a SAN and Openfiler-type software?

Comments

  • tiersten Member Posts: 4,505
    JBrown wrote: »
    So my question basically is: would you implement Openfiler or any other freeware in a production environment as an iSCSI-based SAN/NAS substitute?
    No. I use open source tools and distributions but if it is for a SAN, I'd go for a proper SAN solution for performance, reliability and support.
    JBrown wrote: »
    How big is the difference between NetApp/EMC software running on a SAN and Openfiler-type software?
    You can't compare them. OpenFiler is basically a regular file server with an iSCSI/NFS front end. OpenFiler doesn't do dedupe, for one.
  • HeroPsycho Inactive Imported Users Posts: 1,940
    If the crap hits the fan, would you put your job on the line that OpenFiler support can get you rolling again?

    Personally, I'll take NetApp, EMC, etc...
    Good luck to all!
  • Daniel333 Member Posts: 2,077 ■■■■■■□□□□
    Even assuming the performance was close enough to meet your needs, it would be a bad idea to make a five-year-old server a single point of failure in your high-availability ESX farm. Additionally, I don't believe ESXi supports VMotion; I think you need the full version for that, though I could be wrong.
    -Daniel
  • tiersten Member Posts: 4,505
    Daniel333 wrote: »
    Additionally, I don't believe ESXi supports VMotion; I think you need the full version for that, though I could be wrong.
    VMware decided to confusingly name their products. There is the free version of ESXi which doesn't support vCenter/VirtualCenter + related features and there is the paid version of ESXi which does. The free version of ESXi comes with a trial of the paid version features for 60 days.
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    I'm with Daniel333: you'd be nuts to make an old server the single point of failure for your entire virtual environment. Proper SAN storage has fully redundant storage processors and redundant paths from each SP to each disk, in addition to the usual redundant NICs, power supplies, fans, etc. that you would find in an industry-standard server.

    Do the business case for shared storage and figure out if it makes economical sense or not to buy a real SAN array.
  • JBrown Member Posts: 308
    HeroPsycho wrote: »
    If the crap hits the fan, would you put your job on the line that OpenFiler support can get you rolling again?

    Personally, I'll take NetApp, EMC, etc...

    Good point on the support. I did some inquiries on NetApp and it costs at least $60K, even with academic pricing. ;(

    Daniel333 wrote: »
    Even assuming the performance was close enough to meet your needs, it would be a bad idea to make a five-year-old server a single point of failure in your high-availability ESX farm. Additionally, I don't believe ESXi supports VMotion; I think you need the full version for that, though I could be wrong.

    Up until a few days ago I was running the free version of ESXi 3.5 on a single-CPU 1U unit, but we recently received a few servers with 2 CPUs and are in the process of ordering ESXi 3.5 Standard or Enterprise edition. If I remember correctly, there was not such a big difference between the Standard and Enterprise editions for us (an academic institution). So I was wondering if it's worth getting the Enterprise version and using Openfiler as an iSCSI target.

    I guess I will stick with local datastore for time being.
  • JBrown Member Posts: 308
    tiersten wrote: »
    VMware decided to confusingly name their products. There is the free version of ESXi which doesn't support vCenter/VirtualCenter + related features and there is the paid version of ESXi which does. The free version of ESXi comes with a trial of the paid version features for 60 days.

    Just their naming and then renaming everything every 12 months makes my head spin. It's not even possible to locate the proper feature lists for their products on their website anymore; I have to rely on Google's cache ;(
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    Have you looked at LeftHand SANs at all?
  • HeroPsycho Inactive Imported Users Posts: 1,940
    JBrown wrote: »
    Good point on the support. I did some inquiries on NetApp and it costs at least $60K, even with academic pricing. ;(

    It's because it's worth it. :lol:

    +1 for LeftHand and DataCore SANmelody as cheaper solutions, although I still recommend a hardware device. Both Dell and HP make inexpensive SANs.
    Good luck to all!
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    tiersten wrote: »
    VMware decided to confusingly name their products. There is the free version of ESXi which doesn't support vCenter/VirtualCenter + related features and there is the paid version of ESXi which does. The free version of ESXi comes with a trial of the paid version features for 60 days.

    It is the same install - just a different serial / license.

    As for Openfiler, when I installed a test cluster, a WUDSS server installed in a virtual machine was more powerful than Openfiler on a dedicated server. The I/O REALLY is horrible on Openfiler. It might be OK as file storage, but I wouldn't let it near an ESX infrastructure.

    Fair enough for testing, but not for production ..
    My own knowledge base made public: http://open902.com :p
  • JBrown Member Posts: 308
    We stopped looking into it after my manager "found out" that a SAN does not cost $10,000 but at least $35K - that was for a NetApp FAS2020 with 2GB RAM and RAID-DP - and he just killed the whole idea of moving to a SAN in the near future. That was right before the economy went south.

    I attended a few presentations at Microsoft HQ in Manhattan - they were promoting Hyper-V with LeftHand's storage - and I was very impressed with their hardware and software. Especially their site-to-site near-real-time hot backup option.
  • tiersten Member Posts: 4,505
    JBrown wrote: »
    We stopped looking into it after my manager "found out" that a SAN does not cost $10,000 but at least $35K - that was for a NetApp FAS2020 with 2GB RAM and RAID-DP - and he just killed the whole idea of moving to a SAN in the near future. That was right before the economy went south.
    The list price is high but if you're a big enough company, they'll do discounts for you. All of our NetApps are actually IBM branded nSeries boxes.
    JBrown wrote: »
    I attended a few presentations at Microsoft HQ in Manhattan - they were promoting Hyper-V with LeftHand's storage - and I was very impressed with their hardware and software. Especially their site-to-site near-real-time hot backup option.
    Hope you've got a big pipe between your sites...
  • remyforbes777 Member Posts: 499
    We use LeftHand storage clusters here. I would look into that. All resources are basically aggregated every time you add a node to the cluster.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Mind you, we have built a couple of iSCSI SANs for $5K based on WUDSS (a couple of TB), where most of the money is spent on disks (well, that and $600 for the WUDSS license). I think our hardware solution, based on Axstor, is also about $30-40K.
    My own knowledge base made public: http://open902.com :p
  • tiersten Member Posts: 4,505
    Gomjaba wrote: »
    It is the same install - just a different serial / license.
    I know. They could have made it a little clearer or at least appended something to the name of the free version of ESXi.
    Gomjaba wrote: »
    As for Openfiler, when I installed a test cluster, a WUDSS server installed in a virtual machine was more powerful than Openfiler on a dedicated server. The I/O REALLY is horrible on Openfiler. It might be OK as file storage, but I wouldn't let it near an ESX infrastructure.
    OpenFiler is a regular file server which has iSCSI bolted on. The LUNs that get exported via iSCSI are actually files sitting on a regular file system, and the extra abstraction this adds hurts performance. You can implement iSCSI properly with Linux, however; it is just how OpenFiler currently works (or at least used to work) that makes it unsuitable for a production ESX system.
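    For example, a minimal block-backed export on plain Linux with iSCSI Enterprise Target would look something like this (the IQN, volume group and LUN path here are made up for illustration):

    ```
    # /etc/ietd.conf (iSCSI Enterprise Target)
    # Export an LVM logical volume directly as a block device,
    # avoiding the extra file-on-filesystem layer OpenFiler adds.
    Target iqn.2009-06.com.example:esx.datastore1
        Lun 0 Path=/dev/vg0/esx_lun0,Type=blockio
    ```

    With Type=blockio the initiator talks to the raw logical volume; OpenFiler's file-backed LUNs are effectively the Type=fileio case.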
  • JBrown Member Posts: 308
    Gomjaba wrote: »
    Mind you, we have built a couple of iSCSI SANs for $5K based on WUDSS (a couple of TB), where most of the money is spent on disks (well, that and $600 for the WUDSS license). I think our hardware solution, based on Axstor, is also about $30-40K.

    Would you mind sharing the $5K setup ?
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    JBrown wrote: »
    Would you mind sharing the $5K setup ?

    Sure,

    One example of server we sell

    SuperMicro based system
    Xeon 5410
    4GB Ram
    Adaptec 5805 + BBU
    Seagate Barracuda NS.2 SAS 1TB x4 in Raid5

    £3,180.88 + VAT (approx. $5,200 plus tax)

    OK, it is a bit more expensive as the WUDSS license (Windows Unified Data Storage Server) is not included, which is another approx. £550.00 ($900 - more expensive than I first thought) ...

    This would be a 2U system (2-socket, 8-bay). Going 1U, with a Core 2 Duo CPU and an Adaptec 2405 (4-bay chassis) and the same disk setup, comes to about two thirds of that (£1,900 / $3,200) plus the WUDSS license.

    Clearly, for performance you would have to go for a different disk layout and different disks. The 1TB spindles only spin at 7,200 rpm (even though they are SAS), and RAID 5 would hardly cut it in terms of write speeds.
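    As a back-of-envelope sketch (the numbers are rough rules of thumb, not measurements): a 7,200 rpm spindle delivers on the order of 80 random IOPS, and a small random write on RAID 5 costs about four back-end I/Os (read data, read parity, write data, write parity):

    ```python
    def raid5_random_write_iops(disks, iops_per_disk, write_penalty=4):
        """Rough front-end random-write IOPS for a RAID 5 set."""
        return disks * iops_per_disk / write_penalty

    # 4 x 7,200 rpm disks at ~80 IOPS each:
    print(raid5_random_write_iops(4, 80))  # 80.0 - no better than a single disk
    ```

    So for small random writes the 4-disk RAID 5 set is roughly no faster than one disk, which is why this layout hardly cuts it for an ESX datastore.
    
    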

    I am sure you can get very good performance using 72GB 15k rpm SAS disks ..

    But it is no secret that dedicated SANs have better performance due to their smaller footprint.

    The best price / performance SAN we found (and use) is Axstor iSCSI SAN ...

    In case you don't know WUDSS: the 'Unified' part is the iSCSI target, by the way. You can now get the 2008 version on MSDN/TechNet. That won't be licensed for public production use though; you can still only get WUDSS from an OEM partner with approved hardware ..

    Oh, and our systems come with a 3-year warranty with a 4-hour hardware fix (24/7/365) - just in case you want to compare the system with Dell/IBM etc. :p:p
    My own knowledge base made public: http://open902.com :p
  • JBrown Member Posts: 308
    Gomjaba wrote: »
    Sure,

    One example of server we sell

    SuperMicro based system
    Xeon 5410
    4GB Ram
    Adaptec 5805 + BBU
    Seagate Barracuda NS.2 SAS 1TB x4 in Raid5

    £3,180.88 + VAT (approx. $5,200 plus tax)

    OK, it is a bit more expensive as the WUDSS license (Windows Unified Data Storage Server) is not included, which is another approx. £550.00 ($900 - more expensive than I first thought) ...

    Thank you for the info. We usually get the Microsoft software for 25%-30% of the price, so it should not be an issue.
    I guess next project for me is testing WUDSS with the previously mentioned server.
  • tiersten Member Posts: 4,505
    JBrown wrote: »
    Thank you for the info. We usually get the Microsoft software for 25%-30% of the price, so it should not be an issue.
    You can't just buy WUDSS by itself. You're supposed to buy it in a package with hardware.
    JBrown wrote: »
    I guess next project for me is testing WUDSS with the previously mentioned server.
    You'll still have the problem about a single point of failure in your ESX cluster. The entire cluster will die if your old server goes...
  • JBrown Member Posts: 308
    tiersten wrote: »
    You can't just buy WUDSS by itself. You're supposed to buy it in a package with hardware.


    You'll still have the problem about a single point of failure in your ESX cluster. The entire cluster will die if your old server goes...

    I have decided to wait for better times, when we can get a real SAN. For now I am going to set it up as a test environment; Gomjaba has a nice how-to for the whole thing. I want to set it up as close to a production system as possible, using real PCs and switches.