Do people still do blade servers?

cnfuzzd Member Posts: 208
Hi all

We are preparing to make a significant hardware outlay for a new Dynamics AX project. I haven't done hardware in a while. We are getting good specs from our existing metrics, but that still leaves some options:

I used to think that blade servers were so cute. Do we still use those?

It used to be that you could say "I ordered IBM because no one ever got fired for ordering IBM." I am assuming that all of those people have now been fired, and IBM has since stopped making x86 servers. Is there any vendor among the big three with a decent reputation?

SAN. Do you think I need a Fibre Channel SAN? I want it to be fast. How fast? If you have to ask me that question, it's not fast enough. Did iSCSI ever catch up in terms of speed, reliability, and acceptance?

Not really looking for hard recommendations, just spit-ballin'

Thanks!

John
__________________________________________

Work In Progress: BSCI, SharePoint

Comments

  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Buy a VBlock from VCE and call it a day.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • W Stewart Member Posts: 794 ■■■■□□□□□□
    I've never seen one while working in a data center, but I'd imagine that if you had a Xeon X-model performance CPU, you could benefit from using a blade server.
  • kurosaki00 Member Posts: 973
    Cisco UCS works fine.
    meh
  • Claymoore Member Posts: 1,637
    Why are you buying on-premises servers? Microsoft offers a hosted Dynamics solution in their cloud. If you just want to run your own servers, you could also build them in AWS or Azure.
  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    I manage 12 chassis of UCS blades, so I'd say so. :)
  • UnixGuy Mod Posts: 4,570
    Cisco UCS is very dominant in the market; you might wanna learn it.
    Certs: GSTRT, GPEN, GCFA, CISM, CRISC, RHCE

    Learn GRC! GRC Mastery: https://grcmastery.com

  • slinuxuzer Member Posts: 665 ■■■■□□□□□□
    These are all very relative questions. The primary reason for going with blades is density: for example, you can get 16 servers in 10U instead of 16U. With high-end I/O modules in your blade chassis you can drastically reduce port counts and cable counts, and you need fewer power outlets. What you give up with blades are PCI card slots and drive slots; a typical blade has two drive slots, so not a ton of onboard storage.
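    Rough back-of-the-envelope on the density and cabling point. Every count below is just an illustrative assumption; plug in your actual chassis, I/O module, and NIC/HBA layout:

        # Illustrative comparison: 16 standalone rack servers vs. 16 blades in one chassis.
        # Every number here is an assumption -- substitute your own hardware's values.

        RACK_SERVERS = 16

        # Standalone 1U rack servers: each needs its own cables and power cords.
        rack_u          = RACK_SERVERS * 1     # 16U of rack space
        rack_net_cables = RACK_SERVERS * 2     # assume 2x 10GbE per server
        rack_fc_cables  = RACK_SERVERS * 2     # assume 2x FC per server
        rack_power      = RACK_SERVERS * 2     # assume 2 power cords per server

        # Blade chassis: the I/O modules uplink on behalf of all the blades.
        blade_u          = 10                  # e.g. a 10U enclosure
        blade_net_cables = 8                   # assumed Ethernet module uplinks
        blade_fc_cables  = 4                   # assumed FC module uplinks
        blade_power      = 6                   # assumed chassis power cords

        print(f"Rack servers : {rack_u}U, {rack_net_cables + rack_fc_cables} data cables, {rack_power} power cords")
        print(f"Blade chassis: {blade_u}U, {blade_net_cables + blade_fc_cables} data cables, {blade_power} power cords")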

    Fibre Channel adds significant cost, but there are several good reasons for going that way: if you have an existing investment in storage that only does FC, go FC; if your workload requires very fast failover during a path failure, go FC. My guess is that neither of these is the case. I've deployed IP storage using iSCSI (or sometimes NFS) many times. Be careful here, though, and make sure you get a switch that can handle the load; I've seen significant issues where people tried to use ancient switches for IP storage. Typically I like 10-gigabit top-of-rack switches for IP storage scenarios. Cisco UCS, I *think*, requires Cisco Nexus; if nothing else it's going to work best on Nexus, and unless you already have that, it's expensive.

    Cisco UCS has its place, but I like HP C7000 as a blade platform.

    How fast a SAN you need comes down to your workload requirements. Your storage vendor should have a benchmarking tool that can collect metrics; I suggest you run it during peak times, typically the backup window, month-end closing, etc. The tool will typically collect IOPS, IO size, read/write distribution, and workload type (random or sequential), and your storage vendor should then be able to size a SAN for you. If the workload doesn't exist yet, I suggest an 8Gb SAN; most storage devices aren't up to 16Gb speeds yet, and the 16Gb switches are insanely expensive.
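    To make that sizing math concrete, here is a rough sketch of what you would do with the numbers the tool spits out. The workload figures are made up for illustration, and the 8Gb FC and 10GbE iSCSI rates are only approximate usable throughput after protocol overhead:

        # Turn collected metrics (IOPS, IO size, read/write mix) into bandwidth,
        # then sanity-check it against a single 8Gb FC or 10GbE iSCSI path.
        # The workload numbers below are placeholders, not real measurements.

        peak_iops  = 12_000      # measured at the backup window / month-end close
        io_size_kb = 32          # average IO size reported by the vendor tool
        read_pct   = 0.70        # 70% reads / 30% writes (reported for context)

        required_mb_s = peak_iops * io_size_kb / 1024    # MB/s needed at peak

        # Rough usable line rates (approximate, after encoding/protocol overhead):
        fc_8gb_mb_s     = 800    # ~8Gb FC usable payload
        iscsi_10ge_mb_s = 1100   # ~10GbE iSCSI after TCP/IP + iSCSI overhead

        print(f"Peak workload: {required_mb_s:,.0f} MB/s "
              f"({peak_iops:,} IOPS @ {io_size_kb} KB, {read_pct:.0%} read)")
        print(f"8Gb FC path  : {required_mb_s / fc_8gb_mb_s:.0%} of one link")
        print(f"10GbE iSCSI  : {required_mb_s / iscsi_10ge_mb_s:.0%} of one link")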

    Also, always start a solution with a power assessment: make sure you have enough power at the building, the bus, the breaker, and last but not least the PDU. Power is tricky and will bite you; don't ask how I know. Then look at cooling, because every watt you spend powering something, you spend another cooling it.
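    Same idea for the power check. The wattages and circuit specs below are illustrative assumptions (use your vendor's power calculator and your actual circuits), but the breaker-headroom and cooling arithmetic is the part that bites people:

        # Quick power and cooling sanity check for one rack.
        # All wattages and circuit specs are illustrative assumptions.

        chassis_watts  = 4500       # fully loaded blade enclosure (vendor calculator)
        tor_switches_w = 2 * 350    # two top-of-rack switches
        san_shelf_w    = 1200       # storage shelf on the same circuits

        total_watts = chassis_watts + tor_switches_w + san_shelf_w

        # Assume a 208V / 30A circuit, derated to 80% for continuous load.
        circuit_volts, circuit_amps = 208, 30
        usable_watts_per_circuit = circuit_volts * circuit_amps * 0.80

        circuits_needed = -(-total_watts // usable_watts_per_circuit)   # ceiling division

        # Every watt of IT load becomes heat the cooling has to remove.
        btu_per_hour = total_watts * 3.412

        print(f"IT load        : {total_watts:,} W")
        print(f"Usable/circuit : {usable_watts_per_circuit:,.0f} W")
        print(f"Circuits needed: {int(circuits_needed)} (before A/B power redundancy)")
        print(f"Cooling load   : {btu_per_hour:,.0f} BTU/hr")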
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    And be siloed into a Vblock and never be able to change anything without VCE's blessing. With something like an HP blade infrastructure plus a 3PAR SAN, for example, you can do what you want, when you want (of course, you make sure it works before productionizing it).
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • slinuxuzer Member Posts: 665 ■■■■□□□□□□
    Well, the big guns have come out now :D #Essendon #Dave330i
  • techfiend Member Posts: 1,481 ■■■■□□□□□□
    Time for some popcorn. Why not node servers?
    2018 AWS Solutions Architect - Associate (Apr) 2017 VCAP6-DCV Deploy (Oct) 2016 Storage+ (Jan)
    2015 Start WGU (Feb) Net+ (Feb) Sec+ (Mar) Project+ (Apr) Other WGU (Jun) CCENT (Jul) CCNA (Aug) CCNA Security (Aug) MCP 2012 (Sep) MCSA 2012 (Oct) Linux+ (Nov) Capstone/BS (Nov) VCP6-DCV (Dec) ITILF (Dec)
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Rackmounts have their place too! I won't get sucked into this rackmounts vs blades debate; use case, requirements, and constraints will determine what you go with.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • joelsfood Member Posts: 1,027 ■■■■■■□□□□
    UCS does NOT require Nexus, despite many blog posts, sales rags, etc. claiming so. Don't believe the haters.

    That being said, it certainly works well with Nexus. :)

    FC vs iSCSI is less important these days than spinning disk vs SSD (with hybrid in the middle). That being said, FC was designed from the ground up as a lossless, storage-based protocol. iSCSI was also designed for storage, but it is built on top of TCP/IP, which isn't exactly known for being lossless, and retries tend not to be good for data streams.
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    I was going to post about the benefits of the VBlock and the issues I've had with HP support. Then I realized I didn't know if the OP is going physical or virtual. The VBlock business model breaks down when you go physical.
    2018 Certification Goals: Maybe VMware Sales Cert
    "Simplify, then add lightness" -Colin Chapman
  • discount81 Member Posts: 213
    My current job has insane storage and compute requirements. Everything is custom-built Supermicro servers, plus some Cisco UCS servers we got from the acquisition of another company. No real blades, and not much is virtualized, as we need every possible CPU and GPU to be available for the rendering farm.
    Buying brand name HP/Lenovo/Dell servers with the specs we need would be ridiculously expensive.

    At my last job we used rack-mount HPs; I had no complaints.

    The job prior to that was all IBM servers: very solid, a mixture of blades and rackmounts, though the blades seemed to be much more troublesome.

    And prior to that it was a place that was 100% Dell (rackmount, I assume? I never actually went to the datacenter; I did everything over IPMI and remote), which I wouldn't touch again with a 10-foot pole if I had the choice. Complete crap.
    http://www.darvilleit.com - a blog I write about IT and technology.
  • slinuxuzer Member Posts: 665 ■■■■□□□□□□
    Well Dave, I will admit that has always been the downside to HP: their phone support people are lacking, unless you buy a really high support tier, of course. My solution was always to use the HP warranty and then use an SMS or Service Express type company. Nothing like having an argument with tier 1 support over sending me a hard drive while management is breathing down your neck.