Servers


Comments

  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    Mishra wrote:
    Our 3PAR SAN has a Supermicro as its service controller. They used to have Dell. Pretty interesting.
    How do you guys like your 3PAR? I love the term "chunklets" :lol:
  • tiersten Member Posts: 4,505
    mattrgee wrote:
    I'm currently using IBM and finding them incredibly unreliable!
    Odd. Are you using the servers in a particularly harsh environment?
  • Mishra Member Posts: 2,468 ■■■■□□□□□□
    astorrs wrote:
    Mishra wrote:
    Our 3PAR SAN has a Supermicro as its service controller. They used to have Dell. Pretty interesting.
    How do you guys like your 3PAR? I love the term "chunklets" :lol:

    Yeah, the engineer said it's a "southern California term". lol

    3PAR's biggest selling point is that the SAN sorts your volumes into chunklets and spreads them across the entire array. Most other SANs make you figure out the layout yourself (the rough idea is sketched just after this post).

    They also have most of the features the other SAN vendors have. I haven't dealt with another enterprise SAN yet, but we have the T800 from 3PAR, which is their biggest model. We didn't buy many of the optional features like thin provisioning.

    As of right now I don't have any complaints... I didn't see the price of the SAN, or I might complain about that. I don't like that there's no overall health light on the front of the chassis. I do like that you can script anything in their CLI. I don't like that they don't really have a central way to manage multiple SANs.

    But we haven't quite started setting up LUNs on it, so we aren't deep into it yet.
    My blog http://www.calegp.com

    You may learn something!
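    To make the chunklet idea concrete, here is a toy sketch of how an array might carve volumes into fixed-size chunklets and wide-stripe them across every disk in the system. The 256 MB size matches what 3PAR documented for arrays of this era, but the class and allocation logic below are invented for illustration and are not 3PAR's actual implementation.

    ```python
    # Toy model of chunklet-style wide striping (illustrative, not 3PAR's code).
    # A volume is carved into fixed-size "chunklets", and each chunklet lands on
    # whichever physical disk currently holds the fewest, so every volume ends
    # up striped across the whole array without the admin planning a layout.

    CHUNKLET_MB = 256  # 3PAR-era chunklet size; everything else is invented

    class ToyArray:
        def __init__(self, num_disks):
            self.disks = {d: [] for d in range(num_disks)}  # disk -> chunklets

        def create_volume(self, name, size_mb):
            num_chunklets = -(-size_mb // CHUNKLET_MB)  # ceiling division
            placement = []
            for i in range(num_chunklets):
                # least-loaded disk first: wide striping falls out naturally
                disk = min(self.disks, key=lambda d: len(self.disks[d]))
                self.disks[disk].append((name, i))
                placement.append(disk)
            return placement

    array = ToyArray(num_disks=8)
    print(array.create_volume("vol0", 2048))  # chunklets spread over all 8 disks
    print(array.create_volume("vol1", 1024))  # interleaves with vol0's chunklets
    ```

    The point of the least-loaded placement is that the admin never decides which spindles back which volume; the array balances itself as volumes are created.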
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    Mishra wrote:
    Yeah, the engineer said it's a "southern California term". lol

    3PAR's biggest selling point is that the SAN sorts your volumes into chunklets and spreads them across the entire array. Most other SANs make you figure out the layout yourself.

    They also have most of the features the other SAN vendors have. I haven't dealt with another enterprise SAN yet, but we have the T800 from 3PAR, which is their biggest model. We didn't buy many of the optional features like thin provisioning.

    As of right now I don't have any complaints... I didn't see the price of the SAN, or I might complain about that. I don't like that there's no overall health light on the front of the chassis. I do like that you can script anything in their CLI. I don't like that they don't really have a central way to manage multiple SANs.

    But we haven't quite started setting up LUNs on it, so we aren't deep into it yet.
    Yeah, chunklets, wide-striping and micro-RAID. It's actually a very interesting idea, and one I'm curious to see whether others duplicate. Marc Farley has a good video about it on his blog. :)

    A lot of SAN vendors don't have any visual indicator (they expect the arrays to be running in a co-lo half the time); instead they usually have "ET phone home" functionality, which lets the array email you about any problems it detects in hardware, cache, etc. (the rough pattern is sketched below).

    Let me know what you guys think once you really start using it.
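    The "phone home" behavior described above is essentially the array emailing its own alerts. Here is a minimal sketch of that pattern, assuming an internal SMTP relay; the relay host, addresses, and the self-check function are hypothetical placeholders, not any vendor's actual implementation.

    ```python
    # Minimal "phone home" sketch: a device emails its operator (and optionally
    # the vendor) when a self-check finds a fault. The SMTP relay and all
    # addresses below are hypothetical placeholders.
    import smtplib
    from email.message import EmailMessage

    SMTP_RELAY = "mail.example.com"          # assumption: internal mail relay
    OPERATOR = "storage-admins@example.com"  # assumption: alert recipient

    def check_hardware():
        """Stand-in for the array's self-diagnostics; returns a list of faults."""
        return ["cache battery degraded on controller 1"]  # fake finding

    def phone_home(faults):
        msg = EmailMessage()
        msg["Subject"] = f"Array alert: {len(faults)} fault(s) detected"
        msg["From"] = "array01@example.com"
        msg["To"] = OPERATOR
        msg.set_content("\n".join(faults))
        with smtplib.SMTP(SMTP_RELAY) as smtp:
            smtp.send_message(msg)

    faults = check_hardware()
    if faults:
        phone_home(faults)
    ```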
  • jamesp1983 Member Posts: 2,475 ■■■■□□□□□□
    I like HP. I've been around Intel, HP, and Dell servers and by far HP is my favorite. They have really nice utilities and a powerful product line.
    "Check both the destination and return path when a route fails." "Switches create a network. Routers connect networks."
  • hypnotoad Banned Posts: 915
    Gateway. Gateway is the best company eveeeeeee

    NO CARRIER.
  • tiersten Member Posts: 4,505
    hypnotoad wrote:
    Gateway. Gateway is the best company eveeeeeee
    I like their boxes with the Friesian cow print...
  • Kaminsky Member Posts: 1,235
    tiersten wrote:
    hypnotoad wrote:
    Gateway. Gateway is the best company eveeeeeee
    I like their boxes with the Friesian cow print...

    You ever notice how many IT departments have a "cow" box in their storeroom, usually right next to the window?

    Worked in many depts, and every time... cow box...
    Kam.
  • tiersten Member Posts: 4,505
    Kaminsky wrote:
    You ever notice how many IT departments have a "cow" box in their storeroom, usually right next to the window?

    Worked in many depts, and every time... cow box...
    Spooky. We do have a cow box in the IT storeroom...
  • undomiel Member Posts: 2,818
    tiersten wrote:
    Spooky. We do have a cow box in the IT storeroom...

    I can say the same about my former job as well!
    Jumping on the IT blogging band wagon -- http://www.jefferyland.com/
  • nel Member Posts: 2,859 ■□□□□□□□□□
    To be honest, all companies have their positives and negatives.

    I've worked with HPs that were crap and Dells that were crap, and times when they were both great.

    One thing I would say is you get what you pay for - if you hit an issue with either one, both will support you well as long as you're paying the £££s.

    That's what it comes down to: the money. You can talk about build quality etc. but it's the luck of the draw.

    Both companies are leeches whichever way you look at it!

    Just make sure you don't buy Time PCs :P
    Xbox Live: Bring It On

    Bsc (hons) Network Computing - 1st Class
    WIP: Msc advanced networking
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    I think HP has a slight edge over Dell, I guess, but not enough of an edge to overcome the deep discount that our company gets from being part of a 60,000-strong organization that has exclusive contracts with Dell.

    The IBMs that I've worked with have been crap, but admittedly that was over 5 years ago.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • msright1981 Member Posts: 3 ■□□□□□□□□□
    Hi,

    With all of you arguing about HP & Dell blades, I thought I would throw in the following comparison between the two: HP BladeCenter vs Dell BladeCenter. I hope this will spark a huge argument and further competition in here over who the clear winner of the blades is.
    The grass always looks greener on the other side.

    SUN Bladecenter vs HP BladeCenter
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    I don't see a "clear winner" there either... HP has an advantage in that it can fail over to another blade if you boot from SAN, but you have to purchase their switches to use it. Other than that, I don't see much of a difference. The article says that Dell doesn't offer any third-party modules, but I know for a fact they offer integrated Cisco switches in place of the pass-through modules.

    HP, slight edge, if you want to switch to their SAN switches.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    blargoe wrote:
    I don't see a "clear winner" there either... HP has an advantage in that it can fail over to another blade if you boot from SAN, but you have to purchase their switches to use it. Other than that, I don't see much of a difference. The article says that Dell doesn't offer any third-party modules, but I know for a fact they offer integrated Cisco switches in place of the pass-through modules.

    HP, slight edge, if you want to switch to their SAN switches.
    IBM can do the same blade failover as HP, but can fail over to blades in any one of up to 100 chassis, not just the same chassis (as with HP)... oh, and you can use any vendor's switch/FC modules with Open Fabric Manager - no need for HP's proprietary VirtualConnect switches. So +1 for IBM (the basic failover idea is sketched just after this post).

    I will argue IBM BladeCenter over either HP or Dell blades to the end of sanity if anyone wants to, but we should probably open up another topic if you want to go down that path. ;)
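    To make the boot-from-SAN failover discussion concrete: because the blade's OS lives on a SAN LUN and the SAN identifies the blade only by its HBA's WWN, "failing over" mostly amounts to re-masking the boot LUN to a spare blade's WWN and powering the spare on. The sketch below is purely conceptual; every class and method is a hypothetical stand-in, not IBM's Open Fabric Manager or HP's VirtualConnect API.

    ```python
    # Conceptual boot-from-SAN blade failover. All names here are hypothetical
    # stand-ins for whatever the chassis/SAN management layer actually exposes.
    from dataclasses import dataclass, field

    @dataclass
    class Blade:
        name: str
        wwn: str           # the HBA identity the SAN actually sees
        powered: bool = False

    @dataclass
    class San:
        # LUN masking: LUN name -> set of WWNs allowed to see it
        masking: dict = field(default_factory=dict)

        def mask(self, lun, wwn):
            self.masking.setdefault(lun, set()).add(wwn)

        def unmask(self, lun, wwn):
            self.masking.get(lun, set()).discard(wwn)

    def fail_over(boot_lun, failed, spare, san):
        failed.powered = False            # 1. fence the dead blade
        san.unmask(boot_lun, failed.wwn)  # 2. revoke its view of the boot LUN
        san.mask(boot_lun, spare.wwn)     #    ...and grant it to the spare
        spare.powered = True              # 3. spare boots the same OS image

    san = San()
    blade1 = Blade("blade1", "50:01:43:80:aa:bb:cc:01", powered=True)
    spare = Blade("blade14", "50:01:43:80:aa:bb:cc:0e")
    san.mask("boot-lun-7", blade1.wwn)

    fail_over("boot-lun-7", blade1, spare, san)
    print(san.masking)  # {'boot-lun-7': {'50:01:43:80:aa:bb:cc:0e'}}
    ```

    The cross-chassis difference the posters are debating is just a question of scope: whether the management layer can perform this remapping only within one chassis or across a whole group of them.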
  • jbaello Member Posts: 1,191 ■■■□□□□□□□
    I would get a powerful 2U server with max memory and install VMware ESX; now all my servers are so easily managed, etc. Two servers/nodes would be so awesome so you can play with VMotion... I am trying to purchase a decommissioned server at work; I want to get a PowerEdge 2850. ESX is all you'll need for your server lab, then connect it to DAS SCSI storage, and you're AWESOME :)
  • blargoe Member Posts: 4,174 ■■■■■■■■■□
    astorrs wrote:
    blargoe wrote:
    I don't see a "clear winner" there either... HP has an advantage in that it can fail over to another blade if you boot from SAN, but you have to purchase their switches to use it. Other than that, I don't see much of a difference. The article says that Dell doesn't offer any third-party modules, but I know for a fact they offer integrated Cisco switches in place of the pass-through modules.

    HP, slight edge, if you want to switch to their SAN switches.
    IBM can do the same blade failover as HP, but can fail over to blades in any one of up to 100 chassis, not just the same chassis (as with HP)... oh, and you can use any vendor's switch/FC modules with Open Fabric Manager - no need for HP's proprietary VirtualConnect switches. So +1 for IBM.

    I will argue IBM BladeCenter over either HP or Dell blades to the end of sanity if anyone wants to, but we should probably open up another topic if you want to go down that path. ;)

    I would too (on paper, not from personal experience), but IBM wasn't part of the comparison that was just linked above ;)
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    blargoe wrote:
    I would too (based on paper, not personal experience), but IBM wasn't part of the comparison that was just linked above ;)
    LOL ;)
  • Slowhand Mod Posts: 5,161
    cnfuzzd wrote:
    As a point of curiosity, why does everyone seem to lean towards the AMD chips?
    I can't speak for everyone here, but the number one reason I favor AMD over Intel is cost. I've found AMD systems to cost less most of the time, and the performance is top-notch. Even Sun has made the switch from their own hardware to Opterons for some of their servers. Granted, these days they carry Intel systems as well, but the initial move from SPARC-only servers to x86 and x64 systems was with AMD.

    Free Microsoft Training: Microsoft Learn
    Free PowerShell Resources: Top PowerShell Blogs
    Free DevOps/Azure Resources: Visual Studio Dev Essentials

    Let it never be said that I didn't do the very least I could do.