
HELP on vmware raid 1 rebuild?

itdaddy Member Posts: 2,089 ■■■■□□□□□□
Hey guys, how long does it take for a RAID 1 to rebuild?
I mean, it has about 350 GB to rebuild. We had a drive fail on a VM server.
It's RAID 1, and I popped in a same-size drive from a different manufacturer; that shouldn't matter as long as it's SATA, right? It looks like it's rebuilding, but that darn whistle (alarm) keeps blowing from the RAID being degraded, of course. But how long should it take? It has been running for about 40 hours. How much longer, or do I have to reboot to shut the alarm off?
Should it really take this long to rebuild? You would think it would take at most 24 hours, but since it's bus-to-bus, so to speak, shouldn't it be faster? I see the lights flicker on the second HDD that I replaced, but could it take this long? And how much longer?
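The rebuild-time question can be sanity-checked with back-of-the-envelope math. A minimal sketch (function name and throughput figures are illustrative assumptions; real rebuild times vary with controller, rebuild priority, and concurrent VM I/O load):

```python
# Rough RAID 1 rebuild-time estimate: a rebuild is essentially a
# sequential copy of the whole mirror, so time ~= size / throughput.
def rebuild_hours(size_gb: float, throughput_mb_s: float) -> float:
    """Hours to copy size_gb at a sustained throughput in MB/s."""
    return size_gb * 1024 / throughput_mb_s / 3600

# 350 GB at a conservative 30 MB/s (heavily loaded 7200 rpm SATA):
print(round(rebuild_hours(350, 30), 1))   # -> about 3.3 hours

# Even at a pessimistic 10 MB/s it is only about 10 hours:
print(round(rebuild_hours(350, 10), 1))   # -> about 10.0 hours
```

By this math, a 350 GB mirror "still rebuilding" after 40 hours most likely never actually started rebuilding, which is consistent with the answers below.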

Comments

  • leefdaddy Member Posts: 405
    Are you sure you don't need to boot into the RAID BIOS and start the rebuild yourself? Did you just replace the drive hoping it would kick off automatically?

    What type of server is it?
    Dustin Leefers
  • itdaddy Member Posts: 2,089 ■■■■□□□□□□
    Yes, shouldn't it be hot-swappable? Yes, it is a Supermicro.
    I would have thought it would auto-rebuild; I didn't think it would take so long.

    I talked to a tech and he said I might have to reboot.

    And another issue, with another RAID card: I have a machine that I want as RAID 10,
    so I configured four drives as RAID 10 in the RAID BIOS. Yep, on boot-up it shows
    two pairs of two drives:
    2 drives - subunit mirror
    2 drives - subunit mirror

    But when I bring up Disk Director, it shows 1.3 TB of unused space available.
    Shouldn't I only have about 965 GB or 1 TB if I have four 500 GB drives?

    So I have two issues: one with what I thought was a hot-swap, and the other is an array showing
    300 GB too much space for a RAID 10. Has anyone here experienced this?
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    You just say Supermicro - are you using the onboard RAID? If so, the server will need a reboot, as onboard RAID is basically just "fake RAID" and doesn't rescan the bus automatically. If you have a RAID card fitted, it depends on what kind. Supermicro's own-brand cards are based on LSI, and they support hot-swap, so they rescan the bus as soon as you throw that disk in. If you use Adaptec, it should also rescan, but I have seen occasions where you have to initialize the disk first, and sometimes even make it a global hot spare, before the card will use the disk to rebuild the array.

    Also bear in mind that since you are running VMs on the array, the I/O load is quite high - hence the rebuild time is massively increased. You will have to make sure that the rebuild priority is set to high to get some sort of good performance out of this. Plus, SATA drives are mostly 7200 rpm, so that doesn't help speed-wise.

    4x 500 GB in RAID 10 should indeed give you just under 1 TB of available space. Why it sees 1.3 TB, I honestly can't tell - not without a lot more information.

    Since you say "two pairs of two drives", it really sounds like LSI, as they have a really funky way of creating RAID 10s ...
    My own knowledge base made public: http://open902.com :p
  • itdaddy Member Posts: 2,089 ■■■■□□□□□□
    Gomjaba

    Thanks, man. Yeah, it looks like I am going to have to rebuild on reboot. I have let it run three days and, crap, it should have been done by now.
    Rebooting it tonight and going to look in the RAID BIOS for a REBUILD option with high priority. Thanks

    for your help... will report back what I find.
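On the RAID 10 capacity question in the thread: RAID 10 stripes across mirrored pairs, so usable space is half the raw total. A minimal sketch of the arithmetic (function name is mine; the RAID 5 note at the end is a hedged guess, not a confirmed diagnosis):

```python
# RAID 10 usable capacity: data is striped across mirrored pairs,
# so usable space is half the raw total (before filesystem overhead).
def raid10_usable_gb(drive_gb: float, n_drives: int) -> float:
    assert n_drives % 2 == 0, "RAID 10 needs an even number of drives"
    return drive_gb * n_drives / 2

print(raid10_usable_gb(500, 4))  # -> 1000.0, i.e. just under 1 TB

# Note: drive vendors count decimal GB (10**9 bytes), so a "500 GB"
# drive shows as ~465 GiB in most OS tools; a correct 4-drive RAID 10
# would therefore appear as roughly 931 GiB. A reading of 1.3 TB is
# closer to a 4-drive RAID 5 (3 x 500 GB usable), which may be worth
# checking in the RAID BIOS.
```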