raid 5 question

I want to set up RAID 5 for testing. I have 3 x 10 GB HDDs. As I understand it, data and parity are striped across all the HDDs, therefore if ANY one HDD fails you can rebuild from the other two?

Thanks.

Just starting to gather info for Server 2003!
Remember I.T. means In Theory ( it should work )
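The parity idea in the question can be sketched in a few lines of Python. This is only a toy illustration of the XOR principle, not how a real controller lays out stripes:

```python
# Two "data drives" and one "parity drive" holding the XOR of the data.
# XOR-ing any two survivors recovers the third, which is why a 3-disk
# RAID 5 array survives the loss of ANY single disk.
a = bytes([1, 2, 3, 4])
b = bytes([5, 6, 7, 8])
parity = bytes(x ^ y for x, y in zip(a, b))

# Simulate losing "drive a": rebuild it from b and the parity.
rebuilt_a = bytes(x ^ y for x, y in zip(b, parity))
assert rebuilt_a == a  # the lost drive's contents are fully recovered
```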

Comments

  • CessationCessation Member Posts: 326
    amyamandaallen wrote:
    I want to set up RAID 5 for testing. I have 3 x 10 GB HDDs. As I understand it, data and parity are striped across all the HDDs, therefore if ANY one HDD fails you can rebuild from the other two?

    Thanks.

    Just starting to gather info for Server 2003!

    You have it correct from what I understand.
    Hope I pass before you do =P
    GL =)
    Cess
    A+, MCP(270,290), CCNA 2008.
    Working back on my CCNA and then possibly CCNP.
  • FijianTribeFijianTribe Member Posts: 62 ■■□□□□□□□□
    That is correct. With 3 drives you can have one fail, and the array runs in a degraded state. What your options are next depends on your RAID controller. Hot-swap controllers are sweet in that you can just pull out the failed drive and put in a new one, and normally the controller starts the rebuild automatically. Performance during the rebuild is heavily degraded.

    Also note that with 3 x 10 GB drives you do not get 30 GB. You would get something like 26 GB (<- I know that's not the exact number)....

    The nice option is to have 4 drives: 3 in RAID 5 and the 4th as a hot spare, meaning your RAID controller notices a drive has died and automatically begins rebuilding onto the 4th. Then theoretically you can have 2 drives fail without losing data (as long as it's not at the same time), say if you go on vacation and no one can man the IT stuff while you're gone. One dies on Monday... the 4th is activated... then on Saturday a second one dies... and on Monday when you come back you have a degraded RAID 5 system that needs two drives replaced.

    You can also in most cases specify more than one hot spare, but I have only ever used one.
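The vacation timeline above can be sketched as a toy simulation (all disk names invented; real controllers track rebuild progress, which this ignores):

```python
# A 3-member RAID 5 set survives any single failure. With a hot spare,
# the controller rebuilds onto the spare, so a later second failure
# still leaves two members and the data survives.
members = {"disk1", "disk2", "disk3"}
spares = ["disk4"]

def fail_drive(disk):
    members.discard(disk)          # array is now degraded
    if spares:                     # controller rebuilds onto a spare
        members.add(spares.pop())  # back to 3 members once rebuilt

fail_drive("disk1")   # Monday: the spare takes over
fail_drive("disk2")   # Saturday: no spare left, array degraded
# Still 2 of 3 members remain, so RAID 5 can reconstruct all data.
assert len(members) == 2
```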
  • SWMSWM Member Posts: 287
    FijianTribe wrote:
    Also note that with 3 x 10 GB drives you do not get 30 GB. You would get something like 26 GB (<- I know that's not the exact number)....

    FijianTribe, my calculations suggest that 3 x 10 GB drives in RAID 5 = 20 GB of storage space?
    Isn't Bill such a Great Guy!!!!
  • amyamandaallenamyamandaallen Member Posts: 316
    Thanks for the replies.

    I am only going to be using the software raid options within 2003 server, as that will be tested on the exam.

    It did seem quite daunting at first, but it looks a lot easier once you start playing with it.

    Any tips or known pitfalls guys?
  • panikpanik Member Posts: 61 ■■□□□□□□□□
    You can't boot off a software RAID array, as the drivers for the RAID don't start up until after the OS is running, so you will need the RAID disks, and an additional disk to use for the OS.

    Hardware RAID is faster.

    RAID 5 with 3 disks uses the equivalent of one disk for parity (the parity itself is distributed across all three), so with 3 x 10 GB drives there is 20 GB of usable space.
  • blargoeblargoe Self-Described Huguenot NC, USAMember Posts: 4,174 ■■■■■■■■■□
    It's true that the boot volume can't be software raid 5 (or raid 0), but you can use raid 1.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • amyamandaallenamyamandaallen Member Posts: 316
    blargoe wrote:
    It's true that the boot volume can't be software raid 5 (or raid 0), but you can use raid 1.

    BUGGER! Didn't know that.

    OK, if my 1st 10 GB drive holds the operating system/boot partition, could I mirror that to the second drive, and use the remaining 'data' space on the 1st drive to RAID 5 my data with the unallocated parts of my 2nd and 3rd drives? That way, if my original drive dies, I have a mirror to boot from (on the second drive), I could rebuild my first drive's system partition from the mirror, and RAID 5 would reconstruct the 1st drive's data from the remaining 2 drives.

    Hope that makes sense.
  • Danman32Danman32 Member Posts: 1,243
    With software RAID, you're RAIDing partitions, not physical drives. So you could create a system/boot partition on drive 1 of, say, 2 GB, mirror it to the 2nd drive, then use the remaining partition space for a RAID 5 volume of 8 GB x (3-1) = 16 GB.
    2 GB isn't a whole lot for system/boot, so 10 GB drives aren't ideal in the real world.
    Even just as a practice lab, 2 GB would be tight for system/boot; you might be better off with a 4/6 GB split, or even 5/5 GB per drive.
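The arithmetic for that layout, spelled out (RAID 5 members must be equal-sized extents, so the smallest free space per drive sets the member size):

```python
# 2 GB mirrored system/boot partition on drives 1 and 2, then a
# software RAID 5 volume across the remaining space on all 3 drives.
drive_gb, system_gb, n_drives = 10, 2, 3
member_gb = drive_gb - system_gb        # 8 GB free per RAID 5 member
raid5_gb = member_gb * (n_drives - 1)   # parity costs one member's worth
assert raid5_gb == 16                   # 16 GB of usable RAID 5 space
```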
  • FijianTribeFijianTribe Member Posts: 62 ■■□□□□□□□□
    blargoe wrote:
    It's true that the boot volume can't be software raid 5 (or raid 0), but you can use raid 1.

    You can use RAID 1 on the boot volume, but I had issues with it when my first drive died. The machine did not want to boot from the second drive, even though the data was there.

    If I had to rely on software vs hardware RAID I would choose hardware RAID.

    As far as the test goes, I guess it's good to know software RAID, but the initial machines I set up with software RAID I am switching back to hardware.
  • Danman32Danman32 Member Posts: 1,243
    FijianTribe wrote:
    You can use RAID 1 on the boot volume, but I had issues with it when my first drive died. The machine did not want to boot from the second drive, even though the data was there.

    The problem there is probably two-fold. First, the BIOS has to be able to transfer the boot process to the MBR code of the remaining drive; if MBR code was never placed there, it won't boot. Also, the correct partition must be set active to select the system partition (the partition containing NTLDR, NTDETECT.COM and boot.ini), and finally the boot.ini there has to point to the correct ARC path of the working drive.
    Even if the correct drive is given boot control, since you are mirroring the system partition, the boot.ini is still pointing at the defunct drive, because it is a copy (mirror) of the one on the original drive.
    Thus you need to create a boot floppy to substitute for the system drive in the boot process, with a boot.ini containing the correct ARC path to the drive holding the working boot partition.
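For illustration, a recovery floppy's boot.ini for booting the surviving mirror on the second disk might look like this. The ARC path shown is only an example: the actual multi()/disk()/rdisk()/partition() numbers depend on your controller and disk layout, and rdisk(1) here assumes the mirror is the second physical disk on the first controller:

```ini
[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(1)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Server 2003, mirror on drive 2" /fastdetect
```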

    Some HD hardware configurations make the remaining drive appear as the dead drive, so that the boot process doesn't notice the original boot drive is missing; but if the drive IDs as seen by NTLDR don't shift, control is going to be sent to the bad drive.
  • FijianTribeFijianTribe Member Posts: 62 ■■□□□□□□□□
    Just another reason in my mind to stick with hardware RAID.
  • blargoeblargoe Self-Described Huguenot NC, USAMember Posts: 4,174 ■■■■■■■■■□
    Yes, software raid sucks big time.
  • amyamandaallenamyamandaallen Member Posts: 316
    But I presume the exam expects you to know about RAID 5 - and if you can't boot from it, there seems little point.
  • Danman32Danman32 Member Posts: 1,243
    I wouldn't say it can't be done. It just has limitations, because you are asking the OS to support the RAID 5, yet you can't get to the OS until the RAID is working.
    It's like locking the keys in the car: you could easily unlock the car if you could get to the keys, and you could get to the keys if you could unlock the car.
    Better analogy: The electrical on a car can be supported by the alternator without the battery, but first you have to get the engine running, which requires the electrical. NOTE: with today's cars' electronics, you actually do need the battery to regulate the alternator's output. A super capacitor would work too.
  • GIOVGIOV Member Posts: 1 ■□□□□□□□□□
    Convert the disks to dynamic and create a RAID 5 volume set across your remaining space.

    To work out how much storage you will have, the equation is:

    S x (N - 1)

    S = smallest drive size
    N = number of drives

    So in a 5-disk array of 20 GB drives, your storage would be:

    20 x (5 - 1) = 80 GB
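That formula as a quick Python function (a trivial helper, written here just to check the numbers in this thread):

```python
# Usable RAID 5 space: smallest member times (number of members - 1).
def raid5_usable_gb(sizes_gb):
    return min(sizes_gb) * (len(sizes_gb) - 1)

assert raid5_usable_gb([20] * 5) == 80      # the 5 x 20 GB example above
assert raid5_usable_gb([10, 10, 10]) == 20  # the 3 x 10 GB array in this thread
```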
    Obesa Cantavit