ESXi does not let me use 2.7 terabytes available... Help

maumercado Member Posts: 163
Hello everyone,

I'm having a small problem with a server the company got to virtualize most of our environment...

I installed ESXi 4 on it, and it has RAID 10 configured with six disks of 1 TB each, so that makes 2.73 TB usable. The problem is that when I try to set up the whole 2.73 TB as a datastore for VMs, it doesn't let me; it only lets me use 750 GB...

Why is that? How can I use the whole 2.73 TB?

Thank you.

Comments

  • JDMurray Admin Posts: 13,101
    Is that 750GB per datastore? How many datastores were created?
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
  • maumercado Member Posts: 163
    I have only one virtual disk available because of the RAID 10 array.

    One datastore got created with 750 GB of space, and it doesn't let me create another with the remaining 2 TB.

    So dynamik, given the document you just posted, I should be able to use the 2 TB by creating VDs on the PERC 6 RAID controller through the PowerEdge R710 BIOS utility?
  • maumercado Member Posts: 163
    Or should I use ESX and create the VMFS disks manually?
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    The maximum VMFS datastore size is 2 TB, but I found the same thing: if you have a LUN larger than 2 TB, it just creates a 750 GB volume, making the rest unusable ...

    The only way to avoid that is to create, for example, two LUNs of about 1.35 TB each ..
    My own knowledge base made public: http://open902.com :p
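That 750 GB figure isn't random: storage stacks of this era address LUNs with 32-bit SCSI block numbers, so capacity past 2 TiB wraps around. A rough sketch of the arithmetic (assuming 512-byte sectors, which is an assumption about this particular controller):

```python
# Why a 3 TB RAID 10 volume shows up as roughly "750 GB" to ESXi 4:
# 32-bit logical block addresses cap out at 2^32 blocks, and capacity
# beyond that wraps around. Assumes 512-byte sectors.

SECTOR_BYTES = 512
LBA32_LIMIT = 2**32 * SECTOR_BYTES       # 2 TiB addressable with 32-bit LBAs

raw = 6 * 10**12                         # six 1 TB drives
usable = raw // 2                        # RAID 10 mirrors half -> 3 TB
seen = usable % LBA32_LIMIT              # capacity wraps past the 2 TiB mark

print(usable / 2**40)                    # ~2.73 TiB usable on the array
print(seen / 2**30)                      # ~746 GiB visible -- the "750 GB"
```

So 3 TB minus the 2 TiB addressing limit leaves roughly 746 GiB, which matches what the datastore wizard offers.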
  • maumercado Member Posts: 163
    ... and the only way to create them is by using ESX, right? Because there's no way with ESXi, am I correct?
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    maumercado wrote: »
    ... and the only way to create them is by using ESX, right? Because there's no way with ESXi, am I correct?
    No, you would need to create separate LUNs using the storage controller; then you could create a VMFS datastore for each.
  • maumercado Member Posts: 163
    astorrs wrote: »
    No, you would need to create separate LUNs using the storage controller; then you could create a VMFS datastore for each.

    How would I do that? It's my first time using a PERC 6/i RAID controller...
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    When the machine's booting, you should see an option for entering the configuration utility.
  • maumercado Member Posts: 163
    dynamik wrote: »
    When the machine's booting, you should see an option for entering the configuration utility.

    Yup, I press Ctrl+R and get to the configuration utility for the RAID controller... There I can only make one virtual disk, since I'm using an array of six disks in RAID 10, so I guess I should make one virtual disk as a RAID 10 array using four disks and another virtual disk as RAID 1 using the other two disks...
  • dynamik Banned Posts: 12,312 ■■■■■■■■■□
    It doesn't ask you to specify the size when you create one? The RAID configuration obviously depends on your needs, but that seems like a goofy setup if you're doing it "just because".
  • maumercado Member Posts: 163
    dynamik wrote: »
    It doesn't ask you to specify the size when you create one? The RAID configuration obviously depends on your needs, but that seems like a goofy setup if you're doing it "just because".

    I don't see anything in the config menu to set the size of the virtual disk... I'll check with Dell...
  • maumercado Member Posts: 163
    Before I checked with Dell about modifying the VD size manually, I saw this post in the VMware community forum --> VMware Communities: 2TB limit on Dell PERC6? ... so yes, it is a GOOFY setup ...

    I guess I'll have to use another kind of array that gives me good read speed; I really want to stick with RAID 10!

    Are there any other options?
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    maumercado wrote: »
    Before I checked with Dell about modifying the VD size manually, I saw this post in the VMware community forum --> VMware Communities: 2TB limit on Dell PERC6? ... so yes, it is a GOOFY setup ...

    I guess I'll have to use another kind of array that gives me good read speed; I really want to stick with RAID 10!

    Are there any other options?
    Good read speeds? RAID-5 will be great for that. Can we assume you meant to say write speeds...?
  • maumercado Member Posts: 163
    Yup, I meant write speeds... I found an article about the performance of RAID arrays with the PERC 6/i, so based on that I'm using RAID 6 for better redundancy as well as speed...

    PERC 6 Performance Analysis Report - The Dell TechCenter
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    maumercado wrote: »
    Yup, I meant write speeds... I found an article about the performance of RAID arrays with the PERC 6/i, so based on that I'm using RAID 6 for better redundancy as well as speed...

    PERC 6 Performance Analysis Report - The Dell TechCenter

    RAID-6 has the worst performance of any of the RAID types listed in that report (this is to be expected). Higher numbers are better in the test results.

    RAID-6 has a write-penalty of 6; that means that for every I/O block that needs to be written a total of 6 I/Os will occur (3 reads and 3 writes - remember we're reading/writing the double parity information too).

    Or, to look at it another way, let's say we have 6 x 7200 rpm SATA drives and each drive is capable of 80 I/Os per second (aka IOPS).

    If we use RAID-0 (no fault tolerance) we get 480 IOPS for reads / 480 IOPS for writes

    If we use RAID-10 we get 480 IOPS for reads / 240 IOPS for writes

    If we use RAID-5 we get 480 IOPS for reads / 120 IOPS for writes

    If we use RAID-6 we get 480 IOPS for reads / 80 IOPS for writes

    Therefore our six drives in a RAID-6 array have the combined read performance of all six drives, but the write performance of only a single drive.

    Does that make sense?
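The four figures above fall straight out of the standard write-penalty numbers; a minimal sketch reproducing them (penalty values are the usual textbook ones, not measurements from a PERC):

```python
# Approximate array IOPS from per-drive IOPS and the RAID write penalty:
# every logical write costs `penalty` physical I/Os, so write throughput
# is the raw pool divided by the penalty. Reads see the whole pool.

WRITE_PENALTY = {"RAID-0": 1, "RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def array_iops(drives: int, drive_iops: int, level: str) -> tuple[int, int]:
    """Return (read IOPS, write IOPS) for an array of identical drives."""
    total = drives * drive_iops
    return total, total // WRITE_PENALTY[level]

for level in WRITE_PENALTY:
    reads, writes = array_iops(6, 80, level)
    print(f"{level}: {reads} read / {writes} write IOPS")
```

Running it with six 80-IOPS drives prints exactly the 480/480, 480/240, 480/120, and 480/80 pairs quoted in the post.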
  • JDMurray Admin Posts: 13,101
    astorrs wrote: »
    Therefore our six drives in a RAID-6 array have the combined read performance of all six drives, but the write performance of only a single drive.

    Does that make sense?
    For a system that does a lot of reading, very little writing, and requires very high fault tolerance, it makes sense. But does RAID-6 really buy you much more fault tolerance than RAID-5? What are the numbers for that?
  • astorrs Member Posts: 3,139 ■■■■■■□□□□
    JDMurray wrote: »
    For a system that does a lot of reading, very little writing, and requires very high fault tolerance, it makes sense. But does RAID-6 really buy you much more fault tolerance than RAID-5? What are the numbers for that?
    The rebuild times are what has become the problem for RAID-5. As drives continue to get larger, the time it takes to restore data and recalculate parity after the failure of one drive (while the other drives are still actively serving data) has grown to the point where the window of risk for a second drive failure might be considered too great.
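The rebuild-window argument can be put in rough numbers; a back-of-the-envelope sketch assuming an illustrative 30 MB/s rebuild rate (a guess for a busy array, not a PERC spec):

```python
# Rebuild time scales linearly with drive capacity, so the window during
# which a second failure would kill a RAID-5 array grows with every drive
# generation. The 30 MB/s rebuild rate is an illustrative assumption.

def rebuild_hours(capacity_gb: float, rate_mb_s: float = 30.0) -> float:
    """Hours to rewrite one failed drive at the given sustained rate."""
    return capacity_gb * 1000 / rate_mb_s / 3600

for size_gb in (250, 1000, 2000):
    print(f"{size_gb} GB drive: ~{rebuild_hours(size_gb):.1f} h exposed")
```

Going from 250 GB to 1 TB drives quadruples the exposure window (about 2.3 h to about 9.3 h here), which is the case for RAID-6's second parity drive.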
  • maumercado Member Posts: 163
    Hello all... OK, so there's some writing involved on the server... not that much, but still... so I went with RAID 5 thanks to astorrs' reply... but I'm using two arrays of three disks each instead of one big array of six...

    Thank you all... seriously... this is one part I like about IT... always learning new stuff.
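For what it's worth, the final layout works out neatly on paper; a sketch of the usable capacity (assuming 1 TB drives and ignoring formatting overhead):

```python
# Final layout: two RAID-5 arrays of three 1 TB drives each. Each virtual
# disk holds 2 TB usable (one drive's worth of parity per array), which
# stays under the 2 TB VMFS volume limit, so no capacity is stranded.

TB = 10**12

def raid5_usable(drives: int, drive_size: int) -> int:
    """Usable bytes in a single-parity RAID-5 array."""
    return (drives - 1) * drive_size

two_r5 = 2 * raid5_usable(3, TB)     # two datastores, 4 TB total
one_r10 = 6 * TB // 2                # the original plan: 3 TB, wraps at 2 TB

print(two_r5 / TB, one_r10 / TB)
```

So the two-array RAID-5 layout actually yields more usable space (4 TB vs. 3 TB) than the six-drive RAID 10, at the cost of the higher RAID-5 write penalty discussed above.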