Your Daily VMware quiz!

Comments

  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    tomtom1 wrote: »
    According to a few KB articles and the HA product documentation you have to specify it manually, and you need to do so in production. That's what I was after, otherwise good post. :)

    Never assume ey :)

    Fair enough ... Never had to do that in production (that's my excuse anyway :p)
    My own knowledge base made public: http://open902.com :p
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Question 9

    Your company has 2 clusters of 10 servers each with DRS and HA enabled. All these servers are HP DL380 G7's and have 2 years left before they are EOL'd. There are ongoing discussions to introduce more servers into the clusters to cater for increased growth. Some company executive (aka smartypants) decides to buy 4 new servers with AMD processors while you are away on holidays. What can you do about these servers - are you able to add them to the 2 pre-existing clusters? Discuss your options.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • QHalo Member Posts: 1,488
    Sure you can add them, but it's just easier to create a separate cluster for the AMD machines outside of the two Intel-based ones. There's really no reason to keep them within the same cluster. The vMotion and dvSwitch boundaries are the datacenter object, so you can still have all the functions you need for the VMs in both clusters. vMotion workloads to the AMDs as you see fit. You can vMotion between Intel and AMD, however the VM must be offline to do so.
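    For what it's worth, that offline move can be scripted; below is a minimal PowerCLI sketch of it (the VM and cluster names are made up for the example).

      # Gracefully shut the VM down (needs VMware Tools), wait for power-off,
      # then move it; a powered-off move across CPU vendors is a cold migration.
      $vm = Get-VM -Name 'app01'
      Shutdown-VMGuest -VM $vm -Confirm:$false
      while ((Get-VM -Name 'app01').PowerState -ne 'PoweredOff') { Start-Sleep -Seconds 5 }
      Move-VM -VM $vm -Destination (Get-Cluster -Name 'Cluster-AMD')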
  • tomtom1 Member Posts: 375
    Essendon wrote: »
    Question 9
    Your company has 2 clusters of 10 servers each with DRS and HA enabled. [...] Discuss your options.

    Hooking in on this one: assuming (and according to my research) the DL380's have Intel CPU's, vMotion (and therefore DRS) will not be happy, since VM's cannot be live-migrated between hosts with different CPU vendors. You can't just add them to the cluster, but you have a few options, each with pros and cons.

    1) Create a separate cluster for the AMD based hosts and enable HA / DRS on this cluster according to company policy.
    Pros: Maximum compatibility for hosts and VM's placed in this cluster.
    Cons: (Can) create additional management overhead and could have an impact on stuff like licensing.

    2) Another option is adding 2 hosts to each of the existing clusters (assuming EVC is not enabled on them) and setting the new hosts as dedicated failover hosts. This ensures that VM's will never be vMotioned to these hosts, but they will be able to grab some of the workload if an HA event occurs.
    Pros: Better use of the existing cluster infrastructure, and therefore savings on additional (management) overhead in the cluster.
    Cons: The hosts designated as dedicated failover hosts will never be used until an HA event occurs.

    Just my 2 cents, but I think I'd go with option 1, which would maximize the usability of the new hosts.
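    A rough PowerCLI sketch of what option 1 could look like; the datacenter, cluster and host names are placeholders, not anything from this thread.

      # Create a separate HA/DRS cluster for the AMD hosts and add the four new boxes to it
      $dc  = Get-Datacenter -Name 'Production'
      $amd = New-Cluster -Name 'Cluster-AMD' -Location $dc -HAEnabled -DrsEnabled -DrsAutomationLevel FullyAutomated
      'amd-esx01','amd-esx02','amd-esx03','amd-esx04' | ForEach-Object {
          Add-VMHost -Name $_ -Location $amd -User 'root' -Password 'changeme' -Force
      }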
  • tomtom1 Member Posts: 375
    QHalo wrote: »
    You can vMotion between Intel and AMD, however the VM must be offline to do so.

    At which time it would be a cold migration, which is technically not the same as a vMotion. :)
  • Konflikt Member Posts: 43 ■■■□□□□□□□
    I would leave the old Intel-based servers in the original cluster, and I would make a new one for the new AMD Opteron-based servers. The main reason is that vMotion (and therefore DRS) won't work between Intel and AMD CPUs. Maybe in the future in eEVC mode (extended EVC for inter-vendor vMotion - just kidding :) ).
    So it wouldn't be a good idea to mix them. And even if both sets of servers were based on the same CPU vendor (Intel or AMD), I would still go with 2 clusters. The compute capacity difference per host between the almost-EOL servers and the just-purchased ones is probably huge, so mixing them would not be the best for HA.
    Drawbacks: we need more spare resources (depending on HA policy) for these two clusters, if HA is in scope.
    for 2013: [x] 3x VCA, [x] VCAP5-DCA, [-] VCAP-DCD - failed. PASSED in 2014
    for 2014: [x] BACP, [x] SCP, [x] 70-409, [x] VCAP-DCD
    for 2015: [x] VCP6-DCV,
    for 2016: [x] upgrade VCAPs to VCIX6-DCV, [x] CCNA [-]
    2019: NEW job, back to again to the datacenter area:)
    My Virtual blog: vthing.wordpress.com
  • QHalo Member Posts: 1,488
    tomtom1 wrote: »
    Cons: (Can) create additional management overhead and could have an impact on stuff like licensing.

    Outside of CPU socket count, there are no other licensing concerns that I'm aware of that you wouldn't encounter if they were Intel-based CPUs.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Yeah, I haven't come across such a licensing constraint either. Have you, Tom?

    Another thing to keep in mind is that most people have EVC-enabled clusters, and such clusters will not allow a different vendor's hosts to be added. So it is best to have a separate cluster for the AMD hosts. Oh, and clip Ms. Smartypants' wings!
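    For a quick check, a one-line PowerCLI sketch that shows whether the existing clusters have an EVC baseline set (the output is whatever your environment has):

      # An Intel baseline (e.g. intel-westmere) means AMD hosts can never join that cluster
      Get-Cluster | Select-Object Name, EVCMode, HAEnabled, DrsEnabled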
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • tomtom1 Member Posts: 375
    tomtom1 wrote: »
    Another option is adding 2 hosts to each of the existing clusters (assuming EVC is not enabled on them) and setting the new hosts as dedicated failover hosts.

    I got that one right here. I came across an application one time that only supported Intel processors, so that was a constraint for a scenario like this - which was a really good one, by the way. Need to think of a good one for tomorrow.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Sorry, my bad, didn't quite read that well enough! Thanks for shedding light on your experience with the application and the strange licensing constraint. The IT world never ceases to surprise, does it?!
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Question 10

    A company has hired you as their virtualization specialist to get their stretched cluster going. Both the network and storage are stretched, and there is only 1 vCenter, as is usually the case with a stretched cluster (as opposed to having 2 in an SRM scenario).




    - Is there something missing on the hardware side of things? Discuss.

    - Discuss their HA settings. Specifically talk about
    • Admission Control. Enable or disable? What policy setting do they need to ensure all VM's start up successfully in case either datacenter (entire datacenter, that is) fails?
    • How many datastores should they use for their heartbeating? What advanced HA setting is needed?
    • What will happen to the VM's running on the far left host if it fails?
    - Discuss their DRS settings. Specifically talk about
    • How do they ensure that the VM workload is balanced across the stretched cluster?
    • How do they also ensure that VM's successfully start up when a host(s) fails? Hint: talk about DRS rules
    - Lastly, does this setup provide for workload mobility and allow your client to migrate their VM workload to the other datacenter if an impending disaster threatens to wipe out one datacenter?
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • tomtom1 Member Posts: 375
    - Is there something missing on the hardware side of things? Discuss.
    Assuming the design is leaning more towards a physical design than a logical design, I'm missing some redundancy in the pNICs and the FC switches. Only 1 NIC is drawn per host, to a single instance of a Fibre Channel switch.

    - Discuss their HA settings. Specifically talk about
    • Admission Control. Enable or disable? What policy setting do they need to ensure all VM's start up successfully in case either datacenter (entire datacenter, that is) fails?
    • I would go with the option to Power Off, because the vSphere 5 default of Leave Powered On could create something you want to avoid at all costs: a split-brain scenario. The immediate power-off in a host failure event would ensure that the hosts on the other side can start the VM's.
    • How many datastores should they use for their heartbeating? What advanced HA setting is needed?
    • According to the Metro Cluster Case Study you should use a minimum of 4 datastores, 2 per site. To increase the default of 2 datastores, you need to set the HA advanced option das.heartbeatDsPerHost to 4 (see the PowerCLI sketch at the end of this post).
    • What will happen to the VM's running on the far left host if it fails?
    The remaining host in the local site will restart and run these VM's, if you specify this with DRS rules.


    - Discuss their DRS settings. Specifically talk about
    • How do they ensure that the VM workload is balanced across the stretched cluster?
    • Create DRS should rules to ensure that a part of the workload is specifically running on either the left or the right part of the stretched cluster.
    • How do they also ensure that VM's successfully start up when a host(s) fails? Hint: talk about DRS rules
    • By using DRS should rules, you can ensure that the host local to the site runs the workload first, unless that fails too. Because a should rule isn't a hard rule, HA and things like maintenance mode will continue to work even after both hosts in the site have failed.
    - Lastly, does this setup provide for workload mobility and allow your client to migrate their VM workload to the other datacenter if an impending disaster threatens to wipe out one datacenter?
    I'd say so, assuming the storage is capable of the correct replication.
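    The sketch mentioned above: a minimal PowerCLI example of the heartbeat-datastore setting plus the site-affinity should rules. Cluster, host and VM names are placeholders, and the DRS group/rule cmdlets assume a reasonably recent PowerCLI release.

      $cluster = Get-Cluster -Name 'Stretched-Cluster'

      # Raise HA heartbeat datastores from the default of 2 to 4 (2 per site)
      New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name 'das.heartbeatDsPerHost' -Value 4 -Confirm:$false

      # Site A host/VM groups plus a "should run on" rule; repeat the same for site B
      $siteAHosts = New-DrsClusterGroup -Name 'SiteA-Hosts' -Cluster $cluster -VMHost (Get-VMHost -Name 'esx-a1','esx-a2')
      $siteAVMs   = New-DrsClusterGroup -Name 'SiteA-VMs'   -Cluster $cluster -VM (Get-VM -Name 'vm-a*')
      New-DrsVMHostRule -Name 'SiteA-VMs-on-SiteA-Hosts' -Cluster $cluster -VMGroup $siteAVMs -VMHostGroup $siteAHosts -Type ShouldRunOn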
  • tomtom1 Member Posts: 375
    Anybody else with other ideas?
  • tomtom1 Member Posts: 375
    Pre-posting tomorrow's question:

    You currently have 1 VSS with 2 vmnics as uplinks in place for your vSphere environment. Your company recently bought Enterprise Plus licenses to leverage the PVLAN features of the DVS. Tell me how you would non-disruptively migrate the following network types to the DVS.
    • Management traffic
    • vMotion traffic
    • VM traffic
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    tomtom1 wrote: »
    - Is there something missing on the hardware side of things? [...] I'd say so, assuming the storage is capable of the correct replication.

    Great answer there, Tom. I'll add a few bits here and there.

    VMware HA:

    Admission Control: I'd set it to Enable. You always want to ensure that your cluster is able to restart all your VM's on another host if an HA event occurs. Setting Admission Control to Disable would allow you to power on more VM's than can be restarted in case of host failure. The only use cases I'd see for this are a test lab situation, or when you don't care about high availability and are trying to make maximum use of your hardware (again, a test lab really!).

    In addition, I'd set the Admission Control Policy to %age reserved and reserve 50% of the resources, to be used only in the event of a complete site failure or during a planned migration ahead of an impending catastrophic event. I've seen people set the %age reserved to 30% for both CPU and memory and then wonder why all their VM's didn't start up when one of their datacenters (say Building B) had fallen over completely. Sure, you may think that reserving 50% is overkill, but do you want all your VM's protected or not? That's one of the things about a stretched cluster situation: you are probably running production workloads in either datacenter and you'd want your VM's to be highly available.
    Isolation response: I'd recommend setting the isolation response according to your requirements and constraints. Isolation response is just that: how should your cluster respond when a host is isolated. In a well-designed network environment it's very unlikely that a host will be isolated; there'll be some redundant path that can be used by the host. I'd leave the isolation response at "Leave Powered On", especially in an environment that uses FC as its storage protocol. In an environment that uses iSCSI and/or NFS, the recommended option is "Power Off". With a network-based storage protocol, it's likely that a disruption that causes host isolation will also prevent the host from getting to its datastores. Hence the need to quickly power off your VM's and have HA spin 'em up on another host.

    Another thing to keep in mind is that when your VM's are powered up by HA (based on your choice of isolation response), they can be restarted in the other datacenter. DRS rules will come into play here and will move the VM's back over to their home datacenter. There'll be some latency experienced while the VM's run in the distant datacenter.

    Split-brain scenario: This may exist for a very short time, only while the two datacenters have their networking re-established. HA will recognise this immediately, and VM's with no access to their files will be powered off.

    Workload mobility:

    Yes, this is the whole purpose of a stretched cluster. You should be able to move your VM's around if needed. However, this kind of setup should be set up with care and requires regular monitoring to ensure VM locality, otherwise you may experience latency and discover your VM's don't restart successfully in case of host or storage failure. Host and datastore affinities should be set up carefully and checked regularly.
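    A hedged PowerCLI/vSphere API sketch of those HA settings (the cluster name is a placeholder; the percentage-based admission control policy isn't exposed by Set-Cluster, so that part drops to the API):

      $cluster = Get-Cluster -Name 'Stretched-Cluster'

      # Enable admission control and keep the isolation response at "Leave powered on" (DoNothing)
      Set-Cluster -Cluster $cluster -HAEnabled:$true -HAAdmissionControlEnabled:$true -HAIsolationResponse DoNothing -Confirm:$false

      # Reserve 50% CPU and 50% memory for failover via the percentage-based policy
      $spec = New-Object VMware.Vim.ClusterConfigSpecEx
      $spec.DasConfig = New-Object VMware.Vim.ClusterDasConfigInfo
      $policy = New-Object VMware.Vim.ClusterFailoverResourcesAdmissionControlPolicy
      $policy.CpuFailoverResourcesPercent    = 50
      $policy.MemoryFailoverResourcesPercent = 50
      $spec.DasConfig.AdmissionControlPolicy = $policy
      $cluster.ExtensionData.ReconfigureComputeResource_Task($spec, $true) | Out-Null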
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    tomtom1 wrote: »
    Pre posting tomorrow's question:

    You currently have 1 VSS with 2 vmnics as uplinks in place for your vSphere environment. Your company recently bought Enterprise Plus licenses to leverage the PVLAN features of the DVS. Tell me how you would non-disruptively migrate the following network types to the DVS.
    • Management traffic
    • vMotion traffic
    • VM traffic

    1. Remove one NIC from the VSS (if port channels are used, make sure you change the failover policy away from IP Hash)

    2. Create a DVS, add hosts with the now available NIC to the DVS

    3. Create portgroups with the relevant VLANs matching Management, vMotion and VM Traffic

    4. If multiple vmkernel ports are used, for vMotion for example, make sure you create two portgroups, excluding an uplink per portgroup:

    - Portgroup 1
    - Active Uplink dvuplink1
    - Unused Uplink dvuplink2

    - Portgroup 2
    - Active Uplink dvuplink2
    - Unused Uplink dvuplink1

    5. Migrate the vmkernel interface to dvs
    - Either do this when adding the host
    - Add host without migrating and migrate later (Configuration > Networking > vDS > Manage Virtual Adapters > Add > Migrate)

    6. Migrate Virtual Machine Networking
    - Change NIC assignments manually per VM or
    - Home > Networking > vDS > Commands > Migrate Virtual Networking

    7. Remove VSS

    8. Add now unused vmnic to vDS (Configuration > Networking > vDS > Manage Physical Adapters > Add)

    Make sure the correct configuration is applied to the vDS. This includes, but is not limited to, port channels and failover policy, MTU, and VLANs. If iSCSI is used you will need to remove the port binding, which may or may not cause an interruption to the storage network, so I would suggest evacuating a host and removing / re-adding the iSCSI layer, making sure you follow the same uplink rules as the vMotion interfaces. (A rough PowerCLI sketch of steps 2 to 6 follows below.)
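    The promised sketch of steps 2 to 6, shown for a single host; the switch, portgroup, VLAN IDs and host names are placeholders, not values from this thread.

      $vmhost = Get-VMHost -Name 'esx01'
      $vds    = New-VDSwitch -Name 'dvs01' -Location (Get-Datacenter -Name 'Production')
      Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

      # Portgroups for the three traffic types (VLAN IDs are examples only)
      $pgMgmt = New-VDPortgroup -VDSwitch $vds -Name 'dv-Management' -VlanId 10
      $pgVmo  = New-VDPortgroup -VDSwitch $vds -Name 'dv-vMotion'    -VlanId 20
      $pgVM   = New-VDPortgroup -VDSwitch $vds -Name 'dv-VM'         -VlanId 30

      # Move one free uplink plus the management (vmk0) and vMotion (vmk1) vmkernel ports
      # in a single step, so the host never loses connectivity; the other vmnic stays on the VSS for now
      $vmnic1 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name 'vmnic1'
      $vmk0   = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name 'vmk0'
      $vmk1   = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name 'vmk1'
      Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic1 -VMHostVirtualNic $vmk0, $vmk1 -VirtualNicPortgroup $pgMgmt, $pgVmo -Confirm:$false

      # Re-point the VM NICs, after which the last vmnic can be moved over and the VSS removed
      Get-VM -Location $vmhost | Get-NetworkAdapter | Set-NetworkAdapter -Portgroup $pgVM -Confirm:$false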
    My own knowledge base made public: http://open902.com :p
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Question 12

    You are the virtualization gun for an SMB that currently has its gear sitting in Datacenter Mickey. Due to increased growth they are looking at buying more hardware and sticking it in a new datacenter, Datacenter Minnie. Mickey is owned by the SMB so space wasn't an issue; however, Minnie is a 3rd party, and rack space is at a premium, as are cooling and power.

    Your company's position on budget/gear:

    - Tight budget for the first 12-18 months.
    - Only sufficient to purchase one blade chassis.
    - No money for training staff in blade management.

    Your company's requirements are:

    - Use minimum rack space.
    - Be able to scale up if needed, because they anticipate a potential client will have these massive SQL VM's.
    - No single point of failure.
    - Lower entry-cost point with regards to their ESXi hosts

    Future considerations:

    - The company anticipates winning a large VDI project for another client, though the tender process and the rest of negotiations aren't expected to finish for about 20 months. The chances of winning the project are not that high, contrary to what some douches in the company believe.

    Suggest whether the company should go with physical rack-mount servers or blade servers while taking into consideration your company's current monetary position, its requirements and future plans.

    Answer:

    A tight budget and a lower entry cost point are usually enough to weigh someone in favour of rackmount servers. Couple that with this particularly tight-arse company not coughing up enough coin for 2 chassis for redundancy's sake, and rackmount servers are the only option for them.

    Let's look at this in more detail. Blade chassis systems are only cost effective if you fully (or mostly) populate them with blades. The initial cost of the system is usually prohibitive enough to deter many customers, but there are several advantages:

    - far less cabling
    - reduced rack usage (higher density)
    - easy to replace a failed blade, just chuck a new one in, assign profile and away you go
    - great for a scale-out model and in VDI deployments

    Even if the company had sufficient budget, the client with the massive SQL VM's on their books may have been enough to sway them towards rackmount servers anyway. Nowadays, blade servers easily come with 256-512GB RAM, but if you need more than that for your monster VM's, then rackmounts will be the way to go.

    As always, it's important to tailor your solution in line with the needs, the constraints and the future requirements of your customer. You don't want to be in a situation where you run out of pSwitch ports and/or storage. While we are at it, a company my team was resolving problems for had this massive virtualization project. They thought (or at least in their minds, they did) they had a grip on everything - NO! When they finished their P2V project, things were running satisfactorily, but then they had this new initiative which required these massive VM's (128 GB ones with 16 vCPU's), and they absolutely killed the storage and their hosts. The VM's were spilling over their hosts' NUMA boundaries, the storage was on its knees and there were regular datastore drops. Wasn't a pretty situation. Plan ahead, plan ahead, plan ahead! If you can't, call me!

    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • QHalo Member Posts: 1,488
    Buy a couple Nutanix blocks! /project over WHAT ELSE YA GOT ESSENDON!?!?

    Mad props for using 'douches' in the description as well. I'd +1 if I could but I need to spread some love elsewheres
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    LOL! There are too many people of that particular category nowadays mate, makes me mad. I particularly love the title "Solutions Architect"; some of these architects can't tell a server apart from a sewing machine, mate. I had one call me the other day:

    she said - I'd like to pick up a virtual server on the way home.
    Me - ummm right, why and what for?
    She - apparently, Facebook and Instagram run better on your phone I hear if you have a virtual server at home.
    Me - what??? who told you that, are you serious?
    She - O yes, we were discussing buying a bunch of them during our morning smoko.
    Me - complete silence.

    As for Nutanix, I'm all for it too!! That thing kicks arse, read the Nutanix bible by Steve Poitras, and man was I impressed!
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • tomtom1 Member Posts: 375
    Definitely go with rack servers; there are a few risks and requirements here that prohibit the use of blade servers:

    1) Only one blade chassis, which is in fact a single point of failure. Chances of this failing are slim, but real.
    2) Lower entry cost point, which with blade servers cannot be easily reached, since you need to buy a chassis and some blades.

    Constraints are:
    1) Tight budget (the amount is not specified) for the initial 1 to 1.5 years.

    Risks are:
    1) No money for staff training on blades, thus leaving them at risk when a problem occurs on the chassis that they don't know how to solve.

    Also, the future growth is uncertain at this point, and mixed with all these risks, constraints and requirements, I'd say rack servers. Love to hear somebody else's view on this.
  • tomtom1 Member Posts: 375
    Question 13:

    Your company has invested in Dell EqualLogic storage. Upon verification after your implementation, you see that all EQL iSCSI disks are correctly being claimed by the right SATP, but the PSP associated with this SATP is set to VMW_PSP_MRU, whilst the Dell best practice is to use VMW_PSP_RR. Using esxcli, how would you fix this?


    Relevant information:
    naa.6019cba11285a36e682655755d74fde8
       Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
       Has Settable Display Name: true
       Size: 307200
       Device Type: Direct-Access
       Multipath Plugin: NMP
       Devfs Path: /vmfs/devices/disks/naa.6019cba11285a36e682655755d74fde8
       Vendor: EQLOGIC
       Model: 100E-00
       Revision: 6.0
       SCSI Level: 5
       Is Pseudo: false
       Status: on
       Is RDM Capable: true
       Is Local: false
       Is Removable: false
       Is SSD: false
       Is Offline: false
       Is Perennially Reserved: false
       Queue Full Sample Size: 0
       Queue Full Threshold: 0
       Thin Provisioning Status: yes
       Attached Filters: VAAI_FILTER
       VAAI Status: supported
       Other UIDs: vml.02000000006019cba11285a36e682655755d74fde8313030452d30
       Is Local SAS Device: false
       Is Boot USB Device: false
       No of outstanding IOs with competing worlds: 32
    
    
       Device Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
       Storage Array Type: VMW_SATP_EQL
       Storage Array Type Device Config: SATP VMW_SATP_EQL does not support device configuration.
       Path Selection Policy: VMW_PSP_MRU
       Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
       Path Selection Policy Device Custom Config:
       Working Paths: vmhba38:C1:T4:L0, vmhba38:C0:T4:L0
    
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    tomtom1 wrote: »
    Definitely go with rack servers; there are a few risks and requirements here that prohibit the use of blade servers: [...] I'd say rack servers.

    Couldn't agree more, mate. I added a few more lines in the answer area of the question, and included a client situation I dealt with some time back.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • QHalo Member Posts: 1,488
    tomtom1 wrote: »
    Question 13:

    Your company has invested in Dell EqualLogic storage. [...] Using esxcli, how would you fix this?

    Modify the SATP default claim rule to claim vendor=EQLOGIC with PSP VMW_PSP_RR.
  • tomtom1 Member Posts: 375
    QHalo wrote: »
    Modify the SATP default claim rule to claim vendor=EQLOGIC with PSP VMW_PSP_RR.

    Exact syntaxes please :)
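    For reference, one way those syntaxes could look, sketched via PowerCLI's Get-EsxCli (the -V2 interface, so it assumes a fairly recent PowerCLI); the equivalent raw esxcli commands are in the comments, the host name is a placeholder, and the naa ID is the one from the output above.

      $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esx01') -V2

      # esxcli storage nmp satp set --satp=VMW_SATP_EQL --default-psp=VMW_PSP_RR
      # changes the default PSP for anything VMW_SATP_EQL claims from now on
      $satpArgs = $esxcli.storage.nmp.satp.set.CreateArgs()
      $satpArgs.satp       = 'VMW_SATP_EQL'
      $satpArgs.defaultpsp = 'VMW_PSP_RR'
      $esxcli.storage.nmp.satp.set.Invoke($satpArgs)

      # esxcli storage nmp device set --device=naa.6019cba11285a36e682655755d74fde8 --psp=VMW_PSP_RR
      # devices that are already claimed keep VMW_PSP_MRU until set explicitly (or reclaimed/rebooted)
      $devArgs = $esxcli.storage.nmp.device.set.CreateArgs()
      $devArgs.device = 'naa.6019cba11285a36e682655755d74fde8'
      $devArgs.psp    = 'VMW_PSP_RR'
      $esxcli.storage.nmp.device.set.Invoke($devArgs)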
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Bringing the difficulty down a notch or two

    Question 14

    A company has hired you as their virtualization administrator and is looking at P2V'ing some application servers. The problem they are facing is that the application is a multi-tier application with various components depending on each other. They are concerned that they wouldn't be able to control the power-on order of the various VM's that host the application.

    - How will you help them overcome their fears?
    - In addition, they are adamant the application servers have memory dedicated to them. How will you do this? Discuss the consequences for other VM's.
    - How will you determine the number of hosts and the grunt they need?
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • kj0 Member Posts: 767
    Essendon wrote: »
    Question 14
    A company has hired you as their virtualization administrator and is looking at P2V'ing some application servers. [...] How will you determine the number of hosts and the grunt they need?
    Create a Resource pool that has a "power On" order set.

    Inside the Resource Pool, set reserved Memory levels.
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Elaborate please, kj0, when you have a moment ;)
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • tomtom1 Member Posts: 375
    kj0 wrote: »
    Create a Resource pool that has a "power On" order set.

    Inside the Resource Pool, set reserved Memory levels.

    You mean a vApp. :) One thing I would most definitely stay away from is guest VM reservations, since it messes up HA slot sizes and not in a good way. Even if the company is currently not leveraging HA, it might in the future.

    A vApp has some nasty implications though: if you decide to use one, you should understand the impact it has on shares in times of resource contention. If you leave the shares at the default and you start coming close to a saturated host, you might run into problems with the way shares are calculated.

    To determine the resources necessary to complete this project, run some analysis tools (e.g. Capacity Planner, perfmon) on the current physical machines and determine:
    • Peak CPU usage
    • Average CPU usage
    • Peak memory usage
    • Average memory usage
  • kj0 Member Posts: 767
    tomtom1 wrote: »
    You mean a vApp. :) [...]
    HAHA... Yeah, vApp is what I meant. Head's all over the shop at the moment with all this study: vMotion and DRS at the moment.

    When I get a second I'll do what I was originally going to do: put up some screenshots of the answer with vApps.


    Inside your vApps you can set the boot priority, i.e. the order in which your VMs will start up. 120 seconds between each is generally the ballpark.

    You can then set reservations on host memory for the VMs inside the vApp, so that when you start a VM it is guaranteed that memory and can hold on to it.


    I think that's right.
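    A small PowerCLI sketch of that approach; the vApp name, VM names and reservation size are placeholders, not values from this thread.

      # Create the vApp and move the application VMs into it
      $vapp = New-VApp -Name 'App-Stack' -Location (Get-Cluster -Name 'Prod-Cluster')
      Get-VM -Name 'db01','app01','web01' | Move-VM -Destination $vapp

      # Start order and the ~120 second delays between tiers are set on the vApp's Start Order page
      # (in the API they live in VAppEntityConfigInfo.StartOrder / StartDelay)

      # Guarantee memory to the application VMs (keeping in mind the HA slot-size impact mentioned above)
      Get-VM -Name 'db01','app01','web01' | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB 8192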
    2017 Goals: VCP6-DCV | VCIX
    Blog: https://readysetvirtual.wordpress.com