tomtom1 wrote: » According to a few KB articles and the HA product documentation you have to specify it manually, and you need to do so in production. That's what I was after, otherwise good post.
Essendon wrote: » Question: Your company has 2 clusters of 10 servers each with DRS and HA enabled. All these servers are HP DL380 G7's and have 2 years left before they are EOL'd. There are ongoing discussions about introducing more servers into the clusters to cater for increased growth. Some company executive (aka smartypants) decides to buy 4 new servers with AMD processors while you are away on holidays. What can you do about these servers - are you able to add them to the 2 pre-existing clusters? Discuss your options.
QHalo wrote: » You can migrate VMs between Intel and AMD hosts, but the VM must be powered off to do so (a cold migration); live vMotion across CPU vendors isn't possible, even with EVC.
tomtom1 wrote: » Cons: it can create additional management overhead and could have an impact on things like licensing.
tomtom1 wrote: » Another option is to add 2 hosts to each of the existing clusters (assuming EVC is not enabled on those clusters) and set the new hosts as dedicated failover hosts.
tomtom1 wrote: » - Is there something missing on the hardware side of things? Discuss.
Assuming the design leans more towards a physical design than a logical design, I'm missing redundancy in the pNICs and the FC switches: only 1 NIC is drawn per host, going to a single fibre channel switch.
- Discuss their HA settings. Specifically talk about Admission Control. Enable or disable? What policy setting do they need to ensure all VMs start up successfully in case either datacenter (the entire datacenter, that is) fails?
I would go with the Power Off option, because the vSphere 5 default of Leave Powered On could create something you want to avoid at all costs: a split-brain scenario. The immediate power off in a host failure event ensures that the hosts on the other side can start the VMs.
- How many datastores should they use for their heartbeating? What advanced HA setting is needed?
According to the Metro Cluster Case Study you should use a minimum of 4 datastores, 2 per site. To increase the default of 2 heartbeat datastores, you need the advanced HA setting das.heartbeatDsPerHost set to 4.
- What will happen to the VMs running on the far left host if it fails?
The remaining host in the local site will run these VMs, if you specify this with DRS rules.
- Discuss their DRS settings. Specifically, how do they ensure that the VM workload is balanced across the stretched cluster?
Create DRS "should" rules to ensure that part of the workload runs specifically on either the left or the right side of the stretched cluster.
- How do they also ensure that VMs successfully start up when one or more hosts fail? Hint: talk about DRS rules.
By using DRS "should" rules you can ensure that a host local to the site runs the workload first. Because a "should" rule is not a hard rule, HA and operations like maintenance mode will continue to work even after both hosts in the site have failed.
- Lastly, does this setup provide for workload mobility and allow your client to migrate their VM workload to the other datacenter if an impending disaster threatens to wipe out one datacenter?
I'd say so, assuming the storage is capable of the required replication.
tomtom1 wrote: » Pre-posting tomorrow's question: You currently have 1 VSS with 2 vmnics as uplinks in place for your vSphere environment. Your company recently bought Enterprise Plus licenses to leverage the PVLAN features of the DVS. Tell me how you would non-disruptively migrate the following traffic types to the DVS: management traffic, vMotion traffic, and VM traffic.
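However the migration is carried out (it has to be driven from vCenter, typically by moving each vmkernel port together with one of the two uplinks so the host never loses connectivity, then migrating the VM port groups, and finally the second uplink), it is worth verifying the result from each host afterwards. A minimal verification sketch from the ESXi shell, assuming shell access is enabled:

# Show vmkernel interfaces and which switch/portgroup each one now lives on
esxcli network ip interface list

# Confirm the host is attached to the distributed switch and which uplinks it is using
esxcli network vswitch dvs vmware list

# Check what, if anything, is still left on the original standard switch before removing it
esxcli network vswitch standard list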
tomtom1 wrote: » Definitely go with rack servers; there are a few risks and requirements here that argue against blade servers: 1) Only one blade chassis, which is in fact a single point of failure. The chances of it failing are slim, but real. 2) A lower entry cost, which cannot easily be reached with blade servers, since you need to buy a chassis plus a number of blades. Constraints are: 1) A tight budget (the amount is not specified) for the initial 1 to 1.5 years. Risks are: 1) No money for staff training on blades, which leaves them exposed when a problem they don't know how to solve occurs on the chassis. Also, future growth is uncertain at this point, and weighing all these risks, constraints and requirements, I'd say rack servers.
tomtom1 wrote: » Question 13: Your company has invested in Dell EqualLogic storage. Upon verification after your implementation, you see that all EQL iSCSI disks are correctly being claimed by the right SATP, but the PSP associated with this SATP is set to VMW_PSP_MRU, whilst Dell best practice is to use VMW_PSP_RR. Using esxcli, how would you fix this? Relevant information:
naa.6019cba11285a36e682655755d74fde8
   Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
   Has Settable Display Name: true
   Size: 307200
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.6019cba11285a36e682655755d74fde8
   Vendor: EQLOGIC
   Model: 100E-00
   Revision: 6.0
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes
   Attached Filters: VAAI_FILTER
   VAAI Status: supported
   Other UIDs: vml.02000000006019cba11285a36e682655755d74fde8313030452d30
   Is Local SAS Device: false
   Is Boot USB Device: false
   No of outstanding IOs with competing worlds: 32
   Device Display Name: EQLOGIC iSCSI Disk (naa.6019cba11285a36e682655755d74fde8)
   Storage Array Type: VMW_SATP_EQL
   Storage Array Type Device Config: SATP VMW_SATP_EQL does not support device configuration.
   Path Selection Policy: VMW_PSP_MRU
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba38:C1:T4:L0, vmhba38:C0:T4:L0
QHalo wrote: » Modify the SATP default claim rule so that vendor=EQLOGIC devices are claimed with the PSP VMW_PSP_RR.
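A minimal esxcli sketch of that change, using the device ID from the listing above (already-claimed devices keep their current PSP until they are reclaimed, so a per-device command is included as well):

# Make Round Robin the default PSP for devices claimed by the EqualLogic SATP
esxcli storage nmp satp set --satp=VMW_SATP_EQL --default-psp=VMW_PSP_RR

# Switch the already-claimed device over without waiting for a reclaim/reboot
esxcli storage nmp device set --device=naa.6019cba11285a36e682655755d74fde8 --psp=VMW_PSP_RR

# Verify that the Path Selection Policy now reports VMW_PSP_RR
esxcli storage nmp device list --device=naa.6019cba11285a36e682655755d74fde8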
Essendon wrote: » Bringing the difficulty down a notch or two. Question 14: A company has hired you as their virtualization administrator and is looking at P2V'ing some application servers. The problem they are facing is that the application is a multi-tier application with various components depending on each other. They are concerned that they wouldn't be able to control the power-on order of the various VMs that host the application. - How will you help them overcome their fears? - In addition, they are adamant that the application servers have memory dedicated to them; how will you do this? Discuss the consequences for other VMs. - How will you determine the number of hosts and the grunt they need?
kj0 wrote: » Create a resource pool that has a power-on order set. Inside the resource pool, set reserved memory levels.
tomtom1 wrote: » You mean a vApp. One thing I would most definitely stay away from is guest VM reservations, since they mess up HA slot sizes, and not in a good way. Even if the company is not currently leveraging HA, it might be in the future. A vApp has some nasty implications too, though: if you decide to use one, you should understand the impact it has on shares in times of resource contention. If you leave the shares at the default and the host starts getting close to saturation, you might run into problems with the way shares are calculated. To determine the resources necessary to complete this project, run an analysis tool (e.g. VMware Capacity Planner or perfmon) against the current physical machines and determine: peak CPU usage, average CPU usage, peak memory usage, and average memory usage.