
Your Daily VMware quiz!


Comments

  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Great replies there @smackie1973 - you could also run a backup of the VM so the log files are automatically truncated. But getting vCenter back is usually the priority, so I'd do your "next" option first and then do everything else. I did this at 2am the other day; we need decent monitoring in place that would've alerted us to the low disk space... Oh well!

    I'd just disable DRS for your vCenter VM so you always know which host the VM is on. If that host is lost, HA will still restart the vCenter VM on another host because HA is a host-level function.
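
    A minimal pyVmomi sketch of that per-VM DRS override - the vCenter address, credentials and the cluster/VM names are assumptions for illustration, not details from the thread:

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim
        import ssl

        ctx = ssl._create_unverified_context()            # lab only; use proper certs in prod
        si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter
                          user="administrator@vsphere.local",
                          pwd="password", sslContext=ctx)
        content = si.RetrieveContent()

        def find_by_name(vimtype, name):
            """Return the first managed object of the given type with the given name."""
            view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
            try:
                return next(o for o in view.view if o.name == name)
            finally:
                view.Destroy()

        cluster = find_by_name(vim.ClusterComputeResource, "Cluster01")   # hypothetical names
        vcenter_vm = find_by_name(vim.VirtualMachine, "vcenter01")

        # Per-VM override: enabled=False takes just this VM out of DRS automation,
        # while HA (a host-level feature) can still restart it after a host failure.
        override = vim.cluster.DrsVmConfigSpec(
            operation="add",
            info=vim.cluster.DrsVmConfigInfo(key=vcenter_vm, enabled=False))
        spec = vim.cluster.ConfigSpecEx(drsVmConfigSpec=[override])
        cluster.ReconfigureComputeResource_Task(spec, modify=True)

        Disconnect(si)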

    As for Q36, you are right - the portgroup is missing on the new host, and as you said it should be available on all hosts. A host profile is a good idea, and so is a vDS: you create the portgroup once and all hosts automatically get it.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Question 38 - Time for another design question!

    You have been appointed the Virtualization Architect for a project your company is embarking upon. The company has two datacenters:

    - DC_Prd, running vSphere 5.5
    - DC_Dr, running vSphere 5.0

    As the names imply, the first one is where most of the company's prod VM's live. The company intends to use the second datacenter as its DR location, although it also housed some prod VM's there when it first started. There are differences between the two datacenters:

    - DC_Prd has 6 clusters (Clusters 1 - 3 are critical) of 6 hosts each, whereas DC_Dr has 2 clusters of 6 hosts each
    - DC_Prd uses FC as its storage protocol of choice, whereas DC_Dr uses iSCSI for its storage needs
    - DC_Prd has mostly Windows machines, whereas DC_Dr has a mix of Windows and Linux machines

    Requirements:

    - Make optimum use of resources at both locations without compromising HA
    - Protect the VM's living in Clusters 1 - 3 and ensure they can be recovered in the event that DC_Prd is lost
    - Ensure hosts at the DC_Dr location aren't under severe resource constraints at any time
    - Move the prod VM's from the DR location to the Prod location (assume they can route at the other site)
    - One vCenter at each site, in Linked Mode

    Constraints:

    - The company insists on continuing to use its current storage arrays at either site
    - Backups of VM's at either location are taken locally and then shipped offsite.

    Assumptions:

    - The company's network team will ensure that prod VM's can successfully route at the DR location when they are run there
    - The company is buying new hosts for the DR location and there are vSphere licenses available
    - The WAN link between the sites can handle storage replication traffic
    - The company has allocated a decent amount of money to this project

    Tasks:

    - Design a DR strategy for the company
    - Discuss storage replication and which type you would use
    - Move the prod VM's from the DR location to the Prod location

    Make fair assumptions where necessary.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Throw in some ideas for the design question, folks! Not as hard as it looks - it's a tiny furry kitten!
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Extremely busy today, and tomorrow promises to be worse. Won't be posting questions till the weekend/Monday...
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Posting tomtom1's response to the design question:

    Additional assumptions:
    -> RTO 1 hour
    -> RPO 15 minutes
    -> The storage array is capable of asynchronous replication.
    -> VM's in the critical cluster can also be grouped together based on RPO / RTO.

    Additional risks:
    -> The backups are made locally and then taken offsite using sneakernet. This might not meet the assumed DR RPO / RTO, so this risk will need to be mitigated to be able to perform DR in all situations.

    Since the company's requirements with regard to RTO / RPO are not given, we will make the additional assumption that an RPO of 15 minutes is required for all the VM's inside the first 3 clusters (Clusters 1-3). The RTO for these virtual machines is set at one (1) hour.

    The requirements clearly state that there is a production site and a recovery site, which would imply that the use of a vSphere Metro Cluster isn't a good fit for this environment. One of the other solutions that could benefit the organization and is capable of satisfying the requirements is vSphere SRM.

    When using vSphere SRM, one (1) of two (2) replication types can be chosen. Either:
    -> VM based replication (vSphere replication)
    -> Array based replication (Storage replication features)

    When using VM based replication, the vSphere hypervisor itself is responsible for the replication of the critical virtual machines. When using array based replication, this task is outsourced to the storage array. In order to meet requirements for recovery, the critical VM's need to be grouped together, not only based on capacity and performance, but also in terms of RPO / RTO.

    Since the workload characteristics of the critical clusters are unknown, it is assumed that this will not pose a problem. vSphere SRM also leverages vCenter Linked Mode, which is already in place.

    With SRM in place, one site is considered the primary site and the other the recovery site. This means that one site will carry most of the workload until a failover of the primary site occurs and the DRP's are executed. This creates a conflict between the first (1) and fourth (4) requirements. Because of this conflict, it is assumed that a valid DRP is prioritised over the utilisation of the hosts in the DR site.

    Since new hosts are being acquired for the DR site, ensure that there is at least enough capacity for Clusters 1-3 from the production site to run in the DR site. Also keep at least 20% of compute resources reserved for overhead and peak workload.
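
    A back-of-the-envelope Python sketch of that sizing rule. The protected footprint of Clusters 1-3 and the DR host specification below are purely assumed numbers, not figures from the scenario:

        import math

        # Assumed protected footprint of Clusters 1-3 (hypothetical figures).
        protected_ram_gb = 2 * 1024
        protected_vcpus = 400
        headroom = 0.20                     # keep 20% spare for overhead / peak load

        required_ram_gb = protected_ram_gb * (1 + headroom)
        required_vcpus = protected_vcpus * (1 + headroom)

        # Assumed DR host specification (hypothetical).
        host_ram_gb, host_cores, vcpus_per_core = 256, 16, 4

        hosts_by_ram = required_ram_gb / host_ram_gb
        hosts_by_cpu = required_vcpus / (host_cores * vcpus_per_core)

        # +1 host so the DR cluster itself keeps N+1 redundancy.
        print("DR hosts needed:", math.ceil(max(hosts_by_ram, hosts_by_cpu)) + 1)

    With these assumed numbers the answer comes out at 11 hosts, which would just fit inside the existing 2 x 6-host DR clusters.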

    The workload that is currently considered production in the DR site will need to be migrated to the production site. Since these are in separate (v)Datacenters, a combined vMotion + Storage vMotion cannot be executed; a cold migration will need to occur to satisfy that requirement. Therefore, downtime on these VM's is a given.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Good response. I like the way he's considered the assumptions.

    - Linked Mode isn't configured yet. Note the two sites are at differing vSphere versions.
    - Which replication mechanism would you use?

    I threw in the requirement for optimum use of resources to try and trip people up. First, the question clearly states that the company is after a DR strategy. Second, metro clusters aren't a DR strategy; they're rather a disaster avoidance (DA) strategy - think of a hurricane that's bearing down on your city and you'd like to move your important (or all) VM's to your second DC to avoid downtime. Other things that give it away are the lack of a stretched network, different storage protocols at either site and no mention of the distance between sites (there needs to be a certain distance for replication to be of use). I also threw in the type of VM's at either site just for some background noise; good that tomtom1 ignored it. SRM is your choice here.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Keeping things going after a little hiatus -

    Question 39

    a). You've built a new VM for a user, but the user complains she cannot RDP to it. You open your vSphere Client, console to the machine and see the machine is powered up and you can successfully log into it using your domain account. What would you check next and how will you fix this issue?

    b). You need to migrate a VM from one datacenter to another. The machine lives on a datastore called Prod_LUN_01 in Datacenter A. There are a number of other LUN's, Prod_LUN_2 through Prod_LUN_10. There is a shared datastore across both datacenters called ISO_TMPL_NFS; hosts in both datacenters have access to it. There's vCenter_A in Datacenter A and there's vCenter_B in Datacenter B. Provide the steps you'd employ to move the VM from Datacenter A to Datacenter B.
  • smackie1973 Member Posts: 13 ■■□□□□□□□□
    Q39

    a).
    You will need to ensure that RDP is actually enabled within Windows and that the user has permission to RDP to the server. Add her domain login to the Remote Desktop Users group or the local Administrators group, depending on her requirements.

    b).
    Assumptions: - The VM can be powered off for a few minutes. The VM is not huge and there is enough space in the ISO_TMPL_NFS datastore to temporarily store the VM.
    You could storage vMotion the VM to the ISO_TMPL_NFS datastore, power off the VM and then remove it from the vCenter inventory.
    On the other vCenter, browse to the ISO_TMPL_NFS datastore and attach the VM. Then power the VM on and storage vMotion it to the relevant production datastore.

    I'll be interested to see if there is another way of doing this, e.g. VM replication.
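
    A minimal pyVmomi sketch of the unregister / re-register step described above. The vCenter addresses, credentials and the VM name are assumptions (the same credentials are assumed at both sites for brevity); the datastore and datacenter names come from the question:

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim
        import ssl

        ctx = ssl._create_unverified_context()

        def connect(host):
            return SmartConnect(host=host, user="administrator@vsphere.local",
                                pwd="password", sslContext=ctx)

        def find_vm(si, name):
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            try:
                return next(v for v in view.view if v.name == name)
            finally:
                view.Destroy()

        si_a = connect("vcenter-a.example.local")        # hypothetical addresses
        si_b = connect("vcenter-b.example.local")

        # The VM has already been storage-vMotioned onto ISO_TMPL_NFS and powered off.
        vm = find_vm(si_a, "app01")                      # hypothetical VM name
        vmx_path = vm.config.files.vmPathName            # e.g. "[ISO_TMPL_NFS] app01/app01.vmx"
        vm.UnregisterVM()                                # removes from inventory, keeps the files

        # Register it in Datacenter B against a cluster that mounts ISO_TMPL_NFS.
        content_b = si_b.RetrieveContent()
        dc_b = next(d for d in content_b.rootFolder.childEntity
                    if isinstance(d, vim.Datacenter) and d.name == "Datacenter B")
        cluster_b = next(c for c in dc_b.hostFolder.childEntity
                         if isinstance(c, vim.ClusterComputeResource))
        dc_b.vmFolder.RegisterVM_Task(path=vmx_path, name="app01", asTemplate=False,
                                      pool=cluster_b.resourcePool)

        Disconnect(si_a)
        Disconnect(si_b)

    After the register task completes, power the VM on and storage-vMotion it off ISO_TMPL_NFS onto the target production datastore, exactly as in the steps above.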
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Spot on smackie. Everyone else, notice how the correct assumptions about the VM not being large and there being enough space on the shared datastore were made.

    Yeah, you could use SRM (if you have it, of course) to recover the VM at the other datacenter. Obviously, SRM will need to have been already set up and confirmed as working. Or you could just put the VM on a datastore, replicate it to the other datacenter, remove it from one vCenter's inventory and add it to the other vCenter's inventory. Get the array vendor to assist with mounting the VMFS volume at the other DC - you don't want signature conflicts.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Question 40

    Doufus is the Virtualization Engineer at ABC Enterprises. He's been tasked with sizing up LUN's for an Exchange migration project. He's been given the approximate sizes of the 3 Exchange DAG members' database disks - 1.5TB, 2.5TB and 2.0TB. He's added those up, arrived at 6TB, whipped open his SAN management console and provisioned a single 6TB LUN for this deployment. What's wrong with Doufus' sizing idea, what has he not considered and what would you do to make it better? Your answer should talk about


    - performance
    - capacity
    - data growth
    - anything else you can think of


    Make fair assumptions where necessary.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Anyone take a stab at this before I answer?
  • tomtom1 Member Posts: 375
    Alright, let me take a look. Right off the bat, things that aren't accounted for from a capacity perspective:

    -> VM swapfiles, assuming the VM configuration will be stored on the same LUN.
    -> VMX based overhead, once again assuming that the VM configuration will be stored on the same LUN.
    -> VM based snapshots, although Microsoft officially does not support snapshots on Exchange servers, whether they are in a DAG or not (http://technet.microsoft.com/en-us/library/jj619301(v=exchg.150).aspx).
    -> Projected growth for the environment (What type of growth, how much growth and over what period of time?)

    Although the rest of the environment is unknown, provisioning the disks on one LUN might saturate resources specific to that LUN, such as the LUN queue depth. Also, since all disks are provisioned on one LUN, performance might be negatively impacted, because all the IOPS are coming from the same backend disk group (assuming storage-based auto-tiering is not in place).

    I'll post a more VCP oriented question in a bit!
  • tomtom1 Member Posts: 375
    Question 41

    Your organization works with resource pools. You currently have 2, with a couple of VM's in them. One of them is a root resource pool, and the other is a child resource pool. The settings are attached.


    When another administrator tries to perform a power on operation on DC04, he receives an error message.



    Why is this, and what are (at least) 2 possible ways to fix it?
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Essendon wrote: »
    Yeah, you could use SRM (if you have it, of course) to recover the VM at the other datacenter.

    LUN restore seems excessive - worth setting up SRM with VMware Replication so you can restore individual VMs :)
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Interesting thread.

    Q40 - So many problems. Microsoft recommends 20% free space, so the actual sizes should be 1.8TB, 3.0TB & 2.4TB. Pre-5.5 has a 2TB-minus-512-bytes disk limit, which means the 3 DBs must be on physical RDMs. SAN vendors typically recommend ~20% headroom on LUNs for snapshots, logs, etc., so the actual LUN sizes should be 2.2TB, 3.6TB & 2.9TB. Depending on the number of DAG copies, you need to double or triple the LUN sizes (unless the original numbers include DAG copies).
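
    The same arithmetic as a quick Python check (the two 20% figures are the rules of thumb quoted above):

        db_sizes_tb = [1.5, 2.5, 2.0]

        # ~20% free space inside each database LUN (the Microsoft guidance quoted above).
        with_free_space = [round(s * 1.2, 1) for s in db_sizes_tb]
        # ~20% SAN headroom on top for array snapshots, logs, etc.
        with_san_headroom = [round(s * 1.2, 1) for s in with_free_space]

        print(with_free_space)      # [1.8, 3.0, 2.4]
        print(with_san_headroom)    # [2.2, 3.6, 2.9]
        # Multiply again by the number of DAG copies if those aren't already included.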

    Depending on the SAN, having all DBs on the same LUN may not cause a disk performance issue (NetApp WAFL comes to mind), but you can run into LUN queue depth problems. For availability, the 3 DBs should be spread across multiple SAN controllers (if available) and multiple LUNs.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Glad to have you back bud! That's the type of response you'd expect from an almost-there VCDX.
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Q41

    First problem - you should use custom RPs, not the default High, Normal, Low RPs. Default RP shares account for the RP's importance, but not the number/percentage of VMs in the RP. This can result in critical VMs getting fewer resources than normal or low priority VMs.
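
    A small worked example of that dilution effect - the share values follow the default 4:2:1 High/Normal/Low ratio, and the VM counts are hypothetical:

        # (shares, number of VMs) per default-style resource pool.
        pools = {"High": (8000, 20), "Normal": (4000, 4), "Low": (2000, 2)}

        total_shares = sum(shares for shares, _ in pools.values())
        for name, (shares, vms) in pools.items():
            pool_share = shares / total_shares
            print(f"{name:>6}: pool gets {pool_share:.0%}, each VM ~{pool_share / vms:.1%}")

        # High VMs end up entitled to ~2.9% each while Normal VMs get ~7.1% each:
        # the "critical" VMs are actually worse off, which is the problem described above.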

    Second problem - the ProductionTier2 RP and the VMs DC01 & DC02 are siblings. VMs and RPs being siblings is considered a bad design because resource shares are relative to their place in the hierarchy.

    To solve the out-of-memory error, enable (check) Expandable Reservation on the ProductionVMs RP, or increase its RAM reservation.
  • QHalo Member Posts: 1,488
    Good to see you around dave. Hope all is well.
  • kj0 Member Posts: 767
    Welcome back Dave. Hope all has been going well with your study!
  • tomtom1 Member Posts: 375
    Best of luck Dave - if you need some reviewing, I'm sure Manny, myself and the others could make some time available. :)
  • dave330i Member Posts: 2,091 ■■■■■■■■■■
    Thanks guys. Actually, if anyone's a LeftHand expert, I have some metro cluster questions I need answered.
  • tomtom1 Member Posts: 375
    Let's give this another go if you guys are interested?
  • kj0 Member Posts: 767
    tomtom1 wrote: »
    Let's give this another go if you guys are interested?
    Sounds like a plan. ;) Was planning to compile this into a practice document at some stage.
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Giving this thread a new lease of life

    Q. 42 You have been assigned a new P2V engagement at a customer's site. After discussions with the appropriate people, you've discovered that there's a business-critical application that the customer cannot survive without. Due to the number of virtual machines that will be created as a result, you've determined you are going to need 3 hosts for N+1 redundancy. Problem is, the business-critical application is licensed for 16 cores only, and strict licensing conditions dictate that DRS rules cannot be used to restrict machines to a set of hosts in the cluster. Assume there are 8 cores per ESXi server.

    What HA Admission Control Policy would you use and why?

    Answer:


    Constraints:

    - Strict licensing (no DRS rules, only 16 cores licensed)
    - No budget to buy more hardware

    Assumptions:

    - 8 cores per ESXi host

    Requirement:

    - Maximum availability of the business critical application (BCA)

    HA decision:

    - Enable Admission Control. This will ensure VM's restart successfully in the event of a host failure.
    - Choose "Specify Failover Hosts" as the Admission Control policy. This will ensure that the BCA always runs on two hosts only and stays within the licensing constraints. N+1 redundancy is also maintained.

    Implications:

    - Keep VM's within NUMA boundaries. Building VM's larger than 8 vCPU's will result in VM's crossing NUMA boundaries.
    - Now that one host will always be on standby waiting for an untoward event, there can/will be higher overcommit on the 2 remaining hosts.

    Other choices:

    - There aren't any. Any other choice of admission control policy would violate the licensing agreement.
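
    A quick sanity check of the core count behind that choice, using the figures from the question:

        hosts, cores_per_host, licensed_cores = 3, 8, 16
        failover_hosts = 1      # the dedicated "Specify Failover Hosts" host

        active_cores = (hosts - failover_hosts) * cores_per_host
        print(f"Active cores: {active_cores}, compliant: {active_cores <= licensed_cores}")
        # Any policy that lets the BCA run on all 3 hosts exposes 24 cores and
        # breaks the 16-core licence.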
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Q. 43

    You are connected to your vCenter server and suddenly the connection drops out. Repeated attempts to re-connect fail. You RDP to the vCenter server and discover the vCenter service is stopped. You attempt to start the service; it starts and then dies after a few seconds. The SQL server for vCenter's database is on the same machine. What else should you try/look at to get vCenter up and running again?
  • tstrip007 Member Posts: 308 ■■■■□□□□□□
    Checking the Windows event logs for SQL Express-related errors is where I would start.
  • kj0 Member Posts: 767
    Check to make sure that the vCenter service's dependencies are running first. This may include SQL Server - you may need to check that you can connect to the SQL DB and that it is running.
  • tomtom1 Member Posts: 375
    You could also check the vpxd.log, which is more work to view, but provides way more information. For you VCP'ers out there, you should know the location for the exam:

    VMware KB: Location of vCenter Server log files
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Great points guys. Another thing you could check is whether the disk holding the database/logs has enough space. If your vCenter is virtual, log on directly to the host, locate the vCenter VM and extend the disk. If you have a number of hosts, then "pin" vCenter to a host by disabling DRS for the VM holding vCenter.
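
    When vCenter is down and its VM hasn't been pinned, you can still find it by querying each ESXi host directly. A minimal pyVmomi sketch, with hypothetical host names and credentials:

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim
        import ssl

        ctx = ssl._create_unverified_context()
        esxi_hosts = ["esx01.example.local", "esx02.example.local", "esx03.example.local"]

        for host in esxi_hosts:
            # Connect to each host's own API, not to vCenter (which is down).
            si = SmartConnect(host=host, user="root", pwd="password", sslContext=ctx)
            try:
                content = si.RetrieveContent()
                view = content.viewManager.CreateContainerView(
                    content.rootFolder, [vim.VirtualMachine], True)
                for vm in view.view:
                    if "vcenter" in vm.name.lower():
                        print(f"{vm.name} is on {host}, power state: {vm.runtime.powerState}")
                view.Destroy()
            finally:
                Disconnect(si)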
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    Q. 44

    A systems manager has come up with a project in which he intends to run virtual machines in a cluster running vSphere 5.0. Going by the manager's requirements, you've found out that the VM's will require 2 x 4 TB disks. vSphere 5.0 has a 2 TB limit on the vmdk size for a VM. How will you satisfy the requirements?

    Answer:

    Use pRDM's to satisfy the manager's requirement. vSphere 5.0 has a limit of 2TB (actually 2TB minus 512 bytes) on the size of vmdk's for a VM. To work around this limitation, physical mode RDM's can be used, which can be up to 64TB. Understand the limitations of pRDM's though - no VM snapshots that include the pRDM, no Storage vMotion of the RDM data, etc.
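
    A rough pyVmomi sketch of attaching a physical-mode RDM to a VM. The vCenter address, credentials, VM name, device path, controller key and unit number are all assumptions for illustration:

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim
        import ssl

        ctx = ssl._create_unverified_context()            # lab only
        si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter
                          user="administrator@vsphere.local",
                          pwd="password", sslContext=ctx)
        content = si.RetrieveContent()

        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "bigdata01")   # hypothetical VM
        view.Destroy()

        # Physical-compatibility RDM backing pointing at the raw 4 TB LUN.
        backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
            deviceName="/vmfs/devices/disks/naa.600000000000000000000000000000a1",
            compatibilityMode="physicalMode",
            diskMode="independent_persistent")

        disk = vim.vm.device.VirtualDisk(backing=backing,
                                         controllerKey=1000,   # assumed SCSI controller 0
                                         unitNumber=1,         # assumed free slot
                                         key=-1)

        change = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
            device=disk)

        task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
        print("Add RDM task state:", task.info.state)
        Disconnect(si)

    From vSphere 5.5 onwards, VMDKs can be up to 62TB, so this workaround is mainly relevant on 5.0/5.1 clusters like the one in the question.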