Essendon wrote: » -- Troubleshooting scenario -- Q 45

You are the Virtualization Consultant at your company and have been delegated the responsibility of fixing some issues a customer is having.

The issues:
- Random slowness throughout the day. Slowness is more pronounced in the evening; it's at its worst around 8pm.
- Some SQL jobs time out.
- A host failure happened the other day; none of the machines on the affected host powered up on the other hosts.

The environment:
- HP Blade environment, 2 x 10GbE backplanes, single chassis
- 5 x ESXi 5.0 2-way 4-core hosts in 1 cluster, HT not enabled, HA enabled, DRS not enabled, no datastore clusters, 192GB RAM on each blade (Total = 768GB)
- LUNs have varying numbers of machines on them.
- There's considerable variation in the config of the VMs:
  - 40 x VMs with 1 vCPU (T=40) and 4GB RAM (Total = 160GB)
  - 20 x VMs with 2 vCPUs (T=40) and 8GB RAM (Total = 160GB)
  - 20 x VMs with 4 vCPUs (T=80) and 12GB RAM (Total = 240GB)
  - 5 x VMs with 8 vCPUs (T=40) and 32GB RAM (Total = 160GB); these are running SQL workloads

How/what/where would you start troubleshooting this situation? Ask me the questions you have; say you want to know what kind of array the customer is running, or the number of LUNs. Basically, ask questions that would help you arrive at a solution for the customer. The issues described at the top have been left vague/unclear deliberately.
Essendon wrote: » Great points, guys. Another thing you could check is whether the disk holding the database/logs has enough space. And if your vCenter is virtual, then log on directly to the host, locate the vCenter VM, and extend the disk. If you have a number of hosts, then "pin" vCenter to a host by disabling DRS for the VM holding vCenter.
jibbajabba wrote: » We use Thin on Thin... I don't know who made that decision. I'm the guy who is supposed to support the infrastructure, but I was on holiday when it was implemented. Anyway, I don't like Thin on Thin, simply because I have seen customers all of a sudden using a heck of a lot more storage than anticipated. If your pool on the SAN runs low, there is usually not much you can do to fix it unless you buy another shelf. So yes, alerting/monitoring is very important (and staff not ignoring the mails for weeks *sigh*).

As for thin-provisioned LUNs: we use 4TB LUNs (again, for no real reason, I don't think) and I personally think it is pointless to have thin-provisioned LUNs at that size given the average VM size in our environment (we've got 250GB - 1TB VMs). In cases like that I would either:
a. Use 4TB LUNs - thick-provisioned VMDKs
b. Use larger LUNs - thin-provisioned VMDKs
and in both cases thick LUNs. I simply like to know where I am at with the storage. The problem, I think, is the business in most cases: buy small, sell big, hope it works out.

I was "beta testing" Dell EqualLogics years back and I managed to make the Dell guy speechless. He was all over the Thin on Thin thing. We received an early model (a pre-production 4000 series, I think), so he left it with us (not leaving our office for the day, obviously) and let me play with it. Took me 15 minutes to "blow it up". I left the setup as is - at that point 500GB LUNs, thin on thin, so we had 10 VMs or so. I kicked off some clones... some snapshots, some removals, some more clones, some uploads, and deployed some OVFs. I "overran" the system to the point where the alerting was overwhelmed and the whole thing locked up, lol. Needless to say, the Dell guy lost the colour of his face... It was not recoverable. OK, it was a pre-release model and it was probably unlikely to happen in production anyway.
My point is, you rely very heavily on proper alerting when using thin on thin, and in my experience even thin on thick caused problems eventually. I would LOVE to hear from an architect about environments where either implementation made perfect sense. VCDX, anyone? By the way, speaking of thin provisioning and Storage vMotion: depending on your SAN, you may find you have to keep reclaiming space using VAAI because of all those svMotions...
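On the space-reclaim point: these are the ESXi commands typically used for reclaiming dead space on a thin-provisioned LUN after Storage vMotions. They only run on an ESXi host (SSH/ESXi Shell), not a workstation, and the datastore name here is a placeholder:

```shell
# ESXi 5.5 and later: issue VAAI UNMAP against the datastore's LUN.
# "Datastore01" is a placeholder name.
esxcli storage vmfs unmap -l Datastore01

# ESXi 5.0 U1 / 5.1: the older reclaim mechanism, run from the
# datastore root; the argument is the percentage of free space the
# temporary balloon file may consume.
cd /vmfs/volumes/Datastore01
vmkfstools -y 60
```

Whether the array actually frees the blocks depends on its VAAI support, so check the vendor's guidance before running this in production.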
tomtom1 wrote: » I know, but he never finished it, so that's a real shame. @jibbajabba: This was the table I mentioned; found it on Josh Odgers' website. So we should be good with a percentage of 13%, seeing as in his opinion N+2 becomes relevant at the 9th host.
Deathmage wrote: » If these questions are meant for someone with a DCD then I can breathe a sigh of relief. If not, I'm going to do more reading or cancel my exam on Wednesday; maybe 4 months of study isn't enough, eek!
dave330i wrote: » I remember many of the questions in this thread being above VCP level. Edit: in the table you posted, N is the number of ESXi hosts in the cluster that provide resources. In an HA cluster with admission control enabled, the basic formula is N (# of hosts providing resources) + x (# of redundant hosts) = total # of ESXi hosts in the cluster.
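The N + x formula maps directly onto the percentage-based HA admission control policy: reserving x hosts' worth of capacity out of (N + x) total. A minimal sketch of that arithmetic (the cluster sizes here are illustrative, not from any particular table):

```shell
#!/bin/sh
# Percentage of cluster CPU/RAM to reserve for failover with
# percentage-based HA admission control: 100 * x / (N + x).
# Note: shell integer division truncates, so 12.5% shows as 12;
# in practice you would round the reservation up.

reserve_pct() {
    total_hosts=$1   # N + x
    redundant=$2     # x
    echo $((100 * redundant / total_hosts))
}

echo "5-host cluster, N+1: $(reserve_pct 5 1)%"    # 20%
echo "10-host cluster, N+2: $(reserve_pct 10 2)%"  # 20%
```

This is why the reserved percentage shrinks as clusters grow: one redundant host in a 5-host cluster costs 20% of capacity, but only 10% in a 10-host cluster.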
Essendon wrote: » I don't quite follow you; think of it like this: the 'x' stands for the number of hosts whose loss you can sustain while continuing to run your workloads on the remaining ones. So with N + 1, you can sustain the loss of 1 host and still run all your VMs; with N + 2, two hosts can be lost (or otherwise unavailable, think maintenance mode) and the other hosts still handle your VMs. Better?
Essendon wrote: » I may just go with 2 and 4, option 1 may reboot VMs under some particular conditions (I think)
Deathmage wrote: » Not sure if I can post here a question that I just found on a practice test and had difficulty with on the exam: vCenter reports a connectivity problem with an ESXi 5.x host that is not a member of a cluster. An administrator attempts to connect directly to the host using the vSphere Client but fails with the message "An unknown connection error occurred." Virtual machines running on the host appear to be running and report no problems. Which two methods would likely resolve the issue without affecting the VMs? To me I think it's (underlined is my choice):
1) Enter the service mgmt-vmware restart command from either SSH or the local CLI
2) Enter the services.sh restart command from either SSH or the local CLI
3) Select reboot host in the DCUI
4) Select Restart Management Agents in the DCUI
Did I royally screw that up?
tstrip007 wrote: » Yah, that happened to me yesterday after I patched a host. About 20 minutes after I had powered the VMs back on, the host went non-responsive and the VMs showed disconnected. I couldn't access the host directly through the client; however, all the VMs were fully functioning and accessible via RDP. Restarted the mgmt agents. I have iLO configured on most of my servers and use that to access the DCUI.
Verities wrote: » I actually ran into this issue earlier this year in our environment, with a 4.1 host. I restarted the management agents via SSH and everything went smoothly (if you don't have SSH enabled on the host, you can use PowerCLI to turn it on). You won't cause any issues on your VMs (unless you're running them with Automatic Startup/Shutdown), only on your host if it's running a specific task. I think I followed KB 1003490 (article ID), but it was a while ago, so I don't particularly remember.
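For reference, these are the usual agent-restart commands from an ESXi 5.x SSH or ESXi Shell session (they only run on a host, and restart the management agents only; running VMs keep running):

```shell
# Restart just the two agents vCenter and the vSphere Client talk to:
/etc/init.d/hostd restart    # host daemon (direct client connections)
/etc/init.d/vpxa restart     # vCenter agent

# Or restart all management services in one go; this is roughly what
# the DCUI's "Restart Management Agents" option does:
services.sh restart
```

Note that `service mgmt-vmware restart` (option 1 in the practice question above) is the old ESX classic command and does not apply to ESXi, which is why the services.sh/DCUI answers are the ones to pick.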
Deathmage wrote: » See, my only issue is this: maybe it's a design flaw in my home cluster, but I only have a single uplink for the management VMkernel, while my vMotion/Production/vStorage each have two uplinks in a failover policy. It's not so much that I don't have the connections, I was just lazy, lol; I have 8 NICs per host (Broadcom 4-slot NIC riser card). Needless to say, when I tried to restart the management agents before, the process ran, but then my SSH console locked up and I couldn't regain access to the host. I could ping the IP from my laptop/PC; it just wouldn't allow SSH. I had to drop the host to fix it.

However, I'm curious if you guys know roughly how long it takes for the management agents to restart? I'm thinking I jumped the gun, expecting it to happen fast. I'm also pondering whether, even though it's only a test cluster, I should make a VLAN for iDRAC just in case of this situation. But I'm not sure, even if I had iDRAC configured, whether I could access the DCUI from iDRAC, so I'm kind of scratching my head at this point... hmm... now to figure out why restarting the management agents didn't go smoothly for me, lol! I always get the issues, lol!!!!
Verities wrote: » Management agents are pretty quick to restart; I don't have a 100% accurate time in seconds, but it's definitely less than a minute. As for the management agent restart issue: do you have Automatic Startup/Shutdown for VMs enabled? I really can't comment on the iDRAC question since I've never used it or set it up before.
Deathmage wrote: » Turns out I just let it run this time; clocked it taking 3 minutes, and it was successful. Once it restarted, the SSH connection reconnected. I was just impatient earlier. But no, Automatic Startup isn't set up; the VMs aren't controlled in that manner, everything is manual.
Deathmage wrote: » I swear my answers are correct? Is this app broken? Erg, I think those answers are for an NFS datastore, shoot.