
VMware Horizon View issues


Comments

• DevilWAH Member Posts: 2,997
Nope, a third party company. I don't lay this completely at their door though; I expect issues, and the attitude from our side was immediately hostile, which has upset them, and as you know, once that happens it's hard to work together to get things sorted out.
• DevilWAH Member Posts: 2,997
OK, a few updates:

1st - they had set up the network to use 2 x 10 Gig as primary and 2 x 1 Gig as standby. Once he removed the 1 Gig NICs from the virtual switch, the network throughput improved significantly! Strange, as the 1 Gig NICs were set as standby but still seemed to have an effect?
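    (As a side note, a rough sketch of the kind of sanity check that would have flagged this; the uplink names, speeds and roles below are made up to mirror the layout described above, not read from the actual hosts.)

    ```python
    # Sketch: flag a vSwitch NIC team that mixes link speeds, even when the
    # slower NICs are only listed as standby. Uplink names, speeds and roles
    # below are illustrative, not read from the real hosts.
    from dataclasses import dataclass

    @dataclass
    class Uplink:
        name: str
        speed_mbps: int
        role: str          # "active" or "standby"

    def check_team(uplinks):
        top_speed = max(u.speed_mbps for u in uplinks)
        slow = [u.name for u in uplinks if u.speed_mbps < top_speed]
        if slow:
            print("WARNING: mixed-speed team, consider removing: " + ", ".join(slow))
        else:
            print("OK: all uplinks in this team run at the same speed")

    check_team([
        Uplink("vmnic0", 10000, "active"),
        Uplink("vmnic1", 10000, "active"),
        Uplink("vmnic4", 1000, "standby"),   # the 1 Gig standby NICs in question
        Uplink("vmnic5", 1000, "standby"),
    ])
    ```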

    2nd - Office 2013 has hardware acceleration enabled; without a graphics card this actually causes a big performance hit, and turning it off via a reg edit makes a huge difference. If you have the new beta graphics card profiles in VMware View 6 (which offload graphics onto a physical graphics card in the host), then having hardware acceleration turned on is fine, but with the default graphics hardware, turn it off :)
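    (For reference, a sketch of that reg edit using Python's standard winreg module. The DisableHardwareAcceleration value under the Office 15.0 Graphics key is the commonly cited one, but treat the exact key as an assumption and verify it against your own build before baking it into the gold image or pushing it via GPO.)

    ```python
    # Sketch: turn off Office 2013 hardware graphics acceleration for the
    # current user (15.0 = Office 2013). Windows-only; uses the standard
    # library winreg module. Verify the key against your own build first.
    import winreg

    KEY_PATH = r"Software\Microsoft\Office\15.0\Common\Graphics"

    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 1 = hardware acceleration disabled; Office falls back to software rendering
        winreg.SetValueEx(key, "DisableHardwareAcceleration", 0,
                          winreg.REG_DWORD, 1)

    print("Office 2013 hardware acceleration disabled for the current user")
    ```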

    These two have improved matters greatly, going from unusable to a reasonable experience. Still some file copying issues, but an 80% better overall VDI experience.
• DevilWAH Member Posts: 2,997
    Well, another issue, or maybe it is the issue. This is the disk benchmark on a VDI that is running on an underlying VSAN (2 SSDs + 4 spindles per host).
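    (For anyone wanting to repeat the test, a minimal sketch of a quick in-guest disk check along these lines; the file size, block size and path are arbitrary, and it's no substitute for a proper benchmark tool such as Iometer.)

    ```python
    # Sketch: crude in-guest disk check - sequential write then read of a
    # temporary file, reporting rough throughput. Sizes and path are arbitrary;
    # note the read pass may be served from the OS cache, so treat the numbers
    # as indicative only and use a proper benchmark tool for real figures.
    import os, time, tempfile

    SIZE_MB = 256                 # total test size
    BLOCK = 1024 * 1024           # 1 MiB per I/O
    buf = os.urandom(BLOCK)
    path = os.path.join(tempfile.gettempdir(), "vdi_disk_test.bin")

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())      # make sure the data actually left the guest cache
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass
    read_s = time.perf_counter() - start

    os.remove(path)
    print(f"write: {SIZE_MB / write_s:6.1f} MB/s")
    print(f"read:  {SIZE_MB / read_s:6.1f} MB/s")
    ```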

    A question about VSAN performance: on write operations, does it have to confirm the write to both copies of a dedicated VDI disk? Stranger still, if we log on to a VDI with a domain admin account, the performance is better than with a standard domain user. But both have full permissions to the C drive of the VDI, so what difference should this make?


• Essendon Member Posts: 4,546
    As for removing the 1GbE NICs having a positive impact, are you sure they were set to standby? It sounds like there were two teams, each with one 10GbE and one 1GbE, limiting you to 1GbE throughput. Dunno.

    Nice catch about Office 2013, I didn't know about it.

    About VSAN, yes, it needs to write to both locations so a copy is available in case the primary copy is lost (that is, host loss or disk loss). At some defined interval, the write cache is destaged to HDD too.
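    (To make the write path concrete, a toy model of mirrored writes, assuming the acknowledgement only comes back once both replicas have the data in their SSD write buffer; the numbers are illustrative, not measurements from this environment.)

    ```python
    # Toy model: with one mirrored copy, a write is only acknowledged once BOTH
    # replicas have it in their SSD write buffer, so the acknowledgement time is
    # roughly the slower replica plus the inter-host network round trip.
    # The figures below are illustrative, not measurements from this environment.

    def mirrored_write_latency_ms(local_ssd_ms, remote_ssd_ms, network_rtt_ms):
        remote_path = network_rtt_ms + remote_ssd_ms   # write shipped to the replica host
        return max(local_ssd_ms, remote_path)

    # Healthy SSDs on both sides, sub-millisecond network:
    print(mirrored_write_latency_ms(0.5, 0.5, 0.3))    # ~0.8 ms
    # One slow SSD drags every mirrored write down with it:
    print(mirrored_write_latency_ms(0.5, 15.0, 0.3))   # ~15.3 ms
    ```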

    When you say performance is better for a domain admin account, what are you doing to gauge this performance? It probably won't be C drive permissions; is it something to do with a domain admin account having to traverse a shorter path to get to something due to elevated permissions?
• DevilWAH Member Posts: 2,997
    Umm, slight update: it was pure coincidence that the domain admin account performed better. Four standard users were getting poor performance; we assigned two DA accounts to two new machines and they worked fine. Then we assigned one of those machines to a standard user and it went downhill again. However, we have now seen standard users working fine and issues with DA users as we assign them to different machines.

    There does not seem to be any easy-to-see pattern in the issue. We have 3 hosts patched in to one Cisco 6506 (dual NICs) and another 3 patched in to a second. The 6506s themselves are in VSS, and looking at the 10 Gig connections I can see that on all hosts only one of the NICs seems to be sending data (I still don't have the vCenter logs, so I can't actually see any of this from that side of things), but each host is using about 0.5-5% of the 10 Gig link in terms of utilisation, and the core uplink between the two 6506 chassis is running at 10-20% at its worst.

    What I understand is that the write must happen to the two SSDs (master and slave copy); this cached data can then be written to the spindle drives later. I am testing network latency between the two cores and at the moment I can't see any issue, but I do see a few retransmissions / ACKs for unseen packets / out-of-sequence packets. I need to check what this is down to: asynchronous switching, NIC issues, or just the monitoring software's packet captures. But all other traffic across the cores seems to be fine.
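    (A minimal sketch of the kind of host-to-host latency probe meant here; the target address and port are placeholders, and a TCP connect time is only a rough proxy for what the storage traffic actually sees, so packet captures are still needed to explain the retransmissions.)

    ```python
    # Sketch: crude host-to-host latency probe using TCP connect times.
    # HOST and PORT are placeholders - point it at something that is actually
    # listening on the far side. A connect time is only a rough proxy for the
    # storage traffic; packet captures are still needed for the retransmissions.
    import socket
    import statistics
    import time

    HOST, PORT, SAMPLES = "192.0.2.10", 443, 20        # placeholder target

    samples = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with socket.create_connection((HOST, PORT), timeout=2):
            pass
        samples.append((time.perf_counter() - start) * 1000)
        time.sleep(0.2)

    print(f"min {min(samples):.2f} ms  "
          f"median {statistics.median(samples):.2f} ms  "
          f"max {max(samples):.2f} ms")
    ```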
• Essendon Member Posts: 4,546
    What's the multipathing setting on the hosts? Is it in line with the vendors' recommendation? I ask this because you've indicated that traffic is coming in via one link only.
• Essendon Member Posts: 4,546
    This is a great troubleshooting scenario for the VCDX. Thanks for the practice mate!
• DevilWAH Member Posts: 2,997
    Essendon wrote:
    What's the multipathing setting on the hosts? Is it in line with the vendors' recommendation? I ask this because you've indicated that traffic is coming in via one link only.

    I was told it is "based on virtual port", although I'm not sure how this relates to VSAN.
• Essendon Member Posts: 4,546
    No, I mean the storage setting: Round Robin / Fixed / MRU, or is it PowerPath that you use for the MPP?
• DevilWAH Member Posts: 2,997
    Oh, I know what you mean, sorry. I will have to look and see. What is the suggested setting for VSAN?
• Essendon Member Posts: 4,546
    Not sure, though RR is usually what I see these days. You likely have it set to Fixed.
• DevilWAH Member Posts: 2,997
    Hi,

    I can't see where I posted the resolution to this, so I thought I would repeat it here just in case I have not done so before.

    So, the cause of the issues? Not using the correct SSD drives, in both size and performance.

    The VMware best practice says to use an SSD-to-spindle capacity ratio of at least 1:10. In this case the company had spec'd 1 x 200 GB SSD (186 GB usable) for each set of 3 x 1 TB HDDs, which I make a ratio of about 1:16. Now, they said VMware has changed its recommended specs; however, I found this 1:10 ratio dating back to July this year, which is before they delivered the design documents to us. Also, the supplied SSD drives are not on VMware's qualified hardware list, and VMware have told us they are not suitable for the high IO required.
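    (Putting the numbers above into a quick check; the 10% figure is taken as the guideline quoted above, i.e. flash capacity at least a tenth of the spindle capacity it fronts.)

    ```python
    # Quick check of the flash-to-spindle capacity ratio against the ~1:10
    # (flash >= 10% of spindle capacity) guideline. Figures are the ones from
    # this deployment: one 200 GB SSD (186 GB usable) per three 1 TB spindles.
    GUIDELINE = 0.10

    def check(ssd_gb, hdd_gb, label):
        ratio = ssd_gb / hdd_gb
        verdict = "OK" if ratio >= GUIDELINE else "below guideline"
        print(f"{label}: {ratio:.1%} of spindle capacity ({verdict})")

    check(186, 3 * 1000, "as built (200 GB SSD, 186 GB usable)")   # ~6.2%, roughly 1:16
    check(400, 3 * 1000, "with the certified 400 GB SSDs")         # ~13.3%
    ```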

    So, a test... Take out 3 of the 200 GB SSDs and replace them with certified 400 GB SSDs on three of our servers, and IOPS jump from 250 to >6,000, read and write speeds max out at >150 MB/s, and latency drops from 10-20 ms to <2 ms.

    From unusable desktops to something approaching my physical desktop, and fine for general office work. After nearly 2 months of them insisting it was the network, testing and retesting, changing almost every bit of the network infrastructure of the ESXi hosts, and making changes that we now need to roll back, it turns out it was simply not following VMware's best practice for the SSD:HDD ratios and specs.

    Glad it is sorted, but not at all happy with this: yes, it's fixed, but it has taken ages, and they have been going round and round in circles without any real troubleshooting method in place.

    Thanks for everyone's input and suggestions; I have learnt a lot about VDI over the last few months, even if it was not really part of my plan :)