[vSphere] Cannot mount NFS datastore - IPv6 is not enabled ?!?

jibbajabbajibbajabba Member Posts: 4,317 ■■■■■■■■□□
Only noticed today that three hosts were missing their NFS datastore ... one host still has it and I can unmount / remount it, but the other three get this:


Never seen that error before. I mean yes, IPv6 is disabled as I don't use it anywhere in my lab .. but wth ?!?

Anyone seen this error before? Sure, I could enable IPv6 (might try it just to see if it makes a difference), but it doesn't really make any sense to me, given that one host can mount it (all hosts have the same config).
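For anyone hitting the same thing, here's roughly how I'd compare the working host against the broken ones from the ESXi shell (the server IP, share path and datastore name below are just placeholders, not my actual values):

```shell
# List the NFS mounts the host currently knows about
esxcli storage nfs list

# Check whether IPv6 is enabled on the host's VMkernel interfaces
esxcli network ip interface ipv6 get

# Try the mount manually to see the raw error
# (replace the IP, share and volume name with your own)
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/nfs --volume-name=nfs-datastore
```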
My own knowledge base made public: http://open902.com :p


  • jibbajabbajibbajabba Member Posts: 4,317 ■■■■■■■■□□
Enabling IPv6 on the host works, but I still don't understand why. The only explanation I have is that the SAN is based on Server 2012 R2 (where IPv6 is also disabled) - so no idea what is causing this ..
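For reference, this is the host-level switch I flipped, sketched from the ESXi shell (the vSphere Client checkbox under networking settings does the same thing, and a reboot is needed either way):

```shell
# Enable IPv6 support on the host (takes effect after a reboot)
esxcli network ip set --ipv6-enabled=true

# Verify the setting
esxcli network ip get

# After the reboot, watch vmkernel.log if the mount still fails
tail -f /var/log/vmkernel.log
```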
    My own knowledge base made public: http://open902.com :p
  • TheProfTheProf Users Awaiting Email Confirmation Posts: 331 ■■■■□□□□□□
Honestly, it's hard to say; we'd need more info on your environment and how it's set up. Is this a lab or production environment? Maybe someone made a recent change?
  • EssendonEssendon Member Posts: 4,546 ■■■■■■■■■■
In prod at least, it's best to have your NFS server on some form of Linux. MS mounts can be flaky and this may well be the reason why. This certainly was a problem in 2008 R2; maybe MS put some work into fixing this flakiness in 2012 - IDK.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • QHaloQHalo Member Posts: 1,488
Yeah, I'd never serve NFS from a Windows box. This is my prediction:

    Prediction - Pain.wmv - YouTube
  • jibbajabbajibbajabba Member Posts: 4,317 ■■■■■■■■□□
    @ TheProf
As said in my first post, this is indeed a lab, and the only person with access to it is me. I only noticed the NFS datastores missing because I had to reboot all hosts.

    @ Asif
Funnily enough, that was the first thing I checked. Now here's the "fun" thing: I created a host profile from that particular host, and when I attached it to all hosts, even that host itself wasn't compliant!

    @ Essendon
    Oh yea def. In production I wouldn't use a DIY job as a SAN. :)

Here is why it is based on Windows. When I was about to set up the lab (some might remember a question of mine here asking which storage distro to use), I tested FreeNAS, Nexenta, CentOS and Windows Server 2012 R2.

Guess what - 2012 R2 iSCSI performance is on par with FreeNAS, and faster than Nexenta and CentOS. I therefore stuck with 2012 R2 - it's easier to administer (I find FreeNAS irritating, to be honest) and I can use it for multiple roles, especially since it is physical.

    The server is also my second DC so when the whole virtual platform falls apart, I have at least my storage and AD working.

Another reason is RAID card monitoring. The only way to monitor a disk failure in FreeNAS is basically to not use the RAID card at all, or to pass the disks through directly to FreeNAS - and, strangely enough, that is even slower ..

You'd be surprised how damn good 2012 R2 is now. In fact, we were an OEM partner for Server 2003 Unified Data Storage Server (not many know it - the iSCSI Target software) - and even then it was quite good (it was sold by Dell and HP as a "SAN").
    My own knowledge base made public: http://open902.com :p