Does it work in the real world: Hyper-V server hosting its own DC

jibbajabba Member Posts: 4,317 ■■■■■■■■□□
I need a 2008 R2 box for a lab which hosts vCenter, but I also need a domain environment. I need / want to avoid hosting the DC as a VM on the ESXi infrastructure, as the hosts intended for this (3 in total) will be blown away / broken / reinstalled / changed to Citrix and the like, and I don't want to use a physical server just to run AD.

Whilst it is certainly possible in theory (making sure the VM starts immediately once the server boots up and powers off when the host is shut down), does it REALLY work well enough?

In theory (hosting a VM on top of Hyper-V running AD, with the host joining 'itself') it should work, but I wonder if anyone has encountered any nasty surprises with this scenario?

I also wonder how it behaves when you first try to join the Hyper-V server to this domain, as that obviously requires a reboot, and I am not sure whether the VM is shut down before the host can register itself properly etc.

Another option might be using my Linux server, which hosts NFS and iSCSI shares, throwing KVM at it and doing it that way, but that seems like a lot more hassle (plus I have never used KVM, and I don't want to use the storage box for anything but network shares).

Edit: I guess I could even use VMware Workstation on Linux ... but I really don't want to, tbh.
Edit2: The Linux server is 32-bit, so that's out anyway ...
My own knowledge base made public: http://open902.com :p

Comments

  • Zartanasaurus Member Posts: 2,008 ■■■■■■■■□□
    You'll have issues if you try to cluster your Hyper-V servers, since clustering relies on AD to start the cluster service. Other than that, it should work well enough, I think.
    Currently reading:
    IPSec VPN Design 44%
    Mastering VMware vSphere 5 42.8%
  • RobertKaucher A cornfield in Ohio Member Posts: 4,299 ■■■■■■■■■■
    So are you talking about the product "Hyper-V Server" or a server with the Hyper-V role installed? If you are talking about a full install of Windows Server with the Hyper-V role installed, then why not just make this host the primary DC?

    It seems as though this is a production environment, so I am going to suggest what MS suggests as a best practice (or at least did in the past): keep one physical server as your PDC. It can be a small, cheap pizza box of a server.
  • Zartanasaurus Member Posts: 2,008 ■■■■■■■■□□
    It seems as though this is a production environment.
    I thought it was for a lab?
  • pumbaa_g Member Posts: 353
    Hyper-V will not work in a nested setup. ESX is more forgiving, but in a nested setup most of the advanced features are not available. I am planning to install ESX on my old AMD quad-core desktop and use my current desktop as a nested setup.
    “An expert is one who knows more and more about less and less until he knows absolutely everything about nothing.”
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    You'll have issues if you try to cluster your Hyper-V servers

    Nah, no cluster needed; I don't even need Hyper-V for that, it would just be for running the DC, and storage is on iSCSI anyway. Thanks.
    pumbaa_g wrote: »
    Hyper-V will not work in a nested setup.

    Not nested. Server 2008 R2 running Hyper-V. Hyper-V runs one VM, a DC, which the host uses to join the domain. The host also runs vCenter Server.
    why not just make this host the primary DC?
    <snip>
    It seems as though this is a production environment

    Like I said in my first post - this is for vCenter in a lab, providing AD as well.

    The problem with vCenter is that it does NOT install on a DC, so that's out ..

    I was tempted to just install ESXi on it and have two VMs running, but that server has a touchscreen which needs to work, and that doesn't work on ESXi due to the USB driver requirement.

    What I actually did now is install VMware Workstation on my storage server, which runs CentOS 6 and just presents NFS and iSCSI storage. That storage server also has a 2.5" SATA / SSD hybrid drive, which I had forgotten about, so I won't need to use any of the dedicated (SAN) storage for this VM.

    Like I mentioned though, the CentOS box is running 32-bit CentOS - so I cannot use 2008 R2 - but since I just need "a" DC, I installed a VM with Windows Server 2008 (non-R2, without Hyper-V, Standard Core edition). When you create the VM as a "shared VM" you can even configure it to start with the host, so if I need to restart my "SAN" for some reason, the DC should come back up. Since it is Core, I should get away with minimal RAM requirements as well. I could upgrade from 4GB to 8GB if needed, but I hate wasting (expensive) resources.

    So I THINK this might be the safest solution ... (although I don't REALLY like using the storage box for it, which has now lost its network since the first reboot - grrr)
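    Roughly, the "start with the host" side of this boils down to something like the sketch below - just an illustration, not the exact setup; the VMX path, VM name, and vmrun location are placeholders:

```shell
#!/bin/sh
# Sketch of a boot-time check for the DC guest: start it headless if it
# is not already running. VMX path and vmrun location are placeholders.
VMX="${VMX:-/vm/dc01/dc01.vmx}"
VMRUN="${VMRUN:-/usr/bin/vmrun}"

dc_running() {
    # "vmrun list" prints the .vmx paths of all running VMs
    "$VMRUN" -T ws list | grep -qF "$VMX"
}

start_dc() {
    # "nogui" starts the guest without opening the Workstation UI
    dc_running || "$VMRUN" -T ws start "$VMX" nogui
}

# Only attempt anything if vmrun is actually installed on this box.
if [ -x "$VMRUN" ]; then
    start_dc
fi
```

    Hooked into an init/rc script, this gets the same effect as the Workstation "shared VM" autostart if you ever need it outside the GUI.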

  • cyberguypr Senior Member Posts: 6,917 Mod
    If indeed just a lab setup, is there a specific reason why you want to join the Hyper-V host to the domain? I have my main Hyper-V host as a standalone precisely to avoid issues like this. Zero issues.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    I need a domain for parts of the lab, otherwise I wouldn't bother :)
  • Claymoore Member Posts: 1,637
    So I am going to suggest what MS suggests as a best practice (or at least they did in the past): keep one physical server as your PDC. It can be a small, cheap, pizza box of a server.

    Still the best idea. When a DC boots, it tries to talk to the other DCs to get back in the loop. If it cannot reach any other DCs, it will wait 15 minutes before completing its boot and providing directory services. Keeping one physical DC allows all the virtual DCs to boot immediately after a complete host failure. If you have other servers that depend on AD, those would need a delayed boot of at least 15 minutes to give the DC a chance to come online - unless you had a physical DC as well.

    There are other reasons not to virtualize everything - regardless of the vendor. Anyone remember Aug 12, 2008?
  • RobertKaucher A cornfield in Ohio Member Posts: 4,299 ■■■■■■■■■■
    I thought it was for a lab?
    I apparently cannot read...
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    Right, that works surprisingly well. The NFS/iSCSI storage server, running 32-bit CentOS, now has VMware Workstation installed with a Server 2008 "R1" Core VM configured as a DC.

    The reboot and shutdown scripts of the Linux machine have been changed so that the first step is to shut down the DC cleanly (the VM powers on automatically on startup). Load on the storage server sits at 0.17 on average and jumps to 0.6 when the ESX cluster is using the LUNs.

    Not bad really :)
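    The shutdown hook described above can be sketched roughly like this - again just an illustration, with the VMX path and vmrun location as placeholders:

```shell
#!/bin/sh
# Sketch of the shutdown hook: ask the DC guest for a clean OS shutdown
# before the storage host halts. Paths are placeholders for your layout.
VMX="${VMX:-/vm/dc01/dc01.vmx}"
VMRUN="${VMRUN:-/usr/bin/vmrun}"

stop_dc() {
    # "soft" routes the request through VMware Tools inside the guest,
    # so AD gets a proper shutdown instead of a hard power-off.
    "$VMRUN" -T ws stop "$VMX" soft
}

# Only attempt the shutdown if vmrun is actually present.
if [ -x "$VMRUN" ]; then
    stop_dc || echo "WARNING: DC guest did not stop cleanly" >&2
fi
```

    Placing this ahead of the normal halt sequence is what makes the "shut down the DC first" ordering work.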
  • Zartanasaurus Member Posts: 2,008 ■■■■■■■■□□
    jibbajabba wrote: »
    Right, that works surprisingly well. The NFS/iSCSI storage server, running 32-bit CentOS, now has VMware Workstation installed with a Server 2008 "R1" Core VM configured as a DC.

    The reboot and shutdown scripts of the Linux machine have been changed so that the first step is to shut down the DC cleanly (the VM powers on automatically on startup). Load on the storage server sits at 0.17 on average and jumps to 0.6 when the ESX cluster is using the LUNs.

    Not bad really :)
    I thought you required 2008 R2, which is why the 32-bit Linux box was a no-go?
  • bdub Member Posts: 154
    Crazily enough, I did this and it worked fine (mostly). I recall having a few issues, but I don't really recall what they were - nothing major.

    My scenario was that I wanted to use VMM for Hyper-V so I could get some exposure to it. I had a 2008 R2 full install with the Hyper-V role installed, and a VM running as a DC. VMM must be installed by a Domain Admin, so as you can guess it will not install on a workgroup server. So I joined the physical host to the VM's domain and then used a DA account to install VMM on the physical host. I think my main reason for going this route was that the resource requirements for VMM were such that I did not want to dedicate that much to a single VM, since this box was also my workstation/gaming machine.

    I think I did have issues booting into a domain user profile, so IIRC I logged in with the original local user and then did a Run As on the VMM console with a domain user.

    I definitely sort of hacked it together to make it work, but it did work, and it's what I used during my entire MCITP journey.
  • jibbajabba Member Posts: 4,317 ■■■■■■■■□□
    bdub: Thanks :)
    I thought you required 2008 R2, which is why the 32-bit Linux box was a no-go?

    I do need 2008 R2 to install vCenter. The 32-bit 2008 box is JUST to run AD, and a 2008 R2 server can be joined to a 2008-level domain just fine.

    So: CentOS 6 with a Server 2008 "R1" VM running as a DC, and another, physical 2008 R2 box running SQL and vCenter joined to it ...

    Hope it makes sense :)
  • pwjohnston Member Posts: 441
    I have actually been working on this exact project, setting up a production cluster for a client. Hyper-V's weaknesses are more than just DC and FSMO placement. I mean, the whole idea is that you should be able to have your cluster nodes as member servers and virtualize your DCs. That is the point of having the cluster in the first place: the cluster doesn't go down; nodes go down and you either fix them or replace them.

    With that said:
    Domain Controller Virtualization Options

    Personally, I prefer to put two virtual DCs up on separate nodes. If your cluster, your SAN, and your network are solid, you shouldn't have anything to worry about.

    If you want to be a little more cautious, put up a third, physical DC and keep at least the PDC emulator and DNS on it.

    The key here is to back up your DCs' system state, and you really shouldn't have anything to worry about.
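    For reference, on 2008 / 2008 R2 a system-state backup can be taken from the command line with Windows Server Backup; the `E:` target volume here is just an example:

```
REM Run on each DC in an elevated cmd prompt; requires the Windows
REM Server Backup command-line tools feature to be installed.
wbadmin start systemstatebackup -backupTarget:E: -quiet
```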

    Oh, and if you're going to do Hyper-V, do some reading. That **** is so inconsistent. Some things work, some things don't. Some things didn't used to work; now they do. Some things work but aren't recommended for production environments. You have to be on top of it because it's still finding its legs.

    Just my 2 cents.