Create a Containers VM Host with DHCP

Posted on December 1, 2015 by Aidan Finn in Windows Server 2016

There are several ways to deploy Windows Server Containers with Windows Server 2016 Technical Preview 3 (TPv3). You can enable the role on a physical server or use an existing virtual machine on any hypervisor that supports Windows Server 2016 TPv3 as a guest OS. In this post, I’m going to show you how to perform a scripted deployment of a Hyper-V virtual machine that runs Windows Server Core with Windows Server Containers enabled. You may want to read my article, Create a Containers VM Host with NAT, for a quick refresher. This post will focus on enabling containers that get direct network connectivity using IP addresses that are provided by DHCP.

The Shelf Life of a Container

When containers are connected to the network via a NAT-enabled virtual switch in the VM host, each container gets a private and non-routable IP address. NAT rules intercept traffic arriving on a specific TCP port of the VM host and forward it to a TCP port in the container, such as TCP 80. Although this allows huge numbers of containers to share a single routable IP address, it does potentially create a new layer of complexity.

Some might decide that they prefer simplified networking, where each container gets its own network address. There is then no need to implement NAT rules on the VM host after deploying a new container; once you deploy a new container, that container is on the network. In this case, a traditional external virtual switch is deployed instead of a NAT virtual switch.

Each new container gets an IP address from the broadcast domain that it resides on, assuming that a DHCP listener is available. This might require a DHCP relay to forward DHCP request broadcasts to a DHCP server on a different VLAN.

Windows Server Containers with direct network connectivity (Image Credit: Aidan Finn)

You might argue that you want containers to have static IP addresses, but containers are not intended to have a long life. Even outside of the testing and development world, a container’s owner will probably take advantage of the speed of deployment and a born-in-the-cloud data and security architecture to perform frequent upgrades. In reality, a container won’t be around long enough to make assigning a static IP address to it worthwhile.

DHCP Server

The first thing that you will need is a DHCP server that is capable of listening on, and is configured to provide addresses for, the VLAN(s) that your containers will be connected to.

Tip: If you’re setting up a test lab, then don’t be that person who sets up a test DHCP server on a production network and interferes with normal DHCP traffic. Adding a DHCP server to an existing VLAN does not magically provide you with a new network. People like this do exist; I once wasted hours diagnosing a rogue DHCP server and hunting down its owner.

The Hyper-V Host

The second thing that you need is a Hyper-V host or cluster. Deploy Windows Server 2016 TPv3 onto the required hardware. Next, enable Hyper-V, provision some storage for virtual machines, and create a virtual switch that allows virtual machines to communicate on the network. Note that the setup has changed very little since Windows Server 2012 R2.
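The host preparation described above can be done with two cmdlets. This is a minimal sketch; the switch name "External1" and the NIC name "Ethernet" are examples, so substitute the names from your own host:

```powershell
# Enable the Hyper-V role on the TPv3 host (reboots the server).
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the reboot, create an external virtual switch bound to a physical
# NIC so that virtual machines can communicate on the network.
New-VMSwitch -Name "External1" -NetAdapterName "Ethernet" -AllowManagementOS $true
```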


Deploying a VM Host

The solution that we are going to use is based on a set of scripts and images that Microsoft has shared. We’ll download two PowerShell scripts called New-ContainerHost.ps1 and Install-ContainerHost.ps1, which we will modify.

Running New-ContainerHost.ps1 will:

  1. Download several gigabytes of files from Microsoft and expand them, if this hasn’t been done already. One of these files is a VHD format virtual hard disk. This is the VM host virtual machine template.
  2. The script creates a new Hyper-V virtual machine using differencing disks. Note that we don’t usually like differencing disks for normal long-lived virtual machines, but containers are designed to be easy come, easy go. VM hosts might be the same, but we’ll find out more as the technical preview process proceeds to general availability. By using differencing disks, each new VM host takes up very little space, and it is very quick to deploy.
  3. The virtual machine is powered up. The scripted process will specialise the virtual machine. This includes executing a modified copy of Install-ContainerHost.ps1 in the virtual machine to enable the containers role in the guest OS.
  4. Windows Server Containers is configured with direct network connectivity for any new containers, and a container OS image for Windows Server Core is placed in a local (in the VM) containers repository.

Note that New-ContainerHost.ps1 will do some things by default. For example, Docker is installed in the VM host, and the VM host is not connected to a virtual switch. You can modify this behaviour using some parameters when you execute the script:

  • DockerPath: Indicate a path to an alternative Docker.exe.
  • Password: The password for the new VM host.
  • ScriptPath: We’ll use this flag to run a custom version of Install-ContainerHost.ps1. This will enable the containers role in the guest OS with DHCP functionality.
  • SkipDocker: Use this to not install Docker in the guest OS.
  • SwitchName: Instruct New-ContainerHost.ps1 to connect the new VM host to a virtual switch on your Hyper-V host.
  • UnattendPath: Use a different unattended answer file to specialize the guest OS when it first boots up.
  • VHDPath: Don’t use a Microsoft supplied VHD.
  • VMName: The name of your new VM host.

Download Install-ContainerHost.ps1 and edit the script. Scroll down to where the script’s parameters are defined and search for a variable called $UseDHCP. New-ContainerHost does not pass a value for this parameter, and this will cause Install-ContainerHost to create a NAT virtual switch in the new VM host.

The default DHCP parameter in Install-ContainerHost.ps1 (Image Credit: Aidan Finn)

Edit the line, and set $UseDHCP to be $true, forcing the script to create a new VM host with an external virtual switch and allowing containers to have direct network connectivity. Save this script somewhere that you will be able to easily access when you run New-ContainerHost.ps1. Now we can move on with creating a new VM host.
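Inside the param() block of Install-ContainerHost.ps1, the relevant line looks roughly like the sketch below; this is not the exact file contents, just an illustration of the change:

```powershell
param(
    # ... other parameters defined by Microsooft's script ...

    [switch]
    $UseDHCP = $true   # edited: defaults to unset, which produces a NAT switch
)
```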

The modified DHCP parameter in the Install-ContainerHost script (Image Credit: Aidan Finn)

Launch PowerShell with elevated privileges, navigate to where you want to save your container-related lab, and then run the following to download the New-ContainerHost.ps1 script. In the world of containers, you are going to get used to downloading files using wget:
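In PowerShell, wget is an alias for Invoke-WebRequest. The short link below is the one Microsoft published for the preview; verify it is still current before you rely on it:

```powershell
# Download New-ContainerHost.ps1 to the current directory.
wget -Uri https://aka.ms/newcontainerhost -OutFile New-ContainerHost.ps1
```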

Make sure that you add the -ScriptPath flag, and specify the path to your modified copy of Install-ContainerHost.ps1 when you execute New-ContainerHost.ps1:
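An invocation would look something like this; the VM name and password are placeholders, and the script path points at the modified copy saved earlier:

```powershell
.\New-ContainerHost.ps1 -VMName "ContainerHost1" `
    -Password "P@ssw0rd" `
    -ScriptPath .\Install-ContainerHost.ps1
```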

You will be prompted to agree to some licensing terms from Microsoft. Click Yes if you agree and want to continue.

If this is your first time running the script, then you’ll have a long wait while several gigabytes are downloaded. If you’ve run the script before, then you’ll quickly have a brand-new VM host waiting for you to log in.

Make sure that you network the virtual machine.

To find the name of your switch on the Hyper-V host:
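A simple query lists the switches that exist on the host:

```powershell
Get-VMSwitch | Select-Object Name, SwitchType
```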

To connect the VM to the switch on the Hyper-V host:
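Something like the following will do it; "ContainerHost1" and "External1" are example names for the VM host and the switch:

```powershell
Get-VMNetworkAdapter -VMName "ContainerHost1" |
    Connect-VMNetworkAdapter -SwitchName "External1"
```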


Customizing a New VM Host

Now my copy of Install-ContainerHost is used to create a new VM host. Note that I’ve been focusing on using PowerShell to create and manage containers, so I do not install Docker:
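With placeholder names for the VM and password, the Docker-free deployment looks like this:

```powershell
.\New-ContainerHost.ps1 -VMName "ContainerHost1" `
    -Password "P@ssw0rd" `
    -ScriptPath .\Install-ContainerHost.ps1 `
    -SkipDocker
```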

And to speed things up a bit, I also automate the connection to my Hyper-V host’s virtual switch:
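Adding the -SwitchName parameter connects the new VM host to the virtual switch at creation time, so there is no separate connection step afterwards ("External1" is an example switch name):

```powershell
.\New-ContainerHost.ps1 -VMName "ContainerHost1" `
    -Password "P@ssw0rd" `
    -ScriptPath .\Install-ContainerHost.ps1 `
    -SkipDocker `
    -SwitchName "External1"
```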