In a previous article, Designing a Non-Clustered Hyper-V Host, I discussed how to design a simple, non-clustered Hyper-V host. This article is a step-by-step outline of how to install that host, running Windows Server 2012 R2 as the management operating system (OS).
Too many posts on the Internet just run through a click-click-click experience in a wizard, which usually results in a host I would never use in production. Instead, I have written a series of posts at the Petri IT Knowledgebase that walk through preparing a production-ready standalone Hyper-V host. This post shows you how to deploy a non-clustered Hyper-V host, while the next post in this series will show you how to enable Hyper-V and get that new non-clustered host ready for production.
The Objective: Install a Non-Clustered Hyper-V Host
The host in this example is of a very simple design. As in my previous post, there will be two drives. The management OS will be installed on the C: drive. This is the operating system that will get the host up and running, and it will be used to manage the host once the type 1 hypervisor is enabled and slips itself directly onto the hardware, beneath the OS. The D: drive is where the virtual machine files are stored.
A simple storage design for non-clustered hosts (Source: Aidan Finn)
This host has 32 GB RAM. If I allow 1-3 GB for the management OS, I will have at least 29 GB of RAM that I can assign to virtual machines. By enabling Dynamic Memory in compatible virtual machines I can get a lot of virtual machines onto that host.
The host only has a single quad-core Intel processor. I have enabled Hyper-Threading, which gives my host 8 logical processors. I haven't doubled the processing power of the host, but I have slightly increased overall processing capacity, especially for virtual machines that will run multithreaded applications such as SQL Server.
The host has 2 x 1 GbE network cards. Ideally I would like to have 4 NICs to implement the design in the image below.
Simple networking in a non-clustered host (Source: Aidan Finn)
I will be teaming these NICs and then sharing the overall network capacity between virtual machines and the management OS.
The Management OS: Windows Server 2012 R2
To save time, I have downloaded the latest Windows Server 2012 R2 (WS2012 R2) ISO from Microsoft. This ISO includes the April 2014 update that has become known in the media as “Update 1”. Using this ISO will reduce the number of Windows updates and elective hotfixes that I will have to install.
I used the Windows 7 USB/DVD Download Tool to “burn” the ISO to a USB stick (at least 4 GB in size). This means I can use reusable media, don’t have to go searching for a DVD burner, and I don’t need a DVD drive in my host. Overall, I am saving time and money, and the install will be much quicker too.
WS2012 R2 was installed onto the host on the C: drive. That C: drive doesn’t need to be big or fast. I will want speed and capacity for the drive that will store my virtual machines. Speed is the first priority with capacity second.
At this point I urge you to update the drivers and firmware using downloads from the computer vendor’s website. Most issues, especially network-related ones, are solved by having the correct and latest drivers and firmware for your machine’s chipsets. Do not rely on the drivers that Microsoft supplies; they are generic drivers that often perform badly and can even cause bugs. There is a reason that companies like Dell and HP release new drivers and firmware. Don’t be lazy; download and install those drivers!
Virtual Machine Storage Drive
If I have a CD/DVD drive, I like to change its letter from D: to Z:. It doesn’t really affect any functionality, but it’s a personal best practice to keep things tidy, as you’ll see in a few paragraphs.
Open up Disk Management. Bring the second disk online and initialize it as a GPT disk. Create a new volume and format it with a 64 KB allocation unit size; this optimizes the file system for the large virtual hard disk files that Hyper-V creates. This new volume will be the D: drive.
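If you prefer PowerShell over Disk Management, the same steps might look like the sketch below. Note that the disk number 1 is an assumption; run Get-Disk first and substitute the number of your VM storage disk.

```powershell
# Identify the new disk first; disk number 1 below is an assumption
Get-Disk

# Bring the disk online and clear the read-only flag if necessary
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false

# Initialize as GPT, then create and format the D: volume
# with a 64 KB (65536 byte) allocation unit size for Hyper-V
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -DriveLetter D -UseMaximumSize
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "VM Storage"
```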
On the D: drive I create a new folder called D:\Virtual Machines. This is where I will store all virtual machines that will run on this host.
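Creating that folder is a one-liner in PowerShell:

```powershell
# Create the folder that will hold all of this host's virtual machine files
New-Item -Path "D:\Virtual Machines" -ItemType Directory
```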
In this design the 2 x 1 GbE NICs in the host will be teamed. A virtual switch will later be created and connected to the team’s single team interface. The design is a very simple converged networks example. Virtual machines will communicate through the NIC team via the virtual switch and a virtual NIC will be created in the management OS to allow the host to also communicate via the virtual switch. I will be applying some simple QoS rules to ensure that we can access the host if a virtual machine misbehaves.
Run LBFOADMIN.EXE to start the NIC Teaming utility; you don’t need to browse through Server Manager. Select the two NICs in the physical computer, right-click, and select Add To New Team. This will open the dialog shown below.
Name the team and select a teaming mode. I prefer to use one or two simple top-of-rack switches for my access network points, so I’ll choose the Switch Independent teaming mode. If you are connecting the host to a collection of stacked switches that act as one logical switch, then you must go with either the Static or LACP teaming modes, depending on what your switches and network administrators support. According to Microsoft, Switch Independent is the best option when you have the choice.
There are several options for the load-balancing method. Dynamic is the default and the best option to go with. Dynamic load balancing gives you the best of Hyper-V Port and Address Hashing: inbound traffic processing is optimized, and a single virtual NIC can spread outbound traffic across all available NICs, subject to the team’s hashing algorithm.
A simple Hyper-V host NIC team
Use a little PowerShell if you don’t want to use the GUI:
New-NetLbfoTeam -Name ConvergedNetTeam -TeamMembers "SLOT 2 3","SLOT 2 4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
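Note that the team member names such as “SLOT 2 3” and “SLOT 2 4” are specific to this server. A hedged sketch of how you might list your own NIC names first and then verify the team afterwards:

```powershell
# List the physical NICs so you can supply the right names to -TeamMembers
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed

# After creating the team, confirm the team and its members exist
Get-NetLbfoTeam
Get-NetLbfoTeamMember | Format-Table Name, Team
```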
After the team is created open Control Panel > Network Connections (a shortcut is to run NCPA.CPL). You should see the team interface appear alongside your two physical NICs. The host’s management OS will have an IP address but we will not configure it yet.
Enable the Details view in Network Connections. Make a note of the device name of the NIC that is created for the team interface. It will be something like “Microsoft Network Adapter Multiplexor Driver”. You will need this device name (unique on your server) later on when you create a virtual switch.
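If you would rather grab that device name with PowerShell, a sketch like the following might help. The “Multiplexor” filter is an assumption based on the typical team interface description; confirm it on your own server.

```powershell
# Find the team interface by its typical device name
# ("Microsoft Network Adapter Multiplexor Driver"); verify on your server
Get-NetAdapter |
    Where-Object { $_.InterfaceDescription -like "*Multiplexor*" } |
    Format-Table Name, InterfaceDescription
```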
Wait! Where Are The Enable Hyper-V Instructions?
In part 2 of this series I will show you how to enable Hyper-V, and we’ll continue the process of getting your new non-clustered host ready for production.