Microsoft recently released the Windows Server 2016 Technical Preview 4 (TP4). This is the latest pre-release test version that Microsoft has delivered ahead of the 2016 release of the new server operating system.
Microsoft Wants Feedback
There are several new and enhanced features in Windows Server 2016. Sometimes the software company has a good understanding of how we will use features, and sometimes it really wants us to share our plans. Microsoft also wants us to test these new features; for example, it would love to hear feedback about containers and Hyper-V backup. If things need improvement, then Microsoft genuinely wants to know. I have seen Microsoft program managers reading the Windows Server User Voice feedback site and discussing it with their colleagues, and there is no doubt in my mind that the people who have submitted and voted on feedback have shaped Windows Server 2016 since Technical Preview 1.
What’s New in Virtualization with Technical Preview 4?
Microsoft has shared a list of changes. It’s probably not a complete and in-depth list of what’s new in TP4, but it’s a good spotlight on where your evaluations and testing should begin.
Until now, Nano Server supported only the Hyper-V and Scale-Out File Server (SOFS) roles. The new preview adds support for the DNS Server and IIS roles. The following were also added:
- DSC (Desired State Configuration) push mode
- DCB (Data Center Bridging)
- Windows Server Installer
- WMI provider for Windows Update
Management has also been improved. You can now edit and repair the network configuration through the Recovery Console, and a new PowerShell module can be used to build Nano Server images.
Windows Server Containers were added in Technical Preview 3, and in the latest preview we get our first glimpse of Hyper-V Containers, where each container is encapsulated in a Hyper-V child partition with a stripped-down and optimized operating system, featuring a dedicated kernel for secure isolation.
The most interesting addition to Hyper-V is Discrete Device Assignment. The concept is much like SR-IOV, where a Hyper-V virtual machine connects directly to a physical NIC instead of going through the virtual switch for performance reasons. But unlike SR-IOV, Discrete Device Assignment allows virtual machines to connect to PCI Express (PCIe) devices in the host. Microsoft states that Windows Server 2016 will allow NVMe devices to be assigned to guest VMs. The benefit is that a virtual machine can function as a virtual storage appliance that offers the full performance of the PCIe flash storage device.
GPUs are also a must-have for many virtual machines. This isn’t RemoteFX, where the GPU is virtualized; instead, the GPU is accessed directly by the guest OS, which can give much better performance. Microsoft says that discrete assignment of GPUs to virtual machines is coming, but it will require support from the GPU manufacturer. Microsoft will talk more about this in the future.
Other devices appear to work, too. Microsoft mentions that USB 3.0 controllers, RAID/SAS controllers, and other devices have worked. While some things might work, they aren’t candidates for support from Microsoft; the company has limited test hours, so it will focus on the scenarios that benefit the most customers.
Nested virtualization appeared in Windows 10, and now it has come to Windows Server. How I teach and demo Hyper-V has changed forever. I’ll be able to run full clusters on a single laptop.
Storage Spaces Direct (S2D)
Quite a bit has changed in Storage Spaces Direct. Welcome to what I’m calling RAID 3D. When you implement disk resiliency for virtualization, you lose a lot of potential storage space. Microsoft is introducing Multi-Resilient Virtual Disks in S2D. The concept is that instead of saying a virtual disk has a resiliency of X, we can now divide the virtual disk into two tiers, each with its own kind of resiliency:
- Mirroring: Uses more space but offers the best write performance.
- Parity: Offers the best resulting usable space, but has reduced write performance, which isn’t an issue for older, colder data.
This is what we refer to as tiering in S2D. The breakdown between flash and HDD is dealt with differently, and we’ll look at that at another time.
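To make the capacity trade-off between the two resiliency types concrete, here is a rough back-of-the-envelope sketch. The numbers are my own illustrative assumptions, not official S2D figures: I model the mirror tier as a three-way mirror (three full copies of the data) and the parity tier as a simple 3-data + 1-parity layout.

```python
# Illustrative capacity math for a multi-resilient virtual disk.
# Assumptions (hypothetical, not official S2D numbers): a three-way
# mirror keeps three full copies of the data, and the parity tier
# uses a 3-data-column + 1-parity-column layout.

def mirror_usable(raw_tb, copies=3):
    """Usable capacity of a mirrored tier: raw space / number of copies."""
    return raw_tb / copies

def parity_usable(raw_tb, data_cols=3, parity_cols=1):
    """Usable capacity of a parity tier: data columns / total columns."""
    return raw_tb * data_cols / (data_cols + parity_cols)

if __name__ == "__main__":
    raw = 12.0  # TB of raw capacity devoted to each tier (example value)
    print(f"Mirror tier: {mirror_usable(raw):.1f} TB usable")  # 4.0 TB
    print(f"Parity tier: {parity_usable(raw):.1f} TB usable")  # 9.0 TB
```

Under these assumptions, the same 12 TB of raw disk yields 4 TB usable when mirrored but 9 TB usable with parity, which is why keeping cold data on the parity tier saves so much space.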
ReFS is the file system of choice for CSV in Windows Server 2016 (see Accelerated VHDX operations). ReFS always writes new data, including updates, to the mirror tier of the virtual disk, where write performance is best. Data that was written randomly is later rotated from the mirror tier to the parity tier in larger sequential chunks. Because this rotation is sequential IO, it performs well and leads to better disk utilization over time.
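The write-then-rotate behavior can be pictured with a toy model. This is a deliberate simplification of my own, not the real ReFS algorithm: every write lands in the fast mirror tier, and once a full chunk has accumulated, it is destaged to the parity tier as one sequential run.

```python
# Toy model of mirror-to-parity rotation (a hypothetical simplification,
# not the actual ReFS implementation): random writes always land in the
# mirror tier; full chunks are then rotated to the parity tier as large
# sequential runs.

CHUNK = 4  # blocks per sequential destage chunk (illustrative value)

class MultiResilientDisk:
    def __init__(self):
        self.mirror = []   # recently written (hot) blocks
        self.parity = []   # rotated (cold) blocks

    def write(self, block):
        # All writes, including updates, go to the fast mirror tier first.
        self.mirror.append(block)
        if len(self.mirror) >= CHUNK:
            # Rotate a full chunk to the parity tier in one sequential run.
            self.parity.extend(self.mirror[:CHUNK])
            del self.mirror[:CHUNK]

disk = MultiResilientDisk()
for block in range(10):
    disk.write(block)
print(disk.mirror)  # [8, 9] -- hot data still in the mirror tier
print(disk.parity)  # [0, 1, 2, 3, 4, 5, 6, 7] -- destaged sequentially
```

The key point the model captures is that the slow parity tier only ever sees large sequential writes, never the small random writes that hurt parity performance.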
Claus Joergensen, a principal program manager for storage, also described the Software Storage Bus, a virtual bus spanning the nodes of an S2D cluster. In his post, we are introduced to how S2D will use a mixture of flash + HDD or NVMe flash + SSD flash storage. To keep it simple for now: the faster flash devices will be used as a Software Storage Bus Cache (SBC), caching reads and writes for the lower tier of capacity devices.
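The idea of flash caching in front of capacity devices can be sketched with a minimal model. Everything here (class names, sizes, eviction policy) is my own illustration of the general write-back caching pattern, not how SBC is actually implemented: writes land in a small, fast flash cache, and cold entries are destaged to the slower capacity tier when the cache fills.

```python
# Minimal sketch of the caching idea behind a flash cache fronting
# capacity devices. Names, sizes, and the LRU policy are illustrative
# assumptions, not the real Software Storage Bus Cache design.

from collections import OrderedDict

class FlashCache:
    def __init__(self, backing, capacity=3):
        self.backing = backing      # dict standing in for HDD blocks
        self.capacity = capacity    # cache slots (flash is small)
        self.cache = OrderedDict()  # block -> data, in LRU order

    def _evict(self):
        while len(self.cache) > self.capacity:
            block, data = self.cache.popitem(last=False)
            self.backing[block] = data  # destage cold data to capacity tier

    def write(self, block, data):
        self.cache[block] = data        # writes land in flash first
        self.cache.move_to_end(block)   # mark as most recently used
        self._evict()

    def read(self, block):
        if block in self.cache:         # cache hit: served from fast flash
            self.cache.move_to_end(block)
            return self.cache[block]
        return self.backing[block]      # miss: read the slow capacity tier

hdd = {}
cache = FlashCache(hdd, capacity=3)
for i in range(5):
    cache.write(i, f"data{i}")
print(sorted(hdd))          # oldest blocks destaged: [0, 1]
print(sorted(cache.cache))  # hottest blocks stay in flash: [2, 3, 4]
```

The takeaway is the division of labor: the flash tier absorbs the hot IO, while the capacity devices only handle destaged data and cache misses.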
Your mind is probably trying to wrap itself around RAID 3D now! I expect we’ll learn more about Windows Server 2016 Technical Preview 4 in Hyper-V and cloud scenarios in the coming weeks and months, so watch this space!