Choosing Hyper-V Storage: Physical Disks

Posted on April 22, 2013 by Aidan Finn in Hyper-V

This subject has become a talking point because of an incorrect KB article that Microsoft recently published. [Editor’s note: Microsoft removed the web page in question before this article was posted.] In this series of posts, we will talk you through the possible solutions you can use for virtual machine storage.

There are several kinds of disk that you can present to a virtual machine. Some are physical volumes. These types can offer support for legacy management mechanisms, but they cannot offer the flexibility benefits that we associate with virtualization. Others are virtual solutions that are designed to offer flexibility and enable the self-service trait that we associate with cloud computing.

Kinds of Physical Disks

There are three kinds of physical disk that you can attach to a virtual machine in Windows Server 2012 Hyper-V: pass-through disks, iSCSI disks, and Fibre Channel disks.

Pass-through Disks

The first of these is the pass-through disk, known to VMware customers as raw device mapping. A pass-through disk is a LUN that is connected directly to a controller of the virtual machine, configured in that virtual machine’s hardware settings. There are valid reasons for using a pass-through disk. There are also not-so-valid reasons, and these are often the ones offered by engineers who have mistakenly gone down the path of favoring this type over superior virtual alternatives.

Management of a pass-through disk’s LUN is done using the physical storage tools, just as you would manage LUNs on a physical server or a SAN. This means that you can easily extend a LUN’s size while a virtual machine is running, which has obvious benefits for services that are subject to a service level agreement. That ability matters because Windows Server 2012 Hyper-V cannot resize the virtual storage of a running virtual machine. The counter-argument is that correctly sized virtual machines should require infrequent changes.
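For example, after the SAN administrator has expanded the LUN, the extra space can be claimed from inside the guest OS. This is a minimal sketch using the Windows Server 2012 Storage cmdlets; the drive letter E: is a placeholder for your data volume:

```powershell
# Run inside the guest OS after the LUN has been expanded on the SAN.
Update-HostStorageCache    # rescan the bus so Windows sees the new LUN size

# Find the maximum size the partition can grow to, then extend it.
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max
```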

Prior to Windows Server 2012, virtual hard disks were limited to a maximum size of 2,040 GB. To overcome that restriction, businesses decided to use pass-through disks – a valid architectural decision at the time. Thanks to Windows Server 2012, we can now deploy VHDX virtual hard disks. They can scale up to 64 TB, which is also the maximum size of a Volume Shadow Copy Service (VSS) snapshot.
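Creating a large VHDX is a one-liner in the Hyper-V PowerShell module; the path below is just an example:

```powershell
# Create a dynamically expanding VHDX at the 64 TB maximum (path is an example).
New-VHD -Path "D:\VHDX\Data01.vhdx" -SizeBytes 64TB -Dynamic
```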


A sometimes-quoted reason for using pass-through disks is that the business is fearful of data corruption in a virtual hard disk. As you will see in the next installment of this article series, Microsoft has gone to great lengths to alleviate this concern. Another offered reason is that administrators want to back up data volumes using SAN snapshots. However, it’s not a good reason: host-level backup can use a hardware VSS provider supplied by the SAN manufacturer to do the exact same thing for virtual hard disks that are stored on a host-managed volume or Cluster Shared Volume.

Virtual Hard Disks

And finally, we sometimes hear that administrators are deploying pass-through disks to get performance. This excuse is often used by those who do not understand how their SAN works or the performance characteristics of virtual hard disks. A LUN on a SAN spans many physical disks in a group, taking up a small amount of each disk. Creating lots of LUNs in a single disk group for pass-through disks places more demands on the physical spindles. A virtual hard disk will span those same physical disks, but from a single LUN; the same spindles end up servicing the same cumulative demand from the services running in the virtual machines.

Virtual hard disks can perform at almost the same speed as the physical storage that they reside on. In fact, virtual hard disks are the backbone of the Microsoft data center strategy and are probably the most used storage type in cloud computing. Are there exceptions where the extra 2% of performance is required? Yes, but that’s the point – they are exceptions, and you need to understand the limitations of using a physical disk.

Is using physical disks for virtual machines all bad? No, there are some scenarios where it is necessary.


Guest Clusters

We can make virtual machines highly available by hosting them on a Hyper-V cluster, but that doesn’t make their services highly available. Failover has some level of downtime:

  1. A host fails
  2. The cluster heartbeat times out
  3. Virtual machines fail over to other hosts
  4. The failed-over virtual machines boot up

The downtime is minimal, but it still impacts service availability. We can create guest clusters, wherein we create clusters in the virtual machines, independent of the host cluster. Guest clusters have the same requirements as physical clusters, and this usually means having some kind of shared storage. We cannot use a pass-through disk for this shared storage, because a pass-through disk can be presented to only a single virtual machine; it is not actually shared.

Windows Server 2012 allows us to create guest clusters with up to 64 nodes using any of the following shared storage options, which you’ll notice are the same as in the physical world:

  • SMB 3.0 Shares: Any application, such as SQL Server, that supports clusters with file storage can use SMB 3.0 file shares that are permissioned for the nodes of the guest OS cluster.
  • iSCSI Targets: An iSCSI SAN can present a LUN to all members of the guest cluster. The virtual machines can use iSCSI initiators to connect to this storage. These initiators will use virtual machine virtual NICs to communicate with the SAN via a virtual switch (or switches) in the Hyper-V host. Check with your storage vendor for support of this scenario.
  • Virtual Fibre Channel: Windows Server 2012 allows you to virtualize a host’s N_Port ID Virtualization (NPIV)-capable host bus adapters (HBAs). This means that both the host and enabled virtual machines can connect to LUNs on a Fibre Channel SAN, with support for virtual machine Live Migration. LUNs can be zoned on the SAN to allow virtual machines to use them as the guest cluster’s shared storage.
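As an illustration, the iSCSI and Virtual Fibre Channel options above might be configured roughly as follows. All names and addresses are placeholders, and the virtual SAN is assumed to already exist (created with New-VMSan and mapped to the host’s NPIV-capable HBAs):

```powershell
# Inside each guest cluster node: connect the iSCSI initiator to the SAN.
New-IscsiTargetPortal -TargetPortalAddress "10.0.1.50"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# On the Hyper-V host: give a VM a virtual Fibre Channel adapter instead.
# "FCSanA" is a hypothetical virtual SAN name.
Add-VMFibreChannelHba -VMName "GuestNode1" -SanName "FCSanA"
```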

Each of these options is typically created using physical storage. Note that you can use virtual storage appliances to create a completely virtualized guest cluster that is abstracted entirely from the physical compute cluster and fabrics.


Limits on Physical Disks

One of the big reasons that virtualization is deployed is flexibility. Abstracting virtual machines from the physical environment gives us the following benefits.

  • Quicker deployment: We don’t have to wait on anyone to provision hardware configurations to deploy new virtual machines. Getting physical storage from a SAN administrator can be compared to a small business trying to get a loan from a bank in an economic crisis: It will be slow, you will have to negotiate, and you won’t get everything you need.
  • Cloud computing: A necessary trait of a cloud is self-service, where customers (internal or external) can deploy virtual machines themselves. Physical storage administration does not lend itself to this purpose.
  • Easier backup: Backing up a LUN that stores virtual machines enables application-consistent backup of all those virtual machines using a Hyper-V VSS writer. One backup job on a host can have a huge and efficient reach. Imagine a cloud where customers are deploying virtual machines without the involvement of IT. Who is going to deploy and configure all those backup agents and jobs? Most IT administrators tend to find out about new services only when their owners ask for a restore. Imagine this in a cloud with hundreds or thousands of virtual machines that IT had no involvement in deploying!
  • Disaster recovery replication: Virtual machines that are just files are easy to replicate using well-proven, selective, and economical techniques. This is not the case with physical LUNs – it’s all or nothing, and they require expensive, inflexible infrastructure.

A KB article was recently published by Microsoft that incorrectly stated that Windows Server 2012 Live Migration did not support pass-through disks. Many Hyper-V experts cheered this news because this could finally bring an end to inflexible storage on Hyper-V, while others questioned the validity of the claim. (The KB article was quickly deleted.) The truth is that Hyper-V does support pass-through disks with Live Migration as long as both of the following are true:

  • You are performing Live Migration on a highly available virtual machine between nodes in a Hyper-V cluster.
  • The pass-through disks are managed by the failover cluster.
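In that supported scenario, the migration is driven through the failover cluster, for example (VM and node names are placeholders):

```powershell
# Live-migrate a highly available VM whose pass-through disks are
# managed by the failover cluster.
Move-ClusterVirtualMachineRole -Name "VM01" -Node "Host2" -MigrationType Live
```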

That covers the traditional Live Migration scenario that was introduced in Windows Server 2008 R2 Hyper-V. Windows Server 2012 Hyper-V introduces new kinds of Live Migration:

  • Storage Live Migration: Movement of virtual machine files without service outages.
  • Shared-Nothing Live Migration: Movement of virtual machines and their files between hosts without common storage, a common cluster membership, or even any cluster at all.
  • Live Migration with SMB 3.0 Storage: Storing virtual machines on SMB 3.0 file share storage and performing Live Migration of those virtual machines, with or without a Hyper-V cluster.
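For reference, the first two of these can be sketched with the Hyper-V cmdlets; the VM, host, and path names are placeholders:

```powershell
# Storage Live Migration: move a running VM's files with no service outage.
Move-VMStorage -VMName "VM01" -DestinationStoragePath "D:\VMs\VM01"

# Shared-Nothing Live Migration: move the VM and its storage to another
# host that shares no storage or cluster membership with this one.
Move-VM -Name "VM01" -DestinationHost "Host02" -IncludeStorage `
    -DestinationStoragePath "D:\VMs\VM01"
```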

All of these new kinds of Live Migration only work with virtual hard disks; it’s not possible to move a LUN from one server/SAN to another without downtime!

There are some situations where using physical storage can be useful, but they are exceptions rather than the rule. Physical storage negates many of the benefits of virtualization. Virtual storage – particularly the new VHDX format of virtual hard disk – offers performance, scalability, and stability with support for all of the features of Hyper-V. We will look at the virtual hard disk design choices in part two of this article.

