In this post, I will discuss how you can achieve better networking performance with Azure virtual machines by using a feature called Accelerated Networking.
Faster Is Better
Most workloads in Azure will probably never find networking to be the bottleneck. However, some extreme workloads need to send or receive data at high speeds, with reliable streaming, and with lower CPU utilization. If that describes your workload, then Accelerated Networking is a feature you should consider enabling when creating your virtual machines. It enables speeds of up to 25Gbps per virtual machine.
What Is Accelerated Networking?
If you have been doing lots of reading about Hyper-V or have dived deep into hardware offloads for VMware, then you might be familiar with something called Single-Root I/O Virtualization or SR-IOV. SR-IOV is a virtualization feature that allows a virtual machine to use a virtual function or VF (a special guest OS driver) to connect directly to the physical function or PF on a physical network card (NIC).
SR-IOV was introduced in Windows Server 2012 Hyper-V. Microsoft announced at Ignite last year that it had started to turn on this feature in waves across the Azure regions. The Azure implementation of SR-IOV is called Accelerated Networking. The diagram below shows how enabling Accelerated Networking changes the architecture of a virtual machine's networking. The enabled NIC no longer passes through the virtual switch, which runs in user mode in the management operating system (host OS). Therefore, it no longer requires multiple host processor context switches to transmit packets in or out. Instead, the packets pass directly between the VF of the virtual machine and the PF of the NIC.
Enabling Accelerated Networking has 3 effects:
- Reduced CPU utilization, leaving more capacity for processing the massive amounts of data that are either being sent or received
- Reduced jitter, which is better for streams of data
- Higher overall throughput, enabling more data to be pushed at once
Higher-spec virtual machines should see overall bandwidth increase significantly, up to 25,000Mbps on the M128s or 20,000Mbps on the GS5 or DS15_v2.
Read the Details
“The devil is in the details” is a phrase that applies to Accelerated Networking. The typical D3_v2 or A2_v2 that we lesser mortals deploy on a regular basis cannot use Accelerated Networking. The short story is that virtual machines with 8 cores can use virtual NICs with Accelerated Networking enabled. The detailed answer is that the virtual machine series/size must be listed on this page, must be deployed in a region with support, and must run a supported guest OS. Note that Windows Server support is generally available while Linux support is in preview.
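If you would rather check programmatically than scan the documentation, a query along these lines should work as a sketch, assuming the Az PowerShell module is installed and that the compute resource provider reports an `AcceleratedNetworkingEnabled` capability for each size (both are assumptions worth verifying against the current cmdlet reference):

```powershell
# List VM sizes in a region that report support for Accelerated Networking.
# Assumes the Az module is installed and you are signed in (Connect-AzAccount).
$location = "WestEurope"   # example region; substitute your own

Get-AzComputeResourceSku -Location $location |
    Where-Object { $_.ResourceType -eq "virtualMachines" } |
    Where-Object {
        $_.Capabilities |
            Where-Object { $_.Name -eq "AcceleratedNetworkingEnabled" -and $_.Value -eq "True" }
    } |
    Select-Object -ExpandProperty Name
```

This filters the compute SKUs down to virtual machine sizes and keeps only those whose capability list flags Accelerated Networking support in the chosen region.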
Another page reveals more information:
Expected network performance is the maximum aggregated bandwidth allocated per virtual machine size across all NICs for all destinations.
That means that to hit 20,000Mbps with a DS15_v2, we must deploy the virtual machine with 8 NICs and aggregate the bandwidth of those NICs. A single NIC will not give you that bandwidth.
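Building that multi-NIC configuration in PowerShell looks something like the following sketch, assuming the Az module and a set of NICs you have already created (all resource names here are placeholders):

```powershell
# Attach several pre-created NICs to a VM configuration (names are examples).
# Assumes the Az module and existing NIC objects ($nic1, $nic2, ...).
$vm = New-AzVMConfig -VMName "demo-vm" -VMSize "Standard_DS15_v2"

# The first NIC must be marked as primary.
$vm = Add-AzVMNetworkInterface -VM $vm -Id $nic1.Id -Primary
$vm = Add-AzVMNetworkInterface -VM $vm -Id $nic2.Id
# ...repeat for the remaining NICs, up to the limit for the VM size
```

The aggregated bandwidth figure only applies across all of the attached NICs together, so the workload must also spread its traffic across them.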
That page also says that:
Upper limits are not guaranteed but are intended to provide guidance for selecting the right VM size for a specific application.
Check out the figures that Microsoft shares on the virtual machine sizes page for an idea of what could be possible in ideal circumstances. Many factors will influence the actual numbers, including guest OS tuning, the nature of the traffic, application efficiency, network congestion, and network connections.
Some Deployment Notes
As with all new features, those of you clinging to ASM (Classic) deployments will miss out; it is past time to upgrade to ARM! The NICs must be created with Accelerated Networking enabled; the feature cannot be enabled afterward. And finally, you need to use PowerShell: Accelerated Networking requires a flag to be set when creating the NIC, and you cannot create multi-NIC virtual machines using the Azure Portal.
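A minimal sketch of creating such a NIC with the Az PowerShell module, assuming an existing virtual network and subnet (all names here are placeholders):

```powershell
# Create a NIC with Accelerated Networking enabled (it cannot be turned on later).
# Assumes the Az module, a signed-in session, and an existing VNet/subnet.
$vnet   = Get-AzVirtualNetwork -Name "demo-vnet" -ResourceGroupName "demo-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "default" -VirtualNetwork $vnet

$nic = New-AzNetworkInterface `
    -Name "demo-nic1" `
    -ResourceGroupName "demo-rg" `
    -Location $vnet.Location `
    -SubnetId $subnet.Id `
    -EnableAcceleratedNetworking
```

The `-EnableAcceleratedNetworking` switch is the flag in question; the resulting NIC can then be attached to a virtual machine of a supported size at creation time.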