How to Build an Azure Firewall in a Hub Virtual Network
In this design, the Azure Firewall will be deployed once into a hub virtual network. Applications or services will be deployed in spoke virtual networks. VNet peering (global peering is not supported) will connect the hub, where the firewall is hosted, to the spokes, where the applications/services are hosted.
The benefits of this design are:
- Services can be deployed across different virtual networks/resource groups/subscriptions in the same tenant (governance & RBAC) but you can still enable secure communications between isolated services.
- Every packet flowing between services is inspected and logged by the Azure Firewall.
- You have a centralized Azure Firewall deployment instead of one firewall or network virtual appliance (NVA) cluster per application/service deployment.
- The role of network security can be separated from the roles of application development & operations.
Firewall Virtual Network
This is a very simple virtual network – a single subnet, which must be called AzureFirewallSubnet, is required to host the Azure Firewall. The firewall has a single public IP address, which you should note for the “destination address” in NAT rules, and a single internal IP address, which you should note for creating route tables in the application/services subnets.
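The hub deployment above can be sketched with the Azure CLI. The resource group, VNet name, and address space below are illustrative assumptions, not requirements – only the subnet name AzureFirewallSubnet is mandatory. Note that `az network firewall` lives in the azure-firewall CLI extension.

```shell
# Assumed names/addresses (hub-rg, hub-vnet, 10.0.0.0/16) are placeholders.
# Hub VNet with the mandatory AzureFirewallSubnet:
az network vnet create \
  --resource-group hub-rg \
  --name hub-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name AzureFirewallSubnet \
  --subnet-prefix 10.0.1.0/24

# The firewall requires a static, Standard-SKU public IP:
az network public-ip create \
  --resource-group hub-rg \
  --name hub-fw-pip \
  --sku Standard \
  --allocation-method Static

# Requires: az extension add --name azure-firewall
az network firewall create \
  --resource-group hub-rg \
  --name hub-fw

# Bind the firewall to the subnet and the public IP:
az network firewall ip-config create \
  --resource-group hub-rg \
  --firewall-name hub-fw \
  --name fw-config \
  --vnet-name hub-vnet \
  --public-ip-address hub-fw-pip
```

After deployment, record the public IP (for NAT rules) and the firewall's private IP from the ip-config (for route tables).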
If your applications are internet-facing and require a dedicated public IP address (e.g. a web application gateway/firewall), then traffic from the Internet will reach those services without flowing through the firewall. However, traffic between applications must flow through the firewall, and this level of isolation will contain risk and enable full logging with centralized oversight & control.
NAT rules will allow you to present applications to the Internet via the PIP of the firewall. Application and/or network rules will allow you to control the flow of traffic between the subnets and out to the external world.
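A NAT (DNAT) rule like the one described can be sketched as follows. The collection name, rule name, and the backend address 10.1.1.4 are hypothetical; substitute your firewall's actual public IP for the placeholder.

```shell
# Hypothetical DNAT rule: publish a web server at 10.1.1.4 (in a spoke)
# on port 443 via the firewall's public IP.
az network firewall nat-rule create \
  --resource-group hub-rg \
  --firewall-name hub-fw \
  --collection-name inbound-nat \
  --name web-dnat \
  --priority 100 \
  --action Dnat \
  --protocols TCP \
  --source-addresses '*' \
  --destination-addresses <firewall-public-ip> \
  --destination-ports 443 \
  --translated-address 10.1.1.4 \
  --translated-port 443
```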
Application Virtual Network(s)
There are three things to consider here:
- Routing – discussed next.
- External services
- The cost of peering – more on this later.
You will do a lot of routing management if you choose to deploy Azure Firewall, and you will do massive amounts of routing management if you deploy the Azure Firewall into hub-and-spoke architecture. This is, so far, where I have spent most of my network troubleshooting time – usually it’s a route table that I have forgotten to associate with a subnet.
If you create a route table for each subnet with a simple route to 0.0.0.0/0 via the internal IP address of the Azure Firewall, it would appear that all traffic will route via the Azure Firewall, but in reality:
- Traffic between a source and destination in the same VNet will flow directly, without inspection by the Azure Firewall, but will still be subject to NSG filtering.
- You will have split routing – any services in the spoke VNets with a public IP address will have traffic come in through their PIP but go out through the Azure Firewall … and that won’t end well.
In reality, you will need greater control. One could create a route with a destination of 10.0.0.0/16 via the internal address of the Azure Firewall. That route will direct all traffic inside the VNets through the firewall but allow services to respond out through their respective inbound routes. However, this micro-segmentation will send all traffic across the peering connections and that could increase network costs substantially.
The balancing act is to send all inter-VNet traffic through the Azure Firewall but let all intra-VNet traffic stay inside the VNet – still filtered by the NSGs. To do this, you will need more complex route tables. Each route table will need a route for each VNet or subnet that it must communicate with. For example, the route table on 10.1.1.0/24 might have a route to 10.2.0.0/16 that passes through the Azure Firewall’s internal IP address.
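The example route above can be sketched with the Azure CLI. The resource group and VNet/subnet names are illustrative, and 10.0.1.4 is an assumed firewall internal IP – use the private IP from your firewall's ip-config.

```shell
# Route table for the 10.1.1.0/24 subnet in a spoke VNet:
az network route-table create \
  --resource-group spoke1-rg \
  --name spoke1-subnet1-rt

# Send traffic bound for the other spoke (10.2.0.0/16) via the
# firewall's internal IP (assumed here to be 10.0.1.4):
az network route-table route create \
  --resource-group spoke1-rg \
  --route-table-name spoke1-subnet1-rt \
  --name to-spoke2 \
  --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the subnet - the commonly forgotten step:
az network vnet subnet update \
  --resource-group spoke1-rg \
  --vnet-name spoke1-vnet \
  --name subnet1 \
  --route-table spoke1-subnet1-rt
```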
The final piece of the puzzle is VNet peering. Each application VNet has peered with the Azure Firewall’s hub VNet. You should consider a few things here:
- Cost: VNet peering has a micro-cost. If you will have massive flows of data across those peering connections then this cost will build up. Be careful about micro-segmentation between subnets in the same VNet; tiers of the same service will communicate a lot but tiers of different services typically only communicate a little, relatively speaking. Place your service components into VNets with this in mind.
- Peering configuration: You will create one peer connection from the hub to each spoke, and one connection from each spoke to the hub. When creating the spoke->hub connection you must enable “Allow Forwarded Traffic” to allow traffic that the Firewall is routing to the spoke.
- Governance: This design implies that one VNet should never talk directly to another VNet. You should attempt to control the creation of VNet peering connections.
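The hub/spoke peering configuration described above can be sketched as follows; names are illustrative, and the remote VNet is referenced by resource ID (left as a placeholder) when the VNets are in different resource groups or subscriptions.

```shell
# Hub -> spoke connection:
az network vnet peering create \
  --resource-group hub-rg \
  --name hub-to-spoke1 \
  --vnet-name hub-vnet \
  --remote-vnet <spoke1-vnet-resource-id> \
  --allow-vnet-access

# Spoke -> hub connection: --allow-forwarded-traffic is the flag behind
# "Allow Forwarded Traffic", required so the spoke accepts traffic that
# the firewall routes to it.
az network vnet peering create \
  --resource-group spoke1-rg \
  --name spoke1-to-hub \
  --vnet-name spoke1-vnet \
  --remote-vnet <hub-vnet-resource-id> \
  --allow-vnet-access \
  --allow-forwarded-traffic
```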
Adding a Jumpbox
If you require a bastion host to enable secured remote access to your virtual machines then place it into another spoke virtual network. You should then allow traffic from it to the other subnets using network rules in the Azure Firewall. Here are two things to think about:
- Consider an RDS Gateway: One of my students suggested this to me in a class last year. Instead of just opening up RDP, enable certificate-secured RDP traffic to the virtual machines in your services via the jumpbox, which runs the RDS Gateway role (the Gateway role only, so no RDS licensing is required). This will scale out your ability to work across many services at once.
- NAT or PIP? Should your jumpbox have a dedicated PIP or NAT via the firewall? Security purists will prefer the NAT approach. I prefer the PIP approach, which you’ll see in many Microsoft reference architectures; this is because, even if there are problems that affect RDS or the Firewall, I can still get into the network directly via the virtual machine – please do use Azure Security Center’s Just-In-Time VM Access.
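The firewall network rule allowing the jumpbox to reach the other subnets can be sketched like this. The jumpbox subnet (10.3.1.0/24) and spoke address spaces are assumed values for illustration.

```shell
# Hypothetical rule: allow RDP (TCP 3389) from the jumpbox's spoke subnet
# to the application spokes, via the central firewall.
az network firewall network-rule create \
  --resource-group hub-rg \
  --firewall-name hub-fw \
  --collection-name jumpbox-access \
  --name allow-rdp \
  --priority 200 \
  --action Allow \
  --protocols TCP \
  --source-addresses 10.3.1.0/24 \
  --destination-addresses 10.1.0.0/16 10.2.0.0/16 \
  --destination-ports 3389
```

Remember that the jumpbox spoke also needs route tables and peering configured as described earlier, or this rule will never see the traffic.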