Hi Everyone!

This post is a continuation of my how-to series on the Hub-Spoke network topology in Azure. Over time, I will update this page with links to the individual posts:

Connect an on-premises network to Microsoft Azure - Part 1

Connect an on-premises network to Microsoft Azure - Part 2

Implementing Hub-Spoke network topology in Azure - Part 1

Implementing Hub-Spoke network topology in Azure - Part 2

This Post - Introducing Azure Firewall in Hub-Spoke network topology in Azure

Implementing Azure Firewall in Hub-Spoke network topology in Azure

Now we are able to connect the Hub and the Spokes, but there is no communication between the Spokes. In this post, we are going to see how intra-region Spoke-to-Spoke VNET routing works.

Ways to establish Spoke to Spoke communication

There are three ways Spokes can talk to each other. Each has its own pros and cons.

1. Leveraging ExpressRoute

If ExpressRoute is in use to provide connectivity from on-premises locations, we can leverage the ExpressRoute circuit to provide native Spoke-to-Spoke communication. Either a default route (0.0.0.0/0) or a summary route covering all the VNET address spaces in the region can be injected via BGP across ExpressRoute (a quick verification sketch follows the pros and cons below).

Pros

  • By advertising a default or summary route, we provide the ability for Spoke-to-Spoke routing natively within the Azure backbone via the MSEEs (Microsoft Enterprise Edge routers).
  • This traffic is specifically identified within Azure and does not trigger any VNET peering costs, so it is completely free to the customer.

Cons

  • Traffic is limited by the bandwidth of the ExpressRoute gateway SKU and by the latency of hair-pinning off the MSEEs, which sit in the peering location of your ExpressRoute circuit. For example, if you are using the Standard ExpressRoute gateway SKU, its bandwidth limit of 1 Gbps would also apply to Spoke-to-Spoke communications.
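If you go down this path, you can confirm that a Spoke VM actually receives the advertised default or summary route by inspecting the effective routes on its NIC. This is just a quick verification sketch; the resource group and NIC names are placeholders for your own environment.

    # List the effective routes on a Spoke VM's NIC. A 0.0.0.0/0 or summary route
    # with next hop type "VirtualNetworkGateway" indicates the route was learned
    # via BGP over the ExpressRoute circuit. (The VM must be running.)
    az network nic show-effective-route-table \
      --resource-group spoke1-rg \
      --name spoke1-vm-nic \
      --output table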

2. VNET Peering

We already know that two VNETs can communicate once peering is enabled between them. So, similar to how each Spoke is peered to the Hub, an additional VNET peering would be created between the two Spokes that require communication (a CLI sketch follows the pros and cons below).

Pros

  • Spokes are now directly connected via the Azure backbone and have the lowest latency path possible.
  • No bandwidth restrictions exist along the path, so hosts are only limited by the amount of data they can push.

Cons

  • As the number of Spokes grows, the main problem becomes scale. When multiple Spokes sit behind a hub and require connectivity, a full mesh of VNET peerings is needed to provide it.
  • The additional cost associated with VNET peering.
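As a rough sketch, direct Spoke-to-Spoke peering with the Azure CLI looks like the following. The VNET and resource group names are assumptions for illustration (both VNETs are assumed to be in the same resource group), and a full mesh means repeating this for every Spoke pair.

    # Peer spoke1 -> spoke2
    az network vnet peering create \
      --resource-group spokes-rg \
      --vnet-name spoke1-vnet \
      --name spoke1-to-spoke2 \
      --remote-vnet spoke2-vnet \
      --allow-vnet-access

    # Peering is directional, so create the reverse link as well
    az network vnet peering create \
      --resource-group spokes-rg \
      --vnet-name spoke2-vnet \
      --name spoke2-to-spoke1 \
      --remote-vnet spoke1-vnet \
      --allow-vnet-access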

3. Leveraging a Hub NVA

In this approach, we deploy an NVA (Network Virtual Appliance) of our choosing into the Hub and define static UDRs (user defined routes) in each Spoke that route traffic to the NVA in order to reach the other Spokes (a sketch of these UDRs follows the pros and cons below).

Pros

  • The granular control and inspection capabilities that an NVA brings. Spoke-to-Spoke traffic can now be fully inspected.
  • No longer need to worry about advertising a default or summary route, which may relieve some administrative overhead.
  • Slightly lower latency, as we are communicating through the Hub rather than through an MSEE. In reality, the improvement is fairly small, as the Azure backbone does not add much latency.
  • No bandwidth limitation from the ExpressRoute gateway, as it is no longer in the path.

Cons

  • Cost of deploying an NVA. Regardless of NVA chosen, there will be an additional cost for running the NVA and, typically, a throughput cost for the traffic traversing the NVA.
  • NVAs have bandwidth limitations of their own, which could become a bottleneck for this traffic. Understanding the specific throughput limits of the chosen NVA is critical when leveraging this method.
  • When leveraging Option 1, the Azure fabric recognizes this data path and does not apply charges for the traffic when it traverses the VNET peerings. If using an NVA, all traffic which traverses VNET peerings to reach the NVA will incur VNET peering costs.
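To illustrate the UDR side of this option, the sketch below creates a route table that sends traffic destined for the other Spoke to the NVA's private IP and attaches it to a Spoke subnet. The address prefixes, the NVA IP (10.0.1.4), the region, and all resource names are assumptions; substitute your own values and mirror the configuration in the other Spoke.

    # Route table for spoke1: send spoke2-bound traffic to the NVA in the hub
    az network route-table create \
      --resource-group spoke1-rg \
      --name spoke1-rt \
      --location eastus

    az network route-table route create \
      --resource-group spoke1-rg \
      --route-table-name spoke1-rt \
      --name to-spoke2 \
      --address-prefix 10.2.0.0/16 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address 10.0.1.4

    # Associate the route table with spoke1's workload subnet
    az network vnet subnet update \
      --resource-group spoke1-rg \
      --vnet-name spoke1-vnet \
      --name workload-subnet \
      --route-table spoke1-rt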

In this post, we are going to leverage a Hub NVA for Spoke-to-Spoke routing, and we are going to use Azure Firewall for this purpose.

What is Azure Firewall?

Azure Firewall is a cloud native network security service. It offers fully stateful network and application level traffic filtering for VNet resources, with built-in high availability and cloud scalability delivered as a service. You can protect your VNets by filtering outbound, inbound, spoke-to-spoke, VPN, and ExpressRoute traffic. Connectivity policy enforcement is supported across multiple VNets and Azure subscriptions.
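For context, a minimal Azure Firewall deployment into the Hub looks roughly like this with the Azure CLI. It assumes the hub VNET already contains a subnet named AzureFirewallSubnet; the resource names and region are placeholders, and the full implementation is covered in the next post.

    # The Azure Firewall commands live in a CLI extension
    az extension add --name azure-firewall

    # A Standard SKU public IP is required for the firewall's ip-config
    az network public-ip create \
      --resource-group hub-rg \
      --name hub-fw-pip \
      --sku Standard \
      --allocation-method Static

    az network firewall create \
      --resource-group hub-rg \
      --name hub-firewall \
      --location eastus

    # Bind the firewall to the hub VNET's AzureFirewallSubnet
    az network firewall ip-config create \
      --resource-group hub-rg \
      --firewall-name hub-firewall \
      --name fw-config \
      --public-ip-address hub-fw-pip \
      --vnet-name hub-vnet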

Why Azure Firewall?

  • Cost! As always: no upfront cost, no termination fees, pay only for what you use. Azure Firewall pricing includes a fixed hourly cost ($1.25/firewall/hour) and a variable per-GB-processed cost to support auto scaling. Per Microsoft's observation, most customers save 30% – 50% in comparison to an NVA deployment model.
  • You can use Azure Monitor to centrally log all events. You can archive the logs to a storage account, stream events to your Event Hub, or send them to Log Analytics or the security information and event management (SIEM) product of your choice (see the sketch below).
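As a sketch of that logging setup, the command below wires the firewall's rule logs to a Log Analytics workspace through a diagnostic setting. The firewall resource ID and workspace ID are placeholders; you could equally point the logs at a storage account or an Event Hub.

    # Send Azure Firewall network and application rule logs to Log Analytics
    az monitor diagnostic-settings create \
      --name fw-diagnostics \
      --resource "/subscriptions/<sub-id>/resourceGroups/hub-rg/providers/Microsoft.Network/azureFirewalls/hub-firewall" \
      --workspace "<log-analytics-workspace-resource-id>" \
      --logs '[{"category":"AzureFirewallNetworkRule","enabled":true},{"category":"AzureFirewallApplicationRule","enabled":true}]'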

In my next post, I will start implementing Azure Firewall and show how it fits into Spoke-to-Spoke routing using user defined routes (UDRs).