I was recently involved in deploying a Watchguard network virtual appliance (NVA) to Azure, something I had last done several years ago. Back then it was a very basic setup and everything, including the NVA itself, was deployed to a single Azure virtual network.
Nowadays, most deployments call for a hub-and-spoke network topology: the firewall/NVA is deployed to the hub virtual network, and VNET peering connects the hub to your spoke virtual network(s), where most, if not all, of your other network resources are located.
This particular deployment was fairly straightforward: no BGP requirements or anything like that, just a simple hub and single-spoke deployment with user defined routes (UDRs) to send traffic via the NVA.
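For reference, the Azure side of that UDR setup looks roughly like the sketch below. The resource group, route table, subnet names, and address ranges are placeholder assumptions for illustration, not the actual deployment values:

```shell
# Sketch only: names and addresses below are hypothetical placeholders.
# Create a route table for the spoke subnet(s).
az network route-table create \
  --resource-group rg-network \
  --name rt-spoke

# Send all outbound traffic from the spoke via the NVA's internal interface.
az network route-table route create \
  --resource-group rg-network \
  --route-table-name rt-spoke \
  --name default-via-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.10.1.4

# Associate the route table with the spoke workload subnet.
az network vnet subnet update \
  --resource-group rg-network \
  --vnet-name vnet-spoke \
  --name snet-workloads \
  --route-table rt-spoke
```

The key detail is the `VirtualAppliance` next-hop type, which is what tells Azure to hand the packets to the NVA rather than routing them natively.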
I completed all of the standard setup and got the routing configured pretty quickly, and initially everything looked OK. But I soon realised I was only getting one-way connectivity, from the NVA to the spoke resources. Internal network and Internet-bound traffic from the spoke network was not getting through.
A Routing Issue
It was pretty clear that this was a routing issue, and a combination of Azure Network Watcher tools and the NVA itself quickly revealed the cause.
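Network Watcher's next-hop check is a useful first step for this kind of problem, as it confirms whether Azure is actually steering the spoke's packets at the NVA. The resource names and IPs below are illustrative assumptions:

```shell
# Sketch only: resource names and IPs are hypothetical.
# Ask Azure which next hop a packet from a spoke VM would take towards
# an internal address; with the UDR in place, this should report the
# NVA's private IP as a VirtualAppliance next hop.
az network watcher show-next-hop \
  --resource-group rg-network \
  --vm vm-spoke-01 \
  --source-ip 10.20.1.4 \
  --dest-ip 10.10.1.10
```

If the next hop comes back as the NVA but traffic still fails, the drop is happening on the appliance itself, which is exactly what I found here.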
The packets were hitting the Watchguard NVA but getting dropped by an internal policy due to IP spoofing.
The marketplace deployment had created an external and an internal “trusted” subnet on the virtual network where the NVA was deployed. Any other network address spaces that you decide to connect after that are not known or trusted by the NVA by default. This is a good thing from a security perspective.
A little more research led me to the requirement to add a static route to the Watchguard NVA configuration, because the appliance was unable to complete a reverse route lookup for the spoke address space.
This seemed straightforward: just update the route table on the NVA itself, similar to how the Azure user defined route was already implemented on the virtual network subnets. In other words, provide the address space of the spoke VNET, with the IP address of the NVA's internal interface as the gateway.
Or so I thought. When I added this route, it immediately stopped traffic flowing in both directions between the hub VNET and the spoke VNET. Clearly my assumption was wrong.
I won’t bore you with all the troubleshooting steps I took here but the solution in the end was quite simple.
You do need to create a static route, but the gateway address for that route is not the private IP address of the NVA's internal interface.
Instead you have to use the default gateway address of the subnet the internal interface is on.
In my case, the internal interface IP address was 10.10.1.4, which meant the default gateway address was 10.10.1.1, and this is the address you need to provide as the static route gateway in the NVA.
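To make the derivation concrete, here is a small Python sketch using only the standard library. It assumes the internal "trusted" subnet in this deployment was 10.10.1.0/24 (consistent with the interface address above, though the prefix length is my assumption): Azure's default gateway is the first address after the network address.

```python
import ipaddress


def azure_default_gateway(subnet: str) -> str:
    """Return the address Azure reserves as the default gateway for a
    subnet: the first address after the network address."""
    net = ipaddress.ip_network(subnet)
    return str(net.network_address + 1)


# Assumed internal ("trusted") subnet from the worked example above.
internal_subnet = "10.10.1.0/24"
print(azure_default_gateway(internal_subnet))  # 10.10.1.1
```

Note this is the first address of the subnet's range, not always literally `.1` — for a subnet such as 10.10.1.128/25 the gateway would be 10.10.1.129.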
Note: Azure reserves the first few addresses in every subnet: the first usable address (x.x.x.1 in a subnet starting at x.x.x.0) is always reserved as the default gateway.
Once this route was added to the Watchguard NVA, the traffic started flowing in both directions pretty quickly without any further changes required.
I wrote this post to help others who hit this scenario. From some research I did afterwards, the same routing logic seems to apply to most, if not all, other vendors' NVAs as well.
Frustratingly, even now in 2023, the documentation from some of these vendors is still very lacking when it comes to Azure-based deployments, especially once you go beyond the basic "out of the box" deployment you get from the Azure Marketplace.
Official vendor support channels are often quite limited in how much they can help you here too.
This presents a strong argument for cloud-native firewalls, especially if the network architecture is in any way complicated and/or you want to maintain your firewall configuration as infrastructure as code.