When I first set up my server, the best solution I could find to route my Hyper-V guests from the private network out to the internet-facing IP addresses was ICS (Internet Connection Sharing). It works okay, but after every reboot I have to disable ICS and then re-enable it on the host; otherwise, it simply doesn't work. Not ideal, but no big deal for a process I'm very hands-on about. I thought Microsoft would fix it one of these years, but no such luck.

Now that I have almost half a million IP addresses in the firewall, turning ICS off and on takes tens of minutes and even throws errors you wouldn't be able to interpret if you didn't know it was still churning through all those firewall addresses in the background (and sometimes it doesn't work and I have to do it again). My Hyper-V guests are not mission critical, so I've been fairly reluctant to mess with it: I'm nowhere near my server's colocation datacenter, and it's always a little scary messing with the NICs remotely. However, the time has come when that solution is just not acceptable any more, and it's always haunted the back of my mind a bit since I tend to automate everything possible.

The other source of reluctance is that I have multiple NICs teamed, with the Microsoft Multiplexor driver on top of that, so it seems like magic that any of it works. I like simplicity too: the more steps in a process, the more chances for failure. That setup provides redundancy, though, so it's a risk-versus-reward situation.

I still needed a virtual switch to run my guest OSes, and I already had the internal virtual switch from ICS, so I simply gave that switch a static IP address and then executed the following command in PowerShell:

New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix
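For reference, here's the full command with an example subnet filled in (the 192.168.100.0/24 range is an assumption for illustration; substitute your own internal range):

```powershell
# Create a NAT object covering the internal subnet (example range; adjust to taste).
# The prefix must match the subnet your guests and the vEthernet gateway address live on.
New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix "192.168.100.0/24"
```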

You're only allowed to have one NAT network. I have no idea what happens when you run the previous command multiple times for different subnets, and I suppose it's possible you could forget over time, so it's best to look first. There is no GUI for New-NetNat that I found, so to view your current configuration:

Get-NetNat
Name                             : NATNetwork
ExternalIPInterfaceAddressPrefix :
InternalIPInterfaceAddressPrefix :
IcmpQueryTimeout                 : 30
TcpEstablishedConnectionTimeout  : 1800
TcpTransientConnectionTimeout    : 120
TcpFilteringBehavior             : AddressDependentFiltering
UdpFilteringBehavior             : AddressDependentFiltering
UdpIdleSessionTimeout            : 120
UdpInboundRefresh                : False
Store                            : Local
Active                           : True

If nothing shows up after typing that command, you have no Natting configured. However, if you need to remove the Natting for some reason, type:

Remove-NetNat -Name "NATNetwork"
You may need to restart the host after executing the previous remove command; it's recommended regardless.

If you're setting all this up from scratch, you can create an internal switch through the Virtual Switch Manager or type the following commands:

New-VMSwitch -SwitchName "NATSwitch" -SwitchType Internal

New-NetIPAddress -IPAddress -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"
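Put together, a from-scratch setup might look like this (the 192.168.100.x addressing is an example, not my actual subnet):

```powershell
# 1. Create the internal virtual switch the guests will attach to.
New-VMSwitch -SwitchName "NATSwitch" -SwitchType Internal

# 2. Give the host's side of the switch the gateway address for the subnet.
New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"

# 3. NAT the whole subnet out the host's external interface.
New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix "192.168.100.0/24"
```

Guests then take static addresses in that subnet with the host's vEthernet address as their gateway (or you run a DHCP server on the subnet).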

I found the solution on Petri.com. I haven't been on that site in probably over a decade, so it's good to see it's still around.
Next project is to manage those firewalled IP addresses a little better. Considering the server still responds in under a second while parsing all traffic through half a million firewalled IP addresses, that's not too shabby. Maybe I'll post some IDS/IPS C# code I wrote at some point, though...

Update: This has been working great for months now. However, in the beginning I noticed that after each reboot I would lose internet connectivity from the Hyper-V guests. Upon investigation, I found that after each reboot something was adding the old ICS address back to the virtual switch. Upon removing the 137 address, it would all work again; after the next reboot, the issue was back. Rinse, repeat.

After about the third time this happened, I went registry hunting and found lists of addresses on at least two different NICs in the registry that weren't showing up in the GUI. Unfortunately, I didn't save or remember which HKLM entries these were, but they're very easy to find by searching the registry for the 137 address and then seeing a 50+ long list of these same addresses.
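I can't vouch for the exact keys since I didn't note them down, but statically registered per-NIC addresses generally live under the Tcpip Interfaces key, so a sketch of hunting for stray addresses might look like this (matching on the 137 subnet is the assumption here):

```powershell
# Scan each NIC's interface key for statically registered addresses containing ".137."
$base = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"
Get-ChildItem $base | ForEach-Object {
    # IPAddress is a REG_MULTI_SZ of static IPs, when present.
    $addrs = (Get-ItemProperty $_.PSPath).IPAddress
    if ($addrs | Where-Object { $_ -like "*.137.*" }) {
        Write-Output "$($_.PSChildName): $($addrs -join ', ')"
    }
}
```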

After removing all these orphaned 137.1 addresses, everything works as it should after a reboot. Otherwise, at one removal per reboot, I probably would have needed at least 100 more reboots to clear all the 137 addresses.

Therefore, it appears that every time I enabled ICS (sometimes more than once after a reboot, because there were so many firewall addresses the GUI would time out), it would add the 137 address to a few places in the registry and never remove it. It also appears ICS adds a set of firewall rules every time you enable it, so there were hundreds of orphaned firewall rules that needed removing as well.
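Cleaning those up can be scripted. This is a generic sketch, not my exact cleanup: it finds rules whose display names are duplicated (as repeated ICS enables leave behind) and removes all but one copy, with `-WhatIf` left in so nothing is actually deleted until you review the list:

```powershell
# Group firewall rules by display name and flag the duplicated sets.
$dupes = Get-NetFirewallRule |
    Group-Object DisplayName |
    Where-Object { $_.Count -gt 1 }

foreach ($group in $dupes) {
    # Keep the first rule, remove the extra copies. Drop -WhatIf to commit.
    $group.Group | Select-Object -Skip 1 | Remove-NetFirewallRule -WhatIf
}
```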

Lastly, I finally wrote the code to manage the firewall addresses and went from 580k addresses down to about 37k. The performance increase is noticeable, but the difference is like comparing one finger snap to another: it just feels more responsive, yet it's hard to put a number on it that would be more than milliseconds.
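Once the addresses are consolidated, the update itself is simple. Assuming the blocks live in a single rule (the rule name and file path here are hypothetical, not my actual setup), refreshing the list is one call:

```powershell
# Replace the remote-address list on a consolidated block rule in one shot.
# "IDS Block List" is a made-up rule name for illustration.
$blocked = Get-Content "C:\ids\blocklist.txt"   # one IP or CIDR range per line
Set-NetFirewallRule -DisplayName "IDS Block List" -RemoteAddress $blocked
```

One rule carrying a long address list scales far better than thousands of individual rules.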

I've always heard people talk about how the Windows firewall can't do this or that, or that it's problematic, and yet it seems to work great for me. It's definitely come a long way in 20 years. Processing more than 580,000 firewall entries (just for my IDS/IPS, not counting the default entries) while running the base OS, multiple Hyper-V guests, IIS, email, etc. is nothing to sneeze at, especially while remaining completely stable.

