A simple NSX-V load balancer lab


Recently I had to do a quick demo of the NSX-V logical load balancer. Setting up the NSX components in a small lab for such a demo is a pretty straightforward process. I thought I’d walk you through setting this up in case you want to play around with the logical load balancer or any of the other NSX components yourself.

The NSX-V logical load balancer in a nutshell

The NSX-V logical load balancer provides high availability and distributes network traffic (L4 and L7) among multiple servers. Incoming requests are evenly distributed among backend servers that are grouped together in a pool.

It comes with configurable service monitors that perform health checking on backend servers. If one of the backend servers becomes unavailable, the service monitor marks that server as “DOWN” and stops distributing requests to it. When the backend server is available again, the service monitor marks it as “UP” and incoming requests are once again distributed to the server.

The NSX-V logical load balancer has all the functionality you’d expect from a modern load balancer. There’s a variety of features and two deployment models (inline vs. one-armed), but they are beyond the scope of this simple lab. I recommend having a look at the Logical Load Balancer chapter of the NSX Administration Guide for detailed documentation on the NSX-V logical load balancer.

The Lab Environment

Let’s have a look at a small diagram showing the environment we’re going to build:

[Diagram: NSX-V load balancer lab topology]


Two virtual machines are connected to the “Web Tier” logical switch. The “Web Tier” network uses an IP subnet defined on the Distributed Logical Router (DLR). The DLR and the Edge Services Gateway (ESG) connect via the “Transit” logical switch, which uses a tiny transit subnet. The ESG also acts as the gateway for the virtual network through its connection to the physical network.

The logical load balancer is a component of the ESG. Its job in this lab will be to distribute incoming web requests between the two web servers.

For this guide I assume NSX-V 6.4 is installed and configured and that your vSphere cluster is prepared for VXLAN.
We need to reserve four IP addresses on the physical network’s subnet to be used by the ESG. For this lab we also need Internet access from the physical network as well as access to a DNS server.

We don’t need a lot of resources for this lab. A single physical ESXi 6.5/6.7 host running a nested vSphere environment will do the job.

Building the NSX infrastructure

We start by creating the two logical switches “Web Tier” and “Transit”. The default settings for the switches are fine for this lab:

[Screenshot: creating the “Web Tier” and “Transit” logical switches]

Next we deploy the distributed logical router:

[Screenshot: deploying the distributed logical router]

At the “Configure interfaces” step we create and configure two interfaces: An uplink interface we’ll call “2ESG” which connects to the “Transit” logical switch and an internal interface that we’ll call “2WebTier” which connects to the “Web Tier” logical switch. We configure the interfaces using the IP addresses from the diagram above:

[Screenshot: DLR interface configuration]

The DLR’s default gateway is the IP address of the ESG’s internal interface, accessed via the “2ESG” interface:

[Screenshot: DLR default gateway configuration]

Once the DLR is deployed we go to its settings and configure DHCP relay. The DHCP relay server in this lab is the IP address of the ESG’s internal interface (we’ll configure a DHCP server on the ESG in a moment). We also need to enable a DHCP relay agent on the DLR’s “2WebTier” interface:

[Screenshot: DHCP relay configuration on the DLR]

Now it’s time to deploy the Edge Services Gateway appliance:

[Screenshot: deploying the Edge Services Gateway]

We choose a compact appliance size and configure the ESG with two interfaces: an uplink interface we call “2Physical”, which connects to a VLAN-backed port group on the distributed switch, and an internal interface we call “2DLR”, which connects to the “Transit” logical switch.

We’re going to configure four IP addresses on the “2Physical” interface: One primary IP address and three secondaries. One of the secondaries will be used as the VIP address of the load balancer. The other two will be used for NAT:
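As a sketch, the address plan for the “2Physical” interface looks like this; the labels are placeholders, since the actual addresses depend on your physical subnet:

```text
primary address      -> ESG primary / used by the SNAT rule
secondary address 1  -> DNAT to web-1
secondary address 2  -> DNAT to web-2
secondary address 3  -> load balancer VIP
```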

[Screenshot: ESG interface configuration]

The default gateway for the ESG resides on the physical network and is reached using the “2Physical” interface:

[Screenshot: ESG default gateway configuration]

Now let’s set up the DHCP server on the ESG. We do this under the “DHCP” tab of the ESG: configure a DHCP pool and enable the DHCP service. Don’t forget to configure a primary name server; the “Web Tier” servers need one.

[Screenshot: DHCP pool configuration on the ESG]

Finally we need to let the ESG know how to reach the “Web Tier” subnet. We do this by adding a static route under “Routing”:

[Screenshot: static route to the “Web Tier” subnet]

Now the ESG knows that it should use the DLR to reach the “Web Tier” subnet.

Deploying the Virtual Machines

Now that the NSX infrastructure is in place we’ll continue with the deployment of our two virtual machines in the “Web Tier” VXLAN.

In this lab we use VMware’s Photon OS which is an open source minimal Linux container host optimized for vSphere. Download the OVA for vSphere 6.5 or newer and deploy two virtual machines from it. Make sure to connect both to the “Web Tier” logical switch.

[Screenshot: deploying the Photon OS virtual machines]

Once the VMs have booted, the DHCP server on the ESG should have assigned them IP addresses. Take note of the IPv4 addresses, as we need them for our NAT rules:

[Screenshot: DHCP-assigned IP addresses of the VMs]
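You can also read the assigned address straight from the guest. A minimal sketch, assuming the Photon OS default interface name of eth0:

```shell
# Print the IPv4 address the ESG's DHCP server handed to an interface.
# Adjust the interface name if your VM uses something other than eth0.
get_ipv4() {
  ip -4 addr show dev "$1" | awk '/inet /{sub(/\/.*/, "", $2); print $2}'
}
# Example: get_ipv4 eth0   -> prints the interface's IPv4 address
```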

Configuring NAT

NAT rules are defined on the ESG under the “NAT” tab. First of all, we want to provide Internet access to our “Web Tier” virtual machines so we can patch the OS and pull the NGINX image. For this we create a source NAT (SNAT) rule that looks like this:

[Screenshot: SNAT rule]

The primary IP address of the ESG’s “2Physical” interface is used to represent any IP address from the “Web Tier” subnet on the physical network. Don’t worry about the protocol setting for this lab.

As we also want to SSH into our virtual machines from the physical network, we need to create two destination NAT (DNAT) rules. One for each server. For my web-1 virtual machine the DNAT rule looks like this:

[Screenshot: DNAT rule for web-1]

One of the secondary IP addresses of the ESG’s “2Physical” interface is translated to web-1’s IP address.
Don’t forget to create one more DNAT rule for the web-2 virtual machine, using another of the secondary IP addresses of the ESG’s “2Physical” interface.

Installing the Web Servers

With the NAT rules in place we should be able to SSH into our virtual machines from the physical network:

ssh root@<vm-dnat-ip>

The default root password on the Photon OS OVA is “changeme” which we’ll have to change after first login.

We can now test if our SNAT rule is working by querying and installing available OS updates. Do this on both machines:

tdnf update

When it’s done updating, reboot the servers:

reboot
After reboot, SSH back into the virtual machines to enable and start the Docker engine:

systemctl enable docker
systemctl start docker

Next, pull the NGINX Docker image:

docker pull nginx

We want the NGINX server to use an “index.html” that is stored outside of the Docker container. Create an “index.html” on each of the servers:

mkdir -p ~/docker-nginx/html
vi ~/docker-nginx/html/index.html

Copy and paste this into index.html on web-1:

<h1>This is web-1</h1>

And paste this into index.html on web-2:

<h1>This is web-2</h1>
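If you’d rather skip vi, the same files can be created non-interactively with a heredoc. A sketch for web-1; change the heading text when running it on web-2:

```shell
# Create the html directory and write the page in one step (web-1 shown;
# swap the heading text for web-2).
mkdir -p ~/docker-nginx/html
cat > ~/docker-nginx/html/index.html <<'EOF'
<h1>This is web-1</h1>
EOF
```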

Set the following permissions on the html directory (never do this in production):

chmod -R 755 ~/docker-nginx

We can now run the container. On each of the virtual machines issue the following command:

docker run --name docker-nginx -p 80:80 -d -v ~/docker-nginx/html:/usr/share/nginx/html nginx

With the containers up and running we should be able to access the web servers from the physical network. Browse to each server’s DNAT address and you should hit its “index.html”:

[Screenshot: web-1’s index.html in a browser]

Make sure this works on both web servers before moving on.

Configuring the Logical Load Balancer

The web servers are up and running. It’s time to enable high availability and start load balancing.

We start by defining a new server pool we’ll call “web-pool”. On the ESG click the “Load Balancer” tab and choose “Pools”. Add a new pool:

[Screenshot: adding the “web-pool” server pool]

Add our two virtual machines to the pool. Notice that you can specify vCenter/NSX objects (which require VMware Tools or ARP/DHCP Snooping) as well as IP addresses. The Photon OS virtual machines are running open-vm-tools meaning we can pick virtual machine objects for these servers.

The other thing we need to specify here is the service monitor. Choose “default_http_monitor”, which by default executes an “HTTP GET” against the servers in the pool every 5 seconds.
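To make the monitor’s mechanics concrete, here is a rough shell approximation of what an HTTP service monitor does; this is a sketch, not the monitor’s actual implementation:

```shell
# Roughly what an HTTP health check does: GET the pool member and treat
# a 2xx/3xx response code as healthy, anything else as down.
health_check() {
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$1/")
  case "$code" in
    2*|3*) echo "UP ($code)" ;;
    *)     echo "DOWN ($code)" ;;
  esac
}
# Example: health_check <pool-member-ip>
```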

Head over to “Virtual Servers”. Here we configure our virtual IP (VIP) and tie it to our server pool:

[Screenshot: virtual server configuration]

Give the virtual server a name and enter the last available secondary IP address of the ESG’s “2Physical” interface. Select the pool we just created as the default pool.

Finally enable the load balancer service under “Global Configuration”.

Testing the Logical Load Balancer

It’s time to test the load balancer. Open a web browser and navigate to http://VIP_IP. You should now hit one of the web servers’ “index.html”:

[Screenshot: response from one of the web servers]

Now reload the page in your browser.

[Screenshot: response after reloading the page]

Voilà! The load balancer sent the request to the next available server in the pool, which in my case was web-1.

Every time you reload the web page you will end up on the next available server in the pool. This is the intended behavior when using the Round Robin load balancing algorithm.
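Reloading can also be scripted; a small loop against the VIP makes the round-robin pattern easy to see. A sketch, assuming you pass your own VIP URL:

```shell
# Fetch the VIP n times (default 4) and print each response body, so the
# alternating web-1/web-2 answers of round robin are visible at a glance.
probe_vip() {
  url="$1"; n="${2:-4}"; i=1
  while [ "$i" -le "$n" ]; do
    curl -s "$url"
    i=$((i + 1))
  done
}
# Example: probe_vip http://VIP_IP 6
```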

Let’s stop the NGINX container on one of the virtual machines:

docker stop docker-nginx

Now reload the web page a couple of times. Instead of timing out on every other request, the load balancer keeps sending all requests to the only healthy server left in the pool.

Start the NGINX container again:

docker start docker-nginx

Keep reloading the web page in your browser and you’ll notice that it only takes a few seconds before the load balancer picks the server up again and starts sending requests to it.


NSX-V, a key component of the VMware vSphere SDDC, delivers a very competent load balancer that is easy to install and configure.
In this short exercise we kept things really simple. There are more things to consider in a production environment, but building a lab like this hopefully gives you some idea of the concepts and components involved in configuring an NSX-V logical load balancer.
