NSX-T Lab – Part 3

Welcome back! I’m in the middle of installing NSX-T in my vSphere lab environment. In part one I installed NSX Manager, and in part two I deployed the NSX Controller Cluster. Now it’s time to start working on what it’s all about: the data plane.

High-level overview

Setting up a complete NSX-T data plane involves installing and configuring several components. We have East-West distributed routing, North-South centralized routing, and security. Then there are the additional services like load balancing, NAT, DHCP and partner integrations.

The order in which you set things up depends primarily on what you’re trying to achieve. I noticed that different documents and guides also use different approaches.

So, I put together bits and pieces from different sources and came up with the following high-level plan for my NSX-T data plane deployment:

  1. Prepare the vSphere distributed switch
  2. Configure transport zones
  3. Create logical switches
  4. Prepare & configure ESXi hosts
  5. Deploy & configure Edge VMs
  6. Configure routing

In this article I will prepare the distributed switch, add the transport zones, and create the logical switches for the uplinks. Just to keep things digestible 🙂

Preparing the vSphere Distributed Switch

The NSX Edge VMs that will be deployed later on connect to four different VLANs: management, transport (carrying the logical network traffic), and two uplink VLANs.
I already have a distributed port group that maps to the management VLAN, so I need to create the ones for transport and the uplinks.

In vCenter, navigate to Networking, right-click the distributed switch and select Distributed Port Group > New Distributed Port Group.
I’m calling this port group “pg-transport”.

On the next page I set “VLAN type” to “VLAN” and “VLAN ID” to “1614”. Click “Next” and finish the port group creation.

I repeat this process for the two port groups for the uplinks (VLAN 2711 and 2712). Once done it looks like this:

And the ESXi host’s network configuration now looks something like this:

Here I have the VDS with its 5 port groups as well as a pair of unused NICs which I will use for NSX networking later on.
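For those who prefer the command line over the vCenter GUI, the same port groups could also be created with govc. A quick sketch, assuming a VDS named “DSwitch01” and a connection configured through the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables (the switch and port group names are placeholders, adjust to your own environment):

# create the transport and uplink port groups on the distributed switch
govc dvs.portgroup.add -dvs DSwitch01 -type earlyBinding -vlan 1614 pg-transport
govc dvs.portgroup.add -dvs DSwitch01 -type earlyBinding -vlan 2711 pg-uplink01
govc dvs.portgroup.add -dvs DSwitch01 -type earlyBinding -vlan 2712 pg-uplink02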

Configuring NSX transport zones

Transport zones in NSX are containers that define the reach of the transport nodes. I briefly mentioned transport nodes in part two. Transport nodes are the hypervisor hosts and NSX Edges that participate in an NSX overlay. For hypervisor hosts, this means their VMs can communicate over NSX logical switches. For NSX Edges, it means they will have logical router uplinks and downlinks.

My lab environment will start out with three transport zones: uplink01, uplink02, and overlay01.

Log in to NSX Manager. In the menu at the left select Fabric > Transport Zones.

I start by creating a transport zone called “uplink01”. This is a VLAN transport zone that will be used by the NSX Edge later on:

I’m repeating this process to create the “uplink02” VLAN transport zone.

The third transport zone is an Overlay transport zone. It will be used by the host transport nodes and the NSX Edge:

The three transport zones listed:
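By the way, transport zones can also be created through the NSX-T REST API instead of the GUI. A minimal sketch with curl for the overlay transport zone, assuming an N-VDS/host switch name of “nvds-overlay” (the name is a placeholder for whatever you use in your lab):

# create the overlay transport zone via the NSX Manager API
curl -k -u admin -X POST https://nsxmanager.rainpole.local/api/v1/transport-zones \
  -H "Content-Type: application/json" \
  -d '{"display_name": "overlay01", "host_switch_name": "nvds-overlay", "transport_type": "OVERLAY"}'

The VLAN transport zones would be created the same way with "transport_type" set to "VLAN".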

Creating logical switches

Next I’ll create two logical switches. These two will facilitate the transit between NSX and the pfSense router. In NSX Manager choose Networking > Switching.


The first logical switch, “ls-uplink01”, I add to transport zone “uplink01” and configure with VLAN ID 2711:

I repeat this process to create a second logical switch called “ls-uplink02”. I add it to transport zone “uplink02” and configure it with VLAN ID 2712.
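For reference, the logical switches can be created through the API as well. A rough sketch for “ls-uplink01”, where the transport zone ID would first be looked up with a GET on /api/v1/transport-zones (the UUID below is a placeholder):

# create a VLAN-backed logical switch via the NSX Manager API
curl -k -u admin -X POST https://nsxmanager.rainpole.local/api/v1/logical-switches \
  -H "Content-Type: application/json" \
  -d '{"display_name": "ls-uplink01", "transport_zone_id": "<uplink01-tz-uuid>", "admin_state": "UP", "vlan": 2711}'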

Conclusion

Taking small steps, but getting there. I created the port groups on the vSphere distributed switch that the Edge VMs will need. I then went on to create the transport zones as well as two logical switches from NSX Manager.

In the next part I will continue with setting up the transport nodes: the ESXi hosts and the NSX Edge.

NSX-T Lab – Part 2

Welcome back! I’m in the process of installing NSX-T in my lab environment. So far I have deployed NSX Manager which is the central management plane component of the NSX-T platform.
Today I will continue the installation and add an NSX-T controller to the lab environment.

Control plane

The control plane in NSX is responsible for maintaining runtime state based on configuration from the management plane, providing topology information reported by the data plane, and pushing stateless configuration to the data plane.

With NSX the control plane is split into two parts: the central control plane (CCP), which runs on the controller cluster nodes, and the local control plane (LCP), which runs on the transport nodes. A transport node is basically a server participating in NSX-T networking.

Now that I’m mentioning these planes, here’s a good diagram showing the interaction between them:

Deploying the controller cluster

In a production NSX-T environment the controller cluster must have three members (controller nodes). The controller nodes are placed on three separate hypervisor hosts to avoid a single point of failure. In a lab environment like this a single controller node in the CCP is acceptable.

First I’m adding a new DNS record for the controller node. This is not required, but a good practice imho.

Name              IP address
nsxcontroller-01  172.16.11.57

An NSX-T controller node can be deployed in several ways. I added my vCenter system as a compute manager in part one, so perhaps the most convenient way is to deploy the controller node from the NSX Manager GUI. Other options include using the OVA package or the NSX Manager API.

I decided to deploy the controller node using the NSX Manager API. For this I need to prepare a small piece of JSON code that will be the body in the API call:

{
  "deployment_requests": [
    {
      "roles": ["CONTROLLER"],
      "form_factor": "SMALL",
      "user_settings": {
        "cli_password": "VMware1!",
        "root_password": "VMware1!"
      },
      "deployment_config": {
        "placement_type": "VsphereClusterNodeVMDeploymentConfig",
        "vc_id": "bc5e0012-662f-421c-a286-7b408302bf15",
        "management_network_id": "dvportgroup-44",
        "hostname": "nsxcontroller-01",
        "compute_id": "domain-c7",
        "storage_id": "datastore-41",
        "default_gateway_addresses": [
          "172.16.11.253"
        ],
        "management_port_subnets": [
          {
            "ip_addresses": [
              "172.16.11.57"
            ],
            "prefix_length": "24"
          }
        ]
      }
    }
  ],
  "clustering_config": {
    "clustering_type": "ControlClusteringConfig",
    "shared_secret": "VMware1!",
    "join_to_existing_cluster": false
  }
}

This code instructs the NSX Manager API to deploy one NSX-T controller node with a small form factor. Most things in the code above are pretty self-explanatory. I fetched the values for “management_network_id”, “compute_id”, and “storage_id” from the vCenter managed object browser (https://vcenter.rainpole.local/mob). The “vc_id” is found in NSX Manager under “Fabric” > “Compute Managers” under the “ID” column:

I use Postman to send the REST POST to “https://nsxmanager.rainpole.local/api/v1/cluster/nodes/deployments”:
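For anyone without Postman at hand, the same call can be made with curl. A sketch, assuming the JSON body above is saved as controller-deployment.json:

# POST the deployment request to the NSX Manager API (-k because of the self-signed certificate)
curl -k -u admin -X POST https://nsxmanager.rainpole.local/api/v1/cluster/nodes/deployments \
  -H "Content-Type: application/json" \
  -d @controller-deployment.json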

This initiates the controller node deployment which can be tracked in the NSX Manager GUI under “System” > “Components”:

Once the deployment is finished it should look something like this:

Both cluster connectivity and manager connectivity are showing “Up”. I run the following command from the controller node CLI to verify the status:

get control-cluster status

To verify that NSX Manager knows about this new controller node I run the following command from the NSX Manager CLI:

get nodes

That looks good too. Another useful command is:

get management-cluster status

It shows the status of the management cluster and the control cluster.

I also configure DNS and NTP on the new controller node from the controller node CLI:

set name-servers 172.16.11.10
set ntp-server ntp.rainpole.local

Conclusion

I have the controller “cluster” up and running. With only one controller node it isn’t much of a cluster, but it will do for lab purposes.

The management plane and control plane are now operational. In the next part I will continue with installing the data plane components.

NSX-T Lab – Part 1

Learning by doing is my preferred method of getting to know a product or technology I haven’t worked with before.

Take NSX-T for example. On the surface NSX-T seems to be just another flavour of network virtualisation technology by VMware. But having a closer look I quickly realised NSX-T is quite a different beast compared to NSX-V.

So, in order to learn more about the product I’m going to install NSX-T in a lab environment. In the coming blog posts you can follow along as I’m setting this up.

In this first part I’ll quickly introduce my lab environment and then continue with the deployment of the management plane main component: NSX Manager.

Lab environment

My small NSX-T lab will use the following components:

  • pfSense router with OpenBGPD package
  • Windows Server 2016 (AD/DNS)
  • NetApp NFS storage
  • vCenter 6.7
  • ESXi 6.7
  • NSX-T 2.3

The blog post series is about setting up NSX-T so I deployed some things in advance. A pfSense router, vCenter, three ESXi hosts in a cluster, and a Windows AD/DNS server have been installed and configured.

Here is a simple diagram of the environment the way it looks right now.

The ESXi hosts are equipped with four NICs. Two of them are used by vSphere and the other two are unassigned and will be used by NSX-T networking.

I configured some VLANs on the pfSense router which are trunked down to the ESXi hosts. The pfSense router has the OpenBGPD package installed so that I can do some dynamic routing later on.

I left out the details of the non-nested infrastructure. The three ESXi hosts are nested (running as VMs). vCenter, the Windows server, and pfSense are running outside of the nested environment.

Networking

I configured the following networks:

VLAN ID   VLAN Function   CIDR             Gateway
1611      Management      172.16.11.0/24   172.16.11.253
1612      vMotion         172.16.12.0/24   172.16.12.253
1614      Transport       172.16.14.0/24   172.16.14.253
2711      Uplink01        172.27.11.0/24   172.27.11.1
2712      Uplink02        172.27.12.0/24   172.27.12.1

Yes, /24 subnets all the way. This is a lab environment so I don’t really have to worry about wasting IP addresses. In a production environment I would most likely be using smaller subnets.

DNS and hostnames

I’m using the “rainpole.local” DNS domain name for this lab and created the following DNS records in advance:

Name        IP address
dc01-0      172.16.11.10
esxi01      172.16.11.51
esxi02      172.16.11.52
esxi03      172.16.11.53
vcenter     172.16.11.50
ntp         172.16.11.10
nsxmanager  172.16.11.56

More DNS records will probably be added during the deployment.
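A quick sanity check against the lab DNS server never hurts. For example:

nslookup nsxmanager.rainpole.local 172.16.11.10
nslookup vcenter.rainpole.local 172.16.11.10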

Installing NSX Manager

The first thing I need to do is install NSX Manager. For vSphere the NSX Manager installation is done by deploying an OVA package.

The OVA deployment is a straightforward process. I choose a small deployment configuration as this is for lab purposes:

I pick the management network (VLAN 1611) as the destination network for “Network 1” and IPv4 as the protocol:

I need to fill in the required details on the “Customize template” page. I have selected “nsx-manager” as the “Rolename”.

Once the virtual appliance is deployed it needs to be started. It will take some minutes to boot and get things up and running. Once ready I can log in to the NSX Manager web interface at https://nsxmanager.rainpole.local using the previously configured admin account:

The first time I log in I need to accept the license agreement and answer the question about the CEIP program. I then end up at the “NSX Overview” page:

One thing I notice immediately is how elegant the NSX Manager interface is. I really like the layout of the HTML5 interface with a menu on the left side that pops out when moving the mouse cursor over it.

I took some time to look around in the web interface just to familiarise myself. After all I’ll be spending quite some time in the web interface during the NSX-T deployment.

The dashboard shows a status and health overview of the platform and its components. It’s pretty empty right now as you see 🙂

Adding a compute manager

Although there is no relationship between NSX Manager and vCenter like there is in NSX-V, it is possible to add vCenter systems as so-called compute managers. This is not required, but having this link in place makes deploying NSX-T components on vSphere more convenient.

In the menu on the left I choose “Fabric” > “Compute Managers” and click “+Add”. I fill out the details of my vCenter system and then click on the “Add” button:

After a short registration process vCenter is added to NSX Manager:
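The registration can also be checked from the API if you prefer. A simple call that lists the registered compute managers:

# list registered compute managers and their connection status
curl -k -u admin https://nsxmanager.rainpole.local/api/v1/fabric/compute-managers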

Housekeeping

Under “System” I find some items that are of interest at this point:


Two things I would probably want to have a look at straight after deploying NSX Manager in a production environment are certificates and backup.

Certificates are managed under “Trust”. Default self-signed certificates often need to be replaced with ones that are signed by a trusted CA. This can partly be done here; an API call is actually needed to activate a newly imported certificate.

Backup can be configured entirely in the GUI. This is done under “Utilities” > “Backup”. Here we configure scheduling/interval and backup destination:

It is here I learn that there are three types of backup:

  • Cluster backup – includes the desired state of the virtual network
  • Node backup – includes the NSX Manager appliance configuration
  • Inventory backup – includes the set of ESX and KVM hosts and edges

Time synchronisation is important, and an NTP server can be configured during the NSX Manager deployment. If I want to change the NTP configuration after deployment I need to use the NSX Manager CLI or API. Using the CLI, the following command will configure “ntp.rainpole.local” as a time source:

set ntp-server ntp.rainpole.local 

Likewise, if I need to (re-)configure DNS this is done via the NSX Manager CLI or API. Configuring the Windows AD/DNS server in my lab as the DNS server is done with this CLI command:

set name-servers 172.16.11.10
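To double-check these settings afterwards, the matching get commands in the NSX Manager CLI should do the trick (assuming they are available in this release):

get name-servers
get ntp-server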

Conclusion

Installing and configuring NSX Manager in my lab environment was an easy process. The web interface looks really good with a nice layout. I’m a bit surprised to see that some of what I consider basic configuration can’t be done in the GUI (NTP, DNS, logging), but I’m sure this will be added in future releases of NSX Manager.

In the next part I’ll continue my NSX-T deployment journey and set up the control plane.

Cross-vCenter NSX – Part 3

Welcome back! In part two we deployed the data plane for the East-West traffic in our cross-vCenter NSX environment. In this part we continue with setting up the components for the North-South traffic.

Current state

A quick look at the current state of our cross-vCenter NSX lab environment.

The management plane and the control plane are operational and configured for cross-vCenter NSX. We have deployed the data plane components for East-West traffic; two universal logical switches and a universal distributed logical router.

NSX Edge

The NSX edge is responsible for the central on-ramp/off-ramp routing between the NSX logical networks (VXLANs) and the physical network. It’s a central function and is deployed as virtual appliances (edge services gateways) by NSX Manager.

Local egress

There are some design considerations surrounding the NSX edge. In a cross-vCenter NSX environment spanning multiple physical sites one consideration is of particular interest: should each site use a local edge for North-South egress (local egress), or should one site act as the central edge for the other sites?

You might remember from part two that we deployed the universal distributed router with local egress enabled. So what we’ll do is set up our fictional sites “DC-SE” and “DC-US” with active/active egress. This is actually a less common type of cross-vCenter NSX deployment as it introduces asymmetric traffic flows (traffic egresses at one site and ingresses at another), which need to be dealt with somehow.

Deployment

When local egress is a requirement we need to deploy some additional components. We need a UDLR control VM at each site. Each site will also use a separate transit universal logical switch.

So, let’s create the transit logical switches first and come back to the control VMs a little later.

Log in at the DC-SE’s vCenter and navigate to Networking and Security > Logical Switches. Click the “+ Add” button. Type a name for the logical switch. In my case I’ll call it “ULS Transit SE”. This logical switch will use the universal transport zone “UTZ” that we created in part one. Click “Add“. Repeat these steps to add the second transit universal logical switch. I’m calling this one “ULS Transit US”.

With the transit switches created we continue with the deployment of the ESG appliances. For this lab we’ll deploy just one ESG per site. At DC-SE navigate to Networking and Security > NSX Edges. Click the “+Add” button and choose “Edge Services Gateway”.

Give the ESG a name. In my lab it’s called “esg-se”.

Configure a user name and password. Enable SSH.

Configure the appliance VM deployment. A compact sized appliance will do for this lab.

Next we configure two interfaces on the ESG: one uplink and one internal. Starting with the uplink interface that we connect to a VLAN-backed distributed port group (VLAN 70 at my DC-SE site). This one connects the ESG with the pfSense router. I’m assigning IP address 10.0.70.2/24 to this interface (the pfSense router’s interface is configured with 10.0.70.1/24).

The internal interface connects the ESG with the “ULS Transit SE” logical switch. It is the link between the ESG and the UDLR. I’m assigning 192.168.100.1/29 to this interface.

We also configure a default gateway on “esg-se” which points to the next-hop router on the “physical” network. In my case this is 10.0.70.1 (pfSense router).

Review the configuration and click “Finish” to deploy the ESG appliance.

At the DC-US’s vCenter we basically repeat these steps for the ESG deployment over there. The unique settings for the ESG at DC-US in my lab are:

Name                             esg-us
Uplink interface IP (VLAN 700)   10.1.70.2/24
Internal interface connected to  ULS Transit US
Internal interface IP            192.168.101.1/29
Default gateway                  10.1.70.1

Universal Distributed Logical Router

We need to revisit the UDLR to deploy the additional UDLR control VM at DC-US as well as configure connectivity between the UDLR and the new transit logical switches.

Log in at vCenter in DC-US and navigate to Networking and Security > NSX Edges and click on the universal distributed router. Select Configure > Appliance Settings. Click on “Add Edge Appliance VM“.

Configure the placement parameters and deploy the VM.

Once the control VM has been deployed we head over to vCenter at DC-SE and navigate to Networking and Security > NSX Edges and open up the universal distributed router. Select Configure > Interfaces and add a new uplink interface that connects to the “ULS Transit SE” logical switch. I configure this interface with IP address 192.168.100.2/29.

Add a second uplink interface to the UDLR and connect it to the “ULS Transit US” logical switch. I configure this one with IP address 192.168.101.2/29.

With the UDLR uplinks configured we’ll do a quick connectivity test. Open an SSH session to the ESGs and ping the IP address of the UDLR uplink interface connected to that ESG.
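On esg-se, for example, something like this should get replies from the UDLR uplink on “ULS Transit SE”:

ping 192.168.100.2

And on esg-us, the UDLR uplink on “ULS Transit US”:

ping 192.168.101.2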

North-South routing

Now that we have a working L2 connection between the ESGs and the UDLR, we can focus on setting up the North-South routing in our cross-vCenter NSX environment.

In my lab I’ve configured an iBGP peering between the UDLR control VMs and their respective ESG appliance. I also configured an eBGP peering between the ESGs and the pfSense routers at each site.
I won’t go through setting up dynamic routing with BGP, but here’s a summary of the configuration that I used in my lab:

Property                 DC-SE                      DC-US
pfSense Local AS         65502                      65502
ESG default gateway IP   10.0.70.1                  10.1.70.1
ESG Local AS             65510                      65110
ESG BGP neighbors        10.0.70.1, 192.168.100.3   10.1.70.1, 192.168.101.3
ESG redistribution       connected                  connected
UDLR default gateway     192.168.100.1              192.168.101.1
UDLR Local AS            65510                      65510
UDLR protocol address    192.168.100.3              192.168.101.3
UDLR BGP neighbors       192.168.100.1              192.168.101.1
UDLR redistribution      connected                  connected

Setting up dynamic routing in cross-vCenter NSX with local egress can be quite a job. Even in a small lab like this we need to configure routing parameters in six places: pfSense x 2, ESG x 2, UDLR control VM x 2.
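Once the peerings are configured, the usual show commands on the ESG (and UDLR control VM) CLI help verify that the sessions are established and routes are actually being exchanged. For example:

show ip bgp neighbors
show ip route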

The big picture

Now that we have configured North-South routing let’s have another look at the environment from above.

Let’s do a simple test with traceroute and see if local egress works.
From AppServer-SE (192.168.0.20) we are egressing via “esg-se”:

From AppServer-US (192.168.0.22) we are egressing via “esg-us”:

A simple test, but local egress seems to work!

Conclusion

This concludes the blogpost series on cross-vCenter NSX. During these three posts we went through the process of setting up cross-vCenter NSX with local egress. We deployed and configured the different components and had a look at the data plane functions: universal East-West routing, universal distributed firewall and finally North-South routing in a cross-vCenter NSX environment.

All in all setting this up is a pretty straight forward process. Especially in a lab where we have control over the entire environment. 😉

Cross-vCenter NSX – Part 2

Welcome back! In part one we prepared the management and control plane for cross-vCenter NSX. In this part we’re going to deploy the data plane for the East-West traffic in a cross-vCenter NSX environment.

Universal logical switch

We start by deploying logical switching between our two fictional sites “DC-SE” and “DC-US”. To accomplish this we create a “stretched” logical switch. In cross-vCenter NSX terminology this is called a universal logical switch.

With cross-vCenter NSX, the vCenter system paired with the primary NSX Manager is the point of administration for NSX universal constructs. In my environment the vCenter at DC-SE is paired with the primary NSX Manager.

Log in at the DC-SE’s vCenter and navigate to Networking and Security > Logical Switches. Click the “+ Add” button. Type a name for the logical switch. In my lab I’ll call it “ULS App”. This logical switch will use the universal transport zone “UTZ” that was created in part one. Click “Add“.

By picking a universal transport zone for a logical switch, the logical switch itself becomes universal and will be synced to secondary NSX Managers.

Testing connectivity

We now have a logical switch that exists on both of the sites. The VXLAN tunneling protocol and the VTEPs are responsible for carrying the frames over layer 3 between the sites. If we now connect virtual machines on both sites to the same universal logical switch, we should have layer 2 connectivity between them.

Let’s test this by deploying a virtual machine at each site and connect them to the “ULS App” universal logical switch.
I like Photon OS for lab purposes so I’m going to deploy two of these using the OVA.

Deploy the first virtual machine in DC-SE.

I’ll call it “AppServer-SE”.

And connect it to the “ULS App” logical switch.

Repeat this process over at DC-US. In my lab that virtual machine is called “AppServer-US”.

The first time we boot Photon OS deployed via the OVA, we need to change the root password. The default password is “changeme”.

I also want to set a hostname to avoid any confusion later on. Reboot the virtual machines after this.

At this point let’s have a look at what the universal controller cluster knows about this logical segment. Start an SSH session to one of the NSX Managers and run the following command (replace “25000” with the VNI of your universal logical switch):

show logical-switch controller master vni 25000 mac

This looks good. The control plane has picked up the MAC addresses of both virtual machines. We can also see that the VTEPs of ESXi hosts on both sites are involved.

Do we have ARP entries?

show logical-switch controller master vni 25000 arp

Empty as expected. Let’s configure an IP address on the virtual machines.

AppServer-SE:

ifconfig eth0 192.168.0.10 netmask 255.255.255.0

AppServer-US:

ifconfig eth0 192.168.0.11 netmask 255.255.255.0

On both virtual machines we need to create iptables rules to allow ICMP:

iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT

And let’s see if the virtual machines can ping each other:
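From AppServer-SE, for example:

ping -c 3 192.168.0.11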

It works! Do we have ARP entries at the control plane now?

show logical-switch controller master vni 25000 arp

There they are!

So, we have verified L2 connectivity between the virtual machines which are located at different sites. Multi-site VXLAN in action!

Step 2 – Universal distributed logical router

East-West routing in a cross-vCenter NSX scenario is accomplished by deploying a universal distributed logical router. So let’s do that.

Log in at the DC-SE vCenter system and navigate to Networking and Security > NSX Edges. Click the “+Add” button and choose “Universal Distributed Logical Router”.

Give it a name like “UDLR-01”. Make sure you enable “Local Egress” which we’ll talk about in part three. Also tick the “Deploy Control VMs” box.

Set a password and enable SSH.

Configure the control VM deployment and choose a connection for the HA interface.

At this point we’ll just add one internal interface to the UDLR and attach it to the “ULS-App” logical switch that we created earlier. In my environment I call the interface “App-Tier” and assign it IP address 192.168.0.1 with a 24 prefix.

Disable “Configure Default Gateway” and complete the deployment wizard.

Once the UDLR is deployed we can test connectivity from one of the virtual machines by running a ping to the “App-Tier” interface IP address.

That should work as expected. Let’s add another universal logical switch to the environment so we can test some basic routing.

Once again in the vCenter system paired with the primary NSX Manager, navigate to Networking and Security > Logical Switches. Click the “+ Add” button. Type a name for the universal logical switch. I call it “ULS DB” in my environment. Once again choose “UTZ” as the transport zone. Click “Add”.

Once created select the new “ULS DB” switch and click on Actions > Connect Edge. Select the “UDLR-01” edge.

Give the interface a name. I call it “DB-Tier” and select “Internal” as the interface type. Make sure the connectivity status is set to “Connected“. Assign IP address 192.168.1.1 with a 24 prefix and click “Add“.

Deploy a new virtual machine at one of the sites and connect it to the “ULS DB” logical switch.

Once the virtual machine is deployed, set a hostname (mine is called “DBServer-SE”) and reboot. Next, configure it with IP address 192.168.1.10/24 and add a rule to iptables to allow ICMP. You should now be able to ping the DB-Tier interface of UDLR-01 (192.168.1.1):

Can we reach any of the virtual machines connected to the App Tier at this point? The UDLR is ready to route between the segments, but since none of the virtual machines have a default gateway configured yet, it won’t work. Let’s quickly fix this:

For the App server virtual machines:

route add default gw 192.168.0.1 eth0

For the DBServer virtual machine:

route add default gw 192.168.1.1 eth0

And now routing between the two segments should work:
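For example, pinging AppServer-SE from DBServer-SE:

ping -c 3 192.168.0.10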

We can also use the NSX traceflow utility to visualize the path of the ICMP packets. From any of the vCenter systems navigate to Networking and Security > Tools > Traceflow. Do a trace from the DBServer-SE to AppServer-US for example:

Step 3 – Universal distributed firewall

Now that we have cross-vCenter routing and switching working it’s time to look at security.

Just as with universal distributed logical routing and universal logical switching, we use universal objects for security when we want to propagate them cross-vCenter. It is worth noting that not all NSX security features are available for cross-vCenter NSX. The things that are not supported are:

  • Exclude list
  • SpoofGuard
  • Flow monitoring for aggregate flows
  • Network service insertion
  • Edge Firewall
  • Service Composer

Universal security groups can contain the following:

  • Universal IP Sets
  • Universal MAC Sets
  • Universal Security Groups
  • Universal Security Tags
  • Dynamic criteria

As you see with cross-vCenter NSX security we’re not able to use vCenter constructs when defining security policies. Remember there’s a 1:1 relationship between NSX Manager and vCenter. One NSX Manager has no clue about the vCenter objects of a vCenter system paired with another NSX Manager.

Let’s create a universal firewall rule for our newly deployed virtual machines. Log in at the DC-SE vCenter system and navigate to Networking and Security > Security > Firewall > General. We start by creating a new universal firewall section. Click on “Add Section“. Call it “Universal Rules” and make sure you enable “Universal Synchronization“.

Once we have a universal firewall section we can start creating universal firewall rules in it. Click on “Add Rule” so a new empty rule shows up in the universal section.

Click on “Enter rule name” and write “Allow MariaDB”. Edit the source field and in the “Specify Source” click on “Create new IP Set“. Name the IP set “App-Tier” and add the 192.168.0.0/24 CIDR.

Select “App-Tier” as the source for our “Allow MariaDB” rule:

Repeat this same procedure to specify the destination in the firewall rule. Call the IP set “DB-Tier” and specify the 192.168.1.0/24 CIDR.

Next we need to specify the service in this rule. Search for and add the “MySQL” service. Click “Save“.

The universal firewall rule should look like this now.

Click “Publish” to save and activate the changes.

Now let’s see if we can hit on the rule. We won’t actually install any MariaDB/MySQL server. Instead we’ll run a simple netcat listener on DBServer-SE and do a netcat connect from AppServer-US as a quick proof of concept.

First we need to allow traffic on TCP port 3306 through the iptables firewall on DBServer-SE:

iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

Next start a netcat listener on TCP port 3306:

nc -l -p 3306

Over at AppServer-US we can try to connect to port 3306 on DBServer-SE (192.168.1.10):

nc -zv 192.168.1.10 3306

With no “block” rule in our distributed firewall anywhere this is the expected result (of course). But let’s have a look at the stats for our MySQL firewall rule. This can be looked up in a couple of places: vCenter GUI, central CLI, syslog or on the ESXi host. Today I’ll look this up on the ESXi host. Log in to the ESXi host where AppServer-US is running.

We first need the name of the filter at IOChain slot 2 of the AppServer-US vNIC:

summarize-dvfilter | grep AppServer-U -A5

In my environment the filter’s name is “nic-2110503-eth0-vmware-sfw.2”

We use the “vsipioctl” command to see statistics for the rules in this filter:

vsipioctl getrules -f nic-2110503-eth0-vmware-sfw.2 -s

The ID for the MySQL firewall rule in my environment is “2147483648”. As we can see, the simple netcat connectivity test resulted in exactly 1 hit on the MySQL firewall rule we created earlier.

The ID of a firewall rule can be found at several places as well. The DFW GUI in vCenter is one of them.

Wrapping up

In this part we deployed the data plane components for facilitating East-West traffic in a cross-vCenter NSX environment: universal logical switching and universal distributed logical routing. We touched upon universal distributed firewalling as well.

In part three we will look at the components and configuration involved with the North-South traffic in a cross-vCenter NSX environment: The Edge tier.

Cross-vCenter NSX – Part 1

One of the scenarios where the NSX platform really shines is multi-site environments. Here NSX together with vSphere is the infrastructure that delivers on business requirements like workload mobility, resource pooling, and consistent security.

Since NSX version 6.2 we can roll out NSX in a vSphere environment managed by more than one vCenter system. This type of deployment is called cross-vCenter NSX.
With cross-vCenter NSX we are able to centrally deploy and manage networking and security constructs regardless of the management domain architecture.

In preparation for some assignments involving cross-vCenter NSX, I’ve been busy with a cross-vCenter NSX lab. I thought I’d do a little writeup in three parts on setting this up.

In this first post we’ll prepare the management and control plane for cross-vCenter NSX. In part 2 we’ll have a closer look at how to deploy the data plane in a cross-vCenter NSX environment.

The lab environment

The following components are the building blocks I used for this simple cross-vCenter NSX lab:

  • 8 x nested ESXi 6.7 U1 hosts
  • 2 x vCenter 6.7 U1 systems
  • vSAN storage
  • NSX 6.4.4
  • 2 x pfSense routers

Just so we spend time focusing on the relevant stuff I’ve done some preparation in advance.

I set up two fictional sites: DC-SE and DC-US. Each with its own, non-linked, vCenter server system, four ESXi hosts, vSAN storage, and a standalone NSX Manager.
The ESXi hosts are prepared for NSX (VIBs installed) and a segment ID pool and transport zone are configured. DC-SE is running a controller cluster.

Each site has a pfSense machine acting as the perimeter router. Static routing is set up so each site’s management, vMotion and VTEP subnets can reach each other. Both sites also have a VLAN for vSAN plus one for ESG uplink which will be used in part two.

VLAN             DC-SE          DC-US
Management       10.0.10.0/24   10.1.10.0/24
vMotion          10.0.20.0/24   10.1.20.0/24
vSAN             10.0.30.0/24   10.1.30.0/24
VXLAN transport  10.0.50.0/24   10.1.50.0/24
Uplink           10.0.70.0/24   10.1.70.0/24
High-level overview of the lab environment before cross-vCenter NSX is implemented

Please keep in mind that this is a simple lab environment design and in no way a design for a production environment. Have a look at VMware Validated Designs if you want to learn more about SDDC designs including NSX for production environments.

Step 1 – Assign primary role to NSX Manager

There’s a 1:1 relationship between vCenter server and NSX Manager. This is true when setting up cross-vCenter NSX as well, but here the NSX managers involved are assigned roles.
The NSX manager that should be running the controller cluster is assigned the primary role. Additional NSX Managers participating in cross-vCenter NSX (up to 7) are assigned the secondary role.

So let’s start by assigning the NSX Manager in DC-SE the primary role. In vCenter, go to Networking and Security > Installation and Upgrade > Management > NSX Managers. Select the NSX Manager and click on Actions > Assign Primary Role.

As you can see the role has changed from Standalone to Primary.

When we assign the primary role to an NSX Manager, its controller cluster automatically becomes the universal controller cluster. It is the one and only controller cluster in cross-vCenter NSX and provides control plane functionality (MAC, ARP, VTEP tables) for both the primary and secondary NSX Managers.

Step 2 – Configure Logical Network settings

While we’re at DC-SE we continue with the configuration of the logical network settings for cross-vCenter NSX.

We begin with defining a Universal Segment ID pool. These segment IDs (VNIs) are assigned to universal logical switches. Universal logical switches are logical switches that are synced to the secondary NSX Managers. We will look more at this in part two.

Go to Networking and Security > Installation and Upgrade > Logical Network Settings > VXLAN Settings > Segment IDs and click Edit.

Configure a unique range for the universal segment ID pool.

Create a universal transport zone and add CL01-SE

Next to VXLAN Settings we find Transport Zones. Click it and click Add to start adding a Universal Transport Zone.

Give it a name like “UTZ” and switch Universal Synchronization to On and add the CL01-SE vSphere cluster to the transport zone.

Step 3 – Assign secondary role to NSX Manager

Assigning the secondary role to the NSX Manager located in DC-US is done from the primary NSX Manager in DC-SE.

In vCenter, navigate to Networking and Security > Installation and Upgrade > Management > NSX Managers. Select the NSX Manager and click on Actions > Add Secondary Manager.

Here you enter the information of the NSX Manager at DC-US and click Add.

The NSX Manager at DC-US now has the secondary role. We can verify this by logging in to vCenter at DC-US and navigating to Networking and Security > Installation and Upgrade > Management > NSX Managers.

As you can see the NSX Manager now has the Secondary role.

Add CL01-US to the universal transport zone

While still logged in to vCenter at DC-US navigate to Networking and Security > Installation and Upgrade > Logical Network Settings > Transport Zones. The transport zone that we created over at the primary NSX Manager in DC-SE, shows up here too. Mark the transport zone and click on the “Connect Clusters” button. Add the CL01-US cluster to the transport zone.

Wrapping up

This completes the preparation of the management and control plane for our simple cross-vCenter NSX lab.

We started by assigning the primary role to the NSX Manager at DC-SE. By doing so we got ourselves a universal controller cluster. Next we configured the logical network settings necessary for cross-vCenter NSX. Finally, we paired the primary NSX Manager at DC-SE with the standalone NSX Manager at DC-US by assigning it the secondary role. Along the way we also added the vSphere clusters on both sites to the same universal transport zone.

In part 2 we will set up the data plane in our cross-vCenter NSX lab. We’ll have a look at how logical switching, distributed logical routing, and distributed security works in a cross-vCenter NSX environment.

3V0-643

3V0-643 is the exam code for the VMware Certified Advanced Professional 6 – Network Virtualization Deployment Exam. I recently passed this exam and thought I’d write a short blog post about it.

About the exam

The exam preparation guide for this exam contains the following description:
“The 3V0-643 exam tests candidates on their skills and abilities in implementation of a VMware NSX solution, including deployment, administration, optimization, and troubleshooting.”

As you might have guessed, this is a lab-based exam. Meaning you connect to a remote lab environment and work your way through a number of scenarios. You will be doing actual deployment, administration, optimization, and troubleshooting of a live VMware NSX platform.

Exam prerequisites

There is no course requirement for VCAP6-NV. That being said, to achieve the VCAP6-NV certification you must hold a valid VCP6-NV certification which does have a course requirement.

VMware recommends candidates take the VMware NSX: Troubleshooting and Operations [V6.2] course as preparation for this exam. I did the NSX version 6.4 edition of that course and, although the course itself was very valuable, I can’t say it was really relevant as preparation for the 3V0-643 exam.

Another VMware recommendation is that the candidate should have two years of experience deploying NSX. Experience is a good thing of course. I would say this depends very much on how much and in which way you work with NSX in your day-to-day job role. My two cents:

  • You can skip the recommended course, but be sure you know your NSX including basic NSX troubleshooting.
  • Experience with deploying and managing all components of NSX is a must. Remember that you can gain experience in a lab environment as well.

Exam preparation

If you’ve ever done a VMware Hands-on Lab, the exam’s interface will be familiar to you. The controls, menus, and layout of the exam interface are very similar, if not identical, to the ones in the hands-on lab environment. I recommend doing a couple of NSX hands-on labs just to get used to the interface. HOLs are VMware’s free, high-quality labs that offer a convenient way to try out all kinds of VMware tech and solutions. So check them out.

As always with VMware exams, the exam blueprint is your friend. It clearly states what areas of NSX (all) and scenarios you should prepare for. For 3V0-643 I would say the exam blueprint is spot on. On the exam I was faced with questions and scenarios linked to all of the objectives in the exam blueprint.

An important thing to keep in mind is that at the time of this writing the 3V0-643 lab environment is still based on vSphere 6.0 and NSX version 6.2. You should take this into account as you prepare for the exam.

  • Do your hands-on training in an NSX 6.2 environment. Build a nested lab with vSphere 6.0/NSX 6.2 if necessary. There are differences between the 6.2 and 6.4 UIs. If you’re not familiar with the 6.2 way of doing things you’ll end up spending a lot of precious time navigating around in the (slow) vCenter UI trying to find things.
  • Study the NSX version 6.2 official documentation. It’s not meaningful to study anything other than the version 6.2 documentation for this exam.
  • Study and practice all the objectives that are listed in the exam blueprint. Make sure you are comfortable accomplishing all of the objectives before you consider taking this exam.

What to expect?

23 questions/scenarios to be completed within 205 minutes, all of this in an outdated and slow vSphere 6.0/NSX 6.2 remote lab environment. It wasn’t the most exciting experience, to be honest. The scenarios and problems you need to solve, on the other hand, were realistic and quite fun. It’s a mixture of tasks and objectives: one scenario you’ll finish with a couple of mouse clicks or commands, while the next will take you more than 30 minutes.

  • Time is an issue. In my experience 205 minutes is not a lot of time. You easily end up with time pressure. You should be able to accomplish the objectives relatively fast. There is not much time for clicking around, finding your way, or thinking for that matter. Knowing beforehand how to accomplish objectives is key.
  • Time is not an issue. Don’t get stuck. You can skip questions and come back to them later if there’s time left. Correctly answered questions add to your score. Focus on that instead of getting stuck trying to answer a question and running out of time. This is especially true when you’re early on in the exam. Also, know that you can pass this exam even when you run out of time.

Conclusion

In my opinion passing 3V0-643 is an accomplishment. This lab-based NSX exam will thoroughly test your ability to successfully deploy, manage, optimize, and troubleshoot all aspects of an NSX platform. All that under time pressure.

A certification alone doesn’t make you a subject matter expert, but it’s probably fair to say that you know a thing or two about NSX when you pass 3V0-643.

Last but not least, it would be great if VMware would update the exam’s lab environment to, for instance, vSphere 6.5/NSX 6.4, but I doubt that will happen. My guess is that we’ll see NSX-T exams instead, but time will tell.