Welcome back! In part one we prepared the management and control plane for cross-vCenter NSX. In this part we’re going to deploy the data plane for the East-West traffic in a cross-vCenter NSX environment.
Step 1 – Universal logical switch
We start by deploying logical switching between our two fictional sites “DC-SE” and “DC-US”. To accomplish this we create a “stretched” logical switch. In cross-vCenter NSX terminology this is called a universal logical switch.
With cross-vCenter NSX, the vCenter system paired with the primary NSX Manager is the point of administration for NSX universal constructs. In my environment the vCenter at DC-SE is paired with the primary NSX Manager.
Log in to the DC-SE vCenter and navigate to Networking and Security > Logical Switches. Click the “+ Add” button and type a name for the logical switch. In my lab I’ll call it “ULS App”. This logical switch will use the universal transport zone “UTZ” that was created in part one. Click “Add”.
By picking a universal transport zone for a logical switch, the logical switch itself becomes universal and will be synced to secondary NSX Managers.
We now have a logical switch that exists on both of the sites. The VXLAN tunneling protocol and the VTEPs are responsible for carrying the frames over layer 3 between the sites. If we now connect virtual machines on both sites to the same universal logical switch, we should have layer 2 connectivity between them.
Let’s test this by deploying a virtual machine at each site and connect them to the “ULS App” universal logical switch.
I like Photon OS for lab purposes so I’m going to deploy two of these using the OVA.
Deploy the first virtual machine in DC-SE.
I’ll call it “AppServer-SE”.
And connect it to the “ULS App” logical switch.
Repeat this process over at DC-US. In my lab that virtual machine is called “AppServer-US”.
The first time we boot Photon OS deployed via the OVA, we need to change the root password. The default password is “changeme”.
I also want to set a hostname to avoid any confusion later on. Reboot the virtual machines after this.
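On Photon OS, which is systemd-based, the hostname can be set with hostnamectl. A quick sketch using the lab names from this walkthrough:

```shell
# On the DC-SE virtual machine:
hostnamectl set-hostname AppServer-SE
# On the DC-US virtual machine:
hostnamectl set-hostname AppServer-US
# Reboot so everything picks up the new name:
reboot
```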
At this point, let’s have a look at what the universal controller cluster knows about this logical segment. Start an SSH session to one of the NSX Managers and run the following command (replace “25000” with the VNI of your universal logical switch):
show logical-switch controller master vni 25000 mac
This looks good. The control plane has picked up the MAC addresses of both virtual machines. We can also see that the VTEPs of ESXi hosts on both sites are involved.
Do we have ARP entries?
show logical-switch controller master vni 25000 arp
Empty as expected. Let’s configure an IP address on the virtual machines.
On AppServer-SE:
ifconfig eth0 192.168.0.10 netmask 255.255.255.0
On AppServer-US:
ifconfig eth0 192.168.0.11 netmask 255.255.255.0
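If ifconfig isn’t available on your Photon OS build (minimal installs tend to ship only iproute2), the equivalent iproute2 commands look like this, using the same lab addresses:

```shell
# On AppServer-SE:
ip addr add 192.168.0.10/24 dev eth0
ip link set eth0 up
# On AppServer-US:
ip addr add 192.168.0.11/24 dev eth0
ip link set eth0 up
```

Note that addresses configured this way (like the ifconfig ones) do not persist across reboots, which is fine for a lab test.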
On both virtual machines we need to create iptables rules to allow ICMP:
iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
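Photon OS ships with a restrictive default iptables policy (out of the box, little more than SSH is allowed in), which is why these rules are needed at all. You can confirm the rules took effect with:

```shell
# List the INPUT chain to verify the ICMP rule was added:
iptables -L INPUT -n --line-numbers
```

Keep in mind these rules are not persistent; they will be gone after a reboot unless you save them.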
And let’s see if the virtual machines can ping each other:
It works! Do we have ARP entries at the control plane now?
show logical-switch controller master vni 25000 arp
There they are!
So, we have verified L2 connectivity between the virtual machines which are located at different sites. Multi-site VXLAN in action!
Step 2 – Universal distributed logical router
East-West routing in a cross-vCenter NSX scenario is accomplished by deploying a universal distributed logical router. So let’s do that.
Log in to the DC-SE vCenter system and navigate to Networking and Security > NSX Edges. Click the “+ Add” button and choose “Universal Distributed Logical Router”.
Give it a name like “UDLR-01”. Make sure you enable “Local Egress”, which we’ll talk about in part three. Also tick the “Deploy Control VMs” box.
Set a password and enable SSH.
Configure the control VM deployment and choose a connection for the HA interface.
At this point we’ll just add one internal interface to the UDLR and attach it to the “ULS App” logical switch that we created earlier. In my environment I call the interface “App-Tier” and assign it IP address 192.168.0.1 with a /24 prefix.
Disable “Configure Default Gateway” and complete the deployment wizard.
Once the UDLR is deployed we can test connectivity from one of the virtual machines by running a ping to the “App-Tier” interface IP address.
That should work as expected. Let’s add another universal logical switch to the environment so we can test some basic routing.
Once again in the vCenter system paired with the primary NSX Manager, navigate to Networking and Security > Logical Switches. Click the “+ Add” button and type a name for the universal logical switch. I call it “ULS DB” in my environment. Once again choose “UTZ” as the transport zone. Click “Add”.
Once created select the new “ULS DB” switch and click on Actions > Connect Edge. Select the “UDLR-01” edge.
Give the interface a name (I call it “DB-Tier”) and select “Internal” as the interface type. Make sure the connectivity status is set to “Connected”. Assign IP address 192.168.1.1 with a /24 prefix and click “Add”.
Deploy a new virtual machine at one of the sites and connect it to the “ULS DB” logical switch.
Once the virtual machine is deployed, set a hostname (mine is called “DBServer-SE”) and reboot. Next, configure it with IP address 192.168.1.10/24 and add an iptables rule to allow ICMP. You should now be able to ping the DB-Tier UDLR-01 interface (192.168.1.1):
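For reference, the guest-side setup for DBServer-SE collects the same steps we used for the App servers (lab names and addresses assumed):

```shell
hostnamectl set-hostname DBServer-SE
reboot
# After the reboot, configure the DB tier address:
ifconfig eth0 192.168.1.10 netmask 255.255.255.0
# Allow ICMP through the local firewall:
iptables -A INPUT -p icmp -j ACCEPT
iptables -A OUTPUT -p icmp -j ACCEPT
# Test connectivity to the DB-Tier UDLR interface:
ping -c 3 192.168.1.1
```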
Can we reach any of the virtual machines connected to the App Tier at this point? Not yet: none of the virtual machines have a default gateway configured, so return traffic has nowhere to go. Let’s quickly fix this:
For the App server virtual machines:
route add default gw 192.168.0.1 eth0
For the DBServer virtual machine:
route add default gw 192.168.1.1 eth0
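If the legacy route command is missing from your image, the iproute2 equivalents would be along these lines (same lab gateways):

```shell
# On the App tier virtual machines:
ip route add default via 192.168.0.1 dev eth0
# On the DB tier virtual machine:
ip route add default via 192.168.1.1 dev eth0
# Verify the routing table:
ip route show
```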
And now routing between the two segments should work:
We can also use the NSX traceflow utility to visualize the path of the ICMP packets. From any of the vCenter systems navigate to Networking and Security > Tools > Traceflow. Do a trace from the DBServer-SE to AppServer-US for example:
Step 3 – Universal distributed firewall
Now that we have cross-vCenter routing and switching working it’s time to look at security.
Just as with universal logical switching and universal distributed logical routing, we use universal objects for security when we want to propagate them across vCenter systems. It is worth noting that not all NSX security features are available in cross-vCenter NSX. The following are not supported:
- Exclude list
- Flow monitoring for aggregate flows
- Network service insertion
- Edge Firewall
- Service Composer
Universal security groups can contain the following:
- Universal IP Sets
- Universal MAC Sets
- Universal Security Groups
- Universal Security Tags
- Dynamic criteria
As you can see, with cross-vCenter NSX security we’re not able to use vCenter constructs when defining security policies. Remember, there’s a 1:1 relationship between NSX Manager and vCenter: one NSX Manager has no knowledge of the vCenter objects belonging to a vCenter system paired with another NSX Manager.
Let’s create a universal firewall rule for our newly deployed virtual machines. Log in to the DC-SE vCenter system and navigate to Networking and Security > Security > Firewall > General. We start by creating a new universal firewall section. Click “Add Section”, call it “Universal Rules”, and make sure you enable “Universal Synchronization”.
Once we have a universal firewall section we can start creating universal firewall rules in it. Click on “Add Rule” so a new empty rule shows up in the universal section.
Click on “Enter rule name” and type “Allow MariaDB”. Edit the source field and in the “Specify Source” dialog click “Create new IP Set”. Name the IP set “App-Tier” and add the 192.168.0.0/24 CIDR.
Select “App-Tier” as the source for our “Allow MariaDB” rule:
Repeat this same procedure to specify the destination in the firewall rule. Call the IP set “DB-Tier” and specify the 192.168.1.0/24 CIDR.
Next we need to specify the service in this rule. Search for and add the “MySQL” service. Click “Save”.
The universal firewall rule should look like this now.
Click “Publish” to save and activate the changes.
Now let’s see if we can generate a hit on the rule. We won’t actually install a MariaDB/MySQL server. Instead we’ll run a simple netcat listener on DBServer-SE and do a netcat connect from AppServer-US as a quick proof of concept.
First we need to allow traffic on TCP port 3306 through the iptables firewall on DBServer-SE:
iptables -A INPUT -p tcp --dport 3306 -j ACCEPT
Next start a netcat listener on TCP port 3306:
nc -l -p 3306
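Note that the listener syntax depends on which netcat implementation your image ships: traditional netcat takes -p together with -l, while the OpenBSD variant takes the port directly:

```shell
# Traditional/GNU netcat:
nc -l -p 3306
# OpenBSD netcat:
nc -l 3306
```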
Over at the AppServer-US we can try to connect to port 3306:
nc -zv 192.168.1.10 3306
With no “block” rule anywhere in our distributed firewall, a successful connection is of course the expected result. But let’s have a look at the stats for our “Allow MariaDB” firewall rule. These can be looked up in a couple of places: the vCenter GUI, the central CLI, syslog, or on the ESXi host. Today I’ll look it up on the ESXi host. Log in to the ESXi host where AppServer-US is running.
We first need the name of the filter at IOChain slot 2 of the AppServer-US vNIC:
summarize-dvfilter | grep AppServer-U -A5
In my environment the filter’s name is “nic-2110503-eth0-vmware-sfw.2”
We use the “vsipioctl” command to see statistics for the rules in this filter:
vsipioctl getrules -f nic-2110503-eth0-vmware-sfw.2 -s
The ID of the “Allow MariaDB” firewall rule in my environment is “2147483648”. As we can see, the simple netcat connectivity test resulted in exactly one hit on the firewall rule we created earlier.
The ID of a firewall rule can be found at several places as well. The DFW GUI in vCenter is one of them.
In this part we deployed the data plane components for facilitating East-West traffic in a cross-vCenter NSX environment: universal logical switching and universal distributed logical routing. We touched upon universal distributed firewalling as well.
In part three we will look at the components and configuration involved with the North-South traffic in a cross-vCenter NSX environment: The Edge tier.