Cross-vCenter NSX – Part 2

Welcome back! In part one we prepared the management and control plane for cross-vCenter NSX. In this part we’re going to deploy the data plane for the East-West traffic in a cross-vCenter NSX environment.

Step 1 – Universal logical switch

We start by deploying logical switching between our two fictional sites “DC-SE” and “DC-US”. To accomplish this we create a “stretched” logical switch. In cross-vCenter NSX terminology this is called a universal logical switch.

With cross-vCenter NSX, the vCenter system paired with the primary NSX Manager is the point of administration for NSX universal constructs. In my environment the vCenter at DC-SE is paired with the primary NSX Manager.

Log in to the DC-SE vCenter and navigate to Networking and Security > Logical Switches. Click the “+ Add” button and type a name for the logical switch. In my lab I’ll call it “ULS App”. This logical switch will use the universal transport zone “UTZ” that was created in part one. Click “Add”.

By picking a universal transport zone for a logical switch, the logical switch itself becomes universal and will be synced to secondary NSX Managers.
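A quick way to verify the sync is the central CLI. Start an SSH session to the secondary NSX Manager and list the logical switches (a sketch; in my lab the new universal logical switch was assigned VNI 25000, which we’ll use again below):

show logical-switch list all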

Testing connectivity

We now have a logical switch that exists on both of the sites. The VXLAN tunneling protocol and the VTEPs are responsible for carrying the frames over layer 3 between the sites. If we now connect virtual machines on both sites to the same universal logical switch, we should have layer 2 connectivity between them.
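Before testing with virtual machines it can be worth confirming that the VTEPs on both sites actually reach each other with the larger VXLAN MTU. From the ESXi shell, something like this (a sketch; 10.1.50.11 stands in for a remote VTEP address in the DC-US transport subnet, and -d together with -s 1572 verifies a 1600-byte MTU end to end without fragmentation):

vmkping ++netstack=vxlan -d -s 1572 10.1.50.11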

Let’s test this by deploying a virtual machine at each site and connecting them to the “ULS App” universal logical switch.
I like Photon OS for lab purposes, so I’m going to deploy two of these using the OVA.

Deploy the first virtual machine in DC-SE.

I’ll call it “AppServer-SE”.

And connect it to the “ULS App” logical switch.

Repeat this process over at DC-US. In my lab that virtual machine is called “AppServer-US”.

The first time we boot Photon OS deployed via the OVA, we need to change the root password. The default password is “changeme”.

I also want to set a hostname to avoid any confusion later on. Reboot the virtual machines after this.
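On Photon OS a hostname can be set with hostnamectl. For the first virtual machine, for example:

hostnamectl set-hostname AppServer-SE
reboot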

At this point let’s have a look at what the universal controller cluster knows about this logical segment. Start an SSH session to one of the NSX Managers and run the following command (replace “25000” with the VNI of your universal logical switch):

show logical-switch controller master vni 25000 mac

This looks good. The control plane has picked up the MAC addresses of both virtual machines. We can also see that the VTEPs of ESXi hosts on both sites are involved.

Do we have ARP entries?

show logical-switch controller master vni 25000 arp

Empty as expected. Let’s configure an IP address on the virtual machines.

AppServer-SE:

ifconfig eth0 192.168.0.10 netmask 255.255.255.0

AppServer-US:

ifconfig eth0 192.168.0.11 netmask 255.255.255.0

On both virtual machines we need to create iptables rules to allow ICMP:

iptables -A OUTPUT -p icmp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT

And let’s see if the virtual machines can ping each other:
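From AppServer-SE, for example (-c simply limits the number of probes):

ping -c 3 192.168.0.11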

It works! Do we have ARP entries at the control plane now?

show logical-switch controller master vni 25000 arp

There they are!

So, we have verified L2 connectivity between virtual machines located at different sites. Multi-site VXLAN in action!

Step 2 – Universal distributed logical router

East-West routing in a cross-vCenter NSX scenario is accomplished by deploying a universal distributed logical router. So let’s do that.

Log in to the DC-SE vCenter system and navigate to Networking and Security > NSX Edges. Click the “+ Add” button and choose “Universal Distributed Logical Router”.

Give it a name like “UDLR-01”. Make sure you enable “Local Egress”, which we’ll talk about in part three. Also tick the “Deploy Control VMs” box.

Set a password and enable SSH.

Configure the control VM deployment and choose a connection for the HA interface.

At this point we’ll just add one internal interface to the UDLR and attach it to the “ULS App” logical switch that we created earlier. In my environment I call the interface “App-Tier” and assign it IP address 192.168.0.1 with a /24 prefix.

Disable “Configure Default Gateway” and complete the deployment wizard.

Once the UDLR is deployed we can test connectivity from one of the virtual machines by running a ping to the “App-Tier” interface IP address.
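From AppServer-SE, for example:

ping -c 3 192.168.0.1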

That should work as expected. Let’s add another universal logical switch to the environment so we can test some basic routing.

Once again in the vCenter system paired with the primary NSX Manager, navigate to Networking and Security > Logical Switches. Click the “+ Add” button and type a name for the universal logical switch. I call it “ULS DB” in my environment. Once again choose “UTZ” as the transport zone. Click “Add”.

Once created, select the new “ULS DB” switch and click on Actions > Connect Edge. Select the “UDLR-01” edge.

Give the interface a name (I call it “DB-Tier”) and select “Internal” as the interface type. Make sure the connectivity status is set to “Connected”. Assign IP address 192.168.1.1 with a /24 prefix and click “Add”.

Deploy a new virtual machine at one of the sites and connect it to the “ULS DB” logical switch.

Once the virtual machine is deployed, set a hostname (mine is called “DBServer-SE”) and reboot. Next, configure it with IP address 192.168.1.10/24 and add rules to iptables to allow ICMP. You should now be able to ping the DB-Tier UDLR-01 interface (192.168.1.1):
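Consolidated, the steps look like this (a sketch based on the same commands we used for the App servers):

hostnamectl set-hostname DBServer-SE
reboot

ifconfig eth0 192.168.1.10 netmask 255.255.255.0
iptables -A INPUT -p icmp -j ACCEPT
iptables -A OUTPUT -p icmp -j ACCEPT
ping -c 3 192.168.1.1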

Can we reach any of the virtual machines connected to the App Tier at this point? Not yet. The UDLR is ready to route between the two segments, but none of the virtual machines has a default gateway configured. Let’s quickly fix this:

For the App server virtual machines:

route add default gw 192.168.0.1 eth0

For the DBServer virtual machine:

route add default gw 192.168.1.1 eth0

And now routing between the two segments should work:
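From DBServer-SE, for example:

ping -c 3 192.168.0.11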

We can also use the NSX traceflow utility to visualize the path of the ICMP packets. From any of the vCenter systems navigate to Networking and Security > Tools > Traceflow. Do a trace from the DBServer-SE to AppServer-US for example:

Step 3 – Universal distributed firewall

Now that we have cross-vCenter routing and switching working it’s time to look at security.

Just as with universal distributed logical routing and universal logical switching, we use universal objects for security when we want to propagate them cross-vCenter. It is worth noting that not all NSX security features are available for cross-vCenter NSX. The following are not supported:

  • Exclude list
  • SpoofGuard
  • Flow monitoring for aggregate flows
  • Network service insertion
  • Edge Firewall
  • Service Composer

Universal security groups can contain the following:

  • Universal IP Sets
  • Universal MAC Sets
  • Universal Security Groups
  • Universal Security Tags
  • Dynamic criteria

As you can see, with cross-vCenter NSX security we’re not able to use vCenter constructs when defining security policies. Remember, there’s a 1:1 relationship between NSX Manager and vCenter: one NSX Manager has no knowledge of the vCenter objects of a vCenter system paired with another NSX Manager.

Let’s create a universal firewall rule for our newly deployed virtual machines. Log in to the DC-SE vCenter system and navigate to Networking and Security > Security > Firewall > General. We start by creating a new universal firewall section. Click on “Add Section”, call the section “Universal Rules”, and make sure you enable “Universal Synchronization”.

Once we have a universal firewall section we can start creating universal firewall rules in it. Click on “Add Rule” so a new empty rule shows up in the universal section.

Click on “Enter rule name” and write “Allow MariaDB”. Edit the source field and in the “Specify Source” dialog click on “Create new IP Set”. Name the IP set “App-Tier” and add the 192.168.0.0/24 CIDR.

Select “App-Tier” as the source for our “Allow MariaDB” rule:

Repeat this same procedure to specify the destination in the firewall rule. Call the IP set “DB-Tier” and specify the 192.168.1.0/24 CIDR.

Next we need to specify the service in this rule. Search for and add the “MySQL” service. Click “Save”.

The universal firewall rule should look like this now.

Click “Publish” to save and activate the changes.

Now let’s see if we can get a hit on the rule. We won’t actually install a MariaDB/MySQL server. Instead we’ll run a simple netcat listener on DBServer-SE and do a netcat connect from AppServer-US as a quick proof of concept.

First we need to allow traffic on TCP port 3306 through the iptables firewall on DBServer-SE:

iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

Next start a netcat listener on TCP port 3306:

nc -l -p 3306

Over at AppServer-US we can try to connect to port 3306 on DBServer-SE:

nc -zv 192.168.1.10 3306

With no “block” rule anywhere in our distributed firewall, the connection succeeds, which is of course the expected result. But let’s have a look at the stats for our MySQL firewall rule. These can be looked up in a couple of places: the vCenter GUI, the central CLI, syslog, or on the ESXi host. Today I’ll look this up on the ESXi host. Log in to the ESXi host where AppServer-US is running.

We first need the name of the filter at IOChain slot 2 of the AppServer-US vNIC:

summarize-dvfilter | grep AppServer-U -A5

In my environment the filter’s name is “nic-2110503-eth0-vmware-sfw.2”

We use the “vsipioctl” command to see statistics for the rules in this filter:

vsipioctl getrules -f nic-2110503-eth0-vmware-sfw.2 -s

The ID for the MySQL firewall rule in my environment is “2147483648”. As we can see, the simple netcat connectivity test resulted in exactly 1 hit on the MySQL firewall rule we created earlier.

The ID of a firewall rule can be found at several places as well. The DFW GUI in vCenter is one of them.
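The central CLI on the NSX Manager is another. A minimal sketch, assuming the ESXi host turns out to have ID “host-10” (find the ID with “show cluster all” followed by “show cluster <cluster-id>”):

show dfw host host-10 summarize-dvfilter
show dfw host host-10 filter nic-2110503-eth0-vmware-sfw.2 rules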

Wrapping up

In this part we deployed the data plane components for facilitating East-West traffic in a cross-vCenter NSX environment: universal logical switching and universal distributed logical routing. We touched upon universal distributed firewalling as well.

In part three we will look at the components and configuration involved with the North-South traffic in a cross-vCenter NSX environment: The Edge tier.

Cross-vCenter NSX – Part 1

One of the scenarios where the NSX platform really shines is the multi-site environment. Here NSX together with vSphere is the infrastructure that delivers on business requirements like workload mobility, resource pooling, and consistent security.

Since NSX version 6.2 we can roll out NSX in a vSphere environment managed by more than one vCenter system. This type of deployment is called cross-vCenter NSX.
With cross-vCenter NSX we are able to centrally deploy and manage networking and security constructs regardless of the management domain architecture.

In preparation for some assignments involving cross-vCenter NSX, I’ve been busy with a cross-vCenter NSX lab. I thought I’d do a little writeup in two parts on setting this up.

In this first post we’ll prepare the management and control plane for cross-vCenter NSX. In part 2 we’ll have a closer look at how to deploy the data plane in a cross-vCenter NSX environment.

The lab environment

The following components are the building blocks I used for this simple cross-vCenter NSX lab:

  • 8 x nested ESXi 6.7 U1 hosts
  • 2 x vCenter 6.7 U1 systems
  • vSAN storage
  • NSX 6.4.4
  • 2 x pfSense routers

So that we can spend our time focusing on the relevant stuff, I’ve done some preparation in advance.

I set up two fictional sites: DC-SE and DC-US. Each with its own, non-linked, vCenter server system, four ESXi hosts, vSAN storage, and a standalone NSX Manager.
The ESXi hosts are prepared for NSX (VIBs installed) and a segment ID pool and transport zone are configured. DC-SE is running a controller cluster.

Each site has a pfSense machine acting as the perimeter router. Static routing is set up so each site’s management, vMotion and VTEP subnets can reach each other. Both sites also have a VLAN for vSAN plus one for ESG uplink which will be used in part two.

VLAN             DC-SE           DC-US
Management       10.0.10.0/24    10.1.10.0/24
vMotion          10.0.20.0/24    10.1.20.0/24
vSAN             10.0.30.0/24    10.1.30.0/24
VXLAN transport  10.0.50.0/24    10.1.50.0/24
Uplink           10.0.70.0/24    10.1.70.0/24
High-level overview of the lab environment before cross-vCenter NSX is implemented

Please keep in mind that this is a simple lab environment design and in no way a design for a production environment. Have a look at VMware Validated Designs if you want to learn more about SDDC designs including NSX for production environments.

Step 1 – Assign primary role to NSX Manager

There’s a 1:1 relationship between vCenter Server and NSX Manager. This is true when setting up cross-vCenter NSX as well, but here the NSX Managers involved are assigned roles.
The NSX Manager that should be running the controller cluster is assigned the primary role. Additional NSX Managers participating in cross-vCenter NSX (up to seven of them) are assigned the secondary role.

So let’s start by assigning the NSX Manager in DC-SE the primary role. In vCenter, go to Networking and Security > Installation and Upgrade > Management > NSX Managers. Select the NSX Manager and click on Actions > Assign Primary Role.

As you can see the role has changed from Standalone to Primary.

When we assign the primary role to an NSX Manager, its controller cluster automatically becomes the universal controller cluster. It is the one and only controller cluster in cross-vCenter NSX and provides control plane functionality (MAC, ARP, and VTEP tables) for both the primary and secondary NSX Managers.
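The central CLI offers a quick way to check on the cluster. From an SSH session to the primary NSX Manager (a sketch; the output will naturally differ per environment):

show controller list all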

Step 2 – Configure Logical Network settings

While we’re at DC-SE we continue with the configuration of the logical network settings for cross-vCenter NSX.

We begin with defining a Universal Segment ID pool. These segment IDs (VNIs) are assigned to universal logical switches. Universal logical switches are logical switches that are synced to the secondary NSX Managers. We will look more at this in part two.

Go to Networking and Security > Installation and Upgrade > Logical Network Settings > VXLAN Settings > Segment IDs and click Edit.

Configure a unique range for the universal segment ID pool.
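For example, assuming, say, that the sites’ local segment ID pools live in the 5000–5999 range, a universal pool such as the following keeps the ranges from overlapping:

Universal Segment ID pool: 25000-29999

The VNI 25000 that shows up in part two is handed out from a pool like this.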

Create a universal transport zone and add CL01-SE

Next to VXLAN Settings we find Transport Zones. Click it and click Add to start adding a universal transport zone.

Give it a name like “UTZ”, switch Universal Synchronization to On, and add the CL01-SE vSphere cluster to the transport zone.

Step 3 – Assign secondary role to NSX Manager

Assigning the secondary role to the NSX Manager located in DC-US is done from the primary NSX Manager in DC-SE.

In vCenter, navigate to Networking and Security > Installation and Upgrade > Management > NSX Managers. Select the NSX Manager and click on Actions > Add Secondary Manager.

Here you enter the information of the NSX Manager at DC-US and click Add.

The NSX Manager at DC-US now has the secondary role. We can verify this by logging in to vCenter at DC-US and navigating to Networking and Security > Installation and Upgrade > Management > NSX Managers.

As you can see the NSX Manager now has the Secondary role.

Add CL01-US to the universal transport zone

While still logged in to vCenter at DC-US, navigate to Networking and Security > Installation and Upgrade > Logical Network Settings > Transport Zones. The transport zone that we created over at the primary NSX Manager in DC-SE shows up here too. Select the transport zone and click on the “Connect Clusters” button. Add the CL01-US cluster to the transport zone.

Wrapping up

This completes the preparation of the management and control plane for our simple cross-vCenter NSX lab.

We started by assigning the primary role to the NSX Manager at DC-SE. By doing so we got ourselves a universal controller cluster. Next we configured the logical network settings necessary for cross-vCenter NSX. Finally, we paired the primary NSX Manager at DC-SE with the standalone NSX Manager at DC-US by assigning it the secondary role. Along the way we also added the vSphere clusters on both sites to the same universal transport zone.

In part 2 we will set up the data plane in our cross-vCenter NSX lab. We’ll have a look at how logical switching, distributed logical routing, and distributed security works in a cross-vCenter NSX environment.