NSX-T Lab – Part 3

Welcome back! I’m in the middle of installing NSX-T in my vSphere lab environment. In part one I installed NSX Manager, and in part two I deployed the NSX Controller Cluster. Now it’s time to start working on what it’s all about: the data plane.

High-level overview

Setting up a complete NSX-T data plane involves installing and configuring several components. We have East-West distributed routing, North-South centralized routing, and security. Then there are the additional services like load balancing, NAT, DHCP and partner integrations.

The order in which you set things up depends primarily on what you’re trying to achieve. I noticed that different documents and guides also use different approaches.

So, I put together bits and pieces from different sources and came up with the following high-level plan for my NSX-T data plane deployment:

  1. Prepare the vSphere distributed switch
  2. Configure transport zones
  3. Create logical switches
  4. Prepare & configure ESXi hosts
  5. Deploy & configure Edge VMs
  6. Configure routing

In this article I will prepare the distributed switch, add the transport zones, and create the logical switches for the uplinks. Just to keep things digestible 🙂

Preparing the vSphere Distributed Switch

The NSX Edge VMs that will be deployed later on connect to four VLANs: management, transport (carrying the logical networks), and two uplink VLANs.
I already have a distributed port group that maps to the management VLAN, so I need to create the ones for transport and the uplinks.

In vCenter, navigate to Networking, right-click the distributed switch and select Distributed Port Group > New Distributed Port Group.
I’m calling this port group “pg-transport”.

On the next page I set “VLAN type” to “VLAN” and “VLAN ID” to “1614”. I click “Next” and finish creating the port group.

I repeat this process for the two port groups for the uplinks (VLAN 2711 and 2712). Once done it looks like this:

And the ESXi host’s network configuration now looks something like this:

Here I have the VDS with its 5 port groups as well as a pair of unused NICs which I will use for NSX networking later on.
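These port groups can also be created with a short script instead of clicking through the wizard. Here is a minimal pyVmomi sketch; the distributed switch name “vds01”, the uplink port group names, and the credentials are assumptions for this lab, so adjust them to your own environment:

# A minimal pyVmomi sketch for creating the three VLAN-backed port groups.
# The switch name "vds01", the uplink port group names, and the credentials
# are assumptions for this lab; adjust them to your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate verification
si = SmartConnect(host="vcenter.rainpole.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed switch by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "vds01")

def add_port_group(name, vlan_id):
    # Early-binding port group tagged with a single VLAN ID
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = name
    spec.type = "earlyBinding"
    vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec()
    vlan.vlanId = vlan_id
    vlan.inherited = False
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.vlan = vlan
    spec.defaultPortConfig = port_config
    dvs.AddDVPortgroup_Task([spec])

add_port_group("pg-transport", 1614)
add_port_group("pg-uplink01", 2711)
add_port_group("pg-uplink02", 2712)

Disconnect(si)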

Configuring NSX transport zones

Transport zones in NSX are containers that define the reach of the transport nodes. I briefly mentioned transport nodes in part two. Transport nodes are the hypervisor hosts and NSX Edges that participate in an NSX overlay. For a hypervisor host, this means that its VMs can communicate over NSX logical switches. For an NSX Edge, this means it will have logical router uplinks and downlinks.

My lab environment will start out with three transport zones: uplink01, uplink02, and overlay01.

Log in to NSX Manager. In the menu at the left select Fabric > Transport Zones.

I start by creating a transport zone called “uplink01”. This is a VLAN transport zone that will be used by the NSX Edge later on:

I’m repeating this process to create the “uplink02” VLAN transport zone.

The third transport zone is an Overlay transport zone. It will be used by the host transport nodes and the NSX Edge:

The three transport zones listed:
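The same three transport zones can of course also be created through the NSX Manager REST API instead of the GUI. A minimal sketch using Python’s requests library; the host switch names are assumptions for this lab:

# A minimal sketch of the same three transport zones created via the NSX REST API.
# The host switch names are assumptions for this lab; use whatever fits your setup.
import requests
from requests.auth import HTTPBasicAuth

NSX = "https://nsxmanager.rainpole.local"
AUTH = HTTPBasicAuth("admin", "VMware1!")

zones = [
    {"display_name": "uplink01",  "transport_type": "VLAN",    "host_switch_name": "hs-uplink01"},
    {"display_name": "uplink02",  "transport_type": "VLAN",    "host_switch_name": "hs-uplink02"},
    {"display_name": "overlay01", "transport_type": "OVERLAY", "host_switch_name": "hs-overlay01"},
]

for zone in zones:
    r = requests.post(NSX + "/api/v1/transport-zones", json=zone,
                      auth=AUTH, verify=False)  # lab only: self-signed certificate
    r.raise_for_status()
    print(zone["display_name"], "created with id", r.json()["id"])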

Creating logical switches

Next I’ll create two logical switches. These two will facilitate the transit between NSX and the pfSense router. In NSX Manager choose Networking > Switching.


I add the first logical switch, “ls-uplink01”, to transport zone “uplink01” and configure it with VLAN ID 2711:

I repeat this process to create a second logical switch called “ls-uplink02”. I add it to transport zone “uplink02” and configure it with VLAN ID 2712.
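For reference, the API equivalent looks something like this. It is a minimal sketch; the transport zone UUIDs are placeholders for the IDs that NSX Manager assigned to “uplink01” and “uplink02” earlier:

# A minimal sketch creating the two VLAN-backed logical switches via the API.
# The transport zone UUIDs are placeholders; look them up under Fabric > Transport Zones
# or in the response from the transport zone API call.
import requests
from requests.auth import HTTPBasicAuth

NSX = "https://nsxmanager.rainpole.local"
AUTH = HTTPBasicAuth("admin", "VMware1!")

switches = [
    {"display_name": "ls-uplink01", "transport_zone_id": "<uplink01-tz-uuid>",
     "vlan": 2711, "admin_state": "UP"},
    {"display_name": "ls-uplink02", "transport_zone_id": "<uplink02-tz-uuid>",
     "vlan": 2712, "admin_state": "UP"},
]

for ls in switches:
    r = requests.post(NSX + "/api/v1/logical-switches", json=ls,
                      auth=AUTH, verify=False)  # lab only: self-signed certificate
    r.raise_for_status()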

Conclusion

Taking small steps, but getting there. I created the port groups on the vSphere distributed switch that the Edge VMs will need. I then went on to create the transport zones as well as two logical switches from NSX Manager.

In the next part I will continue with setting up the transport nodes: the ESXi hosts and the NSX Edge.

NSX-T Lab – Part 2

Welcome back! I’m in the process of installing NSX-T in my lab environment. So far I have deployed NSX Manager which is the central management plane component of the NSX-T platform.
Today I will continue the installation and add an NSX-T controller to the lab environment.

Control plane

The control plane in NSX is responsible for maintaining runtime state based on configuration from the management plane, providing topology information reported by the data plane, and pushing stateless configuration to the data plane.

With NSX-T the control plane is split into two parts: the central control plane (CCP), which runs on the controller cluster nodes, and the local control plane (LCP), which runs on the transport nodes. A transport node is basically a server participating in NSX-T networking.

Now that I’m mentioning these planes, here’s a good diagram showing the interaction between them:

Deploying the controller cluster

In a production NSX-T environment the controller cluster must have three members (controller nodes). The controller nodes are placed on three separate hypervisor hosts to avoid a single point of failure. In a lab environment like this a single controller node in the CCP is acceptable.

First I’m adding a new DNS record for the controller node. This is not required, but a good practice imho.

Name               IP address
nsxcontroller-01   172.16.11.57

An NSX-T controller node can be deployed in several ways. I added my vCenter system as a compute manager in part one, so perhaps the most convenient way is to deploy the controller node from the NSX Manager GUI. Other options include using the OVA package or the NSX Manager API.

I decided to deploy the controller node using the NSX Manager API. For this I need to prepare a small piece of JSON code that will be the body in the API call:

{
  "deployment_requests": [
    {
      "roles": ["CONTROLLER"],
      "form_factor": "SMALL",
      "user_settings": {
        "cli_password": "VMware1!",
        "root_password": "VMware1!"
      },
      "deployment_config": {
        "placement_type": "VsphereClusterNodeVMDeploymentConfig",
        "vc_id": "bc5e0012-662f-421c-a286-7b408302bf15",
        "management_network_id": "dvportgroup-44",
        "hostname": "nsxcontroller-01",
        "compute_id": "domain-c7",
        "storage_id": "datastore-41",
        "default_gateway_addresses": [
          "172.16.11.253"
        ],
        "management_port_subnets": [
          {
            "ip_addresses": [
              "172.16.11.57"
            ],
            "prefix_length": "24"
          }
        ]
      }
    }
  ],
  "clustering_config": {
    "clustering_type": "ControlClusteringConfig",
    "shared_secret": "VMware1!",
    "join_to_existing_cluster": false
  }
}

This code instructs the NSX Manager API to deploy one NSX-T controller node with a small form factor. Most things in the code above are pretty self-explanatory. I fetched the values for “management_network_id”, “compute_id”, and “storage_id” from the vCenter managed object browser (https://vcenter.rainpole.local/mob). The “vc_id” is found in NSX Manager under “Fabric” > “Compute Managers” under the “ID” column:
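If you would rather not click through the MOB, the same IDs can be pulled with a few lines of pyVmomi. This is just a sketch; the inventory object names (“pg-mgmt”, “cluster01”, “nfs01”) and the credentials are placeholders for my lab:

# A small pyVmomi sketch that prints the managed object IDs needed in the JSON body.
# The inventory names ("pg-mgmt", "cluster01", "nfs01") are placeholders; use your own.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate verification
si = SmartConnect(host="vcenter.rainpole.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Return the first managed object of the given type with the given name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

print(find_by_name(vim.dvs.DistributedVirtualPortgroup, "pg-mgmt")._moId)  # management_network_id
print(find_by_name(vim.ClusterComputeResource, "cluster01")._moId)         # compute_id
print(find_by_name(vim.Datastore, "nfs01")._moId)                          # storage_id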

I use Postman to send the REST POST to “https://nsxmanager.rainpole.local/api/v1/cluster/nodes/deployments”:
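Postman is convenient, but the same call can of course be scripted. A minimal sketch, assuming the JSON body above was saved to a file called “controller-deployment.json”:

# A minimal alternative to Postman: post the JSON body shown above with Python.
# Assumes the body was saved to "controller-deployment.json".
import json
import requests
from requests.auth import HTTPBasicAuth

with open("controller-deployment.json") as f:
    body = json.load(f)

r = requests.post("https://nsxmanager.rainpole.local/api/v1/cluster/nodes/deployments",
                  json=body, auth=HTTPBasicAuth("admin", "VMware1!"),
                  verify=False)  # lab only: self-signed certificate
r.raise_for_status()
print(r.json())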

This initiates the controller node deployment which can be tracked in the NSX Manager GUI under “System” > “Components”:

Once the deployment is finished it should look something like this:

Both cluster connectivity and manager connectivity are showing “Up”. I run the following command from the controller node CLI to verify the status:

get control-cluster status

To verify that NSX Manager knows about this new controller node I run the following command from the NSX Manager CLI:

get nodes

That looks good too. Another useful command is:

get management-cluster status

It shows the status of the management cluster and the control cluster.

I also configure DNS and NTP on the new controller node from the controller node CLI:

set name-servers 172.16.11.10
set ntp-server ntp.rainpole.local

Conclusion

I have the controller “cluster” up and running. With only one controller node it isn’t much of a cluster, but it will do for lab purposes.

The management plane and control plane are now operational. In the next part I will continue with installing the data plane components.

NSX-T Lab – Part 1

Learning by doing is my preferred method of getting to know a product or technology I haven’t worked with before.

Take NSX-T for example. On the surface NSX-T seems to be just another flavour of network virtualisation technology by VMware. But having a closer look I quickly realised NSX-T is quite a different beast compared to NSX-V.

So, in order to learn more about the product I’m going to install NSX-T in a lab environment. In the coming blog posts you can follow along as I’m setting this up.

In this first part I’ll quickly introduce my lab environment and then continue with the deployment of the management plane main component: NSX Manager.

Lab environment

My small NSX-T lab will use the following components:

  • pfSense router with OpenBGPD package
  • Windows Server 2016 (AD/DNS)
  • NetApp NFS storage
  • vCenter 6.7
  • ESXi 6.7
  • NSX-T 2.3

The blog post series is about setting up NSX-T so I deployed some things in advance. A pfSense router, vCenter, three ESXi hosts in a cluster, and a Windows AD/DNS server have been installed and configured.

Here is a simple diagram of the environment the way it looks right now.

The ESXi hosts are equipped with four NICs. Two of them are used by vSphere and the other two are unassigned and will be used by NSX-T networking.

I configured some VLANs on the pfSense router which are trunked down to the ESXi hosts. The pfSense router has the OpenBGPD package installed so that I can do some dynamic routing later on.

I left out the details of the non-nested infrastructure. The three ESXi hosts are nested (running as VMs). vCenter, the Windows server, and pfSense are running outside of the nested environment.

Networking

I configured the following networks:

VLAN ID   VLAN Function   CIDR             Gateway
1611      Management      172.16.11.0/24   172.16.11.253
1612      vMotion         172.16.12.0/24   172.16.12.253
1614      Transport       172.16.14.0/24   172.16.14.253
2711      Uplink01        172.27.11.0/24   172.27.11.1
2712      Uplink02        172.27.12.0/24   172.27.12.1

Yes, /24 subnets all the way. This is a lab environment so I don’t really have to worry about wasting IP addresses. In a production environment I would most likely be using smaller subnets.

DNS and hostnames

I’m using the “rainpole.local” DNS domain name for this lab and created the following DNS records in advance:

Name               IP address
dc01-0             172.16.11.10
esxi01             172.16.11.51
esxi02             172.16.11.52
esxi03             172.16.11.53
vcenter            172.16.11.50
ntp                172.16.11.10
nsxmanager         172.16.11.56

More DNS records will probably be added during the deployment.

Installing NSX Manager

The first thing I need to do is install NSX Manager. For vSphere the NSX Manager installation is done by deploying an OVA package.

The OVA deployment is a straightforward process. I choose a small deployment configuration as this is for lab purposes:

I pick the management network (VLAN 1611) as the destination network for “Network 1” and IPv4 as the protocol:

I need to fill in the required details on the “Customize template” page. I have selected “nsx-manager” as the “Rolename”.

Once the virtual appliance is deployed it needs to be started. It will take some minutes to boot and get things up and running. Once ready I can log in to the NSX Manager web interface at https://nsxmanager.rainpole.local using the previously configured admin account:

The first time I log in I need to accept the license agreement and answer the question about the CEIP program. I then end up at the “NSX Overview” page:

One thing I notice immediately is how elegant the NSX Manager interface is. I really like the layout of the HTML5 interface with a menu on the left side that pops out when moving the mouse cursor over it.

I took some time to look around in the web interface just to familiarise myself. After all I’ll be spending quite some time in the web interface during the NSX-T deployment.

The dashboard shows a status and health overview of the platform and its components. It’s pretty empty right now as you can see 🙂

Adding a compute manager

Although there is no relationship between NSX Manager and vCenter like there is in NSX-V, it is possible to add vCenter systems as so-called compute managers. This is not required, but having this link in place makes deploying NSX-T components on vSphere more convenient.

In the menu on the left I choose “Fabric” > “Compute Managers” and click “+Add”. I fill out the details of my vCenter system and then click on the “Add” button:

After a short registration process vCenter is added to NSX Manager:
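For the record, registering a compute manager can also be done through the API. A minimal sketch; the credentials and the thumbprint (the SHA-256 fingerprint of the vCenter certificate) are placeholders:

# A minimal sketch registering vCenter as a compute manager via the NSX REST API.
# Username, password, and thumbprint are placeholders for this lab.
import requests
from requests.auth import HTTPBasicAuth

body = {
    "display_name": "vcenter.rainpole.local",
    "server": "vcenter.rainpole.local",
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        "thumbprint": "<vCenter certificate SHA-256 thumbprint>"
    }
}

r = requests.post("https://nsxmanager.rainpole.local/api/v1/fabric/compute-managers",
                  json=body, auth=HTTPBasicAuth("admin", "VMware1!"),
                  verify=False)  # lab only: self-signed certificate
r.raise_for_status()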

Housekeeping

Under “System” I find some items that are of interest at this point:


Two things I would probably want to have a look at straight after deploying NSX Manager in a production environment are certificates and backup.

It is under “Trust” that certificates are managed. The default self-signed certificates often need to be replaced with ones that are signed by a trusted CA. Most of that can be done here, but an API call is actually needed to activate a newly imported certificate.
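That activation call looks something like this. It is a minimal sketch; the certificate ID is a placeholder for the UUID NSX Manager assigns to the imported certificate:

# A minimal sketch of the certificate activation call against the NSX Manager node API.
# The certificate ID is a placeholder; it is the UUID shown after importing the certificate.
import requests
from requests.auth import HTTPBasicAuth

NSX = "https://nsxmanager.rainpole.local"
cert_id = "<certificate-uuid>"

r = requests.post(NSX + "/api/v1/node/services/http"
                  "?action=apply_certificate&certificate_id=" + cert_id,
                  auth=HTTPBasicAuth("admin", "VMware1!"),
                  verify=False)  # lab only: the old self-signed certificate is still active
r.raise_for_status()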

Backup can be configured entirely in the GUI. This is done under “Utilities” > “Backup”. Here we configure scheduling/interval and backup destination:

It is here I learn that there are three types of backup:

  • Cluster backup – includes the desired state of the virtual network
  • Node backup – includes the NSX Manager appliance configuration
  • Inventory backup – includes the set of ESXi and KVM hosts and edges

Time synchronisation is important and an NTP server can be configured during the NSX Manager deployment. If I want to change the NTP configuration after deployment I need to use the NSX Manager CLI or API. Using the CLI the following command will configure “ntp.rainpole.local” as a time source:

set ntp-server ntp.rainpole.local 

Likewise, if I need to (re-)configure DNS this is done via the NSX Manager CLI or API. Configuring the Windows AD/DNS server in my lab as the DNS server is done with this CLI command:

set name-servers 172.16.11.10

Conclusion

Installing and configuring NSX Manager in my lab environment was an easy process. The web interface looks really good with a nice layout. I’m a bit surprised that some of what I consider basic configuration (NTP, DNS, logging) can’t be done in the GUI, but I’m sure this will be added in future releases of NSX Manager.

In the next part I’ll continue my NSX-T deployment journey and set up the control plane.