Kubernetes – NSX-T Lab

A while back Dumlu Timuralp published an excellent guide on integrating NSX-T 2.5 with K8s. If you haven’t read it already I strongly recommend that you have a look at it. The guide goes through every step of configuring the integration and does a great job explaining the architecture and components that make up this solution.

Today’s article is a quick walkthrough of my NSX-T-integrated K8s lab, which is based on Dumlu’s guide.

Bill of materials

The following components are used in my NSX-T – K8s lab:

  1. vSphere 6.7 U3
  2. NSX-T 2.5.1
  3. Ubuntu 18.04
  4. Docker CE 18.06
  5. Kubernetes 1.16

The lab environment

The starting point before setting up the K8s integration:

A standard vSphere platform consisting of a couple of ESXi hosts and a vCenter server. NSX-T has been deployed and an overlay transport zone has been configured.

On the logical network side of things I have a very basic setup with just a Tier-0 gateway for the North-South connectivity.

The above infrastructure is pretty much always in place and mostly left untouched. The components for the NSX-T – K8s integration are connected to this existing infrastructure. Let’s have a look at how that’s done.

NSX-T constructs

A couple of NSX-T constructs are needed for the K8s integration:

  • Tier-1 gateway for K8s node management
  • Segment for K8s node management
  • Segment for K8s node data plane
  • IP block for K8s namespaces
  • IP block for K8s namespaces not doing source NAT
  • IP pool for K8s Ingress or LoadBalancer service type
  • IP pool for source NATing K8s Pods in the namespaces
  • Two distributed firewall policies

Placing the components on the diagram for some clarity:

Nothing too complex, but creating and configuring all of this by hand takes time. Especially when doing it many times, which is not uncommon in my lab, it gets tedious.

Luckily, the NSX-T hierarchical policy API helps me out here. I simply specify the desired topology and its configuration as a piece of code and then tell the API to create it for me.

So here’s the JSON code for the topology and components above. If you want to use it yourself, make sure that you change the values for:

  • tier0_path – the path to your Tier-0 gateway
  • transport_zone_path – the path to your overlay transport zone
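
Abbreviated, the body looks something like this. The segment name is the one used later in this post; the Tier-1 name, IP block name, CIDRs, and gateway address are illustrative stand-ins, and the full file declares every component from the list above in the same pattern:

{
  "resource_type": "Infra",
  "children": [
    {
      "resource_type": "ChildTier1",
      "Tier1": {
        "resource_type": "Tier1",
        "id": "k8s-tier1",
        "tier0_path": "<tier0_path>",
        "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_LB_VIP", "TIER1_NAT"]
      }
    },
    {
      "resource_type": "ChildSegment",
      "Segment": {
        "resource_type": "Segment",
        "id": "k8s-nodemanagement",
        "transport_zone_path": "<transport_zone_path>",
        "connectivity_path": "/infra/tier-1s/k8s-tier1",
        "subnets": [ { "gateway_address": "10.190.22.1/24" } ]
      }
    },
    {
      "resource_type": "ChildIpAddressBlock",
      "IpAddressBlock": {
        "resource_type": "IpAddressBlock",
        "id": "k8s-pod-block",
        "cidr": "172.24.0.0/16"
      }
    }
  ]
}

The “k8s-nodetransport” segment, the no-SNAT IP block, the two IP pools, and the DFW policies follow in their matching Child* wrappers.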

I send this code as the body of a PATCH request to:

PATCH https://<nsx-mgr>/policy/api/v1/infra
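
With curl that becomes the following (the -k flag skips certificate verification, which is fine for a lab; topology.json is a file holding the body above):

curl -k -u admin:'<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  -d @topology.json \
  https://<nsx-mgr>/policy/api/v1/infra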

And in a matter of seconds the components are in place.

Ubuntu VMs

On the compute side my K8s cluster consists of three Ubuntu VMs: a master and two worker nodes. Each VM is configured with two NICs, one connecting to the “k8s-nodetransport” segment and the other to the “k8s-nodemanagement” segment:

To get these three VMs up and running as quickly as possible I built a vApp and stored it as a template in a vSphere content library:

Each of the VMs in this vApp template is pre-configured as follows:

  • Hostname
  • IP stack on the mgt NIC (see the netplan sketch after this list)
  • Persistent storage directories
  • Python
  • Docker
  • Kubernetes (installed, not initialized)
  • NSX Container Plug-in installation files
  • NSX Container Plug-in container image loaded to the local Docker repository
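
The IP stack on the mgt NIC, for instance, is nothing more than a netplan file. A minimal sketch for the master, assuming the management NIC comes up as ens160 and the transport NIC as ens192; the gateway and DNS addresses are assumptions, and the transport NIC is deliberately left unconfigured since NCP and OVS take it over later:

# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    ens160:
      addresses: [10.190.22.10/24]   # the master's management IP
      gateway4: 10.190.22.1          # assumed gateway on the mgt segment
      nameservers:
        addresses: [10.190.22.1]     # assumed DNS server
    ens192:
      dhcp4: false                   # transport NIC, claimed by NCP/OVS later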

K8s cluster

Once the vApp is deployed the first thing I do is to initialize the K8s cluster:

k8s-master:~$ sudo kubeadm init

Next, the two worker nodes are joined to the cluster. For example:

k8s-worker1:~$ sudo kubeadm join 10.190.22.10:6443 --token 8xlrqd.uuvi16c7bgacxihe --discovery-token-ca-cert-hash sha256:5ef8bae3ea509e9605bef2a931f0eeccce40da8ae857174df35fa9fd17d54371

At this point “kubectl get nodes” shows me:

kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master    NotReady   master   2m26s   v1.16.4
k8s-worker1   NotReady   <none>   67s     v1.16.4
k8s-worker2   NotReady   <none>   15s     v1.16.4

Without a CNI plug-in installed the “NotReady” status is expected.

NSX container plug-in

Before installing NCP I need to tag the three segment ports of the “k8s-nodetransport” segment as follows:

Scope           Tag (k8s-master)   Tag (k8s-worker1)   Tag (k8s-worker2)
ncp/node_name   k8s-master         k8s-worker1         k8s-worker2
ncp/cluster     k8s-cluster        k8s-cluster         k8s-cluster
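
The tags can be set in the NSX-T UI or with the same policy API. A sketch for the master’s port, assuming you have looked up the port ID first with a GET on /policy/api/v1/infra/segments/k8s-nodetransport/ports:

curl -k -u admin:'<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"tags": [{"scope": "ncp/node_name", "tag": "k8s-master"}, {"scope": "ncp/cluster", "tag": "k8s-cluster"}]}' \
  https://<nsx-mgr>/policy/api/v1/infra/segments/k8s-nodetransport/ports/<port-id>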

The ncp-ubuntu.yaml manifest that deploys NCP is already prepared for my lab environment. If you want to use it, make sure you change the values for the following settings so that they match your environment:

  • nsx_api_managers
  • nsx_api_user
  • nsx_api_password
  • overlay_tz
  • tier0_gateway

The manifest is aligned with the JSON that I use to create the NSX-T components.
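
Concretely, the ncp.ini sections inside the manifest’s ConfigMaps end up looking roughly like this. The cluster name has to match the ncp/cluster tag from the previous step:

[coe]
cluster = k8s-cluster

[nsx_v3]
nsx_api_managers = <nsx-mgr>
nsx_api_user = admin
nsx_api_password = <password>
overlay_tz = <your-overlay-transport-zone>
tier0_gateway = <your-tier0-gateway>

The same section also references the IP blocks and pools by name, which is why the manifest and the JSON from earlier have to agree.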

I install the NSX Container Plug-in from the master node by running:

kubectl apply -f ncp-ubuntu.yaml

After a minute or two the pods are running in their own “nsx-system” namespace:

kubectl get pods -n nsx-system
NAME                       READY   STATUS    RESTARTS   AGE
nsx-ncp-6978b9cb69-899q8   1/1     Running   0          2m8s
nsx-ncp-bootstrap-8879t    1/1     Running   0          2m8s
nsx-ncp-bootstrap-xlnqh    1/1     Running   0          2m8s
nsx-ncp-bootstrap-zqxh6    1/1     Running   0          2m8s
nsx-node-agent-7twld       3/3     Running   0          2m8s
nsx-node-agent-9n64w       3/3     Running   0          2m8s
nsx-node-agent-jww7g       3/3     Running   0          2m8s

The node status has changed to “Ready” now that NCP is installed:

NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    master   1h    v1.16.4
k8s-worker1   Ready    <none>   1h    v1.16.4
k8s-worker2   Ready    <none>   1h    v1.16.4

Deploying a workload

To have something to play around with I deploy a containerized WordPress in my K8s cluster. Here are the yaml files that I use to deploy WordPress in case you want to set this up yourself.

First I create a separate namespace for the workload:

kubectl create -f namespace.yaml
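
namespace.yaml is as small as manifests get; the name matches the -n flag used below:

apiVersion: v1
kind: Namespace
metadata:
  name: wp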

Next, I deploy WordPress in this namespace:

kubectl apply -k ./ -n wp
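
The -k flag makes kubectl look for a kustomization.yaml in the current directory. If you are building the yaml files yourself, something in the style of the official Kubernetes WordPress example does the job (the two deployment file names and the password are placeholders):

secretGenerator:
- name: mysql-pass
  literals:
  - password=<your-password>
resources:
- mysql-deployment.yaml
- wordpress-deployment.yaml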

Running a “kubectl get pods -n wp” shows me something like this:

kubectl get pods -n wp
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-55ddbf6d75-7zjc8         1/1     Running   1          109s
wordpress-mysql-78dddb6bf7-n8pvn   1/1     Running   0          109s

Running “kubectl get service -n wp” shows the external IP that is assigned by NSX-T from the “k8s-lb-pool”:

kubectl get service -n wp
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
wordpress         LoadBalancer   10.101.4.78   10.190.10.51   80:30008/TCP   3m38s
wordpress-mysql   ClusterIP      None          <none>         3306/TCP       3m38s
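
The external address shows up because the wordpress service requests type LoadBalancer, which NCP translates into a virtual server on the NSX-T load balancer with a VIP allocated from “k8s-lb-pool”. Trimmed to the essentials, the service definition that triggers this looks like:

apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: wordpress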

And browsing to “10.190.10.51” brings up a familiar page:

NSX-T container networking is operational. Happy blogging! 🙂

Summary

No rocket science here, but using the NSX-T hierarchical policy API is a time saver and so are vApp templates and yaml manifests. Put something like Ansible on top of this and you’re looking at a fully automated deployment of K8s with NSX-T.

Hopefully this post inspires or maybe even helps you set up your own NSX-T – K8s integration. It’s a pretty awesome solution and one I plan on covering in future posts as I learn more about it myself.

Stay tuned!
