NSX-T version 3.0 brings a new routing construct to the table: VRF Lite. With VRF Lite we are able to configure per tenant data plane isolation all the way up to the physical network. Creating dedicated tenant Tier-0 Gateways for this particular use case is now a thing of the past!
With support for up to 100 VRFs per Tier-0 Gateway, we're also looking at quite a substantial improvement from a scalability point of view.
A closer look
VRF Lite in NSX-T 3.0 has the following main features and characteristics:
- Each VRF maintains its own routing table.
- A VRF acts as a virtual Tier-0 Gateway associated with a “parent” Tier-0 Gateway.
- Inter-VRF traffic is either routed through the physical fabric or directly by using static routes.
As a child object of a Tier-0 Gateway, the VRF inherits some attributes and configuration from its parent. Edge cluster, HA mode, BGP local AS number, and BGP graceful restart settings are inherited and can’t be changed at the VRF level. All other configuration is managed independently within the VRF. This includes external interfaces, BGP state, BGP neighbors, routes, route filters, route redistribution, NAT, and Edge firewall.
There are some things to keep in mind when working with NSX-T VRF Lite:
- Bandwidth is shared across all Tier-0 Gateways and VRFs.
- The Tier-0 Gateway’s HA mode (A/A or A/S) is inherited by the VRF. This is an important consideration when planning stateful services at the VRF level.
- Inter-SR routing for VRF routing instances is not possible today.
- Inter-VRF static routing does not work with NAT. Route through the physical fabric instead.
Not too bad.
Setting up VRF Lite
Let’s have a look at how to set up VRF Lite for two new tenants: Blue and Green.
In this simple walkthrough the assumption is that a Tier-0 Gateway with external interfaces is already configured. BGP should be enabled but peering or neighbor entries are not required. As a reference, in my lab environment the starting point looks like this:
A Tier-0 Gateway configured with Active/Standby HA mode and four external interfaces (two per Edge node) connecting it to the two ToRs.
Step 1 – Create VLAN segments
Just like its parent, a VRF needs VLAN-based uplink segments for establishing connectivity with the physical network:
Note that we configure a VLAN range indicating that the segment will be trunking VLANs within that range. Trunking segments are required for VRF uplinks.
In total we create four uplink segments for our two VRFs:
| Segment | VLAN range | Uplink Teaming Policy |
The uplink teaming policies make sure that traffic from each segment is steered towards specific Edge node N-VDS uplinks. This is done to establish a deterministic routing path.
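For reference, a trunked uplink segment like these can also be created through the NSX-T Policy API. Below is a minimal sketch of the request body; the segment ID, transport zone path, and VLAN range are example values from my lab, not fixed names:

```python
import json

# Hypothetical Policy API payload for a trunked VLAN uplink segment.
# Segment ID, transport zone path, and VLAN range are lab-specific examples.
segment_id = "uplink-blue-a"
payload = {
    "display_name": segment_id,
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default"
        "/transport-zones/edge-vlan-tz"  # assumed Edge VLAN transport zone
    ),
    # A VLAN *range* instead of a single VLAN ID makes this a trunking
    # segment, which is what VRF uplinks require.
    "vlan_ids": ["100-110"],
}

# Sent as: PATCH /policy/api/v1/infra/segments/<segment_id>
print(json.dumps(payload, indent=2))
```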
Step 2 – Create the VRFs
Creating VRFs in NSX Manager is done under Networking > Connectivity > Tier-0 Gateways > Add Gateway > VRF:
When creating a VRF we initially only need to specify a name and a parent Tier-0 Gateway:
After repeating this process for the “Green” VRF we have our two VRFs as well as the Tier-0 Gateway in place:
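The same step can be done via the Policy API. What turns a Tier-0 object into a VRF is the vrf_config block pointing at the parent gateway; in this sketch the parent Tier-0 ID "t0-gw" is an assumption from my lab:

```python
import json

# Hypothetical Policy API body for a VRF gateway. The vrf_config block
# referencing the parent Tier-0 is what makes this Tier-0 a VRF.
vrf_payload = {
    "display_name": "blue",
    "vrf_config": {
        # Path of the parent Tier-0 Gateway (ID "t0-gw" is a lab example)
        "tier0_path": "/infra/tier-0s/t0-gw",
    },
}

# Sent as: PATCH /policy/api/v1/infra/tier-0s/blue
print(json.dumps(vrf_payload, indent=2))
```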
Are we done now? No.
Step 3 – Create VRF external interfaces
Just like Tier-0 Gateways, VRFs need external interfaces to connect to the physical network.
In this scenario each VRF is configured with four external interfaces as specified in the table below:
| VRF | Interface Name | IP Address | Segment | Access VLAN |
As you remember, the segments that the external interfaces connect into are configured as trunk segments. Therefore we use the “Access VLAN” property on the VRF external interfaces to specify the BGP peering VLANs.
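As a rough sketch, this is what one such external interface could look like as a Policy API body. All names, paths, and addresses are invented lab examples; note how the peering VLAN is set per interface via access_vlan_id because the underlying segment is a trunk:

```python
import json

# Hypothetical Policy API body for a VRF external interface. Because the
# uplink segment is a trunk, the BGP peering VLAN is selected per
# interface with access_vlan_id. All values below are lab examples.
interface_payload = {
    "display_name": "blue-uplink-1",
    "type": "EXTERNAL",
    "subnets": [{"ip_addresses": ["192.168.100.2"], "prefix_len": 24}],
    "segment_path": "/infra/segments/uplink-blue-a",
    "access_vlan_id": 100,  # peering VLAN carried on the trunk segment
    "edge_path": (
        "/infra/sites/default/enforcement-points/default"
        "/edge-clusters/ec-1/edge-nodes/en-1"  # assumed Edge node path
    ),
}

# Sent under the VRF's locale-services, e.g.:
# PATCH /policy/api/v1/infra/tier-0s/blue/locale-services/default
#       /interfaces/blue-uplink-1
print(json.dumps(interface_payload, indent=2))
```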
Step 4 – Configure BGP
With the L2 connectivity in place we can move our focus to L3. As stated earlier, VRFs inherit their BGP local AS number and some other BGP settings from their parent Tier-0, but BGP neighbor configuration is done within each VRF.
Configuring BGP neighbors within a VRF is done exactly the same way as on a Tier-0 Gateway:
There’s not much to explain here. In this particular scenario each VRF gets two BGP neighbor entries (one for each ToR):
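In API terms a neighbor entry inside a VRF is pleasantly small, since the local AS is inherited from the parent Tier-0. A hedged sketch (peer address and AS number are made-up lab values):

```python
import json

# Hypothetical Policy API body for a BGP neighbor inside the "blue" VRF.
# The local AS number is inherited from the parent Tier-0, so only the
# peer details are needed here. Address and AS are lab examples.
neighbor_payload = {
    "display_name": "tor-a",
    "neighbor_address": "192.168.100.1",
    "remote_as_num": "65000",
}

# Sent as: PATCH /policy/api/v1/infra/tier-0s/blue/locale-services/default
#          /bgp/neighbors/tor-a
print(json.dumps(neighbor_payload, indent=2))
```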
Once the neighbor configurations are in place we can have a look at things from the Edge node CLI:
Here we see the two VRFs that we just configured.
After connecting a Tier-1 Gateway to each of the VRFs we can see that DR components are being instantiated for the VRFs:
From within a VRF context we can check things like the BGP neighbor status:
vrf 5
get bgp neighbor summary
And the BGP routing table for this particular VRF:
get bgp ipv4
Inevitably, VRFs add some complexity to NSX-T Edge routing. I recommend using the Network Topology map in NSX Manager which is a pretty nice tool for keeping an overview of the routing configuration:
The new VRF Lite feature introduced in NSX-T 3.0 is a great addition to the platform. It gives customers scalable data plane isolation all the way into the physical network. VRF Lite is easy to set up and maintain and will definitely become the go-to configuration in NSX-T multitenancy environments.
Thanks for reading.
Can a tier1 be connected to both a tier0 that’s part of a vrf and a tier0 in the “global” routing table?
If you mean “connect” in the sense of within the NSX-T management plane connecting a T1 construct to both a T0 construct and a VRF construct the answer is no. You can however “connect” through the data plane by using static routes.
So from the perspective of the Tier1, would I statically route the networks that I want to reach via the VRF to the Tier0 in that VRF and the auto-configured default route would hit the other Tier0 instance?
When you connect a Tier1 to a Tier0 or a VRF a default route is auto-plumbed on the Tier1 pointing to either the Tier0 or the VRF as next hop, but not both.
From the Tier1 point of view a Tier0 and a VRF are separate Tier0 gateways.
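As a hedged illustration, a static route like that is just another Policy API object on the Tier-0 or VRF. The network and next hop below are made-up example values, not a recommended design:

```python
import json

# Hypothetical Policy API body for a static route on a Tier-0/VRF, used
# to reach networks behind "the other" gateway through the data plane.
# Network and next-hop address are invented example values.
route_payload = {
    "display_name": "to-other-tenant",
    "network": "10.20.0.0/16",
    "next_hops": [{"ip_address": "100.64.64.1", "admin_distance": 1}],
}

# Sent as: PATCH /policy/api/v1/infra/tier-0s/blue
#          /static-routes/to-other-tenant
print(json.dumps(route_payload, indent=2))
```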
I see. If you want to provide both Internet services via NAT and access to a VRF for a virtual server behind a Tier1, is it possible to do that?
Yes that’s fine. One thing to keep in mind is the HA mode. If you want to do NAT on the VRF its parent Tier0 must have been configured with active/standby HA mode.
I guess I’m confused as to how that would be accomplished if you can only connect a Tier1 to a single Tier0. The connection would either be in the VRF or in the ‘global/default’ routing table.
To a single Tier0 or a single VRF. From the Tier1 point of view it does not make any difference. If it’s connected to a Tier0 you configure all of your stateful services on the Tier0 and if it’s connected to a VRF you configure them there. Either way your VM behind the Tier1 gets its stateful service (like NAT) and access to the physical network.
Hello, first of all thanks for the nice example and the way you explain it. Do you know how the configuration should be done if we want to do VRF leaking, for example between the Green VRF and the global T0? I tried it with a static route and scope but it doesn’t work; I even created a loopback in both of them but no luck.
Actually, just resolved the problem: the next hop of the VRF should be the next hop of the T0, and the next hop of the T0 should be the next hop of the VRF towards the T1, which is 100.64.64.x/31.
Thanks for sharing the information! Glad it works for you now.
Thanks for the excellent write-up @Rutger, have gone back to this article multiple times during a design I am writing, with my own test-environment in transit :).
First of all, thanks for your superb blog. I tried applying the same design in our setup.
But in my scenario I want a multi-tenant setup where all VRFs are on the same VLAN (a shared public IP subnet), and every tenant NSX-V Edge has its own public IP on its uplink interface.
We are planning a migration from V to T, so I was considering the same VRF Lite design, where the main T0 has a simple interface IP and every VRF uplink interface has a public IP. But as every VRF will sit on the same VLAN, I doubt the current 3.1 code supports it:
VLAN A—–Tenant 1_EDGE—-VM
VLAN A——Tenant 2_EDGE—VM
VLAN A——Tenant X_EDGE—VM
Any other workaround? Maybe I will try the API to see if it gets around this limitation.
Hi Rutger, brilliant article.
I don’t suppose you’d happen to know how to achieve this using the API in postman? My colleague and myself are trying to automate our migration to NSX-T and have a large number of Tier0 gateways to create with the necessary BGP config.
Hi and thanks for reading my blog. Creating Tier-0s with BGP config using the NSX-T API shouldn’t be a problem. With Postman you could probably use something like a data file. Another option would be to use Ansible or Terraform for this.
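To give a rough idea of the data-driven approach, here is a sketch that generates per-tenant Tier-0 and BGP neighbor payloads for the Policy API. Tenant names, peer addresses, and AS numbers are made-up examples, and the actual authenticated HTTP calls against NSX Manager are deliberately left out:

```python
import json

# Sketch: generate Policy API requests for a list of tenants instead of
# maintaining a Postman data file. All tenant data is invented; sending
# the requests (e.g. PATCH /policy/api/v1/infra/tier-0s/<id>) would
# require authenticated calls to NSX Manager and is not shown here.
tenants = [
    {"name": "tenant-blue", "peers": [("192.168.100.1", "65000")]},
    {"name": "tenant-green", "peers": [("192.168.200.1", "65000")]},
]

requests_to_send = []
for tenant in tenants:
    # One request to create the Tier-0 Gateway itself...
    requests_to_send.append({
        "url": f"/policy/api/v1/infra/tier-0s/{tenant['name']}",
        "body": {"display_name": tenant["name"],
                 "ha_mode": "ACTIVE_STANDBY"},
    })
    # ...and one per BGP neighbor under its locale-services.
    for ip, asn in tenant["peers"]:
        requests_to_send.append({
            "url": (f"/policy/api/v1/infra/tier-0s/{tenant['name']}"
                    f"/locale-services/default/bgp/neighbors/peer-{ip}"),
            "body": {"neighbor_address": ip, "remote_as_num": asn},
        })

print(json.dumps(requests_to_send, indent=2))
```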
Any idea if VRF can be used for per-tenant isolation (isolation of East-West traffic between the tenants)?
In my case, I have 3 different tenants which are connected to their respective Tier-1s. All the tenants’ Tier-1s connect to a single Tier-0, and currently all the tenants can ping each other because the Tier-0 is routing the traffic between the tenants’ Tier-1s.
Thanks in advance.
I would use the Gateway Firewall for tenant isolation.
Hope you are doing fine. I tried to look for this information, but everywhere I see a single Tier-1 connected to a VRF. Is it possible to connect multiple Tier-1s to a single VRF? I have tried connecting two Tier-1s to a VRF, but network communication is functional through only one of the Tier-1s connected to the VRF.