vSphere with Tanzu LAB Environment Installation Step-by-Step (Including vCenter, vSAN, NSX, Supervisor Cluster and pfSense), Part 5

Evren Baycan
7 min read · Nov 3, 2022


NSX Manager installation on Physical Infrastructure

We import the NSX Unified Appliance OVA that we have downloaded.

We choose Medium as NSX Manager size.

NOTE: The Supervisor Cluster requires at least a Medium-sized NSX Manager.

Don’t forget to choose Thin Provision as the disk format!

Because deployment is on the Physical Infrastructure, we choose ACME-MANAGEMENT as Port Group.

We set the passwords for the NSX Manager Root and Admin users.

We enter the required hostname and network information for the NSX Manager.

We activate SSH and Root SSH Login.

After checking all the information, we can start the NSX Manager Import process.

After the import is completed, we power on the NSX Manager.

After the first power-on, it will take 4–5 minutes for the NSX Manager to fully deploy; the GUI will then be accessible.
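Rather than refreshing the GUI, readiness can also be polled from NSX Manager's `GET /api/v1/cluster/status` endpoint. A minimal sketch in Python — the response field names (`mgmt_cluster_status`, `control_cluster_status`) are my assumption of the API shape, so verify them against your NSX version's API reference:

```python
import json

def nsx_cluster_ready(status_json: str) -> bool:
    """Return True when both the management and control planes report STABLE."""
    status = json.loads(status_json)
    return (
        status.get("mgmt_cluster_status", {}).get("status") == "STABLE"
        and status.get("control_cluster_status", {}).get("status") == "STABLE"
    )

# Hypothetical sample response body, for illustration only:
sample = json.dumps({
    "mgmt_cluster_status": {"status": "STABLE"},
    "control_cluster_status": {"status": "STABLE"},
})
print(nsx_cluster_ready(sample))  # True once fully deployed
```

In practice you would fetch the JSON with an authenticated HTTPS request to the NSX Manager and loop until this returns True.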

We have completed all our installations on the Physical Infrastructure.

Now here is an important issue!!!

We set up the environment as a Nested one, without covering the Physical Infrastructure network and firewall configuration.

If you do not move the VMs we set up on the Physical Infrastructure to the same ESXi node and create a VM/Host rule, VLAN access will not work on the Nested Infrastructure.

On the Physical Infrastructure, if your VMs sit on different nodes, traffic travels node to node, which means it becomes subject to the Physical Infrastructure VLANs and firewall rules.

To overcome this, move the entire LAB environment to a single ESXi node and create a VM/Host rule like below!!!

This way, the VLANs will stay on the same node and you won’t encounter any problems.
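The point of the rule is simply that every lab VM is pinned to one physical host. A quick coverage check, sketched in Python with hypothetical VM and host names (the actual rule is created in the vSphere Client under Cluster > Configure > VM/Host Rules):

```python
# Hypothetical lab inventory; the names are illustrative, not from the article.
lab_vms = {"pfsense", "vcenter", "nsx-manager",
           "esxi-nested-1", "esxi-nested-2", "esxi-nested-3"}

# A "should run on" VM/Host rule keeps the whole lab on one physical node,
# so nested VLAN traffic never has to leave that host.
vm_host_rule = {
    "vm_group": set(lab_vms),
    "host_group": {"physical-esxi-01"},
    "type": "should_run_on",  # soft affinity; "must_run_on" would be a hard pin
}

def uncovered_vms(rule, vms):
    """Any lab VM missing from the VM group can land on another node."""
    return sorted(vms - rule["vm_group"])

missing = uncovered_vms(vm_host_rule, lab_vms)
print("OK" if not missing else f"Uncovered VMs: {missing}")  # prints "OK"
```

If you later add a VM to the lab (a new nested ESXi, for example), remember to add it to the VM group as well, or it will escape the rule.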

We login to NSX Manager via JUMP Server.

First, we need to enter a license. If you are not deploying an EDGE node, you can use NSX with the default license; since an EDGE node will be required for the Supervisor Cluster, we must enter the license.

We add vCenter via System/Fabric/Compute Managers.

Since we are going to use the Supervisor Cluster, we must activate Enable Trust!

We created our vCenter connection with NSX Manager.
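The same registration can also be driven through the fabric API (`POST /api/v1/fabric/compute-managers`). A sketch of the request body — the trust-related fields (`create_service_account`, `set_as_oidc_provider`) are my reading of how the Enable Trust toggle maps to the API, so check them against your NSX version's documentation:

```python
def compute_manager_payload(fqdn, username, password, thumbprint):
    """Build a request body for registering vCenter as a compute manager.

    Field names follow the NSX-T fabric API as I understand it; verify
    against the API reference for your NSX release before use.
    """
    return {
        "display_name": fqdn,
        "server": fqdn,
        "origin_type": "vCenter",
        "credential": {
            "credential_type": "UsernamePasswordLoginCredential",
            "username": username,
            "password": password,
            "thumbprint": thumbprint,
        },
        # These two correspond to the "Enable Trust" option in the GUI:
        "create_service_account": True,
        "set_as_oidc_provider": True,
    }

# Illustrative values only; use your own vCenter FQDN and SHA-256 thumbprint.
payload = compute_manager_payload(
    "vcenter.acme.local", "administrator@vsphere.local",
    "CHANGE-ME", "aa:bb:cc:dd",
)
```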

NSX Host Preparation on Nested Infrastructure

Currently, the ESXi nodes are not aware of NSX. Now we will deploy the NSX kernel modules to them. A Tunnel Endpoint (TEP) will then be created on each node, building the node-to-node overlay.

Since we are not using DHCP, we are going to add an IP Pool for both the Host TEP and the EDGE TEP.

After entering a suitable name for the IP Pool, we add a Subnet under the Pool.

We enter an IP Range on the ACME-HOST-TEP VLAN/Subnet that we created on pfSense and define the other network information.

We create a second Pool and repeat the same operations, this time for the EDGE Pool.

We enter an IP Range on the ACME-EDGE-TEP VLAN/Subnet that we created on pfSense and define the other network information.

We created 2 different IP Pools for the HOST TEP and the EDGE TEP. NSX Manager will now use these Pools to distribute IPs as HOST and EDGE nodes are added.
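For reference, an IP Pool subnet boils down to a CIDR, an allocation range inside it, and a gateway. A sketch of such a subnet body in the shape of the NSX Policy API's `IpAddressPoolStaticSubnet` (the exact resource names are my assumption — check your NSX version), with a sanity check that the range actually falls inside the CIDR; the addresses below are illustrative, use the ranges you defined on pfSense:

```python
import ipaddress

def tep_pool_subnet(cidr, start, end, gateway):
    """Build a static-subnet body for an NSX IP pool and validate the range."""
    net = ipaddress.ip_network(cidr)
    for ip in (start, end, gateway):
        # Catch typos early: every address must live inside the pool's CIDR.
        assert ipaddress.ip_address(ip) in net, f"{ip} not inside {cidr}"
    return {
        "resource_type": "IpAddressPoolStaticSubnet",  # assumed resource name
        "cidr": cidr,
        "allocation_ranges": [{"start": start, "end": end}],
        "gateway_ip": gateway,
    }

# Illustrative lab values for the two TEP pools:
host_tep = tep_pool_subnet("172.16.50.0/24", "172.16.50.11", "172.16.50.50", "172.16.50.1")
edge_tep = tep_pool_subnet("172.16.60.0/24", "172.16.60.11", "172.16.60.50", "172.16.60.1")
```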

We create 2 Transport Zones, one for the Overlay network and one for the VLAN network.
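A transport zone itself is a very small object: a name plus a type. Sketched against the NSX Policy API, where I believe the type field is `tz_type` with values `OVERLAY_BACKED` / `VLAN_BACKED` (verify for your release); the display names are hypothetical:

```python
def transport_zone(name, tz_type):
    """Body sketch for an NSX Policy transport zone."""
    # Only two transport zone types exist: overlay-backed and VLAN-backed.
    assert tz_type in ("OVERLAY_BACKED", "VLAN_BACKED")
    return {"display_name": name, "tz_type": tz_type}

overlay_tz = transport_zone("ACME-OVERLAY-TZ", "OVERLAY_BACKED")
vlan_tz = transport_zone("ACME-VLAN-TZ", "VLAN_BACKED")
```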

We prepare 2 Uplink Profiles, one for the HOST nodes and one for the EDGE nodes.

First, the Host Profile.

We have 2 uplinks on the TRUNK VDS. Here we set Uplink-1 as Active and Uplink-2 as Standby.

We enter our ACME-HOST-TEP VLAN ID and add the profile.

Next, the EDGE Profile.

We have 2 uplinks on the TRUNK VDS. Here we set only Uplink-1 as Active.

NOTE: We are going to deploy 2 EDGE nodes later; they will run as a cluster.

We enter our ACME-EDGE-TEP VLAN ID and add the profile.

We have completed our HOST and EDGE Profile definitions.
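The two profiles differ only in their teaming lists and transport VLAN. A sketch of a builder for both, modeled on the NSX `UplinkHostSwitchProfile` with a `FAILOVER_ORDER` teaming policy (field names are my approximation of the API, and the VLAN IDs 50/60 are illustrative — use your ACME-HOST-TEP / ACME-EDGE-TEP VLANs):

```python
def uplink_profile(name, transport_vlan, active, standby=()):
    """Body sketch for an NSX uplink profile; verify fields per NSX release."""
    teaming = {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": u, "uplink_type": "PNIC"} for u in active],
    }
    if standby:
        # Only the Host Profile defines a standby uplink in this lab.
        teaming["standby_list"] = [{"uplink_name": u, "uplink_type": "PNIC"} for u in standby]
    return {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,
        "transport_vlan": transport_vlan,
        "teaming": teaming,
    }

host_profile = uplink_profile("ACME-HOST-UPLINK-PROFILE", 50,
                              active=["Uplink-1"], standby=["Uplink-2"])
edge_profile = uplink_profile("ACME-EDGE-UPLINK-PROFILE", 60,
                              active=["Uplink-1"])
```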

Now we will define a Transport Node Profile. Host Preparation will be done through this profile.

The Supervisor Cluster does not support N-VDS! This switch type is now End-of-Support, so we must continue with VDS!

We use ACME-TRUNK-VDS as the VDS!

Here we select and add all the Pool and Profile information that we have created before.
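Conceptually, the Transport Node Profile is just the glue that references everything created so far: the existing VDS, both transport zones, the host uplink profile, and the HOST TEP pool. A sketch of that structure, loosely modeled on the transport-node API (field names are my approximation, and the object names are the illustrative ones used earlier):

```python
# How the transport node profile ties the earlier objects together.
transport_node_profile = {
    "display_name": "ACME-TNP",  # hypothetical profile name
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "ACME-TRUNK-VDS",   # the existing VDS, not N-VDS
            "host_switch_mode": "STANDARD",
            "transport_zone_endpoints": [
                {"transport_zone_id": "ACME-OVERLAY-TZ"},
                {"transport_zone_id": "ACME-VLAN-TZ"},
            ],
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "ACME-HOST-UPLINK-PROFILE"},
            ],
            # TEP addressing comes from the HOST TEP pool, since there is no DHCP.
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "ACME-HOST-TEP-POOL",
            },
        }],
    },
}
```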

We come to the Host Transport Nodes tab and select our vCenter.

After selecting at the cluster level, we choose Configure NSX.

We select the Transport Node Profile we added and click APPLY to start the Host Preparation process.

This process takes about 5 minutes; NSX Manager connects to the ESXi nodes one by one and installs the NSX kernel modules as VIBs.

You should definitely see the Success and UP status after Host Preparation is complete.

We check the NSX VMkernel adapters on the ESXi nodes.

We test access between the NSX VMkernel (TEP) adapters.
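TEP-to-TEP connectivity is typically verified from an ESXi shell with `vmkping` over the overlay netstack (for example `vmkping ++netstack=vxlan -I vmk10 <peer-TEP-IP>`, where the vmk number depends on your host). If you script such checks, a small helper can judge the ping summary line; the sample output below is hypothetical but follows the standard ping summary format:

```python
import re

def tep_ping_ok(output: str) -> bool:
    """Return True when a ping summary reports 0% packet loss."""
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    return m is not None and float(m.group(1)) == 0.0

# Hypothetical vmkping summary line, for illustration:
sample = "3 packets transmitted, 3 packets received, 0% packet loss"
print(tep_ping_ok(sample))  # True
```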

If there is no access, there is definitely a problem, and it must be fixed before continuing!

Coming up in Part 6:

  • NSX EDGE Node installation and configuration on Nested Infrastructure
  • T0 Router configuration on Nested Infrastructure
  • BGP Dynamic Routing configuration on Nested Infrastructure
  • Underlay Routing test on T1 Router with NSX Overlay Segment
