VMware NSX-T 2.4: High-Level Architecture

Across all aspects of IT we can see a digital transformation. With the rise of cloud-centric applications, the network has to be reconsidered at an architectural level: those applications and their data must remain secure, even as applications grow rapidly in number and complexity.

For that reason, VMware released NSX-T 2.4 as a network virtualization platform. It can span multiple hypervisors, containers, bare-metal operating systems and even public clouds. NSX-T 2.4 is a framework to easily manage, and increase the visibility of, environments that contain both VMs and containers. In this blog post, I will give an architectural overview of the communication within NSX-T. Previously, VMware offered NSX-V, which had the major limitation that it could only be used with vSphere. NSX-T does not have this limitation, since it can connect to other hypervisors and to public clouds.

In the architecture of NSX-T 2.4, VMware makes use of a management plane, a control plane and a data plane. This post gives a short summary of each plane.

Management Plane:

The management plane represents the graphical user interface of NSX-T. Through the NSX Manager we can configure and maintain the NSX configuration by performing tasks, running queries and visualizing statistics. Once a configuration is made, the NSX Manager stores it and transmits it to the control and data planes so it can be realized.

The management plane consists of two main parts:

  • The Management Plane Bus runs on all three nodes of the management cluster. Its responsibility is to validate the configuration and save a copy of it; after validation, the configuration is pushed to the Central Control Plane (CCP).
  • Management Plane Agent (MPA): the MPA makes statistics of services and an inventory of all workloads from multiple compute domains available to the NSX Manager.
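
Because the NSX Manager exposes its configuration and inventory through a REST API, the management plane can also be driven programmatically. Below is a minimal sketch that lists the configured logical switches via the NSX-T management API; the manager hostname and credentials are placeholders for your own environment.

```python
# Minimal sketch: query the NSX Manager (management plane) over its REST API.
# Assumptions: a manager reachable at nsx.example.local, basic-auth
# credentials, and a lab setup with a self-signed certificate (verify=False).
import requests

NSX_MANAGER = "https://nsx.example.local"   # placeholder hostname
AUTH = ("admin", "VMware1!VMware1!")        # placeholder credentials

# List the logical switches known to the management plane.
resp = requests.get(f"{NSX_MANAGER}/api/v1/logical-switches",
                    auth=AUTH, verify=False)
resp.raise_for_status()

for switch in resp.json().get("results", []):
    print(switch["id"], switch["display_name"])
```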

Control Plane:

The control plane provides logical switching, routing and related network services. It does so through two components: the Central Control Plane (CCP) and the Local Control Plane (LCP).

  • The CCP nodes together form a cluster of virtual machines, which provides redundancy and allows resources to scale. The CCP is logically separated from the data plane, so a failure in the control plane does not affect data plane traffic; no user data ever passes through the control plane.
  • The LCP runs as a daemon on the transport nodes. Its responsibility is to push the configuration received from the CCP to the forwarding engines of the data plane.

NSX Manager Appliance:

In NSX-T 2.4, the NSX Manager appliance combines both the NSX Manager and the NSX Controller roles. Three appliances are required for cluster availability; they are deployed in one cluster for scaling and redundancy purposes.
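
The health of this converged three-node cluster can be checked over the same REST API. The sketch below reads the overall cluster status; hostname and credentials are placeholders again, and the exact field names in the response are based on the NSX-T management API and may differ between versions.

```python
# Minimal sketch: check the state of the three-node NSX Manager cluster.
# Placeholder hostname/credentials; the response field names below are
# assumptions based on the NSX-T management API and may vary per version.
import requests

NSX_MANAGER = "https://nsx.example.local"
AUTH = ("admin", "VMware1!VMware1!")

resp = requests.get(f"{NSX_MANAGER}/api/v1/cluster/status",
                    auth=AUTH, verify=False)
resp.raise_for_status()
status = resp.json()

# Print the overall management- and control-cluster health.
print("management cluster:", status["mgmt_cluster_status"]["status"])
print("control cluster:", status["control_cluster_status"]["status"])
```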

Data Plane:

The data plane performs the transformation and forwarding of packets, based on the tables populated by the control plane. It is also responsible for reporting statistics back to the control plane.

A node that runs the local control plane daemon and the forwarding engine is called a transport node. A transport node includes the NSX Virtual Distributed Switch (N-VDS), which is used for the actual transport of the packets. A hypervisor transport node handles the network traffic of the workloads on that hypervisor. An Edge node, on the other hand, is used for centralized network services and for traffic that cannot be distributed across the hypervisor transport nodes.
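
The transport nodes that make up the data plane are inventoried by the management plane, so they can be listed the same way as before. The sketch below prints each registered transport node; as in the earlier examples, the hostname and credentials are placeholders.

```python
# Minimal sketch: list the transport nodes (hypervisor and Edge) that make
# up the data plane. Placeholder hostname/credentials, as in the earlier
# examples; verify=False is for a lab with a self-signed certificate.
import requests

NSX_MANAGER = "https://nsx.example.local"
AUTH = ("admin", "VMware1!VMware1!")

resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes",
                    auth=AUTH, verify=False)
resp.raise_for_status()

for node in resp.json().get("results", []):
    print(node["id"], node.get("display_name", "<unnamed>"))
```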

I hope this post gave you some more information about NSX-T. I used the VMware NSX-T reference design (https://communities.vmware.com/docs/DOC-37591) for this blog post. In the future, I will try to create a deep-dive technical blog post series on NSX-T. Hope to see you then!
