VMware: Horizon Load Balancing

At the beginning of a new VMware Horizon project, we must start by designing the architecture. During the design phase or the workshops with a customer, High Availability (HA) and load balancing will be discussed at some point.

Having your setup highly available and load-balanced makes your environment production-worthy. It fails over components when they break, so there is no user impact. That is handy not only in the case of a failure, but also when upgrading the environment while no downtime is allowed. Furthermore, it spreads the load over multiple components.

Component overview

A VMware Horizon infrastructure consists of Connection Servers and Unified Access Gateways (UAG). The Connection Server brokers the client connection after the user has authenticated. This component is placed on your internal network. Since users want to access their desktops or applications over the internet, we also need a UAG. This component is typically placed in the DMZ to add security and to act as a proxy for the user connections.

Overview

Single Datacenter

DMZ

Starting from the top, our external users connect over the internet to the UAGs. Following best practices, our public IP is translated via NAT to an IP in the DMZ. This NAT should be configured for the following ports:

  • 443 for the brokering
  • 4172 (TCP and UDP) in case of PCoIP
  • 8443 (TCP and UDP) for the Blast Extreme display protocol
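As a hedged sketch only, assuming a Linux box performing the NAT (the addresses below are placeholder examples, and your actual edge device will have its own syntax), the port forwarding could look like this:

```shell
#!/bin/sh
# PUBLIC_IP: the internet-facing address; LB_VIP: the DMZ VIP of the
# UAG load balancer. Both addresses are hypothetical examples.
PUBLIC_IP="203.0.113.10"
LB_VIP="192.0.2.10"

# 443 carries the brokering traffic over TCP.
iptables -t nat -A PREROUTING -d "$PUBLIC_IP" -p tcp --dport 443 \
  -j DNAT --to-destination "$LB_VIP"

# PCoIP (4172) and Blast Extreme (8443) use both TCP and UDP.
for port in 4172 8443; do
  for proto in tcp udp; do
    iptables -t nat -A PREROUTING -d "$PUBLIC_IP" -p "$proto" --dport "$port" \
      -j DNAT --to-destination "$LB_VIP"
  done
done
```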

The DMZ IP of our NAT is the VIP of our load balancer. This load balancer points to one of the two UAGs. The health monitor and the persistence settings of the load balancer are covered later, since they are the same for the UAGs and the Connection Servers. Once connected through the UAG, a connection is established to the Connection Servers for the brokering.

Internal

The Unified Access Gateways connect to the VIP of the Connection Servers. The same VIP can also be used for internal clients. This load balancer will, in turn, monitor the health of the Connection Servers and direct each connection to the least-used Connection Server.
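As an illustrative sketch (not tied to any particular load balancer product), the "least used" decision boils down to picking the member with the fewest active sessions; the server names and session counts below are made up:

```shell
#!/bin/sh
# least_connections: given "name:active_sessions" pairs, print the name
# of the member with the fewest active sessions.
least_connections() {
  printf '%s\n' "$@" | sort -t: -k2 -n | head -n1 | cut -d: -f1
}

# Example with two hypothetical Connection Servers:
least_connections "cs01:12" "cs02:7"   # → cs02
```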

Two Datacenters

When using two data centers, the whole setup is duplicated. You can follow it in the illustration below:

DMZ

As you may notice, the VIP used for the UAGs is placed across our two data centers. From that point of view, we can connect to every Unified Access Gateway to spread the load. Once connected to one of the UAGs, we stay within that data center to avoid cross-data-center connections. Otherwise, a user's published desktop connection could go through a UAG in DC01 while the VDI or RDSH host is placed in DC02.

Internal

The internal user connections now go through a cross-data-center VIP. So internally, we do have an EXTRA VIP in place. The entitlements of both data centers are handled by VMware Cloud Pod Architecture (CPA). If you want to know more about CPA, enjoy reading my friend and colleague Jens Herremans' blog post: https://cloud-duo.com/2020/06/vcap7-dtm-design-study-guide-part-4/.

Health monitoring

To monitor the health of your Connection Servers and UAGs, you can use the same health monitor. It consists of a few values that you must implement to have a correct monitor.

Monitoring String: 

  • GET /favicon.ico HTTP/1.0
  • GET /favicon.ico HTTP/1.1
  • HEAD /favicon.ico HTTP/1.0
  • HEAD /favicon.ico HTTP/1.1

The HEAD monitoring string is the preferred option, since it is optimized for health monitoring.
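As a minimal sketch of what such a monitor does (the hostname is a placeholder for your own UAG or Connection Server, and the "expect HTTP 200" rule is the typical default), you can reproduce the probe with curl:

```shell
#!/bin/sh
# HOST is a hypothetical example; replace it with your own server.
HOST="cs01.example.local"

# evaluate_health turns an HTTP status code into a health verdict,
# mirroring the usual "expect 200" logic of a load balancer monitor.
evaluate_health() {
  if [ "$1" = "200" ]; then
    echo "healthy"
  else
    echo "unhealthy (HTTP $1)"
  fi
}

# -I sends a HEAD request (the preferred monitoring string above);
# -k skips certificate validation, as most monitors do by default.
if command -v curl >/dev/null 2>&1; then
  status=$(curl -skI --max-time 5 -o /dev/null -w '%{http_code}' \
    "https://${HOST}/favicon.ico")
  evaluate_health "$status"
fi
```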

Polling interval: 30 secs (default recommended)

Response Timeout: 91 secs

This is the default and the recommendation of VMware. In smaller environments, where there is less load on the Connection Servers, we sometimes reduce the polling interval to 5 seconds and the timeout to 3 seconds. This gives faster feedback on how the server is behaving. Be advised that lowering these values below the recommended settings may add more load on your UAGs and Connection Servers!

Persistency

VMware recommends Source IP persistence as the persistence profile of the load balancer. In many cases, this profile must be shared between the other load balancer VIPs, so that every associated virtual server uses the same member as its path.
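To make the idea concrete, here is a toy sketch of source-IP persistence: the client's IP deterministically maps to one member, so repeated connections from the same address always land on the same server. The octet-sum "hash" and the member count are illustrative only; a real load balancer uses a proper hash or a persistence table.

```shell
#!/bin/sh
# pick_member: deterministically map a client IPv4 address to a backend
# index, so the same source IP always reaches the same member.
pick_member() {
  ip="$1"; members="$2"
  # Sum the four octets as a toy stand-in for a real hash function.
  echo "$ip" | awk -F. -v m="$members" '{ print ($1+$2+$3+$4) % m }'
}

# The same client IP always maps to the same member index:
pick_member "10.0.0.5" 2   # → 1
pick_member "10.0.0.5" 2   # → 1 (sticky)
```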

In my next blog, I will try to show you how I configured this setup with VMware Avi Networks (NSX Advanced Load Balancer).

If there is any feedback, do not hesitate to add some comments below.

