Sep 30 2019

How Businesses Leverage Load Balancers and Reverse Proxies for Smoother Web Experiences

Reverse proxies and load balancers, delivered as appliances or software, help keep web traffic flowing as a business grows.

Companies continue to grow their portfolios of web apps and services, but as they scale, they must put some form of traffic management in place or risk problems with performance and functionality.

No business wants to struggle with website congestion from heavy usage. And no business has to.   

What Is a Load Balancer?

To start with, organizations can invest in load-balancing technology to properly distribute application or network traffic among multiple back-end servers. A load balancer receives and routes client requests for application, text, image or video data to any server in a pool that is capable of fulfilling them and then returns the server’s response to the client.

That way, no single system is overworked, which maximizes speed and capacity utilization for better performance, and no particular application server becomes a single point of failure, which ensures high availability.
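
To make that routing decision concrete, here is a minimal Python sketch, assuming a hypothetical pool of back-end addresses and a placeholder health probe; the `BACKENDS` list and the `is_healthy` and `route_request` names are illustrative, not taken from any product.

```python
import random

# Hypothetical pool of back-end servers; these addresses are placeholders.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

def is_healthy(server: str) -> bool:
    """Stand-in for a real health probe over the network."""
    return True

def route_request(path: str) -> str:
    """Pick any healthy server capable of serving the request.

    Because each request can go to any surviving server, no single
    machine is overworked or becomes a single point of failure.
    """
    candidates = [s for s in BACKENDS if is_healthy(s)]
    if not candidates:
        raise RuntimeError("no healthy back-end servers available")
    server = random.choice(candidates)
    # A real balancer would now forward the request to `server`
    # and relay that server's response back to the client.
    return server

print(route_request("/images/logo.png"))
```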


What Is a Reverse Proxy?

Reverse proxies are another option. They serve as gateways that web traffic must pass through; the proxy forwards each request to a server that can fulfill it and then returns that server’s response. “They sound a lot like load balancers, but they are different,” says Bob Laliberte, practice director and senior analyst at Enterprise Strategy Group, a research firm. Reverse proxies handle HTTP traffic and application programming interfaces, while load balancers can deal with multiple protocols: HTTP as well as the Domain Name System, Simple Mail Transfer Protocol and Internet Message Access Protocol.

“It certainly is possible that they could be used together,” Laliberte says. “It would probably depend on the environment (size) and required services.”

Reverse proxies shouldn’t be confused with forward proxies, which are used not for load balancing but to pass requests from a private network out to the internet through a firewall; they can also act as cache servers to reduce outbound traffic.

How Do Reverse Proxies Work?

Reverse proxies have built-in security services, including authentication and abstraction of the web servers behind them. When a reverse proxy intercepts a request, only the proxy’s own IP address is revealed, which protects the identity of the back-end server and makes it harder to mount a targeted attack against that system.
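
As a rough illustration of that shielding, here is a minimal Python sketch of a reverse proxy that forwards GET requests to a single hypothetical back-end at a private address and relays the response, so clients only ever see the proxy itself. Production reverse proxies add header handling, HTTPS, caching and much more; the back-end address below is invented.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical hidden back-end; clients only ever see the proxy's own address.
BACKEND = "http://10.0.0.11:8080"

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the back-end server the client never learns about.
        with urlopen(BACKEND + self.path) as upstream:
            status = upstream.status
            body = upstream.read()
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # Relay the back-end's response to the client.

if __name__ == "__main__":
    # The proxy listens on its own address; the back-end's IP stays private.
    HTTPServer(("0.0.0.0", 8000), ReverseProxy).serve_forever()
```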

Most early load balancers did not have specific security functions, says Laliberte. They could be used in conjunction with security services to balance traffic to multiple security devices to accommodate large flows.

“As load balancers have evolved to application delivery controllers, these solutions now combine load balancing with security capabilities like web application firewall (WAF), distributed denial of service (DDoS) protection, SSL offload and authentication,” he says.


What Are the Primary Load Balancing Techniques?

There are different load-balancing algorithms that can govern traffic distribution. The most basic approach to distributing client requests across a group of servers is round robin. This algorithm distributes requests by choosing the next server in line, like the dealer in a card game, says Lori MacVittie, principal tech evangelist at F5 Networks. The first request is sent to server No. 1, the next to server No. 2, and so on down the line, cycling back to the first server once each has taken a turn.

But this algorithm doesn’t account for the fact that servers aren’t always similar enough to handle equivalent loads, even when they all hold the same number of connections. Servers with more RAM or faster CPUs don’t take precedence over the others, even though they probably should.

“It’s usually not a good way to distribute traffic,” says MacVittie, with the exception of handling requests for productivity apps, where availability is more important than speed. 
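
A minimal Python sketch of that card-dealer behavior, using an illustrative three-server pool (the server names are placeholders):

```python
import itertools

# Hypothetical pool; round robin ignores that these machines may differ in RAM or CPU.
SERVERS = ["server1", "server2", "server3"]
_dealer = itertools.cycle(SERVERS)

def round_robin() -> str:
    """Deal the next request to the next server in line, like a card dealer."""
    return next(_dealer)

# Requests land on server1, server2, server3, then back to server1, in strict rotation.
for request_id in range(6):
    print(request_id, round_robin())
```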

The other two most commonly used industry standard load-balancing algorithms are fastest response and least connections. The fastest response algorithm distributes requests by choosing the server with the fastest response time. Some requests are very time sensitive — financial trading, for instance — so it’s important to optimize for performance. “The least connections algorithm distributes requests by choosing the server with the least number of connections, like choosing the shortest line at the supermarket or bank,” MacVittie explains.
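
Here is a rough Python sketch of both rules, assuming the balancer tracks per-server connection counts and recent response times; the server names and numbers are invented for illustration.

```python
# Hypothetical per-server state a balancer might track; the numbers are invented.
active_connections = {"server1": 12, "server2": 4, "server3": 9}
recent_response_ms = {"server1": 48.0, "server2": 75.0, "server3": 31.0}

def least_connections() -> str:
    """Choose the server with the fewest open connections (the shortest line)."""
    return min(active_connections, key=active_connections.get)

def fastest_response() -> str:
    """Choose the server that has been answering most quickly, for time-sensitive work."""
    return min(recent_response_ms, key=recent_response_ms.get)

print(least_connections())   # server2, with only 4 open connections
print(fastest_response())    # server3, at 31 ms recent response time
```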

Load Balancers Can Be Acquired as Hardware or Software

Load balancers are available as both hardware appliances and software applications.

Hardware load balancers, an option for on-premises or colocation environments, are typically built with a proprietary, application-specific integrated circuit to optimize performance, Laliberte says. Every vendor of a physical load-balancing appliance also offers a software-only version that runs on a virtual machine, he says.

Software load balancers are available on public cloud platforms such as Microsoft Azure and Google Cloud as well as in colocation and on-premises environments. They can meet performance requirements by scaling out.

“Really, the big difference today would be if the load-balancing software is simply a lift-and-shift from a hardware appliance or if it is a cloud-native design — that is, purposefully built for a software or cloud-based environment,” Laliberte says. “Typically, these would leverage a microservices architecture and run on containers, thus enabling rapid development and bug fixes and making them more attractive to cloud-based and DevOps environments.”

What Are the Top Load Balancers in 2019?

To choose between hardware and software load balancers for on-premises and colocation environments, businesses calculate how many apps they run and how much traffic they handle.

“If there are hundreds or thousands of apps and requests, hardware is a better choice,” says MacVittie. “You need just one piece of hardware versus many software load balancers to scale at the same rate.” Software load balancers are more affordable and are better choices for businesses with fewer apps and less traffic volume.

Yet many businesses may need both. Hardware appliances are designed to provide the best load balancing for specific tasks in their environments, and the same customers may also prefer software load balancers for their cloud requirements.

Vendors offering hardware load balancers include Barracuda, Citrix, F5, Fortinet, Kemp, Radware and Riverbed. Microsoft Azure and Google have emerged as significant players in the cloud load-balancing space, says Laliberte. Even more recently, companies like F5-owned NGINX and VMware-owned Avi Networks have emerged as cloud-native companies.

Analyst firm MarketsandMarkets estimates that the market for load balancers will grow from $2.6 billion in 2018 to $5 billion by 2023, a sign that the technology is gaining ground as more businesses scale their web services.
