Distributed Web Server with Load Balancer

This article discusses load balancing, from the basic definition to an example implementation. Load balancing is ideal for a website that needs to handle high traffic with zero downtime. Or perhaps your website is growing rapidly and upgrading a single server's specifications is no longer feasible, so you need to scale out by adding more servers. Either way, this article is meant as a starting guide.

Definition

The main idea of load balancing is to distribute work across more than one server; the goals are to reduce the load on each server and to deliver content faster. Load balancing is a technique used by many services, not only web servers: databases, file servers, and other systems use it too, under different names such as 'clustering', 'parallel systems', or 'distributed file systems'.

Since this article is about web servers, I will stick with the term load balancing.

Products

There are plenty of load balancer products to choose from; you can implement one in hardware or entirely in software.

If you prefer a hardware load balancer, vendors such as F5 and Barracuda have a lot of experience in this area. Like other network appliances, their devices also come with easy-to-use software for operating the hardware remotely.

But if you are on a tight budget, or you simply want to explore what your existing servers can do, software-based load balancing is a reasonable option. Much of this software is not a dedicated load balancer; many products offer load balancing as an optional feature rather than their main purpose, for example Squid (a proxy) and Nginx (a web server).

Since Squid and Nginx are the two products I have used before, this article only covers those two, including the examples later on.

Methods

A load balancer can work with many different algorithms. The most common methods are:

  • Round Robin: the load balancer passes each request to the next server in line. 
  • Weighted Round Robin: same as round robin, but the number of requests passed to each server is controlled by a weight parameter; the higher the weight, the more requests that server receives. 
  • Random: each request is passed to a randomly chosen server, without regard to server availability or performance.
  • and others.

Each algorithm has its own pros and cons. Do some research to find the method that best fits your overall server configuration and environment.
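To make the difference between the first two methods concrete, here is a minimal sketch in Python of round robin and weighted round robin selection. This is only an illustration of the algorithms, not how Nginx or Squid implement them internally; the IPs match the example configuration later in this article.

```python
from itertools import cycle

def round_robin(servers):
    """Yield servers in order, looping forever (plain round robin)."""
    return cycle(servers)

def weighted_round_robin(servers_with_weights):
    """Expand each server by its weight, then cycle through the result,
    so a server with weight 2 receives twice as many requests as weight 1."""
    expanded = [s for server, weight in servers_with_weights
                  for s in [server] * weight]
    return cycle(expanded)

# Plain round robin: each backend gets every other request.
rr = round_robin(["192.168.0.10", "192.168.0.11"])
first_four = [next(rr) for _ in range(4)]

# Weighted: .10 (weight 2) gets twice as many requests as .11 (weight 1).
wrr = weighted_round_robin([("192.168.0.10", 2), ("192.168.0.11", 1)])
first_six = [next(wrr) for _ in range(6)]
```

Real load balancers add failure detection and smoother interleaving on top of this, but the distribution logic is the same idea.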

Example Configuration

For this example I use one server as the load balancer (IP: 192.168.0.1) and two as backends (IP: 192.168.0.10 and 192.168.0.11). The load balancer does not need high specifications; the one thing to consider is a network card that supports high bandwidth.

Nginx

The first thing to do is define a cluster (upstream) name and the backend IPs:

upstream  my_cluster  {
         # 'fair' requires the third-party nginx-upstream-fair module;
         # remove this line to use the default round-robin method.
         fair;
         server 192.168.0.10 max_fails=3  fail_timeout=30s;
         server 192.168.0.11 max_fails=3  fail_timeout=30s;
}
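If you want weighted round robin instead, Nginx's built-in round robin supports a weight parameter directly on each server line. A sketch, with illustrative weights:

```
upstream  my_cluster  {
         # 192.168.0.10 receives roughly twice as many requests as .11
         server 192.168.0.10 weight=2 max_fails=3  fail_timeout=30s;
         server 192.168.0.11 weight=1 max_fails=3  fail_timeout=30s;
}
```

This can be useful when one backend has noticeably better hardware than the other.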

Then we tell Nginx to act as the load balancer by referencing my_cluster in the server section:

server {
        listen       192.168.0.1:80;
        server_name  _;

        location / {
              proxy_pass        http://my_cluster;
              proxy_set_header  Host $host;
              proxy_set_header  X-Real-IP $remote_addr;
              proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}
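The proxy_set_header lines are what let the backends see the real client address instead of only the load balancer's IP. In particular, $proxy_add_x_forwarded_for appends the connecting client's address to any X-Forwarded-For value already present. A rough Python sketch of that behavior (the addresses here are made up for illustration):

```python
def proxy_add_x_forwarded_for(existing_header, remote_addr):
    """Mimic Nginx's $proxy_add_x_forwarded_for variable: append the
    connecting client's address to an existing X-Forwarded-For value,
    or start a new one if the header is absent."""
    if existing_header:
        return existing_header + ", " + remote_addr
    return remote_addr

# Request arriving directly from a client:
direct = proxy_add_x_forwarded_for("", "203.0.113.7")

# Request that already passed through another proxy:
chained = proxy_add_x_forwarded_for("198.51.100.2", "203.0.113.7")
```

This is why backends behind several proxies see a comma-separated chain of addresses in X-Forwarded-For.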

Restart the Nginx service, and Nginx is now running as a load balancer.

For the complete configuration, see this nginx.conf.

Squid

Squid is even simpler to configure. First, tell it to accept HTTP traffic on port 80 in virtual-host mode:

http_port 80 vhost

Then define the backend IPs:

cache_peer 192.168.0.10 parent 80 0 no-query originserver round-robin
cache_peer 192.168.0.11 parent 80 0 no-query originserver round-robin
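Squid can also bias the distribution between backends: cache_peer accepts a weight=n option that makes heavier peers receive more requests. A sketch with illustrative weights:

```
cache_peer 192.168.0.10 parent 80 0 no-query originserver round-robin weight=2
cache_peer 192.168.0.11 parent 80 0 no-query originserver round-robin weight=1
```

Check the cache_peer documentation for your Squid version, as the available peer-selection options vary between releases.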

Restart or reload the Squid service for the changes to take effect.

For the complete configuration, see this squid.conf.

As you can see, a load balancer configuration is similar to a reverse proxy configuration, because they basically work the same way. For more on reverse proxies, you can read my Reverse Proxy article.

Summary

Load balancing is a technique worth considering when the load on your web server keeps growing, whether from increasing traffic or more complex application needs. Every load balancer offers its own features and methodology, so researching and reading more articles and blogs will help you make a decision.


source: kompasiana

Boyke D Triwahyudhi is a server guy who loves designing systems not only around technology but with an architectural approach as well. He grew up as a Delphi programmer and currently enjoys the web and mobile ecosystems.
