Clustering

Functionality available

Baruwa is capable of running in a cluster. The cluster is divided into frontend and backend segments. Backend clustering is available in versions >= 2.1.7.

Full frontend Baruwa functionality is available from any member of a Baruwa frontend segment cluster, and all frontend segment members have equal status. This allows you to provide round-robin access using either load balancers or DNS configuration, making the running of the cluster completely transparent to end users.

Cluster-wide as well as per-node status information is visible via Global status and Scanner node status.

Requirements

Network quality

High-quality network links are required between the frontend and backend segments in a cluster.

Cluster Quorum

Warning

Do not set up a backend segment cluster with an even number of servers or with a single server, as you may lose your data if you do. The impact is not as severe on frontend servers.

To set up an efficient cluster you should have an odd number of servers of each system type. For example, if you are setting up a cluster of database servers, you need 3, 5, 7, 9, etc. servers of the database type.

Server location

Backend segment systems should be installed in different locations. If you install them all in the same location, you will have difficulty restoring service after a location-wide power failure takes down all your servers.

Bootstrap server

A bootstrap server is required to set up a cluster. It is the initial server used to bring up the cluster and can be of the backend or database profile. You only need one bootstrap server per cluster, and it is the first server that you should set up.

Note

For backwards compatibility with previous non-clustered backend systems, existing systems of the backend or database profile are automatically configured as bootstrap servers during the upgrade to BaruwaOS 6.9.1.

Root CA Key

A root CA is created on the bootstrap server; the public key of that CA is stored at /etc/pki/BaruwaCA/certs/BaruwaCA.pem. This public key must be copied to all members of the cluster before starting configuration.
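The copy can be done with scp from the bootstrap server; a minimal sketch, assuming three hypothetical member host names, shown here as a dry run that only prints the commands it would run:

```shell
# Dry-run sketch: print the scp command for each cluster member.
# Host names are hypothetical placeholders; remove the leading
# 'echo' to actually copy the CA public key.
for host in node1.example.com node2.example.com node3.example.com; do
    echo scp /etc/pki/BaruwaCA/certs/BaruwaCA.pem \
        "root@${host}:/etc/pki/BaruwaCA/certs/BaruwaCA.pem"
done
```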

Cluster Master Token and Cluster Encryption Key

During configuration of the bootstrap server, a Cluster Master Token and a Cluster Encryption Key are generated on the bootstrap server. These two values should then be supplied on any other cluster members that require them.

Shared quarantine

Since version 2.1.0, Baruwa has built-in shared quarantine synchronization without a shared storage system. Quarantined messages are synchronized between all cluster nodes, eliminating the need for the shared filesystem that was previously required. Because messages are synchronized between cluster members, any member can process requests to release, learn, or delete quarantined messages. Users are able to access messages even when the specific host that processed the message via SMTP is not accessible.

Note

This is a technology preview and at the moment could suffer performance degradation in high mail volume environments.

When you select Use Shared Quarantine in baruwa-setup, built-in synchronization is automatically enabled. If you wish to use a shared filesystem on Baruwa versions >= 2.1.0, you need to override the built-in synchronization by creating the file /etc/baruwa/sync.disable. You can do that by running the following command:

touch /etc/baruwa/sync.disable

In order for the cluster hosts to locate each other, you need to add them as nodes under Settings and provide the correct IP addresses. The cluster nodes synchronize on TCP port 1027. If some of your cluster nodes are behind a port-forwarding firewall, you need to forward port 1027 to the actual cluster node. If you have multiple nodes behind the same firewall, you should forward a different external port to port 1027 on each internal server. You then need to modify the scanning node under Settings and set the port to the one you have configured for that specific server on the firewall.
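As a sketch of the port-forwarding setup described above, assuming an iptables-based firewall and hypothetical interface and internal addresses, the DNAT rules might look like this:

```shell
# Hypothetical example: two cluster nodes behind one firewall.
# External port 1027 forwards to node A, external port 1028 to
# node B; both map to TCP 1027 on the internal servers.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1027 \
    -j DNAT --to-destination 192.168.1.20:1027
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1028 \
    -j DNAT --to-destination 192.168.1.21:1027
```

The node at 192.168.1.21 would then be configured under Settings with port 1028.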

Since version 2.0.1, Baruwa supports shared quarantines using shared storage subsystems such as NFS, GlusterFS, OCFS, etc. With a shared quarantine, message operations remain possible even when the node that processed the message is unavailable. To use a shared quarantine with a shared storage system you need to:

  • Mount the quarantine directory /var/spool/BaruwaScanner/quarantine on the shared storage subsystem
  • Check the Use Shared Quarantine checkbox of the Scanner Setting screen of baruwa-setup
  • Set a unique Cluster id for each node in the Cluster Settings screen of baruwa-setup
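For the mount step, a minimal /etc/fstab sketch assuming an NFS export (server name and export path are hypothetical placeholders):

```
# Hypothetical NFS export; adjust server name and export path.
nfs1.example.com:/exports/quarantine  /var/spool/BaruwaScanner/quarantine  nfs  defaults,_netdev  0 0
```

The _netdev option delays the mount until the network is up, which avoids boot-time failures on cluster nodes.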

Limitations

Host specific quarantines

Note

This limitation is not present when using a shared quarantine.

Quarantines are node specific, so messages quarantined on a failed node will not be accessible until the node is restored.

Management traffic

Note

This limitation is not present when using a clustered backend, available in versions >= 2.1.7.

Given that the primary function of the Baruwa System is processing of email, full high availability is limited to the mail processing function.

In the event of a backend server connectivity or functionality failure, email processing will NOT be disrupted and will continue to function normally.

The management interface, however, will be inaccessible in the event of a backend server connectivity or functionality failure.

When backend server connectivity or functionality is restored, the system will resynchronize and the management interface will return to normal functionality.

Memcached

Memcached does not support clustering; to set up backend clustering you need to disable memcached and use the built-in uwsgi cache system instead.

Load Balancers

Baruwa Enterprise Edition can be set up to use load balancers that support the PROXY protocol, the most popular being HAProxy.

To use Baruwa Enterprise Edition SMTP servers with these load balancers, you need to specify the load balancer IP addresses in the Load Balancer IP’s field on the MTA Settings screen in baruwa-setup.

To use Baruwa Enterprise Edition HTTP servers with these load balancers, you need to specify the load balancer IP addresses in the Load Balancer IP’s field on the Management Web Settings screen in baruwa-setup.

Haproxy

A sample configuration for HAProxy with both HTTP and SMTP being load balanced is shown below.

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        maxconn 4096
        chroot /var/lib/haproxy
        user haproxy
        group haproxy
        daemon

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        option redispatch
        retries 3
        maxconn 2000
        timeout connect      5000
        timeout client      50000
        timeout server      50000

listen http :80
        mode tcp
        option tcplog
        balance roundrobin
        server web1 192.168.1.20:80 check send-proxy
        server web2 192.168.1.23:80 check send-proxy

listen https :443
        mode tcp
        option tcplog
        balance roundrobin
        server web1 192.168.1.20:443 check send-proxy
        server web2 192.168.1.23:443 check send-proxy

listen smtp :25
        mode tcp
        no option http-server-close
        option tcplog
        timeout server 1m
        timeout connect 5s
        balance roundrobin
        server smtp1 192.168.1.22:25 send-proxy
        server smtp2 192.168.1.24:25 send-proxy

Fabio

Fabio is a new breed of proxy that supports the PROXY protocol, dynamic configuration, and service discovery. Baruwa registers its services in Consul, so Fabio can be used to proxy connections to Baruwa services.