Configure Multi-Node Cluster

How to cluster RapidDeploy for load balancing and high availability

RapidDeploy instances running on different hosts can be configured together into a cluster for load balancing and high availability. There is no limit on the number of RapidDeploy instances you can add to the cluster. The following steps take you through the procedure to cluster RapidDeploy.

1. Enabling clustering

All RapidDeploy instances that you wish to cluster together need to be reconfigured with clustering mode enabled. Update the following framework property in the ${MV_HOME}/bin/rapiddeploy.properties file on each instance that is to be added to the cluster. By default this property is set to false (clustering disabled).

rapiddeploy.cluster.enable=true
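
As a convenience, the property can be flipped in place with a one-line sed command. This is just a sketch, assuming GNU sed and that the property already exists in the file:

    sed -i 's/^rapiddeploy.cluster.enable=.*/rapiddeploy.cluster.enable=true/' ${MV_HOME}/bin/rapiddeploy.properties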


2. Configure Database

All RapidDeploy instances must have access to the same RDBMS, which will be either the internal (default) HSQL database or an external database such as Oracle or DB2.

2.1 Internal HSQL database

If the RapidDeploy instances are located on different hosts and the (default) HSQL database is used as the primary RDBMS, follow these steps:

  1. Select one RapidDeploy instance to be the ‘primary’ instance. This is the instance where HSQL will run.
  2. On the primary RapidDeploy instance where HSQL will be running, add the following HSQL server socket listener address setting to the file ${MV_HOME}/bin/rapiddeploy.properties:
    rapiddeploy.built.in.hsqldb.server.address=0.0.0.0
  3. On each of the other RapidDeploy instances in the cluster, the built-in HSQL database server should be disabled and the database settings should point to the primary instance. Change the following settings, and review all database connection configuration, in the file ${MV_HOME}/bin/rapiddeploy.properties:
    rapiddeploy.datasource.url=jdbc:hsqldb:hsql://[Primary_RapidDeploy_IP]:9001/rapiddeploydb
    rapiddeploy.built.in.hsqldb.server.enabled=false
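
Before starting the secondary instances, it is worth verifying that the HSQL listener port on the primary host is reachable from each of the other hosts, for example with a quick probe (the host name here is hypothetical):

    nc -vz primary-rapiddeploy-host 9001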


2.2 External database

If you are using an external database such as Oracle or DB2, you just need to configure the database settings of each instance to point to the same external database. You can find more information about configuring external databases on the Configure Enterprise Database page and the Configure Datasource page.
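
For illustration, a datasource URL pointing at a shared external Oracle database might look like the following; the host, port, and service name are hypothetical:

    rapiddeploy.datasource.url=jdbc:oracle:thin:@//oracle-host:1521/RAPIDDEPLOY

An equivalent DB2 URL would use the jdbc:db2://host:port/database scheme.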

3. Shared Resources

A number of resources need to be accessible to all RapidDeploy instances. The following paths must be shared from the primary instance to each instance in the cluster via a cross-mounted filesystem.

  • ${MV_HOME}/projects is the location of the RapidDeploy project data stored on disk.
  • ${MV_HOME}/snapshots is the location of the RapidDeploy snapshot data stored on disk.
  • ${MV_HOME}/resources is the location of the RapidDeploy external resource data stored on disk.
  • ${MV_HOME}/libraries is the location of the RapidDeploy library data stored on disk.
  • ${MV_HOME}/logs is the location of the RapidDeploy log files.
  • ${MV_HOME}/promotionstore is the location of the promoted deployment packages.
  • ${MV_HOME}/buildstore is the location of the deployment packages when using the (default) filesystem-based artifact repository plugin.
  • ${MV_HOME}/users is the location of the RapidDeploy security role files.


3.1 Windows

On Windows hosts, the MV_HOME directory of the primary RapidDeploy instance can be shared over the network and mapped as a network drive on each other instance, and the shared paths can then be referenced via that drive letter. For example, if MV_HOME is mapped to the drive letter M:, a valid path for project logs would be M:/projects/MY_PROJECT/logs.
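
As a sketch, the mapping can be created with the built-in net use command; the host and share names here are hypothetical:

    net use M: \\PRIMARY_HOST\MV_HOME /persistent:yes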

3.2 Linux

On Linux, the MV_HOME directory of the primary RapidDeploy instance can be mounted into the filesystem of each other RapidDeploy instance, for example via the sshfs command, ideally at the same location as on the primary instance.
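
As a sketch, each shared path from section 3 can be mounted with sshfs; the user, host, and install path below are hypothetical, and -o reconnect re-establishes the mount after network interruptions:

    sshfs rapiddeploy@primary-host:/opt/rapiddeploy/projects /opt/rapiddeploy/projects -o reconnect

Repeat for each of the shared directories listed in section 3, or mount the whole MV_HOME at the same path as described above.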

4. Remote Agents

Any remote agents that have been configured to allow access only from certain IP addresses should be reconfigured to allow access from all of the RapidDeploy instances in the cluster.

5. SSH key files

Each RapidDeploy instance in the cluster should have access to the same key files as the other instances. Normally this can be achieved by cross-mounting any folder that contains the SSH keys.

6. Other Considerations

  1. Each RapidDeploy instance in the cluster should have its clock synchronized, for example via NTP. This ensures that any scheduled job is launched by the job scheduler from only one instance in the cluster, which is selected at random for each new job initiated.
  2. You will need to configure a load balancer to redirect your HTTP requests to the cluster. You should enable sticky sessions (session persistence), as in the sketch below. You do not normally need to enable any form of session replication.
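
As an illustrative sketch only (not RapidDeploy-supplied configuration), an HAProxy setup with cookie-based sticky sessions for a two-node cluster might look like this; the host names and ports are hypothetical:

    frontend rapiddeploy_front
        mode http
        bind *:80
        default_backend rapiddeploy_cluster

    backend rapiddeploy_cluster
        mode http
        balance roundrobin
        # Each client is pinned to the instance that served its first request
        cookie SERVERID insert indirect nocache
        server rd1 rd-node1:8080 check cookie rd1
        server rd2 rd-node2:8080 check cookie rd2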