Installing and configuring a RabbitMQ server cluster for Symantec Mobility Manager



Article ID: 178206


Products

Mobility Suite

Issue/Introduction

 

Resolution

This document describes how to set up a RabbitMQ server cluster for use with Symantec Mobility Manager. For information specific to RabbitMQ, visit www.rabbitmq.com.

Notes:

  1. This document assumes a CentOS 6.5 or Red Hat equivalent environment.
  2. This document applies to Symantec Mobility Manager 5.0 and later.

Recommended High-Availability RabbitMQ Cluster Configuration

A high-availability RabbitMQ cluster requires the following:

  • A minimum of two servers, each in a different data center (or, at a minimum, running on different hardware).
  • A high-reliability network connection between the RabbitMQ servers.

The Symantec Mobility Manager servers are configured to point to the single address of a master RabbitMQ server. 

Tip: For failover management, it's best if this address is an easily changed DNS entry. By doing so, if the main address becomes unavailable, you can swap the DNS entry and the Mobility Manager servers will automatically connect to the new address. This saves you from having to modify Mobility Manager directly.

Required Open Ports (for both master and slave servers):

  • 5672 - Broker connection
  • 55672 - Management-agent connections between servers
  • 15672 - Management connection
  • 4369 - Erlang port
  • 45000-45010 - Cluster ports. You can extend this port range in rabbitmq.config if you have more than 3 mirroring servers.
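On CentOS 6, these ports can be opened with iptables. The rules below are a sketch only; adjust the rule positions and persistence mechanism to your existing firewall policy:

```shell
# Open RabbitMQ cluster ports (CentOS 6 iptables; adapt to your policy)
sudo iptables -I INPUT -p tcp --dport 5672 -j ACCEPT          # broker connections
sudo iptables -I INPUT -p tcp --dport 55672 -j ACCEPT         # management-agent connections
sudo iptables -I INPUT -p tcp --dport 15672 -j ACCEPT         # management UI
sudo iptables -I INPUT -p tcp --dport 4369 -j ACCEPT          # Erlang port mapper (epmd)
sudo iptables -I INPUT -p tcp --dport 45000:45010 -j ACCEPT   # cluster port range
sudo service iptables save                                    # persist across reboots
```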

RabbitMQ Master Server

After RabbitMQ servers are set to actively cluster with one another, the "master and slave" distinction is blurred, because at the node level clustered RabbitMQ servers all perform the same role. Before clustering, however, there is a difference, and for convenience we'll use the terms "master" and "slave" to identify the two roles.

Before the cluster is established, a master node is prepared to seed the cluster. The other servers in the cluster ("slaves") finalize their connections to the master and then to one another.

Master installation and configuration

Use the following commands to install and configure the RabbitMQ server(s):

  1. Install rabbitmq package
    1. sudo yum install http://www.gtlib.gatech.edu/pub/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
    2. sudo yum clean all
    3. sudo yum install -y rabbitmq-server
       
  2. Configure rabbitmq

1. vi /etc/rabbitmq/enabled_plugins

   [rabbitmq_management,
    rabbitmq_management_agent].

2. vi /etc/rabbitmq/rabbitmq.config

   [
    {kernel, [
       {inet_dist_listen_min, 45000},
       {inet_dist_listen_max, 45010}
    ]},
    {rabbit, [
       {tcp_listeners, [5672]},
       {collect_statistics_interval, 10000},
       {heartbeat, 30},
       {cluster_partition_handling, autoheal},
       {default_user, <<"some username here">>},
       {default_pass, <<"some password here">>}
    ]},
    {rabbitmq_management, [ {http_log_dir, "/var/log/rabbitmq/mgmt"}, {listener, [{port, 15672}]} ] },
    {rabbitmq_management_agent, [ {force_fine_statistics, false} ] }
   ].

3. vi /etc/rabbitmq/rabbitmq-env.conf

                     NODENAME=rabbit@rabbitmaster (must be unique, can be arbitrary)

4. sudo chown rabbitmq:rabbitmq /etc/rabbitmq/*

5. sudo chmod 400 /etc/rabbitmq/*

6. vi /etc/hosts - add the following line (the hostname must match the content after "@" in the rabbitmq-env.conf NODENAME):

            127.0.0.1   rabbitmaster

7. sudo /etc/init.d/rabbitmq-server restart - a success message is displayed when the restart completes.
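Before moving on, you can optionally confirm that the master came up under the expected node name (a quick check, not part of the required steps; the node name below matches the NODENAME set above):

```shell
# The first line of output should name the node, e.g. rabbit@rabbitmaster
sudo /usr/sbin/rabbitmqctl status | head -n 1
```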

 

Slave server installation

Before you start, you’ll need the following:

  • The local IPv4 address of the rabbitmaster node you set up in the previous section.
  • The exact contents of /var/lib/rabbitmq/.erlang.cookie on the rabbitmaster server.

Note: The .erlang.cookie contents must be an exact match on all servers in the cluster.
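If you have SSH access between the hosts, one way to guarantee an exact match is to copy the cookie file directly rather than editing it by hand (the hostname and login account here are illustrative):

```shell
# Run on each slave: copy the master's cookie byte-for-byte
sudo scp root@rabbitmaster:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
```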

Optional pre-flight check

To make certain your slave(s) can communicate with the master, run the following command on each slave:

$ telnet <local ipv4 of master> 5672

The expected response is:

Trying 10.x.x.x...

Connected to 10.x.x.x. 

Escape character is '^]'.

Connection closed by foreign host.

Installation workflow

Execute the following commands to install each additional cluster server ("slave"):

  1. sudo yum install http://www.gtlib.gatech.edu/pub/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
  2. sudo yum clean all
  3. sudo yum install -y rabbitmq-server
  4. sudo vi /var/lib/rabbitmq/.erlang.cookie - copy the contents from the master's .erlang.cookie file into this file.
  5. Optional: To check the master and slave's erlang.cookie checksum, on each server, run the command:
    md5sum /var/lib/rabbitmq/.erlang.cookie

    Note: The checksum must match between all mirrored systems.

  6. sudo vi /etc/rabbitmq/enabled_plugins

     [rabbitmq_management,
      rabbitmq_management_agent].

  7. vi /etc/rabbitmq/rabbitmq.config

     [
      {kernel, [
         {inet_dist_listen_min, 45000},
         {inet_dist_listen_max, 45010}
      ]},
      {rabbit, [
         {tcp_listeners, [5672]},
         {collect_statistics_interval, 10000},
         {heartbeat, 30},
         {cluster_partition_handling, autoheal},
         {cluster_nodes, {['rabbit@rabbitmaster'], disc}},
         {default_user, <<"some username here">>},
         {default_pass, <<"some password here">>}
      ]},
      {rabbitmq_management, [ {http_log_dir, "/var/log/rabbitmq/mgmt"}, {listener, [{port, 15672}]} ] },
      {rabbitmq_management_agent, [ {force_fine_statistics, false} ] }
     ].
    Note: The cluster_nodes value exists only on the "slave" servers, and not on the "master" server.

  8. vi /etc/rabbitmq/rabbitmq-env.conf

     NODENAME=rabbit@rabbitslave (must be unique, can be arbitrary)

  9. vi /etc/hosts - add the following lines: 
    Note: iptomaster is the local IPv4 address of the master server (e.g., 10.x.x.x)

    iptomaster  rabbitmaster

    127.0.0.1   rabbitslave
     

  10. sudo chown rabbitmq:rabbitmq /etc/rabbitmq/*

  11. sudo chmod 400 /etc/rabbitmq/*

  12. sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie

  13. sudo chmod 600 /var/lib/rabbitmq/.erlang.cookie

  14. sudo /etc/init.d/rabbitmq-server restart

Verify node clustering

To check that the nodes are clustered, execute the command:

sudo /usr/sbin/rabbitmqctl cluster_status

 A list of running nodes is displayed, showing the master and the slaves.
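For reference, a healthy two-node cluster produces output similar to the following (node names match the examples in this article; exact formatting varies by RabbitMQ version):

```shell
$ sudo /usr/sbin/rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmaster ...
[{nodes,[{disc,[rabbit@rabbitmaster,rabbit@rabbitslave]}]},
 {running_nodes,[rabbit@rabbitmaster,rabbit@rabbitslave]}]
...done.
```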

 

Adding Users, Virtual Hosts, and Mirroring Policy

Use the following commands to provide each Mobility Manager cluster its own user and virtual host:

On the master:

  1. sudo rabbitmqctl add_vhost <name of vhost>
  2. sudo rabbitmqctl add_user <username> <password>
  3. sudo rabbitmqctl set_permissions -p <vhost> <username> '.*' '.*' '.*'
    Note: The three patterns at the end of the command ('.*' '.*' '.*') grant full configure, write, and read permissions, in that order. The <username> gets full control over the vhost.

  4. sudo rabbitmqctl set_policy -p <vhost> ha-all '.*' '{"ha-mode": "all", "ha-sync-mode": "automatic"}'
    Note: This command mirrors the vhost's queues across all nodes in the cluster. It is optional on a single, un-mirrored RabbitMQ server.
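As a worked example, with a hypothetical vhost named mobility and user mobilityuser (both names are illustrative; substitute your own values):

```shell
sudo rabbitmqctl add_vhost mobility
sudo rabbitmqctl add_user mobilityuser 'ChangeMe123'
sudo rabbitmqctl set_permissions -p mobility mobilityuser '.*' '.*' '.*'
sudo rabbitmqctl set_policy -p mobility ha-all '.*' '{"ha-mode": "all", "ha-sync-mode": "automatic"}'
```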