Install with a Caching LAM
In an HA architecture, LAM 1 and LAM 2 run in active/passive mode as an HA polling pair, and in active/active mode as an HA receiving pair.
Your backend LAM integrations connect to a local two-node RabbitMQ cluster; polling LAM pairs additionally connect to a local two-node MySQL cluster. The Caching LAM requires no outbound connectivity; you only need to configure inbound connectivity so that remote components can connect to the local RabbitMQ cluster and fetch messages from the bus.
HA architecture
In our distributed HA installation, the LAM components are installed on the LAM 1 and LAM 2 servers:
- LAM 1: Local RabbitMQ node 1 and local MySQL node 1.
- LAM 2: Local RabbitMQ node 2 and local MySQL node 2.
Refer to Distributed HA System Firewall for more information on connectivity within a fully distributed HA architecture.
Install LAM 1
- Install Moogsoft AIOps components on the LAM 1 server.

  On LAM 1 install the following Moogsoft AIOps components:

  ```
  yum -y install moogsoft-common-7.3* \
      moogsoft-db-7.3* \
      moogsoft-mooms-7.3* \
      moogsoft-integrations-7.3* \
      moogsoft-utils-7.3*
  ```

  Edit your ~/.bashrc file to contain the following lines:

  ```
  export MOOGSOFT_HOME=/usr/share/moogsoft
  export APPSERVER_HOME=/usr/share/apache-tomcat
  export JAVA_HOME=/usr/java/latest
  export PATH=$PATH:$MOOGSOFT_HOME/bin:$MOOGSOFT_HOME/bin/utils
  ```

  Source the ~/.bashrc file:

  ```
  source ~/.bashrc
  ```
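After sourcing ~/.bashrc, a quick sanity check can confirm the environment is in place. A minimal sketch; the exports simply mirror the values from this guide, so adjust them if your install paths differ:

```shell
# Sanity-check sketch: mirror the ~/.bashrc exports from this guide and
# confirm MOOGSOFT_HOME resolves. Paths are the defaults used above.
export MOOGSOFT_HOME=/usr/share/moogsoft
export PATH=$PATH:$MOOGSOFT_HOME/bin:$MOOGSOFT_HOME/bin/utils

if [ -n "$MOOGSOFT_HOME" ]; then
  echo "MOOGSOFT_HOME is set to $MOOGSOFT_HOME"
else
  echo "MOOGSOFT_HOME is not set" >&2
fi
```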
- Initialize the local Moogsoft AIOps RabbitMQ cluster node on the LAM 1 server.

  On LAM 1 initialize RabbitMQ:

  ```
  moog_init_mooms.sh -pz <zone>
  ```

  Note: For the zone, pick a value that is different from the one chosen for the main RabbitMQ cluster.
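The zone rule in that note can be checked mechanically before you run the init script. A small sketch; both zone names below are illustrative placeholders, not required values:

```shell
# Sketch: guard against reusing the main cluster's zone for the local
# LAM RabbitMQ cluster. Both zone names are examples only.
main_zone="MOOG"        # zone of the main (core) RabbitMQ cluster
lam_zone="MOOG_LAM"     # zone chosen for this local LAM cluster

if [ "$main_zone" = "$lam_zone" ]; then
  echo "zones clash: choose a different zone for the LAM cluster" >&2
else
  echo "zones differ: ok"
fi
```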
- Initialize the Moogsoft AIOps database.

  On LAM 1, run the following command to create the Moogsoft AIOps databases and populate them with the required schema:

  ```
  moog_init_db.sh -Iu root
  ```
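On a real server you can confirm the schema landed by running mysql -u root -e 'SHOW DATABASES'. A sketch of that check; moogdb and moog_reference are the usual Moogsoft AIOps schemas, and the listing below is simulated sample output rather than a live query:

```shell
# Sketch: simulated 'SHOW DATABASES' output. On a real LAM server you
# would capture this with: mysql -u root -e 'SHOW DATABASES'
databases="information_schema
moogdb
moog_reference
mysql"

# moogdb and moog_reference are the schemas moog_init_db.sh typically creates.
for db in moogdb moog_reference; do
  if echo "$databases" | grep -qx "$db"; then
    echo "$db present"
  else
    echo "$db missing" >&2
  fi
done
```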
- Configure system.conf on the LAM 1 server.

  - On LAM 1, edit the $MOOGSOFT_HOME/config/system.conf file and set the mooms.zone and mooms.brokers properties as follows. Substitute the hostnames:

    ```
    "zone" : "<ZONE>",
    "brokers" : [
        { "host" : "<LAM 1 server hostname>", "port" : 5672 },
        { "host" : "<LAM 2 server hostname>", "port" : 5672 }
    ],
    ```

  - In the same file, set the ha section as follows:

    ```
    "ha": { "cluster": "PRIMARY" },
    ```
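Once system.conf is edited, it can be worth verifying that the zone you set is what the file actually contains. A minimal sketch using a sample fragment in place of the real $MOOGSOFT_HOME/config/system.conf; the zone name and hostnames are placeholders:

```shell
# Sketch: extract the configured zone from a system.conf-style fragment.
# The heredoc stands in for $MOOGSOFT_HOME/config/system.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
"zone" : "MY_LAM_ZONE",
"brokers" : [
    { "host" : "lam1.example.com", "port" : 5672 },
    { "host" : "lam2.example.com", "port" : 5672 }
],
EOF

# Pull the first "zone" value out of the fragment.
zone=$(sed -n 's/.*"zone" *: *"\([^"]*\)".*/\1/p' "$conf")
echo "configured zone: $zone"
rm -f "$conf"
```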
Install LAM 2
- Install Moogsoft AIOps components on the LAM 2 server.

  On LAM 2 install the following Moogsoft AIOps components:

  ```
  yum -y install moogsoft-common-7.3* \
      moogsoft-db-7.3* \
      moogsoft-mooms-7.3* \
      moogsoft-integrations-7.3* \
      moogsoft-utils-7.3*
  ```

  Edit your ~/.bashrc file to contain the following lines:

  ```
  export MOOGSOFT_HOME=/usr/share/moogsoft
  export APPSERVER_HOME=/usr/share/apache-tomcat
  export JAVA_HOME=/usr/java/latest
  export PATH=$PATH:$MOOGSOFT_HOME/bin:$MOOGSOFT_HOME/bin/utils
  ```

  Source the ~/.bashrc file:

  ```
  source ~/.bashrc
  ```
- Initialize the local RabbitMQ cluster node 2 on the LAM 2 server.

  - On LAM 2 initialize RabbitMQ. Use the same zone as you specified for LAM 1:

    ```
    moog_init_mooms.sh -pz <zone>
    ```

  - Copy the /var/lib/rabbitmq/.erlang.cookie file from LAM 1 to the same location on the LAM 2 server, replacing the existing LAM 2 cookie. You may need to change the file permissions on the LAM 2 erlang cookie first to allow the file to be overwritten. For example:

    ```
    chmod 600 /var/lib/rabbitmq/.erlang.cookie
    ```

    After copying, make the LAM 2 cookie read-only:

    ```
    chmod 400 /var/lib/rabbitmq/.erlang.cookie
    ```

  - Restart the rabbitmq-server service:

    ```
    systemctl restart rabbitmq-server
    ```

  - Run the following commands to form the cluster:

    ```
    rabbitmqctl stop_app
    rabbitmqctl join_cluster rabbit@<LAM 1 server short hostname>
    rabbitmqctl start_app
    ```

    The short hostname is the full hostname excluding the DNS domain name. For example, if the hostname is ip-172-31-82-78.ec2.internal, the short hostname is ip-172-31-82-78. To find out the short hostname, run rabbitmqctl cluster_status on LAM 1.

  - Apply the HA mirrored queues policy. Use the same zone name as you specified for LAM 1:

    ```
    rabbitmqctl set_policy -p <zone> ha-all ".+\.HA" '{"ha-mode":"all"}'
    ```

  - Run rabbitmqctl cluster_status to verify the cluster status and queue policy. Example output:

    ```
    Cluster status of node rabbit@ldev02 ...
    [{nodes,[{disc,[rabbit@ldev01,rabbit@ldev02]}]},
     {running_nodes,[rabbit@ldev01,rabbit@ldev02]},
     {cluster_name,<<"rabbit@ldev02">>},
     {partitions,[]},
     {alarms,[{rabbit@ldev01,[]},{rabbit@ldev02,[]}]}]

    [root@ldev02 rabbitmq]# rabbitmqctl -p MOOG list_policies
    Listing policies for vhost "MOOG" ...
    MOOG    ha-all    .+\.HA    all    {"ha-mode":"all"}    0
    ```
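The short-hostname rule used by join_cluster above is easy to derive in the shell. A small sketch using the example hostname from the text:

```shell
# Sketch: derive the short hostname for "rabbitmqctl join_cluster" by
# stripping the DNS domain. The FQDN below is the example from the text.
fqdn="ip-172-31-82-78.ec2.internal"
short=${fqdn%%.*}    # drop everything from the first dot onward
echo "join target: rabbit@$short"
```

On the server itself, `hostname -s` gives the same result directly.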
- Initialize the Moogsoft AIOps database for polling LAMs on the LAM 2 server.

  On LAM 2, run the following command to create the Moogsoft AIOps databases and populate them with the required schema:

  ```
  moog_init_db.sh -Iu root
  ```
- Configure system.conf on the LAM 2 server.

  - On LAM 2, edit the $MOOGSOFT_HOME/config/system.conf file and set the mooms.zone and mooms.brokers properties as follows. Substitute the hostnames:

    ```
    "zone" : "<ZONE>",
    "brokers" : [
        { "host" : "<LAM 1 server hostname>", "port" : 5672 },
        { "host" : "<LAM 2 server hostname>", "port" : 5672 }
    ],
    ```

  - For polling LAMs you must also populate mysql.failover_connections with the following:

    ```
    "mysql" : {
        "host" : "<LAM 1 server hostname>",
        "failover_connections" : [
            { "host" : "<LAM 2 server hostname>", "port" : 3306 }
        ]
    },
    ```

  - In the same file, set the ha section as follows:

    ```
    "ha": { "cluster": "SECONDARY" },
    ```
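The failover order implied by mysql.host plus mysql.failover_connections can be sketched as follows; the hostnames are placeholders for your LAM servers:

```shell
# Sketch: the order in which a polling LAM tries MySQL hosts, per the
# mysql section above. Hostnames are placeholders.
primary="lam1.example.com"     # mysql.host
failover="lam2.example.com"    # first entry in mysql.failover_connections

for host in "$primary" "$failover"; do
  echo "try MySQL on $host:3306"
done
```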
- Set up Master-Master replication for the database for polling LAMs. Contact Moogsoft AIOps support for more information.
Set up a backend LAM HA configuration on LAM 1 and LAM 2
See Set Up LAMs for HA for instructions.