
Install with Caching LAM

In an HA architecture, LAM 1 and LAM 2 run in active/passive mode as an HA polling pair, and in active/active mode as an HA receiving pair.

Your backend LAM integrations connect to a local two-node RabbitMQ cluster; polling LAM pairs additionally connect to a local two-node MySQL cluster. The Caching LAM does not require any outbound connectivity; instead, you configure inbound connectivity so that the remote Moogsoft AIOps deployment can connect to the local RabbitMQ cluster and fetch messages from the bus.
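
Once the local installation described below is complete, you can confirm that the main Moogsoft AIOps deployment can reach the local RabbitMQ cluster inbound. This is a minimal check only, assuming a netcat (nc) utility with the -z option is available on the remote side; 5672 is the AMQP port used in the broker configuration later in this page:

    # Run from the main Moogsoft AIOps deployment to confirm inbound
    # reachability of both local RabbitMQ nodes on the AMQP port
    nc -zv <server 9 hostname> 5672
    nc -zv <server 10 hostname> 5672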

HA architecture

In our distributed HA installation, the LAM components are installed on servers 9 and 10:

  • LAM 1: Server 9

  • LAM 2: Server 10

  • Local RabbitMQ Node 1: Server 9

  • Local RabbitMQ Node 2: Server 10

  • Local MySQL Node 1: Server 9

  • Local MySQL Node 2: Server 10

Fully distributed installation

See Distributed HA Installation for a reference diagram and steps to achieve a fully distributed installation.

Minimally distributed installation

For a minimally distributed installation, follow the instructions below, replacing servers 9 and 10 with the relevant values for your architecture.

Install LAM 1

  1. Install Moogsoft AIOps components on the LAM 1 server.

    On server 9 install the following Moogsoft AIOps components:

    yum -y install moogsoft-common-7.3* \
        moogsoft-db-7.3* \
        moogsoft-mooms-7.3* \
        moogsoft-integrations-7.3* \
        moogsoft-utils-7.3*
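
    You can optionally confirm that the packages installed correctly; this check is not a required step:

    # Optional: list the installed Moogsoft AIOps 7.3 packages
    rpm -qa | grep moogsoft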
  2. Initialize the local Moogsoft AIOps RabbitMQ cluster node on the LAM 1 server.

    On server 9 initialize RabbitMQ:

    moog_init_mooms.sh -pz <ZONE>

    Note

    For the zone, pick a value that is different from the one chosen for the main RabbitMQ cluster.
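
    For example, if the main RabbitMQ cluster uses a zone such as PROD, you could initialize this node with the zone MOOG (the zone shown in the verification output later in this page) and confirm that the corresponding vhost exists. Both zone names here are illustrative:

    moog_init_mooms.sh -pz MOOG

    # Confirm the vhost (zone) was created on this node
    rabbitmqctl list_vhosts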

  3. Initialize the Moogsoft AIOps database.

    On server 9, run the following command to create the Moogsoft AIOps databases and populate them with the required schema:

    moog_init_db.sh -Iu root
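
    To confirm the schema was created, you can list the databases. The exact names depend on your release (moogdb is the main database), and you may need to supply a password with -p:

    # Optional check: list the databases created by moog_init_db.sh
    mysql -u root -e "SHOW DATABASES LIKE '%moog%';"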
  4. Configure system.conf on the LAM 1 server.

    1. On server 9, edit the $MOOGSOFT_HOME/config/system.conf file and set the mooms.zone and mooms.brokers properties as follows, substituting your own values for the placeholders such as <server 9 hostname>:

      "zone" : "<ZONE>",
      "brokers" : [ { "host" : "<server 9 hostname>", "port" : 5672 },
                    { "host" : "<server 10 hostname>", "port" : 5672 }
                  ],
    2. In the same file, set the ha section as follows:

      "ha": { "cluster": "PRIMARY" },

Install LAM 2

  1. Install Moogsoft AIOps components on the LAM 2 server.

    On server 10 install the following Moogsoft AIOps components:

    yum -y install moogsoft-common-7.3* \
        moogsoft-db-7.3* \
        moogsoft-mooms-7.3* \
        moogsoft-integrations-7.3* \
        moogsoft-utils-7.3*
  2. Initialize the local RabbitMQ cluster node 2 on the LAM 2 server.

    1. On server 10 initialize RabbitMQ:

      moog_init_mooms.sh -pz <ZONE>

      Note

      For the zone, use the same value that you chose on server 9; it must be different from the one chosen for the main RabbitMQ cluster.

    2. Copy the /var/lib/rabbitmq/.erlang.cookie file from server 9 to the same location on this server, replacing the existing file.
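
      For example, assuming root SSH access from server 10 to server 9, you can copy the cookie and restore the ownership and permissions RabbitMQ expects:

      # Copy the Erlang cookie from server 9 (assumes root SSH access)
      scp root@<server 9 hostname>:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie

      # The cookie must be owned by rabbitmq and readable only by its owner
      chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
      chmod 400 /var/lib/rabbitmq/.erlang.cookie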

    3. Restart the rabbitmq-server service:

      systemctl restart rabbitmq-server
    4. Run the following commands to form the cluster:

      rabbitmqctl stop_app
      rabbitmqctl join_cluster rabbit@<server 9 hostname>
      rabbitmqctl start_app
    5. Apply the HA mirrored queues policy:

      rabbitmqctl set_policy -p <ZONE> ha-all ".+\.HA" '{"ha-mode":"all"}'

      Note

      Replace <ZONE> with the zone name you used earlier.

    You can then verify cluster status and zone policy:

    [root@ldev02 rabbitmq]# rabbitmqctl cluster_status
    Cluster status of node rabbit@ldev02 ...
    [{nodes,[{disc,[rabbit@ldev01,rabbit@ldev02]}]},
     {running_nodes,[rabbit@ldev01,rabbit@ldev02]},
     {cluster_name,<<"rabbit@ldev02">>},
     {partitions,[]},
     {alarms,[{rabbit@ldev01,[]},{rabbit@ldev02,[]}]}]
    [root@ldev02 rabbitmq]# rabbitmqctl -p MOOG list_policies
    Listing policies for vhost "MOOG" ...
    MOOG    ha-all  .+\.HA  all {"ha-mode":"all"}   0
  3. Initialize the Moogsoft AIOps database for polling LAMs on the LAM 2 server.

    On server 10, run the following command to create the Moogsoft AIOps databases and populate them with the required schema:

    moog_init_db.sh -Iu root
  4. Configure system.conf on the LAM 2 server.

    On server 10, edit the $MOOGSOFT_HOME/config/system.conf file and set the mooms.zone and mooms.brokers properties as follows, substituting your own values for the placeholders such as <server 9 hostname>:

    "zone" : "<ZONE>",
    "brokers" : [ { "host" : "<server 9 hostname>", "port" : 5672 },
                  { "host" : "<server 10 hostname>", "port" : 5672 }
                ],

    For polling LAMs, you must also set the mysql host and failover_connections properties as follows:

    "mysql" :
            {
                "host"                 : "<server 9 hostname>",
                "failover_connections" :
                    [
                      { "host"  : "<server 10 hostname>", "port"  : 3306 }
                    ]
            },

    In the same file, set the ha section as follows:

    "ha": { "cluster": "SECONDARY" },
  5. Set up Master-Master replication between the local MySQL databases on servers 9 and 10 for polling LAMs. A minimal sketch follows.
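
    The exact procedure depends on your MySQL version and site standards; the following is only a sketch of one direction of the pair, assuming binary logging is already enabled on both nodes and a replication user named repl exists (the user name and the file/position values are placeholders). Repeat in the opposite direction, then verify with SHOW SLAVE STATUS on both nodes.

    # On server 10, point replication at server 9 (all values are placeholders).
    # Read the binlog file and position from SHOW MASTER STATUS on server 9.
    mysql -u root -e "CHANGE MASTER TO
        MASTER_HOST='<server 9 hostname>',
        MASTER_USER='repl',
        MASTER_PASSWORD='<password>',
        MASTER_LOG_FILE='<binlog file>',
        MASTER_LOG_POS=<position>;
      START SLAVE;"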

Set up a backend LAM HA configuration on LAM 1 and LAM 2

See Set Up LAMs for HA for instructions.