Set Up the Core Role for HA
In the Moogsoft Enterprise HA architecture, Core 1 and Core 2 run as an active/passive HA pair.
HA architecture
In our distributed HA installation, the Core components are installed on Core 1, Core 2, and Redundancy servers:
Core 1: Core Data Processing 1 (Moogfarmd), Search Node 1, RabbitMQ Node 1.
Core 2: Core Data Processing 2 (Moogfarmd), Search Node 2, RabbitMQ Node 2.
Redundancy: Search Node 3, RabbitMQ Node 3.
Refer to the Distributed HA system Firewall for more information on connectivity within a fully distributed HA architecture.
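Every configuration file below refers to the other servers by hostname, so Core 1, Core 2, and the Redundancy server must all be able to resolve one another. A minimal /etc/hosts sketch, with placeholder addresses and names (substitute your own values, or rely on DNS):

```
# Example /etc/hosts entries for the three HA servers (placeholder values)
10.0.0.1    core1.example.com       core1
10.0.0.2    core2.example.com       core2
10.0.0.3    redundancy.example.com  redundancy
```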
Install Core 1
Install the required Moogsoft Enterprise packages:
VERSION=8.2.0; yum -y install moogsoft-server-${VERSION} \
    moogsoft-search-${VERSION} \
    moogsoft-common-${VERSION} \
    moogsoft-mooms-${VERSION} \
    moogsoft-integrations-${VERSION} \
    moogsoft-integrations-ui-${VERSION} \
    moogsoft-utils-${VERSION}
Edit your ~/.bashrc file to contain the following lines:
export MOOGSOFT_HOME=/usr/share/moogsoft
export APPSERVER_HOME=/usr/share/apache-tomcat
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$MOOGSOFT_HOME/bin:$MOOGSOFT_HOME/bin/utils
Source the ~/.bashrc file:
source ~/.bashrc
Initialize RabbitMQ Cluster Node 1 on the Core 1 server. Substitute a name for your zone.
moog_init_mooms.sh -pz <zone>
Initialize, configure and start Opensearch/Elasticsearch Cluster Node 1 on the Core 1 server.
For Moogsoft Enterprise v8.1.x and older:
Initialize Opensearch/Elasticsearch on Core 1:
moog_init_search.sh
Uncomment and edit the properties in the /etc/elasticsearch/elasticsearch.yml file on Core 1 as follows:
cluster.name: aiops
node.name: <Core 1 server hostname>
...
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["<Core 1 server hostname>", "<Core 2 server hostname>", "<Redundancy server hostname>"]
discovery.zen.minimum_master_nodes: 1
gateway.recover_after_nodes: 1
node.master: true
For Moogsoft Enterprise v8.2.x and newer:
Initialize Opensearch/Elasticsearch on Core 1:
moog_init_search.sh -i
Edit the following properties in /etc/opensearch/opensearch.yml, changing the node names and certificate paths as appropriate:
cluster.name: moog-opensearch-cluster
node.name: node1
network.host: 0.0.0.0
node.master: true
discovery.seed_hosts: ["node1.example.com", "node2.example.com", "node3.example.com"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
plugins.security.allow_default_init_securityindex: true
plugins.security.ssl.transport.pemcert_filepath: /etc/opensearch/node1.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/opensearch/node1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/opensearch/root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.restapi.roles_enabled: ["all_access"]
plugins.security.ssl.http.enabled: false
plugins.security.authcz.admin_dn:
  - 'CN=ADMIN,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA' # Set according to your SSL certificate.
plugins.security.nodes_dn:
  - 'CN=node1.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN=node2.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN=node3.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
The minimum and maximum JVM heap sizes must be large enough to ensure that Opensearch/Elasticsearch starts.
See Finalize and Validate the Install for more information.
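For example, the heap can be set in the JVM options file (/etc/elasticsearch/jvm.options or /etc/opensearch/jvm.options, depending on your version). The 4 GB value below is only an illustration; size the heap for your own environment:

```
# Set minimum and maximum heap to the same value so the full heap is reserved at startup
-Xms4g
-Xmx4g
```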
You can enable password authentication on Opensearch/Elasticsearch; see Elasticsearch Encryption for more information. For Opensearch, password authentication is enabled by default.
On Core 1, edit $MOOGSOFT_HOME/config/system.conf and set the following properties. Substitute the name of your RabbitMQ zone, the server hostnames, and the cluster names:
"mooms" : {
    ...
    "zone" : "<zone>",
    "brokers" : [
        {"host" : "<Core 1 server hostname>", "port" : 5672},
        {"host" : "<Core 2 server hostname>", "port" : 5672},
        {"host" : "<Redundancy server hostname>", "port" : 5672}
    ],
    ...
    "cache_on_failure" : true,
    ...
"search" : {
    ...
    "nodes" : [
        {"host" : "<Core 1 server hostname>", "port" : 9200},
        {"host" : "<Core 2 server hostname>", "port" : 9200},
        {"host" : "<Redundancy server hostname>", "port" : 9200}
    ]
    ...
"failover" : {
    "persist_state" : true,
    "hazelcast" : {
        "hosts" : ["<Core 1 server hostname>", "<Core 2 server hostname>"],
        "cluster_per_group" : true
    },
    "automatic_failover" : true
}
...
"ha": {
    "cluster": "PRIMARY"
}
Restart Opensearch or Elasticsearch, as appropriate:
systemctl restart elasticsearch
or:
systemctl restart opensearch
Uncomment and edit the following properties in $MOOGSOFT_HOME/config/moog_farmd.conf. Note the importance of the initial comma, and delete the cluster line in this section of the file:
,
ha: {
    group: "moog_farmd",
    instance: "moog_farmd",
    default_leader: true,
    start_as_passive: false
}
Start moogfarmd:
systemctl start moogfarmd
Install, configure, and start HAProxy on the Core 1 server to connect to the Percona XtraDB Cluster.
Install Core 2
Install the required Moogsoft Enterprise packages on the Core 2 server:
VERSION=8.2.0; yum -y install moogsoft-server-${VERSION} \
    moogsoft-search-${VERSION} \
    moogsoft-common-${VERSION} \
    moogsoft-mooms-${VERSION} \
    moogsoft-integrations-${VERSION} \
    moogsoft-integrations-ui-${VERSION} \
    moogsoft-utils-${VERSION}
Edit your ~/.bashrc file to contain the following lines:
export MOOGSOFT_HOME=/usr/share/moogsoft
export APPSERVER_HOME=/usr/share/apache-tomcat
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$MOOGSOFT_HOME/bin:$MOOGSOFT_HOME/bin/utils
Source the ~/.bashrc file:
source ~/.bashrc
On Core 2 initialize RabbitMQ. Use the same zone name as Core 1:
moog_init_mooms.sh -pz <zone>
Initialize, configure and start Opensearch/Elasticsearch Cluster Node 2 on the Core 2 server.
For Moogsoft Enterprise v8.1.x and older:
Initialize Opensearch/Elasticsearch on Core 2:
moog_init_search.sh
Uncomment and edit the properties in the /etc/elasticsearch/elasticsearch.yml file on Core 2 as follows:
cluster.name: aiops
node.name: <Core 2 server hostname>
...
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["<Core 1 server hostname>", "<Core 2 server hostname>", "<Redundancy server hostname>"]
discovery.zen.minimum_master_nodes: 1
gateway.recover_after_nodes: 1
node.master: true
For Moogsoft Enterprise v8.2.x and newer:
Initialize Opensearch/Elasticsearch on Core 2:
moog_init_search.sh -i
Edit the following properties in /etc/opensearch/opensearch.yml, changing the node names and certificate paths as appropriate (node2 shown here for Core 2):
cluster.name: moog-opensearch-cluster
node.name: node2
network.host: 0.0.0.0
node.master: true
discovery.seed_hosts: ["node1.example.com", "node2.example.com", "node3.example.com"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
plugins.security.allow_default_init_securityindex: true
plugins.security.ssl.transport.pemcert_filepath: /etc/opensearch/node2.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/opensearch/node2-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/opensearch/root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.restapi.roles_enabled: ["all_access"]
plugins.security.ssl.http.enabled: false
plugins.security.authcz.admin_dn:
  - 'CN=ADMIN,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA' # Set according to your SSL certificate.
plugins.security.nodes_dn:
  - 'CN=node1.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN=node2.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN=node3.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
The minimum and maximum JVM heap sizes must be large enough to ensure that Opensearch/Elasticsearch starts.
See Finalize and Validate the Install for more information.
You can enable password authentication on Opensearch/Elasticsearch. See Elasticsearch Encryption for more information.
On Core 2, edit $MOOGSOFT_HOME/config/system.conf and set the following properties. Substitute the name of your RabbitMQ zone, the server hostnames, and the cluster names:
"mooms" : {
    ...
    "zone" : "<zone>",
    "brokers" : [
        {"host" : "<Core 1 server hostname>", "port" : 5672},
        {"host" : "<Core 2 server hostname>", "port" : 5672},
        {"host" : "<Redundancy server hostname>", "port" : 5672}
    ],
    ...
    "cache_on_failure" : true,
    ...
"search" : {
    ...
    "nodes" : [
        {"host" : "<Core 1 server hostname>", "port" : 9200},
        {"host" : "<Core 2 server hostname>", "port" : 9200},
        {"host" : "<Redundancy server hostname>", "port" : 9200}
    ]
    ...
"failover" : {
    "persist_state" : true,
    "hazelcast" : {
        "hosts" : ["<Core 1 server hostname>", "<Core 2 server hostname>"],
        "cluster_per_group" : true
    },
    "automatic_failover" : true
}
...
"ha": {
    "cluster": "SECONDARY"
}
Restart Opensearch or Elasticsearch, as appropriate:
systemctl restart elasticsearch
or:
systemctl restart opensearch
Uncomment and edit the following properties in $MOOGSOFT_HOME/config/moog_farmd.conf. Note the importance of the initial comma, and delete the cluster line in this section of the file:
,
ha: {
    group: "moog_farmd",
    instance: "moog_farmd",
    default_leader: false,
    start_as_passive: false
}
Start moogfarmd:
systemctl start moogfarmd
The Erlang cookies must be the same for all RabbitMQ nodes. Replace the Erlang cookie on Core 2 with the Core 1 cookie, located at /var/lib/rabbitmq/.erlang.cookie. Then make the Core 2 cookie read-only:
chmod 400 /var/lib/rabbitmq/.erlang.cookie
You may first need to change the file permissions on the Core 2 Erlang cookie to allow the file to be overwritten. For example:
chmod 600 /var/lib/rabbitmq/.erlang.cookie
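The overwrite-then-lock sequence above can be sketched on a scratch file. The real target is /var/lib/rabbitmq/.erlang.cookie, and the cookie values here are stand-ins:

```shell
# Use a temporary file as a stand-in for the Erlang cookie
cookie=$(mktemp)
echo "CORE2COOKIE" > "$cookie"    # existing Core 2 cookie value
chmod 600 "$cookie"               # owner-writable, so the file can be overwritten
echo "CORE1COOKIE" > "$cookie"    # replace with the Core 1 cookie value
chmod 400 "$cookie"               # back to read-only, as RabbitMQ expects
stat -c '%a' "$cookie"            # prints 400
```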
Restart the rabbitmq-server service and join the cluster. Substitute the Core 1 short hostname:
systemctl restart rabbitmq-server
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@<Core 1 server short hostname>
rabbitmqctl start_app
The short hostname is the full hostname excluding the DNS domain name. For example, if the hostname is ip-172-31-82-78.ec2.internal, the short hostname is ip-172-31-82-78. To find the short hostname, run rabbitmqctl cluster_status on Core 1.
Apply the HA mirrored queues policy. Use the same zone name as Core 1:
rabbitmqctl set_policy -p <zone> ha-all ".+\.HA" '{"ha-mode":"all"}'
Run rabbitmqctl cluster_status to verify the cluster status and queue policy. Example output:
Cluster status of node rabbit@ip-172-31-93-201 ...
[{nodes,[{disc,['rabbit@ip-172-31-82-211','rabbit@ip-172-31-85-42','rabbit@ip-172-31-93-201']}]},
 {running_nodes,['rabbit@ip-172-31-85-42','rabbit@ip-172-31-82-211','rabbit@ip-172-31-93-201']},
 {cluster_name,<<"rabbit@ip-172-31-93-201.ec2.internal">>},
 {partitions,[]},
 {alarms,[{'rabbit@ip-172-31-85-42',[]},{'rabbit@ip-172-31-82-211',[]},{'rabbit@ip-172-31-93-201',[]}]}]
Install, configure, and start HAProxy on the Core 2 server to connect to the Percona XtraDB Cluster.
Opensearch/Elasticsearch Encryption
You can enable password authentication on Opensearch/Elasticsearch by editing the $MOOGSOFT_HOME/config/system.conf configuration file. You can use either an unencrypted or an encrypted password, but not both.
Use an encrypted password if you do not want users with configuration access to be able to read the credentials of integrated systems.
Enable password authentication
To enable unencrypted password authentication on Opensearch/Elasticsearch, set the following properties in the system.conf file:
"search": {
    ...
    "username" : <username>,
    "password" : <password>,
    ...
}
To enable encrypted password authentication on Opensearch/Elasticsearch, set the following properties in the system.conf file:
"search": {
    ...
    "username" : <username>,
    "encrypted_password" : <encrypted password>
    ...
}
Initialize Opensearch/Elasticsearch
Opensearch in v8.2.x already has password authentication enabled, but you can add other users. If the admin password was already changed by moog_init_search.sh when Opensearch was deployed, the script prompts for admin account details to use when creating the new users. To initialize Opensearch/Elasticsearch with password authentication, run:
moog_init_search.sh -a username:password
or:
moog_init_search.sh --auth username:password
If you run moog_init_search.sh without the -a/--auth parameter, you do not enable password authentication in Elasticsearch. (Opensearch enables it by default.)
See Moog Encryptor for more information on how to encrypt passwords stored in the system.conf file.
You can also add authentication to the Opensearch/Elasticsearch configuration manually. You should do this if you have your own local Opensearch/Elasticsearch installation. For more information, see the Opensearch security plugin documentation (https://opensearch.org/docs/latest/security-plugin/configuration/index/) or the Elasticsearch documentation on configuring security.
Validate that failover is working
Confirm that the moog_farmd process is active on Core 1:
ha_cntl -v
You should see output indicating that moog_farmd is active on Core 1. If it is active on Core 2 instead, stop moog_farmd on Core 2 and restart it on Core 1.
On Core 1, deactivate moog_farmd on the primary cluster:
ha_cntl --deactivate primary.moog_farmd
Enter "y" when prompted.
Run ha_cntl -v to monitor the moog_farmd process. You will see the process stop on Core 1 and start on Core 2.
To fail the cluster back to its default state, run the following command on Core 2 to deactivate moog_farmd on the secondary cluster:
ha_cntl --deactivate secondary.moog_farmd
On Core 1, activate moog_farmd on the primary cluster:
ha_cntl --activate primary.moog_farmd
Run ha_cntl -v to confirm that moog_farmd is active on the primary cluster.