Set Up the Redundancy Server Role
In the Moogsoft Enterprise HA architecture, both RabbitMQ and Opensearch/Elasticsearch run as three-node clusters. Three-node clusters prevent ambiguous data states, such as a "split-brain".
RabbitMQ is the Message Bus used by Moogsoft Enterprise. Opensearch/Elasticsearch delivers the search functionality.
The three nodes are distributed across the two Core roles and the redundancy server.
HA architecture
In our distributed HA installation, the RabbitMQ and Opensearch/Elasticsearch components are installed on the Core 1, Core 2 and Redundancy servers.
Core 1: RabbitMQ Node 1, Search Node 1
Core 2: RabbitMQ Node 2, Search Node 2
Redundancy server: RabbitMQ Node 3, Search Node 3
Refer to the Distributed HA system Firewall documentation for more information on connectivity within a fully distributed HA architecture.
Install Redundancy server
Install the Moogsoft Enterprise components on the Redundancy server.
On the Redundancy server, install the following Moogsoft Enterprise components:
VERSION=8.0.0.10; yum -y install moogsoft-common-${VERSION} \
moogsoft-mooms-${VERSION} \
moogsoft-search-${VERSION} \
moogsoft-utils-${VERSION}
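To confirm that the packages installed successfully, you can list them with standard RPM tooling. This check is an optional suggestion rather than part of the official procedure:
rpm -qa | grep moogsoft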
Edit the ~/.bashrc file to contain the following lines:
export MOOGSOFT_HOME=/usr/share/moogsoft
export APPSERVER_HOME=/usr/share/apache-tomcat
export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$MOOGSOFT_HOME/bin:$MOOGSOFT_HOME/bin/utils
Source the .bashrc file:
source ~/.bashrc
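To confirm that the environment variables are now set in your shell, you can echo one of them, for example:
echo $MOOGSOFT_HOME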
Initialize RabbitMQ cluster node 3 on the Redundancy server and join the cluster.
On the Redundancy server, initialize RabbitMQ. Use the same zone name as Core 1 and Core 2:
moog_init_mooms.sh -pz <zone>
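For example, if Core 1 and Core 2 were initialized with the zone MOOG (the zone shown in the verification output later in this section), the command is:
moog_init_mooms.sh -pz MOOG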
The Erlang cookies must be the same for all RabbitMQ nodes. Replace the Erlang cookie on the Redundancy server with the Core 1 Erlang cookie, located at /var/lib/rabbitmq/.erlang.cookie. Make the Redundancy server cookie read-only:
chmod 400 /var/lib/rabbitmq/.erlang.cookie
You may need to change the file permissions on the Redundancy server Erlang cookie first to allow this file to be overwritten. For example:
chmod 406 /var/lib/rabbitmq/.erlang.cookie
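One way to copy the cookie is with scp, run as root on the Redundancy server. The hostname below is a placeholder; adjust the file permissions first as described above if the copy is refused:
# Pull the Core 1 cookie onto the Redundancy server, then restore ownership and make it read-only
scp root@<Core 1 hostname>:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie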
Restart the rabbitmq-server service and join the cluster. Substitute the Core 1 server short hostname:
systemctl restart rabbitmq-server
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@<Core 1 server short hostname>
rabbitmqctl start_app
The short hostname is the full hostname excluding the DNS domain name. For example, if the hostname is ip-172-31-82-78.ec2.internal, the short hostname is ip-172-31-82-78. To find out the short hostname, run rabbitmqctl cluster_status on Core 1.
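Alternatively, the standard Linux hostname utility, run on Core 1, should also return the short hostname:
hostname -s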
Apply the HA mirrored queues policy. Use the same zone name as Core 1:
rabbitmqctl set_policy -p <zone> ha-all ".+\.HA" '{"ha-mode":"all"}'
Run rabbitmqctl cluster_status to verify the cluster status and queue policy. Example output is as follows:
Cluster status of node rabbit@ldev02 ...
[{nodes,[{disc,[rabbit@ldev01,rabbit@ldev02]}]},
 {running_nodes,[rabbit@ldev01,rabbit@ldev02]},
 {cluster_name,<<"rabbit@ldev02">>},
 {partitions,[]},
 {alarms,[{rabbit@ldev01,[]},{rabbit@ldev02,[]}]}]
[root@ldev02 rabbitmq]# rabbitmqctl -p MOOG list_policies
Listing policies for vhost "MOOG" ...
MOOG    ha-all  .+\.HA  all     {"ha-mode":"all"}       0
Initialize, configure, and start Opensearch/Elasticsearch cluster node 3 on the Redundancy server.
For Moogsoft Enterprise v8.1.x and older:
Initialize Elasticsearch on the Redundancy server:
moog_init_search.sh
Uncomment and edit the properties of the /etc/elasticsearch/elasticsearch.yml file on the Redundancy server as follows:
cluster.name: aiops
node.name: <Redundancy server hostname>
...
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: [ "<Core 1 hostname>","<Core 2 hostname>","<Redundancy server hostname>"]
discovery.zen.minimum_master_nodes: 1
gateway.recover_after_nodes: 1
node.master: true
Restart Elasticsearch on the Core 1, Core 2 and Redundancy servers:
systemctl restart elasticsearch
For Moogsoft Enterprise v8.2.x and newer:
Initialize Opensearch on the Redundancy server:
moog_init_search.sh -i
Edit the following properties in /etc/opensearch/opensearch.yml, changing the node names and other values as appropriate:
cluster.name: moog-opensearch-cluster
node.name: node1
network.host: 0.0.0.0
node.master: true
discovery.seed_hosts: ["node1.example.com", "node2.example.com", "node3.example.com"]
cluster.initial_master_nodes: ["node1","node2","node3"]
plugins.security.allow_default_init_securityindex: true
plugins.security.ssl.transport.pemcert_filepath: /etc/opensearch/node1.pem
plugins.security.ssl.transport.pemkey_filepath: /etc/opensearch/node1-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /etc/opensearch/root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.restapi.roles_enabled: ["all_access"]
plugins.security.ssl.http.enabled: false
plugins.security.authcz.admin_dn:
  - 'CN=ADMIN,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA' # Give value according to SSL certificate.
plugins.security.nodes_dn:
  - 'CN=node1.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN=node2.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN=node3.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
Restart Opensearch on the Core 1, Core 2 and Redundancy servers:
service opensearch restart
Initialize the security plugin on Core 1 (node1):
/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh -cd /usr/share/opensearch/plugins/opensearch-security/securityconfig/ -nhnv \
-cacert /etc/opensearch/root-ca.pem \
-cert /etc/opensearch/admin.pem \
-key /etc/opensearch/admin-key.pem \
-cn moog-opensearch-cluster
Verify that the Opensearch/Elasticsearch nodes are working correctly:
curl -X GET "localhost:9200/_cat/health?v&pretty"
If the Opensearch/Elasticsearch service has authentication enabled (Opensearch is secured by default), provide the appropriate user details as an argument to the curl command, for example: curl -u <username>:<password> -X ...
Example cluster health status:
epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1580490422 17:07:02  aiops   green  3          3         0      0   0    0    0        0             -                  100.0%
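If you also want to see the individual cluster members, the _cat/nodes endpoint (a standard Opensearch/Elasticsearch API, shown here as an optional extra check) lists them; add -u <username>:<password> if authentication is enabled:
curl -X GET "localhost:9200/_cat/nodes?v&pretty"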
Opensearch/Elasticsearch Encryption
You can enable password authentication on Opensearch/Elasticsearch by editing the $MOOGSOFT_HOME/config/system.conf configuration file. You can use either an unencrypted password or an encrypted password, but you cannot use both.
You should use an encrypted password in the configuration file if you do not want users with configuration access to be able to access integrated systems.
Enable password authentication
To enable unencrypted password authentication on Opensearch/Elasticsearch, set the following properties in the system.conf file:
"search": {
    ...
    "username" : <username>,
    "password" : <password>,
    ...
}
To enable encrypted password authentication on Opensearch/Elasticsearch, set the following properties in the system.conf file:
"search": {
    ...
    "username" : <username>,
    "encrypted_password" : <encrypted password>
    ...
}
Initialize Opensearch/Elasticsearch
In v8.2.x, Opensearch already has password authentication enabled, but you can add other users. If the admin password was already changed by moog_init_search.sh when Opensearch was deployed, the script prompts for admin account details to use when creating the new users. To initialize Opensearch/Elasticsearch with password authentication, run:
moog_init_search.sh -a username:password
or:
moog_init_search.sh --auth username:password
If you run moog_init_search.sh without the -a/--auth parameter, password authentication is not enabled in Opensearch/Elasticsearch.
See Moog Encryptor for more information on how to encrypt passwords stored in the system.conf file.
You can also manually add authentication to the Opensearch/Elasticsearch configuration. You should do this if you have your own local Opensearch/Elasticsearch installation. For more information, see the external Opensearch documentation at https://opensearch.org/docs/latest/security-plugin/configuration/index/ or the Elasticsearch documentation on configuring security.