Upgrade Moogsoft Onprem RPM to v9.2.0
This topic describes how to patch an RPM-based distribution of Moogsoft Onprem to v9.2.0 from v9.0.0, v9.0.0.x, v9.0.1.x, v9.1.0.x, or v9.1.1.x.
Important
This process requires a third Moogsoft Onprem server to act as a redundancy server, as described in: Fully Distributed HA Installation
If this server is not already provisioned and running, provision it before starting this process and perform a clean installation of Moogsoft Onprem v9.2.0 on it, leaving just the RabbitMQ and OpenSearch processes running
In the process below, Server 1 and Server 2 are the existing servers running MoogFarmD, Apache-Tomcat, RabbitMQ, and so on
Server 3 is the redundancy server running just RabbitMQ (and possibly OpenSearch too)
On Server 1 or Server 2
IMPORTANT: Make a note of the existing RabbitMQ Events.HA queue name by running this command:
rabbitmqctl -p <YOUR_VHOST> list_queues | grep Events.HA | awk '{print $1}' | head -n 1
This queue name is needed later in the upgrade process
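For convenience, you can capture the queue name in a shell variable for use in later steps. This is a minimal sketch based on the command above; the file used to save a copy is just an illustrative choice:
QUEUE_NAME=$(rabbitmqctl -p <YOUR_VHOST> list_queues | grep Events.HA | awk '{print $1}' | head -n 1)
# Save a copy so the name is still available in later sessions (hypothetical location)
echo "${QUEUE_NAME}" | tee /root/events_ha_queue_name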
On Server 3
Stop RabbitMQ
service rabbitmq-server stop
On Server 1
Remove the Server 3 RabbitMQ node from the cluster:
rabbitmqctl forget_cluster_node rabbit@<SERVER_3_IP_OR_HOSTNAME>
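To confirm the node has been removed, you can inspect the cluster membership; the Server 3 node should no longer appear in the output (a quick sanity check, not part of the official procedure):
rabbitmqctl cluster_status | grep <SERVER_3_IP_OR_HOSTNAME> || echo "Server 3 node no longer in cluster"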
Comment out any reference to the Server 3 RabbitMQ broker from $MOOGSOFT_HOME/config/system.conf
The entry under mooms -> brokers should be commented out
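As an illustrative sketch (hostnames are placeholders, and your system.conf layout may differ), the brokers list might look like this after commenting out the Server 3 entry:
"mooms" :
{
    "brokers" : [
        { "host" : "<SERVER_1_HOSTNAME>", "port" : 5672 },
        { "host" : "<SERVER_2_HOSTNAME>", "port" : 5672 }
        # ,{ "host" : "<SERVER_3_HOSTNAME>", "port" : 5672 }
    ],
    ...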
Stop MoogFarmD to pause event processing
service moogfarmd stop
Restart all other Moogsoft Onprem services (excluding MoogFarmD)
service apache-tomcat restart
service restlamd restart
service socketlamd restart
On Server 2
Comment out any reference to the Server 1 and the Server 3 RabbitMQ brokers from $MOOGSOFT_HOME/config/system.conf
The entries under mooms -> brokers should be commented out
Stop MoogFarmD to pause event processing
service moogfarmd stop
Restart all other Moogsoft Onprem services (excluding MoogFarmD)
service apache-tomcat restart
service restlamd restart
service socketlamd restart
On Server 3 (Redundancy Server)
Stop all services
service opensearch stop
Delete the RabbitMQ mnesia directory to reset RabbitMQ in preparation for the upgrade
rm -rf /var/lib/rabbitmq/mnesia/*
Upgrade the packages on the server
Warning
Moogsoft Onprem v9.1.1 and newer requires the underlying Percona XtraDB Cluster to be v8.4 as part of the upgrade process (see Percona Cluster 8.4 RPM Minor Version Upgrade).
An issue with Percona XtraDB Cluster 8.4 has been identified (https://perconadev.atlassian.net/browse/PXC-4658). It appears to be related to binary logging and async replication, which are recommended for a DR (disaster recovery) architecture. The recommendations are:
If you are not running a DR architecture using binary logging and async replication, it is safe to upgrade to v9.1.1 and newer.
If you are running a DR architecture and can delay the upgrade, do so until Percona has resolved the issue, Moogsoft has verified the fix, and this document has been updated accordingly.
If you are running a DR architecture and cannot delay the upgrade, skip the Percona database upgrade step (Step 4) below. The existing version of Percona PXC (8.0) is supported and compatible. In that case, ensure the Percona database is on the latest v8.0 version by following the instructions on this page: Percona Cluster 8.0 RPM Minor Version Upgrade.
If you have any questions, please contact your usual Moogsoft technical contact to discuss the best route forward.
Important
Enabling the "latency-performance" RHEL tuned profile is strongly recommended. This profile allows RabbitMQ to operate much more efficiently, increasing and smoothing out throughput.
For more information on performance profiles, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/getting-started-with-tuned_monitoring-and-managing-system-status-and-performance
Enable the profile by running the following command as root:
tuned-adm profile latency-performance
This setting will survive machine restarts and only needs to be set once.
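To confirm the profile took effect, you can check the currently active tuned profile:
tuned-adm active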
Warning
For deployments upgrading from v9.0.0 or v9.0.0.1
The upgrade path from v9.0.0/v9.0.0.1 to v9.0.1 onwards (any pre-v9.0.0.2 release going to any post-v9.0.1 release) requires a 'full stop' upgrade of any running RabbitMQ clusters. All RabbitMQ nodes will need to be stopped before their binaries are upgraded, which means there will be a window of time during the upgrade when RabbitMQ cannot be used to store events. Further upgrade details are in the relevant step below.
For deployments upgrading from v9.0.0.2
The RabbitMQ upgrade as part of this process requires all feature flags to be enabled.
The following command must be run on all RabbitMQ server nodes before the following steps are performed:
rabbitmqctl enable_feature_flag all
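To verify, you can list the feature flags and confirm they report as enabled:
rabbitmqctl list_feature_flags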
For users with UI Integrations containing a Rule with a name containing the word 'password'
A known issue affects the UI Integrations page after upgrading to 9.2.0. As a workaround, rename the rule so its name does not equal or contain the word 'password'. Alternatively, a hotfix is available; please contact Moogsoft Support in this case.
Ensure the patch RPMs are available to each server being patched:
For internet-connected hosts, ensure there is a repo file under the /etc/yum.repos.d/ directory pointing to the 'speedy esr' yum repo.
An example file is below:
[moogsoft-aiops-9]
name=moogsoft-aiops-9
baseurl=https://<username>:<password>@speedy.moogsoft.com/v9/repo/
enabled=1
gpgcheck=0
sslverify=false
For offline hosts:
Download the two offline yum repository files (requires 'speedy' yum credentials):
https://speedy.moogsoft.com/v9/offline/2025-05-19-1747654154-MoogsoftBASE8_offline_repo.tar.gz
https://speedy.moogsoft.com/v9/offline/2025-05-19-1747654154-MoogsoftESR_9.2.0_offline_repo.tar.gz
Move the two offline installer bundle files to each server being upgraded as needed
Create two directories to house the repositories. For example:
sudo mkdir -p /media/localRPM/BASE/
sudo mkdir -p /media/localRPM/ESR/
Extract the two Tarball files into separate directories. For example:
tar xzf *-MoogsoftBASE8_offline_repo.tar.gz -C /media/localRPM/BASE/
tar xzf *-MoogsoftESR_9.2.0_offline_repo.tar.gz -C /media/localRPM/ESR/
Back up the existing /etc/yum.repos.d directory. For example:
mv /etc/yum.repos.d /etc/yum.repos.d-backup
Create an empty /etc/yum.repos.d directory. For example:
mkdir /etc/yum.repos.d
Create a local.repo file in the /etc/yum.repos.d/ folder containing the local repository details. For example:
[BASE]
name=MoogRHEL-$releasever - MoogRPM
baseurl=file:///media/localRPM/BASE/RHEL
gpgcheck=0
enabled=1

[ESR]
name=MoogRHEL-$releasever - MoogRPM
baseurl=file:///media/localRPM/ESR/RHEL
gpgcheck=0
enabled=1
Clean the Yum cache:
yum clean all
FOR ALL VERSIONS
IMPORTANT: Please ensure you have read the Warning at the top of this page regarding this step
Update Percona to the latest version using the instructions here: Percona Cluster 8.4 RPM Minor Version Upgrade
FOR DEPLOYMENTS BEING UPGRADED FROM v9.0.0.x OR EARLIER ONLY
Update the Nginx yum repo file so it contains the mainline repo:
rm -f /etc/yum.repos.d/nginx.repo
cat <<END > /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/8/\$basearch/
gpgcheck=0
enabled=1
module_hotfixes=1

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/rhel/8/\$basearch/
gpgcheck=0
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=1
END
FOR ALL VERSIONS
For online deployments only: configure the latest RabbitMQ yum repository:
curl -s https://packagecloud.io/install/repositories/cloudamqp/rabbitmq/script.rpm.sh | sudo bash
FOR DEPLOYMENTS BEING UPGRADED FROM v9.0.1 OR EARLIER ONLY
Important
Ensure the RabbitMQ feature flags have been enabled before proceeding. See the start of this document for the required command.
Upgrade Erlang (required for the new version of RabbitMQ):
Online RPM erlang upgrade command:
yum upgrade https://github.com/rabbitmq/erlang-rpm/releases/download/v27.3.3/erlang-27.3.3-1.el8.x86_64.rpm
Offline RPM erlang upgrade command:
yum upgrade erlang-27.3.3
FOR ALL VERSIONS
On each host where Moogsoft packages are installed, install the patch RPMs:
For internet-connected hosts run the following command:
yum -y upgrade $(rpm -qa --qf '%{NAME}\n' | grep moogsoft | sed 's/$/-9.2.0/')
For offline hosts, run the following command in the directory containing the patch RPMs:
yum -y upgrade $(rpm -qa --qf '%{NAME}\n' | grep moogsoft | sed 's/$/-9.2.0*.rpm/')
For ALL RPM-based deployments, ensure the Java JDK folder permissions are correct by running the following command as root (or a user with sudo permissions):
chmod -R 755 /usr/java /usr/lib/jvm
FOR DEPLOYMENTS BEING UPGRADED FROM v9.1.0 OR NEWER ONLY
Important
Ensure the RabbitMQ feature flags have been enabled before proceeding. See the start of this document for the required command.
Upgrade Erlang (required for the new version of RabbitMQ):
Online RPM erlang upgrade command:
yum upgrade https://github.com/rabbitmq/erlang-rpm/releases/download/v27.3.3/erlang-27.3.3-1.el8.x86_64.rpm
Offline RPM erlang upgrade command:
yum upgrade erlang-27.3.3
FOR ALL VERSIONS
In the latest release a number of the configuration files have changed out of the box. This means that after the RPM upgrade, the following configuration files are replaced with the new default versions, and the pre-upgrade versions are saved alongside them with an '.rpmsave' suffix.
$MOOGSOFT_HOME/config/security.conf
Any customisations made to the pre-upgrade versions of these files (*.rpmsave) should be copied into the non-rpmsave versions of the files. Alternatively, the rpmsave versions of the files can be renamed to replace the new file versions. For example:
cp $MOOGSOFT_HOME/config/security.conf $MOOGSOFT_HOME/config/911cleansecurity.conf.bak
mv $MOOGSOFT_HOME/config/security.conf.rpmsave $MOOGSOFT_HOME/config/security.conf
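Before copying customisations across, it can help to review the differences between the saved and new files first, for example:
diff $MOOGSOFT_HOME/config/security.conf.rpmsave $MOOGSOFT_HOME/config/security.conf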
FOR DEPLOYMENTS BEING UPGRADED FROM v9.0.0.x OR EARLIER ONLY
Upgrade Nginx:
yum -y upgrade nginx
service nginx restart
FOR DEPLOYMENTS BEING UPGRADED FROM v9.0.0 OR v9.0.0.1 OR v9.0.0.2 OR EARLIER ONLY
New security enhancements in v9.0.1 require enabling the Sign Response As Required option on the IDP side, if configurable. Ask your SAML team to do this for the <PROD/UAT> environment before the upgrade. If, after communicating with your SAML team, you are unsure whether this option applies to your setup, please contact Moogsoft Support.
If a new IDP metadata file is generated after this change, the SAML team should provide it to the team performing the upgrade; during the upgrade, the existing IDP file will be replaced with the one provided. In all cases, the SP metadata file will be regenerated and should be shared with the SAML team. They may need to import the new SP metadata, or configure the relevant fields with the information supplied in the file, to complete the trust configuration.
Create the RabbitMQ vhost and Events.HA quorum queue on the Server 3 node (to allow events to queue there during the upgrade):
VHOST=<your_vhost/zone_name>
$MOOGSOFT_HOME/bin/utils/moog_init_mooms.sh -pz "${VHOST}"
QUEUE_NAME="<YOUR_EVENTS_HA_QUEUE_NAME>"
chmod u+x /usr/lib/rabbitmq/lib/rabbitmq_server-4.*/plugins/rabbitmq_management-4.*/priv/www/cli/rabbitmqadmin
/usr/lib/rabbitmq/lib/rabbitmq_server-4.*/plugins/rabbitmq_management-4.*/priv/www/cli/rabbitmqadmin -u moogsoft -p m00gs0ft --vhost=${VHOST} declare queue name=${QUEUE_NAME} durable=true arguments='{"x-queue-type":"quorum"}'
/usr/lib/rabbitmq/lib/rabbitmq_server-4.*/plugins/rabbitmq_management-4.*/priv/www/cli/rabbitmqadmin -u moogsoft -p m00gs0ft --vhost=${VHOST} declare exchange name=MOOMS.GENERAL.TOPIC_EXCHANGE type=topic durable=true
/usr/lib/rabbitmq/lib/rabbitmq_server-4.*/plugins/rabbitmq_management-4.*/priv/www/cli/rabbitmqadmin -u moogsoft -p m00gs0ft -V ${VHOST} declare binding source="MOOMS.GENERAL.TOPIC_EXCHANGE" destination_type="queue" destination="${QUEUE_NAME}" routing_key="events"
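To confirm the queue was created as a quorum queue, you can list the queues with their types, reusing the VHOST variable from the commands above (output format may vary by RabbitMQ version):
rabbitmqctl -p "${VHOST}" list_queues name type | grep Events.HA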
On Server 2
Confirm that the RabbitMQ HA policy is still present on the Events queue:
rabbitmqctl list_queues name durable policy -p <YOUR_VHOST> | grep Events.HA
This command should return a value. If it does not, the policy should be reapplied with this command:
rabbitmqctl set_policy -p <YOUR_ZONE> ha-all ".+.HA" '{"ha-mode":"all"}'
Create a RabbitMQ shovel from the existing Events.HA queue to the Server 3 Events.HA queue to buffer events during the upgrade. Update the commands below with accurate variable values for the hostnames, v-hosts, and credentials.
SERVER_2_RABBITMQ_CREDS="moogsoft:m00gs0ft"; SERVER_2_RABBITMQ_VHOST="<YOUR_VHOST_NAME>"; SERVER_2_RABBITMQ_HOST="127.0.0.1"; QUEUE="<YOUR_EVENTS_HA_QUEUE_NAME>"; SERVER_3_RABBITMQ_CREDS="moogsoft:m00gs0ft"; SERVER_3_RABBITMQ_VHOST="<YOUR_VHOST_NAME>"; SERVER_3_RABBITMQ_HOST="<YOUR_SERVER_3_IP_OR_HOSTNAME>"; rabbitmq-plugins enable rabbitmq_shovel; rabbitmq-plugins enable rabbitmq_shovel_management; rabbitmqctl -p "${SERVER_2_RABBITMQ_VHOST}" set_parameter shovel my_shovel "$(cat <<EOF { "src-uri": "amqp://${SERVER_2_RABBITMQ_CREDS}@${SERVER_2_RABBITMQ_HOST}/${SERVER_2_RABBITMQ_VHOST}", "src-queue": "${QUEUE}", "dest-uri": "amqp://${SERVER_3_RABBITMQ_CREDS}@${SERVER_3_RABBITMQ_HOST}/${SERVER_3_RABBITMQ_VHOST}", "add-forward-headers": false, "dest-exchange": "MOOMS.GENERAL.TOPIC_EXCHANGE", "ack-mode": "on-confirm", "delete-after": "never" } EOF )"
Confirm that the shovel was created successfully by running the command below.
The "state" must show "running" in the output. If the shovel is not running at this stage, events may be lost, depending on the overall process being carried out.
rabbitmqctl -p <YOUR_VHOST> shovel_status
Assuming events are being ingested via LAMs and UI Integrations, the count for the Events.HA queue on the Server 3 node should now be increasing.
You can check this using the CLI command below:
rabbitmqctl -p <YOUR_VHOST> list_queues
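For example, to poll the queue depth every few seconds (an illustrative convenience, not part of the official procedure):
watch -n 5 "rabbitmqctl -p <YOUR_VHOST> list_queues | grep Events.HA"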
On Server 1
Confirm that the RabbitMQ HA policy is still present on the Events queue:
VHOST=$(${MOOGSOFT_HOME}/bin/utils/moog_config_reader -k mooms.zone)
rabbitmqctl list_queues name durable policy -p "${VHOST}" | grep Events.HA
This command should return a value. If it does not, the policy should be reapplied with this command:
rabbitmqctl set_policy -p "${VHOST}" ha-all ".+.HA" '{"ha-mode":"all"}'
Stop RabbitMQ
service rabbitmq-server stop
On Server 2
Remove the Server 1 RabbitMQ node from the cluster:
rabbitmqctl forget_cluster_node rabbit@<SERVER_1_IP_OR_HOSTNAME>
On Server 1
Stop all services (update the commands below as needed)
service apache-tomcat stop
service opensearch stop
service restlamd stop
service socketlamd stop
Delete the RabbitMQ mnesia directory to reset RabbitMQ in preparation for the upgrade
rm -rf /var/lib/rabbitmq/mnesia/*
Upgrade the packages on the server
Follow the same package-upgrade procedure described above for Server 3, from ensuring the patch RPMs are available through the SAML configuration note. All of the same warnings and version-specific steps apply on this server.
Upgrade the database schema
Upgrade the database schema and refresh all stored procedures (provide the 'ermintrude' DB user password when prompted):
$MOOGSOFT_HOME/bin/utils/moog_db_auto_upgrader -t 9.2.0 -u ermintrude
Cluster RabbitMQ with the Node 3 RabbitMQ
The primary erlang cookie is located at /var/lib/rabbitmq/.erlang.cookie on whichever RabbitMQ node was upgraded first.
The erlang cookie must be the same on all RabbitMQ nodes: replace the erlang cookie on all other RabbitMQ nodes with the primary erlang cookie.
You may need to change the file permissions on the existing erlang cookies first to allow those files to be overwritten. For example:
chmod 600 /var/lib/rabbitmq/.erlang.cookie
After replacing them, make the cookies on the non-primary nodes read-only:
chmod 400 /var/lib/rabbitmq/.erlang.cookie
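A minimal sketch of distributing the primary cookie, assuming root ssh access between the nodes (the target hostname is a placeholder):
# Run on the primary (first-upgraded) node for each remaining RabbitMQ node
scp /var/lib/rabbitmq/.erlang.cookie root@<TARGET_HOST>:/var/lib/rabbitmq/.erlang.cookie
ssh root@<TARGET_HOST> "chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie"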
Restart RabbitMQ on the nodes where the erlang cookie has been replaced
service rabbitmq-server restart
To join a RabbitMQ node to a cluster, run these commands on each server that needs to join the cluster, first substituting the short hostname of the RabbitMQ server you wish to cluster with. The short hostname is the full hostname excluding the DNS domain name; for example, if the hostname is ip-172-31-82-78.ec2.internal, the short hostname is ip-172-31-82-78. To find the short hostname, run rabbitmqctl cluster_status on the appropriate server.
systemctl restart rabbitmq-server
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@<NODE_3_SHORTNAME>
rabbitmqctl start_app
Run rabbitmqctl cluster_status to get the cluster status. Example output is as follows:
Cluster status of node rabbit@ip-172-31-93-201
Basics
Cluster name rabbit@ip-172-31-93-201.ec2.internal
Disk Nodes
rabbit@ip-172-31-93-201
rabbit@ip-172-31-85-42
rabbit@ip-172-31-93-201
Running Nodes
rabbit@ip-172-31-93-201
rabbit@ip-172-31-85-42
rabbit@ip-172-31-93-201
Versions
...
Update the RabbitMQ brokers in $MOOGSOFT_HOME/config/system.conf to include Server 1 and Server 3
The entries under mooms -> brokers should be added or uncommented as needed
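As an illustrative sketch (hostnames are placeholders, and your system.conf layout may differ), the brokers list at this stage would include the Server 1 and Server 3 nodes:
"mooms" :
{
    "brokers" : [
        { "host" : "<SERVER_1_HOSTNAME>", "port" : 5672 },
        { "host" : "<SERVER_3_HOSTNAME>", "port" : 5672 }
    ],
    ...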
Upgrade Apache-Tomcat
Warning
This process will completely remove and re-deploy Apache-Tomcat. If the Apache-Tomcat logs need to be kept, a copy should be made before proceeding.
Deploy the new version of Apache Tomcat and the latest webapps:
$MOOGSOFT_HOME/bin/utils/moog_init_ui.sh -twf
If you made any changes to the original Apache Tomcat service script (such as the Xmx value), apply the same changes to the new version. This will require a restart of the apache-tomcat service
Perform final checks
Validate the patch:
$MOOGSOFT_HOME/bin/utils/moog_install_validator.sh
$MOOGSOFT_HOME/bin/utils/tomcat_install_validator.sh
$MOOGSOFT_HOME/bin/utils/moog_db_validator.sh
If there are any errors from the validators, contact Moogsoft Support
Re-install the latest 'Addons' pack: Install Moogsoft Add-ons
The OpenSearch Cluster now needs to be recreated (if it has not been already): Opensearch Clustering Guide - RPM
Restart any event feeds if they were stopped
Start the LAMs (update the commands below as needed)
service restlamd restart
service socketlamd restart
On Server 2
Stop Tomcat and LAMs (update the commands below as needed)
service apache-tomcat stop
service restlamd stop
service socketlamd stop
Wait for the old Events.HA RabbitMQ queue to drain via the shovel to the new nodes. The command below must report a message count of 0 before you proceed:
VHOST=$(${MOOGSOFT_HOME}/bin/utils/moog_config_reader -k mooms.zone)
rabbitmqctl -p "${VHOST}" list_queues name messages | grep Events.HA
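A hedged sketch of a polling loop that waits for the drain to complete (assumes a single matching Events.HA queue):
VHOST=$(${MOOGSOFT_HOME}/bin/utils/moog_config_reader -k mooms.zone)
# Poll until the Events.HA message count reaches zero
while [ "$(rabbitmqctl -p "${VHOST}" list_queues name messages | grep Events.HA | awk '{print $2}')" != "0" ]; do
    echo "Waiting for Events.HA queue to drain..."
    sleep 10
done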
Stop RabbitMQ
service rabbitmq-server stop
Delete the RabbitMQ mnesia directory to reset RabbitMQ in preparation for the upgrade
rm -rf /var/lib/rabbitmq/mnesia/*
Upgrade the packages on the server
Follow the same package-upgrade procedure described above for Server 3, from ensuring the patch RPMs are available through the SAML configuration note. All of the same warnings and version-specific steps apply on this server.
Cluster RabbitMQ with the Node 3 RabbitMQ
Follow the same clustering procedure described above for Server 1: replace the erlang cookie with the primary cookie, restart rabbitmq-server, join the cluster with rabbitmqctl join_cluster rabbit@<NODE_3_SHORTNAME>, and confirm membership with rabbitmqctl cluster_status.
Update the RabbitMQ brokers in $MOOGSOFT_HOME/config/system.conf to include Server 1, Server 2, and Server 3
The entries under mooms -> brokers should be added or uncommented as needed
Upgrade Apache-Tomcat and perform final checks
Repeat the Apache-Tomcat upgrade, patch validation, add-ons re-installation, OpenSearch cluster recreation, and LAM restart steps described above for Server 1.
On Server 1
Ensure all RabbitMQ quorum queues are updated to have the right number of replicas:
rabbitmq-queues grow rabbit@<SERVER1_HOSTNAME_OR_IP> all
rabbitmq-queues grow rabbit@<SERVER2_HOSTNAME_OR_IP> all
rabbitmq-queues grow rabbit@<SERVER3_HOSTNAME_OR_IP> all
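To confirm each quorum queue now has the expected replicas, you can list the queues with their member nodes (the 'members' field is available for quorum queues in recent RabbitMQ versions):
rabbitmqctl -p <YOUR_VHOST> list_queues name type members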
Update the RabbitMQ brokers in $MOOGSOFT_HOME/config/system.conf to include Server 1, Server 2, and Server 3
The entries under mooms -> brokers should be added or uncommented as needed
Start MoogFarmD
service moogfarmd restart
Restart Apache-Tomcat
service apache-tomcat restart
On Server 2
Start MoogFarmD
service moogfarmd restart
Upgrade the Moogsoft Bridge Servers