
Migrate to v9 from v8.x.x

Follow these steps to perform a migration/upgrade to Moogsoft Onprem v9.0.x from v8.0.x, v8.1.x, or v8.2.x. Upgrades from v6.x or v7.x are not supported.

This guide requires Percona XtraBackup to export the v8.x database. Ensure it is installed on one of the v8.x.x servers you are migrating data from and that it has access to the database.

Expected Downtime during the migration

This process WILL involve some downtime because in-place upgrades are not possible (new major database and major operating system versions are required). However, no loss of events should occur as the v9 RabbitMQ instance buffers events until it is safe for the v9 MoogFarmd process to start up.

High level steps and potential downtime impacts
  1. Install v9 on new RHEL8 servers and install Percona v8: No downtime needed at this stage for v8.x deployment. May be completed well in advance of the actual migration.

  2. Copy the configuration from v8.x to v9.x servers: No downtime needed at this stage for v8.x deployment. May be completed well in advance of the actual migration, but it is important to ensure that the latest versions of the configuration files are deployed on the v9 servers.

  3. Stop services and create a RabbitMQ Shovel to buffer events: Downtime expected at this stage. The v9.x RabbitMQ instance will buffer events, but MoogFarmd will no longer be running on either side. The UI will not be usable from this point until the upgrade is complete.

  4. Prepare the v8.x database for migration: Downtime expected at this stage for v8.0.x and v8.1.x deployments only, as these steps will make the schema incompatible with the v8.0.x/v8.1.x deployment. Deployments on v8.2.x can skip this step.

  5. Migrate the database to the new servers: Downtime expected at this stage. The time taken to export and import the database varies, depending on the size of the database and the speed of the connection between the servers when it is transferred (for example, via scp).

  6. Upgrade the database: Downtime expected at this stage. The actual amount of downtime should be small as it is a minor schema upgrade only.

  7. Allow the new Percona cluster to perform an SST: Downtime expected at this stage. It is strongly recommended that you allow the new Percona cluster to sync to ensure all nodes are available on the new v9.x system for failovers, if needed. Otherwise, all new connections to the bootstrap node will be blocked until the sync is complete.

  8. Restart the v9.x services and disable the v8.x services: Downtime not applicable at this stage. Once the v9.x system is up and running, the v8.x system can be fully turned off.

Pre-upgrade v8.x database health check

It is important to ensure the database schema is in a good state before attempting the upgrade.

Run the following command:

$MOOGSOFT_HOME/bin/utils/moog_db_validator.sh

If any errors are reported, contact Moogsoft Support before attempting to upgrade to confirm whether it is safe to proceed with the schema differences.
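If you need to share the results with Moogsoft Support, it can help to capture the validator output to a file; a minimal sketch:

# Run the validator and keep a copy of its output (stdout and stderr) for Support
$MOOGSOFT_HOME/bin/utils/moog_db_validator.sh 2>&1 | tee /tmp/moog_db_validator_$(date +%Y%m%d).log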

Install v9.x.x

Moogsoft Onprem must be installed on new RHEL8 servers (minimum RHEL v8.6; earlier Red Hat versions are not supported) in the same deployment configuration (single server, HA, etc.) as the v8.x.x deployment. Installation instructions are available here: Install Moogsoft Onprem.

Important

It is important to ensure that the new Percona database nodes are configured with the same settings as they had in the existing production v8.x deployment.

The install_percona_nodes.sh/install_percona_nodes_tarball.sh scripts will deploy a new node with the recommended my.cnf configuration file. The following properties cannot be copied from Percona v5.7 to Percona v8 as they are no longer supported:

  • wsrep_sst_auth

  • gtid-mode

  • enforce-gtid-consistency

  • service-startup-timeout

The following properties must be in place for Percona v8 to work as expected:

  • disable-log-bin

  • authentication_policy = mysql_native_password

  • pxc-encrypt-cluster-traffic = OFF
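For reference, these required settings normally appear in my.cnf along the following lines. This is a sketch only: the install scripts should already put them in place, and placement under the [mysqld] section is assumed.

[mysqld]
# Required for Percona v8 to work as expected with Moogsoft Onprem
disable-log-bin
authentication_policy = mysql_native_password
pxc-encrypt-cluster-traffic = OFF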

Important

Enabling the "latency-performance" RHEL profile is strongly recommended. This profile allows RabbitMQ to operate much more efficiently, increasing and smoothing out throughput.

For more information on performance profiles, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/getting-started-with-tuned_monitoring-and-managing-system-status-and-performance

Enable the profile by running the following command as root:

tuned-adm profile latency-performance

This setting will survive machine restarts and only needs to be set once.
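You can confirm which profile is currently active at any time:

# Show the currently active tuned profile
tuned-adm active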

Copy configuration files to v9 servers

Back up the following folders and copy them to the v9 servers:

  • $MOOGSOFT_HOME/config

  • $MOOGSOFT_HOME/bots

Additionally, the following folders can contain files which need to be copied across:

  • $MOOGSOFT_HOME/etc/ (for the .key file etc)

  • $MOOGSOFT_HOME/etc/saml/ (for the idp and sp metadata files)

  • /etc/init.d (if any custom service scripts are in use or the OOTB ones have been customized)

  • The moog-data directory, which contains profile photos, attachments, etc. For RPM deployments the default is /var/lib/moogsoft/moog-data

Use this command to create a tarball of the config and bots folders, along with any SAML XML files:

tar --ignore-failed-read --exclude=config/system.conf --exclude=config/servlets.conf -C $MOOGSOFT_HOME -cvzf config_bots_saml_for_migration.tgz config bots etc/saml
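Optionally, list the archive contents to confirm the expected files were captured:

# List the first entries of the archive as a quick sanity check
tar -tzf config_bots_saml_for_migration.tgz | head -20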

Create a tarball of the moog-data directory using the appropriate command for your deployment type:

  • For RPM deployments: use this command, assuming the default location. If the location has been customized in $MOOGSOFT_HOME/config/servlets.conf, substitute the customized folder in this command:

    tar -C /var/lib/moogsoft/moog-data -cvzf moog_data_for_migration.tgz .
  • For Tarball deployments: use this command, assuming the default location. If the location has been customized in $MOOGSOFT_HOME/config/servlets.conf, substitute the customized folder in this command:

    tar -C $MOOGSOFT_HOME/moog-data -cvzf moog_data_for_migration.tgz .

Now, copy the tgz files onto the v9 servers.
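For example (a sketch; <v9hostname> and the /tmp destination are placeholders for your environment):

# Copy the migration archives to a v9 server
scp config_bots_saml_for_migration.tgz moog_data_for_migration.tgz <v9hostname>:/tmp/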

On the v9 server, it is important to make a backup of the config and bots folders:

tar -C $MOOGSOFT_HOME -cvzf ~/config_bots_v9_original.tgz config bots

You can now extract the v8 tgz files and fix the permissions:

  • For RPM-based deployments:

    tar --exclude=config/system.conf --exclude=config/servlets.conf -xf config_bots_saml_for_migration.tgz -C $MOOGSOFT_HOME/
    tar -xf moog_data_for_migration.tgz -C /var/lib/moogsoft/moog-data
    chown -R moogsoft:moogsoft $MOOGSOFT_HOME/config/
    chown -R moogsoft:moogsoft $MOOGSOFT_HOME/bots/
    chown -R moogsoft:moogsoft /var/lib/moogsoft/moog-data
  • For tarball-based deployments:

    tar --exclude=config/system.conf --exclude=config/servlets.conf -xf config_bots_saml_for_migration.tgz -C $MOOGSOFT_HOME/
    tar -xf moog_data_for_migration.tgz -C $MOOGSOFT_HOME/moog-data
    chown -R $(whoami):$(whoami) $MOOGSOFT_HOME/config
    chown -R $(whoami):$(whoami) $MOOGSOFT_HOME/bots
    chown -R $(whoami):$(whoami) $MOOGSOFT_HOME/moog-data

Optional: Apply v9 configuration updates to the migrated v8 configuration files

The following additions and changes to the referenced files are optional and can be made to apply new v9 behaviors. Note that in most cases, if the properties are not applied, the default value of each property will be used.

  • Changes for ${MOOGSOFT_HOME}/config/system.conf

    • New code:

      #
      # This configuration is used to tell automatic recovery how long to sleep
      # between reconnect attempts, except for producers.
      #
      # Automatic connection recovery attempts by default are dynamic and managed by an 
      # out-of-the-box backoff algorithm that calculates the next delay using default Fibonacci
      # sequence after each failed attempt. Alternatively, the recovery interval can be 
      # configured to be a fixed 5 seconds between each attempt by setting this 
      # parameter to true
      #
      # The default is 'false' if not specified.
      #
      "fixed_recovery_interval" : false,

      Can be added in the "mooms" block (a placement sketch appears after this list)

    • New code:

      # Set aurora to true to use mariadb aurora driver. Defaults to false
      # "aurora"        : false,

      Can be added to the "mysql" block. This setting MUST be set to true if the deployment is connected to an AWS Aurora Database

    • New code:

      # Buffer Limit for OpenSearch connections. Value is in megabytes.
      # Default is 1024. Minimum value supported is 100.
      , "buffer_limit" : 1024

      Can be added in the "search" block

    • New code:

      # A List of IPs from which Management Center operations are allowed.
      # If not mentioned, by default all source connections are allowed.
      "trusted_interfaces" : ["127.0.0.1"],

      Can be added in the "failover"/"hazelcast" block

    • Updated code:

      "processes": []

      Can replace the "processes" variable inside the "process_monitor" block

  • Changes for ${MOOGSOFT_HOME}/config/servlets.conf

    • New code:

      # To avoid XSS vulnerability in Situation Alert Tools and Client Alert Tools
      # Set sanitize_client_tool_urls: false to disable sanitizing ClientTool Urls. Defaults to true
      sanitize_client_tool_urls: true

      Can be added in the moogsvr: block

  • Changes for ${MOOGSOFT_HOME}/config/security.conf

    • New code:

      # Uncomment to enable session logs in session_audit_log table. Defaults to false
      #
      #   "enableSessionAuditLog" : false

      Can be added in the "global_settings" block

    • New code:

      # This is to set password policy for DB Users, it is optional.
      # It is compulsory to add regex and validationMessage if passwordPolicy is configured
       "passwordPolicy" : {
             "regex" : "^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#_.&()\\-[{}]:;',?/*~$^+=<>]).{8,}$",
             "validationMessage" : "Password length should be at least 8 characters, and consist of upper and lower case letters, numbers, and special characters in any order"
       }

      Can be added in the "Example DB realm" block

    • New code:

      # Before using encryptedSystemPassword, please encrypt the password using the moog_encryptor utility
      # "encryptedSystemPassword": "<ENCRYPTED PASSWORD>",

      Can be added in the "Example LDAP realm" block

    • New code:

      # If overlayGroupSearch is set to true, user groups will be identified
      # by the memberof overlay instead. This property should be used with the
      # groupSearchAttribute property.
      # "overlayGroupSearch": true,
      #
      # If the overlayGroupSearch property is true, groupSearchAttribute allows
      # control over which LDAP attribute is used to identify group information
      # "groupSearchAttribute" : "memberOf",

      Can be added in the "Example LDAP realm" block

    • New code:

      #
      # Controls the 'forceAuth' setting. Defaults to true if commented out or missing
      # Needs to be set to false if used with an Azure IDP to prevent the user being asked
      # to sign in more than needed
      #
      # "forceAuth" : true,

      Can be added in the "my_saml_realm" block
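As an illustration of where the system.conf additions sit, the sketch below shows the "mooms" block with fixed_recovery_interval added. The surrounding entries are placeholders for whatever already exists in your file; only the new property comes from the list above.

"mooms" :
{
    # ... existing mooms settings (zone, brokers, credentials, etc.) remain unchanged ...

    # Use a fixed 5-second reconnect delay instead of the default
    # Fibonacci backoff (new in v9)
    "fixed_recovery_interval" : false

    # ... remainder of the mooms block ...
},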

Stop services, create a RabbitMQ Shovel to buffer events, and prevent UI access

RabbitMQ supports a feature called Shovels: https://www.rabbitmq.com/shovel.html

This guide uses a Dynamic Shovel (https://www.rabbitmq.com/shovel-dynamic.html#tutorial), which enables the RabbitMQ instance in the v8.x deployment to forward and buffer any events in the *.Events.HA queue to the equivalent queue in the v9.x RabbitMQ instance until the v9.x MoogFarmd process can be safely started up. Hence, events will not be lost during the later stages of the v9.x migration process.

  1. Start the MoogFarmd instance in the v9.x deployment so it creates the appropriate queue (Events.HA) in RabbitMQ:

    • RPM-based deployments:

      service moogfarmd start
    • Tarball-based deployments:

      $MOOGSOFT_HOME/bin/utils/process_cntl moog_farmd start
  2. When the queue is present, stop the MoogFarmd process in the v9.x deployment:

    RPM-based deployments:

    service moogfarmd stop

    Tarball-based deployments:

    $MOOGSOFT_HOME/bin/utils/process_cntl moog_farmd stop
  3. Now, stop the MoogFarmd instance in the v8.x deployment:

    RPM-based deployments:

    service moogfarmd stop

    Tarball-based deployments:

    $MOOGSOFT_HOME/bin/utils/process_cntl moog_farmd stop

For the following section, use these values:

  1. QUEUE_TO_SHOVEL value needs to be: moog_farmd.moog_farmd.Events.HA

  2. RABBITMQ1 value needs to be the v8.x.x RabbitMQ server

  3. RABBITMQ2 value needs to be the v9.0.0 RabbitMQ server

RabbitMQ Shovel

This process documents the way to create a RabbitMQ Shovel (https://www.rabbitmq.com/shovel-dynamic.html) from one RabbitMQ instance (referenced as RABBITMQ1 in this process) to a second RabbitMQ instance (referenced as RABBITMQ2 in this process). RabbitMQ vhosts/usernames/passwords/hostnames etc need to be changed as appropriate. Please check the guidance above this section for specific instructions on which servers to use (which varies depending on the use-case).

In the following section, where QUEUE_TO_SHOVEL is mentioned, please check the guidance above this section for specific instructions on what the value of this property should be.

  1. Confirm that the expected QUEUE_TO_SHOVEL is present in both RabbitMQ instances using the following command:

    rabbitmqctl list_queues -p <vhost> | grep QUEUE_TO_SHOVEL
  2. In the RABBITMQ1 deployment, run the following commands to enable the RabbitMQ Shovel plugin:

    rabbitmq-plugins enable rabbitmq_shovel
    rabbitmq-plugins enable rabbitmq_shovel_management

    The output of the above commands should be:

    The following plugins have been enabled:
      rabbitmq_shovel
    started 1 plugins.
    The following plugins have been enabled:
      rabbitmq_shovel_management
    started 1 plugins.
  3. Confirm that the RABBITMQ1 instance can communicate with the RABBITMQ2 instance by running this command from the RABBITMQ1 server. Substitute the IP address or hostname of the RABBITMQ2 host for the example 192.168.51.8:

    sleep 1 | telnet 192.168.51.8 5672

    The output of this command should contain:

    Connected to 192.168.51.8
  4. On the RABBITMQ1 server, run the following commands to create a Dynamic Shovel named 'my_shovel' from a RABBITMQ1 queue to the same queue on RABBITMQ2. Update the IP addresses, vhosts, and credentials as needed for your environment before running the final command:

    RABBITMQ1_CREDS="moogsoft:m00gs0ft"
    RABBITMQ1_VHOST="TEST_ZONE"
    RABBITMQ1_HOST="127.0.0.1"
    QUEUE="QUEUE_TO_SHOVEL"
    RABBITMQ2_CREDS="moogsoft:m00gs0ft"
    RABBITMQ2_VHOST="TEST_ZONE"
    RABBITMQ2_HOST="192.168.51.8"
    
    rabbitmqctl -p "${RABBITMQ1_VHOST}" set_parameter shovel my_shovel '{"src-protocol": "amqp091", "src-uri": "amqp://'${RABBITMQ1_CREDS}'@'${RABBITMQ1_HOST}'/'${RABBITMQ1_VHOST}'", "src-queue": "'${QUEUE}'", "dest-protocol": "amqp091", "dest-uri": "amqp://'${RABBITMQ2_CREDS}'@'${RABBITMQ2_HOST}'/'${RABBITMQ2_VHOST}'"}'
  5. Confirm that the Shovel was created successfully by logging in to the RabbitMQ admin Shovel user interface: http://<RABBITMQ1HostOrIP>:15672/#/shovels

    The "state" must show "running" on this page. If the Shovel is not running at this stage, events may be lost depending on the overall process being carried out.

  6. The count for the QUEUE_TO_SHOVEL queue in the RABBITMQ2 instance should now be increasing. You can check this using the queues interface: http://<RABBITMQ2HostOrIP>:15672/#/queues
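The same checks can also be made from the command line if preferred; a sketch, using the <vhost> placeholder from step 1:

# On the RABBITMQ2 (v9.x) server: the message count for the shovelled queue should be growing
rabbitmqctl list_queues -p <vhost> name messages | grep Events.HA

# On the RABBITMQ1 (v8.x) server: the shovel plugin also reports shovel state
rabbitmqctl shovel_status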

At this stage, Apache-Tomcat can be blocked (but not disabled) to prevent users (human and automated) from updating records in the database during/after it has been exported.

Run these commands on all v8.x servers which are running nginx:

  • RPM-based deployments:

    sed -i -e 's/location \/moogsvr/location \/moogsvrOLD/' -e 's/location \/graze/location \/grazeOLD/' /etc/nginx/conf.d/moog-ssl.conf
    service nginx reload
  • Tarball-based deployments:

    sed -i -e 's/location \/moogsvr/location \/moogsvrOLD/' -e 's/location \/graze/location \/grazeOLD/' $MOOGSOFT_HOME/cots/nginx/config/conf.d/moog-ssl.conf
    $MOOGSOFT_HOME/bin/utils/process_cntl nginx restart
    

Note

This step prevents access to v8.x via the UI, but allows UI-based integrations to continue to accept Events and queue them in the v9.x RabbitMQ (via the Shovel).

Prepare the v8.0.x or v8.1.x database schema for migration

Percona v8.x / Aurora3 is not directly compatible with older versions of our schema (pre-v8.2.x), so before the database can be migrated, some SQL is needed to prepare the schema.

Warning

Do not perform this step if the database schema started at OR has been upgraded to v8.2.x. This step is ONLY for systems migrating from v8.0.x or v8.1.x.

Important

Applying these changes will make the database schema incompatible with v8.x.x, so if the v8.x.x environment might still be needed, it is recommended that you make a backup of the database before running the script below.

Run the following command (copy the whole code block) on the server with the existing v8.0.x or v8.1.x database:

$MOOGSOFT_HOME/bin/utils/moog_mysql_client <<-EOF
#MOOG-17298
RENAME TABLE \`groups\` TO user_groups;

drop procedure create_and_fetch_thread_entry;
drop procedure create_remote_user;
drop procedure create_user_with_id;
drop procedure set_password;
drop procedure update_user;
drop procedure reset_table_keys;
drop view lock_waits;

#MOOG-17299
ALTER TABLE enrichments CHANGE COLUMN \`rank\` ranking INT NOT NULL;

#MOOG-17300
ALTER TABLE feedback CHANGE COLUMN \`window\` time_window BIGINT;
SELECT properties->>"$.historic_database_name" INTO @hist_db_name FROM system_config WHERE config_type = 'Splitter';
SET @sql_text = CONCAT('ALTER TABLE ', @hist_db_name, '.feedback CHANGE COLUMN \`window\` time_window BIGINT');
PREPARE stmt FROM @sql_text;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
EOF
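To verify the changes applied, you can query the schema afterwards; a minimal sketch using the same client (assumes the standard moogdb schema):

$MOOGSOFT_HOME/bin/utils/moog_mysql_client <<-EOF
#Confirm the rename and column changes applied
SHOW TABLES LIKE 'user_groups';
SHOW COLUMNS FROM enrichments LIKE 'ranking';
SHOW COLUMNS FROM feedback LIKE 'time_window';
EOF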

Migrate database to the new servers

Important

The instructions below are for exporting a database from Percona and into Percona. Users with databases on Aurora should follow an official AWS-supported process to migrate their database from Aurora2 to Aurora3 such as: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.mysql80-upgrade-procedure.html#AuroraMySQL.mysql80-upgrade-example-v2-v3
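Before taking the backup, it is worth confirming that the backup target has enough free space; a minimal sketch, assuming the default RPM datadir and the /tmp/v8backup target used below:

# Approximate size of the database to be backed up (RPM default datadir shown;
# adjust the path for tarball deployments)
du -sh /var/lib/mysql
# Free space on the filesystem that will hold the backup target-dir
df -h /tmp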

Follow these steps to create a backup of the old database and import it into the new Percona cluster:

  1. Make a backup of the existing v8.x.x database. Ensure the specified "target-dir" directory/network mount has enough disk space.

    • RPM-based deployments:

      xtrabackup --backup --target-dir=/tmp/v8backup -u root
    • Tarball-based deployments:

      xtrabackup --backup --target-dir=/tmp/v8backup -u root
      • If the above command produces this error:

        Failed to connect to MySQL server: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2).

        Add this to the xtrabackup command (change the socket file location if it is different in your deployment):

         --socket=~/moog_datastore/var/lib/mysql/mysql.sock

    Important

    Ensure this command does not end with errors when it is run, for example:

    ...
    [01] xtrabackup: Error: failed to read page after 10 retries. File ./moogdb/topo_node_entropy.ibd seems to be corrupted.
    [01] xtrabackup: Error: xtrabackup_copy_datafile() failed.
    [01] xtrabackup: Error: failed to copy datafile.

    A successful run should end like this:

    ...
    xtrabackup: Transaction log of lsn (6302197911) to (6302197920) was copied.
    220707 12:38:18 completed OK!
  2. Important: Perform a "Prepare" of the database backup to ensure it is fully synchronized with the current database.

    • RPM-based deployments:

      xtrabackup --prepare --target-dir=/tmp/v8backup -u root
    • Tarball-based deployments:

      xtrabackup --prepare --target-dir=/tmp/v8backup -u root
      • If the above command produces this error:

        Failed to connect to MySQL server: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2).

        Add this to the xtrabackup command (change the socket file location if it is different in your deployment):

         --socket=~/moog_datastore/var/lib/mysql/mysql.sock

    A successful run should end like this:

    ...
    InnoDB: FTS optimize thread exiting.
    InnoDB: Starting shutdown...
    InnoDB: Shutdown completed; log sequence number 6302198824
    220707 12:40:21 completed OK!
  3. In the v9.x deployment, stop the Percona process on all nodes in the cluster, and remove the MySQL datadir:

    • RPM-based deployments:

      service mysqld stop
      rm -rf /var/lib/mysql/*
    • Tarball-based deployments (replace the moog_datastore path as appropriate):

      $MOOGSOFT_HOME/bin/utils/process_cntl mysqld stop
      rm -rf ~/moog_datastore/var/lib/mysql/*
  4. Choose a Percona node to be the "bootstrap" node in the v9 cluster, copy the exported database directory to it, and then restart the service.

    Important

    The Percona "datadir" must be empty before restoring the backup. The MySQL server must also be shut down before the restore is performed; you can't restore to the datadir of a running MySQL instance.

    Copy the v8backup folder contents to the v9 MySQL datadir, fix the permissions, configure the node to start as bootstrap, and then start the node.

    The first rsync command is an example of how to copy the files from the v8 database backup into the v9 deployment mysql data directory; change the <v8hostname> parameter as needed. Alternatively, the following scp command may be used.

    • RPM-based deployments - run these commands on the v9 Percona server:

      # Copy the files from one server to the next using either this rsync command or the scp one after it
      # rsync -avrP <v8hostname>:/tmp/v8backup/* /var/lib/mysql/
      scp -r <v8hostname>:/tmp/v8backup/* /var/lib/mysql/
      chown -R mysql:mysql /var/lib/mysql
      sed -i 's;wsrep_cluster_address.*;wsrep_cluster_address = gcomm://;' /etc/my.cnf
      sed -i 's/safe_to_bootstrap.*/safe_to_bootstrap : 1/g' /var/lib/mysql/grastate.dat 2>/dev/null
      service mysqld start
    • Tarball-based deployments - replace the moog_datastore path and usernames as appropriate, then run these commands on the v9 Percona server:

      # Copy the files from one server to the next using either this rsync command or the scp one after it
      # rsync -avrP <v8hostname>:/tmp/v8backup/* ~/moog_datastore/var/lib/mysql/
      scp -r <v8hostname>:/tmp/v8backup/* ~/moog_datastore/var/lib/mysql/
      chown -R $(whoami):$(whoami) ~/moog_datastore/var/lib/mysql
      sed -i 's;wsrep_cluster_address.*;wsrep_cluster_address = gcomm://;' ~/.my.cnf
      sed -i 's/safe_to_bootstrap.*/safe_to_bootstrap : 1/g' ~/moog_datastore/var/lib/mysql/grastate.dat 2>/dev/null
      $MOOGSOFT_HOME/bin/utils/process_cntl mysqld start
  5. Ensure the "ermintrude" user has updated GRANTs in the migrated database by running this command:

    mysql -u root -e "GRANT SYSTEM_USER, SELECT, PROCESS ON *.* TO 'ermintrude'@'%', 'ermintrude'@'localhost';"
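Before upgrading the schema, a quick read check against the imported data can confirm the restore and grants worked; a minimal sketch, assuming the standard moogdb schema and its alerts table:

# Count alerts in the migrated database as a basic read test
$MOOGSOFT_HOME/bin/utils/moog_mysql_client <<-EOF
SELECT COUNT(*) AS alert_count FROM alerts;
EOF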

Upgrade the database schema to v9

Run the following command to upgrade the newly-imported v8.x database to v9.0.0:

$MOOGSOFT_HOME/bin/utils/moog_db_auto_upgrader -t 9.0.0 -u ermintrude

Validate the upgrade

Run the following commands on the v9.x servers to ensure the installation is valid and the database was successfully upgraded:

$MOOGSOFT_HOME/bin/utils/moog_install_validator.sh
$MOOGSOFT_HOME/bin/utils/moog_db_validator.sh
$MOOGSOFT_HOME/bin/utils/tomcat_install_validator.sh

If there are any errors from the validators, contact Moogsoft Support.

Allow the Percona cluster to perform an SST

Start the remaining two Percona nodes to allow the SST (State Snapshot Transfer) process to begin:

  • RPM-based deployments:

    service mysqld start
  • Tarball-based deployments:

    $MOOGSOFT_HOME/bin/utils/process_cntl mysqld start

Important

It is strongly recommended that you give the SST process time to complete to minimize any performance or operational impact before starting the event feeds again.

On the two "Joiner" Percona nodes, you can run the following command to check if the SST is complete:

curl http://localhost:9198

If the process is complete for the node, the command should return:

Percona XtraDB Cluster Node is synced
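If you prefer to wait on the sync from a script, a simple polling loop can be used (a sketch; it assumes the endpoint above only returns this message once the node is synced):

# Poll the local health endpoint until the joiner reports as synced
until curl -s http://localhost:9198 | grep -q 'Percona XtraDB Cluster Node is synced'; do
  echo "Waiting for SST to complete..."
  sleep 30
done
echo "SST complete on this node"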

When the SST is complete for both joiner nodes, the "bootstrap" node can be converted back into a regular node.

  • RPM-based deployments:

    1. Repopulate the wsrep_cluster_address = gcomm:// line in the /etc/my.cnf file with the IP addresses/hostnames of all three Percona nodes, comma-separated (an example line is shown after this list).

    2. Restart the node:

      service mysqld restart
  • Tarball-based deployments:

    1. Repopulate the wsrep_cluster_address = gcomm:// line in the ~/.my.cnf file with the IP addresses/hostnames of all three Percona nodes, comma-separated.

    2. Restart the node:

      $MOOGSOFT_HOME/bin/utils/process_cntl mysqld restart
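For example, the repopulated line might look like this (the addresses are placeholders for your three Percona nodes):

wsrep_cluster_address = gcomm://10.0.0.11,10.0.0.12,10.0.0.13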

Deploy Add-ons

Install and deploy Add-ons.

Find the latest Add-ons using the instructions here: Install Moogsoft Add-ons

Restart the v9.x services and disable the v8.x services

  1. Restart Apache-Tomcat, MoogFarmd, LAMs, etc. in the v9.x deployment.

    • Examples for RPM-based deployments:

      service apache-tomcat restart
      service moogfarmd restart
      service restlamd restart
      service socketlamd restart
    • and for Tarball-based deployments:

      $MOOGSOFT_HOME/bin/utils/process_cntl apache-tomcat restart
      $MOOGSOFT_HOME/bin/utils/process_cntl moog_farmd restart
      $MOOGSOFT_HOME/bin/utils/process_cntl rest_lam restart
      $MOOGSOFT_HOME/bin/utils/process_cntl socket_lam restart

    Note

    When the v9.x MoogFarmd process starts up, it should consume and process the events that have been building up in the v9.x RabbitMQ Events.HA queue during the migration (a queue check is sketched after these steps).

  2. Before Apache-Tomcat, the LAMs, Integrations, and RabbitMQ can be shut down in the v8.x deployment, any applications configured to send events to the v8.x deployment (LAMs or UI Integrations) need to be updated to send events to the v9.x deployment instead.

  3. Once this is done, all the v8.x processes can be shut down.

    • Examples for RPM-based deployments:

      service apache-tomcat stop
      service restlamd stop
      service socketlamd stop
      
    • Examples for Tarball-based deployments:

      $MOOGSOFT_HOME/bin/utils/process_cntl apache-tomcat stop
      $MOOGSOFT_HOME/bin/utils/process_cntl rest_lam stop
      $MOOGSOFT_HOME/bin/utils/process_cntl socket_lam stop
      
  4. Remove the RabbitMQ Shovel from the v8.x deployment, and shut down the v8.x instance of RabbitMQ.

    • Instructions for the v8.x RPM-based deployments (where "TEST_ZONE" is the v8 RabbitMQ vhost):

      rabbitmqctl clear_parameter shovel "my_shovel" -p TEST_ZONE
      service rabbitmq-server stop
      
    • Examples for the v8.x Tarball-based deployments (where "TEST_ZONE" is the v8 RabbitMQ vhost):

      rabbitmqctl clear_parameter shovel "my_shovel" -p TEST_ZONE
      $MOOGSOFT_HOME/bin/utils/process_cntl rabbitmq stop
  5. Re-create any crontab entries on the v9.0.0 servers that were present on the old v8.x.x servers. Possible entries include:

    • events_analyser

    • moog_archiver

    • broker-watchdog

    • process_keepalive

    Run the following command to enter a 'vi-like' editor to update the crontab as needed:

    crontab -e
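To confirm the buffered events are being drained (as noted in step 1), the queue depth can be checked on the v9.x RabbitMQ server; the count should fall towards zero as MoogFarmd catches up. A sketch, using the <vhost> placeholder from the Shovel section:

# Watch the Events.HA queue drain on the v9.x RabbitMQ server
rabbitmqctl list_queues -p <vhost> name messages | grep Events.HA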

Re-index OpenSearch

OpenSearch needs to be re-indexed so Situations and Alerts are searchable. After MoogFarmd has started fully, run the following command to trigger a re-index:

$MOOGSOFT_HOME/bin/utils/moog_indexer -f -n