Migrate an RPM deployment from v8.x to v9.x

Follow these instructions to upgrade your RPM v8.x deployment to Moogsoft Onprem v9.x. See Upgrade to Moogsoft Onprem v9.x for an overview of the process.

These steps should be carried out on one of the database servers in your v8.x deployment in advance of the migration maintenance window.

  1. Verify that Percona XtraBackup is installed on at least one of your v8.x database servers.

    xtrabackup --version

    If your v8.x deployment is a typical distributed high availability system with three Percona XtraDB database nodes, Percona XtraBackup will have been installed with the Percona XtraDB software.

  2. If the xtrabackup --version command returns 'command not found', consult the Percona documentation to install XtraBackup.

    Note

    If your installation uses a different MySQL database such as Aurora, consult its documentation to verify the presence of a database backup utility and install one if necessary. You will need to use the utility to export your v8.x databases and later import them into the MySQL 8 version of the original database (for example, Aurora MySQL v3).

  3. Verify that the backup utility has access to the moogdb, moog_reference, moog_intdb, and historic_moogdb databases. Change the hostname and port of the database as needed. Run the following command:

    mysql -u ermintrude -h localhost -P 3306 -p
    show databases;
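
    An example of the expected output follows (MySQL system databases will also appear, and the exact listing depends on your deployment):

    +--------------------+
    | Database           |
    +--------------------+
    | historic_moogdb    |
    | information_schema |
    | moog_intdb         |
    | moog_reference     |
    | moogdb             |
    +--------------------+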
  4. Use the moog_archiver utility to reduce the size of the historic_moogdb database, which stores older Situations, their contained alerts, alerts not in situations, and statistics data. For example, use this command to archive everything older than 13 months:

    $MOOGSOFT_HOME/bin/utils/moog_archiver -r -s 396 -l 396 -m

    Contact Moogsoft Support if you need help determining whether moog_archiver is already scheduled to run regularly, configuring moog_archiver, or verifying that the archiving process was successful.

    Moogsoft strongly recommends archiving as much older data as possible to speed up the v9.x migration process, reduce downtime during the migration maintenance window, and improve v9.x performance. Moogsoft recommends keeping no more than 13 months of data for production systems, and no more than 30 days of data for non-production systems.

  5. Conduct a database health check by running the following command:

    $MOOGSOFT_HOME/bin/utils/moog_db_validator.sh

    If any errors are reported, contact Moogsoft Support to determine whether it is safe to proceed with the migration.

Complete the v9.x installation on new Red Hat Enterprise Linux (RHEL) version 8.6 or higher servers in advance of the migration maintenance window. Only RHEL 8 is supported; RHEL 9 is not.

  1. Provision the necessary number of servers for the Moogsoft Onprem v9.x installation. The servers must be running RHEL version 8.6 or higher; RHEL 9 is not supported. Match the number and type of servers to the servers in your existing v8.x deployment. For example, if your v8.x deployment has two core/UI servers behind a load balancer and three database servers, duplicate that configuration with five new servers.

  2. Follow the appropriate instructions for an RPM or tarball single-server or distributed HA deployment on your RHEL8 servers. Consult Install Moogsoft Onprem for installation guides. Do not configure data ingestion at this time.

  3. If you are using encrypted passwords, generate new encrypted passwords for your v9.x deployment using the Moog Encryptor utility, ensuring the .key file is consistent across all servers that share the same configuration files.
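
    A minimal sketch of keeping the key file consistent between servers follows. The .key file location and the server name are assumptions for illustration only; check where the Moog Encryptor writes the key in your installation.

    # Hypothetical example: copy the encryption key file to another v9.x server that
    # shares the same configuration files. The source path shown is an assumption.
    scp $MOOGSOFT_HOME/etc/.key <v9_server2>:/tmp/.key
    # Then move /tmp/.key into the matching location on the destination server.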

  4. Install the latest released version of Moogsoft Add-Ons.

  5. Run the following command as root on each of the new servers to enable the latency performance profile. The setting persists across machine restarts, so the command only needs to be run once on each server.

    tuned-adm profile latency-performance

    The latency performance profile increases and smooths throughput to improve RabbitMQ messaging service efficiency. See the Red Hat documentation for more information on performance profiles.
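
    To confirm that the profile took effect, you can check the active profile on each server (the exact output wording may vary by tuned version):

    tuned-adm active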

Carry out these steps on each of the database servers in your v9.x deployment before the migration maintenance window.

  1. Examine the /etc/my.cnf file on one of your v8.x database servers and compare it to the same file on one of your v9.x database servers. Copy any custom settings from the v8.x server to the my.cnf files on each of your v9.x database servers. Note that the wsrep_sst_auth, gtid-mode, enforce-gtid-consistency, and service-startup-timeout properties do not exist in the v9.x my.cnf file, so those settings should not be transferred.
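
    For example, assuming you copy the v8.x file to /tmp/v8_my.cnf on a v9.x database server (the host placeholder and path are illustrative), the comparison might look like:

    scp <v8_db_host>:/etc/my.cnf /tmp/v8_my.cnf
    diff /tmp/v8_my.cnf /etc/my.cnf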

  2. For each of your v9.x database servers, verify that the following properties are set as indicated in the my.cnf file:

    Property                          Setting
    disable-log-bin                   n/a (ensure this property is enabled)
    default_authentication_plugin     mysql_native_password
    pxc-encrypt-cluster-traffic       OFF
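
    For reference, a minimal my.cnf fragment reflecting these settings might look like the following (a sketch only; keep the rest of your file as installed):

    [mysqld]
    # Binary logging disabled
    disable-log-bin
    # Authentication plugin required by Moogsoft Onprem v9.x
    default_authentication_plugin = mysql_native_password
    # Cluster traffic encryption disabled
    pxc-encrypt-cluster-traffic = OFF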

  3. Verify that the Percona replication settings in my.cnf in your v9.x deployment mirror those in your v8.x deployment (though hostnames and IP addresses will be different).

These steps should be completed before the migration maintenance window.

For this part of the migration process, you will need to choose whether to manually transfer custom settings from your v8.x servers to your v9.x servers or overwrite the configuration files on your v9.x servers with their v8.x equivalents.

If you overwrite the v9.x configuration files, you will preserve your deployment customizations, including all of your customized data ingestion LAMbots. You will need to update machine-specific settings, however, such as hostnames, IP addresses, High Availability cluster names, and RabbitMQ vhosts. Follow the guidance in the Moogsoft Onprem installation docs to identify and update these settings.

If you do not overwrite the v9.x configuration files, you will need to compare the v8.x and v9.x files using diff commands (or an equivalent tool) and manually transfer data ingestion and other customizations to your v9.x files. In this case the machine-specific settings for your v9.x servers will not need to be updated as they will have been set during the v9.x install process.
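
For example, assuming you have copied the v8.x $MOOGSOFT_HOME/config directory to /tmp/v8_config on a v9.x server (an illustrative location), a recursive comparison might look like:

    diff -r /tmp/v8_config $MOOGSOFT_HOME/config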

This guide suggests overwriting almost all of your v9.x configuration files with their v8.x equivalents, with the exception of the system.conf and servlets.conf files, in order to preserve the machine-specific settings in those two files.

  1. On each v8.x server, examine these directories to determine whether they contain custom configuration information. Compare the contents of the files to verify that they are consistent across your v8.x servers.

    Directory                      Type of Custom Information
    $MOOGSOFT_HOME/config          Moogsoft system configuration files
    $MOOGSOFT_HOME/bots            Moogsoft data ingestion and data processing files
    $MOOGSOFT_HOME/etc/saml        IdP and SP metadata files for SAML authentication
    /etc/init.d                    Moogsoft-specific customized or custom service scripts
    /var/lib/moogsoft/moog-data    Profile photos and attachments

    If you have customized any configuration files outside the directories listed above, you will need to re-enter those custom settings manually on the v9.x servers. Do not copy complete files other than those listed above from the v8.x servers to the v9.x servers.

  2. Based on your examination, choose a v8.x server to use as the source for the configuration files you will migrate to your v9.x deployment. (You may need more than one source server if the configuration files for different components are not shared; for example, one host may contain Moogfarmd configuration and another may contain LAM configuration.) On that server, back up the folders in the $MOOGSOFT_HOME directory which contain custom configuration files, including, at a minimum, the config and bots folders. For example, use this command to create a tarball of the config, bots, and saml folders:

    tar --ignore-failed-read --exclude=config/system.conf --exclude=config/servlets.conf -C $MOOGSOFT_HOME -cvzf config_bots_saml_for_migration.tgz config bots etc/saml
  3. Back up any Moogsoft-related custom files in the /etc/init.d folder; for example, a customized version of the Moogfarmd init script or new init scripts for custom LAMs. Init scripts that have not been customized do not need to be backed up.
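
    For example, assuming a customized moogfarmd init script and a custom LAM script named customlamd (both names are illustrative), the backup might look like:

    tar -C /etc/init.d -cvzf ~/initd_custom_for_migration.tgz moogfarmd customlamd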

  4. Back up the moog-data folder. If the files are not in the default location, examine the cache_root setting in the $MOOGSOFT_HOME/config/servlets.conf file to determine the location and replace it in the path below.

    tar -C /var/lib/moogsoft/moog-data -cvzf moog_data_for_migration.tgz .
  5. On each v9.x server, make a backup of the original v9.x config and bots folders. 

    tar -C $MOOGSOFT_HOME -cvzf ~/config_bots_v9_original.tgz config bots
  6. Copy the backup files from your v8.x server to each of the v9.x servers.
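
    For example, assuming the archives are in the current directory on the v8.x source server, you could push them to one v9.x server as follows (repeat for each v9.x server; the hostname placeholder is illustrative):

    scp config_bots_saml_for_migration.tgz moog_data_for_migration.tgz <v9_server>:~/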

  7. On each v9.x server, extract the v8.x backup files and adjust the permissions. For example, to extract the config, bots, moog_data, and saml folders, use the following commands:

    tar --exclude=config/system.conf --exclude=config/servlets.conf -xf config_bots_saml_for_migration.tgz -C $MOOGSOFT_HOME/
    tar -xf moog_data_for_migration.tgz -C /var/lib/moogsoft/moog-data
    chown -R moogsoft:moogsoft $MOOGSOFT_HOME/config/
    chown -R moogsoft:moogsoft $MOOGSOFT_HOME/bots/
    chown -R moogsoft:moogsoft /var/lib/moogsoft/moog-data

    The commands above do not copy the system.conf and servlets.conf files, since those two files have many machine-specific settings which would need to be updated after copying. If you prefer to copy your system.conf and servlets.conf files from v8.x to v9.x and then update them, you may want to also include the v9.x-specific updates discussed in the next section.

These updates are optional, but conforming migrated v8.x configuration files to v9.x format is recommended to simplify troubleshooting and future updates.

Update system.conf

Update the system configuration file to reflect changes in v9.x. These steps should ONLY be carried out if you copied your v8.x system.conf file to your v9.x deployment. If you used the suggested backup tar command in the previous section, which does not copy system.conf, skip these steps.

  1. In the "mooms" block, update automatic connection recovery settings.

    Add this block:

            # Automatic connection recovery attempts by default will continue at a fixed
            # time interval, 5 seconds by standard, until a new connection is successfully
            # opened. However recovery delay can be made dynamic by an out-of-the-box
            # backoff algorithm that calculates the next delay using default Fibonacci
            # sequence after each failed attempt.
            #
            # The default is 'true' if not specified. Note that dynamically computed delay
            # interval values that are too low (lower than 2 seconds), are not recommended.
            #
            "fixed_recovery_interval" : true,
  2. In the "mysql" block, add a configuration setting for Amazon Aurora. Note that Percona XtraDB is required for Moogsoft Onprem deployments. Amazon Aurora is only recommended for Moogsoft Hosted deployments.

    Replace this code:

                #    "trustStorePassword" : "moogsoft"
                #}
            },

    with this:

                #    "trustStorePassword" : "moogsoft"
                #}
    
                # Set aurora to true to use mariadb aurora driver. Defaults to false
                # "aurora"        : true,
    
                # Set jdbc_url to use that url as dbUrl.
                # If not set then default dbUrl used.
                # "jdbc_url"      : "jdbc:mariadb:sequential://"
            },
  3. In the "search" block, add a buffer limit for OpenSearch connections.

    Replace this code:

                ]
            },
    
        # Failover configuration parameters.

    with this:

                ],
                # Buffer Limit for OpenSearch connections. Value is in megabytes.
                # Default is 1024. Minimum value supported is 100.
                "buffer_limit" : 1024
            },
    
        # Failover configuration parameters.
  4. In the "hazelcast" block, update the Management Center configuration.

    Replace this code:

                        # Hazelcast's Management Center UI, if running.
                        "man_center"    :
                            {
                                "enabled"   : false,
                                "host"      : "localhost",
                                "port"      : 8091,
                                "endpoint"  : "hazelcast-mancenter"
                            },
    
                        # If set to true the stateful info from each process 

    with this:

                        # Hazelcast's Management Center UI, if running.
                        # A List of IPs from which Management Center operations are allowed.
                        # If not mentioned, by default all source connections are allowed.
                        "trusted_interfaces" : ["127.0.0.1"],
    
                        # If set to true the stateful info from each process  
  5. For v8.0.x migrations only: In the "processes" block, comment out the example processes. Do not make this change if you have customized the configuration in the "processes" block.

    Replace this code:

                # Subcomponent only applies to moog_farmd and indicates which of
                # of the moolets are currently running.
                "processes": [

    with this:

                # Subcomponent only applies to moog_farmd and indicates which of
                # of the moolets are currently running.
                "processes": []
    
    # 	        Example processes block:
    # 	        "processes": [

    Comment out the remainder of the example processes block as indicated.

    # 	        Example processes block:
    # 	        "processes": [
    # 	                        #
    #                             # Each process config has the following options available:
    
                 ---COMMENT OUT INTERVENING LINES---
    
    #                                 "group"           : "rest_client_lam",
    #                                 "instance"        : "",
    #                                 "service_name"    : "restclientlamd",
    #                                 "process_type"    : "LAM",
    #                                 "reserved"        : false
    #                             }
    # 			]

Update servlets.conf

Update the servlets configuration file to reflect changes in v9.x. This step should ONLY be carried out if you copied your v8.x servlets.conf file to your v9.x deployment. If you used the suggested backup tar command in the previous section, which does not copy servlets.conf, skip this step.

  1. In the "moogsvr" block, add a setting to sanitize client tool URLs.

    Replace this code:

            priority_db_connections: 25
    
            # Uncomment to overwrite the web host address
            # webhost: "https://localhost"
        },

    with this:

            priority_db_connections: 25,
    
            # Uncomment to overwrite the web host address
            # webhost: "https://localhost"
    
            # To avoid XSS vulnerability in Situation Alert Tools and Client Alert Tools
            # Set sanitize_client_tool_urls: false to disable sanitizing ClientTool Urls. Defaults to true
    
            sanitize_client_tool_urls: true
        },

Update security.conf

Update the security configuration file you copied from the v8.x system to reflect changes in v9.x.

  1. In the "global_settings" block, add a setting to enable or disable session audit logs (by default it should be disabled).

    Replace this code:

            #   "csrfProtection" : false
        # },

    with this:

            #   "csrfProtection" : false,
            #
            # Uncomment to enable session logs in session_audit_log table. Defaults to false
            #
            #   "enableSessionAuditLog" : true
        # },
  2. In the "Example DB realm" block, uncomment the DB realm type and password policy, and update the password policy to a more stringent standard.

    Replace this code:

        # "Example DB realm" : {
        #     "realmType": "DB",
        # },
        #
        # "Example LDAP realm" : {
    

    with this:

        "Example DB realm" : {
            "realmType": "DB",
    
            # This is to set password policy for DB Users, it is optional.
            # It is compulsory to add regex and validationMessage if passwordPolicy is configured
             "passwordPolicy" : {
                   "regex" : "^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#_.&()\\-[{}]:;',?/*~$^+=<>]).{8,}$",
                   "validationMessage" : "Password length should be at least 8 characters, and consist of upper and lower case letters, numbers, and special characters in any order"
             }
        }
    
        #, "Example LDAP realm" : {
  3. In the "Example LDAP realm" block, add a new overlap group search setting.

    Replace this code:

            # "encryptedSystemPassword": "<ENCRYPTED PASSWORD>",
            #
            # A filter (see below) will be used in conjunction with

    with this:

            # "encryptedSystemPassword": "<ENCRYPTED PASSWORD>",
    
            # If overlayGroupSearch is set to true, user groups will be identified
            # by the memberof overlay instead. This property should be used with the
            # groupSearchAttribute property.
            # "overlayGroupSearch": true,
            #
            # A filter (see below) will be used in conjunction with
  4. In the "Example LDAP realm" block, add a new group search attribute.

    Replace this code:

            # "memberAttribute": "member",
            #
            # Once the groups have been found using the above DN and filter

    with this:

            # "memberAttribute": "member",
            #
            # If the overlayGroupSearch property is true, groupSearchAttribute allows
            # control over which LDAP attribute is used to identify group information
            # "groupSearchAttribute" : "memberOf",
            #
            # Once the groups have been found using the above DN and filter
    
  5. For v8.0 migrations only: In the "my_saml_realm" block, add an authentication setting needed only when using an Azure IdP.

    Replace this code:

           #"realmType": "SAML2",
    
            #
            # Provide the location of the IdP's metadata file, it has to be an xml file.

    with this:

           #"realmType": "SAML2",
    
            #
            # Controls the 'forceAuth' setting. Defaults to true if commented out or missing
            # Needs to be set to false if used with an Azure IDP to prevent the user being asked
            # to sign in more than needed
            #
            # "forceAuth" : true,
            #
            # Provide the location of the IdP's metadata file, it has to be an xml file.

These steps involve downtime and should take place during a scheduled maintenance window.

Set up a dynamic RabbitMQ Shovel to forward events from your v8.x system to your v9.x system. While the shovel is active, Moogfarmd will be shut down on both systems, and data ingestion LAMs will be shut down on the v9.x system. Incoming events will be ingested on the v8.x system, forwarded to the v9.x system, and buffered in both systems but not processed.

After the shovel is turned off and services are restarted, receiving LAMs can ingest data in parallel on both systems. Polling LAMs will need to be stopped on v8.x before being started on v9.x.

  1. On one of your v9.x core servers, determine which Moogsoft components are currently running.

    ha_cntl -v
  2. If Moogfarmd is not running on your v9.x server, start it to create an events queue.

    service moogfarmd start
  3. Verify that the Events.HA queue exists in RabbitMQ. Fill in <vhost> based on the results of the list_vhosts command.

    rabbitmqctl list_vhosts
    rabbitmqctl -p <vhost> list_queues
  4. Stop Moogfarmd.

    service moogfarmd stop
  5. Stop the Moogfarmd process on all of your v8.x core servers.

    service moogfarmd stop
  6. Follow the instructions in the next section to install the shovel on one of your v8.x core servers, using these values:

    QUEUE_TO_SHOVEL    moog_farmd.moog_farmd.Events.HA
    RABBITMQ1          one of your v8.x RabbitMQ servers
    RABBITMQ2          one of your v9.x RabbitMQ servers

Create a RabbitMQ Shovel (https://www.rabbitmq.com/shovel-dynamic.html) from one RabbitMQ instance (referred to as RABBITMQ1 in this process) to a second RabbitMQ instance (referred to as RABBITMQ2). Change the RabbitMQ vhosts, usernames, passwords, and hostnames as appropriate. Check the guidance above for specific instructions on which servers to use; this varies depending on the use case.

In the following steps, where QUEUE_TO_SHOVEL is mentioned, check the guidance above for the value to use for this property.

  1. Confirm that the expected QUEUE_TO_SHOVEL is present in both RabbitMQ instances using the following command:

    rabbitmqctl list_queues -p <vhost> | grep QUEUE_TO_SHOVEL
  2. In the RABBITMQ1 deployment, run the following commands to enable the RabbitMQ Shovel plugin:

    rabbitmq-plugins enable rabbitmq_shovel
    rabbitmq-plugins enable rabbitmq_shovel_management

    The output of the above commands should be:

    The following plugins have been enabled:
      rabbitmq_shovel
    started 1 plugins.
    The following plugins have been enabled:
      rabbitmq_shovel_management
    started 1 plugins.
  3. Confirm that the RABBITMQ1 instance can communicate with the RABBITMQ2 instance by running this command from the RABBITMQ1 server. Replace the example 192.168.51.8 with the IP address or hostname of the RABBITMQ2 host:

    sleep 1 | telnet 192.168.51.8 5672

    The output of this command should contain:

    Connected to 192.168.51.8
  4. On the RABBITMQ1 server, run the following commands to create a Dynamic Shovel named 'my_shovel' from a RABBITMQ1 queue to the same queue on RABBITMQ2. Update the IP addresses, vhosts, and credentials as needed for your environment before running the final command:

    RABBITMQ1_CREDS="moogsoft:m00gs0ft"
    RABBITMQ1_VHOST="TEST_ZONE"
    RABBITMQ1_HOST="127.0.0.1"
    QUEUE="QUEUE_TO_SHOVEL"
    RABBITMQ2_CREDS="moogsoft:m00gs0ft"
    RABBITMQ2_VHOST="TEST_ZONE"
    RABBITMQ2_HOST="192.168.51.8"
    
    rabbitmqctl -p "${RABBITMQ1_VHOST}" set_parameter shovel my_shovel '{"src-protocol": "amqp091", "src-uri": "amqp://'${RABBITMQ1_CREDS}'@'${RABBITMQ1_HOST}'/'${RABBITMQ1_VHOST}'", "src-queue": "'${QUEUE}'", "dest-protocol": "amqp091", "dest-uri": "amqp://'${RABBITMQ2_CREDS}'@'${RABBITMQ2_HOST}'/'${RABBITMQ2_VHOST}'"}'
  5. Confirm that the Shovel was created successfully by logging in to the RabbitMQ admin Shovel user interface: http://<RABBITMQ1HostOrIP>:15672/#/shovels

    The "state" must show "running" on this page. If the Shovel is not running at this stage, events may be lost depending on the overall process being carried out.

  6. The count for the QUEUE_TO_SHOVEL queue in the RABBITMQ2 instance should now be increasing. You can check this using the queues interface: http://<RABBITMQ2HostOrIP>:15672/#/queues
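
    If you prefer the command line, a quick check of the message count might look like this (run on the RABBITMQ2 host, filling in <vhost> as before):

    rabbitmqctl -p <vhost> list_queues name messages | grep QUEUE_TO_SHOVEL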

This step should be performed during the maintenance window.

Block the v8.x user interface from accepting human or automated input during and after database export. UI-based integrations will continue to receive and queue events.

  1. Run these commands on each of your UI servers:

    sed -i -e 's/location \/moogsvr/location \/moogsvrOLD/' -e 's/location \/graze/location \/grazeOLD/' /etc/nginx/conf.d/moog-ssl.conf
    service nginx reload
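
    If you want to confirm that the edited configuration is syntactically valid (ideally before the reload), you can run:

    nginx -t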

These steps should be carried out during the maintenance window. They apply ONLY to v8.0 and v8.1 systems which have not been upgraded. The v8.2 database schema is already compatible with Moogsoft Onprem v9.x and does not need to be updated.

  1. Warning

    From this point onwards, if access to a functional v8 environment is still needed (for example during a recovery scenario), a database restore will need to be done using the files created during this initial step. Guidelines about how to restore a Percona database can be found here: https://docs.percona.com/percona-xtrabackup/2.4/backup_scenarios/full_backup.html#restoring-a-backup

    Back up your v8.0 or v8.1 database. This allows a restore to be performed if any problems are encountered during the upgrade. Run this command on one of your database servers:

    xtrabackup --backup --target-dir=/tmp/v8backup_pre82_$(date +%s) -u root
  2. Run the following entire code block on one of your database servers. Running this command will render the v8.x environment non-functional; however, this step is reversible if there is an issue during the upgrade to v9. Contact Moogsoft Support if you need to undo this SQL change in the v8 environment.

    $MOOGSOFT_HOME/bin/utils/moog_mysql_client <<-EOF
    #MOOG-17298
    RENAME TABLE \`groups\` TO user_groups;
    
    drop procedure create_and_fetch_thread_entry;
    drop procedure create_remote_user;
    drop procedure create_user_with_id;
    drop procedure set_password;
    drop procedure update_user;
    drop procedure reset_table_keys;
    drop view lock_waits;
    
    #MOOG-17299
    ALTER TABLE enrichments CHANGE COLUMN \`rank\` ranking INT NOT NULL;
    
    #MOOG-17300
    ALTER TABLE feedback CHANGE COLUMN \`window\` time_window BIGINT;
    SELECT properties->>"$.historic_database_name" INTO @hist_db_name FROM system_config WHERE config_type = 'Splitter';
    SET @sql_text = CONCAT('ALTER TABLE ', @hist_db_name, '.feedback CHANGE COLUMN \`window\` time_window BIGINT');
    PREPARE stmt FROM @sql_text;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    SET @sql_text := CONCAT('CREATE OR REPLACE VIEW historic_feedback AS SELECT * FROM ', @hist_db_name, '.feedback');
    PREPARE stmt FROM @sql_text;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    
    #MOOG-17303
    CREATE OR REPLACE
    SQL SECURITY DEFINER
    VIEW largest_tables
    AS
    SELECT CONCAT(table_schema, '.', table_name)                                          table_name,
           CONCAT(ROUND(table_rows / 1000000, 2), 'M')                                    table_rows,
           CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G')                    DATA,
           CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G')                   idx,
           CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
           ROUND(index_length / data_length, 2)                                           idxfrac
    FROM   information_schema.TABLES
    ORDER  BY data_length + index_length DESC
    LIMIT  10;
    EOF
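
    To spot-check that the changes were applied, you can confirm that the groups table was renamed (a minimal verification sketch using the same client):

    $MOOGSOFT_HOME/bin/utils/moog_mysql_client -e "SHOW TABLES LIKE 'user_groups';"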
    

These steps should be carried out during the maintenance window. The purpose is to ensure that the database schema is compatible with MySQL 8 by removing potentially problematic Stored Procedures in advance (these will be replaced later in the upgrade process).

  1. Run the following entire code block on one of your database servers. Running this command will render the v8.x environment non-functional; however, this step is reversible if there is an issue during the upgrade to v9. Contact Moogsoft Support if you need to undo this SQL change in the v8 environment.

    $MOOGSOFT_HOME/bin/utils/moog_mysql_client -e "$($MOOGSOFT_HOME/bin/utils/moog_mysql_client --skip-column-names -Be "SELECT group_concat('DROP PROCEDURE IF EXISTS ', r.routine_name, '; ' ORDER BY r.routine_name SEPARATOR '') AS DROP_STATEMENTS FROM information_schema.routines r WHERE r.routine_schema = database() AND r.routine_type = 'PROCEDURE';" 2>/dev/null)";
    
    $MOOGSOFT_HOME/bin/utils/moog_mysql_client -r -e "$($MOOGSOFT_HOME/bin/utils/moog_mysql_client --skip-column-names -r -Be "SELECT group_concat('DROP PROCEDURE IF EXISTS ', r.routine_name, '; ' ORDER BY r.routine_name SEPARATOR '') AS DROP_STATEMENTS FROM information_schema.routines r WHERE r.routine_schema = database() AND r.routine_type = 'PROCEDURE';" 2>/dev/null)";
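
    To confirm the stored procedures were removed, you can count what remains; the following query (a sketch against the default database; repeat with -r for the reference database) should return 0:

    $MOOGSOFT_HOME/bin/utils/moog_mysql_client -e "SELECT COUNT(*) FROM information_schema.routines WHERE routine_schema = database() AND routine_type = 'PROCEDURE';"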
    

These steps should be performed during the maintenance window.

Back up the v8.x database, copy it to one of your v9.x database servers, and restart the MySQL service using the migrated database. These steps assume your system architecture is configured for high availability with three database nodes. If you are running a single-server deployment, you can skip some of the steps at the end of this section.

  1. On one of your v8.x database servers, make a backup of the existing database for export and import into the v9 environment. This backup is different from the one made at the beginning of the upgrade, which protects the v8.x database state should there be any issues during the v9 upgrade. Verify that you have sufficient disk space in the target directory before running the backup command.

    xtrabackup --backup --target-dir=/tmp/v8backup -u root
  2. If you receive this error:

    Failed to connect to MySQL server: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2).

    Repeat the backup command with an additional parameter, changing the socket file location if it is different in your deployment e.g.:

    xtrabackup --backup --target-dir=/tmp/v8backup -u root --socket=~/moog_datastore/var/lib/mysql/mysql.sock 
  3. Verify that the backup command ended successfully, with no error messages. A successful run should end like this:

    ...xtrabackup: Transaction log of lsn (6302197911) to (6302197920) was copied.
    220707 12:38:18 completed OK!
  4. Run a "prepare" of the backup to ensure it is synchronized with the database:

    xtrabackup --prepare --target-dir=/tmp/v8backup -u root
  5. If you receive this error:

    Failed to connect to MySQL server: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2).

    Repeat the command with an additional parameter, changing the socket file location if it is different in your deployment e.g.:

    xtrabackup --prepare --target-dir=/tmp/v8backup -u root --socket=~/moog_datastore/var/lib/mysql/mysql.sock
  6. Verify that the command ended successfully. A successful run should end like this:

    ...
    InnoDB: FTS optimize thread exiting.
    InnoDB: Starting shutdown...
    InnoDB: Shutdown completed; log sequence number 6302198824
    220707 12:40:21 completed OK!
  7. Stop the MySQL service on ALL of your v9.x database servers, and remove the MySQL data directory from all the v9.x database nodes:

    service mysqld stop
    rm -rf /var/lib/mysql/*
  8. Run the following commands on ONE of your v9.x database servers. Copy the backup into the MySQL data directory, adjust the permissions, and configure the v9.x server as a Percona bootstrap node. Replace <v8hostname> with the hostname of the v8.x database server you used to make the backup.

    # Copy the files from one server to the next using either this rsync command or the scp one after it
    # rsync -avrP <v8hostname>:/tmp/v8backup/* /var/lib/mysql/
    scp -r <v8hostname>:/tmp/v8backup/* /var/lib/mysql/
    chown -R mysql:mysql /var/lib/mysql
    sed -i 's;wsrep_cluster_address.*;wsrep_cluster_address = gcomm://;' /etc/my.cnf
    sed -i 's/safe_to_bootstrap.*/safe_to_bootstrap : 1/g' /var/lib/mysql/grastate.dat 2>/dev/null
  9. Restart the chosen MySQL node in bootstrap mode on your v9.x database server:

    systemctl start mysql@bootstrap.service
  10. Update the privileges for the 'ermintrude' user:

    mysql -u root -e "GRANT SYSTEM_USER, SELECT, PROCESS ON *.* TO 'ermintrude'@'%', 'ermintrude'@'localhost';"
  11. Run the following command to update the database schema:

    $MOOGSOFT_HOME/bin/utils/moog_db_auto_upgrader -t 9.0.1 -u ermintrude
  12. Validate the upgrade.

    $MOOGSOFT_HOME/bin/utils/moog_install_validator.sh
    $MOOGSOFT_HOME/bin/utils/moog_db_validator.sh
    $MOOGSOFT_HOME/bin/utils/tomcat_install_validator.sh

    Contact Moogsoft Support if there are any errors.

  13. Unless you have a single-server deployment, start the MySQL service on each of your other database servers:

    service mysql start
  14. Wait for the SST (snapshot state transfer) process to complete. Verify that the database has successfully replicated by running this command on each of your other database servers:

    curl http://localhost:9198

    Once the process is complete, you should see this response from ALL of the nodes:

    Percona XtraDB Cluster Node is synced
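
    You can also check the replication state directly from MySQL on each node; the wsrep_local_state_comment status variable should report Synced:

    mysql -u root -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"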
  15. After the databases have synced, update the wsrep_cluster_address setting in the /etc/my.cnf file on the first database server by adding the IP addresses for each of your database servers. This will convert it from a Percona bootstrap node to a normal node.

    wsrep_cluster_address = gcomm://<IP_address1>,<IP_address2>,<IP_address3>
  16. Restart the MySQL service on the first database server to make the change take effect:

    service mysqld restart

These steps will complete the migration maintenance window and transfer all functions to your v9.x deployment.

Examine and adjust machine-specific data ingestion settings, restart data ingestion and data processing services, and update third-party applications to send data to your v9.x servers.

  1. Inspect the system.conf, servlets.conf, and other configuration files on each of your v9.x core, UI, and data ingestion servers. Ensure that the following items are configured appropriately:

    • MOOMS broker definition (RabbitMQ zone, host, and credentials)

    • MySQL host and credentials

    • OpenSearch nodes.host and credentials

    • Hazelcast hosts

    • HA configuration blocks in moog_farmd.conf, system.conf, servlets.conf and any running LAM configuration files

    Consult the Moogsoft Onprem installation documentation for more information about these settings.
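
    A quick way to find settings that still reference the old environment is to search the migrated configuration for v8.x hostnames (the hostname placeholder below is illustrative):

    grep -rn "<v8_hostname>" $MOOGSOFT_HOME/config $MOOGSOFT_HOME/bots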

  2. Restart the Apache Tomcat service on each of your v9.x UI servers:

    service apache-tomcat restart
  3. Restart the Moogfarmd service on each of your v9.x core servers:

    service moogfarmd restart

    When the Moogfarmd process starts up, it should consume and process the events in the RabbitMQ events queue that built up from the RabbitMQ Shovel during the migration.

  4. Enable and start data ingestion services. For example:

    systemctl enable restlamd
    service restlamd start
    systemctl enable socketlamd
    service socketlamd start
  5. Reconfigure third-party applications which have been sending events to your v8.x data ingestion servers to send them to your v9.x data ingestion servers instead.

  6. Shut down the user interface on each of your v8.x UI servers:

    service apache-tomcat stop
  7. Shut down data ingestion services on each of your v8.x data ingestion servers. For example:

    service restlamd stop
    service socketlamd stop
  8. Remove the RabbitMQ Shovel from your v8.x core server. Replace <vhost> with your RabbitMQ vhost in the command below:

    rabbitmqctl clear_parameter shovel "my_shovel" -p <vhost>
  9. Shut down the RabbitMQ service on each of your v8.x core servers.

    service rabbitmq-server stop
  10. Examine crontab entries on each of your v8.x core servers and recreate any Moogsoft-specific entries on the corresponding v9.x core servers. Possible examples include:

    events_analyser
    moog_archiver
    broker-watchdog
    process_keepalive

    Use the crontab -e command to view and edit the crontabs as needed.

  11. Once Moogfarmd has restarted fully, reindex OpenSearch on one of your v9.x core servers so Situations and alerts are searchable:

    $MOOGSOFT_HOME/bin/utils/moog_indexer -f -n