Migrate Broker or Bridge deployments to v9 from v8.x.x

Follow these steps to perform the required on-premise tasks for upgrading to Moogsoft Onprem v9.0.x from v8.0.x, v8.1.x, or v8.2.x in a Moogsoft Hosted deployment scenario. Upgrades from v6.x or v7.x are not supported.

For migrating a fully on-premise version of Moogsoft Onprem to v9.0.x, use the Upgrade to Moogsoft Onprem v9.x guide instead.

Note

This process WILL involve some downtime because in-place upgrades are not possible: new major database and operating system versions are required. However, no event loss should occur during this period, as the v9 RabbitMQ instance buffers events until it is safe for the v9 Moogfarmd process to start up.

Moogsoft Support will inform you once your Moogsoft Hosted instance has been migrated to v9.0.x.

For customers using Broker or Bridge servers, the downtime means only a brief gap in the flow of events to the Moogsoft Hosted instance; the instructions in this guide ensure that no events are lost in the process.

Migrate a Broker-based v8.x.x deployment to v9.0.x

This section provides instructions for migrating the local/on-premise portion of a Moogsoft Hosted deployment based on the Moogsoft Broker to v9.0.x. Example: a provisioned local Moogsoft Broker manages selected local polling-based UI integrations that send events to Moogsoft Hosted.

  1. Provision a new RHEL8 (8.6 minimum) server on which the v9.0.x Moogsoft Broker will later be installed.
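
    A quick way to confirm that the provisioned server meets the minimum OS requirement (this check is not part of the original procedure and assumes the standard RHEL release files are present):

    # The reported version should be 8.6 or higher
    cat /etc/redhat-release
    grep '^VERSION_ID' /etc/os-release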

  2. Stop any running brokers and integration-related Java processes on the v8.x.x server:

    For both RPM-based and tarball-based deployments:

    # Stop any running Broker java processes
    kill -9 $(ps -ef | grep java | grep -v grep | grep broker | awk '{print $2}') 2>/dev/null
    # Stop any running integration (LAM) java processes
    kill -9 $(ps -ef | grep java | grep -v grep | grep lam | awk '{print $2}') 2>/dev/null
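
    As an optional sanity check, confirm that no Broker or LAM java processes remain before continuing; this simply reuses the grep patterns from the kill commands above:

    # Should produce no output once the processes have stopped
    ps -ef | grep java | grep -v grep | grep -E 'broker|lam'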
  3. After Moogsoft Support confirms that the Moogsoft Hosted environment has been successfully upgraded to v9.0.x:

    1. Install and start the v9 broker on the new RHEL8 server as per the usual instructions: Brokers.

    2. In the v9.0.x Integrations UI, update the relevant polling Integrations to run against the v9.0.x Brokers instead of the old v8.x.x Broker.

    3. The v9 Brokers should restart the relevant integrations, and polling should resume from the point at which it stopped on the v8 side.

  4. When the new Broker and LAMs are running as expected in the v9.0.x environment, you can fully shut down the v8.x.x environment.
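
    Before decommissioning the v8.x.x environment, one optional check (reusing the same process patterns as earlier in this guide) is to confirm that the Broker and any LAM java processes are running on the v9 host:

    # Run on the v9 server; each command should list a java process for the components in use
    ps -ef | grep java | grep -v grep | grep broker
    ps -ef | grep java | grep -v grep | grep lam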

Migrate a Bridge-based v8.x.x deployment to v9.0.x

This section provides instructions for migrating the local/on-premise portion of a Moogsoft Hosted deployment based on the Moogsoft Bridge to v9.0.x. Example: Moogsoft LAMs are connected to a local Moogsoft Bridge, and the Bridge forwards the events to Moogsoft Hosted.

  1. Install v9.0.x.

    Moogsoft Onprem must be installed on new RHEL8 servers (minimum RHEL v8.6) in the same deployment configuration (single server, HA, etc.) as the v8.x.x deployment.

    For installation instructions, see Install Moogsoft Onprem.

  2. Copy the required configuration files from the v8.x.x server to the v9.0.x server.

    The following folders must be backed up and copied to the v9 servers:

    • $MOOGSOFT_HOME/config

    • $MOOGSOFT_HOME/bots

    • /etc/init.d (if any custom service scripts are in use or the OOTB ones were customized)

    The following command creates a tarball of the config and bots folders (including any SAML XML files stored within them), excluding config/system.conf:

    tar --ignore-failed-read --exclude=config/system.conf -C $MOOGSOFT_HOME -cvzf config_bots_for_migration.tgz config bots

    The tgz file must now be copied onto the v9 servers.
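
    For example, the file can be copied with scp from the directory where it was created; the user and hostname below are placeholders, not values from your environment:

    scp config_bots_for_migration.tgz <user>@<v9-server>:~/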

    On the v9 server, make a backup of the config and bots folders using this command:

    tar -C $MOOGSOFT_HOME -cvzf ~/config_bots_v9_original.tgz config bots

    The v8 tgz files can now be extracted, and the permissions fixed:

    • For RPM-based deployments:

      tar --exclude=config/system.conf -xf config_bots_for_migration.tgz -C $MOOGSOFT_HOME/
      chown -R moogsoft:moogsoft $MOOGSOFT_HOME/config/
      chown -R moogsoft:moogsoft $MOOGSOFT_HOME/bots/
      
    • For tarball-based deployments:

      tar --exclude=config/system.conf -xf config_bots_for_migration.tgz -C $MOOGSOFT_HOME/
      chown -R $(whoami):$(whoami) $MOOGSOFT_HOME/config
      chown -R $(whoami):$(whoami) $MOOGSOFT_HOME/bots
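
    Optionally, verify that the migrated folders are in place and owned by the expected user (moogsoft for RPM-based deployments, the installing user for tarball-based deployments):

    ls -ld $MOOGSOFT_HOME/config $MOOGSOFT_HOME/bots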
      
  3. Moogsoft will create a new v9.0.x Hosted environment and migrate your hosted data and configuration. At the start of this process, Moogsoft will ask you to create a RabbitMQ Shovel and to stop the local v8 Moogsoft Bridge process (and the v9 Bridge process, if it has been started). This allows events to buffer in the local RabbitMQ instances while Moogsoft upgrades the hosted instance to v9.

  4. As per Moogsoft Support's instructions, create the RabbitMQ Shovel. This will forward events sent to the local v8 Bridge server to the local v9 Bridge server until the Hosted upgrade is complete.

    For the following steps, these values must be used:

    • QUEUE_TO_SHOVEL value needs to be: persistent-messages.HA

    • RABBITMQ1 value needs to be the v8.x.x RabbitMQ server

    • RABBITMQ2 value needs to be the v9.0.x RabbitMQ server

    Create a RabbitMQ Shovel (https://www.rabbitmq.com/shovel-dynamic.html) from one RabbitMQ instance (referenced as RABBITMQ1 in this process) to a second RabbitMQ instance (referenced as RABBITMQ2 in this process). Change the RabbitMQ vhosts, usernames, passwords, hostnames, and so on as appropriate. Check the guidance above for which servers to use in each role; this varies depending on the use case.

    Where QUEUE_TO_SHOVEL is mentioned in the following steps, use the value given in the guidance above.

    1. Confirm that the expected QUEUE_TO_SHOVEL is present in both RabbitMQ instances using the following command:

      rabbitmqctl list_queues -p <vhost> | grep QUEUE_TO_SHOVEL
    2. In the RABBITMQ1 deployment, run the following commands to enable the RabbitMQ Shovel plugin:

      rabbitmq-plugins enable rabbitmq_shovel
      rabbitmq-plugins enable rabbitmq_shovel_management

      The output of the above commands should be:

      The following plugins have been enabled:
        rabbitmq_shovel
      started 1 plugins.
      The following plugins have been enabled:
        rabbitmq_shovel_management
      started 1 plugins.
    3. Confirm that the RABBITMQ1 instance can communicate with the RABBITMQ2 instance by running this command from the RABBITMQ1 server. Replace the example 192.168.51.8 with the IP address or hostname of the RABBITMQ2 host:

      sleep 1 | telnet 192.168.51.8 5672

      The output of this command should contain:

      Connected to 192.168.51.8
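
      If telnet is not installed on the RABBITMQ1 host, an alternative (not part of the original procedure) is a bash built-in TCP check against the same host and port:

      # Prints "open" if RABBITMQ2 is reachable on the AMQP port within 3 seconds
      timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.51.8/5672' && echo open || echo closed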
    4. On the RABBITMQ1 server, run the following commands to create a Dynamic Shovel named 'my_shovel' from a RABBITMQ1 queue to the same queue on RABBITMQ2. Update the IP addresses, vhosts, and credentials as needed for your environment before running the final command:

      RABBITMQ1_CREDS="moogsoft:m00gs0ft"
      RABBITMQ1_VHOST="TEST_ZONE"
      RABBITMQ1_HOST="127.0.0.1"
      QUEUE="QUEUE_TO_SHOVEL"
      RABBITMQ2_CREDS="moogsoft:m00gs0ft"
      RABBITMQ2_VHOST="TEST_ZONE"
      RABBITMQ2_HOST="192.168.51.8"
      
      rabbitmqctl -p "${RABBITMQ1_VHOST}" set_parameter shovel my_shovel '{"src-protocol": "amqp091", "src-uri": "amqp://'${RABBITMQ1_CREDS}'@'${RABBITMQ1_HOST}'/'${RABBITMQ1_VHOST}'", "src-queue": "'${QUEUE}'", "dest-protocol": "amqp091", "dest-uri": "amqp://'${RABBITMQ2_CREDS}'@'${RABBITMQ2_HOST}'/'${RABBITMQ2_VHOST}'"}'
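
      You can also confirm from the command line that the Shovel parameter was stored; the output should include a shovel entry named my_shovel:

      rabbitmqctl list_parameters -p "${RABBITMQ1_VHOST}"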
    5. Confirm that the Shovel was created successfully by logging in to the RabbitMQ admin Shovel user interface: http://<RABBITMQ1HostOrIP>:15672/#/shovels

      The "state" must show "running" on this page. If the Shovel is not running at this stage, events may be lost depending on the overall process being carried out.

    6. The count for the QUEUE_TO_SHOVEL queue in the RABBITMQ2 instance should now be increasing. You can check this using the queues interface: http://<RABBITMQ2HostOrIP>:15672/#/queues
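
      If you prefer the command line to the management UI, an equivalent check can be run on the RABBITMQ2 server; substitute your vhost and the QUEUE_TO_SHOVEL value:

      # The messages count should increase while the Shovel is running
      rabbitmqctl list_queues -p <vhost> name messages | grep QUEUE_TO_SHOVEL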

  5. As per Moogsoft Support's instructions, stop the v8 Bridge process (and the v9 Bridge process, if started). Events will continue to feed into the local v8 server but will now be forwarded to the local v9 server's RabbitMQ process for storage during the upgrade maintenance window.

    # Comment out the bridge watchdog cron entry so it cannot restart the Bridge
    crontab -l | sed '/moogsoft_bridge_watchdog/s/^/#/' | crontab -
    # Stop the running Bridge java process
    kill -9 $(ps -ef | grep java | grep -v grep | grep bridge | awk '{print $2}') 2>/dev/null
  6. Events will now queue in the v9.0.x RabbitMQ instance, because the Shovel forwards them there from the v8.x.x RabbitMQ instance.

  7. Respond to Moogsoft Support and communicate that the Shovel has been created and the Bridge(s) have been stopped.

  8. Moogsoft Support will contact you once the hosted v9 upgrade is complete.

    Caution

    If and only if Moogsoft Support reports an unrecoverable problem with the Hosted v8.x.x to v9.0.x migration process, Moogsoft Support may need to revert to the v8.x.x Hosted environment until the issue can be investigated and resolved. In this case, follow the very last section of this document: Roll-back Scenario

  9. At this point the local v9 Bridge process can be configured and started. Start the Moogsoft Bridge on your v9.0.x server:

    For both RPM-based and tarball-based deployments:

    $MOOGSOFT_HOME/bin/utils/moog_init_lams.sh --bridge
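
    To confirm that the Bridge has started, the same process check used elsewhere in this guide can be reused:

    # Should list the running bridge java process
    ps -ef | grep java | grep -v grep | grep bridge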
  10. Start the required Moogsoft LAMs on the v9 server.

    Example for RPM-based deployments:

    service restlamd start

    Example for Tarball-based deployments:

    $MOOGSOFT_HOME/bin/utils/process_cntl rest_lam start
  11. Update all Event Feeds to send events to your v9.0.x LAMs instead of the v8.x.x LAMs.

  12. Re-create any crontab entries on the v9.0.x server(s) that were present on the old v8.x.x servers. Possible entries include:

    • events_analyser

    • moog_archiver

    • broker-watchdog

    • process_keepalive

    Run the following command to enter a 'vi-like' editor to update the crontab as needed:

    crontab -e
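
    To make this easier, you can dump the old crontab on the v8.x.x server and copy the file across for reference before editing the v9 crontab; the file path below is just an example:

    # On the v8.x.x server
    crontab -l > /tmp/v8_crontab_reference.txt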
  13. Once the new v9 Bridge, v9 LAMs, and UI-based Integrations are running as expected in the v9.0.x environment, you can fully shut down the local v8.x.x environment.

Roll-back Scenario

If and only if Moogsoft Support reports an unrecoverable problem with the Hosted v8.x.x to v9.0.x migration/upgrade process, Moogsoft Support may need to revert to the v8.x.x Hosted environment until the issue can be investigated and resolved. In that case, the events that have buffered in the local v9 server need to be sent 'back' to the v8 server.

  1. Remove the v8 Shovel using this command on the local v8 RabbitMQ server:

    rabbitmqctl clear_parameter shovel "my_shovel" -p TEST_ZONE
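
    Optionally, confirm that the Shovel parameter has been removed; my_shovel should no longer appear in the output:

    rabbitmqctl list_parameters -p TEST_ZONE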
  2. Create a new Shovel in the opposite direction, using the commands in the section above as a template. This time, swap the parameters as follows:

    • QUEUE_TO_SHOVEL value needs to be: persistent-messages.HA

    • RABBITMQ1 value needs to be the v9.0.x RabbitMQ server

    • RABBITMQ2 value needs to be the v8.x.x RabbitMQ server

    This will send any queued events back across to the local v8 Bridge server.

  3. Stop the v9 Bridge process and start the v8 Bridge process.
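
    A minimal sketch of this step, assembled from the commands used earlier in this guide (adjust for tarball-based deployments and for your own watchdog crontab entries as needed):

    # On the v9 server: stop the v9 Bridge java process
    kill -9 $(ps -ef | grep java | grep -v grep | grep bridge | awk '{print $2}') 2>/dev/null

    # On the v8 server: re-enable the bridge watchdog cron entry (reversing the earlier sed) and start the Bridge
    crontab -l | sed '/moogsoft_bridge_watchdog/s/^#//' | crontab -
    $MOOGSOFT_HOME/bin/utils/moog_init_lams.sh --bridge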