Moogsoft Docs

HA - Reference Architectures

Introduction

Note

This document describes various practical deployment scenarios. Readers are encouraged to first refer to the High Availability documentation to understand HA fundamentals.

The reference architectures described here are examples only. Internal expertise within your organization must validate any deployment architecture against HA goals, organizational standards, and system configuration limitations before deployment. The following HA reference architectures are included:

  • Two Server Active/Passive

  • Two Server Active/Active

  • Four Server Active/Passive

  • Four Server Active/Active

  • Eight Server Active/Active Fully Distributed

Production implementations can use additional servers and combinations to address scaling and performance. These reference architectures focus on HA configuration and failover scenarios rather than on scaling and performance, and are intended to complement the deployment scenarios section.

Note

  • The reference architectures listed are based only on hazelcast as the mechanism for state persistence in an HA setup. Previously, both memcached and hazelcast mechanisms were available, but memcached has been removed as of version 5.1.3.

  • Currently, failover of Moogsoft AIOps components is triggered manually (there is no automatic failover) using the ha_cntl command line utility.

Considerations for Single and Multi-tiered Architecture

Single-tier

Advantages:

  • maintenance

  • budget considerations

  • zero network latency between components

  • simple upgrade

Disadvantages:

  • single point of failure

  • CPU resource

  • memory resource

  • security - single point of access

Multi-tier

Advantages:

  • db has dedicated machine - higher performance

  • greater performance - machine dependent

  • upgrade a node at a time

  • UI Clients performance increased

  • LAMs performance increased

  • high availability

Disadvantages:

  • possible network latency

  • time consuming upgrade

  • maintenance

  • farmd - no horizontal processing

  • firewalls and security

HA Feature Overview

In Moogsoft AIOps, two or more servers can be clustered to provide redundancy. Clustered Moogsoft AIOps processes can be set to active or passive states, and processing can be moved from one server to another via the ha_cntl command. ha_cntl can move processing for an entire Moogsoft AIOps cluster, a process group, or a single process from one server to another, and can also activate additional processes (tomcat, LAMs) on one or more servers. Only one moogfarmd process can be active at a time because moogfarmd runs the Sigaliser process, which creates Situations. The Sigaliser in-memory data is synced between moogfarmd processes with hazelcast. When switching over, the primary processes in the activated cluster become active and the corresponding processes in the secondary cluster become inactive/passive.
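Processing can therefore be moved at cluster, group, or instance granularity. A minimal sketch using the ha_cntl options documented at the end of this page (the cluster, group, and instance names are illustrative):

ha_cntl -v                                # view the current active/passive status
ha_cntl -a DC2                            # activate all groups within cluster DC2
ha_cntl -a DC2.moog_farmd                 # activate only the moog_farmd group in DC2
ha_cntl -a DC2.moog_farmd.moog_farmd-1    # activate a single instance
ha_cntl -d DC1.moog_farmd.moog_farmd-1    # deactivate a single instance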

Common Elements

User connections

run through a Server Load Balancer (SLB) that routes them to the active tomcat servlets

Tomcat

connects to MySQL during initial load, then listens to the MooMS (RabbitMQ) bus for Alert and Situation updates. In Moogsoft AIOps, more than one MySQL server and more than one RabbitMQ broker can be specified

MySQL

is the main store/DB for Alert information. More than one MySQL server can be configured in the system.conf file, allowing Moogsoft AIOps processes to seek out the active MySQL host if the first one is down (a short excerpt follows this list)

moogfarmd

runs the software processing layer, i.e. the moolet processes (AlertBuilder, Alert Rules Engine, Template Matcher, Sigalisers, Situation Manager, Notifier, and other user moolets). The active moogfarmd synchronizes the passive moogfarmd via hazelcast. This allows a passive moogfarmd to become active and take up where the formerly active moogfarmd stopped processing. Most moolets have extensions called moobots, which are JavaScript processing rules.

RabbitMQ

is a clustered messaging bus. All process communication about events, Alerts, and Situations uses this bus. Tomcat, moogfarmd, and the LAMs subscribe and publish on the bus. The bus also carries the messages used to archive events, Alerts, and Situations in the MySQL database

Link Access Modules (LAM)

process raw feeds into events and publish them on the RabbitMQ bus. There are extensions on the LAM code, similar to moobots, called lambots. Like moobots, these are also written in JavaScript

Events entering the system are routed to the active LAMs via the Server Load Balancer (SLB)
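A minimal excerpt of the MySQL failover configuration mentioned above, in the same system.conf format used in the sample configurations later in this page (hostnames are illustrative):

"mysql" :
{
    "host"     : "server_a",
    "database" : "moogdb",
    "port"     : 3306,

    # Additional hosts tried in order if the first is down:
    "failover_connections" :
    [
        {
            "host" : "server_b",
            "port" : 3306
        }
    ]
}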

Database

A comprehensive discussion of DB replication techniques and reference architectures is out of scope for this document. Most enterprises have a dedicated team that handles DB administration using their standard methods and procedures (clustering etc.). It is also recommended to use a MySQL Cluster in an HA environment to better handle replication, failover, and split-brain scenarios

It is important to understand the basic differences so that you can make an informed choice about the best setup for your specific environment. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database. In terms of terminology:

  • Master: where all the changes happen. All database updates occur here, from adding, updating, or deleting table records to creating functions, stored procedures, or making table changes

  • Slave/Replica: receives a copy of the changes applied at the master server. This happens very quickly so that the slave stays in sync with the master

Database Replication

There are three common types of replication, with pros/cons for each listed below:

  • Master-Slave Replication

  • Master-Master Replication

  • MySQL Cluster

Warning

The examples below are for illustrative purposes only. Please consult your Moogsoft technical contact for advice on the best strategy to use in your environment

Master-Slave Replication

One server acts as the master database and all other server(s) act as slaves. The application can write only to the master node.

Pros:

  • Applications can read from the slave(s) without impacting the master

  • Backups of the entire database have relatively little impact on the master

  • Slaves can be taken offline and synced back to the master without any downtime

Cons:

  • In the event of a failure, a slave has to be promoted to master to take its place. There is no automatic failover

  • Downtime and possible loss of data when a master fails

  • All writes have to be made to the master in a master-slave design

  • Each additional slave adds some load to the master, since the binary log has to be read and data copied to each slave

  • The application might have to be restarted

Master-Master Replication

In a master-master configuration each master is configured as a slave to the other. Writes and reads can occur on both servers.

Pros:

  • Applications can read from both masters

  • Distributes write load across both master nodes

  • Simple, automatic and quick failover

Cons:

  • Loosely consistent

  • Not as simple as master-slave to configure and deploy
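A quick way to verify that each master is both serving as a master and replicating from the other is to run the standard MySQL status commands on both servers (Slave_IO_Running and Slave_SQL_Running should both report Yes):

mysql> SHOW SLAVE STATUS\G
mysql> SHOW MASTER STATUS;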

RabbitMQ (MooMS)

Refer to the standard RabbitMQ Clustering Guide documentation for the relevant details as needed. The following steps show how to quickly set up clustering between two servers. Assume that we are creating the cluster between two Moogsoft AIOps servers, Moog 1 and Moog 2.

RabbitMQ Cluster Setup

Moog 1

  • Check the status via:

service rabbitmq-server status
  • If running, leave rabbitmq-server running on Moog 1

Moog 2

  • Stop the rabbitmq-server via:

service rabbitmq-server stop   
  • Transfer the Erlang cookie located under /var/lib/rabbitmq from Moog 1 to Moog 2

Note

  • For two nodes to be able to communicate, they must have the same shared secret, called the Erlang cookie. The cookie is just a string of alphanumeric characters. Every cluster node must have the same cookie

  • The Erlang cookie output will differ from the example below, as the cookie is generated automatically when RabbitMQ is started for the first time

  • Verify that the ownership of Erlang cookie is correct (owned by rabbitmq:rabbitmq)

# ls -al /var/lib/rabbitmq/
total 16
drwxr-xr-x   3 rabbitmq rabbitmq 4096 Jun 22 22:08 .
drwxr-xr-x. 20 root     root     4096 Jun 22 09:45 ..
-r--------   1 rabbitmq rabbitmq   20 Jun 22 22:21 .erlang.cookie
drwxr-x---   4 rabbitmq rabbitmq 4096 Jun 22 22:09 mnesia

# cat /var/lib/rabbitmq/.erlang.cookie
WRDEKOWQWLRQQDTXIIFT
  • Compare the output of the cookie with the one on Moog 1 to ensure that they match

To create the cluster, run the following commands:

Moog 2

# service rabbitmq-server start
Starting rabbitmq-server: SUCCESS
rabbitmq-server.

# rabbitmqctl stop_app
Stopping node rabbit@moog2 ...

# rabbitmqctl join_cluster rabbit@moog1
Clustering node rabbit@moog2 with rabbit@moog1 ...

# rabbitmqctl start_app
Starting node rabbit@moog2 ...

# rabbitmqctl cluster_status
Cluster status of node rabbit@moog2 ...
[{nodes,[{disc,[rabbit@moog1,rabbit@moog2]}]},
 {running_nodes,[rabbit@moog1,rabbit@moog2]},
 {cluster_name,<<"rabbit@moog1.local">>},
 {partitions,[]}]

Moog 1

  • Verify that the cluster is set up properly:

[root@moog1 moogsoft]# rabbitmqctl cluster_status
Cluster status of node rabbit@moog1 ...
[{nodes,[{disc,[rabbit@moog1,rabbit@moog2]}]},
 {running_nodes,[rabbit@moog1]},
 {cluster_name,<<"rabbit@moog1.local">>},
 {partitions,[]}]
[root@moog1 moogsoft]#
Mirrored Queues
  • Set the queue mirroring policy to mirror all HA queues (names ending in .HA, per the pattern below) to all nodes. From either server (Moog 1 or Moog 2):

# rabbitmqctl set_policy -p <zone> ha-all ".+\.HA" '{"ha-mode":"all"}'
Setting policy "ha-all" for pattern ".+\\.HA" to "{\"ha-mode\":\"all\"}" with priority "0" …
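To confirm the policy was applied, the policies for the zone's vhost can be listed with the standard rabbitmqctl command:

# rabbitmqctl list_policies -p <zone>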
  • On both Moog 1 and Moog 2, edit the $MOOGSOFT_HOME/config/system.conf file and add both RabbitMQ servers to the brokers object:

"brokers" :
[
    {
        "host"     : "Moog 1",
        "port"     : 5672
    },
    {
        "host"     : "Moog 2",
        "port"     : 5672
    }
],
Server Load Balancing

Note

SLB configuration steps are not listed here. However, there are some example configuration steps based on HAProxy in the example deployment scenarios that can be referenced as needed. It is assumed that most enterprises have a dedicated team that handles SLB administration using their standard methods and procedures

For LAMs, configure the health check based on the type of LAM, and on whether any LAMs are expected to run passive during normal operation.
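For example, a passive REST LAM rejects POST requests with HTTP 503 (see the Failover Scenarios later in this page), so an HTTP health check can steer traffic to the active instance. A minimal HAProxy sketch under that assumption; the port, hostnames, and probe request are illustrative and should be adjusted to match how your LAM actually responds:

frontend rest_lam_in
    mode http
    bind *:8888
    default_backend rest_lams

backend rest_lams
    mode http
    # Mark an instance down when its health probe fails
    # (e.g. a passive REST LAM answering 503)
    option httpchk GET /
    server moog1 server_a:8888 check
    server moog2 server_b:8888 check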

Two Server Active/Passive
[Architecture diagram: Two Server Active/Passive]

Server A

  • Tomcat (active)

  • MySQL (primary master)

    • MySQL Replication slave to Server B

  • moogfarmd (active)

    • Synced with moogfarmd on Server B via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server B, with mirrored queues

  • LAMs (active)

  • Elasticsearch (active)

Server B

  • Tomcat (passive)

  • MySQL (secondary master)

    • MySQL Replication slave to Server A (ready for failover)

  • moogfarmd (passive)

    • Synced with moogfarmd on Server A via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server A, with mirrored queues

  • LAMs (passive)

  • Elasticsearch (active) - will use MySQL on Server B (see replication config)

Sample Configuration

Cluster configuration template:

Server A

  • Zone: zone_a

  • Hostname: server_a

  • HA Cluster name: DC1

  • Cluster.Process Group.Instance:

    • moogfarmd: DC1.moog_farmd.moog_farmd-1

    • Tomcat: DC1.UI.servlets

    • Sample LAM: DC1.sample_lam.sample_lam-1

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_a
    mooms.message_persistence: true
    mysql.host: server_a
    elasticsearch.host: server_a
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: DC1 & DC2 instances
    ha.cluster_name: DC1

Server B

  • Zone: zone_a

  • Hostname: server_b

  • HA Cluster name: DC2

  • Cluster.Process Group.Instance:

    • moogfarmd: DC2.moog_farmd.moog_farmd-1

    • Tomcat: DC2.UI.servlets

    • Sample LAM: DC2.sample_lam.sample_lam-1

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_b
    mooms.message_persistence: true
    mysql.host: server_b
    elasticsearch.host: server_b
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: DC1 & DC2 instances
    ha.cluster_name: DC2

MySQL /etc/my.cnf (custom replication config)

Server A:

# Replication
server-id=1
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

Server B:

# Replication
server-id=2
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

RabbitMQ

rabbitmqctl set_policy -p zone_a ha-all ".+\.HA" '{"ha-mode":"all"}'
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@server_a
rabbitmqctl start_app
Failover Scenarios
Server A Failure
  • In the event Server A fails, ha_cntl is invoked to make the moogfarmd, tomcat, and LAM components active on Server B. Components on Server B will use Server B’s MySQL, RabbitMQ, and Elasticsearch. The SLB will take Server A’s components (LAM, httpd, tomcat) out of the rotation and put Server B’s components into the rotation

  • Use ha_cntl to make Server B’s moogfarmd, tomcat, and LAM components active: ha_cntl -a DC2

Post-failover Steps
  • When restoring Server A, synchronize Server A’s MySQL with Server B’s, and configure Server A as the Secondary in the Master-Master setup (sketched below)

  • Server A can then be restored to Primary by using ha_cntl to switch processing back to Server A
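A minimal sketch of re-pointing the restored Server A MySQL as a slave of Server B, reusing the replication user and CHANGE MASTER pattern from the MySQL Bi-Directional Replication Configuration section at the end of this page (the log file name and position are illustrative and must be taken from SHOW MASTER STATUS on Server B):

-- On Server A, after loading a dump taken from Server B:
mysql> CHANGE MASTER TO MASTER_HOST='server_b', MASTER_USER='repl', MASTER_PASSWORD='m00g', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=6144;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G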

Two Server Active/Active
[Architecture diagram: Two Server Active/Active]

Server A

  • Tomcat (active)

  • MySQL (primary master)

    • MySQL Replication slave to Server B

  • moogfarmd (active)

    • Synced with moogfarmd on Server B via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server B

  • LAMs (active)

  • Elasticsearch (active)

Server B

  • Tomcat (active)

  • MySQL (secondary master)

    • MySQL Replication slave to Server A (ready for failover)

  • moogfarmd (passive)

    • Synced with moogfarmd on Server A via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server A

  • LAMs (active)

  • Elasticsearch (active) - will use MySQL on Server B (see replication config)

Sample Configuration

Cluster configuration template:

Server A

  • Zone: zone_a

  • Hostname: server_a

  • HA Cluster name: DC

  • Cluster.Process Group.Instance:

    • moogfarmd: DC.moog_farmd.moog_farmd-1

    • Tomcat: DC.UI1.servlets (see note below)

    • REST LAM: DC.rest_lam.rest_lam-1

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_a,server_b
    mooms.message_persistence: true
    mysql.host: server_a
    mysql.failover_connections: server_b
    elasticsearch.host: localhost
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: moog_farmd-1,servlets-1,rest_lam-1,etc
    ha.cluster_name: DC

Server B

  • Zone: zone_a

  • Hostname: server_b

  • HA Cluster name: DC

  • Cluster.Process Group.Instance:

    • moogfarmd: DC.moog_farmd.moog_farmd-2

    • Tomcat: DC.UI2.servlets (see note below)

    • REST LAM: DC.rest_lam.rest_lam-2

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_a,server_b
    mooms.message_persistence: true
    mysql.host: server_a
    mysql.failover_connections: server_b
    elasticsearch.host: localhost
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: moog_farmd-2,servlets-2,rest_lam-2,etc
    ha.cluster_name: DC

MySQL /etc/my.cnf (custom replication config)

Server A:

# Replication
server-id=1
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

Server B:

# Replication
server-id=2
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

RabbitMQ

rabbitmqctl set_policy -p zone_a ha-all ".+\.HA" '{"ha-mode":"all"}'
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@server_a
rabbitmqctl start_app

Note: As this is an active/active setup and message_persistence is set to true, the UI servlets must be run in separately named process groups.

Failover Scenarios
Server A Failure
  • In the event Server A fails, ha_cntl can be invoked to make moogfarmd active on Server B. Components using MySQL will fail over to Server B’s MySQL, which will be considered Primary going forward. Components using Server A’s RabbitMQ will switch over to Server B’s RabbitMQ. The SLB will take Server A’s components (LAM, httpd, tomcat) out of the rotation

  • Use ha_cntl to make Server B’s moogfarmd active: ha_cntl -a DC.moog_farmd.moog_farmd-2

Post-failover Steps
  • Establish Server B’s MySQL as Primary: on each server, change mysql.host in system.conf to Server B and mysql.failover_connections to Server A, then restart components (moogfarmd can be kept running during these changes by switching which instance is active with ha_cntl, restarting each moogfarmd only while it is passive; see the sketch after this list)

  • When restoring Server A, synchronize Server A’s MySQL with Server B’s, and configure Server A as the Secondary in the Master-Master setup

  • Server A can be restored to Primary by repeating the above steps, starting with shutting down Server B’s MySQL and letting components fail over to the Secondary on Server A (then change system.conf, restart components, etc.)
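A sketch of the rolling moogfarmd restart referred to above, using the instance names from this example (the moogfarmd service name is an assumption; use whatever service name your installation registers):

ha_cntl -a DC.moog_farmd.moog_farmd-2    # instance 2 becomes active; instance 1 on Server A goes passive
service moogfarmd restart                # on Server A: restart the now-passive instance 1
ha_cntl -a DC.moog_farmd.moog_farmd-1    # switch activity back to instance 1
service moogfarmd restart                # on Server B: restart the now-passive instance 2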

Four Server Active/Passive
[Architecture diagram: Four Server Active/Passive]

Server A

  • Tomcat (active)

  • moogfarmd (active)

    • Synced with moogfarmd on Server B via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server B, with mirrored queues

  • LAMs (active)

Server B

  • Tomcat (passive)

  • moogfarmd (passive)

    • Synced with moogfarmd on Server A via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server A, with mirrored queues

  • LAMs (passive)

Server C

  • MySQL (primary master)

    • MySQL Replication slave to Server D

  • Elasticsearch (active)

Server D

  • MySQL (secondary master)

    • MySQL Replication slave to Server C (ready for failover)

  • Elasticsearch (active)

Sample Configuration

Cluster configuration template:

Server A

  • Zone: zone_a

  • Hostname: server_a

  • HA Cluster name: DC1

  • Cluster.Process Group.Instance:

    • moogfarmd: DC1.moog_farmd.moog_farmd-1

    • Tomcat: DC1.UI.servlets

    • Sample LAM: DC1.sample_lam.sample_lam-1

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_a
    mooms.message_persistence: true
    mysql.host: server_c
    mysql.failover_connections: server_d
    elasticsearch.host: server_c
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: DC1 & DC2 instances
    ha.cluster_name: DC1

Server B

  • Zone: zone_a

  • Hostname: server_b

  • HA Cluster name: DC2

  • Cluster.Process Group.Instance:

    • moogfarmd: DC2.moog_farmd.moog_farmd-1

    • Tomcat: DC2.UI.servlets

    • Sample LAM: DC2.sample_lam.sample_lam-1

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_b
    mooms.message_persistence: true
    mysql.host: server_c
    mysql.failover_connections: server_d
    elasticsearch.host: server_c
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: DC1 & DC2 instances
    ha.cluster_name: DC2

MySQL /etc/my.cnf (custom replication config)

Server C:

# Replication
server-id=1
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

Server D:

# Replication
server-id=2
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

RabbitMQ

rabbitmqctl set_policy -p zone_a ha-all ".+\.HA" '{"ha-mode":"all"}'
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@server_a
rabbitmqctl start_app
Failover Scenarios
Server A Failure
  • In the event Server A fails, ha_cntl is invoked to make the moogfarmd, tomcat, and LAM components active on Server B. Components on Server B will use Server B’s RabbitMQ. The SLB will take Server A’s components (LAM, httpd, tomcat) out of the rotation and put Server B’s components into the rotation

  • Use ha_cntl to make Server B’s moogfarmd, tomcat, and LAM components active: ha_cntl -a DC2

  • Server A can be restored to Primary by using ha_cntl to switch processing back to Server A

Server C Failure
  • In the event Server C fails, active components will use their mysql.failover_connections setting to connect to Server D

  • Elasticsearch failover needs to be done manually by changing system.conf

Post-failover Steps
  • Before restoring Server C, change system.conf so that Server D is the primary MySQL connection and Server C is the failover

  • When restoring Server C, synchronize Server C’s MySQL with Server D’s, and configure Server C as the Secondary in the Master-Master setup

  • Server C can be restored to primary master by shutting down Server D’s MySQL, allowing components to fail over to Server C, then reversing primary/failover in system.conf again. Last, bring up MySQL on Server D and resync it as a slave to Server C

Four Server Active/Active
[Architecture diagram: Four Server Active/Active]

Server A

  • Tomcat (active)

  • moogfarmd (active)

    • Synced with moogfarmd on Server B via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server B, with mirrored queues

  • LAMs (active)

Server B

  • Tomcat (active)

  • moogfarmd (passive)

    • Synced with moogfarmd on Server A via hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server A, with mirrored queues

  • LAMs (active)

Server C

  • MySQL (primary master)

    • MySQL Replication slave to Server D

  • Elasticsearch (active)

Server D

  • MySQL (secondary master)

    • MySQL Replication slave to Server C (ready for failover)

  • Elasticsearch (active)

Sample Configuration

Cluster configuration template:

Server A

  • Zone: zone_a

  • Hostname: server_a

  • HA Cluster name: DC1

  • Cluster.Process Group.Instance:

    • moogfarmd: DC1.moog_farmd.moog_farmd-1

    • Tomcat: DC1.UI1.servlets (see note below)

    • Sample LAM: DC1.sample_lam.sample_lam-1

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_a,server_b
    mooms.message_persistence: true
    mysql.host: server_c
    mysql.failover_connections: server_d
    elasticsearch.host: server_c
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: DC1 & DC2 instances
    ha.cluster_name: DC1

Server B

  • Zone: zone_a

  • Hostname: server_b

  • HA Cluster name: DC2

  • Cluster.Process Group.Instance:

    • moogfarmd: DC2.moog_farmd.moog_farmd-1

    • Tomcat: DC2.UI2.servlets (see note below)

    • Sample LAM: DC2.sample_lam.sample_lam-1

  • system.conf:

    mooms.zone: zone_a
    mooms.brokers.host: server_a,server_b
    mooms.message_persistence: true
    mysql.host: server_c
    mysql.failover_connections: server_d
    elasticsearch.host: server_d
    failover.persist_state: true
    failover.hazelcast.hosts: server_a,server_b
    process_monitor: DC1 & DC2 instances
    ha.cluster_name: DC2

MySQL /etc/my.cnf (custom replication config)

Server C:

# Replication
server-id=1
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

Server D:

# Replication
server-id=2
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc

RabbitMQ

rabbitmqctl set_policy -p zone_a ha-all ".+\.HA" '{"ha-mode":"all"}'
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@server_a
rabbitmqctl start_app

Note: As this is an active/active setup and message_persistence is set to true, the UI servlets must be run in separately named process groups.

Failover Scenarios
Server A Failure
  • In the event Server A fails, ha_cntl can be invoked to make moogfarmd active on Server B. Components using Server A’s RabbitMQ will switch over to Server B’s RabbitMQ. The SLB will take Server A’s components (LAM, httpd, tomcat) out of the rotation

  • Use ha_cntl to make Server B’s moogfarmd active: ha_cntl -a DC2.moog_farmd.moog_farmd-1

Server C Failure
  • In the event Server C fails, active components will use their mysql.failover_connections setting to connect to Server D

  • Elasticsearch failover needs to be done manually by changing system.conf

Post-failover Steps
  • Establish Server D’s MySQL as Primary: on each server, change mysql.host in system.conf to Server D and mysql.failover_connections to Server C, then restart components (moogfarmd can be kept running during these changes by switching which instance is active with ha_cntl, restarting each moogfarmd only while it is passive)

  • When restoring Server C, synchronize Server C’s MySQL with Server D’s, and configure Server C as the Secondary in the Master-Master setup

  • Server C can be restored to Primary by repeating the above steps, starting with shutting down Server D’s MySQL and letting components fail over to the Secondary on Server C (then change system.conf, restart components, etc.)

  • Change Server A’s system.conf so that Server A’s tomcat fails over to use Server D’s Elasticsearch. Server B’s tomcat already uses Server D’s Elasticsearch, so it needs no change and no failover. Server D’s Elasticsearch connection to Server D’s DB does not change either

Eight Server Active/Active Fully Distributed
[Architecture diagram: Eight Server Active/Active Fully Distributed]

Server A, Server E (UI servers)

  • Moog packages installed: moogsoft-ui

  • Tomcat (active) and nginx on both servers

Server B, Server F (moogfarmd/RabbitMQ servers)

  • Moog packages installed: moogsoft-mooms + moogsoft-server + moogsoft-utils

  • moogfarmd (active) on Server B

    • Synced with moogfarmd (passive) on Server F via Hazelcast

  • RabbitMQ (active)

    • Clustered with RabbitMQ on Server F, with mirrored queues

Server C, Server G (LAM servers)

  • Moog packages installed: moogsoft-lams

  • LAMs (active) on both servers. Can be used with/behind a Load Balancer or without

Server D, Server H (database servers)

  • Moog packages installed: moogsoft-db, moogsoft-search

  • Elasticsearch (active) on both servers. Note that Elasticsearch currently does not support HA

  • MySQL (Server D) is the Primary Master/Active

  • MySQL (Server H) is the Failover Master/Active

  • Both MySQL servers are set up for bi-directional Active/Active replication

HA Cluster, Group, Instance Visualization

The diagram shows how the HA Cluster, Group, and Instances are set up across all eight servers.

[Diagram: HA Cluster, Group, and Instance layout across all eight servers]
Sample Configuration

Note

  • The sample configuration below shows only the portions of the Moogsoft config files relevant to the HA setup in the architecture diagram above. It does not list the complete contents of each file

  • The sample configuration also does not show the Server Load Balancer (SLB) configuration. Depending on the SLB used (for example HAProxy, F5, etc.), refer to the respective documentation

  • The Moogsoft AIOps documentation also has some example HAProxy configuration that can be used as a reference where applicable

UI Servers

$MOOGSOFT_HOME/config/servlets.conf

Server A

{
   loglevel: "WARN",
   webhost : "https://<Server_Load_Balancer_VIP_or_Name>",
   moogsvr:
   {
        eula_per_user: false,
        cache_root: "/mnt/nfs/shared",
        db_connections: 10,
        priority_db_connections: 25
    },
    moogpoller :
    {
    },
    toolrunner :
    {
        sshtimeout: 900000,
        toolrunnerhost: "Server D",
        toolrunneruser: "<toolrunner username>",
        toolrunnerpassword: "<toolrunner password>"
    },
    graze  :
    {
    },
    events  :
    {
    },
    ha :
    {
        cluster: "QAP",
        instance: "SERVLETS-P",
        group: "UI",
        start_as_passive: false
    }
}
Server E

{
   loglevel: "WARN",
   webhost : "https://<Server_Load_Balancer_VIP_or_Name>",
   moogsvr:
   {
        eula_per_user: false,
        cache_root: "/mnt/nfs/shared",
        db_connections: 10,
        priority_db_connections: 25
    },
    moogpoller :
    {
    },
    toolrunner :
    {
        sshtimeout: 900000,
        toolrunnerhost: "Server H",
        toolrunneruser: "<toolrunner username>",
        toolrunnerpassword: "<toolrunner password>"
    },
    graze  :
    {
    },
    events  :
    {
    },
    ha :
    {
        cluster: "QAS",
        instance: "SERVLETS-S",
        group: "UI",
        start_as_passive: false
    }
}

$MOOGSOFT_HOME/config/system.conf

Server A

"mooms" :
        {
        #
        "zone" : "QA",
        "brokers" : [
                {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server B",
                #
                # The listening port of the MooMS broker
                #
                "port": 5672
                }
        ],
 "mysql" :
        {
                "host" : "Server D",
                "database"            : "moogdb",
                "port" : 3306,

                # To use Multi-Host Connections for failover support use:
                #
                "failover_connections":
                [
                {
                        "host": "Server H",
                        "port": 3306
                }
"search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server D",
                    "port"     : 9200
                }
            ]
        },

# Failover configuration parameters.
        “failover":
        {
                # Set persist_state to true to turn persistence on. If set, state
                # will be persisted in a Hazelcast cluster.
                "persist_state" : true,
        "process_monitor":
# Moog_farmd
                  {
                     "group"           : "moog_farmd",
                     "instance"        : "QAFARMDP",
                  # Servlets
                     {
                       "group"           : "UI",
                       "instance"        : "SERVLETS-P",
                       "service_name"    : "apache-tomcat",
                       "process_type"    : "servlets",
                       "reserved"        : true,
                       "subcomponents"   : [
                                              "moogsvr",
                                              "moogpoller",
                                              "toolrunner"
                                            ]
                     },
                      # Lams
                       {
                         "group"              : "rest_lam",
                         "instance"        : "QALAMDP",
                         "service_name"    : "rest_lamd",
                         "process_type"    : "LAM",
                            "reserved"        : false
        "ha":
        {
                # Use this to change the default HA cluster name.
                "cluster": "QAP"
        }
"mooms" :
        {
        #
        "zone" : "QA",
        "brokers" : [
                {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server F",
                #
                # The listening port of the MooMS broker
                #
                "port": 5672
                }
        ],
"mysql" :
        {
                "host" : "Server D",
                "database"            : "moogdb",
                "port" : 3306,
                # To use Multi-Host Connections for failover support use:
                #
                "failover_connections":
                [
                {
                        "host": "Server H",
                        "port": 3306
                }
"search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server H",
                    "port"     : 9200
                }
            ]
        },
# Failover configuration parameters.
        “failover":
        {
                # Set persist_state to true to turn persistence on. If set, state
                # will be persisted in a Hazelcast cluster.
                "persist_state" : true,
        "process_monitor":

# Moog_farmd
                  {
                   "group"           : "moog_farmd",
                   "instance"        : "QAFARMDS",
                  # Servlets
                  {
                       "group"           : "UI",
                       "instance"        : "SERVLETS-S",
                       "service_name"    : "apache-tomcat",
                       "process_type"    : "servlets",
                       "reserved"        : true,
                       "subcomponents"   : [
                                              "moogsvr",
                                              "moogpoller",
                                              "toolrunner"
                                            ]
                     },
                   # Lams
                   {
                    "group"           : "rest_lam",
                    "instance"        : "QALAMDS",
                    "service_name"    : "rest_lamd",
                    "process_type"    : "LAM",
                       "reserved"        : false
        "ha":
        {
                # Use this to change the default HA cluster name.
                "cluster": "QAS"
        }
Moogfarmd/RabbitMQ Servers

$MOOGSOFT_HOME/config/system.conf

Server B

"mooms" :
    {
        "zone" : "QA",
        "brokers" : [
            {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server B",
                #
                # The listening port of the MooMS broker
                #
                "port" : 5672
            },
            {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server C",
                #
                # The listening port of the MooMS broker
                #
                "port" : 5672
            }
        ],
               "message_persistence" : true,
    "mysql" :
        {
            "host" : "Server D",

            "database"        : "moogdb",

"failover_connections" :
               [
                 {
                     "host"  : "Server H",
                    "port"  : 3306
                 }
    "search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server D",
                    "port"     : 9200
                }
            ]
        },
    # Failover configuration parameters.
    “failover" :
        {
            # Set persist_state to true to turn persistence on. If set, state
            # will be persisted in a Hazelcast cluster.
            "persist_state" : true,
            # Configuration for the Hazelcast cluster.
            "hazelcast" :
                {
                    # A list of hosts to allow to participate in the cluster.
                    "hosts" : ["Server B", "Server F"],
                    # Moog_farmd
                            {
                                "group"           : "moog_farmd",
                                "instance"        : "QAFARMDP",
                                "service_name"    : "moogfarmd",
                                "process_type"    : "moog_farmd",
                                "reserved"        : true,
                                "subcomponents"   : [
                                                        "AlertBuilder",
                                                        "Sigaliser",
                                                        "Default Cookbook",
                                                        "Journaller",
                                                        "TeamsMgr"
                                                        #"AlertRulesEngine",
                                                        #"SituationMgr",
                                                        #"Notifier"
                                                  ]
                            },
                            # Servlets
                            {
                       "group"           : "UI",
                       "instance"        : "SERVLETS-P",
                       "service_name"    : "apache-tomcat",
                       "process_type"    : "servlets",
                       "reserved"        : true,
                       "subcomponents"   : [
                                              "moogsvr",
                                              "moogpoller",
                                              "toolrunner"
                                            ]
                     },
                     },
                            # Lams
                            {
                                "group"           : "rest_lam",
                                "instance"        : "QALAMDP",
                                "service_name"    : "rest_lamd",
                                "process_type"    : "LAM",
                                "reserved"        : false
                            }
                       ]
        }
        "ha":
        {
            # Use this to change the default HA cluster name.
            "cluster": "QAP"
        }
    "mooms" :
    {
        "zone" : "QA",
        "brokers" : [
            {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server B",

                #
                # The listening port of the MooMS broker
                #
                "port" : 5672
            },
            {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server C",

                #
                # The listening port of the MooMS broker
                #
                "port" : 5672
            }
        ],
               "message_persistence" : true,
    "mysql" :
        {
            "host" : "Server D",
            "database"        : "moogdb",

"failover_connections" :
               [
                 {
                     "host"  : "Server H",
                     "port"  : 3306
                 }
    "search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server H",
                    "port"     : 9200
                }
            ]
        },

    # Failover configuration parameters.
    "failover" :
        {
            # Set persist_state to true to turn persistence on. If set, state
            # will be persisted in a Hazelcast cluster.
            "persist_state" : true,

            # Configuration for the Hazelcast cluster.
            "hazelcast" :
                {
                    # A list of hosts to allow to participate in the cluster.
                    "hosts" : ["Server B", "Server F"],
                    # Moog_farmd
                            {
                                "group"           : "moog_farmd",
                                "instance"        : "QAFARMDS",
                                "service_name"    : "moogfarmd",
                                "process_type"    : "moog_farmd",
                                "reserved"        : true,
                                "subcomponents"   : [
                                                        "AlertBuilder",
                                                        "Sigaliser",
                                                        "Default Cookbook",
                                                        "Journaller",
                                                        "TeamsMgr"
                                                        #"AlertRulesEngine",
                                                        #"SituationMgr",
                                                        #"Notifier"
                                                  ]
                            },

                            # Servlets
                            {
                       "group"           : "UI",
                       "instance"        : "SERVLETS-S",
                       "service_name"    : "apache-tomcat",
                       "process_type"    : "servlets",
                       "reserved"        : true,
                       "subcomponents"   : [
                                              "moogsvr",
                                              "moogpoller",
                                              "toolrunner"
                                            ]
                     },
                            # Lams
                            {
                                "group"           : "rest_lam",
                                "instance"        : "QALAMDS",
                                "service_name"    : "rest_lamd",
                                "process_type"    : "LAM",
                                "reserved"        : false
                            }
                       ]
        }
        "ha":
        {
            # Use this to change the default HA cluster name.
            "cluster": "QAS"
        }

$MOOGSOFT_HOME/config/moog_farmd.conf

For all the Moolets that are being used and are set to "run_on_startup: true", ensure that they also have "persist_state: true" to take advantage of message persistence in case of a failover.

Server B

ha:
        {
            cluster: "QAP",
            group: "QAFARMD",
            default_leader: true
        }

Server F

ha:
        {
            cluster: "QAS",
            group: "QAFARMD",
            default_leader: true
        }
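For reference, a hypothetical moolet entry showing where run_on_startup and persist_state sit in moog_farmd.conf (the classname shown and any other parameters of your actual moolets may differ; this is not a complete configuration):

{
    name            : "AlertBuilder",
    classname       : "CAlertBuilder",
    run_on_startup  : true,
    # Persist this moolet's in-memory state via hazelcast so that a
    # passive moogfarmd can resume processing after a failover
    persist_state   : true
}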
LAM Servers

$MOOGSOFT_HOME/config/system.conf

Server C

"mooms" :
    {
        "zone" : "QA",

        "brokers" : [
            {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server B",

                #
                # The listening port of the MooMS broker
                #
                "port" : 5672
                "message_persistence" : true,
"mysql" :
        {
            "host" : "Server D",
            "database"        : "moogdb",
            "username"        : "moogsoft",
            "password"        : "m00",
#
            "failover_connections" :
               [
                 {
                     "host"  : "Server H",
                     "port"  : 3306
                 }
"search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server D",
                    "port"     : 9200
                }
            ]
        },
      # Persistence configuration parameters.
    "persistence" :
        {
            # Set persist_state to true to turn persistence on. If set, state
            # will be persisted in a Hazelcast cluster.
            "persist_state" : true,
# Moog_farmd
                            {
                                "group"           : "moog_farmd",
                                "instance"        : "QAFARMDP",
                                "service_name"    : "moogfarmd",
                                "process_type"    : "moog_farmd",
                                "reserved"        : true,
                                "subcomponents"   : [
                                                        "AlertBuilder",
                                                        "Default Cookbook",
                                                        "Journaller",
                                                        "TeamsMgr"
                                                        #"AlertRulesEngine",
                                                        #"SituationMgr",
                                                        #"Notifier"
                                                  ]
                            },

                            # Servlets
                            {
                       "group"           : "UI",
                       "instance"        : "SERVLETS-P",
                       "service_name"    : "apache-tomcat",
                       "process_type"    : "servlets",
                       "reserved"        : true,
                       "subcomponents"   : [
                                              "moogsvr",
                                              "moogpoller",
                                              "toolrunner"
                                            ]
                     },
                            # Lams
                            {
                                "group"           : "rest_lam",
                                "instance"        : "QALAMDP",
                                "service_name"    : "rest_lamd",
                                "process_type"    : "LAM",
                                "reserved"        : false
                            }
"ha":
        {
            # Use this to change the default HA cluster name.
            "cluster": "QAP"
        }
"mooms" :
    {
       "zone" : "QA",

        "brokers" : [
            {
                #
                # This can be an IPv4 or IPv6 address
                #
                "host" : "Server F",

                #
                # The listening port of the MooMS broker
                #
                "port" : 5672
 "mysql" :
        {
            "host" : "Server D",
            "database"        : "moogdb",
            "username"        : "moogsoft",
            "password"        : "m00",
            "failover_connections" :
               [
                 {
                     "host"  : "Server H",
                     "port"  : 3306
                 }
"search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server H",
                    "port"     : 9200
                }
            ]
        },
# Persistence configuration parameters.
    "persistence" :
        {
            # Set persist_state to true to turn persistence on. If set, state
            # will be persisted in a Hazelcast cluster.
            "persist_state" : true,
# Moog_farmd
                            {
                                "group"           : "moog_farmd",
                                "instance"        : "QAFARMDS",
                                "service_name"    : "moogfarmd",
                                "process_type"    : "moog_farmd",
                                "reserved"        : true,
                                "subcomponents"   : [
                                                        "AlertBuilder",
                                                        "Sigaliser",
                                                        "Default Cookbook",
                                                        "Journaller",
                                                        "TeamsMgr"
                                                        #"AlertRulesEngine",
                                                        #"SituationMgr",
                                                        #"Notifier"
                                                  ]
                            },

                            # Servlets
                            {
                       "group"           : "UI",
                       "instance"        : "SERVLETS-S",
                       "service_name"    : "apache-tomcat",
                       "process_type"    : "servlets",
                       "reserved"        : true,
                       "subcomponents"   : [
                                              "moogsvr",
                                              "moogpoller",
                                              "toolrunner"
                                            ]
                     },
                            # Lams
                            {
                                "group"           : "rest_lam",
                                "instance"        : "QALAMDS",
                                "service_name"    : "rest_lamd",
                                "process_type"    : "LAM",
                                "reserved"        : false
                            }
"ha":
        {
            # Use this to change the default HA cluster name.
            "cluster": "QAS"
        }

$MOOGSOFT_HOME/config/rest_lam.conf

Server C

ha:
     {
        cluster: "QAP",
        group: "QALAMD"
     },

Server G

ha:
     {
        cluster: "QAS",
        group: "QALAMD"
     },
Database Servers

$MOOGSOFT_HOME/config/system.conf

Server D

"mysql" :
        {
            "host"            : "localhost",
            "database"        : "moogdb",
            "port"            : 3306
        },

"search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server D",
                    "port"     : 9200
                }
            ]
        },
 "ha":
        {
            # Use this to change the default HA cluster name.
            "cluster": "QAP"
        }
"mysql" :
        {
            "host"            : "localhost",
            "database"        : "moogdb",
            "port"            : 3306
       },

"search" :
        {
            "limit"         : 1000,
            "nodes" : [
                {
                    "host"     : "Server H",
                    "port"     : 9200
                }
            ]
        },
 "ha":
        {
            # Use this to change the default HA cluster name.
            "cluster": "QAS"
        }

NFS Shared Storage setup for handling server files

File attachments uploaded to the UI (Situation Room thread entry attachments and User Profile pictures) are written to disk on the UI server to which the user is connected. To ensure these attachments are available to users logged in to other UI servers as part of an HA/Load Balancer setup, the location of these attachments should be configured to be on a shared disk (NFS) available to all UI servers. The example below shows a sample configuration.

  • The configuration below assumes that /mnt/nfs/shared is the shared location exposed by the NFS server called NFS_Server

  • Ensure that the tomcat:tomcat user/group has the same uid/gid across all 3 servers and has ownership of the shared directory. If not, the following commands can be run on all 3 servers to make them the same. The gid can be verified on all 3 servers via the cat /etc/group | grep tomcat command

groupmod -g <gid> tomcat
usermod -u <uid> tomcat
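To confirm the change, the uid and gid can be checked with the standard id command on each of the 3 servers; the values should match everywhere:

id tomcat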
NFS Server
service apache-tomcat stop

chown -R tomcat:tomcat /usr/share/apache-tomcat

chown -R tomcat:tomcat /var/run/apache-tomcat

mkdir /shared

chown tomcat:tomcat /shared/

chmod 755 /shared/

yum install -y nfs-utils nfs-utils-lib
chkconfig nfs on

service rpcbind start

service nfs start

service apache-tomcat start

Edit /etc/exports and set: /shared Server A(rw,sync,no_subtree_check,insecure) Server E(rw,sync,no_subtree_check,insecure)

Then run: exportfs -a
Server A
service apache-tomcat stop

chown -R tomcat:tomcat /usr/share/apache-tomcat

chown -R tomcat:tomcat /var/run/apache-tomcat

yum install -y nfs-utils nfs-utils-lib

mkdir -p /mnt/nfs/shared

mount NFS_Server:/shared /mnt/nfs/shared

Edit $MOOGSOFT_HOME/config/servlets.conf and set cache_root: "/mnt/nfs/shared",

service apache-tomcat start
Server E
service apache-tomcat stop

chown -R tomcat:tomcat /usr/share/apache-tomcat

chown -R tomcat:tomcat /var/run/apache-tomcat

yum install -y nfs-utils nfs-utils-lib

mkdir -p /mnt/nfs/shared

mount NFS_Server:/shared /mnt/nfs/shared

Edit $MOOGSOFT_HOME/config/servlets.conf and set cache_root: "/mnt/nfs/shared",

service apache-tomcat start
Failover Scenarios
  • moog_farmd: moog_farmd process running in Passive mode will not process Events or detect Situations. When it fails over to Active mode, it will be able to carry on using the state from the previously Active Instance if this has been persisted

  • LAMs operating in Passive mode do not send Events to the MooMS bus. For example, the REST LAM in Passive mode will reject POST requests with an HTTP status of 503 (Service Unavailable), as the probe sketched below illustrates
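A quick way to check which REST LAM instance is currently active (the host and port are illustrative; use the port your REST LAM is configured to listen on):

curl -s -o /dev/null -w "%{http_code}\n" -X POST http://server_c:8888/ -d '{}'
# 503 indicates a passive instance; any other status means the instance is active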

Moog_farmd and UI Failover
  • In the setup shown in Figure A, all components are running in active/active mode except for moog_farmd

  • In the event Server B fails, ha_cntl can be invoked to make moogfarmd active on Server F. Components using Server B’s RabbitMQ will switch over to Server F’s RabbitMQ. Since both UIs are active behind an SLB, users will see no delay, as there is no switch-over from one UI to another in this scenario; they will seamlessly pick up from Server F

  • Use ha_cntl to make Server F’s moogfarmd active: ha_cntl -a QAS.QAFARMD

  • Since both moog_farmds are in the same HA group (Figure B), activating the primary on Server B (ha_cntl -a QAP.QAFARMD) will automatically de-activate the moog_farmd on Server F, making it passive. To switch over to Server F, ha_cntl -a QAS.QAFARMD will activate moog_farmd on Server F, making Server B’s instance inactive

  • Use the "ha_cntl -v" output to see the HA status of the cluster

LAMs
  • Since both LAMs are active and behind an SLB, only one LAM will receive and send any given event onto the message bus, thus preventing duplicate events. In case of a Server C LAM failure, the SLB will seamlessly transition the data flow over to Server G

Warning

  • It is important to note that if the LAMs on Server C and Server G are not behind an SLB and events are sent to both at the same time, duplicate events will be sent onto the message bus: moog_farmd will see 2 events and create 2 Alerts (2 LAMs = 2 Events on the bus = 2 Alerts). Users will then see duplicate Alerts in the UI with the same details (date/time stamp, description) but different Alert IDs. To prevent this, only one LAM can run in active mode and the other must be in passive mode

  • Since both LAMs are in the same HA group (Figure B), starting the LAM on Server C via ha_cntl -a QAP.QALAMD will automatically de-activate the LAM on Server G, making it passive. To switch over to the LAM on Server G, ha_cntl -a QAS.QALAMD will make it active, thereby making the one on Server C passive

Database and Elasticsearch
  • Server D is the active DB, with Server H as the failover connection and active/active replication between them. Both Elasticsearch instances are set up to point to their respective DB servers. If Server D fails, the DB connection will automatically fail over to Server H. While the failover is occurring, some temporary MySQL connection errors or warnings may be seen in the log output. It should be noted that if the primary (or another failover_connection higher up the list) becomes available again, the connection will not automatically fail back until the Moogsoft AIOps component is restarted or makes a new connection

  • Due to the possibility of a DB split-brain scenario (i.e. the two DBs can no longer talk to each other, but both are still running), it is recommended to use a MySQL Cluster instead

MySQL Bi-Directional Replication Configuration
Relevant Commands

Server D

  • Setup my.cnf file

/etc/my.cnf:
# Replication
server-id=1
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc
  • Create Replication user

mysql> CREATE USER 'repl'@'Server H' IDENTIFIED BY 'm00g';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'Server H';
  • Lock Tables and Record file/position

mysql> FLUSH TABLES WITH READ LOCK;
mysql -u root -e "SHOW MASTER STATUS;"  (in a separate session; the output includes the file & position)
  • Create Copy of Master DB

mysqldump --all-databases --master-data > dbdump.db
  • Quit the mysql session above that held the read lock once the mysqldump has started

Server H

  • Setup my.cnf file

/etc/my.cnf:
# Replication
server-id=2
log-bin=mysql-bin
binlog_format=row
replicate-ignore-table=moogdb.elasticsearch_inc
  • Create Replication user

mysql> CREATE USER 'repl'@'Server D' IDENTIFIED BY 'm00g';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'Server D';
  • Load Copy of Master DB

mysql < dbdump.db
  • Configure and Start Slave

mysql> CHANGE MASTER TO MASTER_HOST='Server D', MASTER_USER='repl', MASTER_PASSWORD='m00g', MASTER_LOG_FILE='<file>', MASTER_LOG_POS=<position>;

Example:

mysql> CHANGE MASTER TO MASTER_HOST='Server D', MASTER_USER='repl', MASTER_PASSWORD='m00g', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=6144;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
Complete Master/Master Setup

Server H

mysql> SHOW MASTER STATUS;

Server D (using Server H’s file & position)

mysql> CHANGE MASTER TO MASTER_HOST='Server H', MASTER_USER='repl', MASTER_PASSWORD='m00g', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=27639080;

mysql> START SLAVE;

mysql> SHOW SLAVE STATUS\G

Note

You may have to use "mysqladmin -u root -p flush-hosts" to flush the host cache if some of the hosts change IP address or if the error message Host 'host_name' is blocked occurs

Related HA Command(s)
# ./ha_cntl --help
usage: ha_cntl [--activate cluster[.group[.instance]] | --deactivate
               cluster[.group[.instance]] | --diagnostics
               cluster[.group[.instance]]  [ --assumeyes ] | --view ] [
               --loglevel (INFO|WARN|ALL) ] [ --time_out <seconds> ] |
               --help

MoogSoft ha_cntl: Utility to send high availability control command
 -a,--activate <arg>      Activate all groups within a cluster, a specific
                          group within a cluster or a single instance
 -d,--deactivate <arg>    Deactivate all groups within a cluster, a
                          specific group within a cluster or a single
                          instance
 -h,--help                Print this message
 -i,--diagnostics <arg>   Print additional diagnostics where available to
                          process log file.
 -l,--loglevel <arg>      Specify (INFO|WARN|ALL) to choose the amount of
                          debug output
 -t,--time_out <arg>      Amount of time (in seconds) to wait for the last
                          answer (default 2 seconds)
 -v,--view                View the current status
 -y,--assumeyes           Answer yes for all questions