Version: Spectra Analyze 9.2.0

Redundancy System

Spectra Analyze allows administrators to join two Spectra Analyze instances into a redundant cluster. The cluster provides switch-over capabilities to ensure that no data is lost during system upgrades or in case of a failure on the primary appliance.

Only administrators can access and use the Redundancy system feature.

To create a new redundant cluster or manage an existing one, select the Redundancy system option on the Administration page of the primary appliance.

note

This chapter describes redundancy on the same subnet. Spectra Analyze automatically supports redundancy on the same network and takes care of all the details, such as setting up a virtual (shared) IP address. However, to set up a redundant system across the Internet, users will need to set up their own load balancer (for example, HAProxy). This load balancer will route requests to the secondary Spectra Analyze in case the primary one fails. In that case, the procedure to set up a redundant system is the same as in the remainder of this chapter, with the exception that the Floating IP Address will be ignored.
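For the cross-Internet scenario, a minimal HAProxy fragment might look like the following. This is an illustrative sketch only: the bind port, backend addresses, and health-check settings are placeholder assumptions to adapt to your environment, not values prescribed by Spectra Analyze.

```
# haproxy.cfg fragment (illustrative) -- TCP passthrough to Spectra Analyze
frontend analyze_front
    bind *:443
    mode tcp
    default_backend analyze_back

backend analyze_back
    mode tcp
    option tcp-check
    # Route to the primary appliance; fail over to the secondary only
    # when the primary's health check fails ("backup" server).
    server primary   198.51.100.10:443 check
    server secondary 198.51.100.20:443 check backup
```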

How Redundancy Works on Spectra Analyze

Two Spectra Analyze appliances are connected into a redundant cluster, where one appliance is configured as the primary (master) appliance, and the other as the secondary (standby) appliance. All data from the primary appliance is synchronized to the secondary appliance in regular intervals. This is done to ensure that both appliances contain the same data at all times.

Essentially, the secondary appliance serves as a replica of the primary appliance, and is able to seamlessly take over all processing activities in case the primary appliance experiences a system failure for any reason.

When the two appliances are configured in a cluster, the primary appliance can be normally used for tasks such as file uploads and analysis. The secondary appliance receives all new samples and database updates from the primary appliance, but is never used for file analysis or processing.

When an outage is detected on the primary appliance, all processing is transferred to the secondary appliance. In this case, a switch-over occurs and the secondary appliance takes over the role of the primary appliance. Additionally, administrators can manually perform the switch-over and promote the secondary appliance to the primary one at any point.

The cluster is controlled from the primary appliance. The administrator should decide which of the two appliances will initially be used as the primary one, and perform all cluster configuration and management actions from it.

All existing data is erased from the secondary appliance during the cluster configuration process.

During the cluster configuration process, the primary appliance is accessible and can be used. The secondary appliance is inaccessible and cannot be used.

Once the cluster is configured, the floating IP address should be used to access the cluster instead of individual IP addresses of the appliances.

Creating a Redundant Cluster

To set up a redundant cluster, select the Redundancy system option on the Administration page of the primary appliance.

Prerequisites for setting up a redundant cluster:

  • Administrator access to both Spectra Analyze appliances. This is required for logging into the secondary appliance during the configuration process.
  • The Spectra Analyze appliances must be running the same software version.
  • The system specifications of the two Spectra Analyze appliances should be as similar as possible, especially the RAM and storage size. Considerable differences in system specifications can cause synchronization issues.
  • The link between the Spectra Analyze appliances must be at least 1 Gbps. Using a lower bandwidth link in situations with larger databases (over 60 GB) can cause database synchronization issues.
  • Both appliances must be able to connect to each other via SSH.
  • Host names and IP addresses must be defined for both appliances, as well as the shared (floating) IP address. Under Administration > Configuration > General > Allowed Hosts, add the host name and IP address of the primary and secondary appliance, as well as the floating IP address. Perform this on the other appliance as well.

The Allowed Hosts box (Administration > Configuration > General) lists the host/domain names that this application site can serve.
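As an illustration, with a primary appliance at 10.0.1.10, a secondary at 10.0.1.11, and a floating address of 10.0.1.20 (all hypothetical values, as are the host names), the Allowed Hosts list on each appliance would contain entries along these lines:

```
10.0.1.10
analyze-primary.example.com
10.0.1.11
analyze-secondary.example.com
10.0.1.20
analyze.example.com
```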

If no previous cluster configuration is present on the appliance, only the options for creating a new cluster will be visible on the Administration > Redundancy system page. Click the Create redundant cluster button to start the configuration wizard.

[Screenshot: the redundant cluster configuration wizard]

Important notes about the cluster configuration process:

  • All existing data is erased from the secondary appliance during the configuration process. The user is warned about this and has to confirm before starting the configuration process.
  • Due to additional background processes that are required to set up the cluster, system performance may be impacted during the cluster configuration.
  • If the cluster configuration process is canceled by the user or interrupted due to unforeseen circumstances, the secondary appliance may become non-functional depending on the stage at which the interruption occurred. Administrators may attempt resetting the secondary appliance using the tcbase reset command from the appliance console.
  • After the cluster has been configured, the initial synchronization between primary and secondary appliances can take a significant amount of time (up to several hours, depending on the amount of data on the appliances). During this time, local network traffic will be increased because of the data replication.

The first step of the configuration process consists of providing the required information about the two Spectra Analyze appliances that will be in the cluster.

All fields in the configuration dialog are required.

  • Floating IP Address - input the IP address that will be used to access the cluster once it’s configured. This address must be added to the Allowed hosts list on both Spectra Analyze appliances to allow them to communicate with each other
  • Local IP Address - input the IP address of the current (primary) appliance
  • Secondary Spectra Analyze IP Address - input the IP address of the secondary appliance that will be added to the cluster
  • Secondary Spectra Analyze URL - input the URL of the secondary appliance (in the format: http(s)://XX.XX.X.XXX)
  • Secondary Spectra Analyze user - input the username of the administrator account on the secondary appliance
  • Secondary Spectra Analyze password - input the password of the administrator account on the secondary appliance

After all the fields have been filled in, click the Next button to continue with the configuration process. In the next step, the primary appliance will check if all the prerequisites for setting up the cluster have been satisfied. If all steps are completed successfully, click the Next button to proceed to the last step of the configuration process.

On the next page, click the Start configuration button to initiate cluster configuration. A confirmation dialog will appear, warning the user about the impending data removal from the secondary appliance.

The configuration wizard displays information about activities happening in the background during the configuration process. Users can click the Toggle autoscroll on/off button to enable or disable automatic scrolling of the configuration log in the RL Cluster tab.

[Screenshot: the cluster configuration progress log]

When the cluster configuration is successfully completed, the configuration wizard will prompt the user to click the Finish button. This will open the Redundancy system page, displaying information about the status of the cluster.

Checking the Status of a Redundant Cluster

To check the status of the redundant cluster, navigate to the floating IP address defined in the configuration dialog, then select the Redundancy system option on the Administration page. The Cluster configuration section displays information about the IP addresses configured in the cluster, and provides options to perform a manual switch-over, or remove the cluster configuration completely.

[Screenshot: the Cluster configuration section after setup is complete]

Switching to the Status section in the sidebar on the left allows users to view more detailed information about the status of each node (appliance) and components in the cluster, as well as to browse system logs related to the redundancy functionality.

[Screenshot: the cluster Status section]

The status of the whole cluster is indicated by one of the following modes.

  • No Cluster - The cluster has not been defined.
  • Negotiate - The secondary appliance is not synchronized with the primary appliance. Fail-over is not yet possible. The cluster needs to decide which appliance will be set as the primary.
  • Syncing - The secondary appliance is in the process of synchronizing with the primary appliance. Fail-over is currently not possible.
  • Maintenance - The primary and secondary appliances are being upgraded to new versions.
  • Operational - The cluster is operational, and the secondary appliance is synchronized with the primary one. Fail-over is now possible.

The status of individual nodes (appliances) in the cluster is indicated by the following modes.

  • Single - This node is not joined to any cluster.
  • Secondary Not Synced - This node is configured as the secondary, but data synchronization with the primary node has not started yet.
  • Syncing - This node is configured as the secondary. Data synchronization with the primary node is in progress.
  • Secondary - This node is configured as the secondary and is now fully synchronized.
  • Primary - This node is configured as the primary.

Users can also check which appliance in the cluster is acting as the primary using the Redundant Status API.

Performing a Manual Switch-over

Once the redundant cluster is successfully configured and started in operational mode, the user generally does not need to interact with the cluster other than to check its status. The cluster will perform the switch-over automatically when it detects a failure on the primary appliance.

However, users can manually perform a switch-over at any time if necessary. To do this, access the Administration > Redundancy system > Cluster configuration section from the Spectra Analyze interface at the IP address of the primary appliance in the cluster. Do not use the floating IP address for this, as performing a manual switch-over from the floating IP address of the redundancy cluster can cause issues. Click the Manual switchover button and confirm the switch-over in the dialog that opens.

This will initiate the switch-over process, during which the current secondary appliance will be promoted to the primary one. Services required for the cluster to operate will be configured in the background. The switch-over process may take some time, depending on the system specifications of the appliances and the amount of data on them. Any existing RabbitMQ file processing queues will not be replicated on the secondary appliance.

Performing Upgrades in a Redundant Cluster

The Redundancy system feature allows Spectra Analyze administrators to upgrade the appliances in the cluster without affecting the cluster configuration. Upgrades should be performed from the primary appliance.

To upgrade both appliances in the cluster, access the Administration > System update section on the primary appliance. Upload and apply the upgrade package to the primary appliance. After that, the upgrade package will be uploaded to the secondary appliance.

While the primary appliance is being upgraded, it will enter maintenance mode and background services will be shut down. This includes services related to the redundancy functionality, which means that the data replication from the primary to the secondary appliance is stopped during the upgrade.

The redundancy functionality remains unavailable until the secondary appliance finishes upgrading. However, once the upgrade of the primary appliance is done, users can normally access and work on the primary Spectra Analyze appliance while the secondary is being upgraded.

If the primary appliance is connected to ReversingLabs Spectra Detect Manager, it is also possible to start the upgrade from the Spectra Detect Manager interface. Starting the appliance upgrade from Spectra Detect Manager will automatically initiate the upgrade of the whole cluster; meaning, the secondary appliance will be upgraded as well.

Removing the Redundant Cluster Configuration

If the cluster functionality is no longer needed, administrators can remove the cluster configuration.

warning

Removing the cluster functionality will require a full reinstallation of the secondary appliance. Visit the Setup chapter for instructions.

To dismantle the cluster, access the Administration > Redundancy system > Cluster configuration section from the Spectra Analyze interface on the cluster’s floating IP address. Click the Remove cluster configuration button and confirm the action in the dialog that opens.

The appliances will be removed from the cluster and the configured floating IP address will be deactivated. All data that is present on the primary appliance will be preserved. The primary appliance will return to its normal (single-node) operation mode, and will be able to continue processing files.

The process of removing the cluster configuration may take some time.

Troubleshooting the Cluster Configuration

In some cases, the cluster can experience an issue known as the “split brain” during the switch-over process, where both nodes in the cluster act as if they are the primary node.

This can be detected in the Administration > Redundancy system > Status section, where the Component status table will indicate issues with the database on the secondary appliance. Other related issues with the cluster will also be indicated in this section, such as the failure to establish the connection with the database, or a situation where the database runs in standby mode on the primary appliance.

To further troubleshoot and mitigate cluster configuration issues, contact ReversingLabs support.