Redundancy
Spectra appliances are designed to be fully redundant. All components are built so that, if one fails, another can immediately take over using the same configuration settings. This chapter describes how redundancy works for Spectra Detect.
Manager Redundancy
- As admin, go to Administration > Redundancy > Create redundant cluster.
- Fill out the required fields. Note:
  - A VPN is established between the redundant Manager instances; its connection details are generated and filled in automatically.
  - The user on the secondary Manager must also be an admin.
- Check the status on the status page. You can't work on the secondary Manager while it's used as the redundant system. All configuration settings made on the primary Manager are automatically propagated to the secondary one.
Hub Groups
The Hub group feature allows users to create a group of two Hubs that share a virtual IP address, so that a second Hub can take over load balancing functions if the primary Hub fails for any reason.
Hubs and Workers must be connected to a Manager before they can be added to a Hub group.
Ensure that the appliances can reach each other. The exact requirements depend on the deployment type. In an AWS VPC environment, for example, if a Worker is in one security group and the Manager is in another, the security group rules must allow the relevant traffic (for example ICMP, SNMP, and HTTP) between the appliances.
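As an illustration of the AWS case only: a rule like the following could be added with the AWS CLI. Both security group IDs below are placeholders, and the port shown is just an example; consult your deployment's actual port requirements.

```shell
# Hypothetical example: allow HTTPS traffic from the Worker security group
# into the Manager security group. Group IDs are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789manager \
  --protocol tcp \
  --port 443 \
  --source-group sg-0123456789worker
```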
To create a Hub group, click Add new group on the dashboard or Create new group on the Central Configuration page.
This opens the Create new group dialog. In the Group type pull-down list, select the TiScale Hub Group option.
When the TiScale Hub Group is selected, additional options become available in the dialog.
Primary Hub
| Field | Description |
| --- | --- |
| Primary hub | Configure which of the Hubs connected to the Manager will be the primary Hub in this group by selecting it from the pull-down list. If a Hub instance is already assigned as a primary Hub to another cluster, it will not be listed. If no Hub instances are available or connected to the Manager, the pull-down list will be disabled altogether, and it will not be possible to create a Hub group. |
| Router ID for primary hub | Specify the router ID (hostname of the Hub instance). This field is required. |
| Primary hub priority | Specify the priority of the primary Hub. The value configured here indicates the priority of the Hub instance in the VRRP router. |
The priority value must be different on the primary and the secondary Hub, and the secondary (redundant) Hub must have the lower priority.
Secondary Hub
| Field | Description |
| --- | --- |
| Secondary hub | Configure which of the Hubs connected to the Manager will be the secondary Hub in this group by selecting it from the pull-down list. If all Hub instances are already assigned to other clusters, the pull-down list will be empty. Selecting a secondary Hub is optional, since a Hub group can contain only one (primary) Hub instance. If a secondary Hub is not selected, the Hub cluster will not be redundant. |
| Router ID for secondary hub | Specify the router ID (hostname of the Hub instance). |
| Secondary hub priority | Specify the priority of the secondary Hub. The value configured here indicates the priority of the Hub instance in the VRRP router. It must be lower than the priority value of the primary Hub. |
If a secondary Hub is configured, the VRRP options must also be configured: select the Enable VRRP checkbox, then configure the shared virtual IP address, the virtual router ID, and the authorization password used by both Hub instances.
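VRRP failover of this kind is commonly implemented on Linux with keepalived. The following is a minimal illustrative sketch of how the settings above map onto a VRRP configuration, not the appliance's actual configuration file; every value shown is a placeholder.

```conf
# Illustrative keepalived-style VRRP sketch (all values are placeholders).
# The secondary Hub would use an equivalent block with state BACKUP
# and a lower priority.
vrrp_instance hub_group {
    state MASTER              # BACKUP on the secondary Hub
    interface eth0            # placeholder network interface
    virtual_router_id 51      # the virtual router ID shared by both Hubs
    priority 150              # must be lower on the secondary Hub (e.g. 100)
    authentication {
        auth_type PASS
        auth_pass example-pw  # the password configured for both Hub instances
    }
    virtual_ipaddress {
        10.0.0.100            # the shared virtual IP address
    }
}
```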
Different network
If the IP addresses of the primary and secondary Hub aren’t on the same network, users will need to set up their own load balancer. In this case, the Virtual IP address will be ignored. In both cases (same or different network), the primary Hub will communicate with the redundant Hub, and all connectors will fail over to the redundant Hub in case the primary Hub goes offline.
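When you provide your own load balancer for Hubs on different networks, one common pattern is an active/backup pair in HAProxy. The sketch below is illustrative only, with placeholder addresses and names; it is not a product-supplied configuration.

```conf
# Illustrative HAProxy sketch for fronting two Hubs on different networks.
# Addresses, names, and the port are placeholders.
frontend hub_frontend
    bind *:443
    default_backend hubs

backend hubs
    server hub_primary   10.0.1.10:443 check
    server hub_secondary 10.1.1.10:443 check backup   # used only if the primary fails
```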
Connecting Workers to a Hub Cluster
The Spectra Detect Workers section allows configuring tokens for all Worker instances in the Hub group.
| Field | Description |
| --- | --- |
| Srv Available token | Specify the token used to check Worker availability status. Must correspond to the /srv/tiscale/v1/available token set on Workers connected to this Hub. All Workers in the group must use the same token for services like load balancing to work properly. Leave blank if this token hasn't been previously configured on connected Workers. |
| Srv token | Specify the token used for controlling load balancing on Worker appliances in the Hub group. The value configured here must correspond to the /srv/tiscale token set directly on Workers in the Hub group. All Workers in the same group must use the same token. Leave this field blank if this token hasn't been previously configured on any of the Workers in the Hub group. |
| API token | Specify the token used for accessing services like Connectors. The value configured here must correspond to the api/tiscale token set directly on Workers in the Hub group. All Workers in the same group must use the same token. Leave this field blank if this token hasn't been previously configured on any of the Workers in the Hub group. |
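As a hedged illustration of how such a token might be used: assuming the token is sent in an Authorization header (an assumption, not confirmed by this section), an availability check against a Worker could look like the following. The hostname and token are placeholders.

```shell
# Hypothetical availability check against a Worker in the group.
# Host, token value, and the header scheme are assumptions for this sketch.
curl -H "Authorization: Token EXAMPLE_SRV_AVAILABLE_TOKEN" \
  "https://worker.example.com/srv/tiscale/v1/available"
```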
The Large File Size Threshold (mebibytes) value configures which files are considered large and analyzed by the Workers assigned to handle large files in the Large File Appliances section. The group has to be created, and Workers have to be added to it in the group's central configuration settings, before they can be selected in this section. Large File Appliances receive a different configuration, specifically created to improve large-file processing performance. To see a list of these overridden settings, check the expandable section on the group's Central Configuration page, below the Apply Configuration section.
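For clarity on the unit: one mebibyte is 1024 × 1024 bytes. A minimal sketch of the routing decision the threshold implies, with placeholder values (the threshold and file size are examples, not defaults):

```shell
# Sketch: a 512 MiB threshold expressed in bytes, and the resulting routing.
# Threshold and file size are placeholder values for illustration.
threshold_mib=512
threshold_bytes=$(( threshold_mib * 1024 * 1024 ))   # 536870912 bytes

file_bytes=734003200   # a ~700 MiB sample file
if [ "$file_bytes" -gt "$threshold_bytes" ]; then
  route="large-file-appliance"   # handled by a Large File Appliance
else
  route="standard-worker"
fi
echo "$route"
```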
To apply changes to the Hub group, click Add.
Once the group has been created, visit the group's Central Configuration page and select the checkboxes next to Workers to add them to the group. To connect all Workers, select the All checkbox. Workers that are already in another group have an indicator next to their name.
After clicking the Save button, the Manager displays a message informing the user which Worker and Hub instances will be added to the new Hub group, which will be removed (if any), and which instances will leave their previous groups.