AWS EKS Deployment
Introduction
This document provides an overview of deploying Spectra Detect on Kubernetes, outlining the goals of this deployment model, key differences compared to traditional virtual machine deployments (OVA and AMI), and installation and upgrade instructions.
Traditionally, Spectra Detect was distributed as:
- OVA packages for deployment on the customer's infrastructure.
- AMI packages for deployment in the customer's AWS account.
ReversingLabs also offered a hosted option, running Spectra Detect in our own AWS account.
In all of these cases, Spectra Detect runs continuously, even during idle periods, which results in unnecessary operational costs that could be avoided if the system scaled down intelligently when idle.
With Kubernetes deployment, we are introducing a new option: customers can run Spectra Detect either in their own on-premises Kubernetes cluster or in the cloud (e.g., Amazon EKS). In this setup, Spectra Detect workers (its processing units) form an auto-scalable pool: additional workers spin up during high load and are scaled back down during low activity to save resources and reduce costs.
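As an illustration of this scaling model, worker replicas in Kubernetes can be driven by a standard HorizontalPodAutoscaler. The sketch below is a minimal example only; the deployment name spectra-detect-worker, the namespace, and the CPU target are assumptions for illustration, not values shipped with the product Helm charts.

```yaml
# Minimal HorizontalPodAutoscaler sketch (autoscaling/v2).
# The target Deployment name, namespace, and thresholds are assumptions;
# use the values exposed by your Spectra Detect Worker Helm chart instead.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: spectra-detect-worker
  namespace: spectra-detect
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: spectra-detect-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```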
In the initial release, only EKS deployments are supported.
Feature differences between OVA and Kubernetes deployments
| Feature | OVA | Kubernetes |
|---|---|---|
| Large file threshold | Configures the threshold after which files are sent to the retry queue. | If dedicated large file workers are enabled, sets the size threshold after which files are routed to those workers. Otherwise, behavior matches the OVA deployment. To enable large file workers, follow the instructions here. |
| Disk High Threshold configuration | Controls the highest allowed percentage of hard disk usage on the system. If usage exceeds the configured value, the appliance starts rejecting traffic. | Removed. Workers use EFS storage, which is effectively unlimited. Disk usage alarms can still be configured if needed. |
| Queue High Threshold default value | 1000 | Increased to 20000, since all workers now share the same queue. |
| Monit memory threshold | Controls the percentage of memory, between 50 and 100, that services can use. If memory usage reaches the configured value, the system restarts services. | Removed. Memory limits are now set in the Helm charts (see the memory limits sketch after this table). |
| SNMP configuration | Used for monitoring appliances. | Unavailable at the moment. |
| Editing hub group details | Configures the primary and secondary hub and other parameters. | Removed. Load balancing is handled by Kubernetes Ingress. |
| Download support logs for connected appliances | Downloads logs. | All pods include a Filebeat sidecar. By default, logs are sent to the console, but they can be forwarded to aggregators such as ELK or OpenSearch (see the log forwarding sketch after this table). Learn more here. |
| Hub group tokens | Configured in hub group details. | Configured in Worker configuration → Authentication. All workers in the group share the same configuration. |
| Load balancer tab on Hub Appliance details | Displays load balancer data. | Removed. Load balancing is handled by Kubernetes Ingress. Use the available Ingress metrics. |
| Large file configuration overrides | Not available. | Can be overridden via config maps in Kubernetes mode (see the config map sketch after this table). |
| Appliance metrics | Displays CPU, RAM, and queue metrics. | Unavailable at the moment. |
| Upload custom report types | Configurable via the UI. | Not currently available in the UI. Custom report types and views can be stored in config maps (see the config map sketch after this table). Learn more here. |
| Multiple configuration groups | Groups can be created in the UI, and appliances assigned to them. | In Kubernetes, create a namespace matching the group name and install a Worker Helm chart in that namespace (see the namespace sketch after this table). Learn more here. |
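Memory limits sketch: as noted in the Monit memory threshold row, memory limits are now expressed as standard Kubernetes resource requests and limits in the Helm chart values. The snippet below is a minimal sketch that assumes the Worker chart exposes a conventional resources block; the worker.resources path and the numbers are assumptions, so check the chart's values.yaml for the actual keys.

```yaml
# values.yaml sketch: resource requests/limits for worker pods.
# The "worker.resources" path and the sizes below are assumptions made
# for illustration; consult the chart's values.yaml for the real structure.
worker:
  resources:
    requests:
      cpu: "2"
      memory: 8Gi
    limits:
      cpu: "4"
      memory: 16Gi
```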
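Log forwarding sketch: each pod's Filebeat sidecar writes to the console by default. Forwarding to an aggregator uses Filebeat's standard outputs; the snippet below is a generic filebeat.yml output example with placeholder hosts and credentials, not the exact configuration surface exposed by the Spectra Detect charts.

```yaml
# Generic filebeat.yml output sketch: ship logs to an Elasticsearch/OpenSearch
# endpoint instead of the console. Hostname and credentials are placeholders.
output.elasticsearch:
  hosts: ["https://logs.example.internal:9200"]
  username: "filebeat_writer"
  password: "${FILEBEAT_PASSWORD}"
```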
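Config map sketch: large file configuration overrides and custom report types are supplied through config maps. The manifest below only illustrates the shape of such a config map; the name, key, and contents are hypothetical, and the linked instructions define the actual names and keys the workers read.

```yaml
# ConfigMap sketch for worker configuration overrides. The object name,
# the "overrides.cfg" key, and the entry inside it are hypothetical;
# see the linked instructions for the names the workers actually consume.
apiVersion: v1
kind: ConfigMap
metadata:
  name: spectra-detect-worker-overrides
  namespace: spectra-detect
data:
  overrides.cfg: |
    # hypothetical override entry, shown only to illustrate the format
    large_file_threshold_mb = 2048
```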
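Namespace sketch: the Kubernetes equivalent of an appliance configuration group is a namespace whose name matches the group, with a Worker Helm chart release installed into it. The group name and chart reference below are placeholders; use the release and chart names from your installation instructions.

```yaml
# Namespace for a hypothetical configuration group named "triage-group".
# After creating it, install a Worker Helm chart release into the namespace,
# e.g.:  helm install worker <worker-chart> --namespace triage-group
apiVersion: v1
kind: Namespace
metadata:
  name: triage-group
```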