Version: Spectra Detect 5.5.1

Kubernetes Requirements

Before proceeding with the installation of the Spectra Detect platform, ensure your environment meets the following prerequisites.

Kubernetes Cluster

  • Kubernetes version ≥ 1.31.0 and < 1.33.0
  • Resources: adequate K8s resources to run the Spectra Detect platform as per the recommended specifications; see Recommended Spectra Detect Resource Settings for reference.
  • Nodes running cgroup v1 (see below)

Nodes running cgroup v1

Our container images require Kubernetes nodes to be running cgroup v1 in order for their processes to start. Running Detect containers on hosts with cgroup v2 enabled would require them to run in privileged mode due to technical limitations related to systemd. Unlike cgroup v1, where container runtimes can mount the cgroup filesystem in a read-only fashion, cgroup v2 requires a read-write mount of /sys/fs/cgroup for systemd to function properly. This is because systemd expects to manage its own cgroup subhierarchies and cannot start correctly without write access to the cgroup filesystem. Kubernetes does not currently provide a way to safely delegate cgroup write access to unprivileged containers, especially when systemd is PID 1.

In addition, systemd has other expectations that conflict with standard container security constraints, such as requiring PID 1 status, root privileges inside the container, writable tmpfs mounts (e.g., /run), and proper signal handling. While user namespaces and cgroup delegation could theoretically allow some of these requirements to be met in an unprivileged setup, these features are not yet fully supported or stable in EKS and container runtimes like containerd. As a result, privileged mode remains the only reliable way to run systemd-based workloads under cgroup v2 on Kubernetes today.
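A quick way to check which cgroup version a node is running is to inspect the filesystem type mounted at /sys/fs/cgroup:

# Check the cgroup version on a node
stat -fc %T /sys/fs/cgroup
# "cgroup2fs" means the node runs cgroup v2 (unified hierarchy);
# "tmpfs" means it runs cgroup v1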

Workarounds

Self-hosted Kubernetes environments

If you are running hosts with cgroup v2 and self-hosting the Kubernetes cluster, you can mitigate the issue by booting the nodes with the following GRUB kernel parameter:

systemd.unified_cgroup_hierarchy=0

This disables the unified cgroup hierarchy and switches the nodes back to cgroup v1.
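On RHEL-family distributions the parameter can be added persistently with grubby; this is a minimal sketch assuming grubby is available, and the exact mechanism varies by distribution:

# Add the parameter to all installed kernels (RHEL-family hosts)
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot

After the reboot, /sys/fs/cgroup should be mounted as tmpfs, indicating cgroup v1.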

AWS EKS

For a managed Amazon EKS cluster, Amazon Linux 2 (AL2) node images are required because they run cgroup v1; AL2023 images run cgroup v2.

Kubernetes versions ≥ 1.33.0 on AWS EKS are currently not supported because AWS is not releasing an EKS-optimized Amazon Linux 2 AMI for Kubernetes ≥ 1.33. https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions-standard.html

Booting AL2023 nodes into cgroup v1 mode is not supported or recommended by AWS. https://docs.aws.amazon.com/linux/al2023/ug/cgroupv2.html
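If you provision node groups with eksctl, the AMI family can be pinned to AL2 in the cluster configuration. The sketch below is illustrative only; the cluster name, region, node group name, and instance type are placeholders:

# eksctl ClusterConfig excerpt pinning managed nodes to the AL2 AMI family
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: detect-cluster    # placeholder cluster name
  region: us-east-1       # placeholder region
  version: "1.32"
managedNodeGroups:
  - name: detect-workers        # placeholder node group name
    amiFamily: AmazonLinux2     # AL2 nodes run cgroup v1
    instanceType: m5.2xlarge    # placeholder instance type
    desiredCapacity: 3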

At the time of writing, we are using Amazon Linux 2 as K8s hosts, with K8s version 1.32 and cgroup v1, and our containers are explicitly configured to avoid privileged mode. The AL2 / K8s 1.32 combination is supported until 23 Mar 2027 under extended support. We apply targeted securityContext settings to grant only the minimum capabilities required for each component:

Worker container:

securityContext:
  privileged: false
  capabilities:
    add:
      - SYS_NICE

Detect Manager container:

securityContext:
  privileged: false
  capabilities:
    add:
      - NET_ADMIN
      - SYS_NICE
      - IPC_LOCK

This reflects our security-first approach and adherence to the principle of least privilege in containerized environments.

Operators and Tools

KEDA (Autoscaling - optional)

For Detect to autoscale Workers, KEDA needs to be installed on the cluster. KEDA can be deployed following the official Deploying KEDA documentation. KEDA is not required to run Detect on K8s, but it is required to use the Worker autoscaling features.
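For illustration, a KEDA ScaledObject that scales a Worker Deployment on RabbitMQ queue depth might look like the sketch below. The Deployment name, queue name, and environment variable are placeholders, not values taken from the Detect Helm charts:

# Hypothetical ScaledObject scaling Workers on RabbitMQ queue length
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler          # placeholder name
spec:
  scaleTargetRef:
    name: detect-worker        # placeholder Deployment name
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: detect-tasks    # placeholder queue name
        mode: QueueLength
        value: "20"                # target messages per replica
        hostFromEnv: RABBITMQ_URL  # env var holding the AMQP connection string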

RabbitMQ Broker

Spectra Detect Helm charts support using an external RabbitMQ broker, such as Amazon MQ, as well as deploying and using a RabbitMQ cluster as part of a Detect deployment installed in the same namespace. Users can choose which option they want to use.

External RabbitMQ Broker

If a user wants to use an external/existing RabbitMQ Broker, it needs to be set up as per the broker installation guides.

Amazon MQ Example: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/getting-started-rabbitmq.html
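Whichever broker is used, clients connect to RabbitMQ over a standard AMQP(S) URI; all values in the example below are placeholders:

# Generic AMQP(S) connection URI format
amqps://detect_user:password@broker.example.com:5671/detect_vhost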

RabbitMQ Operator

If a user doesn't have an existing RabbitMQ broker or if they would prefer using a cloud native broker that gets deployed and managed by Spectra Detect Helm charts, a RabbitMQ Operator needs to be installed on the K8s cluster.

# RabbitMQ Operator
kubectl apply --wait -f \
https://github.com/rabbitmq/cluster-operator/releases/download/v2.6.0/cluster-operator.yml
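Before proceeding, you can confirm the operator is running; the official manifest installs it into the rabbitmq-system namespace:

# Verify the RabbitMQ Cluster Operator deployment is up
kubectl -n rabbitmq-system rollout status deployment rabbitmq-cluster-operator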

PostgreSQL Server

Spectra Detect Helm charts support using an external PostgreSQL cluster, such as Amazon RDS, as well as deploying and using a PostgreSQL cluster as part of a Detect deployment installed in the same namespace. Users can choose which option they want to use.

External PostgreSQL Server

If a user wants to use an external/existing PostgreSQL server, it needs to be set up as per the PostgreSQL server guide.

Amazon RDS Example: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html
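Before pointing Detect at an external server, it can be useful to confirm the database is reachable from inside the cluster. One way is a throwaway psql pod; the endpoint and user below are placeholders, and psql will prompt for the password:

# Test connectivity to an external PostgreSQL server from inside the cluster
kubectl run psql-test --rm -it --image=postgres:16 --restart=Never -- \
  psql "host=mydb.example.rds.amazonaws.com port=5432 user=detect dbname=postgres sslmode=require"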

CloudNativePG Operator

If a user doesn't have an existing PostgreSQL Server or if they would prefer using a cloud native solution that gets deployed and managed by Spectra Detect Helm charts, a CloudNativePG Operator needs to be installed on the K8s cluster.

# PostgreSQL Operator - CloudNativePG (CNPG)
kubectl apply --wait -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.21/releases/cnpg-1.21.1.yaml
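As with the RabbitMQ Operator, you can confirm the CNPG operator is running before proceeding; the release manifest installs it into the cnpg-system namespace:

# Verify the CloudNativePG operator deployment is up
kubectl -n cnpg-system rollout status deployment cnpg-controller-manager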

Remote Storage

To use the Worker autoscaling feature and run multiple Workers at the same time, shared remote storage that all Workers can write to needs to be set up.

Amazon EFS Remote Storage

If you are running Kubernetes on Amazon EKS, you can use Amazon EFS storage for the shared Worker storage. You will need to:

  1. Install the Amazon EFS CSI Driver on the cluster to use EFS. https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
  2. Create an EFS file system via the Amazon EFS console or the command line.
  3. Set the throughput mode to Elastic or Provisioned for higher throughput levels.
  4. Add mount targets for the nodes' subnets.
  5. Create a storage class for Amazon EFS (see the example after this list).
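A minimal StorageClass for dynamic EFS provisioning could look like the sketch below; the file system ID is a placeholder that must be replaced with the ID of the file system created in step 2:

# Hypothetical StorageClass for dynamic EFS provisioning
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # one EFS access point per volume
  fileSystemId: fs-0123456789abcdef0    # placeholder; use your file system ID
  directoryPerms: "700"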

Ingress Controller

In order to access Detect endpoints from outside the cluster, and for Worker pods to be able to connect to the C1000, an Ingress Controller such as the AWS Load Balancer Controller (ALB) or the NGINX Ingress Controller must be configured on the K8s cluster. Follow the official installation guides for the controllers.
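As an illustration only, an Ingress exposing a Detect endpoint through the NGINX controller could look like the sketch below; the host, Service name, and port are placeholders rather than values from the Detect charts:

# Hypothetical Ingress exposing a Detect endpoint via the NGINX controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: detect-ingress          # placeholder name
spec:
  ingressClassName: nginx
  rules:
    - host: detect.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: detect-manager  # placeholder Service name
                port:
                  number: 443         # placeholder port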

Helm

The installation must be performed from a machine with Helm installed and configured with access to the target cluster.
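A quick sanity check before starting the installation:

# Verify Helm and cluster access from the install machine
helm version
kubectl config current-context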