Spectra Detect AWS EKS Config Reference — Secrets and ConfigMap Values
Detect Platform General Values
| Key | Type | Default | Description |
|---|---|---|---|
| connectorS3.enabled | bool | false | If enabled, connector-s3 chart will be used and S3 connector pods will be deployed. |
| global.applyRestrictedPolicy | bool | false | Applies the required parts of the restricted policy to all sub-charts. When enabled, SDM must be disabled, since the restricted policy is currently supported only for the Worker. |
| global.clusterDomainName | string | "cluster.local" | Cluster Domain Name |
| global.postgresCustomSecretName | string | nil | Postgres custom secret name. If not set, default secret name will be used. |
| global.rabbitmqAdminCustomSecretName | string | nil | RabbitMQ custom admin secret name. If not set, default secret name will be used. |
| global.rabbitmqCustomSecretName | string | nil | RabbitMQ custom secret name. If not set, default secret name will be used. |
| global.umbrella | bool | true | Notifies dependency charts that they are being used from the umbrella chart. Must always be set to true. |
| global.useReloader | bool | true | Whether to enable Reloader annotations. |
| logging.enabled | bool | false | If enabled, logging stack (Alloy + Loki + Grafana) will be deployed for log collection and visualization. |
| prometheus.enabled | bool | true | Must be enabled when Prometheus is used. Creates a secret containing the Prometheus configuration so that other charts can connect to it. |
| registry.authSecretName | string | "rl-registry-key" | The name of a Kubernetes secret resource used for pulling images from a Docker registry. |
| registry.authSecretPassword | string | nil | The password used for authenticating with the registry. |
| registry.authSecretUsername | string | nil | The username used for authenticating with the registry. |
| registry.createRegistrySecret | bool | true | If enabled, a Kubernetes secret for pulling images will be created. If disabled, the secret must be created manually in the namespace. |
| registry.imageRegistry | string | "registry.reversinglabs.com" | The image registry address. |
| reloader.enabled | bool | true | If enabled, reloader will be deployed. Reloader ensures that the latest configuration is applied at all times. |
| sdm.enabled | bool | false | If enabled, Spectra Detect Manager chart will be deployed. |
| worker.enabled | bool | true | If enabled, worker chart will be used and Worker service pods will be deployed. |
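The general values above are typically supplied through a Helm values override file. The following is an illustrative sketch only; file and release names are placeholders, and registry credentials are shown inline purely for readability (prefer a pre-created pull secret in practice).

```yaml
# values-override.yaml -- illustrative sketch of the general values above.
global:
  umbrella: true                 # must always remain true
  clusterDomainName: "cluster.local"
registry:
  imageRegistry: "registry.reversinglabs.com"
  createRegistrySecret: true
  authSecretName: "rl-registry-key"
  authSecretUsername: "<registry-username>"   # placeholder
  authSecretPassword: "<registry-password>"   # placeholder
worker:
  enabled: true
connectorS3:
  enabled: false
sdm:
  enabled: false                 # must stay disabled if global.applyRestrictedPolicy is true
```

Such a file would then be passed to Helm, e.g. `helm upgrade --install <release_name> <chart> -f values-override.yaml`.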
Processing
Worker Secrets
| Secret (when custom secret name is used) | Secret (default secret name) | Type | Description | Used in deployments (Pods) |
|---|---|---|---|---|
| <secrets.api.customSecretName> | <release_name>-secret-worker-api-token | Optional | Token secret containing the token used to protect all endpoints with the /api/ prefix, e.g. file upload. | Auth |
| <secrets.apiTask.customSecretName> | <release_name>-secret-worker-api-task-token | Optional | Token secret containing the token used to protect the /api/tiscale/v1/task endpoints. If left empty, these endpoints are protected by <release_name>-secret-worker-api-token. | Auth |
| <secrets.cloud.customSecretName> | <release_name>-secret-worker-cloud | Required when related feature is enabled | Basic authentication secret containing the username and password for Spectra Intelligence authentication. Required when Spectra Intelligence is enabled (configuration.cloud.enabled). | Processor, Retry Processor, Preprocessor, Postprocessor, Receiver |
| <secrets.cloudProxy.customSecretName> | <release_name>-secret-worker-cloud-proxy | Required when related feature is enabled | Basic authentication secret containing the username and password for Spectra Intelligence Proxy authentication. Required when Spectra Intelligence Proxy is enabled (configuration.cloud.proxy.enabled). | Processor, Retry Processor, Preprocessor, Postprocessor, Receiver, Cloud Cache |
| <secrets.aws.customSecretName> | <release_name>-secret-worker-aws | Required when related feature is enabled | Secret containing the AWS access key ID and secret access key for AWS authentication. Required if any type of S3 storage (File, SNS, Report, Unpacked) is enabled (configuration.s3.enabled, configuration.sns.enabled, configuration.reportS3.enabled, configuration.unpackedS3.enabled). | Postprocessor |
| <secrets.azure.customSecretName> | <release_name>-secret-worker-azure | Required when related feature is enabled | Basic authentication secret containing the username and password for Azure authentication. Required if any type of ADL storage (File, Report, Unpacked) is enabled (configuration.adl.enabled, configuration.reportAdl.enabled, configuration.unpackedAdl.enabled). | Postprocessor |
| <secrets.msGraph.customSecretName> | <release_name>-secret-worker-ms-graph | Required when related feature is enabled | Basic authentication secret containing the username and password for Microsoft Cloud Storage authentication. Required if any type of Microsoft Cloud storage (File, Report, Unpacked) is enabled (configuration.msGraph.enabled, configuration.reportMsGraph.enabled, configuration.unpackedMsGraph.enabled). | Postprocessor |
| <secrets.unpackedS3.customSecretName> | <release_name>-secret-worker-unpacked-s3 | Optional | Secret containing only a password, used for encryption of the archive file. Relevant only when the configuration.unpackedS3.archiveUnpacked option is set to true. | Postprocessor |
| <secrets.reportS3.customSecretName> | <release_name>-secret-worker-report-s3 | Optional | Secret containing only a password, used for encryption of the archive file. Relevant only when the configuration.reportS3.archiveSplitReport option is set to true. | Postprocessor |
| <secrets.unpackedAdl.customSecretName> | <release_name>-secret-worker-unpacked-adl | Optional | Secret containing only a password, used for encryption of the archive file. Relevant only when the configuration.unpackedAdl.archiveUnpacked option is set to true. | Postprocessor |
| <secrets.reportAdl.customSecretName> | <release_name>-secret-worker-report-adl | Optional | Secret containing only a password, used for encryption of the archive file. Relevant only when the configuration.reportAdl.archiveSplitReport option is set to true. | Postprocessor |
| <secrets.unpackedMsGraph.customSecretName> | <release_name>-secret-worker-unpacked-ms-graph | Optional | Secret containing only a password, used for encryption of the archive file. Relevant only when the configuration.unpackedMsGraph.archiveUnpacked option is set to true. | Postprocessor |
| <secrets.reportMsGraph.customSecretName> | <release_name>-secret-worker-report-ms-graph | Optional | Secret containing only a password, used for encryption of the archive file. Relevant only when the configuration.reportMsGraph.archiveSplitReport option is set to true. | Postprocessor |
| <secrets.splunk.customSecretName> | <release_name>-secret-worker-splunk | Optional | Token secret containing the token for Splunk authentication. Relevant only if Splunk Integration is enabled (configuration.splunk.enabled). | Postprocessor |
| <secrets.archive.customSecretName> | <release_name>-secret-worker-archive-zip | Optional | Secret containing only a password. Relevant only when the configuration.archive.fileWrapper value is set to "zip" or "mzip". | Postprocessor |
| <secrets.spectraAnalyzeIntegration.customSecretName> | <release_name>-secret-worker-spectra-analyze-integration-token | Required when related feature is enabled | Token secret containing the token used for authentication on Spectra Analyze when Spectra Analyze Integration is enabled (configuration.spectraAnalyzeIntegration.enabled). This token should be created in Spectra Analyze. | Postprocessor |
| <secrets.authCreds.customSecretName> | <release_name>-secret-worker-auth-creds | Optional | Secret of type kubernetes.io/dockerconfigjson containing authentication credentials for multiple registries. Required when using the API for container image upload. | Receiver |
| <secrets.caCerts.customSecretName> | <release_name>-secret-worker-ca-certs | Optional | Opaque secret containing a single key, ca_bundle, whose value is a bundle of certificates. Needed when certificates are required, for example when using the API for container image upload and the registry hosting the image requires them. | Receiver |
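A custom secret referenced through a `customSecretName` value is created and managed outside the chart. The manifest below is a sketch for the Spectra Intelligence case; the secret name is a placeholder, and the `username`/`password` data key names are assumptions — check the chart templates for the exact keys the Worker expects.

```yaml
# Illustrative only: an externally managed secret for Spectra Intelligence
# credentials, referenced via secrets.cloud.customSecretName in the values file.
apiVersion: v1
kind: Secret
metadata:
  name: my-worker-cloud-credentials   # placeholder name
type: Opaque
stringData:                           # stringData lets you write plain text;
  username: "<spectra-intelligence-username>"   # Kubernetes base64-encodes it
  password: "<spectra-intelligence-password>"
```

The values file would then set `secrets.cloud.customSecretName: "my-worker-cloud-credentials"` instead of supplying credentials to the chart.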
Worker Secret Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| secrets.api | object | - | Secret configuration values for setting the authorization token that is used to protect all endpoints with the /api/ prefix. |
| secrets.api.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the token parameter. WARNING: Use this for convenience/testing only. |
| secrets.api.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.api.token | string | "" | API token secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.apiTask | object | - | Secret configuration values for setting the authorization token that is used to protect /api/tiscale/v1/task endpoints. |
| secrets.apiTask.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the token parameter. WARNING: Use this for convenience/testing only. |
| secrets.apiTask.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.apiTask.token | string | "" | API task token secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.archive | object | - | Secret configuration values for setting the password used for encryption of the zip file. Relevant only when the configuration.archive.fileWrapper value is set to 'zip' or 'mzip'. |
| secrets.archive.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the password parameter. WARNING: Use this for convenience/testing only. |
| secrets.archive.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.archive.password | string | "" | Archive password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.authCreds | object | - | Secret configuration values for setting the dockerconfig secret which will contain registry authentication credentials used for upload of the container images for analysis. |
| secrets.authCreds.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret of type dockerconfig using the credentials list of username, password and registry parameters. WARNING: Use this for convenience/testing only. |
| secrets.authCreds.credentials | list | [] | List of credentials from which the dockerconfig secret can be created. Only used if 'createUserSecret' is set to 'true'. Structure of the list items can be found here: Authentication Credentials. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.authCreds.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.aws | object | - | Secret configuration values for setting the credentials for any S3 type storage. |
| secrets.aws.awsS3AccessKeyId | string | "" | AWS S3 access key ID. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.aws.awsS3SecretAccessKey | string | "" | AWS S3 secret access key. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.aws.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the awsS3AccessKeyId and awsS3SecretAccessKey parameters. WARNING: Use this for convenience/testing only. |
| secrets.aws.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.azure | object | - | Secret configuration values for setting the Azure credentials. |
| secrets.azure.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.azure.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.azure.password | string | "" | Azure password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.azure.username | string | "" | Azure username secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.caCerts | object | - | Secret configuration values for setting the secret which will contain bundled certificates. |
| secrets.caCerts.certificates | list | [] | List of certificates which will be bundled. Each list item must contain the complete PEM-encoded file content of the certificate, including headers, not its file path. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive data in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.caCerts.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret which contains bundled certificates that are given in the certificates list. WARNING: Use this for convenience/testing only. |
| secrets.caCerts.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.cloud | object | - | Secret configuration values for setting the Spectra Intelligence credentials. |
| secrets.cloud.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.cloud.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.cloud.password | string | "" | Cloud password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.cloud.username | string | "" | Cloud username secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.cloudProxy | object | - | Secret configuration values for setting the Spectra Intelligence credentials when proxy is used. |
| secrets.cloudProxy.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.cloudProxy.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.cloudProxy.password | string | "" | Cloud proxy password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.cloudProxy.username | string | "" | Cloud proxy username secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.msGraph | object | - | Secret configuration values for setting the credentials for Microsoft Cloud Storage. |
| secrets.msGraph.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.msGraph.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.msGraph.password | string | "" | MS Graph password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.msGraph.username | string | "" | MS Graph username secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.reportAdl | object | - | Secret configuration values for setting the password used for encryption of the archive file. Relevant only when the configuration.reportAdl.archiveSplitReport option is set to true. |
| secrets.reportAdl.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the password parameter. WARNING: Use this for convenience/testing only. |
| secrets.reportAdl.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.reportAdl.password | string | "" | Report ADL password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.reportMsGraph | object | - | Secret configuration values for setting the password used for encryption of the archive file. Relevant only when the configuration.reportMsGraph.archiveSplitReport option is set to true. |
| secrets.reportMsGraph.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the password parameter. WARNING: Use this for convenience/testing only. |
| secrets.reportMsGraph.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.reportMsGraph.password | string | "" | Report MS Graph password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.reportS3 | object | - | Secret configuration values for setting the password used for encryption of the archive file. Relevant only when the configuration.reportS3.archiveSplitReport option is set to true. |
| secrets.reportS3.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the password parameter. WARNING: Use this for convenience/testing only. |
| secrets.reportS3.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.reportS3.password | string | "" | Report S3 password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.spectraAnalyzeIntegration | object | - | Secret configuration values for setting the token used for authentication on Spectra Analyze when Spectra Analyze Integration is enabled (configuration.spectraAnalyzeIntegration.enabled). |
| secrets.spectraAnalyzeIntegration.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the token parameter. WARNING: Use this for convenience/testing only. |
| secrets.spectraAnalyzeIntegration.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.spectraAnalyzeIntegration.token | string | "" | Spectra Analyze Integration token secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.splunk | object | - | Secret configuration values for setting the token for Splunk authentication. |
| secrets.splunk.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the token parameter. WARNING: Use this for convenience/testing only. |
| secrets.splunk.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.splunk.token | string | "" | Splunk token secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.unpackedAdl | object | - | Secret configuration values for setting the password used for encryption of the archive file. Relevant only when the configuration.unpackedAdl.archiveUnpacked option is set to true. |
| secrets.unpackedAdl.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the password parameter. WARNING: Use this for convenience/testing only. |
| secrets.unpackedAdl.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.unpackedAdl.password | string | "" | Unpacked ADL password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.unpackedMsGraph | object | - | Secret configuration values for setting the password used for encryption of the archive file. Relevant only when the configuration.unpackedMsGraph.archiveUnpacked option is set to true. |
| secrets.unpackedMsGraph.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the password parameter. WARNING: Use this for convenience/testing only. |
| secrets.unpackedMsGraph.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.unpackedMsGraph.password | string | "" | Unpacked MS Graph password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.unpackedS3 | object | - | Secret configuration values for setting the password used for encryption of the archive file. Relevant only when the configuration.unpackedS3.archiveUnpacked option is set to true. |
| secrets.unpackedS3.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the password parameter. WARNING: Use this for convenience/testing only. |
| secrets.unpackedS3.customSecretName | string | nil | Custom secret name. If not set, default secret name will be used. |
| secrets.unpackedS3.password | string | "" | Unpacked S3 password secret. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
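Each `secrets.*` object above follows the same pattern: either reference an externally managed secret, or (for testing only) let the chart create one. The sketch below illustrates both options for the Splunk token; the secret name and token value are placeholders.

```yaml
# Two alternative ways to supply the Splunk token (illustrative sketch).
secrets:
  splunk:
    # Option A -- reference a secret you create and manage yourself (recommended):
    customSecretName: "my-splunk-token-secret"   # placeholder name
    # Option B -- let the chart create the secret (convenience/testing only;
    # never commit real tokens to Git):
    # createUserSecret: true
    # token: "<splunk-token>"
```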
Worker Secret Configuration Values - Authentication Credentials
| Key | Type | Default | Description |
|---|---|---|---|
| credentials[n] | list | - | List of credentials to be added to the dockerconfig secret. All values need to be set for the secret to be created. |
| credentials[n].password | string | "" | Registry password. WARNING: Use this for convenience/testing only. Do not store sensitive data in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| credentials[n].registry | string | "" | Registry name. |
| credentials[n].username | string | "" | Registry username. WARNING: Use this for convenience/testing only. Do not store sensitive data in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
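The credentials list above feeds the dockerconfig secret created by `secrets.authCreds`. The following sketch shows its shape; registry address and credentials are placeholders, and as the table notes, all three fields must be set for each entry or the secret will not be created.

```yaml
# Illustrative sketch of the authCreds credentials list.
secrets:
  authCreds:
    createUserSecret: true   # convenience/testing only; prefer an
                             # externally managed dockerconfigjson secret
    credentials:
      - registry: "registry.example.com"      # placeholder
        username: "<registry-username>"       # placeholder
        password: "<registry-password>"       # placeholder
```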
Worker Application Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| appliance.configMode | string | "STANDARD" | Configuration mode of the appliance. Allowed values: CONFIGMAP (configuration is provided via ConfigMap) and STANDARD (configuration is provided through the UI). |
| configuration.a1000 | object | - | Integration with Spectra Analyze appliance. |
| configuration.a1000.host | string | "" | The hostname or IP address of the A1000 appliance associated with the Worker. |
| configuration.adl | object | - | Settings for storing files in an Azure Data Lake container. |
| configuration.adl.container | string | "" | The name of the Azure Data Lake container that will be used for storage. Required when storing files in ADL is enabled. |
| configuration.adl.enabled | bool | false | Enable or disable the storage of processed files. |
| configuration.adl.folder | string | "" | Specify the name of the folder on the container where files will be stored. |
| configuration.apiServer | object | - | Configures a custom Worker IP address which is included in the response when uploading a file to the Worker for processing. |
| configuration.apiServer.host | string | "" | Configures the hostname or IP address of the Worker. Only necessary if the default IP address or network interface is incorrect. |
| configuration.archive | object | - | After processing, files can be zipped before external storage. Available only for S3 and Azure. |
| configuration.archive.fileWrapper | string | "" | Specify whether the files should be compressed as a ZIP archive before uploading to external storage. Supported values are: zip, mzip. If this parameter is left blank, files will be uploaded in their original format. |
| configuration.archive.zipCompress | int | 0 | ZIP compression level to use when storing files in a ZIP file. Allowed range: 0 (no compression) to 9 (maximum compression). |
| configuration.archive.zipMaxfiles | int | 0 | Maximum allowed number of files that can be stored in one ZIP archive. Allowed range: 1-65535. 0 represents unlimited. |
| configuration.authentication | object | - | Authentication settings for Detect Worker. |
| configuration.authentication.enabled | bool | false | Enable/disable authentication on Detect Worker ingress APIs. |
| configuration.authentication.externalAuthUrl | string | "" | If set, an external/custom authentication service will be used for authentication; otherwise, a simple Token service is deployed which protects paths with the tokens defined in the secrets. |
| configuration.aws | object | - | Configuration of integration with AWS or AWS-compatible storage to be used for SNS, and for uploading files and analysis reports to S3. |
| configuration.aws.caPath | string | "" | Path on the file system pointing to the certificate of a custom (self-hosted) S3 server. |
| configuration.aws.endpointUrl | string | "" | Only required in non-AWS setups to store files on an S3-compatible server. When this parameter is left blank, the default is https://aws.amazonaws.com. Supported pattern(s): https?://.+ |
| configuration.aws.maxReattempts | int | 5 | Maximum number of retries when saving a report to an S3-compatible server. |
| configuration.aws.payloadSigningEnabled | bool | false | Specifies whether to include a SHA-256 checksum with Amazon Signature Version 4 payloads. |
| configuration.aws.region | string | "us-east-1" | Specify the correct AWS geographical region where the S3 bucket is located. Required parameter, ignored for non-AWS setups. |
| configuration.aws.serverSideEncryption | string | "" | Specify the encryption algorithm used on the target S3 bucket (e.g. aws:kms or AES256). |
| configuration.aws.sslVerify | bool | false | Enable/disable SSL verification. |
| configuration.awsRole | object | - | Configures the AWS IAM roles used to access S3 buckets without sharing secret keys. The IAM role which will be used to obtain temporary tokens has to be created in the AWS console. |
| configuration.awsRole.enableArn | bool | false | Enables or disables this entire feature. |
| configuration.awsRole.externalRoleId | string | "" | The external ID of the role that will be assumed. This can be any string. Usually, it’s an ID provided by the entity which uses (but doesn’t own) an S3 bucket. The owner of that bucket takes that external ID and builds an ARN with it. |
| configuration.awsRole.refreshBuffer | int | 5 | Number of seconds before the token timeout is reached at which a new ARN token is fetched. |
| configuration.awsRole.roleArn | string | "" | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which then allows it to save to S3 buckets without a secret access key. |
| configuration.awsRole.roleSessionName | string | "" | Name of the session visible in AWS logs. Can be any string. |
| configuration.awsRole.tokenDuration | int | 900 | Duration, in seconds, for which the authentication token is valid before it expires and is refreshed. The minimum value is 900 seconds. |
| configuration.azure | object | - | Configures integration with Azure Data Lake Gen2 for the purpose of storing processed files in Azure Data Lake containers. |
| configuration.azure.endpointSuffix | string | "core.windows.net" | Specify the suffix for the address of your Azure Data Lake container. |
| configuration.callback | object | - | Settings for automatically sending file analysis reports via POST request. |
| configuration.callback.advancedFilterEnabled | bool | false | Enable/disable the advanced filter. |
| configuration.callback.advancedFilterName | string | "" | Name of the advanced filter. |
| configuration.callback.caPath | string | "" | If the url parameter is configured to use HTTPS, this parameter can be used to set the path to the certificate file. This automatically enables SSL verification. If this parameter is left blank or not configured, SSL verification will be disabled, and the certificate will not be validated. |
| configuration.callback.enabled | bool | false | Enable/disable connection. |
| configuration.callback.maliciousOnly | bool | false | When set, the report will only contain malicious and suspicious children. |
| configuration.callback.reportType | string | "medium" | Specifies which report_type is returned. By default, or when empty, only the medium (summary) report is provided in the callback response. Set to extended_small, small, medium or large to view results of filtering the full report. |
| configuration.callback.splitReport | bool | false | By default, reports contain information on parent files and all extracted children files. If set to true, reports for extracted files will be separated from the full report and saved as standalone files. If any user-defined data was appended to the analyzed parent file, it will be included in every split child report. |
| configuration.callback.sslVerify | bool | false | Enable/disable SSL verification. |
| configuration.callback.timeout | int | 5 | Specify the number of seconds to wait before the POST request times out. In case of failure, the Worker will retry the request up to six times, increasing the waiting time between requests after the second retry has failed. With the default timeout set, the total possible waiting time before a request finally fails is 159 seconds. |
| configuration.callback.topContainerOnly | bool | false | If set to true, the reports will only contain metadata for the top container. Reports for unpacked files will not be generated. |
| configuration.callback.url | string | "" | Specify the full URL that will be used to send the callback POST request. Both HTTP and HTTPS are supported. If this parameter is left blank, reports will not be sent, and the callback feature will be disabled. Supported pattern(s): https?://.+ |
| configuration.callback.view | string | "" | Specifies whether a custom report view should be applied to the report. |
| configuration.cef | object | - | Configures Common Event Format (CEF) settings. CEF is an extensible, text-based logging and auditing format that uses a standard header and a variable extension, formatted as key-value pairs. |
| configuration.cef.cefMsgHashType | string | "md5" | Specify the type of hash that will be included in CEF messages. Supported values are: md5, sha1, sha256. |
| configuration.cef.enableCefMsg | bool | false | Enable or disable sending CEF messages to syslog. Defaults to false to avoid flooding. |
| configuration.classify | object | - | Configure settings for Worker analysis and classification of files using the Spectra Core static analysis engine. |
| configuration.classify.certificates | bool | true | Enable checking whether the file's certificate passes certificate validation, in addition to checking certificate whitelists and blacklists. |
| configuration.classify.documents | bool | true | Enable document format threat detection. |
| configuration.classify.emails | bool | true | Enable detection of phishing and other email threats. |
| configuration.classify.hyperlinks | bool | true | Enable embedded hyperlinks detection. |
| configuration.classify.ignoreAdware | bool | false | When set to true, classification results that match adware will be ignored. |
| configuration.classify.ignoreHacktool | bool | false | When set to true, classification results that match hacktool will be ignored. |
| configuration.classify.ignorePacker | bool | false | When set to true, classification results that match packer will be ignored. |
| configuration.classify.ignoreProtestware | bool | false | When set to true, classification results that match protestware will be ignored. |
| configuration.classify.ignoreRiskware | bool | false | When set to true, classification results that match riskware will be ignored. |
| configuration.classify.ignoreSpam | bool | false | When set to true, classification results that match spam will be ignored. |
| configuration.classify.ignoreSpyware | bool | false | When set to true, classification results that match spyware will be ignored. |
| configuration.classify.images | bool | true | When true, the heuristic image classifier for supported file formats is used. |
| configuration.classify.modelsLinuxGeneral | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsScriptsAutoit | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsScriptsExcel | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsScriptsPowershell | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsScriptsPython | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsScriptsVisualbasic | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsBackdoor | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsDownloader | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsGeneral | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsInfostealer | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsKeylogger | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsRansomware | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsRiskware | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.modelsWindowsWorm | string | "malicious" | This setting controls how the ML Model affects the classification. Possible values: malicious, suspicious, ignored, disabled. |
| configuration.classify.pecoff | bool | true | When true, the heuristic Windows executable classifier for supported PE file formats is used. |
| configuration.cleanup | object | - | Configures how often the Worker file system is cleaned up. |
| configuration.cleanup.fileAgeLimit | int | 1440 | Time before an unprocessed file present on the appliance is deleted, in minutes. |
| configuration.cleanup.taskAgeLimit | int | 90 | Time before analysis reports and records of processed tasks are deleted, in minutes. |
| configuration.cleanup.taskUnprocessedLimit | int | 1440 | Time before an incomplete processing task is canceled, in minutes. |
| configuration.cloud | object | - | Configures integration with the Spectra Intelligence service or a T1000 instance to receive additional classification information. |
| configuration.cloud.enabled | bool | false | Enable/disable connection. |
| configuration.cloud.proxy | object | - | Configure an optional proxy connection. |
| configuration.cloud.proxy.enabled | bool | false | Enable/disable proxy server. |
| configuration.cloud.proxy.port | int | 8080 | Specify the TCP port number if using an HTTP proxy. Allowed range(s): 1 … 65535. Required only if proxy is used. |
| configuration.cloud.proxy.server | string | "" | Proxy hostname or IP address for routing requests from the appliance to Spectra Intelligence. Required only if proxy is used. |
| configuration.cloud.server | string | "https://appliance-api.reversinglabs.com" | Hostname or IP address of the Spectra Intelligence server. Required if Spectra Intelligence integration is enabled. Format: https://<ip_or_hostname>. |
| configuration.cloud.timeout | int | 6 | Specify the number of seconds to wait when connecting to Spectra Intelligence before terminating the connection request. |
| configuration.cloudAutomation | object | - | Configures the Worker to automatically submit files to Spectra Intelligence for antivirus scanning, in addition to local static analysis and remote reputation lookup based on previous antivirus scans. |
| configuration.cloudAutomation.dataChangeSubscribe | bool | false | Subscribe to the Spectra Intelligence data change notification mechanism. |
| configuration.cloudAutomation.spexUpload | object | - | Scanning settings. |
| configuration.cloudAutomation.spexUpload.enabled | bool | false | Enable/disable this feature. |
| configuration.cloudAutomation.spexUpload.rescanEnabled | bool | true | Enable/disable rescan of files upon submission based on the configured interval to include the latest AV results in the reports. |
| configuration.cloudAutomation.spexUpload.rescanThresholdInDays | int | 3 | Set the interval in days for triggering an AV rescan. If the last scan is older than the specified value, a rescan will be initiated. A value of 0 means files will be rescanned with each submission. |
| configuration.cloudAutomation.spexUpload.scanUnpackedFiles | bool | false | Enable/disable sending unpacked files to Deep Cloud Analysis for scanning. Consumes roughly double the processing resources compared to standard analysis. |
| configuration.cloudAutomation.waitForAvScansTimeoutInMinutes | int | 240 | Sets the maximum wait time (in minutes) for Deep Cloud Analysis to complete. If the timeout is reached, the report will be generated without the latest AV results. |
| configuration.cloudAutomation.waitForAvScansToFinish | bool | false | If set to true, delays report generation until Deep Cloud Analysis completes, ensuring the latest AV results are included. |
| configuration.cloudCache.cacheMaxSizePercentage | float | 6.25 | Maximum cache size expressed as a percentage of the total allocated RAM on the Worker. Allowed range: 5 - 15. |
| configuration.cloudCache.cleanupWindow | int | 10 | How often to run the cache cleanup process, in minutes. It is advisable for this value to be lower, or at least equal to the TTL value. Allowed range: 5 - 60. |
| configuration.cloudCache.enabled | bool | true | Enable or disable the caching feature. |
| configuration.cloudCache.maxIdleUpstreamConnections | int | 50 | The maximum number of idle upstream connections. Allowed range: 10 - 50. |
| configuration.cloudCache.ttl | int | 240 | Time to live for cached records, in minutes. Allowed range: 1 - 7200. |
| configuration.general.maxUploadSizeMb | int | 2048 | The largest file (in MB) that Worker will accept and start processing. Ignored if Spectra Intelligence is connected and file upload limits are set there. |
| configuration.general.tsWorkerCheckThresholdMins | int | 720 | How often, in minutes, the processing service is checked for timeouts. If any issues are detected, the process will be restarted. |
| configuration.general.uploadSizeLimitEnabled | bool | false | Whether or not the upload size filter is active. Ignored if Spectra Intelligence is connected and file upload limits are set there. |
| configuration.hashes | object | - | Spectra Core calculates file hashes during analysis and includes them in the analysis report. The following options configure which additional hash types should be calculated and included in the Worker report. SHA1 and SHA256 are always included and therefore aren’t configurable. Selecting additional hash types (especially SHA384 and SHA512) may slow report generation. |
| configuration.hashes.enableCrc32 | bool | false | Include CRC32 hashes in reports. |
| configuration.hashes.enableMd5 | bool | true | Include MD5 hashes in reports. |
| configuration.hashes.enableSha384 | bool | false | Include SHA384 hashes in reports. |
| configuration.hashes.enableSha512 | bool | false | Include SHA512 hashes in reports. |
| configuration.hashes.enableSsdeep | bool | false | Include SSDEEP hashes in reports. |
| configuration.hashes.enableTlsh | bool | false | Include TLSH hashes in reports. |
| configuration.health | object | - | Configures system health checks. |
| configuration.health.diskHigh | int | 95 | Threshold for high disk usage. |
| configuration.health.diskPath | string | "/scratch" | Path monitored by the disk status check. An empty string disables the check. |
| configuration.health.enableDiskUsageCheck | bool | false | Enable/disable disk usage check. |
| configuration.health.enabled | bool | true | Enable/disable system health check. |
| configuration.health.queueHigh | int | 2000 | Threshold for the number of files allowed in the queue. |
| configuration.health.runtime | int | 10 | Specifies the number of seconds over which disk usage is sampled to assess the disk status. |
| configuration.logging | object | - | Configures the severity above which events will be logged or sent to a remote syslog server. Severity can be: INFO, WARNING, or ERROR. |
| configuration.logging.tiscaleLogLevel | string | "INFO" | Events below this level will not be presented on standard output. |
| configuration.msGraph | object | - | Configures the Microsoft Cloud Storage file integration. |
| configuration.msGraph.enabled | bool | false | Turns the Microsoft Cloud Storage file integration on or off. |
| configuration.msGraph.folder | string | "" | Folder where samples will be stored in Microsoft Cloud Storage. |
| configuration.msGraphGeneral | object | - | Configures the general options for the Microsoft Cloud Storage integration. |
| configuration.msGraphGeneral.customDomain | string | "" | Application’s custom domain configured in the Azure portal. |
| configuration.msGraphGeneral.siteHostname | string | "" | Used only if storageType is set to SharePoint. This is the SharePoint hostname. |
| configuration.msGraphGeneral.siteRelativePath | string | "" | SharePoint Online site relative path. Only used when storageType is set to SharePoint. |
| configuration.msGraphGeneral.storageType | string | "onedrive" | Specifies the storage type. Supported values are: onedrive or sharepoint. |
| configuration.msGraphGeneral.username | string | "" | Used only if storageType is set to OneDrive. Specifies which user’s drive will be used. |
| configuration.processing | object | - | Configure the Worker file processing capabilities to improve performance and load balancing. |
| configuration.processing.cacheEnabled | bool | false | Enable/disable caching. When enabled, Spectra Core can skip reprocessing the same files (duplicates) if uploaded consecutively in a short period. |
| configuration.processing.cacheTimeToLive | int | 0 | If file processing caching is enabled, specify how long (in seconds) the analysis reports should be preserved in the cache before they expire. A value of 0 uses the default. Default: 600. Maximum: 86400. |
| configuration.processing.depth | int | 0 | Specifies how "deep" a file is unpacked. By default, when set to 0, Workers will unpack files recursively until no more files can be unpacked. Setting a value greater than 0 limits the depth of recursion, which can speed up analyses but provide less detail. |
| configuration.processing.largefileThreshold | int | 100 | If advanced mode is enabled, files larger than this threshold (in MB) will be processed individually, one by one. This parameter is ignored in standard mode. |
| configuration.processing.mode | int | 2 | Configures the Worker processing mode to improve load balancing. Supported modes are standard (1) and advanced (2). |
| configuration.processing.timeout | int | 28800 | Specifies how many seconds the Worker should wait for a file to process before terminating the task. Default: 28800. Maximum: 259200. |
| configuration.propagation | object | - | Configure advanced classification propagation options supported by the Spectra Core static analysis engine. When Spectra Core classifies files, the classification of a child file can be applied to the parent file. |
| configuration.propagation.enabled | bool | true | Enable/disable the classification propagation feature. When propagation is enabled, files can be classified based on the content extracted from them. This means that files containing a malicious or suspicious file will also be considered malicious or suspicious. |
| configuration.propagation.goodwareOverridesEnabled | bool | true | Enable/disable goodware overrides. When enabled, any files extracted from a parent file and whitelisted by certificate, source or user override can no longer be classified as malicious or suspicious. This is an advanced goodware whitelisting technique that can be used to reduce the amount of false positive detections. |
| configuration.propagation.goodwareOverridesFactor | int | 1 | When goodware overrides are enabled, this parameter must be configured to determine the factor to which overrides will be applied. Supported values are 0 to 5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Overrides will apply to files with a trust factor equal to or lower than the value configured here. |
| configuration.report | object | - | Configure the contents of the Spectra Detect file analysis report. |
| configuration.report.firstReportOnly | bool | false | If disabled, the reports for samples with child files will include relationships for all descendant files. Enabling this setting will only include relationship metadata for the root parent file to reduce redundancy. |
| configuration.report.includeStrings | bool | false | When enabled, strings are included in the file analysis report. Spectra Core can extract strings from binaries. This can be useful but may result in extensive metadata. To reduce noise, the types of included strings can be customized in the strings section. |
| configuration.report.networkReputation | bool | false | If enabled, analysis reports include a top-level network_reputation object with reputation information for every extracted network resource. For this feature, Spectra Intelligence must be configured on the Worker, and the ticore.processingMode option must be set to "best". |
| configuration.report.relationships | bool | false | Includes sample relationship metadata in the file analysis report. When enabled, the relationships section lists the hashes of files found within the given file. |
| configuration.reportAdl | object | - | Settings to configure how reports saved to Azure Data Lake are formatted. |
| configuration.reportAdl.archiveSplitReport | bool | true | Enable sending a single, smaller archive of split report files to ADL instead of each file. Relevant only when the 'Split report' option is used. |
| configuration.reportAdl.container | string | "" | Container where reports will be stored. Required when this feature is enabled. |
| configuration.reportAdl.enabled | bool | false | Enable/disable storing file processing reports to ADL. |
| configuration.reportAdl.filenameTimestampFormat | string | "" | File naming pattern for the report itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. |
| configuration.reportAdl.folder | string | "" | Specify the name of a folder where analysis reports will be stored. If the folder name is not provided, files are stored into the root of the configured container. |
| configuration.reportAdl.folderOption | string | "date_based" | Select the naming pattern that will be used when automatically creating subfolders for storing analysis reports. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| configuration.reportAdl.maliciousOnly | bool | false | When set, the report will only contain malicious and suspicious children. |
| configuration.reportAdl.reportType | string | "large" | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| configuration.reportAdl.splitReport | bool | false | By default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files. |
| configuration.reportAdl.timestampEnabled | bool | true | Enable/disable appending a timestamp to the report name. |
| configuration.reportAdl.topContainerOnly | bool | false | When enabled, the file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated. |
| configuration.reportAdl.view | string | "" | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the "/usr/libexec/ts-report-views.d" directory on Spectra Detect Worker. |
| configuration.reportApi | object | - | Configures the settings applied to the file analysis report fetched using the GET endpoint. To modify synchronous API timeouts or connection limits, apply the appropriate annotations to your Ingress resource. |
| configuration.reportApi.maliciousOnly | bool | false | Report contains only malicious and suspicious children. |
| configuration.reportApi.reportType | string | "large" | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| configuration.reportApi.topContainerOnly | bool | false | When enabled, the file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated. |
| configuration.reportApi.view | string | "" | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the "/usr/libexec/ts-report-views.d" directory on Spectra Detect Worker. |
| configuration.reportMsGraph | object | - | Settings to configure how reports saved to OneDrive or SharePoint are formatted. |
| configuration.reportMsGraph.archiveSplitReport | bool | true | Enable sending a single, smaller archive of split report files to Microsoft Cloud Storage instead of each file. Relevant only when the "Split Report" option is used. |
| configuration.reportMsGraph.enabled | bool | false | Enable/disable storing file processing reports. |
| configuration.reportMsGraph.filenameTimestampFormat | string | "" | This refers to the naming of the report file itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. |
| configuration.reportMsGraph.folder | string | "" | Folder where report files will be stored on the Microsoft Cloud Storage. If the folder name is not provided, files are stored into the root of the configured container. |
| configuration.reportMsGraph.folderOption | string | "date_based" | Select the naming pattern that will be used when automatically creating subfolders for storing analysis reports. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| configuration.reportMsGraph.maliciousOnly | bool | false | When set, the report will only contain malicious and suspicious children. |
| configuration.reportMsGraph.reportType | string | "large" | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| configuration.reportMsGraph.splitReport | bool | false | By default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files. |
| configuration.reportMsGraph.topContainerOnly | bool | false | When enabled, file analysis report will only include metadata for the top container, and subreports for unpacked files will not be generated. |
| configuration.reportMsGraph.view | string | "" | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the "/usr/libexec/ts-report-views.d" directory on Spectra Detect Worker. |
| configuration.reportS3 | object | - | Settings to configure how reports saved to S3 buckets are formatted. |
| configuration.reportS3.advancedFilterEnabled | bool | false | Enable/disable usage of the advanced filter. |
| configuration.reportS3.advancedFilterName | string | "" | Name of the advanced filter. |
| configuration.reportS3.archiveSplitReport | bool | true | Enable sending a single, smaller archive of split report files to S3 instead of each file. Relevant only when the 'Split report' option is used. |
| configuration.reportS3.bucketMapping | object | {} | Used if destinationType is set to mapping. Accepts a dictionary of S3 input buckets mapped to output buckets, enclosed in quotation marks. |
| configuration.reportS3.bucketName | string | "" | Name of the S3 bucket where processed files will be stored. Required when this feature is enabled. |
| configuration.reportS3.bucketS3ConnectionMapping | list | [] | List of structures that sets individual AWS connection methods for each target output bucket. |
| configuration.reportS3.destinationType | string | "default" | Supported values are default (saves the reports into the bucket configured by bucketName), source (saves the reports into the S3 bucket where the samples originated from), mapping (saves the reports according to the mapping configured by bucketMapping). |
| configuration.reportS3.enabled | bool | false | Enable/disable storing file processing reports to S3. |
| configuration.reportS3.filenameTimestampFormat | string | "" | This refers to the naming of the report file itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. |
| configuration.reportS3.folder | string | "" | Folder where report files will be stored in the given S3 bucket. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers and special characters: "!", "-", "_", ".", "*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. |
| configuration.reportS3.folderOption | string | "date_based" | Select the naming pattern used when automatically creating subfolders for storing analysis reports. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| configuration.reportS3.maliciousOnly | bool | false | When set, the report will only contain malicious and suspicious children. |
| configuration.reportS3.reportType | string | "large" | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| configuration.reportS3.splitReport | bool | false | By default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files. |
| configuration.reportS3.timestampEnabled | bool | true | Enable/disable appending a timestamp to the report name. |
| configuration.reportS3.topContainerOnly | bool | false | When enabled, the file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated. |
| configuration.reportS3.view | string | "" | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the "/usr/libexec/ts-report-views.d" directory on Spectra Detect Worker. |
| configuration.s3 | object | - | Settings for storing a copy of all files uploaded for analysis on Worker to an S3 or a third-party, S3-compatible server. |
| configuration.s3.advancedFilterEnabled | bool | false | Enable/disable usage of the advanced filter. |
| configuration.s3.advancedFilterName | string | "" | Name of the advanced filter. |
| configuration.s3.bucketMapping | object | {} | Used if destinationType is set to mapping. Accepts a dictionary of S3 input buckets mapped to output buckets, enclosed in quotation marks. |
| configuration.s3.bucketName | string | "" | Name of the S3 bucket where processed files will be stored. Required when this feature is enabled. |
| configuration.s3.bucketS3ConnectionMapping | list | [] | List of structures that sets individual AWS connection methods for each target output bucket. |
| configuration.s3.destinationType | string | "default" | Supported values are default (stores files in the bucket configured by bucketName) and mapping (stores files according to bucketMapping). |
| configuration.s3.enabled | bool | false | Enable/disable storing processed files on S3. |
| configuration.s3.folder | string | "" | Specify the name of a folder where analyzed files will be stored. If the folder name is not provided, files are stored in the root of the configured bucket. |
| configuration.s3.storeMetadata | bool | true | When true, analysis metadata is attached to the uploaded S3 object. |
| configuration.scaling | object | - | Configures the number of concurrent processes and the number of files analyzed concurrently. Parameters in this section can be used to optimize the file processing performance on Worker. |
| configuration.scaling.postprocessing | int | 1 | Specify how many post-processing instances to run. Post-processing instances will then modify and save reports or upload processed files to external storage. Increasing this value can increase throughput for servers with extra available cores. Maximum: 256. |
| configuration.scaling.preprocessingUnpacker | int | 1 | Specify how many copies of Spectra Core are used to unpack samples for Deep Cloud Analysis. This setting only takes effect if Deep Cloud Analysis is enabled with the Scan Unpacked Files capability. |
| configuration.scaling.processing | int | 1 | Specify how many copies of Spectra Core engine instances to run. Each instance starts threads to process files. Maximum: 256. |
| configuration.sns | object | - | Configures settings for publishing notifications about file processing status and links the reports to an Amazon SNS (Simple Notification Service) topic. |
| configuration.sns.enabled | bool | false | Enable/disable publishing notifications to Amazon SNS. |
| configuration.sns.topic | string | "" | Specify the SNS topic ARN that the notifications should be published to. Prerequisite: the AWS account in the AWS settings must be given permission to publish to this topic. Required when this feature is enabled. |
| configuration.spectraAnalyzeIntegration | object | - | Configuration settings for uploading processed samples to a configured Spectra Analyze instance. |
| configuration.spectraAnalyzeIntegration.address | string | "" | Spectra Analyze address. Required when this feature is enabled. Has to be in the following format: https://<ip_or_hostname>. |
| configuration.spectraAnalyzeIntegration.advancedFilterEnabled | bool | true | Enable/disable the advanced filter. |
| configuration.spectraAnalyzeIntegration.advancedFilterName | string | "default_filter" | Name of the advanced filter. |
| configuration.spectraAnalyzeIntegration.enabled | bool | false | Enable/disable integration with Spectra Analyze. |
| configuration.splunk | object | - | Configures integration with Splunk, a logging server that can receive Spectra Detect file analysis reports. |
| configuration.splunk.caPath | string | "" | Path to the CA certificate used to verify the Splunk server's certificate. |
| configuration.splunk.chunkSizeMb | int | 0 | The maximum size (MB) of a single request sent to Splunk. If an analysis report exceeds this size, it is split into its subreports (for child files) and sent in multiple parts. A request can contain one or more subreports, as long as its total size doesn't exceed this limit. The report is never split by size alone; instead, complete subreports are always preserved and sent to Splunk. Default: 0 (disabled). |
| configuration.splunk.enabled | bool | false | Enable/disable Splunk integration. |
| configuration.splunk.host | string | "" | Specify the hostname or IP address of the Splunk server that the Worker appliance should connect to. |
| configuration.splunk.https | bool | true | If set to true, HTTPS will be used for sending information to Splunk. If set to false, HTTP is used. |
| configuration.splunk.port | int | 8088 | Specify the TCP port of the Splunk server’s HTTP Event Collector. |
| configuration.splunk.reportType | string | "large" | Specifies which report type is sent to Splunk. Supported values are small, medium, and large; each is the result of filtering the full report. |
| configuration.splunk.sslVerify | bool | false | If HTTPS is enabled, setting this to true will enable certificate verification. |
| configuration.splunk.timeout | int | 5 | Specify how many seconds to wait for a response from the Splunk server before the request fails. If the request fails, the report will not be uploaded to the Splunk server, and an error will be logged. The timeout value must be greater than or equal to 1, and not greater than 999. |
| configuration.splunk.topContainerOnly | bool | false | Specifies if Splunk should receive the report for the top (parent) file only. If set to true, no subreports will be sent. |
| configuration.splunk.view | string | "" | Specifies a custom Report View to apply to the file analysis report before it is sent to Splunk. |
| configuration.strings | object | - | Configure the output of strings extracted from files during Spectra Core static analysis. |
| configuration.strings.enableStringExtraction | bool | false | If set to true, user-provided criteria for string extraction will be used. |
| configuration.strings.maxLength | int | 32768 | Maximum number of characters in strings. |
| configuration.strings.minLength | int | 4 | Minimum number of characters in strings. Strings shorter than this value are not extracted. |
| configuration.strings.unicodePrintable | bool | false | Specify whether extracted strings must be Unicode printable. |
| configuration.strings.utf16be | bool | true | Allow/disallow extracting UTF-16BE strings. |
| configuration.strings.utf16le | bool | true | Allow/disallow extracting UTF-16LE strings. |
| configuration.strings.utf32be | bool | false | Allow/disallow extracting UTF-32BE strings. |
| configuration.strings.utf32le | bool | false | Allow/disallow extracting UTF-32LE strings. |
| configuration.strings.utf8 | bool | true | Allow/disallow extracting UTF-8 strings. |
| configuration.ticore | object | - | Configures cloud options supported by Spectra Core. Worker must be connected to Spectra Intelligence for these settings to take effect. |
| configuration.ticore.maxDecompressionFactor | float | 1.0 | Decimal value between 0 and 999.9. Values with more than one decimal place are rounded to one decimal place. Used to protect the user from intentional or unintentional archive bombs, terminating decompression if the size of unpacked content exceeds a set quota. |
| configuration.ticore.mwpExtended | bool | false | Enable/disable information from antivirus engines in Spectra Intelligence. |
| configuration.ticore.mwpGoodwareFactor | int | 2 | Determines when a file classified as KNOWN in Spectra Intelligence Cloud is classified as Goodware by Spectra Core. By default, all KNOWN cloud classifications are converted to Goodware. Supported values are 0 to 5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Lowering the value reduces the number of samples classified as goodware. Samples with a trust factor above the configured value are considered UNKNOWN. |
| configuration.ticore.processingMode | string | "best" | Determines which file formats are unpacked by Spectra Core for detailed analysis. "best" fully processes all supported formats; "fast" processes a limited set. |
| configuration.ticore.useXref | bool | false | Enabling the XREF service enriches analysis reports with cross-reference metadata, such as AV scanner results. |
| configuration.unpackedAdl | object | - | Settings for storing extracted files in an Azure Data Lake container. |
| configuration.unpackedAdl.archiveUnpacked | bool | true | Enable sending a single, smaller archive of unpacked files to ADL instead of each unpacked file. |
| configuration.unpackedAdl.container | string | "" | Specify the name of the Azure Data Lake container where extracted files will be saved. Required when this feature is enabled. |
| configuration.unpackedAdl.enabled | bool | false | Enable/disable storing extracted files to ADL. |
| configuration.unpackedAdl.folder | string | "" | Specify the name of a folder in the configured Azure container where extracted files will be stored. If the folder name is not provided, files are stored in the root of the configured container. |
| configuration.unpackedAdl.folderOption | string | "date_based" | Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| configuration.unpackedMsGraph | object | - | Settings for storing extracted files to Microsoft Cloud Storage. |
| configuration.unpackedMsGraph.archiveUnpacked | bool | true | Enable sending a single, smaller archive of unpacked files to Microsoft Cloud Storage instead of each unpacked file. |
| configuration.unpackedMsGraph.enabled | bool | false | Enable/disable storing extracted files. |
| configuration.unpackedMsGraph.folder | string | "" | Folder where unpacked files will be stored in Microsoft Cloud Storage. If the folder name is not provided, files are stored in the root of the configured container. |
| configuration.unpackedMsGraph.folderOption | string | "date_based" | Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| configuration.unpackedS3 | object | - | Settings for storing extracted files in an S3 bucket. |
| configuration.unpackedS3.advancedFilterEnabled | bool | false | Enable/disable the use of advanced filters. |
| configuration.unpackedS3.advancedFilterName | string | "" | Name of the advanced filter. |
| configuration.unpackedS3.archiveUnpacked | bool | true | Enable sending a single, smaller archive of unpacked files to S3 instead of each unpacked file. |
| configuration.unpackedS3.bucketName | string | "" | Specify the name of the S3 bucket where extracted files will be saved. Required when this feature is enabled. |
| configuration.unpackedS3.enabled | bool | false | Enable/disable storing extracted files in S3. |
| configuration.unpackedS3.folder | string | "" | The name of a folder in the configured S3 bucket where extracted files will be stored. If the folder name is not provided, files are stored in the root of the configured bucket. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers and special characters: "!", "-", "_", ".", "*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. |
| configuration.unpackedS3.folderOption | string | "date_based" | Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| configuration.wordlist | list | - | List of passwords for protected files. |
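Taken together, the keys above translate into a standard Helm values overlay. The following is a minimal, hypothetical sketch that enables S3 storage of processed files and Splunk report delivery; the bucket name, folder, host, and certificate path are placeholder values, not chart defaults:

```yaml
# Hypothetical values overlay for the Worker chart (placeholder values, not defaults).
configuration:
  s3:
    enabled: true
    bucketName: "example-processed-files"   # required when s3.enabled is true
    folder: "worker-output"                 # omit to store files in the bucket root
    storeMetadata: true
  splunk:
    enabled: true
    host: "splunk.example.internal"         # placeholder hostname
    port: 8088                              # Splunk HTTP Event Collector port
    https: true
    sslVerify: true
    caPath: "/etc/ssl/certs/splunk-ca.pem"  # placeholder CA certificate path
    reportType: "large"
  scaling:
    processing: 2        # Spectra Core engine instances (maximum: 256)
    postprocessing: 2    # post-processing instances (maximum: 256)
```

Keys not set in an overlay keep the defaults listed in the tables; the overlay is applied with the usual `helm upgrade --install ... -f <file>` workflow.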
Worker Component Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| advancedFilters | object | {} | Contains key-value pairs in which keys are filter names and values are the filter definitions. |
| applyRestrictedPolicy | bool | false | Apply restricted policy to the pods and containers so they can run in a namespace with restricted policy enabled. If umbrella is used for deployment, this value is ignored, and global.applyRestrictedPolicy is used. |
| auth.image | object | - | Configuration values of the image used for authentication. |
| auth.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| auth.image.tag | string | "8.0.0-35" | Image tag. |
| auth.resources | object | - | Resource requests and limits for the container. |
| auth.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| auth.resources.limits.cpu | string | "4000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| auth.resources.limits.memory | string | "256Mi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| auth.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| auth.resources.requests.cpu | string | "500m" | CPU request. |
| auth.resources.requests.memory | string | "128Mi" | Memory request. |
| authReverseProxy.image | object | - | Configuration values of the auth reverse proxy image. |
| authReverseProxy.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| authReverseProxy.image.repository | string | "nginx" | Image repository. |
| authReverseProxy.image.tag | string | "stable" | Image tag. |
| authReverseProxy.resources | object | - | Resource requests and limits for the container. |
| authReverseProxy.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| authReverseProxy.resources.limits.cpu | string | "2000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| authReverseProxy.resources.limits.memory | string | "512Mi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| authReverseProxy.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| authReverseProxy.resources.requests.cpu | string | "250m" | CPU request. |
| authReverseProxy.resources.requests.memory | string | "128Mi" | Memory request. |
| checkHealth.failedJobsHistoryLimit | int | 1 | Number of failed finished jobs to keep. |
| checkHealth.image | object | - | Configuration values of the image used for the health check job. |
| checkHealth.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| checkHealth.image.tag | string | "6.0.0-14" | Image tag. |
| checkHealth.resources | object | - | Resource requests and limits for the container. |
| checkHealth.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| checkHealth.resources.limits.cpu | string | "2000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| checkHealth.resources.limits.memory | string | "2Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| checkHealth.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| checkHealth.resources.requests.cpu | string | "1000m" | CPU request. |
| checkHealth.resources.requests.memory | string | "1Gi" | Memory request. |
| checkHealth.startingDeadlineSeconds | int | 60 | Deadline (in seconds) for starting the Job, if that Job misses its scheduled time for any reason. After missing the deadline, the CronJob skips that instance of the Job. |
| checkHealth.successfulJobsHistoryLimit | int | 1 | Number of successful finished jobs to keep. |
| cleanup.failedJobsHistoryLimit | int | 1 | Number of failed finished jobs to keep. |
| cleanup.image | object | - | Configuration values of the image used for the cleanup job. |
| cleanup.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| cleanup.image.tag | string | "6.0.0-14" | Image tag. |
| cleanup.resources | object | - | Resource requests and limits for the container. |
| cleanup.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| cleanup.resources.limits.cpu | string | "2000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| cleanup.resources.limits.memory | string | "2Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| cleanup.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| cleanup.resources.requests.cpu | string | "1000m" | CPU request. |
| cleanup.resources.requests.memory | string | "1Gi" | Memory request. |
| cleanup.startingDeadlineSeconds | int | 180 | Deadline (in seconds) for starting the Job, if that Job misses its scheduled time for any reason. After missing the deadline, the CronJob skips that instance of the Job. |
| cleanup.successfulJobsHistoryLimit | int | 1 | Number of successful finished jobs to keep. |
| cloudCache.image | object | - | Configuration values of the cloud cache image. |
| cloudCache.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| cloudCache.image.tag | string | "1.3.2-3" | Image tag. |
| cloudCache.resources | object | - | Resource requests and limits for the container. |
| cloudCache.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| cloudCache.resources.limits.cpu | string | "4000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| cloudCache.resources.limits.memory | string | "4Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| cloudCache.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| cloudCache.resources.requests.cpu | string | "1000m" | CPU request. |
| cloudCache.resources.requests.memory | string | "1Gi" | Memory request. |
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| configManager.enabled | bool | false | Whether to enable the config manager service. MUST be set to false if the SDM Portal is not deployed; otherwise, configManager will attempt to register with SDM indefinitely. |
| configManager.image | object | - | Configuration values of the config manager image. |
| configManager.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| configManager.image.tag | string | "8.0.0-35" | Image tag. |
| configManager.resources | object | - | Resource requests and limits for the container. |
| configManager.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| configManager.resources.limits.cpu | string | "1000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| configManager.resources.limits.memory | string | "256Mi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| configManager.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| configManager.resources.requests.cpu | string | "250m" | CPU request. |
| configManager.resources.requests.memory | string | "128Mi" | Memory request. |
| configManager.useK8sApi | bool | true | Enables an additional endpoint to fetch configuration via the K8s API (requires RBAC permissions). |
| imagePullSecrets | list | ["rl-registry-key"] | Set of stored credentials (authentication tokens) that allows a Kubernetes node to "log in" to a private container registry and pull restricted images. |
| ingress | object | - | Ingress configuration values. |
| ingress.annotations | object | {} | Custom annotations to fine-tune the Ingress Controller behavior. |
| ingress.className | string | "nginx" | IngressClass that will handle this resource. Must match an existing 'IngressClass' in the cluster. |
| ingress.enabled | bool | true | Enable/disable the creation of the Ingress resource. |
| ingress.host | string | "" | The Fully Qualified Domain Name (FQDN) for the application. Required if 'enabled' is true. |
| ingress.tls | object | - | TLS / SSL Configuration. |
| ingress.tls.certificateArn | string | "" | The Amazon Resource Name (ARN) of the certificate for AWS Load Balancer Controller. Only applicable when 'className' is 'alb'. |
| ingress.tls.issuer | string | "" | The name of the cert-manager ClusterIssuer or Issuer to request certificates from. |
| ingress.tls.issuerKind | string | "Issuer" | The resource type of the issuer. Typically, 'Issuer' (namespace-scoped) or 'ClusterIssuer' (cluster-wide). |
| monitoring.enabled | bool | false | Enable/disable monitoring with Prometheus. |
| monitoring.prometheusReleaseName | string | "kube-prometheus-stack" | Prometheus release name. |
| persistence | object | - | Data storage configuration for storing samples and reports. |
| persistence.accessModes | list | ["ReadWriteMany"] | Specifies the access modes for the volume. When autoscaling or multiple Workers are used, this should be set to ["ReadWriteMany"]. |
| persistence.requestStorage | string | "10Gi" | The amount of storage to request. |
| persistence.storageClassName | string | nil | Name of the StorageClass to use. When autoscaling or multiple Workers are used, the storage class must support "ReadWriteMany". If EFS storage is used, disable the disk health check by setting configuration.health.enableDiskUsageCheck to false. |
| postgres.customSecretName | string | nil | Postgres custom secret name. If the umbrella chart is used for deployment, this value is ignored and the value from global.postgresCustomSecretName is used as the secret name. |
| postgres.releaseName | string | "" | Postgres release name, required when deployment is not done with the umbrella chart. |
| postprocessor.autoscaling | object | - | Autoscaling configuration values. |
| postprocessor.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| postprocessor.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| postprocessor.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| postprocessor.autoscaling.minReplicas | int | 0 | Minimum number of replicas that need to be deployed. |
| postprocessor.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| postprocessor.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| postprocessor.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| postprocessor.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| postprocessor.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| postprocessor.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| postprocessor.autoscaling.scaleUp.stabilizationWindow | int | 15 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| postprocessor.autoscaling.targetInputQueueSize | int | 10 | Number of messages in backlog to trigger scaling on. Must be greater than 0. |
| postprocessor.image | object | - | Configuration values of the postprocessor image. |
| postprocessor.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| postprocessor.image.tag | string | "8.0.0-35" | Image tag. |
| postprocessor.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| postprocessor.resources | object | - | Resource requests and limits for the container. |
| postprocessor.resources.limits.cpu | string | nil | CPU limit. Throttling occurs if the container exceeds this value. |
| postprocessor.resources.limits.memory | string | "16Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| postprocessor.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| postprocessor.resources.requests.cpu | string | "2500m" | CPU request. |
| postprocessor.resources.requests.memory | string | "2Gi" | Memory request. |
| preprocessor.autoscaling | object | - | Autoscaling configuration values. |
| preprocessor.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| preprocessor.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| preprocessor.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| preprocessor.autoscaling.minReplicas | int | 0 | Minimum number of replicas that need to be deployed. |
| preprocessor.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| preprocessor.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| preprocessor.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| preprocessor.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| preprocessor.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| preprocessor.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| preprocessor.autoscaling.scaleUp.stabilizationWindow | int | 15 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| preprocessor.autoscaling.targetInputQueueSize | int | 10 | Number of messages in backlog to trigger scaling on. Must be greater than 0. |
| preprocessor.image | object | - | Configuration values of the preprocessor image. |
| preprocessor.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| preprocessor.image.tag | string | "8.0.0-35" | Image tag. |
| preprocessor.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| preprocessor.resources | object | - | Resource requests and limits for the container. |
| preprocessor.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| preprocessor.resources.limits.cpu | string | "4000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| preprocessor.resources.limits.memory | string | "4Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| preprocessor.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| preprocessor.resources.requests.cpu | string | "1000m" | CPU request. |
| preprocessor.resources.requests.memory | string | "1Gi" | Memory request. |
| preprocessorUnpacker.autoscaling | object | - | Autoscaling configuration values. |
| preprocessorUnpacker.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| preprocessorUnpacker.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| preprocessorUnpacker.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| preprocessorUnpacker.autoscaling.minReplicas | int | 0 | Minimum number of replicas that need to be deployed. |
| preprocessorUnpacker.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| preprocessorUnpacker.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| preprocessorUnpacker.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| preprocessorUnpacker.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| preprocessorUnpacker.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| preprocessorUnpacker.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| preprocessorUnpacker.autoscaling.scaleUp.stabilizationWindow | int | 15 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| preprocessorUnpacker.autoscaling.targetInputQueueSize | int | 10 | Number of messages in backlog to trigger scaling on. Must be greater than 0. |
| preprocessorUnpacker.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| preprocessorUnpacker.resources | object | - | Resource requests and limits for the container. |
| preprocessorUnpacker.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| preprocessorUnpacker.resources.limits.cpu | string | nil | CPU limit. Throttling occurs if the container exceeds this value. |
| preprocessorUnpacker.resources.limits.memory | string | "16Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| preprocessorUnpacker.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| preprocessorUnpacker.resources.requests.cpu | string | "4000m" | CPU request. |
| preprocessorUnpacker.resources.requests.memory | string | "4Gi" | Memory request. |
| preprocessorUnpacker.scaling.concurrencyCount | int | 0 | Defines the number of concurrent threads per Spectra Detect instance that should be used for processing. When set to 0, the number of threads equals the number of CPU cores on the system. Modifying this option may impact system performance; consult ReversingLabs Support before making any changes to this parameter. |
| preprocessorUnpacker.scaling.prefetchCount | int | 4 | Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file finishes processing, the next file from the queue enters the processing state. Must be greater than 0. |
| processor.autoscaling | object | - | Autoscaling configuration values. |
| processor.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| processor.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| processor.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| processor.autoscaling.minReplicas | int | 0 | Minimum number of replicas that need to be deployed. |
| processor.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| processor.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| processor.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| processor.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| processor.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| processor.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| processor.autoscaling.scaleUp.stabilizationWindow | int | 15 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| processor.autoscaling.targetInputQueueSize | int | 10 | Number of messages in backlog to trigger scaling on. Must be greater than 0. |
| processor.image | object | - | Configuration values of the processor image. |
| processor.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| processor.image.tag | string | "8.0.0-35" | Image tag. |
| processor.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| processor.resources | object | - | Resource requests and limits for the container. |
| processor.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| processor.resources.limits.cpu | string | nil | CPU limit. Throttling occurs if the container exceeds this value. |
| processor.resources.limits.memory | string | "32Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| processor.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| processor.resources.requests.cpu | string | "4000m" | CPU request. |
| processor.resources.requests.memory | string | "4Gi" | Memory request. |
| processor.scaling.concurrencyCount | int | 0 | Defines the number of concurrent processing threads per Spectra Detect instance. When set to 0, the number of threads equals the number of CPU cores on the system. Modifying this option may impact system performance; consult ReversingLabs Support before changing it. |
| processor.scaling.prefetchCount | int | 8 | Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file is processed, another from the queue enters the processing state. Must be greater than 0. |
| processorRetry.autoscaling | object | - | Autoscaling configuration values. |
| processorRetry.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| processorRetry.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| processorRetry.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| processorRetry.autoscaling.minReplicas | int | 0 | Minimum number of replicas that need to be deployed. |
| processorRetry.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| processorRetry.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| processorRetry.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| processorRetry.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| processorRetry.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| processorRetry.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| processorRetry.autoscaling.scaleUp.stabilizationWindow | int | 15 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| processorRetry.autoscaling.targetInputQueueSize | int | 10 | Number of messages in backlog to trigger scaling on. Must be greater than 0. |
| processorRetry.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| processorRetry.resources | object | - | Resource requests and limits for the container. |
| processorRetry.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| processorRetry.resources.limits.cpu | string | nil | CPU limit. Throttling occurs if the container exceeds this value. |
| processorRetry.resources.limits.memory | string | "64Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| processorRetry.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| processorRetry.resources.requests.cpu | string | "4000m" | CPU request. |
| processorRetry.resources.requests.memory | string | "8Gi" | Memory request. |
| processorRetry.scaling.concurrencyCount | int | 0 | Defines the number of concurrent processing threads per Spectra Detect instance. When set to 0, the number of threads equals the number of CPU cores on the system. Modifying this option may impact system performance; consult ReversingLabs Support before changing it. |
| processorRetry.scaling.prefetchCount | int | 1 | Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file is processed, another from the queue enters the processing state. Must be greater than 0. Recommended value for this type of processor is 1. |
| rabbitmq.customAdminSecretName | string | nil | RabbitMQ custom admin secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqAdminCustomSecretName will be used as a secret name. |
| rabbitmq.customSecretName | string | nil | RabbitMQ custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqCustomSecretName will be used as a secret name. |
| rabbitmq.releaseName | string | "" | RabbitMQ release name. Required when the deployment is not done with the umbrella chart. |
| receiver.autoscaling | object | - | Autoscaling configuration values. |
| receiver.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| receiver.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| receiver.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| receiver.autoscaling.minReplicas | int | 1 | Minimum number of replicas that need to be deployed. |
| receiver.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| receiver.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| receiver.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| receiver.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| receiver.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| receiver.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| receiver.autoscaling.scaleUp.stabilizationWindow | int | 30 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| receiver.autoscaling.triggerCPUValue | int | 75 | CPU usage (as a percentage of the resources.limits.cpu value) that triggers scaling when reached. CPU limits must be set for this to work. |
| receiver.image | object | - | Configuration values of the receiver image. |
| receiver.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| receiver.image.tag | string | "8.0.0-35" | Image tag. |
| receiver.initImage | object | - | Configuration values of the image used for receiver initialization. |
| receiver.initImage.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| receiver.initImage.tag | string | "6.0.0-14" | Image tag. |
| receiver.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| receiver.resources | object | - | Resource requests and limits for the container. |
| receiver.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| receiver.resources.limits.cpu | string | "5000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| receiver.resources.limits.memory | string | "8Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| receiver.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| receiver.resources.requests.cpu | string | "2000m" | CPU request. |
| receiver.resources.requests.memory | string | "1Gi" | Memory request. |
| report.autoscaling | object | - | Autoscaling configuration values. |
| report.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| report.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| report.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| report.autoscaling.minReplicas | int | 1 | Minimum number of replicas that need to be deployed. |
| report.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| report.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| report.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| report.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| report.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| report.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| report.autoscaling.scaleUp.stabilizationWindow | int | 30 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| report.autoscaling.triggerCPUValue | int | 75 | CPU usage (as a percentage of the resources.limits.cpu value) that triggers scaling when reached. CPU limits must be set for this to work. |
| report.image | object | - | Configuration values of the report image. |
| report.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| report.image.tag | string | "8.0.0-35" | Image tag. |
| report.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| report.resources | object | - | Resource requests and limits for the container. |
| report.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| report.resources.limits.cpu | string | "8000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| report.resources.limits.memory | string | "8Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| report.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| report.resources.requests.cpu | string | "2000m" | CPU request. |
| report.resources.requests.memory | string | "2Gi" | Memory request. |
| reportTypes | object | {} | Contains key-value pairs where keys are the report type names and values are the report type definitions. |
| sdmPortal | object | - | Reference to the SDM portal service for config manager. Required if SDM Portal is deployed. |
| sdmPortal.namespace | string | "" | The namespace where SDM Portal is deployed. Defaults to the Worker's namespace if left empty. |
| sdmPortal.port | string | nil | The port used to access the SDM Portal. Defaults to 8080 if not specified. |
| sdmPortal.releaseName | string | "" | The release name of the SDM Portal. Fill this out if centralManager.queueLoggingEnabled is true and SDM is deployed without an umbrella chart. |
| sdmPortal.urlOverride | string | "" | Full URL of the SDM Portal (e.g., http://sdm.example.com). If provided, releaseName and namespace are ignored. Use this if SDM is outside the cluster. |
| tcScratch | object | - | tcScratch values configure generic ephemeral volume options for the Spectra Core /tc-scratch directory. |
| tcScratch.accessModes | list | ["ReadWriteOnce"] | Access modes. |
| tcScratch.requestStorage | string | "100Gi" | Requested storage size for the ephemeral volume. |
| tcScratch.storageClassName | string | nil | Sets the storage class for the ephemeral volume. If not set, emptyDir is used instead of an ephemeral volume. |
| tclibs.autoscaling | object | - | Autoscaling configuration values. |
| tclibs.autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| tclibs.autoscaling.enabled | bool | true | Enable/disable autoscaling. |
| tclibs.autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| tclibs.autoscaling.minReplicas | int | 1 | Minimum number of replicas that need to be deployed. |
| tclibs.autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| tclibs.autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| tclibs.autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| tclibs.autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| tclibs.autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| tclibs.autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| tclibs.autoscaling.scaleUp.stabilizationWindow | int | 30 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| tclibs.autoscaling.triggerCPUValue | int | 75 | CPU usage (as a percentage of the resources.limits.cpu value) that triggers scaling when reached. CPU limits must be set for this to work. |
| tclibs.image | object | - | Configuration values of the tcLibs image. |
| tclibs.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| tclibs.image.tag | string | "0.0.10-2" | Image tag. |
| tclibs.replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| tclibs.resources | object | - | Resource requests and limits for the container. |
| tclibs.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| tclibs.resources.limits.cpu | string | "2000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| tclibs.resources.limits.memory | string | "2Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| tclibs.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| tclibs.resources.requests.cpu | string | "1000m" | CPU request. |
| tclibs.resources.requests.memory | string | "1Gi" | Memory request. |
| yaraRules | object | {} | Object in which keys are the names of YARA rule files and values are the YARA rule definitions. |
| yaraSync.enabled | bool | false | Enables/disables YARA rule synchronization. |
| yaraSync.image | object | - | Configuration values of the YARA sync image. |
| yaraSync.image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| yaraSync.image.tag | string | "8.0.0-35" | Image tag. |
| yaraSync.resources | object | - | Resource requests and limits for the container. |
| yaraSync.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| yaraSync.resources.limits.cpu | string | "2000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| yaraSync.resources.limits.memory | string | "2Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| yaraSync.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| yaraSync.resources.requests.cpu | string | "1000m" | CPU request. |
| yaraSync.resources.requests.memory | string | "1Gi" | Memory request. |
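The autoscaling, resource, and scaling keys above map directly onto a Helm values override. Below is a minimal sketch for the processor and receiver, using the documented defaults; the exact top-level nesting depends on how the Worker chart is referenced from your values file, so treat the fragment as illustrative:

```yaml
processor:
  autoscaling:
    enabled: true
    minReplicas: 0               # scales to zero when the queue is empty
    maxReplicas: 8
    targetInputQueueSize: 10     # backlog size that triggers scaling; must be > 0
    cooldownPeriod: 180          # seconds before scaling back to 0
  resources:
    requests:
      cpu: "4000m"
      memory: "4Gi"
    limits:
      memory: "32Gi"             # no CPU limit by default
  scaling:
    concurrencyCount: 0          # 0 = one thread per CPU core
    prefetchCount: 8

receiver:
  autoscaling:
    triggerCPUValue: 75          # percent of resources.limits.cpu; limits must be set
  resources:
    limits:
      cpu: "5000m"               # required for CPU-based autoscaling
      memory: "8Gi"
```

Note the asymmetry: queue-driven components (processor, processorRetry) scale on backlog size and can scale to zero, while request-driven components (receiver, report, tclibs) scale on CPU and keep at least one replica.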
Connector S3
Connector S3 Secrets
| Secret (customSecretName is set) | Secret (fullNameOverride is set) | Secret (deployment with umbrella) | Secret (deployment without umbrella) | Type | Description |
|---|---|---|---|---|---|
| <customSecretName> | <fullNameOverride>-secret-<input.identifier> | <Release.Name>-connector-s3-secret-<input.identifier> | <Release.Name>-secret-<input.identifier> | Required | Authentication secret used to connect to AWS S3 or any S3-compatible storage system. |
Connector S3 Application Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| configuration | object | - | Connector S3 configuration values. |
| configuration.dbCleanupPollInterval | int | 7200 | Interval in seconds at which the database cleanup runs. |
| configuration.dbCleanupSampleThresholdInDays | int | 21 | Number of previous days for which data is preserved. |
| configuration.diskHighPercent | int | 0 | Disk high percent. |
| configuration.inputs | list | [] | Configuration list for S3 File Storage Inputs. |
| configuration.maxFileSize | int | 0 | The maximum sample size in bytes that will be transmitted from the connector to the appliance for analysis. Setting it to 0 disables the limit. |
| configuration.maxUploadDelayTime | int | 10000 | Delay in milliseconds. When the Worker cluster is under high load, this parameter delays new uploads to the cluster. The delay is multiplied by an internal factor determined by the load on the Worker cluster. |
| configuration.maxUploadRetries | int | 100 | Number of times the connector will attempt to upload a file to the processing appliance. Once the retry limit is reached, the file is discarded. |
| configuration.systemInfo | object | - | Configuration for S3 System Info. |
| configuration.uploadTimeout | int | 10000 | Period (in milliseconds) between upload attempts when a sample is re-uploaded. |
| configuration.uploadTimeoutAlgorithm | string | "exponential" | The algorithm used for managing delays between re-uploading samples to the processing appliance. With exponential backoff, the delay is repeatedly multiplied by 2, up to a maximum of 5 minutes. Linear backoff always uses the uploadTimeout value between re-uploads. Allowed values: "exponential", "linear". |
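Assuming the umbrella chart nests these keys under `connectorS3` (consistent with connectorS3.enabled in the general values table), a values override for the application configuration might look like this sketch:

```yaml
connectorS3:
  enabled: true
  configuration:
    dbCleanupPollInterval: 7200          # run database cleanup every 2 hours
    dbCleanupSampleThresholdInDays: 21   # keep the last 21 days of data
    maxFileSize: 0                       # 0 disables the size limit
    maxUploadRetries: 100                # discard the file after the retry limit
    uploadTimeout: 10000                 # milliseconds between upload attempts
    uploadTimeoutAlgorithm: "exponential"  # delay doubles per retry, capped at 5 minutes
```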
Connector S3 Application Configuration - System Info Values
| Key | Type | Default | Description |
|---|---|---|---|
| configuration.systemInfo.centralLogging | bool | false | Enable central logging. |
| configuration.systemInfo.diskHighPercent | int | 0 | Disk high percent. |
| configuration.systemInfo.fetchChannelSize | int | 40 | Fetch channel size. |
| configuration.systemInfo.hostUUID | string | "" | Host UUID. |
| configuration.systemInfo.maxConnections | int | 10 | Max number of connections. |
| configuration.systemInfo.maxSlowFetches | int | 12 | Max slow fetches. |
| configuration.systemInfo.numberOfRetries | int | 300 | Number of retries. |
| configuration.systemInfo.requestTimeout | int | 43200 | Timeout for requests. |
| configuration.systemInfo.slowFetchChannelSize | int | 100 | Slow fetch channel size. |
| configuration.systemInfo.slowFetchPause | int | 5 | Slow fetch pause. |
| configuration.systemInfo.type | string | "tiscale" | Type. |
| configuration.systemInfo.verifyCert | bool | false | Verify SSL certificate. |
| configuration.systemInfo.version | string | "5.6.0" | Version. |
| configuration.systemInfo.waitTimeout | int | 1000 | Wait timeout. |
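The systemInfo block is normally left at its defaults. Overriding a subset follows the same key paths as the table above; a sketch, assuming the umbrella chart nests these keys under `connectorS3`:

```yaml
connectorS3:
  configuration:
    systemInfo:
      maxConnections: 10    # maximum number of connections to the appliance
      numberOfRetries: 300
      verifyCert: true      # verify the appliance's SSL certificate
```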
Connector S3 Application Configuration - File Storage Input Configuration
| Key | Type | Default | Description |
|---|---|---|---|
| inputs[n] | list | - | ConfigMap configuration for an S3 File Storage Input entry. |
| inputs[n].awsEnableArn | bool | false | Enable/disable the usage of AWS IAM roles to access S3 buckets without sharing secret keys. |
| inputs[n].awsExternalRoleId | string | "" | The external ID of the role that will be assumed. This can be any string. |
| inputs[n].awsRoleArn | string | "" | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which then allows it to save to S3 buckets without a secret access key. |
| inputs[n].awsRoleSessionName | string | "ARNRoleSession" | Name of the session visible in AWS logs. This can be any string. |
| inputs[n].awsS3AccessKeyId | string | "" | AWS S3 access key ID. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| inputs[n].awsS3SecretAccessKey | string | "" | AWS S3 secret access key. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| inputs[n].bucket | string | "" | Name of an existing S3 bucket which contains the samples to process. |
| inputs[n].createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the awsS3AccessKeyId and awsS3SecretAccessKey parameters. WARNING: Use this for convenience/testing only. |
| inputs[n].customSecretName | string | nil | Name of the secret in which the S3 storage credentials can be found. If no value is set, default secret name will be used. |
| inputs[n].deleteSourceFile | bool | false | If enabled, the connector deletes source files on S3 storage after they have been processed. Required if requireAnalyze or postActionsEnabled is true. |
| inputs[n].endpoint | string | "" | Custom S3 endpoint URL. Leave empty if using standard AWS S3. |
| inputs[n].folder | string | "" | The input folder inside the specified bucket which contains the samples to process. All other samples (except those classified as unknown) will be ignored. |
| inputs[n].identifier | string | "" | Unique name of S3 connection. Must contain only lowercase alphanumeric characters or hyphen (-). Must start and end with an alphanumeric character. Identifier length must be between 3 and 49 characters. |
| inputs[n].knownBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Goodware'. If empty, the input bucket will be used. |
| inputs[n].knownDestination | string | "goodware" | The folder into which the connector will store files classified as 'Goodware'. The folder is contained within the specified bucket field. |
| inputs[n].maliciousBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Malicious'. If empty, the input bucket will be used. |
| inputs[n].maliciousDestination | string | "malware" | The folder into which the connector will store files classified as 'Malicious'. The folder is contained within the specified bucket field. |
| inputs[n].objectMetadataFilter.classification | list | [] | Classification. |
| inputs[n].objectMetadataFilter.enabled | bool | false | Enable/disable selection criteria using metadata. |
| inputs[n].objectMetadataFilter.threatName | list | [] | Threat name. |
| inputs[n].paused | bool | false | Temporarily pause the continuous scanning of this Storage Input. This setting must be set to true to enable retro hunting. |
| inputs[n].postActionsEnabled | bool | false | Enable/disable post actions for S3 connectors. |
| inputs[n].priority | int | 5 | A higher Priority makes it more likely that files from this bucket will be processed first. The supported range is from 1 (highest) to 5 (lowest). Values outside of those minimum and maximum values will be replaced by the minimum or maximum, respectively. Multiple buckets may share the same priority. |
| inputs[n].requireAnalyze | bool | false | Enable/disable the requirement that data processed by the connector is analyzed. |
| inputs[n].serverSideEncryptionCustomerAlgorithm | string | "" | Customer provided encryption algorithm. |
| inputs[n].serverSideEncryptionCustomerKey | string | "" | Customer provided encryption key. |
| inputs[n].suspiciousBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Suspicious'. If empty, the input bucket will be used. |
| inputs[n].suspiciousDestination | string | "suspicious" | The folder into which the connector will store files classified as 'Suspicious'. The folder is contained within the specified bucket field. |
| inputs[n].unknownBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Unknown'. If empty, the input bucket will be used. |
| inputs[n].unknownDestination | string | "unknown" | The folder into which the connector will store files classified as 'Unknown'. The folder is contained within the specified bucket field. |
| inputs[n].verifySslCertificate | bool | true | Connect securely to the custom S3 instance. Deselect this to accept untrusted certificates. Applicable only when using a custom S3 endpoint. |
| inputs[n].zone | string | "us-east-1" | AWS S3 region. |
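An example `inputs` entry combining the keys above. The identifier, bucket, folder, and secret names are illustrative, and a pre-created secret is referenced instead of inline credentials:

```yaml
configuration:
  inputs:
    - identifier: "s3-input-1"           # 3-49 chars, lowercase alphanumeric or '-'
      bucket: "sample-bucket"            # existing bucket holding samples to process
      folder: "incoming"
      zone: "us-east-1"
      customSecretName: "my-s3-creds"    # pre-created credentials secret (recommended)
      createUserSecret: false            # avoid putting keys in values files
      deleteSourceFile: true             # required because requireAnalyze is true
      requireAnalyze: true
      maliciousBucket: ""                # empty: reuse the input bucket
      maliciousDestination: "malware"
      priority: 1                        # 1 = highest, 5 = lowest
```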
Connector S3 Component Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| applyRestrictedPolicy | bool | false | Apply restricted policy to the pods and containers so they can run in a namespace with restricted policy enabled. If umbrella is used for deployment, this value is ignored, and global.applyRestrictedPolicy is used. |
| boltdb.claimName | string | nil | PVC name. If empty, the default PVC name will be used. |
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| enabled | bool | false | Enable or disable the S3 connector deployment. |
| fullNameOverride | string | "" | Overrides connector-s3 chart full name. |
| image | object | - | Configuration values of the connector s3 image. |
| image.imagePullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.tag | string | "0.38.0-1" | Image tag. |
| imagePullSecrets | list | ["rl-registry-key"] | Set of stored credentials (authentication tokens) that allows a Kubernetes node to authenticate with a private container registry and pull restricted images. |
| monitoring.enabled | bool | false | Enable/disable monitoring with Prometheus. |
| monitoring.prometheusReleaseName | string | "kube-prometheus-stack" | Prometheus release name. |
| nameOverride | string | "" | Overrides connector-s3 chart name. |
| persistence | object | - | Data storage configuration (BoltDB). |
| persistence.accessModes | list | ["ReadWriteOnce"] | Specifies the access modes for the volume. |
| persistence.requestStorage | string | "10Gi" | The amount of storage to request. |
| persistence.storageClassName | string | nil | Name of the StorageClass to use. |
| receiver | object | - | Receiver configuration. Needed if connector is deployed as a standalone chart. |
| receiver.baseUrl | string | nil | FQDN (Fully Qualified Domain Name) of the receiver service. Needed only if connector is deployed as a standalone chart. |
| receiver.service | object | - | Receiver service configuration values. |
| receiver.service.httpPort | int | 80 | HTTP port number the receiver service is listening on. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | nil | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "6Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "4000m" | CPU request. |
| resources.requests.memory | string | "2Gi" | Memory request. |
| sharedStorage | object | {"enabled":true,"mode":"moving"} | Configuration for shared storage between connector and receiver/worker. |
| sharedStorage.enabled | bool | true | Enables shared storage between connector and receiver/worker. |
| sharedStorage.mode | string | "moving" | Defines if files are downloaded directly to the shared storage ('direct') or first to tmp and then moved ('moving'). Only applicable when sharedStorage is enabled. |
| tmp | object | - | Configuration values for generic ephemeral volume for the connectors' /data/connectors/connector-s3/tmp directory. |
| tmp.accessModes | list | ["ReadWriteOnce"] | Specifies the access modes for the volume. |
| tmp.requestStorage | string | "100Gi" | Requested storage size for the ephemeral volume. |
| tmp.storageClassName | string | nil | Name of the StorageClass to use for the ephemeral volume. If not set, emptyDir is used instead of an ephemeral volume. |
| worker.releaseName | string | nil | Release name of the Spectra Detect Worker the connector connects to. Required if connector-s3 is deployed without the umbrella chart. If the Worker was deployed with the umbrella chart, the release name must be <release-name>-wrk; otherwise it is <worker-release-name>. |
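When connector-s3 is deployed as a standalone chart (without the umbrella), the Worker and receiver references must be set explicitly. A sketch with illustrative release and host names:

```yaml
enabled: true
worker:
  releaseName: "detect-wrk"    # use "<release-name>-wrk" if the Worker came from the umbrella chart
receiver:
  baseUrl: "detect-wrk-receiver.detect.svc.cluster.local"  # example FQDN of the receiver service
  service:
    httpPort: 80
sharedStorage:
  enabled: true
  mode: "moving"               # download to tmp, then move; "direct" writes straight to shared storage
```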
Spectra Detect Manager (SDM)
| Key | Type | Default | Description |
|---|---|---|---|
| sdmPortal.config.centralFileStorage.enabled | bool | false | If enabled, the following SDM component will be deployed: SeaweedFS. |
| sdmPortal.config.centralLogging.enabled | bool | true | If enabled, the following SDM components will be deployed: Data Change Service and Clickhouse. |
| sdmPortal.enabled | bool | true | If enabled, the following SDM components will be deployed: Portal, Celery Worker, SeaweedFS and Postfix. |
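The three toggles combine into a small values fragment; for example, enabling the Portal and central logging while leaving central file storage off:

```yaml
sdmPortal:
  enabled: true                # deploys Portal, Celery Worker, SeaweedFS, and Postfix
  config:
    centralLogging:
      enabled: true            # adds Data Change Service and Clickhouse
    centralFileStorage:
      enabled: false           # would add SeaweedFS central file storage
```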
SDM Secrets Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| global.secrets.clickhouse | object | - | Secret configuration values for Clickhouse credentials. |
| global.secrets.clickhouse.customSecretName | string | "" | Custom secret name. If not set, default secret name will be used. |
| global.secrets.ldap | object | - | LDAP secret configuration values. |
| global.secrets.ldap.cacert.customSecretName | string | "" | Custom secret name of the secret containing PEM file with CA certificate under the ca.pem key. |
| global.secrets.ldap.cert.customSecretName | string | nil | Custom secret name of the secret containing PEM encoded cert for client certificate authentication under the cert.pem key. |
| global.secrets.ldap.credentials.customSecretName | string | "" | Custom secret name which contains credentials used in LDAP authentication. If not set, default secret name will be used. |
| global.secrets.oidc | object | - | Secret configuration values for setting the OIDC client credentials. |
| global.secrets.oidc.customSecretName | string | "" | Custom secret name. If not set, default secret name will be used. |
| global.secrets.postfix | object | - | Secret configuration values for Postfix credentials. |
| global.secrets.postfix.customSecretName | string | "" | Custom secret name. If not set, default secret name will be used. |
| global.secrets.saml | object | - | SAML secret configuration values. |
| global.secrets.saml.customSecretName | string | "" | Custom secret name of the secret containing federation metadata under the metadata.xml key. |
| global.secrets.tiCloud | object | - | Secret configuration values for setting the Spectra Intelligence credentials. |
| global.secrets.tiCloud.customSecretName | string | "" | Custom secret name. If not set, default secret name will be used. |
| global.secrets.tiCloudProxy | object | - | Secret configuration values for setting the Spectra Intelligence proxy credentials. |
| global.secrets.tiCloudProxy.customSecretName | string | "" | Custom secret name. If not set, default secret name will be used. |
SDM Portal
SDM Portal Secrets
| Secret (when custom secret name is used) | Secret (default secret name) | Requirement | Description | Used in Components (Pods) |
|---|---|---|---|---|
| <global.secrets.tiCloud.customSecretName> | <release_name>-secret-sdm-cloud | Required when related feature is enabled (config.ticloud.enabled) | Basic authentication secret which contains username and password for Spectra Intelligence authentication. Secret is either created manually (sdm-portal chart) or already exists. | Portal, Celery, DCS |
| <global.secrets.tiCloudProxy.customSecretName> | <release_name>-secret-sdm-cloud-proxy | Optional | Basic authentication secret which contains username and password for Spectra Intelligence proxy authentication. Secret is either created manually (sdm-portal chart) or already exists. | Portal, Celery |
| <global.secrets.oidc.customSecretName> | <release_name>-secret-sdm-oidc | Required when related feature is enabled (config.oidc.enabled) | Opaque secret which contains rp_client_id and rp_client_secret for OIDC authentication. Secret is either created manually (sdm-portal chart) or already exists. | Portal, Celery |
| <global.secrets.ldap.credentials.customSecretName> | <release_name>-secret-sdm-ldap | Required when related feature is enabled (config.ldap.enabled) | Opaque secret which contains bind_dn and bind_password for LDAP authentication. Secret is either created manually (sdm-portal chart) or already exists. | Portal |
| <global.secrets.ldap.cacert.customSecretName> | <release_name>-secret-sdm-ldap-cacert | Required when related feature is enabled (config.ldap.enabled, config.ldap.tls and config.ldap.tlsRequireCert) | Opaque secret which contains the CA certificate for LDAP authentication. Secret is either created manually (sdm-portal chart) or already exists. | Portal |
| <global.secrets.ldap.cert.customSecretName> | <release_name>-secret-sdm-ldap-cert | Required when related feature is enabled (config.ldap.enabled, config.ldap.tls and config.ldap.tlsRequireCert) | Opaque secret which contains the TLS certificate for the LDAP server. Secret is either created manually (sdm-portal chart) or already exists. | Portal |
| <global.secrets.saml.customSecretName> | <release_name>-secret-sdm-saml-metadata | Required when related feature is enabled (config.saml.enabled) | Opaque secret which contains SAML federation metadata. Secret is either created manually (sdm-portal chart) or already exists. | Portal |
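As an illustration, existing secrets can be referenced through the corresponding global values; the secret names below are placeholders, and the referenced secrets must already exist in the release namespace before install:

```yaml
# Hypothetical values.yaml fragment: point the umbrella chart at
# pre-created secrets instead of using the default secret names.
global:
  secrets:
    tiCloud:
      customSecretName: "my-ticloud-credentials"  # basic-auth secret (username/password)
    oidc:
      customSecretName: "my-oidc-client"          # contains rp_client_id and rp_client_secret
```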
SDM Portal Application Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| config | object | - | SDM Portal configuration values. |
| config.centralConfig | object | - | Configure Central Configuration. |
| config.centralConfig.enabled | boolean | true | Enable/disable central configuration feature. |
| config.centralFileStorage | object | - | Configure file storage in Spectra Detect Manager, allowing connected Workers to store samples for pivoting to and reprocessing in Spectra Analyze. |
| config.centralFileStorage.enabled | boolean | false | Enable/disable file storage. |
| config.centralFileStorage.fileSizeLimit | int | 400 | File size limit in MiB. Samples larger than the set threshold will not be stored. Minimum: 1. Maximum: 400. |
| config.centralFileStorage.ttl | int | 24 | Sample retention period in hours after which the uploaded samples will be removed from the Central File Storage. Minimum: 1. Maximum: 2160. |
| config.centralLogging | object | - | Configure Central Logging, which is used to collect and display information about all events happening on connected Workers and Integrations. |
| config.centralLogging.enabled | bool | false | Enable/disable central logging. |
| config.centralLogging.retentionPeriod | int | 90 | Retention period in days. Minimum: 1. Maximum: 540. |
| config.classificationChanges | object | - | Monitor classification changes from Spectra Intelligence directly on the Spectra Detect Dashboard. |
| config.classificationChanges.enabled | bool | false | Subscribe to classification changes. |
| config.deepCloudAnalysis | object | - | Configure Deep Cloud Analysis. This uploads files to Spectra Intelligence for scanning with multiple AV engines, refining the final verdict (classification), risk score, and threat name. Depending on the configuration, this may increase processing load and require additional resources. |
| config.deepCloudAnalysis.enabled | bool | false | Enable/disable Deep Cloud Analysis. |
| config.deepCloudAnalysis.scanner1 | string | "" | AV scanner results to display in the Detections Overview preview. |
| config.deepCloudAnalysis.scanner2 | string | "" | AV scanner results to display in the Detections Overview preview. |
| config.deepCloudAnalysis.scanner3 | string | "" | AV scanner results to display in the Detections Overview preview. |
| config.deepCloudAnalysis.scanner4 | string | "" | AV scanner results to display in the Detections Overview preview. |
| config.deepCloudAnalysis.scanner5 | string | "" | AV scanner results to display in the Detections Overview preview. |
| config.rlapp | object | - | General SDM Portal configuration. |
| config.rlapp.allowedHosts | list | [] | A list of host/domain names that this application site can serve. |
| config.rlapp.sessionCookieAge | int | 604800 | Duration of the login session, in seconds. Minimum: 60 (1 minute). Maximum: 7776000 (90 days). |
| config.rlapp.sessionTimeoutAutomaticallyLogout | bool | false | If true, automatically log out inactive users. |
| config.rlapp.sessionTimeoutPeriod | int | 600 | Period of inactivity before sign out, in seconds. Minimum: 60 (1 minute). Maximum: 2592000 (30 days). |
| config.smtp | object | - | SMTP Settings. |
| config.smtp.defaultFromEmail | string | "online@reversinglabs.com" | Default email address to use for automated correspondence. |
| config.smtp.smtpUseTls | boolean | false | Use TLS. |
| config.sync | object | - | Synchronization settings. |
| config.sync.yaraRulesetsEnabled | boolean | false | Allow connected appliances to synchronize YARA rulesets. |
| config.ticloud | object | - | Spectra Intelligence settings. |
| config.ticloud.enabled | boolean | false | Enable Spectra Intelligence. |
| config.ticloud.proxyHost | string | nil | Proxy hostname for routing requests from the appliance to Spectra Intelligence. |
| config.ticloud.proxyPort | integer | 25 | Proxy port number. Minimum: 0. Maximum: 65535. |
| config.ticloud.timeout | integer | 60 | Specifies how long to wait before the Spectra Intelligence connection times out. Minimum: 1. Maximum: 1000. |
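Putting the Spectra Intelligence keys above together, a deployment behind an outbound proxy might look like the following sketch (hostname and port are placeholders):

```yaml
# Hypothetical values.yaml fragment: enable Spectra Intelligence
# and route its traffic through an internal proxy.
config:
  ticloud:
    enabled: true
    proxyHost: "proxy.internal.example.com"  # placeholder proxy hostname
    proxyPort: 3128                          # placeholder proxy port
    timeout: 60
```

With config.ticloud.enabled set to true, the Spectra Intelligence credentials secret (see the SDM Portal Secrets table) must also be provided.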
SDM Portal - LDAP Authentication Configuration
| Key | Type | Default | Description |
|---|---|---|---|
| config.ldap | object | - | Settings for LDAP authentication. |
| config.ldap.denyGroup | string | nil | Authentication will fail for any user that belongs to this group. |
| config.ldap.enabled | boolean | false | Enable/disable LDAP authentication. |
| config.ldap.groupSchemaClass | string | "group" | The objectClass value used when searching for groups. |
| config.ldap.groupSchemaNameAttr | string | "cn" | The group name field. |
| config.ldap.groupSchemaType | string | "member" | Group schema type. Allowed values: uniqueMember and member. |
| config.ldap.groupSearchBaseDn | string | nil | Root node in LDAP from which to search for groups. Example: 'cn=users,dc=example,dc=com'. |
| config.ldap.groupSearchScope | integer | 2 | Scope. Allowed values: 0 (Base), 1 (One level), 2 (Subtree), 3 (Subordinate). |
| config.ldap.host | string | "" | Hostname or IP of the server running LDAP. |
| config.ldap.port | integer | 389 | LDAP server port. Minimum: 0. Maximum: 65535. |
| config.ldap.requireGroup | string | nil | Authentication will fail for any user that does not belong to this group. Example: 'cn=enabled,ou=groups,dc=example,dc=com' |
| config.ldap.tls | boolean | true | If true, use Transport Layer Security (TLS) connection. |
| config.ldap.tlsRequireCert | boolean | false | If true, TLS certificate is required. |
| config.ldap.userAttrMapEmail | string | "mail" | Field to map to the email address. |
| config.ldap.userAttrMapFirstName | string | "givenName" | Field to map to the first name. |
| config.ldap.userAttrMapLastName | string | "sn" | Field to map to the last name. |
| config.ldap.userFlagsByGroupIsActive | string | nil | Users will be marked as active only if they belong to this group. Example: 'cn=active,ou=users,dc=example,dc=com'. |
| config.ldap.userFlagsByGroupIsSuperuser | string | nil | Users will be marked as superusers only if they belong to this group. Example: 'cn=admins,ou=groups,dc=example,dc=com'. |
| config.ldap.userSchemaClass | string | "user" | The objectClass value used when searching for users. |
| config.ldap.userSchemaNameAttr | string | "sAMAccountName" | The username field. Examples: 'sAMAccountName' or 'cn'. |
| config.ldap.userSearchBaseDn | string | nil | Root node in LDAP from which to search for users. Example: 'cn=users,dc=example,dc=com'. |
| config.ldap.userSearchScope | integer | 2 | Scope. Allowed values: 0 (Base), 1 (One level), 2 (Subtree), 3 (Subordinate). |
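The LDAP keys above can be combined into a values sketch such as the following; all DNs and the hostname are placeholders and would need to match your directory:

```yaml
# Hypothetical values.yaml fragment: LDAP authentication over TLS
# with group-based access control.
config:
  ldap:
    enabled: true
    host: "ldap.example.com"   # placeholder LDAP server
    port: 389
    tls: true
    tlsRequireCert: true       # requires the ldap cacert/cert secrets to exist
    userSearchBaseDn: "cn=users,dc=example,dc=com"
    groupSearchBaseDn: "cn=groups,dc=example,dc=com"
    requireGroup: "cn=detect-users,ou=groups,dc=example,dc=com"
```

Because config.ldap.enabled is true, the LDAP bind credentials secret must exist; with tls and tlsRequireCert enabled, the cacert and cert secrets are required as well.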
SDM Portal - OIDC Authentication Configuration
| Key | Type | Default | Description |
|---|---|---|---|
| config.oidc | object | - | Configure authentication with an OpenID Connect client. |
| config.oidc.accessDenyGroup | string | nil | Authentication will fail for any user that belongs to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| config.oidc.accessRequireGroup | string | nil | Authentication will fail for any user that does not belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| config.oidc.audience | string | nil | Identifies the intended recipient of the token. |
| config.oidc.claimsSource | string | "ID_TOKEN" | Source used to extract user claims. Allowed values: ID_TOKEN, USER_INFO_ENDPOINT, ACCESS_TOKEN. |
| config.oidc.clientType | string | "CONFIDENTIAL" | Authenticate with client secret (CONFIDENTIAL) or without (PUBLIC). |
| config.oidc.enabled | boolean | false | Enable/disable authentication with OIDC. |
| config.oidc.issuer | string | nil | Issuer. |
| config.oidc.mapClaimAccessGroupsDelimiter | string | nil | Character to split the User Access Groups on. Optional. The maximum length of the delimiter is 2 characters. These characters must not be used in any access group name. |
| config.oidc.mapClaimEmail | string | "email" | The claim containing the unique email address. |
| config.oidc.mapClaimFirstName | string | "given_name" | The claim containing the user's first name. |
| config.oidc.mapClaimGroups | string | "group" | The claim containing the list of user's groups. |
| config.oidc.mapClaimGroupsDelimiter | string | nil | Character to split the Groups string on. Optional. |
| config.oidc.mapClaimLastName | string | "family_name" | The claim containing the user's last name. |
| config.oidc.mapClaimUsername | string | "unique_name" | The claim containing the unique username. |
| config.oidc.opAuthorizationEndpoint | string | nil | URL of your OpenID Connect provider authorization endpoint. |
| config.oidc.opJwksEndpoint | string | nil | URL of your OpenID Connect provider JWKS endpoint. |
| config.oidc.opTokenEndpoint | string | nil | URL of your OpenID Connect provider token endpoint. |
| config.oidc.opUserEndpoint | string | nil | URL of your OpenID Connect provider userinfo endpoint. |
| config.oidc.pkceEnabled | boolean | false | If true, use PKCE (Proof Key for Code Exchange) to prevent authorization code interception. |
| config.oidc.promptLogin | boolean | false | If true, require the authorization server to reauthenticate the user even if the user is already authenticated. |
| config.oidc.relyingPartId | string | nil | Relying Party ID. |
| config.oidc.rpIdpSignKey | string | nil | The key used to sign ID tokens when using an RSA sign algorithm. |
| config.oidc.rpScopes | string | "openid allatclaims" | The OpenID Connect scopes to request during login. |
| config.oidc.rpSignAlgo | string | "RS256" | Signature algorithm. Allowed values: RS256 and HS256. |
| config.oidc.userFlagsByGroupIsActive | string | nil | Users will be marked as active only if they belong to one or more of the provided group(s). Use the access group delimiter to separate multiple group names. |
| config.oidc.userFlagsByGroupIsSuperuser | string | nil | Users will be marked as superusers only if they belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| config.oidc.verifySsl | boolean | true | Controls whether the OpenID Connect client verifies the SSL certificate of the OP responses. |
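As an example of how the OIDC keys fit together, a confidential client against a generic provider could be sketched like this (all URLs are placeholders for your provider's endpoints):

```yaml
# Hypothetical values.yaml fragment: OIDC with a confidential client
# and PKCE enabled.
config:
  oidc:
    enabled: true
    clientType: "CONFIDENTIAL"
    issuer: "https://idp.example.com"                              # placeholder issuer
    opAuthorizationEndpoint: "https://idp.example.com/oauth2/authorize"
    opTokenEndpoint: "https://idp.example.com/oauth2/token"
    opJwksEndpoint: "https://idp.example.com/oauth2/keys"
    pkceEnabled: true
    rpScopes: "openid allatclaims"
```

The rp_client_id and rp_client_secret pair comes from the OIDC secret (global.secrets.oidc.customSecretName or the chart default), not from these values.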
SDM Portal - SAML Authentication Configuration
| Key | Type | Default | Description |
|---|---|---|---|
| config.saml | object | - | Configure authentication with SAML. |
| config.saml.accessDenyGroup | string | nil | Authentication will fail for any user that belongs to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| config.saml.accessRequireGroup | string | nil | Authentication will fail for any user that does not belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| config.saml.allowUnsolicited | bool | false | Allow unsolicited responses from IdP. |
| config.saml.enabled | boolean | false | Enable/disable SAML authentication. |
| config.saml.entityId | string | nil | Entity ID. |
| config.saml.mapClaimAccessGroupsDelimiter | string | nil | Character to split the User Access Groups on. Optional. The maximum length of the delimiter is 2 characters. These characters must not be used in any access group name. |
| config.saml.mapClaimEmail | string | "email" | The claim containing the unique email address. |
| config.saml.mapClaimFirstName | string | "given_name" | The claim containing the user's first name. |
| config.saml.mapClaimGroups | string | "group" | The claim that contains the list of user's groups. |
| config.saml.mapClaimGroupsDelimiter | string | nil | Character to split the Groups string on. Optional. |
| config.saml.mapClaimLastName | string | "family_name" | The claim containing the user's last name. |
| config.saml.mapClaimUsername | string | "unique_name" | The claim containing the unique username. |
| config.saml.userFlagsByGroupIsActive | string | nil | Users will be marked as active only if they belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| config.saml.userFlagsByGroupIsSuperuser | string | nil | Users will be marked as superusers only if they belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
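A minimal SAML configuration built from the keys above might look like the following sketch (entity ID and group name are placeholders):

```yaml
# Hypothetical values.yaml fragment: SAML authentication with
# group-restricted access.
config:
  saml:
    enabled: true
    entityId: "https://detect.example.com/saml/metadata"  # placeholder entity ID
    mapClaimGroups: "group"
    accessRequireGroup: "detect-users"                    # placeholder group name
```

The IdP federation metadata itself is supplied via the SAML secret (metadata.xml key), not through these values.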
SDM Portal Secret Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| secrets | object | - | Secret configuration values for Spectra Detect Portal |
| secrets.clickhouse | object | - | Secret configuration values for Clickhouse credentials. |
| secrets.clickhouse.customSecretName | string | nil | Custom secret name for clickhouse credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.clickhouse.customSecretName will be used as a secret name. |
| secrets.ldap | object | - | LDAP secret configuration values |
| secrets.ldap.cacert.caCertificate | string | nil | LDAP CA certificate in PEM format. Creates a Secret with the ca.pem key. Can be set with --set-file secrets.ldap.cacert.caCertificate=/path/to/file/ca_certificate.pem. |
| secrets.ldap.cacert.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the caCertificate value parameters. WARNING: Use this for convenience/testing only. |
| secrets.ldap.cacert.customSecretName | string | nil | Custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.ldap.cacert.customSecretName will be used as a secret name. |
| secrets.ldap.cert.certificate | string | nil | LDAP client certificate in PEM format. Creates a Secret with the cert.pem key. Can be set with --set-file secrets.ldap.cert.certificate=/path/to/file/certificate.pem. |
| secrets.ldap.cert.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the certificate value parameters. WARNING: Use this for convenience/testing only. |
| secrets.ldap.cert.customSecretName | string | nil | Custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.ldap.cert.customSecretName will be used as a secret name. |
| secrets.ldap.credentials.bindDn | string | "" | LDAP Bind DN. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.ldap.credentials.bindPassword | string | "" | LDAP Bind Password. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.ldap.credentials.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the bindDn and bindPassword parameters. WARNING: Use this for convenience/testing only. |
| secrets.ldap.credentials.customSecretName | string | nil | Custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.ldap.credentials.customSecretName will be used as a secret name. |
| secrets.oidc | object | - | Secret configuration values for setting the OIDC client credentials. |
| secrets.oidc.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the rpClientId and rpClientSecret parameters. WARNING: Use this for convenience/testing only. |
| secrets.oidc.customSecretName | string | nil | Custom secret name for OIDC credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.oidc.customSecretName will be used as a secret name. |
| secrets.oidc.rpClientId | string | "" | OpenID Connect client ID provided by OpenID Connect provider. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.oidc.rpClientSecret | string | "" | OpenID Connect client secret provided by OpenID Connect provider. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.saml | object | - | Secret configuration values for setting the SAML integration. |
| secrets.saml.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the metadata value. WARNING: Use this for convenience/testing only. |
| secrets.saml.customSecretName | string | nil | Custom secret name of the secret containing SAML federation metadata under the metadata.xml key. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.saml.customSecretName will be used as a secret name. |
| secrets.saml.metadata | string | nil | SAML federation metadata XML. Creates a Secret with the metadata.xml key. Can be set with --set-file secrets.saml.metadata=/path/to/file/metadata.xml. |
| secrets.tiCloud | object | - | Secret configuration values for setting the Spectra Intelligence credentials. |
| secrets.tiCloud.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.tiCloud.customSecretName | string | nil | Custom secret name for Spectra Intelligence Cloud credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.tiCloud.customSecretName will be used as a secret name. |
| secrets.tiCloud.password | string | "" | Spectra Intelligence password. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.tiCloud.username | string | "" | Spectra Intelligence username. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.tiCloudProxy | object | - | Secret configuration values for setting the Spectra Intelligence credentials when proxy is used. |
| secrets.tiCloudProxy.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.tiCloudProxy.customSecretName | string | nil | Custom secret name for Spectra Intelligence Cloud proxy credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.tiCloudProxy.customSecretName will be used as a secret name. |
| secrets.tiCloudProxy.password | string | "" | Cloud proxy password. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.tiCloudProxy.username | string | "" | Cloud proxy username. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
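For production, the tables above recommend pre-creating secrets rather than using createUserSecret. A hypothetical manifest for the Spectra Intelligence credentials could look like this (name and credentials are placeholders):

```yaml
# Hypothetical manifest: pre-create the basic-auth secret referenced
# by secrets.tiCloud.customSecretName (or the global equivalent).
apiVersion: v1
kind: Secret
metadata:
  name: my-ticloud-credentials   # placeholder; must match customSecretName
type: kubernetes.io/basic-auth
stringData:
  username: "ticloud-user"       # placeholder credentials
  password: "change-me"
```

Apply it in the release namespace before installing the chart, ideally sourced from a secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) rather than a plain file.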
SDM Portal Component Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| affinity | object | {} | Affinity for pod scheduling. Lets you constrain which nodes your pod can be scheduled on based on node labels, or ensure pods are co-located with (or isolated from) other pods. |
| autoscaling | object | - | Autoscaling configuration values. |
| autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| autoscaling.enabled | bool | false | Enable/disable autoscaling. |
| autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| autoscaling.minReplicas | int | 1 | Minimum number of replicas that need to be deployed. |
| autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| autoscaling.scaleUp.stabilizationWindow | int | 15 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| autoscaling.triggerCPUValue | int | 75 | CPU value (in percentage), which will cause scaling when reached. The percentage is taken from the resource.limits.cpu value. Limits have to be set up. |
| clickhouse.releaseName | string | "" | Clickhouse release name, required when deployment is not done with umbrella. |
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| image | object | - | Configuration values of the SDM Portal image. |
| image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.tag | string | "6.0.0-24" | Image tag. |
| imagePullSecrets | list | [{"name":"rl-registry-key"}] | Stored credentials (authentication tokens) that allow a Kubernetes node to authenticate with a private container registry and pull restricted images. Each item must be an object with a 'name' key. |
| ingress | object | - | Ingress configuration values. |
| ingress.annotations | object | {} | Custom annotations to fine-tune the Ingress Controller behavior. |
| ingress.className | string | "nginx" | IngressClass that will handle this resource. Must match an existing 'IngressClass' in the cluster. |
| ingress.enabled | bool | true | Enable/disable the creation of the Ingress resource. |
| ingress.host | string | "" | The Fully Qualified Domain Name (FQDN) for the application. Required if 'enabled' is true. |
| ingress.paths[n] | object | - | A list of paths for this host. Each path must have a path and a pathType. For the root path, use "/" with pathType "Prefix". |
| ingress.paths[n].path | string | "/" | The URL path that this rule applies to. A single forward slash (/) represents the root or "catch-all" path. |
| ingress.paths[n].pathType | string | "Prefix" | Determines how the Ingress controller matches the URL path. 'Prefix' matches based on a URL path prefix split by '/'. This is the most common and flexible setting for web applications. |
| ingress.tls | object | - | TLS / SSL Configuration. |
| ingress.tls.certificateArn | string | "" | The Amazon Resource Name (ARN) of the certificate for AWS Load Balancer Controller. Only applicable when 'className' is 'alb'. |
| ingress.tls.issuer | string | "" | The name of the cert-manager ClusterIssuer or Issuer to request certificates from. |
| ingress.tls.issuerKind | string | "Issuer" | The resource type of the issuer. Typically, 'Issuer' (namespace-scoped) or 'ClusterIssuer' (cluster-wide). |
| initImage | object | - | Configuration values of the image used for SDM Portal initialization. |
| initImage.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| initImage.tag | string | "6.0.0-14" | Image tag. |
| loki.releaseName | string | "" | Loki release name, required when deployment is not done with umbrella. |
| nodeSelector | object | {} | Node labels for pod assignment. Pods will only be scheduled to nodes that match all labels defined here. |
| postfix.releaseName | string | "" | Postfix release name, required when deployment is not done with umbrella. |
| postgres.customSecretName | string | nil | Postgres custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.postgresCustomSecretName will be used as a secret name. |
| postgres.releaseName | string | "" | Postgres release name, required when deployment is not done with umbrella. |
| prometheus.releaseName | string | "" | Prometheus configuration release name, required when deployment is not done with umbrella. |
| rabbitmq.customAdminSecretName | string | nil | RabbitMQ custom admin secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqAdminCustomSecretName will be used as a secret name. |
| rabbitmq.customSecretName | string | nil | RabbitMQ custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqCustomSecretName will be used as a secret name. |
| rabbitmq.releaseName | string | "" | RabbitMQ release name, required when deployment is not done with umbrella. |
| replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "3000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "4Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "2000m" | CPU request. |
| resources.requests.memory | string | "2Gi" | Memory request. |
| seaweedfs.releaseName | string | "" | SeaweedFS release name, required when deployment is not done with umbrella. |
| secrets.ldap.cacert | object | - | Secret containing PEM encoded file with CA certificate under the ca.pem key. |
| secrets.ldap.cert | object | - | Secret containing PEM encoded certificate for client certificate authentication under the cert.pem key. |
| secrets.ldap.credentials | object | - | Secret containing LDAP bind (bindDn and bindPassword) credentials. |
| serviceAccount | object | - | Service Account configuration values. |
| serviceAccount.annotations | object | {} | Additional annotations to add to the ServiceAccount. |
| serviceAccount.create | bool | true | Specifies whether a ServiceAccount should be created. If false, the name of an existing ServiceAccount must be provided in the name variable. |
| serviceAccount.name | string | nil | The name of the ServiceAccount to use. If not set and create is true, default account name is used. |
| useReloader | string | nil | Whether to enable Reloader annotations. When defined, this value takes precedence over global.useReloader. |
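To tie the component values together, a Portal exposed through nginx ingress with CPU-based autoscaling could be sketched as follows (hostname and issuer are placeholders):

```yaml
# Hypothetical values.yaml fragment: ingress with cert-manager TLS,
# plus CPU-triggered autoscaling for the Portal.
ingress:
  enabled: true
  className: "nginx"
  host: "detect.example.com"     # placeholder FQDN
  tls:
    issuer: "letsencrypt-prod"   # placeholder issuer name
    issuerKind: "ClusterIssuer"
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 4
  triggerCPUValue: 75            # percentage of resources.limits.cpu; limits must be set
```

Note that replicaCount is ignored once autoscaling is enabled, and the CPU trigger requires resources.limits.cpu to be defined.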
Celery Worker
Celery Worker Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| affinity | object | {} | Affinity for pod scheduling. Lets you constrain which nodes your pod can be scheduled on based on node labels, or ensure pods are co-located with (or isolated from) other pods. |
| autoscaling | object | - | Autoscaling configuration values. |
| autoscaling.cooldownPeriod | int | 180 | The period to wait after the last trigger reported active before scaling the resource back to 0, in seconds. |
| autoscaling.enabled | bool | false | Enable/disable autoscaling. |
| autoscaling.maxReplicas | int | 8 | Maximum number of replicas that can be deployed when scaling is enabled. |
| autoscaling.minReplicas | int | 1 | Minimum number of replicas that need to be deployed. |
| autoscaling.pollingInterval | int | 10 | Interval to check each trigger, in seconds. |
| autoscaling.scaleDown | object | - | ScaleDown configuration values. |
| autoscaling.scaleDown.stabilizationWindow | int | 180 | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. |
| autoscaling.scaleUp | object | - | ScaleUp configuration values. |
| autoscaling.scaleUp.numberOfPods | int | 1 | Number of pods that can be scaled in the defined period. |
| autoscaling.scaleUp.period | int | 30 | Interval in which the numberOfPods value is applied. |
| autoscaling.scaleUp.stabilizationWindow | int | 15 | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. |
| autoscaling.targetInputQueueSize | int | 10 | Number of messages in backlog to trigger scaling on. Must be greater than 0. |
| clickhouse.releaseName | string | "" | Clickhouse release name, required when deployment is not done with umbrella. |
| image | object | - | Configuration values of the SDM celery worker image. |
| image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.tag | string | "6.0.0-24" | Image tag. |
| imagePullSecrets | list | [{"name":"rl-registry-key"}] | Set of stored credentials (authentication tokens) that allow the Kubernetes node to "log in" to a private container registry and pull restricted images. Each item must be an object with a 'name' key. |
| nodeSelector | object | {} | Node labels for pod assignment. Pods will only be scheduled to nodes that match all labels defined here. |
| postfix.releaseName | string | "" | Postfix release name, required when deployment is not done with umbrella. |
| postgres.customSecretName | string | nil | Postgres custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.postgresCustomSecretName will be used as a secret name. |
| postgres.releaseName | string | "" | Postgres release name, required when deployment is not done with umbrella. |
| rabbitmq.customAdminSecretName | string | nil | RabbitMQ custom admin secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqAdminCustomSecretName will be used as a secret name. |
| rabbitmq.customSecretName | string | nil | RabbitMQ custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqCustomSecretName will be used as a secret name. |
| rabbitmq.releaseName | string | "" | Rabbitmq release name, required when deployment is not done with umbrella. |
| replicaCount | int | 1 | Number of desired pod instances. Ignored when autoscaling is enabled. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "3000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "4Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "1000m" | CPU request. |
| resources.requests.memory | string | "2Gi" | Memory request. |
| seaweedfs.releaseName | string | "" | SeaweedFS release name, required when deployment is not done with umbrella. |
| secrets.clickhouse | object | - | Secret configuration values for setting the clickhouse credentials. |
| secrets.clickhouse.customSecretName | string | nil | Custom secret name for clickhouse credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.clickhouse.customSecretName will be used as a secret name. |
| secrets.oidc | object | - | Secret configuration values for setting the OIDC credentials. |
| secrets.oidc.customSecretName | string | nil | Custom secret name for OIDC credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.oidc.customSecretName will be used as a secret name. |
| secrets.tiCloud | object | - | Secret configuration values for setting the Spectra Intelligence cloud credentials. |
| secrets.tiCloud.customSecretName | string | nil | Custom secret name for Spectra Intelligence credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.tiCloud.customSecretName will be used as a secret name. Otherwise, if left unset, default secret name will be used. |
| secrets.tiCloudProxy | object | - | Secret configuration values for setting the Spectra Intelligence cloud proxy credentials. |
| secrets.tiCloudProxy.customSecretName | string | nil | Custom secret name for Spectra Intelligence proxy credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.tiCloudProxy.customSecretName will be used as a secret name. Otherwise, if left unset, default secret name will be used. |
| serviceAccount | object | - | Service Account configuration values. |
| serviceAccount.annotations | object | {} | Additional annotations to add to the ServiceAccount. |
| serviceAccount.create | bool | true | Specifies whether a ServiceAccount should be created. If false, the name of an existing ServiceAccount must be provided in the name variable. |
| serviceAccount.name | string | nil | The name of the ServiceAccount to use. If not set and create is true, the default account name is used. |
| useReloader | string | nil | Whether to enable Reloader annotations. When defined, this value takes precedence over global.useReloader. |
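To illustrate how the autoscaling keys above fit together, the following values fragment is a minimal sketch assembled from the table. Where this fragment nests in the umbrella values file (i.e. under which subchart key) depends on your deployment and is not shown here.

```yaml
# Sketch: Celery Worker autoscaling values from the table above.
# Nest under the appropriate subchart key when using the umbrella chart.
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 8
  targetInputQueueSize: 10   # backlog messages that trigger scaling; must be > 0
  pollingInterval: 10        # seconds between trigger checks
  cooldownPeriod: 180        # seconds after the last active trigger before scaling to 0
  scaleUp:
    stabilizationWindow: 15  # seconds the condition must hold before scaling up
    numberOfPods: 1          # pods added per period
    period: 30               # seconds
  scaleDown:
    stabilizationWindow: 180 # seconds the condition must fail before scaling down
replicaCount: 1              # ignored while autoscaling.enabled is true
```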
Data Change Service (DCS)
DCS Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| clickhouse.releaseName | string | "" | Clickhouse release name, required when deployment is not done with umbrella. |
| image | object | - | Configuration values of the DCS image. |
| image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.tag | string | "1.31.0-1" | Image tag. |
| imagePullSecrets | list | [{"name":"rl-registry-key"}] | Set of stored credentials (authentication tokens) that allow the Kubernetes node to "log in" to a private container registry and pull restricted images. Each item must be an object with a 'name' key. |
| postfix.releaseName | string | "" | Postfix release name, required when deployment is not done with umbrella. |
| postgres.customSecretName | string | nil | Postgres custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.postgresCustomSecretName will be used as a secret name. |
| postgres.releaseName | string | "" | Postgres release name, required when deployment is not done with umbrella. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "2000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "2Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "500m" | CPU request. |
| resources.requests.memory | string | "512Mi" | Memory request. |
| secrets.clickhouse | object | - | Secret configuration values for setting the clickhouse credentials. |
| secrets.clickhouse.customSecretName | string | nil | Custom secret name for clickhouse credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.clickhouse.customSecretName will be used as a secret name. |
| secrets.tiCloud | object | - | Secret configuration values for setting the Spectra Intelligence cloud credentials. |
| secrets.tiCloud.customSecretName | string | nil | Custom secret name for Spectra Intelligence cloud credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.tiCloud.customSecretName will be used as a secret name. |
| useReloader | string | nil | Whether to enable Reloader annotations. When defined, this value takes precedence over global.useReloader. |
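Several DCS keys above are required only for standalone deployments. A minimal sketch of a non-umbrella DCS values fragment, with illustrative release names:

```yaml
# Sketch: standalone (non-umbrella) DCS deployment must name the releases
# of its dependencies explicitly. Release names are examples only.
clickhouse:
  releaseName: "my-clickhouse"
postgres:
  releaseName: "my-postgres"
postfix:
  releaseName: "my-postfix"
```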
ClickHouse
ClickHouse Secrets
| Secret (when custom secret name is used) | Secret (default secret name) | Type | Description | Used in Components (Pods) |
|---|---|---|---|---|
| <global.secrets.clickhouse.customSecretName> | <release_name>-clickhouse-secret | Required when centralLogging is enabled | Basic authentication secret used to connect to Clickhouse. The secret is either created via the clickhouse chart or must already exist. Required when central logging is enabled (config.centralLogging.enabled). | Portal, Celery, DCS, Clickhouse |
ClickHouse Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| database | string | "default" | Database name. |
| host | string | "" | Host for external Clickhouse. When configured, the Clickhouse deployment won't be created. |
| image | object | - | Configuration values of the clickhouse image. |
| image.pullPolicy | string | "Always" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.repository | string | "clickhouse/clickhouse-server" | Image repository. |
| image.tag | string | "22.12" | Image tag. |
| persistence | object | - | Data storage configuration. |
| persistence.accessModes | list | ["ReadWriteOnce"] | Specifies the access modes for the volume. |
| persistence.requestStorage | string | "10Gi" | The amount of storage to request. |
| persistence.storageClassName | string | nil | Name of the StorageClass to use. |
| port | int | 9000 | Clickhouse port. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "16" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "32Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "8" | CPU request. |
| resources.requests.memory | string | "16Gi" | Memory request. |
| useReloader | string | nil | Whether to enable Reloader annotations. When defined, this value takes precedence over global.useReloader. |
Clickhouse Secret Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| secrets | object | - | Secret configuration values for Clickhouse |
| secrets.credentials.createUserSecret | bool | true | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.credentials.customSecretName | string | nil | Custom secret name for Clickhouse credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.clickhouse.customSecretName will be used as a secret name. |
| secrets.credentials.password | string | "chPass12345" | Clickhouse password. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.credentials.username | string | "chUser" | Clickhouse username. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
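Since the chart-created credentials above are intended for testing only, a production deployment would typically disable them and reference an externally managed secret. A sketch (the secret name is illustrative, and the keys inside the secret must match what the chart consumes):

```yaml
# Sketch: disable the chart-created ClickHouse secret and point at an
# externally managed one (e.g. from External Secrets or Vault).
# The secret name below is illustrative.
secrets:
  credentials:
    createUserSecret: false
    customSecretName: "my-clickhouse-credentials"  # must already exist in the namespace
```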
Postfix
Postfix Secrets
| Secret (when custom secret name is used) | Secret (default secret name) | Type | Description | Used in Components (Pods) |
|---|---|---|---|---|
| <global.secrets.postfix.customSecretName> | <release_name>-postfix-secret | Required when the related feature is enabled | Basic authentication secret for SMTP. The secret is either created via the postfix chart or must already exist. | Postfix |
Postfix Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| image | object | - | Configuration values of the postfix image. |
| image.pullPolicy | string | "IfNotPresent" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.repository | string | "juanluisbaptiste/postfix" | Image repository. |
| image.tag | string | "1.8.0" | Image tag. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "1000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "1Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "500m" | CPU request. |
| resources.requests.memory | string | "512Mi" | Memory request. |
| smtp | object | - | SMTP configuration values. These values configure the upstream relay server used by the internal Postfix service. |
| smtp.hostname | string | "reversinglabs.com" | The hostname used for the SMTP handshake. Emails will appear to originate from this domain. |
| smtp.port | int | 25 | Port for the mail server. |
| smtp.server | string | "localhost" | The FQDN or IP address of the mail server. |
| smtp.useTls | bool | true | Whether to use TLS when connecting to the mail server. |
| useReloader | string | nil | Whether to enable Reloader annotations. When defined, this value takes precedence over global.useReloader. |
Postfix Secret Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| secrets | object | - | Secret configuration values for Postfix SMTP credentials. |
| secrets.credentials.createUserSecret | bool | false | If true, the chart creates a Kubernetes Secret using the username and password parameters. WARNING: Use this for convenience/testing only. |
| secrets.credentials.customSecretName | string | nil | Custom secret name for SMTP credentials. If umbrella chart is used for deployment, this value is ignored, and the value from global.secrets.postfix.customSecretName will be used as a secret name. |
| secrets.credentials.password | string | "" | SMTP password. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
| secrets.credentials.username | string | "" | SMTP username. Only used if 'createUserSecret' is set to 'true'. WARNING: Use this for convenience/testing only. Do not store sensitive credentials in configuration files that are committed to Git. Consider using a dedicated secret management solution (e.g., Sealed Secrets, External Secrets, or Vault) for production environments. |
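Putting the SMTP keys together, here is a sketch of Postfix values relaying through an external authenticated server. The hostnames, port choice, and secret name are illustrative:

```yaml
# Sketch: relay mail through an external SMTP server over TLS, with
# credentials supplied via an externally managed secret.
smtp:
  hostname: "example.com"      # domain mail will appear to originate from
  server: "smtp.example.com"   # upstream relay FQDN
  port: 587
  useTls: true
secrets:
  credentials:
    createUserSecret: false
    customSecretName: "my-smtp-credentials"  # must already exist
```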
SeaweedFS
SeaweedFS Configuration Values
| Key | Type | Default | Description |
|---|---|---|---|
| image.pullPolicy | string | "IfNotPresent" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.repository | string | "chrislusf/seaweedfs" | Repository name. |
| image.tag | string | "3.62" | Image tag. |
| persistence | object | - | SeaweedFS data storage configuration. |
| persistence.accessModes | list | ["ReadWriteOnce"] | Specifies the access mode for the volume. |
| persistence.requestStorage | string | "20Gi" | The amount of storage to request. |
| persistence.storageClassName | string | nil | Name of the StorageClass to use. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "3000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "4Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "1000m" | CPU request. |
| resources.requests.memory | string | "2Gi" | Memory request. |
| s3.port | int | 8333 | Port for the S3-compatible API service. Allows tools like AWS CLI, Minio, and various SDKs to interact with SeaweedFS as an object store. |
| server.filerPort | int | 8888 | Port for the Filer server. Provides the filesystem interface and metadata management. |
| server.masterPort | int | 9333 | Port for the Master server. Manages volume assignments and cluster coordination. |
| server.maxVolumes | int | 30 | Maximum number of physical volumes this node is allowed to host. Total capacity = maxVolumes × volumeSizeLimitMB. |
| server.minFreeSpaceGiB | int | 1 | Minimum threshold of free disk space in GiB. If remaining space falls below this, the server stops accepting new writes to prevent disk exhaustion. |
| server.volumePort | int | 8099 | Port for the Volume server. Handles the actual read/write operations for data chunks. |
| server.volumeSizeLimitMB | int | 20000 | Maximum size in MB for each physical volume file before it is marked as read-only. |
| useReloader | string | nil | Whether to enable Reloader annotations. When defined, this value takes precedence over global.useReloader. |
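Because maxVolumes and volumeSizeLimitMB together bound total capacity, the persistent volume should be sized consistently with them. A sketch with the arithmetic spelled out:

```yaml
# Sketch: SeaweedFS capacity sizing.
# Total capacity = maxVolumes × volumeSizeLimitMB
#                = 30 × 20000 MB = 600000 MB (~600 GB) with the defaults.
# persistence.requestStorage caps what can actually be stored, so raise it
# (or lower maxVolumes) if the two values are far apart.
server:
  maxVolumes: 30
  volumeSizeLimitMB: 20000
  minFreeSpaceGiB: 1         # writes stop below this free-space threshold
persistence:
  requestStorage: "20Gi"
```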
RabbitMQ
RabbitMQ Secrets
| Secret (when custom secret name is used) | Secret (default secret name) | Type | Description |
|---|---|---|---|
| <global.rabbitmqCustomSecretName> | <release_name>-rabbitmq-secret | required | Basic authentication secret containing the username and password for RabbitMQ. If rabbitmq.createUserSecret is set to true, it is created automatically from username and password. Otherwise, the secret must already exist. |
| <global.rabbitmqAdminCustomSecretName> | <release_name>-rabbitmq-secret-admin | optional | Basic authentication secret containing the username and password for the RabbitMQ admin. If rabbitmq.createManagementAdminSecret is set to true, it is created automatically from managementAdminUsername and managementAdminPassword. If the secret is missing or its credentials are invalid, the credentials from the secret above (<release_name>-rabbitmq-secret) are used. |
RabbitMQ Values
| Key | Type | Default | Description |
|---|---|---|---|
| affinity | object | {} | Affinity for pod scheduling. Constrains which nodes your pods can be scheduled on based on node labels, and can co-locate pods with (or isolate them from) other pods. |
| applyRestrictedPolicy | bool | false | Apply restricted policy to RabbitMQ containers. If umbrella is used for deployment, this value is ignored, and global.applyRestrictedPolicy is used. |
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| createManagementAdminSecret | bool | true | If true, a management admin secret is created automatically from the given admin username and password; otherwise, the secret must already exist. |
| createUserSecret | bool | true | If true, a user secret is created automatically from the given username and password; otherwise, the secret must already exist. |
| customAdminSecretName | string | nil | RabbitMQ custom admin secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqAdminCustomSecretName will be used as a secret name. |
| customSecretName | string | nil | RabbitMQ custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.rabbitmqCustomSecretName will be used as a secret name. |
| host | string | "" | Host for external RabbitMQ. When configured, the Detect RabbitMQ cluster won't be created. |
| image | object | - | Configuration values of the rabbitmq image. |
| image.repository | string | "rabbitmq" | Image repository. |
| image.tag | string | "4.1.0-management" | Image tag. |
| managementAdminPassword | string | "" | Management admin password. If left empty, defaults to password value. |
| managementAdminUrl | string | "" | Management Admin URL. If empty, defaults to http://<host>:15672. |
| managementAdminUsername | string | "" | Management admin username. If left empty, defaults to username value. |
| password | string | "guest_11223" | Password. |
| persistence | object | - | RabbitMQ data storage configuration. |
| persistence.requestStorage | string | "5Gi" | The amount of storage to request. |
| persistence.storageClassName | string | nil | Name of the StorageClass to use. |
| port | int | 5672 | RabbitMQ port. |
| replicas | int | 1 | Number of desired pod instances. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "2" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "2Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "1" | CPU request. |
| resources.requests.memory | string | "2Gi" | Memory request. |
| useQuorumQueues | bool | false | Setting this to true defines queues as quorum type (recommended for multi-replica/HA setups); otherwise, queues are classic. |
| useSecureProtocol | bool | false | Setting this to true enables the secure AMQPS protocol for the RabbitMQ connection. |
| username | string | "guest" | Username. |
| vhost | string | "" | Vhost that worker will use. When empty, the default rl-detect vhost is used. |
| vhostCentralLogging | string | "" | Vhost that central logging will use. When empty, the default rl-detect-centrallogging vhost is used. |
| vhostSdm | string | "" | Vhost that SDM will use. When empty, the default rl-detect-portal vhost is used. |
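To connect to an external RabbitMQ rather than the chart-managed cluster, a values fragment like the following sketch applies. The hostname and secret name are illustrative; note that customSecretName is honored only outside the umbrella chart, where global.rabbitmqCustomSecretName takes precedence.

```yaml
# Sketch: external RabbitMQ; with host set, the Detect RabbitMQ cluster
# is not created. Values are examples only.
host: "rabbitmq.example.internal"
port: 5672
useSecureProtocol: true       # connect over AMQPS
createUserSecret: false
customSecretName: "my-rabbitmq-credentials"  # must already exist
```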
PostgreSQL
PostgreSQL Secrets
| Secret (when custom secret name is used) | Secret (deployment with Detect chart) | Type | Description |
|---|---|---|---|
| <global.postgresCustomSecretName> | <release_name>-postgres-secret | required | Basic authentication secret containing the username and password for the database. If postgres.createUserSecret is set to true, it is created automatically from username and password. Otherwise, the secret must already exist. |
PostgreSQL Values
| Key | Type | Default | Description |
|---|---|---|---|
| affinity | object | {} | Affinity for pod scheduling. Allows to constrain which nodes your pod can be scheduled on based on node labels or ensure pods are co-located (or isolated) from other pods. |
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| createUserSecret | bool | true | If true, a user secret is created automatically from the given username and password; otherwise, the secret must already exist. |
| customSecretName | string | nil | Postgres custom secret name. If umbrella chart is used for deployment, this value is ignored, and the value from global.postgresCustomSecretName will be used as a secret name. |
| database | string | "tiscale" | Database name. |
| host | string | "" | Host for external PostgreSQL. When configured, the Detect PostgreSQL cluster won't be created. |
| image | object | - | Configuration values of the postgres image. |
| image.repository | string | "ghcr.io/cloudnative-pg/postgresql" | Image repository. |
| image.tag | string | "17.6" | Image tag. |
| password | string | "tiscale_11223" | Password. |
| persistence | object | - | Postgres data storage configuration. |
| persistence.accessModes | list | ["ReadWriteOnce"] | Specifies the access modes for the volume. |
| persistence.requestStorage | string | "5Gi" | The amount of storage to request. |
| persistence.storageClassName | string | nil | Name of the StorageClass to use. |
| port | int | 5432 | PostgreSQL port. |
| replicas | int | 1 | Number of desired pod instances. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | nil | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | nil | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "500m" | CPU request. |
| resources.requests.memory | string | "1Gi" | Memory request. |
| schema | string | "" | Schema name which will be used for SDM. When empty, default name "portal" will be used. |
| username | string | "tiscale" | Username. Required if host is not set, because the Detect PostgreSQL cluster will be created and this user will be set as the database owner. |
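As with RabbitMQ, an external PostgreSQL can be used instead of the chart-managed cluster. A sketch with illustrative values:

```yaml
# Sketch: external PostgreSQL; with host set, the Detect PostgreSQL
# cluster is not created. Values are examples only.
host: "postgres.example.internal"
port: 5432
database: "tiscale"
createUserSecret: false
customSecretName: "my-postgres-credentials"  # must already exist
```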
Logging
| Key | Type | Default | Description |
|---|---|---|---|
| alloy.enabled | bool | true | If enabled, the Alloy subchart will be deployed for log collection. |
| grafana.enabled | bool | true | If enabled, the Grafana subchart will be deployed for data visualization. |
| loki.enabled | bool | true | If enabled, the Loki subchart will be deployed. |
Alloy
| Key | Type | Default | Description |
|---|---|---|---|
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| config | object | - | Alloy configuration values. |
| config.extraLabels | object | {} | Extra labels to add to all collected logs. |
| config.lokiUrl | string | "" | Custom Loki URL for log forwarding. Leave empty to use the deployed Loki instance. |
| config.namespaces | list | ["default"] | List of namespaces to collect logs from. Empty array means all namespaces. |
| config.scrapeInterval | string | "15s" | Scrape interval for log collection. |
| enabled | bool | false | Enable or disable the Alloy deployment. |
| image | object | - | Configuration values of the alloy image. |
| image.pullPolicy | string | "IfNotPresent" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.repository | string | "grafana/alloy" | Image repository. |
| image.tag | string | "v1.11.3" | Image tag. |
| rbac.create | bool | true | Create RBAC resources (ClusterRole and ClusterRoleBinding for log collection). |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "500m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "512Mi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "100m" | CPU request. |
| resources.requests.memory | string | "128Mi" | Memory request. |
| serviceAccount.annotations | object | {} | Annotations to add to the service account. |
| serviceAccount.create | bool | true | Create a service account. |
| serviceAccount.name | string | "" | Name of the service account to use. If empty, a name is generated using the fullname template. |
Grafana
| Key | Type | Default | Description |
|---|---|---|---|
| admin.password | string | "" | Grafana admin password. |
| admin.username | string | "" | Grafana admin username. |
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| config | object | - | Configuration values. |
| config.lokiUrl | string | "" | Custom Loki URL for datasource. Leave empty to use the deployed Loki instance. |
| config.workerReleaseName | string | nil | Spectra Detect Worker release name, used to label logs from the Worker components in the generated dashboard. Required only when the logging chart is deployed without the umbrella chart. If the Worker itself was deployed with the umbrella chart, set this to <release-name>-wrk; otherwise, use <worker-release-name>. |
| enabled | bool | false | Enable or disable the Grafana deployment. |
| image | object | - | Configuration values of the grafana image. |
| image.pullPolicy | string | "IfNotPresent" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.repository | string | "grafana/grafana" | Image repository. |
| image.tag | string | "12.2.1" | Image tag. |
| ingress | object | - | Ingress configuration values. |
| ingress.annotations | object | {} | Custom annotations to fine-tune the Ingress Controller behavior. |
| ingress.className | string | "nginx" | IngressClass that will handle this resource. Must match an existing 'IngressClass' in the cluster. |
| ingress.enabled | bool | true | Enable/disable the creation of the Ingress resource. |
| ingress.host | string | "" | The Fully Qualified Domain Name (FQDN) for the application. Required if 'enabled' is true. |
| ingress.tls | object | - | TLS / SSL Configuration. |
| ingress.tls.certificateArn | string | "" | The Amazon Resource Name (ARN) of the certificate for AWS Load Balancer Controller. Only applicable when 'className' is 'alb'. |
| ingress.tls.issuer | string | "" | The name of the cert-manager ClusterIssuer or Issuer to request certificates from. |
| ingress.tls.issuerKind | string | "Issuer" | The resource type of the issuer. Typically, 'Issuer' (namespace-scoped) or 'ClusterIssuer' (cluster-wide). |
| persistence | object | - | Grafana data storage configuration. |
| persistence.accessMode | string | "ReadWriteOnce" | Specifies the access mode for the volume. |
| persistence.enabled | bool | true | Enable/disable persistent storage for Grafana data. |
| persistence.size | string | "1Gi" | Persistent data storage size. |
| persistence.storageClass | string | "" | Name of the StorageClass to use. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "500m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "512Mi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "100m" | CPU request. |
| resources.requests.memory | string | "128Mi" | Memory request. |
| service | object | - | Service configuration for network access. |
| service.port | int | 3000 | HTTP port number the service will listen on. |
| service.type | string | "ClusterIP" | Service type determines how the service is exposed. |
Loki
| Key | Type | Default | Description |
|---|---|---|---|
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| config | object | - | Loki configuration values. |
| config.limits.maxQuerySeries | int | 500 | Maximum number of series returned by a query. |
| config.limits.maxStreamsPerUser | int | 10000 | Maximum number of streams per user. |
| config.retentionPeriod | string | "168h" | Log retention period (e.g., "168h" for 7 days). |
| config.storage.type | string | "filesystem" | Storage backend type (filesystem, s3, gcs). |
| enabled | bool | false | Enable or disable the Loki deployment. |
| host | string | "" | External Loki host URL. When set, local Loki deployment is skipped and external Loki is used. |
| image | object | - | Configuration values of the Loki image. |
| image.pullPolicy | string | "IfNotPresent" | Image pull policy. Options: Always, IfNotPresent, Never. |
| image.repository | string | "grafana/loki" | Image repository. |
| image.tag | string | "3.5.8" | Image tag. |
| persistence | object | - | Loki data storage configuration. |
| persistence.accessMode | string | "ReadWriteOnce" | Specifies the access mode for the volume. |
| persistence.enabled | bool | true | Enable/disable persistent storage for Loki data. |
| persistence.size | string | "30Gi" | Persistent data storage size. |
| persistence.storageClass | string | "" | Name of the StorageClass to use. |
| resources | object | - | Resource requests and limits for the container. |
| resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| resources.limits.cpu | string | "1000m" | CPU limit. Throttling occurs if the container exceeds this value. |
| resources.limits.memory | string | "1Gi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| resources.requests.cpu | string | "100m" | CPU request. |
| resources.requests.memory | string | "256Mi" | Memory request. |
| service | object | - | Service configuration for network access. |
| service.grpcPort | int | 9096 | gRPC service port. |
| service.httpPort | int | 3100 | HTTP port number the service will listen on. |
| service.type | string | "ClusterIP" | Service type determines how the service is exposed. |
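The `host` key switches between a local and an external Loki. A minimal sketch of both modes follows; the `loki:` top-level nesting and the example host URL are assumptions for illustration:

```yaml
# Hypothetical values.yaml override -- the `loki:` nesting and the host
# URL are assumed. When `host` is set, the local Loki deployment is
# skipped and the external instance is used instead.
loki:
  enabled: true
  host: "https://loki.example.internal:3100"   # leave "" to deploy Loki locally
  config:
    retentionPeriod: 336h   # keep logs for 14 days instead of the default 7 (168h)
  persistence:
    size: 50Gi              # sized up from the default 30Gi for the longer retention
```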
Prometheus
| Key | Type | Default | Description |
|---|---|---|---|
| clusterDomainName | string | "cluster.local" | Cluster domain name. If umbrella is used for deployment, this value is ignored, and global.clusterDomainName is applied. |
| enabled | bool | true | Enable or disable the Prometheus configuration deployment. When 'true', only the configuration and secrets needed to connect to an external Prometheus are created; Prometheus itself is not deployed. |
| namespace | string | "monitoring" | Namespace where the external Prometheus is deployed. |
| port | int | 9090 | Prometheus port. |
| releaseName | string | "kube-prometheus-stack" | External Prometheus release name. |
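Since this section only wires the charts up to an external Prometheus, the override mainly identifies where that Prometheus lives. A minimal sketch, assuming the keys are nested under a `prometheus:` top-level key in the umbrella chart's values:

```yaml
# Hypothetical values.yaml override -- the `prometheus:` nesting is
# assumed. No Prometheus is deployed; this only generates the
# configuration and secrets other charts use to connect to it.
prometheus:
  enabled: true
  namespace: monitoring              # namespace of the existing Prometheus
  releaseName: kube-prometheus-stack # Helm release name of that Prometheus
  port: 9090
```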
Reloader
| Key | Type | Default | Description |
|---|---|---|---|
| reloader.enabled | bool | true | If enabled, Reloader will be deployed. Reloader ensures that the latest configuration is applied at all times. |
| reloader.reloader | object | - | Reloader configuration values. |
| reloader.reloader.logFormat | string | "json" | Set the type of log format. Use "json" for structured logging or leave empty "" for plain text. |
| reloader.reloader.logLevel | string | "info" | Set the verbosity of the logs. Options: trace, debug, info, warning, error, fatal, panic. |
| reloader.reloader.resources | object | - | Resource requests and limits for the container. |
| reloader.reloader.resources.limits | object | - | The maximum amount of resources the container is allowed to consume. |
| reloader.reloader.resources.limits.cpu | string | "100m" | CPU limit. Throttling occurs if the container exceeds this value. |
| reloader.reloader.resources.limits.memory | string | "512Mi" | Memory limit. If exceeded, the container may be terminated with an OOMKilled error. |
| reloader.reloader.resources.requests | object | - | The minimum amount of resources the container is guaranteed. |
| reloader.reloader.resources.requests.cpu | string | "10m" | CPU request. |
| reloader.reloader.resources.requests.memory | string | "128Mi" | Memory request. |
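The Reloader keys map directly onto a values override. A minimal sketch, following the `reloader.*` nesting shown in the table above (the chosen log settings are illustrative, not recommendations):

```yaml
# Hypothetical values.yaml override -- nesting follows the
# `reloader.reloader.*` keys in the table above.
reloader:
  enabled: true
  reloader:
    logFormat: ""     # plain-text logs instead of the default "json"
    logLevel: debug   # more verbose than the default "info"
    resources:
      requests:
        cpu: 10m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 512Mi
```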