Version: Spectra Detect 5.5.1

Helm Configuration References

Using Detect Umbrella Chart

Detect Helm is an umbrella chart that bundles the other Detect charts to deploy Spectra Detect on Kubernetes. Its purpose is to simplify and streamline the process of deploying Detect on K8s.

Enabling or disabling subcharts

Each Detect subchart can be enabled or disabled, so you can deploy only the components you need. To enable or disable a subchart, set its `<<chart_name>>.enabled` value to either true or false.

```yaml
c1000:
  enabled: true

tiscale-worker:
  enabled: true

tiscale-worker-lf:
  enabled: false

tiscale-hub:
  enabled: false

tiscale-external-appliances:
  enabled: false
```

Requirements

| Repository | Name | Version |
| --- | --- | --- |
| file://../c1000 | c1000 | 5.5.1-9 |
| file://../tiscale-hub | tiscale-hub | 5.5.1-9 |
| file://../tiscale-worker | tiscale-worker | 5.5.1-9 |
| file://../tiscale-worker | tiscale-worker-lf(tiscale-worker) | 5.5.1-9 |
| oci://alt-artifactory-prod.rl.lan/appliances-docker-test/detect/charts | tiscale-external-appliances | >=0.1.0 |
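In Helm 3, a requirements table like this maps onto the `dependencies` list in the umbrella chart's Chart.yaml. The following is a sketch only; the `condition` and `alias` fields are assumptions inferred from the enable/disable values above, not copied from the shipped chart:

```yaml
# Chart.yaml (sketch) -- dependency wiring for the umbrella chart
dependencies:
  - name: c1000
    version: 5.5.1-9
    repository: file://../c1000
    condition: c1000.enabled
  - name: tiscale-hub
    version: 5.5.1-9
    repository: file://../tiscale-hub
    condition: tiscale-hub.enabled
  - name: tiscale-worker
    version: 5.5.1-9
    repository: file://../tiscale-worker
    condition: tiscale-worker.enabled
  - name: tiscale-worker            # same chart reused under an alias for large files
    alias: tiscale-worker-lf
    version: 5.5.1-9
    repository: file://../tiscale-worker
    condition: tiscale-worker-lf.enabled
  - name: tiscale-external-appliances
    version: ">=0.1.0"
    repository: oci://alt-artifactory-prod.rl.lan/appliances-docker-test/detect/charts
    condition: tiscale-external-appliances.enabled
```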

Values

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| detectNamespaceRbac | object | {} | A set of roles and role bindings that can be created in the Detect namespace for additional RBAC configuration. |
| global.connectA1000 | bool | false | Enable connecting A1000 to C1000. |
| global.deployC1000 | bool | true | If enabled, C1000 will be deployed. |
| global.deployHub | bool | true | If enabled, the Hub will be deployed. |
| global.deployWorker | bool | true | If enabled, Workers will be deployed. |
| global.enableLargeFileWorker | bool | true | If true, large file worker mode will be enabled. |
| global.umbrella | bool | true | Notifies dependency charts that they are being used from the umbrella chart. Must always be set to true. |
| registry.authSecretName | string | "rl-registry-key" | The name of a Kubernetes secret resource used for pulling images from a Docker registry. |
| registry.authSecretPassword | string | nil | The password used for authenticating with the registry. |
| registry.authSecretUsername | string | nil | The username used for authenticating with the registry. |
| registry.createRegistrySecret | bool | true | If enabled, a Kubernetes secret for pulling images will be created. If disabled, the secret must be created manually in the namespace. |
| registry.imageRegistry | string | "registry.reversinglabs.com" | The image registry address. |
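A values override combining the global and registry keys above might look like the following sketch; the credential values are placeholders, and `global.umbrella` must stay true:

```yaml
global:
  umbrella: true            # required when deploying via the umbrella chart
  deployC1000: true
  deployHub: true
  deployWorker: true
  enableLargeFileWorker: false
  connectA1000: false

registry:
  imageRegistry: registry.reversinglabs.com
  createRegistrySecret: true
  authSecretName: rl-registry-key
  authSecretUsername: my-registry-user      # placeholder
  authSecretPassword: my-registry-password  # placeholder
```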

Spectra Detect Manager (C1000)

C1000 Helm chart for Kubernetes

Configmap configuration secrets

| Secret | Type | Description |
| --- | --- | --- |
| release_name-secret-ticloud | optional | Basic authentication secret containing the username and password for Spectra Intelligence authentication. Required when Spectra Intelligence is enabled. |
| release_name-secret-ticloud-proxy | optional | Basic authentication secret containing the username and password for Spectra Intelligence proxy authentication. Required when Spectra Intelligence is enabled and a proxy is used. |
| release_name-secret-ldap | optional | Basic authentication secret containing the username and password for LDAP authentication. Required when LDAP authentication is enabled. |
| release_name-secret-oidc | optional | Basic authentication secret containing the username and password for OIDC authentication. Required when OIDC authentication is enabled. |
| release_name-secret-smtp | optional | Basic authentication secret containing the username and password for SMTP. |
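When one of these secrets is created manually rather than by the chart, it can be provided as a standard Kubernetes basic-auth Secret. The following is a sketch for the LDAP secret, assuming a release named `detect`; the `kubernetes.io/basic-auth` type and the `username`/`password` keys are assumptions based on the "basic authentication secret" description above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: detect-secret-ldap        # follows the release_name-secret-ldap pattern
  namespace: detect               # illustrative namespace
type: kubernetes.io/basic-auth
stringData:
  username: ldap-bind-user        # placeholder
  password: ldap-bind-password    # placeholder
```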

Values

Configmap Configuration values for Spectra Detect Manager

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| appliance.configMode | string | "STANDARD" | Configuration mode of the appliance. Allowed values: CONFIGMAP (configuration is provided via ConfigMap), STANDARD (configuration is provided via UI). |
| c1000Configuration.centralFileStorage | object | - | Configure file storage in Spectra Detect Manager, allowing connected Workers to store samples for pivoting to and reprocessing in Spectra Analyze. |
| c1000Configuration.centralFileStorage.enable | bool | false | Enable/disable file storage. |
| c1000Configuration.centralFileStorage.fileSizeLimit | int | 400 | File size limit in MiB. Samples larger than the set threshold will not be stored. |
| c1000Configuration.centralFileStorage.ttl | int | 24 | Sample retention period in hours, after which the uploaded samples are removed from the Central File Storage. |
| c1000Configuration.centralLogging | object | - | Configure Central Logging, which collects and displays information about all events happening on connected Workers and the Hub. |
| c1000Configuration.centralLogging.retentionPeriod | int | 90 | Retention period in days. |
| c1000Configuration.centralStorageGeneral | object | - | Additional central storage settings. |
| c1000Configuration.centralStorageGeneral.minFreeSpace | int | 20 | Minimum free disk space in GiB. When reached, new sample uploads will not be accepted. |
| c1000Configuration.ldap | object | - | Settings for LDAP authentication. |
| c1000Configuration.ldap.denyGroup | string | "" | Authentication will fail for any user that belongs to this group. |
| c1000Configuration.ldap.enable | bool | false | Enable/disable LDAP authentication. |
| c1000Configuration.ldap.groupSchemaClass | string | "group" | The objectClass value used when searching for groups. |
| c1000Configuration.ldap.groupSchemaNameAttr | string | "cn" | The group name field. |
| c1000Configuration.ldap.groupSchemaType | string | "member" | Group schema type. Allowed values: uniqueMember and member. |
| c1000Configuration.ldap.groupSearchBaseDn | string | "" | Root node in LDAP from which to search for groups. Example: 'cn=users,dc=example,dc=com'. |
| c1000Configuration.ldap.groupSearchScope | int | 2 | Scope. Allowed values: 0 (Base), 1 (One level), 2 (Subtree), 3 (Subordinate). |
| c1000Configuration.ldap.host | string | "" | Hostname or IP of the server running LDAP. |
| c1000Configuration.ldap.port | int | 389 | LDAP server port. Default: 389 (ldap) or 636 (ldaps). |
| c1000Configuration.ldap.requireGroup | string | "" | Authentication will fail for any user that does not belong to this group. Example: 'cn=enabled,ou=groups,dc=example,dc=com'. |
| c1000Configuration.ldap.tls | bool | true | If true, use a Transport Layer Security (TLS) connection. |
| c1000Configuration.ldap.tlsCacertdir | string | "" | Directory in which the TLS CA certificate file can be found. |
| c1000Configuration.ldap.tlsCacertfile | string | "" | TLS CA certificate file. Must be in PEM format. |
| c1000Configuration.ldap.tlsRequireCert | bool | false | Verify the TLS certificate. |
| c1000Configuration.ldap.userAttrMapEmail | string | "mail" | Field to map to the email address. |
| c1000Configuration.ldap.userAttrMapFirstName | string | "givenName" | Field to map to the first name. |
| c1000Configuration.ldap.userAttrMapLastName | string | "sn" | Field to map to the last name. |
| c1000Configuration.ldap.userFlagsByGroupIsActive | string | "" | Users will be marked as active only if they belong to this group. Example: 'cn=active,ou=users,dc=example,dc=com'. |
| c1000Configuration.ldap.userFlagsByGroupIsSuperuser | string | "" | Users will be marked as superusers only if they belong to this group. Example: 'cn=admins,ou=groups,dc=example,dc=com'. |
| c1000Configuration.ldap.userSchemaClass | string | "user" | The objectClass value used when searching for users. |
| c1000Configuration.ldap.userSchemaNameAttr | string | "sAMAccountName" | The username field. Examples: 'sAMAccountName' or 'cn'. |
| c1000Configuration.ldap.userSearchBaseDn | string | "" | Root node in LDAP from which to search for users. Example: 'cn=users,dc=example,dc=com'. |
| c1000Configuration.ldap.userSearchScope | int | 2 | Scope. Allowed values: 0 (Base), 1 (One level), 2 (Subtree), 3 (Subordinate). |
| c1000Configuration.oidc | object | - | Configure authentication with an OpenID Connect client. |
| c1000Configuration.oidc.accessDenyGroup | string | "" | Authentication will fail for any user that belongs to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.oidc.accessRequireGroup | string | "" | Authentication will fail for any user that does not belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.oidc.audience | string | "" | Identifies the intended recipient of the token. |
| c1000Configuration.oidc.claimsSource | string | "ID_TOKEN" | Source used to extract user claims. Allowed values: ID_TOKEN, USER_INFO_ENDPOINT, ACCESS_TOKEN. |
| c1000Configuration.oidc.clientType | string | "CONFIDENTIAL" | Authenticate with a client secret (CONFIDENTIAL) or without one (PUBLIC). |
| c1000Configuration.oidc.enable | bool | false | Enable/disable authentication with OIDC. |
| c1000Configuration.oidc.issuer | string | "" | Issuer. |
| c1000Configuration.oidc.mapClaimAccessGroupsDelimiter | Optional | "" | Character to split the User Access Groups on. Characters entered in this field must not be used in any access group name. The maximum length of the delimiter is 2 characters. |
| c1000Configuration.oidc.mapClaimEmail | string | "email" | The claim containing the unique email address. |
| c1000Configuration.oidc.mapClaimFirstName | string | "given_name" | The claim containing the user's first name. |
| c1000Configuration.oidc.mapClaimGroups | string | "group" | The claim containing the list of the user's groups. |
| c1000Configuration.oidc.mapClaimGroupsDelimiter | Optional | "" | Character to split the Groups string on. |
| c1000Configuration.oidc.mapClaimLastName | string | "family_name" | The claim containing the user's last name. |
| c1000Configuration.oidc.mapClaimUsername | string | "unique_name" | The claim containing the unique username. |
| c1000Configuration.oidc.opAuthorizationEndpoint | string | "" | URL of your OpenID Connect provider authorization endpoint. |
| c1000Configuration.oidc.opJwksEndpoint | string | "" | URL of your OpenID Connect provider JWKS endpoint. |
| c1000Configuration.oidc.opTokenEndpoint | string | "" | URL of your OpenID Connect provider token endpoint. |
| c1000Configuration.oidc.opUserEndpoint | string | "" | URL of your OpenID Connect provider userinfo endpoint. |
| c1000Configuration.oidc.pkceEnable | bool | false | If true, use PKCE (Proof Key for Code Exchange) to prevent authorization code interception. |
| c1000Configuration.oidc.promptLogin | bool | false | If true, require the authorization server to reauthenticate the user even if the user is already authenticated. |
| c1000Configuration.oidc.relyingPartId | string | "" | Relying Party ID. |
| c1000Configuration.oidc.rpIdpSignKey | string | "" | The key used to sign ID tokens when using an RSA signing algorithm. |
| c1000Configuration.oidc.rpScopes | string | "openid allatclaims" | The OpenID Connect scopes to request during login. |
| c1000Configuration.oidc.rpSignAlgo | string | "RS256" | Signature algorithm. Allowed values: RS256 and HS256. |
| c1000Configuration.oidc.userFlagsByGroupIsActive | string | "" | Users will be marked as active only if they belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.oidc.userFlagsByGroupIsSuperuser | string | "" | Users will be marked as superusers only if they belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.oidc.verifySsl | bool | true | Controls whether the OpenID Connect client verifies the SSL certificate of the OP responses. |
| c1000Configuration.rlapp.allowedHosts | list | [] | A list of host/domain names that this application site can serve. |
| c1000Configuration.rlapp.sessionCookieAge | int | 604800 | Duration of the login session, in seconds. |
| c1000Configuration.rlapp.sessionTimeoutAutomaticallyLogout | bool | false | If true, automatically log out inactive users. |
| c1000Configuration.rlapp.sessionTimeoutPeriod | int | 660 | Period of inactivity before sign-out, in seconds. |
| c1000Configuration.saml | object | - | Configure authentication with SAML. |
| c1000Configuration.saml.accessDenyGroup | string | "" | Authentication will fail for any user that belongs to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.saml.accessRequireGroup | string | "" | Authentication will fail for any user that does not belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.saml.allowUnsolicited | bool | false | Allow unsolicited responses from the IdP. |
| c1000Configuration.saml.enable | bool | false | Enable/disable SAML authentication. |
| c1000Configuration.saml.entityId | string | "" | Entity ID. |
| c1000Configuration.saml.federationMetadata | string | nil | Federation metadata. |
| c1000Configuration.saml.mapClaimAccessGroupsDelimiter | Optional | "" | Character to split the User Access Groups on. Characters entered in this field must not be used in any access group name. The maximum length of the delimiter is 2 characters. |
| c1000Configuration.saml.mapClaimEmail | string | "email" | The claim containing the unique email address. |
| c1000Configuration.saml.mapClaimFirstName | string | "given_name" | The claim containing the user's first name. |
| c1000Configuration.saml.mapClaimGroups | string | "group" | The claim containing the list of the user's groups. |
| c1000Configuration.saml.mapClaimGroupsDelimiter | Optional | "" | Character to split the Groups string on. |
| c1000Configuration.saml.mapClaimLastName | string | "family_name" | The claim containing the user's last name. |
| c1000Configuration.saml.mapClaimUsername | string | "unique_name" | The claim containing the unique username. |
| c1000Configuration.saml.userFlagsByGroupIsActive | string | "" | Users will be marked as active only if they belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.saml.userFlagsByGroupIsSuperuser | string | "" | Users will be marked as superusers only if they belong to one or more of the provided group(s). Use the access groups delimiter to separate multiple group names. |
| c1000Configuration.smtp | object | - | SMTP settings. |
| c1000Configuration.smtp.defaultFromEmail | string | "online@reversinglabs.com" | Default email address to use for automated correspondence. |
| c1000Configuration.smtp.host | string | "localhost" | SMTP hostname. |
| c1000Configuration.smtp.port | int | 25 | SMTP port. |
| c1000Configuration.smtp.useTls | bool | false | Use TLS. |
| c1000Configuration.snmp | object | - | SNMP settings. |
| c1000Configuration.snmp.trapAvgLoadThreshold15Min | int | 50 | Average system load over 15 minutes (%). |
| c1000Configuration.snmp.trapAvgLoadThreshold1Min | int | 90 | Average system load over 1 minute (%). |
| c1000Configuration.snmp.trapAvgLoadThreshold5Min | int | 70 | Average system load over 5 minutes (%). |
| c1000Configuration.snmp.trapCommunity | string | "" | SNMP trap community string. |
| c1000Configuration.snmp.trapDiskThreshold | int | 90 | Percentage of disk used. |
| c1000Configuration.snmp.trapMemoryThreshold | int | 80 | Percentage of memory used. |
| c1000Configuration.snmp.trapSink | string | "" | Hostname or IP of the trap sink server. |
| c1000Configuration.snmp.trapSinkEnable | bool | false | Enable sending traps to the sink server. |
| c1000Configuration.sync | object | - | Synchronization settings. |
| c1000Configuration.sync.yaraRulesetsEnable | bool | false | Allow connected appliances to synchronize YARA rulesets. |
| c1000Configuration.systemAlerting | object | - | System alerting settings. |
| c1000Configuration.systemAlerting.syslogEnable | bool | false | Send system alert messages to a syslog server. |
| c1000Configuration.systemAlerting.syslogEnableAuditLogs | bool | false | Audit logs will be automatically sent to the syslog server in addition to other system messages. Enabling this will increase the traffic between Spectra Detect Manager and the syslog server. |
| c1000Configuration.systemAlerting.syslogHost | string | "" | Syslog host. |
| c1000Configuration.systemAlerting.syslogPort | int | 514 | Syslog port. |
| c1000Configuration.systemAlerting.syslogTransportProtocol | string | "TCP" | Syslog protocol. |
| c1000Configuration.ticloud | object | - | Spectra Intelligence settings. |
| c1000Configuration.ticloud.enable | bool | false | Enable Spectra Intelligence. |
| c1000Configuration.ticloud.proxyHost | string | "" | Proxy hostname for routing requests from the appliance to Spectra Intelligence. |
| c1000Configuration.ticloud.proxyPort | int | 25 | Proxy port. |
| c1000Configuration.ticloud.timeout | int | 60 | Timeout in seconds. Maximum: 1000. |
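When `appliance.configMode` is CONFIGMAP, the keys above are supplied through chart values. An illustrative excerpt enabling LDAP over ldaps and Spectra Intelligence (the hostnames and DNs are placeholders, not defaults from the chart):

```yaml
appliance:
  configMode: CONFIGMAP

c1000Configuration:
  ldap:
    enable: true
    host: ldap.example.com          # placeholder
    port: 636
    tls: true
    userSearchBaseDn: cn=users,dc=example,dc=com
    groupSearchBaseDn: cn=groups,dc=example,dc=com
  ticloud:
    enable: true
    timeout: 60
```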

Other Values

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| affinity | object | {} |  |
| filebeat.config | string | nil |  |
| filebeat.enable | bool | true |  |
| filebeat.image | string | "docker.elastic.co/beats/filebeat" |  |
| filebeat.output.console.pretty | bool | false |  |
| filebeat.resources.limits.cpu | string | "1000m" |  |
| filebeat.resources.limits.memory | string | "1500Mi" |  |
| filebeat.resources.requests.cpu | string | "100m" |  |
| filebeat.resources.requests.memory | string | "100Mi" |  |
| filebeat.tag | string | "8.12.0" |  |
| fullnameOverride | string | "" |  |
| image.pullPolicy | string | "Always" |  |
| image.repository | string | "registry.reversinglabs.com/detect/images/detect-manager-mono" |  |
| image.tag | string | "5.5.1" |  |
| imagePullSecrets[0].name | string | "rl-registry-key" |  |
| ingress.annotations | object | {} |  |
| ingress.className | string | "nginx" |  |
| ingress.enabled | bool | false |  |
| ingress.host | string | nil |  |
| ingress.paths[0].path | string | "/" |  |
| ingress.paths[0].pathType | string | "Prefix" |  |
| ingress.tls.certificateArn | string | "" |  |
| ingress.tls.issuer | string | "" |  |
| ingress.tls.issuerKind | string | "Issuer" |  |
| ingress.tls.secretName | string | "tls-c1000" |  |
| nameOverride | string | "" |  |
| nodeSelector | object | {} |  |
| persistence | object | - | Persistence values configure options for the PersistentVolumeClaim used for storing samples. |
| persistence.accessModes | list | ["ReadWriteOnce"] | Access mode. |
| persistence.requestStorage | string | "10Gi" | Request storage. If Central Storage is enabled, at least 100 GiB is required for it to work properly. |
| podAnnotations | object | {} |  |
| podSecurityContext | object | {} |  |
| replicaCount | int | 1 |  |
| resources.requests.cpu | string | "1500m" |  |
| resources.requests.memory | string | "4Gi" |  |
| securityContext.capabilities.add[0] | string | "NET_ADMIN" |  |
| securityContext.capabilities.add[1] | string | "SYS_NICE" |  |
| securityContext.capabilities.add[2] | string | "IPC_LOCK" |  |
| securityContext.privileged | bool | false |  |
| service.port | int | 80 |  |
| service.type | string | "ClusterIP" |  |
| serviceAccount.annotations | object | {} |  |
| serviceAccount.create | bool | true |  |
| serviceAccount.name | string | nil |  |
| tolerations | list | [] |  |
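For example, exposing the Manager through an NGINX ingress while sizing the sample volume for Central Storage could look like this (the hostname is a placeholder; per the persistence note above, at least 100 GiB is needed when Central Storage is enabled):

```yaml
ingress:
  enabled: true
  className: nginx
  host: detect-manager.example.com   # placeholder
  tls:
    secretName: tls-c1000

persistence:
  accessModes:
    - ReadWriteOnce
  requestStorage: 100Gi
```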

Spectra Detect Worker

TiScale Worker Helm chart for Kubernetes

RabbitMQ and Postgres secrets

| Secret | Type | Description |
| --- | --- | --- |
| release_name-secret-rabbitmq | required | Basic authentication secret containing the username and password for RabbitMQ. If rabbitmq.createUserSecret is set to true, it is created automatically from the username and password. Otherwise, the secret must already exist. |
| release_name-secret-rabbitmq-admin | optional | Basic authentication secret containing the username and password for the RabbitMQ admin. If rabbitmq.createManagementAdminSecret is set to true, it is created automatically from the username and password. |
| release_name-secret-postgres | required | Basic authentication secret containing the username and password for the database. If postgres.createUserSecret is set to true, it is created automatically from the username and password. Otherwise, the secret must already exist. |

Configmap configuration secrets

| Secret | Type | Description |
| --- | --- | --- |
| release_name-secret-worker-cloud | optional | Basic authentication secret containing the username and password for Spectra Intelligence authentication. Required when Spectra Intelligence is enabled. |
| release_name-secret-worker-cloud-proxy | optional | Basic authentication secret containing the username and password for Spectra Intelligence proxy authentication. Required when the Spectra Intelligence proxy is enabled. |
| release_name-secret-worker-aws | optional | Basic authentication secret containing the username and password for AWS authentication. Required if any type of S3 storage (File, SNS, Report, Unpacked) is enabled. |
| release_name-secret-worker-azure | optional | Basic authentication secret containing the username and password for Azure authentication. Required if any type of ADL storage (File, Report, Unpacked) is enabled. |
| release_name-secret-worker-ms-graph | optional | Basic authentication secret containing the username and password for Microsoft Cloud Storage authentication. Required if any type of Microsoft Cloud storage (File, Report, Unpacked) is enabled. |
| release_name-secret-unpacked-s3 | optional | Secret which contains only a password. Used for encryption of the archive file. Relevant only when the unpackedS3.archiveUnpacked option is set to true. |
| release_name-secret-report-s3 | optional | Secret which contains only a password. Used for encryption of the archive file. Relevant only when the reportS3.archiveSplitReport option is set to true. |
| release_name-secret-unpacked-adl | optional | Secret which contains only a password. Used for encryption of the archive file. Relevant only when the unpackedAdl.archiveUnpacked option is set to true. |
| release_name-secret-report-adl | optional | Secret which contains only a password. Used for encryption of the archive file. Relevant only when the reportAdl.archiveSplitReport option is set to true. |
| release_name-secret-unpacked-ms-graph | optional | Secret which contains only a password. Used for encryption of the archive file. Relevant only when the unpackedMsGraph.archiveUnpacked option is set to true. |
| release_name-secret-report-ms-graph | optional | Secret which contains only a password. Used for encryption of the archive file. Relevant only when the reportMsGraph.archiveSplitReport option is set to true. |
| release_name-secret-splunk | optional | Token secret containing the token for Splunk authentication. Required if the Splunk integration is enabled. |
| release_name-secret-archive-zip | optional | Secret which contains only a password. Relevant only when the archive.fileWrapper value is set to "zip" or "mzip". |
| release_namespace-secret-worker-api-token | optional | Token secret containing the token used to upload files and to get YARA rule identifiers. The value configured here must correspond to the API token set for the Spectra Detect Hub Group. All Workers in the group must use the same token for services like Connectors to work properly. |
| release_namespace-secret-worker-api-task-token | optional | Token secret containing the token used to fetch reports or otherwise work with processing tasks: list tasks, delete one task, or delete multiple tasks. |
| release_namespace-secret-worker-srv-token | optional | Token secret containing the token used in authentication for the Worker Service API endpoint. The value configured here must correspond to the SRV token set for the Spectra Detect Hub Group. The token is used by Workers for services like load balancing to work properly. |
| release_namespace-secret-worker-srv-available-token | optional | Token secret containing the token used in endpoint authentication for the processing availability API. |
| release_namespace-secret-sa-integration-token | optional | Token secret containing the token used in authentication on Spectra Analyze when the Spectra Analyze integration is enabled (set up with spectraAnalyzeIntegration). This token should be created in Spectra Analyze. |
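Token secrets like these can be created ahead of the deployment as plain Kubernetes Secrets. A sketch for the Worker API token follows; the `token` key name and Opaque type are assumptions based on the "token secret" description, and the value must match the API token configured for the Spectra Detect Hub Group:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: detect-secret-worker-api-token   # follows the release_namespace-secret-worker-api-token pattern
type: Opaque
stringData:
  token: my-worker-api-token             # placeholder; must match the Hub Group API token
```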

Values

Configmap Configuration values for Worker

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| a1000 | object | - | Integration with the Spectra Analyze appliance. |
| a1000.host | string | "" | The hostname or IP address of the A1000 appliance associated with the Worker. |
| adl | object | - | Settings for storing files in an Azure Data Lake container. |
| adl.container | string | "" | The hostname or IP address of the Azure Data Lake container that will be used for storage. Required when storing files in ADL is enabled. |
| adl.enabled | bool | false | Enable or disable the storage of processed files. |
| adl.folder | string | "" | Specify the name of the folder on the container where files will be stored. |
| apiServer | object | - | Configures a custom Worker IP address which is included in the response when uploading a file to the Worker for processing. |
| apiServer.host | string | "" | Configures the hostname or IP address of the Worker. Only necessary if the default IP address or network interface is incorrect. |
| appliance.configMode | string | "STANDARD" | Appliance configuration mode. Allowed values: CONFIGMAP (configuration provided with a ConfigMap), STANDARD (configuration provided via the UI). |
| archive | object | - | After processing, files can be zipped before external storage. Available only for S3 and Azure. |
| archive.fileWrapper | string | "" | Specify whether the files should be compressed as a ZIP archive before uploading to external storage. Supported values: zip, mzip. If this parameter is left blank, files will be uploaded in their original format. |
| archive.zipCompress | int | 0 | ZIP compression level to use when storing files in a ZIP file. Allowed range: 0 (no compression) to 9 (maximum compression). |
| archive.zipMaxfiles | int | 0 | Maximum allowed number of files that can be stored in one ZIP archive. Allowed range: 1-65535. 0 represents unlimited. |
| aws | object | - | Configuration of integration with AWS or AWS-compatible storage to be used for SNS, and for uploading files and analysis reports to S3. |
| aws.caPath | string | "" | Path on the file system pointing to the certificate of a custom (self-hosted) S3 server. |
| aws.endpointUrl | string | "" | Only required in non-AWS setups in order to store files to an S3-compatible server. When this parameter is left blank, the default is https://aws.amazonaws.com. Supported pattern(s): https?://.+ |
| aws.maxReattempts | int | 5 | Maximum number of retries when saving a report to an S3-compatible server. |
| aws.payloadSigningEnabled | bool | false | Specifies whether to include a SHA-256 checksum with Amazon Signature Version 4 payloads. |
| aws.region | string | "us-east-1" | Specify the correct AWS geographical region where the S3 bucket is located. Required parameter; ignored for non-AWS setups. |
| aws.serverSideEncryption | string | "" | Specify the encryption algorithm used on the target S3 bucket (e.g. aws:kms or AES256). |
| aws.sslVerify | bool | false | Enable/disable SSL verification. |
| awsRole | object | - | Configures the AWS IAM roles used to access S3 buckets without sharing secret keys. The IAM role which will be used to obtain temporary tokens has to be created in the AWS console. |
| awsRole.enableArn | bool | false | Enables or disables this entire feature. |
| awsRole.externalRoleId | string | "" | The external ID of the role that will be assumed. This can be any string. Usually, it's an ID provided by the entity which uses (but doesn't own) an S3 bucket. The owner of that bucket takes that external ID and builds an ARN with it. |
| awsRole.refreshBuffer | int | 5 | Number of seconds before the token timeout is reached at which a new ARN token is fetched. |
| awsRole.roleArn | string | "" | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which then allows it to save to S3 buckets without a secret access key. |
| awsRole.roleSessionName | string | "" | Name of the session visible in AWS logs. Can be any string. |
| awsRole.tokenDuration | int | 900 | How long before the authentication token expires and is refreshed. The minimum value is 900 seconds. |
| azure | object | - | Configures integration with Azure Data Lake Gen2 for the purpose of storing processed files in Azure Data Lake containers. |
| azure.endpointSuffix | string | "core.windows.net" | Specify the suffix for the address of your Azure Data Lake container. |
| callback | object | - | Settings for automatically sending file analysis reports via POST request. |
| callback.advancedFilterEnabled | bool | false | Enable/disable the advanced filter. |
| callback.advancedFilterName | string | "" | Name of the advanced filter. |
| callback.caPath | string | "" | If the url parameter uses HTTPS, set the path to the certificate file here. This automatically enables SSL verification. If this parameter is left blank or not configured, SSL verification will be disabled, and the certificate will not be validated. |
| callback.enabled | bool | false | Enable/disable the connection. |
| callback.maliciousOnly | bool | false | The report contains only malicious and suspicious children. |
| callback.reportType | string | "medium" | Specifies which report_type is returned. By default, or when empty, only the medium (summary) report is provided in the callback response. Set to extended_small, small, medium, or large to view results of filtering the full report. |
| callback.splitReport | bool | false | By default, reports contain information on parent files and all extracted child files. If set to true, reports for extracted files will be separated from the full report and saved as standalone files. If any user-defined data was appended to the analyzed parent file, it will be included in every split child report. |
| callback.sslVerify | bool | false | Enable/disable SSL verification. |
| callback.timeout | int | 5 | Specify the timeout in seconds for the POST request. On failure, the Worker retries up to six times, increasing the delay after the second retry. With the default timeout, the total possible wait before a final failure is 159 seconds. |
| callback.topContainerOnly | bool | false | If set to true, the reports will only contain metadata for the top container. Reports for unpacked files will not be generated. |
| callback.url | string | "" | Specify the full URL used to send the callback POST request. Both HTTP and HTTPS protocols are supported. If this parameter is left blank, reports will not be sent, and the callback feature will be disabled. Supported pattern(s): https?://.+ |
| callback.view | string | "" | Specifies whether a custom report view should be applied to the report. |
| cef | object | - | Configures Common Event Format (CEF) settings. CEF is an extensible, text-based logging and auditing format that uses a standard header and a variable extension, formatted as key-value pairs. |
| cef.cefMsgHashType | string | "md5" | Specify the type of hash that will be included in CEF messages. Supported values: md5, sha1, sha256. |
| cef.enableCefMsg | bool | false | Enable or disable sending CEF messages to syslog. Defaults to false to avoid flooding. |
| classify | object | - | Configure settings for Worker analysis and classification of files using the Spectra Core static analysis engine. |
| classify.certificates | bool | true | Enable checking whether the file certificate passes certificate validation, in addition to checking certificate whitelists and blacklists. |
| classify.documents | bool | true | Enable document format threat detection. |
| classify.emails | bool | true | Enable detection of phishing and other email threats. |
| classify.hyperlinks | bool | true | Enable embedded hyperlink detection. |
| classify.ignoreAdware | bool | false | When set to true, classification results that match adware will be ignored. |
| classify.ignoreHacktool | bool | false | When set to true, classification results that match hacktool will be ignored. |
| classify.ignorePacker | bool | false | When set to true, classification results that match packer will be ignored. |
| classify.ignoreRiskware | bool | false | When set to true, classification results that match riskware will be ignored. |
| classify.ignoreSpam | bool | false | When set to true, classification results that match spam will be ignored. |
| classify.ignoreSpyware | bool | false | When set to true, classification results that match spyware will be ignored. |
| classify.images | bool | true | When true, the heuristic image classifier for supported file formats is used. |
| classify.pecoff | bool | true | When true, the heuristic Windows executable classifier for supported PE file formats is used. |
| cleanup | object | - | Configures how often the Worker file system is cleaned up. |
| cleanup.fileAgeLimit | int | 1440 (LargeWorker: 4320) | Time before an unprocessed file present on the appliance is deleted, in minutes. |
| cleanup.taskAgeLimit | int | 90 (LargeWorker: 4320) | Time before analysis reports and records of processed tasks are deleted, in minutes. |
| cleanup.taskUnprocessedLimit | int | 1440 (LargeWorker: 4320) | Time before an incomplete processing task is canceled, in minutes. |
| cloud | object | - | Configures integration with the Spectra Intelligence service or a T1000 instance to receive additional classification information. |
| cloud.enabled | bool | false | Enable/disable the connection. |
| cloud.proxy | object | - | Configure an optional proxy connection. |
| cloud.proxy.enabled | bool | false | Enable/disable the proxy server. |
| cloud.proxy.port | int | 8080 | Specify the TCP port number if using an HTTP proxy. Allowed range(s): 1 … 65535. Required only if a proxy is used. |
| cloud.proxy.server | string | "" | Proxy hostname or IP address for routing requests from the appliance to Spectra Intelligence. Required only if a proxy is used. |
| cloud.server | string | "" | Hostname or IP address of the Spectra Intelligence server. Required if Spectra Intelligence integration is enabled. Format: `https://<ip_or_hostname>`. |
| cloud.timeout | int | 5 | Specify the number of seconds to wait when connecting to Spectra Intelligence before terminating the connection request. |
| cloudAutomation | object | - | Configures the Worker to automatically submit files to Spectra Intelligence for antivirus scanning, in addition to local static analysis and remote reputation lookup (from previous antivirus scans). |
| cloudAutomation.dataChangeSubscribe | bool | false | Subscribe to the Spectra Intelligence data change notification mechanism. |
| cloudAutomation.spexUpload | object | - | Scanning settings. |
| cloudAutomation.spexUpload.enabled | bool | false | Enable/disable this feature. |
| cloudAutomation.spexUpload.rescanEnabled | bool | true | Enable/disable rescan of files upon submission based on the configured interval, to include the latest AV results in the reports. |
| cloudAutomation.spexUpload.rescanThresholdInDays | int | 3 | Set the interval in days for triggering an AV rescan. If the last scan is older than the specified value, a rescan will be initiated. A value of 0 means files will be rescanned with each submission. |
| cloudAutomation.spexUpload.scanUnpackedFiles | bool | false | Enable/disable sending unpacked files to Deep Cloud Analysis for scanning. Consumes roughly double the processing resources compared to standard analysis. |
| cloudAutomation.waitForAvScansTimeoutInMinutes | int | 240 | Sets the maximum wait time (in minutes) for Deep Cloud Analysis to complete. If the timeout is reached, the report will be generated without the latest AV results. |
| cloudAutomation.waitForAvScansToFinish | bool | false | If set to true, delays report generation until Deep Cloud Analysis completes, ensuring the latest AV results are included. |
| cloudCache | object | - | Caches Spectra Intelligence results to preserve quota and bandwidth when analyzing sets of samples containing many duplicates or identical extracted files. |
| cloudCache.cacheMaxSizePercentage | float | 6.25 | Maximum cache size expressed as a percentage of the total allocated RAM on the Worker. Allowed range: 5 - 15. |
| cloudCache.cleanupWindow | int | 10 | How often to run the cache cleanup process, in minutes. It is advisable for this value to be lower than, or at most equal to, the TTL value. Allowed range: 5 - 60. |
| cloudCache.enabled | bool | false (LargeWorker: true) | Enable or disable the caching feature. |
| cloudCache.maxIdleUpstreamConnections | int | 50 | The maximum number of idle upstream connections. Allowed range: 10 - 50. |
| cloudCache.ttl | int | 240 (LargeWorker: 480) | Time to live for cached records, in minutes. Allowed range: 1 - 7200. |
| filter | object | - | Configures file filters that can be used to limit which files are saved after processing based on specified criteria. If a file matches a filter, it will not be saved after processing. |
| filter.enabled | bool | false | Enable/disable the file filter functionality. |
| filter.filters | list | - | List of filters. |
| filter.filters[0].active | bool | false | Set to either true or false to activate or deactivate the current file filter. |
| filter.filters[0].classification | string | "Goodware" | Allows filtering out files based on their classification. Malicious and suspicious files cannot be filtered out, so the only supported values are Goodware and Unknown, or both ('Goodware,Unknown'). |
filter.filters[0].factorint0When a file is processed, it is assigned a factor, represented as a number from 0 to 5. For goodware, it indicates trust level (0 is highest trust, 5 is lowest), while in all other cases, it indicates the threat factor of the file (1 is least dangerous, 5 is most dangerous). For this parameter, the user should specify one value from 0 to 5. Files with a threat or trust factor equal to or less than the configured value are filtered out.
filter.filters[0].fileSizeint0Specify the file size in bytes, and use the fileSizeCond parameter to define the conditions related to file size.
filter.filters[0].fileSizeCondstring"greater_than"Specify the conditions that apply to the value defined in the file_size parameter. Allowed values: less_than, greater_than, greater_than_or_equal_to.
filter.filters[0].fileTypestring"All"Specify one or more file types (such as Binary, Image, Text, Document, PE…). If specifying multiple file types, they should be comma-separated. Files matching the specified type(s) will be filtered out. Possible values: "All", "Audio", "Binary", "DEX", "Document", "ELF32.Big", "ELF32.Little", "ELF64.Big", "ELF64.Little" , "Image", "MachO32.Big", "MachO32.Little", "MachO64.Big", "MachO64.Little", "Media.Container", "MZ", "ODEX", "PE", "PE+", "PE16", "Text", "Video".
filter.filters[0].namestring"Default"Specify a unique identifier for the particular file filter to distinguish it from others.
general.maxUploadSizeMbint2048The largest file (in MB) that Worker will accept and start processing. Ignored if Spectra Intelligence is connected and file upload limits are set there.
general.postprocessingCheckThresholdMinsint720 (LargeWorker: 4320)How often the postprocessing service will be checked for timeouts. If any issues are detected, the process will be restarted.
general.tsWorkerCheckThresholdMinsint720 (LargeWorker: 4320)How often the processing service will be checked for timeouts. If any issues are detected, the process will be restarted.
general.uploadSizeLimitEnabledboolfalseWhether or not the upload size filter is active. Ignored if Spectra Intelligence is connected and file upload limits are set there.
hashesobject-Spectra Core calculates file hashes during analysis and includes them in the analysis report. The following options configure which additional hash types should be calculated and included in the Worker report. SHA1 and SHA256 are always included and therefore aren’t configurable. Selecting additional hash types (especially SHA384 and SHA512) may slow report generation.
hashes.enableCrc32boolfalseInclude CRC32 hashes in reports.
hashes.enableMd5booltrueInclude MD5 hashes in reports.
hashes.enableSha384boolfalseInclude SHA384 hashes in reports.
hashes.enableSha512boolfalseInclude SHA512 hashes in reports.
hashes.enableSsdeepboolfalseInclude SSDEEP hashes in reports.
hashes.enableTlshboolfalseInclude TLSH hashes in reports.
healthobject-Configures thresholds for when the system will be considered unhealthy and will reject traffic until the system resources are restored to values below the defined thresholds.
health.enabledbooltrueSet to true or false to enable or disable the thresholds functionality.
health.queueHighint5000 (LargeWorker: 500)Specify the maximum number of items allowed in the queue. If it exceeds the configured value, the appliance will start rejecting traffic. Allowed range(s): 10+.
largeReportobject-The large report method uses a different library which prevents memory spikes when generating the report, and is therefore better suited to reports with many fields. However, the report generation is slower.
largeReport.sizeLimitMbint0 (LargeWorker: 1)Reports larger than this value (MB) will be generated using the large report method. Value 0 disables this.
loggingobject-Configures the severity above which events will be logged or sent to a remote syslog server. Severity can be: INFO, WARNING, or ERROR.
logging.syslogLogLevelstring"ERROR"Events below this severity level will not be sent to a remote syslog server.
logging.tiscaleLogLevelstring"INFO"Events below this level will not be saved to logs (/var/log/messages and /var/log/tiscale/*.log).
msGraph.enabledboolfalseTurns the Microsoft Cloud Storage file integration on or off.
msGraph.folderstring""Folder where samples will be stored in Microsoft Cloud Storage.
msGraphGeneralobject-Configures the general options for the Microsoft Cloud Storage integration.
msGraphGeneral.customDomainstring""Application’s custom domain configured in the Azure portal.
msGraphGeneral.siteHostnamestring""Used only if storageType is set to SharePoint. This is the SharePoint hostname.
msGraphGeneral.siteRelativePathstring""SharePoint Online site relative path. Only used when storageType is set to SharePoint.
msGraphGeneral.storageTypestring"onedrive"Specifies the storage type. Supported values are: onedrive or sharepoint.
msGraphGeneral.usernamestring""Used only if storageType is set to OneDrive. Specifies which user’s drive will be used."
processingobject-Configure the Worker file processing capabilities to improve performance and load balancing.
processing.cacheEnabledboolfalse (LargeWorker: true)Enable/disable caching. When enabled, Spectra Core can skip reprocessing the same files (duplicates) if uploaded consecutively in a short period.
processing.cacheTimeToLiveint0If file processing caching is enabled, specify how long (in seconds) the analysis reports should be preserved in the cache before they expire. A value of 0 uses the default. Default: 600. Maximum: 86400.
processing.depthint0Specifies how "deep" a file is unpacked. By default, when set to 0, Workers will unpack files recursively until no more files can be unpacked. Setting a value greater than 0 limits the depth of recursion, which can speed up analyses but provide less detail.
processing.largefileThresholdint100If advanced mode is enabled, files larger than this threshold (in MB) will be processed individually, one by one. This parameter is ignored in standard mode.
processing.modeint2Configures the Worker processing mode to improve load balancing. Supported modes are standard (1) and advanced (2).
processing.timeoutstring"28800" (LargeWorker: "259200")Specifies how many seconds the Worker should wait for a file to process before terminating the task. Default: 28800. Maximum: 259200.
propagationobject-Configure advanced classification propagation options supported by the Spectra Core static analysis engine. When Spectra Core classifies files, the classification of a child file can be applied to the parent file.
propagation.enabledbooltrueEnable/disable the classification propagation feature. When propagation is enabled, files can be classified based on the content extracted from them. This means that files containing a malicious or suspicious file will also be considered malicious or suspicious.
propagation.goodwareOverridesEnabledbooltrueEnable/disable goodware overrides. When enabled, any files extracted from a parent file and whitelisted by certificate, source or user override can no longer be classified as malicious or suspicious. This is an advanced goodware whitelisting technique that can be used to reduce the amount of false positive detections.
propagation.goodwareOverridesFactorint1When goodware overrides are enabled, this parameter must be configured to determine the factor to which overrides will be applied. Supported values are 0 to 5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Overrides will apply to files with a trust factor equal to or lower than the value configured here.
reportobject-Configure the contents of the Spectra Detect file analysis report.
report.firstReportOnlyboolfalseIf disabled, the reports for samples with child files will include relationships for all descendant files. Enabling this setting will only include relationship metadata for the root parent file to reduce redundancy.
report.includeStringsboolfalseWhen enabled, includes strings in the file analysis report. Spectra Core can extract strings from binaries. This can be useful but may result in extensive metadata. To reduce noise, the types of included strings can be customized in the strings section.
report.relationshipsboolfalseIncludes sample relationship metadata in the file analysis report. When enabled, the relationships section lists the hashes of files found within the given file.
reportAdlobject-Settings to configure how reports saved to Azure Data Lake are formatted.
reportAdl.archiveSplitReportbooltrueEnable sending a single, smaller archive of split report files to ADL instead of each file. Relevant only when the 'Split report' option is used.
reportAdl.containerstring""Container where reports will be stored. Required when this feature is enabled.
reportAdl.enabledboolfalseEnable/disable storing file processing reports to ADL.
reportAdl.filenameTimestampFormatstring""File naming pattern for the report itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used.
reportAdl.folderstring""Specify the name of a folder where analysis reports will be stored. If the folder name is not provided, files are stored into the root of the configured container.
reportAdl.folderOptionstring"date_based"Select the naming pattern that will be used when automatically creating subfolders for storing analysis reports. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash).
reportAdl.maliciousOnlyboolfalseReport contains only malicious and suspicious children.
reportAdl.reportTypestring"large"Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory.
reportAdl.splitReportboolfalseBy default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files.
reportAdl.timestampEnabledbooltrueEnable/disable appending a timestamp to the report name.
reportAdl.topContainerOnlyboolfalseWhen enabled, file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated.
reportAdl.viewstring""Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker.
reportApiobject-Configures the settings applied to the file analysis report fetched using the GET endpoint.
reportApi.maliciousOnlyboolfalseReport contains only malicious and suspicious children.
reportApi.reportTypestring""Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory.
reportApi.topContainerOnlyboolfalseWhen enabled, file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated.
reportApi.viewstring""Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker.
reportMsGraphobject-Settings to configure how reports saved to OneDrive or SharePoint are formatted.
reportMsGraph.archiveSplitReportbooltrueEnable sending a single, smaller archive of split report files to Microsoft Cloud Storage instead of each file. Relevant only when the "Split Report" option is used.
reportMsGraph.enabledboolfalseEnable/disable storing file processing reports.
reportMsGraph.filenameTimestampFormatstring""This refers to the naming of the report file itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used.
reportMsGraph.folderstring""Folder where report files will be stored on the Microsoft Cloud Storage. If the folder name is not provided, files are stored into the root of the configured container.
reportMsGraph.folderOptionstring"date_based"Select the naming pattern that will be used when automatically creating subfolders for storing analysis reports. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash).
reportMsGraph.maliciousOnlyboolfalseReport contains only malicious and suspicious children.
reportMsGraph.reportTypestring"large"Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory.
reportMsGraph.splitReportboolfalseBy default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files.
reportMsGraph.topContainerOnlyboolfalseWhen enabled, file analysis report will only include metadata for the top container, and subreports for unpacked files will not be generated.
reportMsGraph.viewstring""Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker.
reportS3object-Settings to configure how reports saved to S3 buckets are formatted.
reportS3.advancedFilterEnabledboolfalseEnable/disable usage of advance filters.
reportS3.advancedFilterNamestring""Name of the advanced filter.
reportS3.archiveSplitReportbooltrueEnable sending a single, smaller archive of split report files to S3 instead of each file. Relevant only when the 'Split report' option is used.
reportS3.bucketMappingobject{}Used if destinationType is set to mapping. Accepts a list of bucket-mapping structures. See Bucket Mapping Structure.
reportS3.bucketNamestring""Name of the S3 bucket where processed files will be stored. Required when this feature is enabled.
reportS3.bucketS3ConnectionMappinglist[]Accepts a nested dictionary that sets individual AWS connection methods for each target output bucket. Connection parameters are those described under aws and awsRole.
reportS3.destinationTypestring"default"Supported values are default (saves the reports into the bucket configured by bucketName), source (saves the reports into the S3 bucket where the samples originated from), mapping (saves the reports according to the mapping configured by bucketMapping).
reportS3.enabledboolfalseEnable/disable storing file processing reports to S3.
reportS3.filenameTimestampFormatstring""This refers to the naming of the report file itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used.
reportS3.folderstring""Folder where report files will be stored in the given S3 bucket. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers and special characters: "!", "-", "_", ".", "*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed.
reportS3.folderOptionstring"date_based"Select the naming pattern used when automatically creating subfolders for storing analysis reports. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash).
reportS3.maliciousOnlyboolfalseReport contains only malicious and suspicious children.
reportS3.reportTypestring"large" (LargeWorker: "extended_small")Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory.
reportS3.splitReportboolfalseBy default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files.
reportS3.timestampEnabledbooltrueEnable/disable appending a timestamp to the report name.
reportS3.topContainerOnlyboolfalseWhen enabled, file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated.
reportS3.viewstring""Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker.
s3object-Settings for storing a copy of all files uploaded for analysis on Worker to an S3 or a third-party, S3-compatible server.
s3.advancedFilterEnabledboolfalseEnable/disable usage of advance filters.
s3.advancedFilterNamestring""Name of the advanced filter.
s3.bucketMappingobject{}Used if destinationType is set to mapping. Accepts a dictionary of S3 input buckets mapped to output buckets, enclosed in quotation marks.
s3.bucketNamestring""Name of the S3 bucket where processed files will be stored. Required when this feature is enabled.
s3.bucketS3ConnectionMappinglist[]Accepts a nested dictionary that sets individual AWS connection methods for each target output bucket. Connection parameters are those described under aws and awsRole.
s3.destinationTypestring"default"Supported values are default (saves the reports into the bucket configured by bucketName), mapping (saves the reports according to the mapping configured by bucketMapping.
s3.enabledboolfalseEnable/disable storing file processed files on S3.
s3.folderstring""Specify the name of a folder where analyzed files will be stored. If the folder name is not provided, files are stored into the root of the configured bucket.
s3.storeMetadatabooltrueWhen true, analysis metadata will be stored to the uploaded S3 object.
scalingobject-Configures the number of concurrent processes and the number of files analyzed concurrently. Parameters in this section can be used to optimize the file processing performance on Worker.
scaling.concurrencyCountint0Sets the number of threads per Spectra Core instance. Value 0 means the number of threads will equal the number of CPU cores.
scaling.loadSizeint0Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file is processed, another from the queue enters the processing state. Value 0 sets the maximum number of files to be processed to the number of CPU cores on the system.
scaling.postprocessingint1Specify how many post-processing instances to run. Post-processing instances will then modify and save reports or upload processed files to external storage. Increasing this value can increase throughput for servers with extra available cores. Maximum: 256.
scaling.preprocessingUnpackerint1Specify how many copies of Spectra Core are used to unpack samples for Deep Cloud Analysis. This setting only has effect if Deep Cloud Analysis is enabled with Scan Unpacked Files capability.
scaling.processingint1Specify how many copies of Spectra Core engine instances to run. Each instance starts threads to process files. Maximum: 256.
snsobject-Configures settings for publishing notifications about file processing status and links the reports to an Amazon SNS (Simple Notification Service) topic.
sns.enabledboolfalseEnable/disable publishing notifications to Amazon SNS.
sns.topicstring""Specify the SNS topic ARN that the notifications should be published to. Prerequisite: the AWS account in the AWS settings must be given permission to publish to this topic. Required when this feature is enabled.
spectraAnalyzeIntegrationobject-Configuration settings to upload processed samples to configured Spectra Analyze.
spectraAnalyzeIntegration.addressstring""Spectra Analyze address. Required when this feature is enabled. Has to be in the following format: https://<ip_or_hostname>.
spectraAnalyzeIntegration.advancedFilterEnabledbooltrueEnable/disable advanced filter.
spectraAnalyzeIntegration.advancedFilterNamestring"default_filter"Name of the advanced filter.
spectraAnalyzeIntegration.enabledboolfalseEnable/disable integration with Spectra Analyze.
splunkobject-Configures integration with Splunk, a logging server that can receive Spectra Detect file analysis reports.
splunk.caPathstring""Path to the certificate.
splunk.chunkSizeMbint0The maximum size (MB) of a single request sent to Splunk. If an analysis report exceeds this size, it will be split into multiple parts. The report is split into its subreports (for child files). A request can contain one or multiple subreports, as long as its total size doesn’t exceed this limit. The report is never split by size alone - instead, complete subreports are always preserved and sent to Splunk. Default: 0 (disabled)
splunk.enabledboolfalseEnable/disable Splunk integration.
splunk.hoststring""Specify the hostname or IP address of the Splunk server that should connect to the Worker appliance.
splunk.httpsbooltrueIf set to true, HTTPS will be used for sending information to Splunk. If set to false, HTTP is used.
splunk.portint8088Specify the TCP port of the Splunk server’s HTTP Event Collector.
splunk.reportTypestring"large"Specifies which report_type is returned. By default or when empty, only the medium (summary) report is provided in the callback response. Set to small, medium or large to view results of filtering the full report.
splunk.sslVerifyboolfalseIf HTTPS is enabled, setting this to true enables certificate verification.
splunk.timeoutint5Specify how many seconds to wait for a response from the Splunk server before the request fails. If the request fails, the report will not be uploaded to the Splunk server, and an error will be logged. The timeout value must be greater than or equal to 1, and not greater than 999.
splunk.topContainerOnlyboolfalseWhether or not Splunk should receive the report for the top (parent) file only. If set to true, no subreports will be sent.
splunk.viewstring""Specifies whether a custom Report View should be applied to the file analysis report and returned in the response.
stringsobject-Configure the output of strings extracted from files during Spectra Core static analysis.
strings.enableStringExtractionboolfalseIf set to true, user-provided criteria for string extraction will be used.
strings.maxLengthint32768Maximum number of characters in strings.
strings.minLengthint4Minimum number of characters in strings. Strings shorter than this value are not extracted.
strings.unicodePrintableboolfalseSpecify whether strings are Unicode printable or not.
strings.utf16bebooltrueAllow/disallow extracting UTF-16BE strings.
strings.utf16lebooltrueAllow/disallow extracting UTF-16LE strings.
strings.utf32beboolfalseAllow/disallow extracting UTF-32BE strings.
strings.utf32leboolfalseAllow/disallow extracting UTF-32LE strings.
strings.utf8booltrueAllow/disallow extracting UTF-8 strings.
ticoreobject-Configures cloud options supported by Spectra Core. Worker must be connected to Spectra Intelligence for these settings to take effect.
ticore.maxDecompressionFactorfloat1.0Decimal value between 0 and 999.9. If multiple decimals are given, it will be rounded to one decimal. Used to protect the user from intentional or unintentional archive bombs, terminating decompression if size of unpacked content exceeds a set quota.
ticore.mwpExtendedboolfalseEnable/disable information from antivirus engines in Spectra Intelligence.
ticore.mwpGoodwareFactorint2Determines when a file classified as KNOWN in Spectra Intelligence Cloud is classified as Goodware by Spectra Core. By default, all KNOWN cloud classifications are converted to Goodware. Supported values are 0 - 5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Lowering the value reduces the number of samples classified as goodware. Samples with a trust factor above the configured value are considered UNKNOWN.
ticore.processingModestring"best" (LargeWorker: "fast")Determines which file formats are unpacked by Spectra Core for detailed analysis. "best" fully processes all supported formats; "fast" processes a limited set.
ticore.useXrefboolfalseEnabling XREF service will enrich analysis reports with cross-reference metadata like AV scanner results.
unpackedAdlobject-Settings for storing extracted files in an Azure Data Lake container.
unpackedAdl.archiveUnpackedbooltrueEnable sending a single, smaller archive of unpacked files to ADL instead of each unpacked file.
unpackedAdl.containerstring""Specify the name of the Azure Data Lake container where extracted files will be saved. Required when this feature is enabled.
unpackedAdl.enabledboolfalseEnable/disable storing extracted files to ADL.
unpackedAdl.folderstring""Specify the name of a folder in the configured Azure container where extracted files will be stored. If the folder name is not provided, files are stored into the root of the configured container.
unpackedAdl.folderOptionstring"date_based"Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash).
unpackedMsGraphobject-Settings for storing extracted files to Microsoft Cloud Storage.
unpackedMsGraph.archiveUnpackedbooltrueEnable sending a single, smaller archive of unpacked files to Microsoft Cloud Storage instead of each unpacked file.
unpackedMsGraph.enabledboolfalseEnable/disable storing extracted files.
unpackedMsGraph.folderstring""Folder where unpacked files will be stored on the Microsoft Cloud Storage. If the folder name is not provided, files are stored into the root of the configured container.
unpackedMsGraph.folderOptionstring"date_based"Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash).
unpackedS3object-Settings for storing extracted files to S3 container.
unpackedS3.archiveUnpackedbooltrueEnable sending a single, smaller archive of unpacked files to S3 instead of each unpacked file.
unpackedS3.bucketNamestring""Specify the name of the S3 container where extracted files will be saved. Required when this feature is enabled.
unpackedS3.enabledboolfalseEnable/disable storing extracted files in S3.
unpackedS3.folderstring""Specify the name of a folder in the configured S3 container where extracted files will be stored. If the folder name is not provided, files are stored into the root of the configured container. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers and special characters: "!", "-", "_", ".", "*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed.
unpackedS3.folderOptionstring"date_based"Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash).
wordlistlist-List of passwords for protected files.
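As a sketch of how several of these sections fit together, the fragment below shows a minimal values file. Key names come from the table above; the hostname, bucket name, and filter name are illustrative placeholders, and depending on how the chart is consumed the block may need to be nested under the subchart's key (for example `tiscale-worker`):

```yaml
# Illustrative values fragment for the Worker; all concrete values are placeholders.
cloud:
  enabled: true
  server: "https://ticloud.example.com"   # placeholder Spectra Intelligence hostname
  timeout: 5

cloudCache:
  enabled: true
  ttl: 240            # minutes, allowed range 1 - 7200
  cleanupWindow: 10   # minutes; should not exceed ttl

filter:
  enabled: true
  filters:
    - name: "SkipTrustedGoodware"   # placeholder filter name
      active: true
      classification: "Goodware"
      factor: 2                     # filter out goodware with trust factor 0-2

reportS3:
  enabled: true
  bucketName: "detect-reports"      # placeholder bucket name
  folderOption: "date_based"
  reportType: "large"

scaling:
  processing: 2       # two Spectra Core instances
  postprocessing: 1
```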

Configmap Configuration values for Worker - Bucket Mapping Structure

| Key | Type | Default | Description |
|---|---|---|---|
| `bucketMapping[n]` | list | - | Bucket mapping structure: a list of S3 input buckets mapped to output buckets. |
| `bucketMapping[n].advancedFilterName` | string | "" | Advanced filter name to apply. If the global filter is enabled (s3.advancedFilterEnabled or reportS3.advancedFilterEnabled is true), a per-mapping filter cannot be added and this value must remain "". |
| `bucketMapping[n].inputBucket` | string | "" | Input bucket name. |
| `bucketMapping[n].outputBucket` | string | "" | Output bucket name. If the output bucket is empty, any sample uploaded from the specified input bucket will be ignored. |
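Assuming destinationType is set to mapping, a bucket-mapping list might be sketched as follows (all bucket names are placeholders):

```yaml
reportS3:
  destinationType: "mapping"
  bucketMapping:
    - inputBucket: "uploads-eu"      # placeholder
      outputBucket: "reports-eu"     # placeholder
      advancedFilterName: ""         # must stay "" when the global filter is enabled
    - inputBucket: "uploads-us"
      outputBucket: ""               # empty output: samples from this input bucket are ignored
```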

Configmap Configuration values for Worker - Connection Method Bucket Mapping Structure

| Key | Type | Default | Description |
|---|---|---|---|
| bucketS3ConnectionMapping[n] | list | - | Connection Method Bucket Mapping Structure. |
| bucketS3ConnectionMapping[n].awsConnectionStrategy | string | "standard" | Strategy for connecting to AWS. Allowed values: 'standard' and 'ARN'. |
| bucketS3ConnectionMapping[n].buckets | list | [] | List of buckets for which the given connection info is valid. Required if S3 connection bucket mapping is used. |
| bucketS3ConnectionMapping[n].caPath | string | "/etc/pki/tls/certs/ca-bundle.crt" | Path on the file system pointing to the certificate of a custom (self-hosted) S3 server. |
| bucketS3ConnectionMapping[n].endpointUrl | string | "" | Only required in non-AWS setups to store files to an S3-compatible server. When this parameter is left blank, the default value (https://aws.amazonaws.com) is used. Supported pattern(s): https?://.+ |
| bucketS3ConnectionMapping[n].externalRoleId | string | "" | The external ID of the role that will be assumed. This can be any string. Usually, it is an ID provided by the entity which uses (but doesn't own) an S3 bucket. The owner of that bucket takes that external ID and builds an ARN with it. |
| bucketS3ConnectionMapping[n].identifier | string | "" | Unique name of the S3 connection bucket mapping. Required if S3 connection bucket mapping is used. Important for creating secrets that contain AWS credentials. |
| bucketS3ConnectionMapping[n].maxReattempts | int | 5 | Maximum number of retries when saving a report to an S3-compatible server. |
| bucketS3ConnectionMapping[n].refreshBuffer | int | 5 | Number of seconds before the token timeout at which a new ARN token is fetched. |
| bucketS3ConnectionMapping[n].region | string | "us-east-1" | The AWS geographical region where the S3 bucket is located. Required parameter; ignored for non-AWS setups. |
| bucketS3ConnectionMapping[n].roleArn | string | "" | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which then allows it to save to S3 buckets without a secret access key. |
| bucketS3ConnectionMapping[n].roleSessionName | string | "ARNRoleSession" | Name of the session visible in AWS logs; can be any string. |
| bucketS3ConnectionMapping[n].serverSideEncryption | string | "" | Specify the encryption algorithm used on the target S3 bucket (e.g. aws:kms or AES256). |
| bucketS3ConnectionMapping[n].sslVerify | bool | false | Enable/disable SSL verification. |
| bucketS3ConnectionMapping[n].stsConnectionStrategy | string | "default" | Strategy for role assumption. Allowed values: 'default' (uses data set in the AWS section) and 'custom' (new AWS data is needed). |
| bucketS3ConnectionMapping[n].tokenDuration | int | 900 | Time before the authentication token expires and is refreshed. Minimum: 900 seconds. |
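As a sketch, a single mapping entry using the ARN connection strategy could combine these values as follows; the identifier, bucket names, role ARN, and external ID are all placeholders:

```yaml
bucketS3ConnectionMapping:
  - identifier: "prod-aws"                 # placeholder; must be unique
    buckets:
      - "samples-in"
      - "reports-out"
    awsConnectionStrategy: "ARN"
    roleArn: "arn:aws:iam::123456789012:role/example-role"  # placeholder ARN
    externalRoleId: "example-external-id"                   # placeholder
    roleSessionName: "ARNRoleSession"
    tokenDuration: 900     # minimum allowed value
    refreshBuffer: 5       # refresh the token 5 s before it expires
    region: "us-east-1"
    sslVerify: true
```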

Other Values

| Key | Type | Default | Description |
|---|---|---|---|
| affinity | object | {} | |
| autoscaling.enabled | bool | false | |
| autoscaling.maxReplicas | int | 8 | |
| autoscaling.minReplicas | int | 1 | |
| autoscaling.targetCPUUtilizationPercentage | int | 0 | |
| autoscaling.targetInputQueueSize | int | 100 | |
| autoscaling.targetMemoryUtilizationPercentage | int | 0 | |
| c1000.releaseName | string | nil | Sets the Spectra Detect Manager release that the Detect Worker should connect to. |
| c1000.releaseNamespace | string | "" | Sets the Spectra Detect Manager release namespace. Leave blank to use the current namespace. |
| filebeat.config | string | nil | Overrides the Filebeat configuration. |
| filebeat.enable | bool | true | |
| filebeat.image | string | "docker.elastic.co/beats/filebeat" | |
| filebeat.output.console.pretty | bool | false | |
| filebeat.resources.limits.cpu | string | "1000m" | |
| filebeat.resources.limits.memory | string | "1500Mi" | |
| filebeat.resources.requests.cpu | string | "100m" | |
| filebeat.resources.requests.memory | string | "100Mi" | |
| filebeat.tag | string | "8.12.0" | |
| fullnameOverride | string | "" | |
| image.pullPolicy | string | "Always" | |
| image.repository | string | "registry.reversinglabs.com/detect/images/detect-worker-mono" | |
| image.tag | string | "5.5.1" | |
| imagePullSecrets[0].name | string | "rl-registry-key" | |
| ingress.annotations | object | {} | |
| ingress.className | string | "nginx" | |
| ingress.enabled | bool | false | |
| ingress.host | string | nil | |
| ingress.paths[0].path | string | "/" | |
| ingress.paths[0].pathType | string | "Prefix" | |
| ingress.tls.certificateArn | string | "" | |
| ingress.tls.issuer | string | "" | |
| ingress.tls.issuerKind | string | "Issuer" | |
| ingress.tls.secretName | string | "tls-tiscale-worker" | |
| nameOverride | string | "" | |
| nodeSelector | object | {} | |
| persistence | object | - | Persistence values configure options for the PersistentVolumeClaim used for storing samples and reports. |
| persistence.accessModes | list | ["ReadWriteMany"] | Access mode. When using multiple Workers or autoscaling, set the value to ["ReadWriteMany"]. |
| persistence.requestStorage | string | "10Gi" | Requested storage. |
| persistence.storageClassName | string | nil | Storage class name. When using multiple Workers or autoscaling, the storage class must support "ReadWriteMany". |
| podAnnotations | object | {} | |
| podSecurityContext | object | {} | |
| postgres.affinity | object | {} | |
| postgres.createUserSecret | bool | true | A user secret will be created automatically with the given username and password; otherwise, the secret must already exist. |
| postgres.database | string | "tiscale" | Database name. |
| postgres.host | string | "" | Host for external PostgreSQL. When configured, the Detect PostgreSQL cluster won't be created. |
| postgres.password | string | "tiscale_11223" | Password. |
| postgres.persistence.accessModes[0] | string | "ReadWriteOnce" | |
| postgres.persistence.requestStorage | string | "5Gi" | |
| postgres.persistence.storageClassName | string | nil | |
| postgres.port | int | 5432 | PostgreSQL port. |
| postgres.replicas | int | 1 | Number of replicas. |
| postgres.resources.limits.cpu | string | nil | |
| postgres.resources.limits.memory | string | nil | |
| postgres.resources.requests.cpu | string | "500m" | |
| postgres.resources.requests.memory | string | "1Gi" | |
| postgres.username | string | "tiscale" | Username. Required if host is not set, because the Detect PostgreSQL cluster will be created and this user will be set as the database owner. |
| rabbitmq.affinity | object | {} | |
| rabbitmq.createManagementAdminSecret | bool | true | A management admin secret will be created automatically with the given admin username and admin password; otherwise, the secret must already exist. |
| rabbitmq.createUserSecret | bool | true | A user secret will be created automatically with the given username and password; otherwise, the secret must already exist. |
| rabbitmq.host | string | "" | Host for external RabbitMQ. When configured, the Detect RabbitMQ cluster won't be created. |
| rabbitmq.managementAdminPassword | string | "" | Management admin password. If left empty, defaults to password. |
| rabbitmq.managementAdminUrl | string | "" | |
| rabbitmq.managementAdminUsername | string | "" | Management admin username. If left empty, defaults to username. |
| rabbitmq.password | string | "guest_11223" | Password. |
| rabbitmq.persistence.requestStorage | string | "5Gi" | |
| rabbitmq.persistence.storageClassName | string | nil | |
| rabbitmq.port | int | 5672 | RabbitMQ port. |
| rabbitmq.replicas | int | 1 | Number of replicas. |
| rabbitmq.resources.limits.cpu | string | "2" | |
| rabbitmq.resources.limits.memory | string | "2Gi" | |
| rabbitmq.resources.requests.cpu | string | "1" | |
| rabbitmq.resources.requests.memory | string | "2Gi" | |
| rabbitmq.useQuorumQueues | bool | false | Setting this to true defines queues as quorum type (recommended for multi-replica/HA setups); otherwise, queues are classic. |
| rabbitmq.useSecureProtocol | bool | false | Setting this to true enables the secure AMQPS protocol for the RabbitMQ connection. |
| rabbitmq.username | string | "guest" | Username. |
| rabbitmq.vhost | string | "" | Vhost. When empty, the default rl-detect vhost is used. |
| rabbitmq.vhostLarge | string | "" | Vhost used for the Large Spectra Detect Worker. It should differ from the vhost in the Large Worker configuration. When empty, the default rl-detect-large vhost is used. |
| replicaCount | int | 1 | |
| report.networkReputation | bool | false | If enabled, analysis reports include a top-level network_reputation object with reputation information for every extracted network resource. For this feature, Spectra Intelligence must be configured on the Worker, and the ticore.processingMode option must be set to "best". |
| resources.requests.cpu | string | "1000m" | |
| resources.requests.memory | string | "4Gi" | |
| securityContext.capabilities.add[0] | string | "SYS_NICE" | |
| securityContext.privileged | bool | false | |
| service.httpPort | int | 80 | |
| service.httpsPort | int | 443 | |
| service.type | string | "ClusterIP" | |
| tcScratch | object | - | tcScratch values configure generic ephemeral volume options for the Spectra Core /tc-scratch directory. |
| tcScratch.accessModes | list | ["ReadWriteOnce"] | Access modes. |
| tcScratch.freeThreshold | string | "5Gi" | Threshold for triggering cleanup when free space in tc-scratch falls below this value. |
| tcScratch.requestStorage | string | "10Gi" | Requested storage size for the ephemeral volume. |
| tcScratch.storageClassName | string | nil | Sets the storage class for the ephemeral volume. If not set, emptyDir is used instead of an ephemeral volume. |
| tolerations | list | [] | |
| worker.config.health.queue_high | int | 5000 | |
| worker.config.scaling.concurrency_count | int | 8 | |
| worker.config.scaling.load_size | int | 8 | |
| worker.config.scaling.postprocessing | int | 2 | |
| worker.config.scaling.processing | int | 1 | |
| worker.largeFileProxy | bool | false | Forwards large files to a Large File Worker for processing. Set to true if you are using the Large File Worker feature. |
| worker.largeFileWorker | bool | false | Set to true if deploying a Helm release for a Large File Worker deployment. For a regular Worker deployment, leave false. |
| worker.mainReleaseName | string | nil | Name of the main tiscale-worker release that forwards large files. |
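As an illustration of the large-file values above, a regular Worker release and its companion Large File Worker release could be configured like this (the release name is a placeholder):

```yaml
# values for the regular Worker release, e.g. installed as "tiscale-worker"
worker:
  largeFileProxy: true    # forward large files to the Large File Worker
  largeFileWorker: false
---
# values for the separate Large File Worker release
worker:
  largeFileWorker: true
  mainReleaseName: "tiscale-worker"  # placeholder: the regular Worker release name
```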

Spectra Detect Hub

TiScale Hub Helm chart for Kubernetes

Configmap configuration secrets

| Secret | Type | Description |
|---|---|---|
| release_name-secret-connector-connector_name-input_identifier | required | Basic authentication secret containing the username and password for the connector. A secret must be created for each connector input, and the secret name must contain the input identifier. |
| release_namespace-secret-worker-api-token | optional | Token secret containing the Worker API token. Required when a Worker API token is configured on the Worker. |
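As a sketch of the naming pattern above, for a hypothetical release named detect-hub, an s3 connector, and an input identifier s3-input-1, the connector secret might look like this (the secret type and key names are assumptions based on the basic-auth description; the credentials are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Pattern: <release_name>-secret-connector-<connector_name>-<input_identifier>
  name: detect-hub-secret-connector-s3-s3-input-1
type: kubernetes.io/basic-auth
stringData:
  username: "example-user"      # placeholder
  password: "example-password"  # placeholder
```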

Values

Configmap Configuration values

| Key | Type | Default | Description |
|---|---|---|---|
| appliance.configMode | string | "STANDARD" | Configuration mode of the appliance. Allowed values: CONFIGMAP (configuration is provided through a configmap), STANDARD (configuration is provided through the UI). |

Configmap Configuration values for all connectors

| Key | Type | Default | Description |
|---|---|---|---|
| connectors | list | [] | List of connectors. |
| connectors[n].enable | bool | false | Enable/disable the connector. |
| connectors[n].name | string | "" | Name of the connector. Currently supported values: s3 and smtp. Configuration values differ for each connector. |
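For example, enabling the S3 connector while leaving the SMTP connector defined but disabled might look like:

```yaml
connectors:
  - name: "s3"
    enable: true
  - name: "smtp"
    enable: false
```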

Configmap Configuration values for S3 connector

| Key | Type | Default | Description |
|---|---|---|---|
| connectors[n].dbCleanupPollInterval | int | 3600 | Period in seconds at which the database cleanup runs. |
| connectors[n].dbCleanupSampleThresholdInDays | int | 21 | Number of days for which data is preserved. |
| connectors[n].diskHighPercent | int | 0 | Used to compute the maximum amount of disk space that temporary files may use during transfer. Set it to 0 to disable the limit. |
| connectors[n].inputs | list | [] | List of inputs. Possible configuration values can be found under "Configmap Configuration values for S3 connector input". |
| connectors[n].maxFileSize | int | 0 | Maximum sample size in megabytes that is transmitted from the connector to the appliance for analysis. Set it to 0 to disable the limit. |
| connectors[n].maxUploadDelayTime | int | 10000 | Delay in seconds. When the Worker cluster is under high load, this parameter is used to delay new uploads to it. The delay parameter is multiplied by an internal factor determined by the load on the Worker cluster. |
| connectors[n].maxUploadRetries | int | 100 | Number of times the connector attempts to upload a file to the processing appliance. Upon reaching the maximum number of retries, the file is either saved in the error_files destination or discarded. |
| connectors[n].saveFilesWithError | bool | false | If enabled, original files that were not successfully uploaded are saved to /data/connectors/connector-s3/error_files/. |
| connectors[n].uploadTimeout | int | 10000 | Period in milliseconds between reupload attempts of a sample. |
| connectors[n].uploadTimeoutAlgorithm | string | "exponential" | Algorithm used for managing delays between reuploads of a sample into the processing appliance. Use exponential to double the max upload timeout parameter until it reaches the maximum value of 5 minutes. Use linear to always use the max upload timeout value as the timeout period between reuploads. |
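The upload behavior of the S3 connector could then be tuned like this; all values except maxFileSize are the documented defaults, and the 400 MB cap is purely illustrative:

```yaml
connectors:
  - name: "s3"
    enable: true
    maxFileSize: 400              # illustrative cap in megabytes; 0 disables the limit
    maxUploadRetries: 100
    uploadTimeout: 10000          # milliseconds between reupload attempts
    uploadTimeoutAlgorithm: "exponential"
    saveFilesWithError: true      # keep failed originals under error_files/
    inputs: []                    # per-bucket inputs configured separately
```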

Configmap Configuration values for S3 connector input

| Key | Type | Default | Description |
|---|---|---|---|
| connectors[n].inputs[n].awsArnTokenDuration | int | 900 | Period in seconds before the authentication token expires and is refreshed. |
| connectors[n].inputs[n].awsEnableArn | bool | false | Enable/disable role ARN. |
| connectors[n].inputs[n].awsExternalRoleId | string | "" | The external ID used to assume the AWS role. |
| connectors[n].inputs[n].awsRoleArn | string | "" | The ARN of the AWS role to be assumed. The trust policy for this role must include the specified Amazon ID and external ID. |
| connectors[n].inputs[n].awsRoleSessionName | string | "ARNRoleSession" | Name of the session visible in AWS logs. |
| connectors[n].inputs[n].bucket | string | "" | Specify an existing S3 bucket containing the samples to process. Required value. |
| connectors[n].inputs[n].deleteSourceFile | bool | false | Allow the connector to delete source files in S3 storage after they have been processed. |
| connectors[n].inputs[n].endpoint | string | "" | Enter a custom S3 endpoint URL. Leave empty if using standard AWS S3. |
| connectors[n].inputs[n].folder | string | "" | Specify an input folder inside the bucket containing the samples to process. All samples outside this folder are ignored. |
| connectors[n].inputs[n].identifier | string | "" | Unique input identifier. Required value that must also be included in the AWS credentials secret. Minimum 3 characters. |
| connectors[n].inputs[n].knownBucket | string | "" | Specify the bucket where the connector stores files classified as "Goodware". If empty, the input bucket is used. |
| connectors[n].inputs[n].knownDestination | string | "goodware" | Specify the folder where the connector stores files classified as "Goodware". The folder is contained within the specified bucket. |
| connectors[n].inputs[n].maliciousBucket | string | "" | Specify the bucket where the connector stores files classified as "Malicious". If empty, the input bucket is used. |
| connectors[n].inputs[n].maliciousDestination | string | "malware" | Specify the folder where the connector stores files classified as "Malicious". The folder is contained within the specified bucket. |
| connectors[n].inputs[n].objectMetadataFilter | object | - | Configure selection criteria using metadata. |
| connectors[n].inputs[n].objectMetadataFilter.classification | list | [] | Classification. |
| connectors[n].inputs[n].objectMetadataFilter.enabled | bool | false | Enable/disable selection criteria using metadata. |
| connectors[n].inputs[n].objectMetadataFilter.threatName | list | [] | Threat name. |
| connectors[n].inputs[n].paused | bool | false | Temporarily pause the continuous scanning of this storage input. |
| connectors[n].inputs[n].postActionsEnabled | bool | false | Allow the connector to store analyzed files and sort them into folders based on their classification. |
| connectors[n].inputs[n].priority | int | 5 | Assign a priority for processing files from this bucket on a scale of 1 (highest) to 5 (lowest). Multiple buckets may share the same priority. |
| connectors[n].inputs[n].requireAnalyze | bool | false | Allow duplicate file hashes to be rescanned. |
| connectors[n].inputs[n].serverSideEncryption | string | "" | Server-Side Encryption (SSE) type. This field can be left blank unless a bucket policy requires that SSE headers be sent to S3. Valid options are 'AES256' and 'aws:kms'. |
| connectors[n].inputs[n].serverSideEncryptionCustomerAlgorithm | string | "" | Customer-provided encryption algorithm. Must be AES256. |
| connectors[n].inputs[n].serverSideEncryptionCustomerKey | string | "" | Customer-provided encryption key. |
| connectors[n].inputs[n].suspiciousBucket | string | "" | Specify the bucket where the connector stores files classified as "Suspicious". If empty, the input bucket is used. |
| connectors[n].inputs[n].suspiciousDestination | string | "suspicious" | Specify the folder where the connector stores files classified as "Suspicious". The folder is contained within the specified bucket. |
| connectors[n].inputs[n].unknownBucket | string | "" | Specify the bucket where the connector stores files classified as "Unknown". If empty, the input bucket is used. |
| connectors[n].inputs[n].unknownDestination | string | "unknown" | Specify the folder where the connector stores files classified as "Unknown". The folder is contained within the specified bucket. |
| connectors[n].inputs[n].verifySslCertificate | bool | true | Connect securely to the custom S3 instance. Set to false to accept untrusted certificates. Applicable only when a custom S3 endpoint is entered. |
| connectors[n].inputs[n].zone | string | "" | AWS S3 region. |
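A single S3 input combining these values might be sketched as follows; the identifier, bucket, and folder names are placeholders:

```yaml
connectors:
  - name: "s3"
    enable: true
    inputs:
      - identifier: "s3-input-1"     # placeholder; minimum 3 characters
        bucket: "samples-in"         # placeholder; must already exist
        folder: "incoming"           # only samples in this folder are processed
        zone: "us-east-1"
        priority: 1                  # highest priority
        postActionsEnabled: true     # sort analyzed files by classification
        maliciousBucket: ""          # empty: reuse the input bucket
        maliciousDestination: "malware"
        deleteSourceFile: false
```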

Configmap Configuration values for SMTP connector

| Key | Type | Default | Description |
|---|---|---|---|
| connectors[n].myNetworks | list | [] | List of authorized networks. Required when profileType is secure. |
| connectors[n].profileType | string | "default" | The SMTP profile determines the Postfix configuration options. Allowed values: 'default' (more open, but less secure) and 'secure' (more closed, but more secure). For the secure profile, you must add the list of trusted remote SMTP clients, which will appear under 'mynetworks' in the Postfix configuration. |
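A secure SMTP profile sketch based on the values above; the network ranges are placeholders for your trusted SMTP clients:

```yaml
connectors:
  - name: "smtp"
    enable: true
    profileType: "secure"
    # Placeholder trusted networks; rendered as 'mynetworks' in Postfix
    myNetworks:
      - "10.0.0.0/8"
      - "192.168.1.0/24"
```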

Other Values

| Key | Type | Default | Description |
|---|---|---|---|
| c1000.releaseName | string | nil | Sets the Spectra Detect Manager release that Detect Hub should connect to. |
| c1000.releaseNamespace | string | "" | Sets the Spectra Detect Manager release namespace. Leave blank to use the current namespace. |
| filebeat.config | string | nil | Overrides the Filebeat configuration. |
| filebeat.enable | bool | true | |
| filebeat.image | string | "docker.elastic.co/beats/filebeat" | |
| filebeat.output.console.pretty | bool | false | |
| filebeat.resources.limits.cpu | string | "1000m" | |
| filebeat.resources.limits.memory | string | "1500Mi" | |
| filebeat.resources.requests.cpu | string | "100m" | |
| filebeat.resources.requests.memory | string | "100Mi" | |
| filebeat.tag | string | "8.12.0" | |
| fullnameOverride | string | "" | |
| icapConnector | object | - | Configuration of the ICAP Connector service. |
| icapConnector.service.annotations | object | {} | Additional annotations to add to the Service metadata. |
| icapConnector.service.port | int | 1344 | Port. |
| icapConnector.service.type | string | "ClusterIP" | Type of Kubernetes service to create. |
| image.pullPolicy | string | "Always" | |
| image.repository | string | "registry.reversinglabs.com/detect/images/detect-hub-mono" | |
| image.tag | string | "latest-dev" | |
| imagePullSecrets[0].name | string | "rl-registry-key" | |
| nameOverride | string | "" | |
| persistence | object | - | Persistence values configure options for the PersistentVolumeClaim used for storing samples. |
| persistence.accessModes | list | ["ReadWriteOnce"] | Access mode. |
| persistence.requestStorage | string | "10Gi" | Requested storage. |
| persistence.storageClassName | string | nil | Storage class name. |
| podAnnotations | object | {} | |
| podSecurityContext | object | {} | |
| resources.limits.memory | string | "4Gi" | |
| resources.requests.cpu | string | "1000m" | |
| resources.requests.memory | string | "4Gi" | |
| securityContext.capabilities.add[0] | string | "SYS_NICE" | |
| securityContext.capabilities.add[1] | string | "NET_ADMIN" | |
| securityContext.capabilities.add[2] | string | "SYS_ADMIN" | |
| securityContext.privileged | bool | false | |
| service.httpPort | int | 80 | |
| service.httpsPort | int | 443 | |
| service.type | string | "ClusterIP" | |
| serviceAccount.annotations | object | {} | |
| serviceAccount.create | bool | true | |
| serviceAccount.name | string | nil | |
| smtpConnector | object | - | Configuration of the SMTP Connector service. |
| smtpConnector.service.annotations | object | {} | Additional annotations to add to the Service metadata. |
| smtpConnector.service.port | int | 25 | Port. |
| smtpConnector.service.type | string | "ClusterIP" | Type of Kubernetes service to create. |
| smtpConnector.ssl.crtFile | string | "" | Path to the SSL certificate file. Used if SSL/TLS termination is required. |
| smtpConnector.ssl.keyFile | string | "" | Path to the SSL private key file. Used together with crtFile for enabling SSL/TLS. |
| worker.hostname | string | "" | Sets the Worker hostname. If empty, a tiscale-worker LoadBalancer Service will be created. |
| worker.releaseName | string | nil | Sets the Spectra Detect Worker release that Detect Hub should connect to. |
| worker.service.httpPort | int | 80 | HTTP port. |
| worker.service.httpsPort | int | 443 | HTTPS port. |
| worker.service.type | string | "LoadBalancer" | Type of Kubernetes service to create. |
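Pointing the Hub at an existing Worker while exposing the SMTP connector could be sketched as follows; the hostname is illustrative:

```yaml
worker:
  hostname: "worker.example.internal"  # illustrative; leave empty to create a LoadBalancer Service
  service:
    httpPort: 80
    httpsPort: 443
smtpConnector:
  service:
    type: "ClusterIP"
    port: 25
```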