Version: Spectra Detect 5.7.0

Configuration References

Processing

Configmap configuration secrets

| Secret | Type | Description | Used in deployments (Pods) |
|---|---|---|---|
| `<release_name>-secret-worker-api-token` | Optional | Token secret containing the token used to protect all endpoints with the `/api/` prefix, e.g. file upload. | Auth |
| `<release_name>-secret-worker-api-task-token` | Optional | Token secret containing the token used to protect the `/api/tiscale/v1/task` endpoints. If left empty, the mentioned API is protected by `<release_name>-secret-worker-api-token`. | Auth |
| `<release_name>-secret-worker-cloud` | Required when related feature is enabled | Basic authentication secret containing the username and password for Spectra Intelligence authentication. Required when Spectra Intelligence is enabled (`configuration.cloud.enabled`). | Processor, Retry Processor, Preprocessor, Postprocessor, Receiver |
| `<release_name>-secret-worker-cloud-proxy` | Required when related feature is enabled | Basic authentication secret containing the username and password for Spectra Intelligence Proxy authentication. Required when Spectra Intelligence Proxy is enabled (`configuration.cloud.proxy.enabled`). | Processor, Retry Processor, Preprocessor, Postprocessor, Receiver, Cloud Cache |
| `<release_name>-secret-worker-aws` | Required when related feature is enabled | Basic authentication secret containing the username and password for AWS authentication. Required if any type of S3 storage (File, SNS, Report, Unpacked) is enabled (`configuration.s3.enabled`, `configuration.sns.enabled`, `configuration.reportS3.enabled`, `configuration.unpackedS3.enabled`). | Postprocessor |
| `<release_name>-secret-worker-azure` | Required when related feature is enabled | Basic authentication secret containing the username and password for Azure authentication. Required if any type of ADL storage (File, Report, Unpacked) is enabled (`configuration.adl.enabled`, `configuration.reportAdl.enabled`, `configuration.unpackedAdl.enabled`). | Postprocessor |
| `<release_name>-secret-worker-ms-graph` | Required when related feature is enabled | Basic authentication secret containing the username and password for Microsoft Cloud Storage authentication. Required if any type of Microsoft Cloud storage (File, Report, Unpacked) is enabled (`configuration.msGraph.enabled`, `configuration.reportMsGraph.enabled`, `configuration.unpackedMsGraph.enabled`). | Postprocessor |
| `<release_name>-secret-unpacked-s3` | Optional | Secret containing only the password used to encrypt the archive file. Relevant only when `configuration.unpackedS3.archiveUnpacked` is set to true. | Postprocessor |
| `<release_name>-secret-report-s3` | Optional | Secret containing only the password used to encrypt the archive file. Relevant only when `configuration.reportS3.archiveSplitReport` is set to true. | Postprocessor |
| `<release_name>-secret-unpacked-adl` | Optional | Secret containing only the password used to encrypt the archive file. Relevant only when `configuration.unpackedAdl.archiveUnpacked` is set to true. | Postprocessor |
| `<release_name>-secret-report-adl` | Optional | Secret containing only the password used to encrypt the archive file. Relevant only when `configuration.reportAdl.archiveSplitReport` is set to true. | Postprocessor |
| `<release_name>-secret-unpacked-ms-graph` | Optional | Secret containing only the password used to encrypt the archive file. Relevant only when `configuration.unpackedMsGraph.archiveUnpacked` is set to true. | Postprocessor |
| `<release_name>-secret-report-ms-graph` | Optional | Secret containing only the password used to encrypt the archive file. Relevant only when `configuration.reportMsGraph.archiveSplitReport` is set to true. | Postprocessor |
| `<release_name>-secret-splunk` | Optional | Token secret containing the token for Splunk authentication. Relevant only if the Splunk integration is enabled (`configuration.splunk.enabled`). | Postprocessor |
| `<release_name>-secret-archive-zip` | Optional | Secret containing only the password. Relevant only when `configuration.archive.fileWrapper` is set to "zip" or "mzip". | Postprocessor |
| `<release_name>-secret-sa-integration-token` | Required when related feature is enabled | Token secret containing the token used for authentication on Spectra Analyze when the Spectra Analyze integration is enabled (`configuration.spectraAnalyzeIntegration.enabled`). This token should be created in Spectra Analyze. | Postprocessor |
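
These Secrets must exist before the release is installed. As a rough sketch, a token Secret for a release named `detect` could be created with a manifest like the one below; the release name and the data key name (`token`) are assumptions for illustration, so check the chart templates for the exact keys each deployment mounts.

```yaml
# Hypothetical manifest for <release_name>-secret-worker-api-token,
# assuming a release named "detect" and a data key named "token".
apiVersion: v1
kind: Secret
metadata:
  name: detect-secret-worker-api-token
type: Opaque
stringData:
  token: "replace-with-a-strong-random-token"  # protects all /api/ endpoints
```
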

Configmap Configuration values for Worker pods

| Key | Type | Default | Description |
|---|---|---|---|
| `appliance.configMode` | string | `"STANDARD"` | Configuration mode of the appliance. Allowed values: CONFIGMAP (configuration is provided via ConfigMap), STANDARD (configuration is provided through the UI). |
| `configuration.a1000` | object | - | Integration with the Spectra Analyze appliance. |
| `configuration.a1000.host` | string | `""` | The hostname or IP address of the A1000 appliance associated with the Worker. |
| `configuration.adl` | object | - | Settings for storing files in an Azure Data Lake container. |
| `configuration.adl.container` | string | `""` | The name of the Azure Data Lake container that will be used for storage. Required when storing files in ADL is enabled. |
| `configuration.adl.enabled` | bool | `false` | Enable or disable the storage of processed files. |
| `configuration.adl.folder` | string | `""` | The name of the folder in the container where files will be stored. |
| `configuration.apiServer` | object | - | Configures a custom Worker IP address which is included in the response when uploading a file to the Worker for processing. |
| `configuration.apiServer.host` | string | `""` | The hostname or IP address of the Worker. Only necessary if the default IP address or network interface is incorrect. |
| `configuration.archive` | object | - | After processing, files can be zipped before external storage. Available only for S3 and Azure. |
| `configuration.archive.fileWrapper` | string | `""` | Whether files should be compressed into a ZIP archive before uploading to external storage. Supported values: zip, mzip. If left blank, files are uploaded in their original format. |
| `configuration.archive.zipCompress` | int | `0` | ZIP compression level used when storing files in a ZIP file. Allowed range: 0 (no compression) to 9 (maximum compression). |
| `configuration.archive.zipMaxfiles` | int | `0` | Maximum number of files that can be stored in one ZIP archive. Allowed range: 1-65535. 0 means unlimited. |
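
To illustrate how these archive options fit together, a values.yaml fragment such as the following would wrap stored files into ZIP archives before upload (nesting assumed to mirror the dotted key paths above; the archive password itself comes from the `<release_name>-secret-archive-zip` secret):

```yaml
configuration:
  archive:
    fileWrapper: "zip"   # or "mzip"; leave empty to upload files unarchived
    zipCompress: 6       # 0 = no compression ... 9 = maximum compression
    zipMaxfiles: 1000    # 0 = unlimited files per archive
```
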
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.authentication` | object | - | Authentication settings for the Detect Worker. |
| `configuration.authentication.enabled` | bool | `false` | Enable/disable authentication on the Detect Worker ingress APIs. |
| `configuration.authentication.externalAuthUrl` | string | `""` | If set, an external/custom authentication service is used for authentication; otherwise, a simple token service is deployed which protects paths with the tokens defined in the secrets. |
| `configuration.aws` | object | - | Configuration of the integration with AWS or AWS-compatible storage, used for SNS and for uploading files and analysis reports to S3. |
| `configuration.aws.caPath` | string | `""` | Path on the file system pointing to the certificate of a custom (self-hosted) S3 server. |
| `configuration.aws.endpointUrl` | string | `""` | Only required in non-AWS setups in order to store files on an S3-compatible server. When this parameter is left blank, the default is https://aws.amazonaws.com. Supported pattern(s): `https?://.+`. |
| `configuration.aws.maxReattempts` | int | `5` | Maximum number of retries when saving a report to an S3-compatible server. |
| `configuration.aws.payloadSigningEnabled` | bool | `false` | Whether to include a SHA-256 checksum with Amazon Signature Version 4 payloads. |
| `configuration.aws.region` | string | `"us-east-1"` | The AWS geographical region where the S3 bucket is located. Required parameter; ignored for non-AWS setups. |
| `configuration.aws.serverSideEncryption` | string | `""` | The encryption algorithm used on the target S3 bucket (e.g. aws:kms or AES256). |
| `configuration.aws.sslVerify` | bool | `false` | Enable/disable SSL verification. |
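
As a sketch, pointing the Worker at a self-hosted, S3-compatible server might look like this; the endpoint URL and certificate path are placeholders, and the credentials come from the `<release_name>-secret-worker-aws` secret:

```yaml
configuration:
  aws:
    endpointUrl: "https://s3.example.internal:9000"  # placeholder S3-compatible endpoint
    caPath: "/etc/ssl/certs/s3-ca.pem"               # placeholder path to the server's CA
    sslVerify: true
    region: "us-east-1"                              # ignored for non-AWS setups
    maxReattempts: 5
```
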
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.awsRole` | object | - | Configures the AWS IAM roles used to access S3 buckets without sharing secret keys. The IAM role used to obtain temporary tokens has to be created in the AWS console. |
| `configuration.awsRole.enableArn` | bool | `false` | Enables or disables this entire feature. |
| `configuration.awsRole.externalRoleId` | string | `""` | The external ID of the role that will be assumed. This can be any string. Usually, it's an ID provided by the entity which uses (but doesn't own) an S3 bucket. The owner of that bucket takes that external ID and builds an ARN with it. |
| `configuration.awsRole.refreshBuffer` | int | `5` | Number of seconds before the token timeout at which a new ARN token is fetched. |
| `configuration.awsRole.roleArn` | string | `""` | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which then allows it to save to S3 buckets without a secret access key. |
| `configuration.awsRole.roleSessionName` | string | `""` | Name of the session visible in AWS logs. Can be any string. |
| `configuration.awsRole.tokenDuration` | int | `900` | How long before the authentication token expires and is refreshed. The minimum value is 900 seconds. |
| `configuration.azure` | object | - | Configures the integration with Azure Data Lake Gen2 for storing processed files in Azure Data Lake containers. |
| `configuration.azure.endpointSuffix` | string | `"core.windows.net"` | The suffix for the address of your Azure Data Lake container. |
| `configuration.callback` | object | - | Settings for automatically sending file analysis reports via a POST request. |
| `configuration.callback.advancedFilterEnabled` | bool | `false` | Enable/disable the advanced filter. |
| `configuration.callback.advancedFilterName` | string | `""` | Name of the advanced filter. |
| `configuration.callback.caPath` | string | `""` | If the url parameter is configured to use HTTPS, this parameter sets the path to the certificate file and automatically enables SSL verification. If left blank or not configured, SSL verification is disabled and the certificate is not validated. |
| `configuration.callback.enabled` | bool | `false` | Enable/disable the connection. |
| `configuration.callback.maliciousOnly` | bool | `false` | When set, the report will only contain malicious and suspicious children. |
| `configuration.callback.reportType` | string | `"medium"` | Specifies which report type is returned. By default, or when empty, only the medium (summary) report is provided in the callback response. Set to extended_small, small, medium, or large to view the results of filtering the full report. |
| `configuration.callback.splitReport` | bool | `false` | By default, reports contain information on the parent file and all extracted child files. If set to true, reports for extracted files are separated from the full report and saved as standalone files. If any user-defined data was appended to the analyzed parent file, it is included in every split child report. |
| `configuration.callback.sslVerify` | bool | `false` | Enable/disable SSL verification. |
| `configuration.callback.timeout` | int | `5` | Number of seconds to wait before the POST request times out. On failure, the Worker retries the request up to six times, increasing the waiting time between requests after the second retry fails. With the default timeout, the total possible waiting time before a request finally fails is 159 seconds. |
| `configuration.callback.topContainerOnly` | bool | `false` | If set to true, the reports will only contain metadata for the top container. Reports for unpacked files will not be generated. |
| `configuration.callback.url` | string | `""` | The full URL used to send the callback POST request. Both HTTP and HTTPS are supported. If left blank, reports are not sent and the callback feature is disabled. Supported pattern(s): `https?://.+`. |
| `configuration.callback.view` | string | `""` | Specifies whether a custom report view should be applied to the report. |
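
For example, a minimal callback setup over HTTPS could be sketched as follows (the receiver URL and certificate path are placeholders; setting caPath implicitly enables SSL verification, as described above):

```yaml
configuration:
  callback:
    enabled: true
    url: "https://reports.example.internal/callback"  # placeholder receiver endpoint
    caPath: "/etc/ssl/certs/callback-ca.pem"          # enables SSL verification
    reportType: "medium"
    timeout: 5
```
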
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.cef` | object | - | Configures Common Event Format (CEF) settings. CEF is an extensible, text-based logging and auditing format that uses a standard header and a variable extension, formatted as key-value pairs. |
| `configuration.cef.cefMsgHashType` | string | `"md5"` | The type of hash included in CEF messages. Supported values: md5, sha1, sha256. |
| `configuration.cef.enableCefMsg` | bool | `false` | Enable or disable sending CEF messages to syslog. Defaults to false to avoid flooding. |
| `configuration.classify` | object | - | Settings for Worker analysis and classification of files using the Spectra Core static analysis engine. |
| `configuration.classify.certificates` | bool | `true` | Enable checking whether the file certificate passes certificate validation, in addition to checking certificate whitelists and blacklists. |
| `configuration.classify.documents` | bool | `true` | Enable document format threat detection. |
| `configuration.classify.emails` | bool | `true` | Enable detection of phishing and other email threats. |
| `configuration.classify.hyperlinks` | bool | `true` | Enable detection of embedded hyperlinks. |
| `configuration.classify.ignoreAdware` | bool | `false` | When set to true, classification results that match adware are ignored. |
| `configuration.classify.ignoreHacktool` | bool | `false` | When set to true, classification results that match hacktool are ignored. |
| `configuration.classify.ignorePacker` | bool | `false` | When set to true, classification results that match packer are ignored. |
| `configuration.classify.ignoreProtestware` | bool | `false` | When set to true, classification results that match protestware are ignored. |
| `configuration.classify.ignoreRiskware` | bool | `false` | When set to true, classification results that match riskware are ignored. |
| `configuration.classify.ignoreSpam` | bool | `false` | When set to true, classification results that match spam are ignored. |
| `configuration.classify.ignoreSpyware` | bool | `false` | When set to true, classification results that match spyware are ignored. |
| `configuration.classify.images` | bool | `true` | When true, the heuristic image classifier for supported file formats is used. |
| `configuration.classify.pecoff` | bool | `true` | When true, the heuristic Windows executable classifier for supported PE file formats is used. |
| `configuration.cleanup` | object | - | Configures how often the Worker file system is cleaned up. |
| `configuration.cleanup.fileAgeLimit` | int | `1440` | Time before an unprocessed file present on the appliance is deleted, in minutes. |
| `configuration.cleanup.taskAgeLimit` | int | `90` | Time before analysis reports and records of processed tasks are deleted, in minutes. |
| `configuration.cleanup.taskUnprocessedLimit` | int | `1440` | Time before an incomplete processing task is canceled, in minutes. |
| `configuration.cloud` | object | - | Configures the integration with the Spectra Intelligence service or a T1000 instance to receive additional classification information. |
| `configuration.cloud.enabled` | bool | `false` | Enable/disable the connection. |
| `configuration.cloud.proxy` | object | - | Configure an optional proxy connection. |
| `configuration.cloud.proxy.enabled` | bool | `false` | Enable/disable the proxy server. |
| `configuration.cloud.proxy.port` | int | `8080` | The TCP port number if using an HTTP proxy. Allowed range(s): 1-65535. Required only if a proxy is used. |
| `configuration.cloud.proxy.server` | string | `""` | Proxy hostname or IP address for routing requests from the appliance to Spectra Intelligence. Required only if a proxy is used. |
| `configuration.cloud.server` | string | `"https://appliance-api.reversinglabs.com"` | Hostname or IP address of the Spectra Intelligence server. Required if the Spectra Intelligence integration is enabled. Format: `https://<ip_or_hostname>`. |
| `configuration.cloud.timeout` | int | `6` | Number of seconds to wait when connecting to Spectra Intelligence before terminating the connection request. |
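
Putting the cloud settings together, a values.yaml fragment enabling Spectra Intelligence through a proxy might look like the sketch below; the proxy host is a placeholder, and the actual credentials are supplied via the `<release_name>-secret-worker-cloud` and `<release_name>-secret-worker-cloud-proxy` secrets:

```yaml
configuration:
  cloud:
    enabled: true
    server: "https://appliance-api.reversinglabs.com"
    timeout: 6
    proxy:
      enabled: true
      server: "proxy.example.internal"  # placeholder proxy host
      port: 8080
```
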
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.cloudAutomation` | object | - | Configures the Worker to automatically submit files to Spectra Intelligence for antivirus scanning, in addition to local static analysis and remote reputation lookup (from previous antivirus scans). |
| `configuration.cloudAutomation.dataChangeSubscribe` | bool | `false` | Subscribe to the Spectra Intelligence data change notification mechanism. |
| `configuration.cloudAutomation.spexUpload` | object | - | Scanning settings. |
| `configuration.cloudAutomation.spexUpload.enabled` | bool | `false` | Enable/disable this feature. |
| `configuration.cloudAutomation.spexUpload.rescanEnabled` | bool | `true` | Enable/disable rescanning files upon submission, based on the configured interval, to include the latest AV results in the reports. |
| `configuration.cloudAutomation.spexUpload.rescanThresholdInDays` | int | `3` | The interval in days for triggering an AV rescan. If the last scan is older than the specified value, a rescan is initiated. A value of 0 means files are rescanned on every submission. |
| `configuration.cloudAutomation.spexUpload.scanUnpackedFiles` | bool | `false` | Enable/disable sending unpacked files to Deep Cloud Analysis for scanning. Consumes roughly double the processing resources compared to standard analysis. |
| `configuration.cloudAutomation.waitForAvScansTimeoutInMinutes` | int | `240` | The maximum wait time (in minutes) for Deep Cloud Analysis to complete. If the timeout is reached, the report is generated without the latest AV results. |
| `configuration.cloudAutomation.waitForAvScansToFinish` | bool | `false` | If set to true, delays report generation until Deep Cloud Analysis completes, ensuring the latest AV results are included. |
| `configuration.cloudCache.cacheMaxSizePercentage` | float | `6.25` | Maximum cache size expressed as a percentage of the total RAM allocated to the Worker. Allowed range: 5-15. |
| `configuration.cloudCache.cleanupWindow` | int | `10` | How often to run the cache cleanup process, in minutes. It is advisable for this value to be lower than, or at most equal to, the TTL value. Allowed range: 5-60. |
| `configuration.cloudCache.enabled` | bool | `true` | Enable or disable the caching feature. |
| `configuration.cloudCache.maxIdleUpstreamConnections` | int | `50` | The maximum number of idle upstream connections. Allowed range: 10-50. |
| `configuration.cloudCache.ttl` | int | `240` | Time to live for cached records, in minutes. Allowed range: 1-7200. |
| `configuration.general.maxUploadSizeMb` | int | `2048` | The largest file (in MB) that the Worker will accept and start processing. Ignored if Spectra Intelligence is connected and file upload limits are set there. |
| `configuration.general.postprocessingCheckThresholdMins` | int | `720` | How often the post-processing service is checked for timeouts, in minutes. If any issues are detected, the process is restarted. |
| `configuration.general.tsWorkerCheckThresholdMins` | int | `720` | How often the processing service is checked for timeouts, in minutes. If any issues are detected, the process is restarted. |
| `configuration.general.uploadSizeLimitEnabled` | bool | `false` | Whether or not the upload size filter is active. Ignored if Spectra Intelligence is connected and file upload limits are set there. |
| `configuration.hashes` | object | - | Spectra Core calculates file hashes during analysis and includes them in the analysis report. The following options configure which additional hash types should be calculated and included in the Worker report. SHA1 and SHA256 are always included and therefore aren't configurable. Selecting additional hash types (especially SHA384 and SHA512) may slow report generation. |
| `configuration.hashes.enableCrc32` | bool | `false` | Include CRC32 hashes in reports. |
| `configuration.hashes.enableMd5` | bool | `true` | Include MD5 hashes in reports. |
| `configuration.hashes.enableSha384` | bool | `false` | Include SHA384 hashes in reports. |
| `configuration.hashes.enableSha512` | bool | `false` | Include SHA512 hashes in reports. |
| `configuration.hashes.enableSsdeep` | bool | `false` | Include SSDEEP hashes in reports. |
| `configuration.hashes.enableTlsh` | bool | `false` | Include TLSH hashes in reports. |
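
For instance, adding similarity hashes to reports while skipping the heavier SHA variants could be sketched as:

```yaml
configuration:
  hashes:
    enableMd5: true
    enableSsdeep: true   # similarity hash, useful for clustering
    enableTlsh: true     # similarity hash
    enableSha384: false  # heavier hashes may slow report generation
    enableSha512: false
```
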
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.health` | object | - | System health check configuration. |
| `configuration.health.disk_high` | int | `95` | Threshold for high disk usage. |
| `configuration.health.enabled` | bool | `true` | Enable/disable the system health check. |
| `configuration.health.queue_high` | int | `2000` | The maximum number of items allowed in the queue. If it exceeds the configured value, the appliance starts rejecting traffic. Allowed range(s): 10+. |
| `configuration.logging` | object | - | Configures the severity above which events are logged or sent to a remote syslog server. Severity can be: INFO, WARNING, or ERROR. |
| `configuration.logging.tiscaleLogLevel` | string | `"INFO"` | Events below this level are not saved to logs (/var/log/messages and /var/log/tiscale/*.log). |
| `configuration.msGraph.enabled` | bool | `false` | Turns the Microsoft Cloud Storage file integration on or off. |
| `configuration.msGraph.folder` | string | `""` | Folder where samples will be stored in Microsoft Cloud Storage. |
| `configuration.msGraphGeneral` | object | - | General options for the Microsoft Cloud Storage integration. |
| `configuration.msGraphGeneral.customDomain` | string | `""` | The application's custom domain configured in the Azure portal. |
| `configuration.msGraphGeneral.siteHostname` | string | `""` | Used only if storageType is set to sharepoint. This is the SharePoint hostname. |
| `configuration.msGraphGeneral.siteRelativePath` | string | `""` | The SharePoint Online site relative path. Used only when storageType is set to sharepoint. |
| `configuration.msGraphGeneral.storageType` | string | `"onedrive"` | The storage type. Supported values: onedrive, sharepoint. |
| `configuration.msGraphGeneral.username` | string | `""` | Used only if storageType is set to onedrive. Specifies which user's drive will be used. |
| `configuration.processing` | object | - | Configure the Worker file processing capabilities to improve performance and load balancing. |
| `configuration.processing.cacheEnabled` | bool | `false` | Enable/disable caching. When enabled, Spectra Core can skip reprocessing the same files (duplicates) if they are uploaded consecutively within a short period. |
| `configuration.processing.cacheTimeToLive` | int | `0` | If file processing caching is enabled, how long (in seconds) analysis reports are preserved in the cache before they expire. A value of 0 uses the default. Default: 600. Maximum: 86400. |
| `configuration.processing.depth` | int | `0` | Specifies how "deep" a file is unpacked. By default (0), Workers unpack files recursively until no more files can be unpacked. A value greater than 0 limits the recursion depth, which can speed up analyses but provides less detail. |
| `configuration.processing.largefileThreshold` | int | `100` | If advanced mode is enabled, files larger than this threshold (in MB) are processed individually, one by one. This parameter is ignored in standard mode. |
| `configuration.processing.mode` | int | `2` | The Worker processing mode, used to improve load balancing. Supported modes: standard (1) and advanced (2). |
| `configuration.processing.timeout` | int | `28800` | How many seconds the Worker waits for a file to process before terminating the task. Default: 28800. Maximum: 259200. |
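
A possible tuning of these processing options, shown only as a sketch (the values below are illustrative, not recommendations):

```yaml
configuration:
  processing:
    mode: 2                  # 1 = standard, 2 = advanced
    largefileThreshold: 100  # MB; only honored in advanced mode
    cacheEnabled: true
    cacheTimeToLive: 600     # seconds; 0 falls back to the default of 600
    depth: 0                 # 0 = unpack recursively to full depth
    timeout: 28800           # seconds before an unfinished task is terminated
```
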
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.propagation` | object | - | Advanced classification propagation options supported by the Spectra Core static analysis engine. When Spectra Core classifies files, the classification of a child file can be applied to the parent file. |
| `configuration.propagation.enabled` | bool | `true` | Enable/disable the classification propagation feature. When propagation is enabled, files can be classified based on the content extracted from them: files containing a malicious or suspicious file are themselves considered malicious or suspicious. |
| `configuration.propagation.goodwareOverridesEnabled` | bool | `true` | Enable/disable goodware overrides. When enabled, any files extracted from a parent file and whitelisted by certificate, source, or user override can no longer be classified as malicious or suspicious. This is an advanced goodware whitelisting technique that can be used to reduce the amount of false positive detections. |
| `configuration.propagation.goodwareOverridesFactor` | int | `1` | When goodware overrides are enabled, determines the trust factor up to which overrides are applied. Supported values are 0 to 5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Overrides apply to files with a trust factor equal to or lower than the configured value. |
| `configuration.report` | object | - | Configure the contents of the Spectra Detect file analysis report. |
| `configuration.report.firstReportOnly` | bool | `false` | If disabled, reports for samples with child files include relationships for all descendant files. Enabling this setting includes relationship metadata only for the root parent file, to reduce redundancy. |
| `configuration.report.includeStrings` | bool | `false` | When enabled, strings are included in the file analysis report. Spectra Core can extract strings from binaries; this can be useful but may result in extensive metadata. To reduce noise, the types of included strings can be customized in the strings section. |
| `configuration.report.networkReputation` | bool | `false` | If enabled, analysis reports include a top-level network_reputation object with reputation information for every extracted network resource. This feature requires Spectra Intelligence to be configured on the Worker and the `configuration.ticore.processingMode` option set to "best". |
| `configuration.report.relationships` | bool | `false` | Includes sample relationship metadata in the file analysis report. When enabled, the relationships section lists the hashes of files found within the given file. |
| `configuration.reportAdl` | object | - | Settings that configure how reports saved to Azure Data Lake are formatted. |
| `configuration.reportAdl.archiveSplitReport` | bool | `true` | Enable sending a single, smaller archive of split report files to ADL instead of each individual file. Relevant only when the split report option is used. |
| `configuration.reportAdl.container` | string | `""` | Container where reports will be stored. Required when this feature is enabled. |
| `configuration.reportAdl.enabled` | bool | `false` | Enable/disable storing file processing reports to ADL. |
| `configuration.reportAdl.filenameTimestampFormat` | string | `""` | The naming pattern for the report file itself: a timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. |
| `configuration.reportAdl.folder` | string | `""` | The name of a folder where analysis reports will be stored. If the folder name is not provided, files are stored in the root of the configured container. |
| `configuration.reportAdl.folderOption` | string | `"date_based"` | The naming pattern used when automatically creating subfolders for storing analysis reports. Supported options: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| `configuration.reportAdl.maliciousOnly` | bool | `false` | When set, the report will only contain malicious and suspicious children. |
| `configuration.reportAdl.reportType` | string | `"large"` | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report: fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| `configuration.reportAdl.splitReport` | bool | `false` | By default, reports contain information on parent files and all extracted child files. When this option is enabled, analysis reports for extracted files are separated from their parent file report and saved as individual report files. |
| `configuration.reportAdl.timestampEnabled` | bool | `true` | Enable/disable appending a timestamp to the report name. |
| `configuration.reportAdl.topContainerOnly` | bool | `false` | When enabled, the file analysis report will only include metadata for the top container, and subreports for unpacked files will not be generated. |
| `configuration.reportAdl.view` | string | `""` | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the /usr/libexec/ts-report-views.d directory on the Spectra Detect Worker. |
| `configuration.reportApi` | object | - | Settings applied to the file analysis report fetched using the GET endpoint. |
| `configuration.reportApi.maliciousOnly` | bool | `false` | The report contains only malicious and suspicious children. |
| `configuration.reportApi.reportType` | string | `"large"` | Specify the report type that should be applied to the Worker analysis report. Report types are results of filtering the full report: fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| `configuration.reportApi.topContainerOnly` | bool | `false` | When enabled, the file analysis report will only include metadata for the top container, and subreports for unpacked files will not be generated. |
| `configuration.reportApi.view` | string | `""` | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the /usr/libexec/ts-report-views.d directory on the Spectra Detect Worker. |
| `configuration.reportMsGraph` | object | - | Settings that configure how reports saved to OneDrive or SharePoint are formatted. |
| `configuration.reportMsGraph.archiveSplitReport` | bool | `true` | Enable sending a single, smaller archive of split report files to Microsoft Cloud Storage instead of each individual file. Relevant only when the split report option is used. |
| `configuration.reportMsGraph.enabled` | bool | `false` | Enable/disable storing file processing reports. |
| `configuration.reportMsGraph.filenameTimestampFormat` | string | `""` | The naming pattern for the report file itself: a timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. |
| `configuration.reportMsGraph.folder` | string | `""` | Folder where report files will be stored on Microsoft Cloud Storage. If the folder name is not provided, files are stored in the root of the configured container. |
| `configuration.reportMsGraph.folderOption` | string | `"date_based"` | The naming pattern used when automatically creating subfolders for storing analysis reports. Supported options: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| `configuration.reportMsGraph.maliciousOnly` | bool | `false` | When set, the report will only contain malicious and suspicious children. |
| `configuration.reportMsGraph.reportType` | string | `"large"` | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report: fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| `configuration.reportMsGraph.splitReport` | bool | `false` | By default, reports contain information on parent files and all extracted child files. When this option is enabled, analysis reports for extracted files are separated from their parent file report and saved as individual report files. |
| `configuration.reportMsGraph.topContainerOnly` | bool | `false` | When enabled, the file analysis report will only include metadata for the top container, and subreports for unpacked files will not be generated. |
| `configuration.reportMsGraph.view` | string | `""` | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the /usr/libexec/ts-report-views.d directory on the Spectra Detect Worker. |
| `configuration.reportS3` | object | - | Settings that configure how reports saved to S3 buckets are formatted. |
| `configuration.reportS3.advancedFilterEnabled` | bool | `false` | Enable/disable usage of the advanced filter. |
| `configuration.reportS3.advancedFilterName` | string | `""` | Name of the advanced filter. |
| `configuration.reportS3.archiveSplitReport` | bool | `true` | Enable sending a single, smaller archive of split report files to S3 instead of each individual file. Relevant only when the split report option is used. |
| `configuration.reportS3.bucketName` | string | `""` | Name of the S3 bucket where reports will be stored. Required when this feature is enabled. |
| `configuration.reportS3.enabled` | bool | `false` | Enable/disable storing file processing reports to S3. |
| `configuration.reportS3.filenameTimestampFormat` | string | `""` | The naming pattern for the report file itself: a timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. |
| `configuration.reportS3.folder` | string | `""` | Folder where report files will be stored in the given S3 bucket. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers, and the special characters "!", "-", "_", ".", "*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. |
| `configuration.reportS3.folderOption` | string | `"date_based"` | The naming pattern used when automatically creating subfolders for storing analysis reports. Supported options: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| `configuration.reportS3.maliciousOnly` | bool | `false` | When set, the report will only contain malicious and suspicious children. |
| `configuration.reportS3.reportType` | string | `"large"` | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report: fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. |
| `configuration.reportS3.splitReport` | bool | `false` | By default, reports contain information on parent files and all extracted child files. When this option is enabled, analysis reports for extracted files are separated from their parent file report and saved as individual report files. |
| `configuration.reportS3.timestampEnabled` | bool | `true` | Enable/disable appending a timestamp to the report name. |
| `configuration.reportS3.topContainerOnly` | bool | `false` | When enabled, the file analysis report will only include metadata for the top container, and subreports for unpacked files will not be generated. |
| `configuration.reportS3.view` | string | `""` | Apply a view for transforming report data to the "large" report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the /usr/libexec/ts-report-views.d directory on the Spectra Detect Worker. |
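
Tying the reportS3 options together, storing split reports to a bucket as a single archive might be sketched like this (the bucket and folder names are placeholders; AWS credentials come from the `<release_name>-secret-worker-aws` secret):

```yaml
configuration:
  reportS3:
    enabled: true
    bucketName: "detect-reports"   # placeholder bucket name
    folder: "worker-reports"       # placeholder folder name
    folderOption: "date_based"     # YYYY/mm/dd/HH subfolders
    reportType: "large"
    splitReport: true
    archiveSplitReport: true       # ship split reports as one archive
```
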
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.s3` | object | - | Settings for storing a copy of all files uploaded to the Worker for analysis on an S3 or a third-party, S3-compatible server. |
| `configuration.s3.advancedFilterEnabled` | bool | `false` | Enable/disable usage of the advanced filter. |
| `configuration.s3.advancedFilterName` | string | `""` | Name of the advanced filter. |
| `configuration.s3.bucketName` | string | `""` | Name of the S3 bucket where processed files will be stored. Required when this feature is enabled. |
| `configuration.s3.enabled` | bool | `false` | Enable/disable storing processed files on S3. |
| `configuration.s3.folder` | string | `""` | The name of a folder where analyzed files will be stored. If the folder name is not provided, files are stored in the root of the configured bucket. |
| `configuration.s3.storeMetadata` | bool | `true` | When true, analysis metadata will be stored on the uploaded S3 object. |
| `configuration.scaling` | object | - | Configures the number of concurrent processes and the number of files analyzed concurrently. Parameters in this section can be used to optimize file processing performance on the Worker. |
| `configuration.scaling.postprocessing` | int | `1` | How many post-processing instances to run. Post-processing instances modify and save reports or upload processed files to external storage. Increasing this value can increase throughput on servers with extra available cores. Maximum: 256. |
| `configuration.scaling.preprocessingUnpacker` | int | `1` | How many copies of Spectra Core are used to unpack samples for Deep Cloud Analysis. This setting only has an effect if Deep Cloud Analysis is enabled with the Scan Unpacked Files capability. |
| `configuration.scaling.processing` | int | `1` | How many Spectra Core engine instances to run. Each instance starts threads to process files. Maximum: 256. |
| `configuration.sns` | object | - | Configures settings for publishing notifications about file processing status, with links to the reports, to an Amazon SNS (Simple Notification Service) topic. |
| `configuration.sns.enabled` | bool | `false` | Enable/disable publishing notifications to Amazon SNS. |
| `configuration.sns.topic` | string | `""` | The SNS topic ARN that notifications should be published to. Prerequisite: the AWS account in the AWS settings must be given permission to publish to this topic. Required when this feature is enabled. |
| `configuration.spectraAnalyzeIntegration` | object | - | Configuration settings for uploading processed samples to a configured Spectra Analyze instance. |
| `configuration.spectraAnalyzeIntegration.address` | string | `""` | The Spectra Analyze address. Required when this feature is enabled. Has to be in the following format: `https://<ip_or_hostname>`. |
| `configuration.spectraAnalyzeIntegration.advancedFilterEnabled` | bool | `true` | Enable/disable the advanced filter. |
| `configuration.spectraAnalyzeIntegration.advancedFilterName` | string | `"default_filter"` | Name of the advanced filter. |
| `configuration.spectraAnalyzeIntegration.enabled` | bool | `false` | Enable/disable integration with Spectra Analyze. |
| `configuration.splunk` | object | - | Configures integration with Splunk, a logging server that can receive Spectra Detect file analysis reports. |
| `configuration.splunk.caPath` | string | `""` | Path to the certificate. |
| `configuration.splunk.chunkSizeMb` | int | `0` | The maximum size (MB) of a single request sent to Splunk. If an analysis report exceeds this size, it is split into multiple parts along its subreport boundaries (for child files). A request can contain one or multiple subreports, as long as its total size doesn't exceed this limit. The report is never split by size alone: complete subreports are always preserved and sent to Splunk. Default: 0 (disabled). |
| `configuration.splunk.enabled` | bool | `false` | Enable/disable the Splunk integration. |
| `configuration.splunk.host` | string | `""` | The hostname or IP address of the Splunk server that the Worker appliance should connect to. |
| `configuration.splunk.https` | bool | `true` | If set to true, HTTPS is used for sending information to Splunk. If set to false, HTTP is used. |
| `configuration.splunk.port` | int | `8088` | The TCP port of the Splunk server's HTTP Event Collector. |
| `configuration.splunk.reportType` | string | `"large"` | Specifies which report type is sent to Splunk. Set to small, medium, or large to view the results of filtering the full report. |
| `configuration.splunk.sslVerify` | bool | `false` | If HTTPS is enabled, setting this to true enables certificate verification. |
| `configuration.splunk.timeout` | int | `5` | How many seconds to wait for a response from the Splunk server before the request fails. If the request fails, the report is not uploaded to the Splunk server and an error is logged. The timeout value must be at least 1 and not greater than 999. |
| `configuration.splunk.topContainerOnly` | bool | `false` | Whether or not Splunk should receive the report for the top (parent) file only. If set to true, no subreports are sent. |
| `configuration.splunk.view` | string | `""` | Specifies whether a custom report view should be applied to the file analysis report and returned in the response. |
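
As an illustration, sending reports to a Splunk HTTP Event Collector over verified HTTPS could be sketched as follows; the host and certificate path are placeholders, and the HEC token is supplied via the `<release_name>-secret-splunk` secret:

```yaml
configuration:
  splunk:
    enabled: true
    host: "splunk.example.internal"          # placeholder Splunk host
    port: 8088                               # default HEC port
    https: true
    sslVerify: true
    caPath: "/etc/ssl/certs/splunk-ca.pem"   # placeholder certificate path
    timeout: 5
```
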
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.strings` | object | - | Configure the output of strings extracted from files during Spectra Core static analysis. |
| `configuration.strings.enableStringExtraction` | bool | `false` | If set to true, user-provided criteria for string extraction will be used. |
| `configuration.strings.maxLength` | int | `32768` | Maximum number of characters in strings. |
| `configuration.strings.minLength` | int | `4` | Minimum number of characters in strings. Strings shorter than this value are not extracted. |
| `configuration.strings.unicodePrintable` | bool | `false` | Specify whether strings are Unicode printable or not. |
| `configuration.strings.utf16be` | bool | `true` | Allow/disallow extracting UTF-16BE strings. |
| `configuration.strings.utf16le` | bool | `true` | Allow/disallow extracting UTF-16LE strings. |
| `configuration.strings.utf32be` | bool | `false` | Allow/disallow extracting UTF-32BE strings. |
| `configuration.strings.utf32le` | bool | `false` | Allow/disallow extracting UTF-32LE strings. |
| `configuration.strings.utf8` | bool | `true` | Allow/disallow extracting UTF-8 strings. |
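
Because extracted strings only appear in reports when `configuration.report.includeStrings` is also on, a working string-extraction sketch touches both sections (values are illustrative):

```yaml
configuration:
  report:
    includeStrings: true       # strings are reported only when this is enabled
  strings:
    enableStringExtraction: true
    minLength: 6               # drop very short, noisy strings
    maxLength: 32768
    utf8: true
    utf16le: true
    utf16be: false
```
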
| Key | Type | Default | Description |
|---|---|---|---|
| `configuration.ticore` | object | - | Configures options supported by Spectra Core. |
| `configuration.ticore.maxDecompressionFactor` | float | `1.0` | Decimal value between 0 and 999.9. If more decimal places are given, the value is rounded to one decimal place. Protects against intentional or unintentional archive bombs by terminating decompression if the size of the unpacked content exceeds a set quota. |
| `configuration.ticore.mwpExtended` | bool | `false` | Enable/disable information from antivirus engines in Spectra Intelligence. Requires Spectra Intelligence to be configured. |
| `configuration.ticore.mwpGoodwareFactor` | int | `2` | Determines when a file classified as KNOWN in the Spectra Intelligence cloud is classified as goodware by Spectra Core. By default, all KNOWN cloud classifications are converted to goodware. Supported values are 0-5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Lowering the value reduces the number of samples classified as goodware. Samples with a trust factor above the configured value are considered UNKNOWN. Requires Spectra Intelligence to be configured. |
| `configuration.ticore.processingMode` | string | `"best"` | Determines which file formats are unpacked by Spectra Core for detailed analysis. "best" fully processes all supported formats; "fast" processes a limited set. |
| `configuration.ticore.useXref` | bool | `false` | Enabling the XREF service enriches analysis reports with cross-reference metadata such as AV scanner results. Requires Spectra Intelligence to be configured. |
| `configuration.unpackedAdl` | object | - | Settings for storing extracted files in an Azure Data Lake container. |
| `configuration.unpackedAdl.archiveUnpacked` | bool | `true` | Enable sending a single, smaller archive of unpacked files to ADL instead of each unpacked file. |
| `configuration.unpackedAdl.container` | string | `""` | The name of the Azure Data Lake container where extracted files will be saved. Required when this feature is enabled. |
| `configuration.unpackedAdl.enabled` | bool | `false` | Enable/disable storing extracted files to ADL. |
| `configuration.unpackedAdl.folder` | string | `""` | The name of a folder in the configured Azure container where extracted files will be stored. If the folder name is not provided, files are stored in the root of the configured container. |
| `configuration.unpackedAdl.folderOption` | string | `"date_based"` | The naming pattern used when automatically creating subfolders for storing analyzed files. Supported options: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| `configuration.unpackedMsGraph` | object | - | Settings for storing extracted files to Microsoft Cloud Storage. |
| `configuration.unpackedMsGraph.archiveUnpacked` | bool | `true` | Enable sending a single, smaller archive of unpacked files to Microsoft Cloud Storage instead of each unpacked file. |
| `configuration.unpackedMsGraph.enabled` | bool | `false` | Enable/disable storing extracted files. |
| `configuration.unpackedMsGraph.folder` | string | `""` | Folder where unpacked files will be stored on Microsoft Cloud Storage. If the folder name is not provided, files are stored in the root of the configured container. |
| `configuration.unpackedMsGraph.folderOption` | string | `"date_based"` | The naming pattern used when automatically creating subfolders for storing analyzed files. Supported options: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| `configuration.unpackedS3` | object | - | Settings for storing extracted files to an S3 bucket. |
| `configuration.unpackedS3.advancedFilterEnabled` | bool | `false` | Enable/disable the use of advanced filters. |
| `configuration.unpackedS3.advancedFilterName` | string | `""` | Name of the advanced filter. |
| `configuration.unpackedS3.archiveUnpacked` | bool | `true` | Enable sending a single, smaller archive of unpacked files to S3 instead of each unpacked file. |
| `configuration.unpackedS3.bucketName` | string | `""` | The name of the S3 bucket where extracted files will be saved. Required when this feature is enabled. |
| `configuration.unpackedS3.enabled` | bool | `false` | Enable/disable storing extracted files in S3. |
| `configuration.unpackedS3.folder` | string | `""` | The name of a folder in the configured S3 bucket where extracted files will be stored. If the folder name is not provided, files are stored in the root of the configured bucket. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers, and the special characters "!", "-", "_", ".", "*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. |
| `configuration.unpackedS3.folderOption` | string | `"date_based"` | The naming pattern used when automatically creating subfolders for storing analyzed files. Supported options: date_based (YYYY/mm/dd/HH), datetime_based (YYYY/mm/dd/HH/MM/SS), and sha1_based (using the first 4 characters of the file hash). |
| `configuration.wordlist` | list | - | List of passwords for password-protected files. |
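
A sketch of how the wordlist might be supplied, assuming it is a plain YAML list of candidate passwords (the entries below are placeholders):

```yaml
configuration:
  wordlist:
    - "infected"   # common convention for malware sample archives
    - "password"
    - "letmein"    # placeholder; supply your own candidates
```
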

Other Values

| Key | Type | Default | Description |
|---|---|---|---|
| `advancedFilters` | object | `{}` | Contains key-value pairs in which keys are filter names and values are the filter definitions. |
| `auth.image.pullPolicy` | string | `"Always"` |  |
| `auth.image.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-auth"` |  |
| `auth.image.tag` | string | `"latest-dev"` |  |
| `auth.resources.limits.cpu` | string | `"4000m"` |  |
| `auth.resources.limits.memory` | string | `"256Mi"` |  |
| `auth.resources.requests.cpu` | string | `"500m"` |  |
| `auth.resources.requests.memory` | string | `"128Mi"` |  |
| `auth.serverPort` | int | `8080` |  |
| `authReverseProxy.image.pullPolicy` | string | `"Always"` |  |
| `authReverseProxy.image.repository` | string | `"nginx"` |  |
| `authReverseProxy.image.tag` | string | `"stable"` |  |
| `authReverseProxy.resources.limits.cpu` | string | `"2000m"` |  |
| `authReverseProxy.resources.limits.memory` | string | `"512Mi"` |  |
| `authReverseProxy.resources.requests.cpu` | string | `"250m"` |  |
| `authReverseProxy.resources.requests.memory` | string | `"128Mi"` |  |
| `authReverseProxy.serverPort` | int | `80` |  |
| `cleanup.failedJobsHistoryLimit` | int | `1` | Number of failed finished jobs to keep. |
| `cleanup.image.pullPolicy` | string | `"Always"` |  |
| `cleanup.image.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-utilities"` |  |
| `cleanup.image.tag` | string | `"latest-dev"` |  |
| `cleanup.resources.limits.cpu` | string | `"2000m"` |  |
| `cleanup.resources.limits.memory` | string | `"2Gi"` |  |
| `cleanup.resources.requests.cpu` | string | `"1000m"` |  |
| `cleanup.resources.requests.memory` | string | `"1Gi"` |  |
| `cleanup.startingDeadlineSeconds` | int | `180` | Deadline (in seconds) for starting the Job if it misses its scheduled time for any reason. After missing the deadline, the CronJob skips that instance of the Job. |
| `cleanup.successfulJobsHistoryLimit` | int | `1` | Number of successful finished jobs to keep. |
| `cloudCache.image.pullPolicy` | string | `"Always"` |  |
| `cloudCache.image.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-cloud-cache"` |  |
| `cloudCache.image.tag` | string | `"latest-dev"` |  |
| `cloudCache.resources.limits.cpu` | string | `"4000m"` |  |
| `cloudCache.resources.limits.memory` | string | `"4Gi"` |  |
| `cloudCache.resources.requests.cpu` | string | `"1000m"` |  |
| `cloudCache.resources.requests.memory` | string | `"1Gi"` |  |
| `cloudCache.serverPort` | int | `8080` |  |
| `global.umbrella` | bool | `false` |  |
| `imagePullSecrets[0]` | string | `"rl-registry-key"` |  |
| `ingress.annotations` | object | `{}` |  |
| `ingress.className` | string | `"nginx"` |  |
| `ingress.enabled` | bool | `true` |  |
| `ingress.host` | string | `""` |  |
| `ingress.paths[0].path` | string | `"/"` |  |
| `ingress.paths[0].pathType` | string | `"Prefix"` |  |
| `ingress.tls.certificateArn` | string | `""` |  |
| `ingress.tls.issuer` | string | `""` |  |
| `ingress.tls.issuerKind` | string | `"Issuer"` |  |
| `ingress.tls.secretName` | string | `"tls-tiscale-worker"` |  |
| `monitoring.enabled` | bool | `false` | Enable/disable monitoring with Prometheus. |
| `monitoring.prometheusReleaseName` | string | `"kube-prometheus-stack"` | Prometheus release name. |
| `persistence` | object | - | Persistence values configure options for the PersistentVolumeClaim used for storing samples and reports. |
| `persistence.accessModes` | list | `["ReadWriteMany"]` | Access mode. When autoscaling or multiple Workers are used, this should be set to `["ReadWriteMany"]`. |
| `persistence.requestStorage` | string | `"10Gi"` | Requested storage. |
| `persistence.storageClassName` | string | `nil` | Storage class name. When autoscaling or multiple Workers are used, the storage class should support "ReadWriteMany". |
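
For example, a persistence setup suitable for autoscaling or multiple Workers could be sketched as follows; the storage class name and size are placeholders that depend on your cluster:

```yaml
persistence:
  accessModes:
    - "ReadWriteMany"          # required for autoscaling / multiple Workers
  requestStorage: "50Gi"       # placeholder; size to the expected sample volume
  storageClassName: "efs-sc"   # placeholder; must support ReadWriteMany
```
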
| Key | Type | Default | Description |
|---|---|---|---|
| `postgres.releaseName` | string | `""` | Postgres release name; required when the deployment is not done with the umbrella Helm chart. |
| `postprocessor.autoscaling.enabled` | bool | `false` | Enable/disable autoscaling. |
| `postprocessor.autoscaling.maxReplicas` | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled. |
| `postprocessor.autoscaling.minReplicas` | int | `1` | Minimum number of replicas that need to be deployed. |
| `postprocessor.autoscaling.pollingInterval` | int | `10` | Interval at which each trigger is checked, in seconds. |
| `postprocessor.autoscaling.scaleDown` | object | `{"stabilizationWindow":180}` | Scale-down configuration values. |
| `postprocessor.autoscaling.scaleDown.stabilizationWindow` | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale-down starts. |
| `postprocessor.autoscaling.scaleUp` | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | Scale-up configuration values. |
| `postprocessor.autoscaling.scaleUp.numberOfPods` | int | `1` | Number of pods that can be scaled in the defined period. |
| `postprocessor.autoscaling.scaleUp.period` | int | `30` | Interval in which the numberOfPods value is applied. |
| `postprocessor.autoscaling.scaleUp.stabilizationWindow` | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale-up starts. |
| `postprocessor.autoscaling.targetInputQueueSize` | int | `10` | Number of messages in the backlog that triggers scaling. |
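
The autoscaling keys combine as in the sketch below (shown for the postprocessor; the preprocessor, preprocessorUnpacker, processor, and processorRetry sections follow the same shape, while the receiver scales on CPU via triggerCPUValue instead):

```yaml
postprocessor:
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 8
    pollingInterval: 10        # seconds between trigger checks
    targetInputQueueSize: 10   # backlog size that triggers scaling
    scaleUp:
      numberOfPods: 1          # pods added per period
      period: 30
      stabilizationWindow: 15  # seconds the condition must hold before scaling up
    scaleDown:
      stabilizationWindow: 180 # seconds the condition must fail before scaling down
```
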
| Key | Type | Default | Description |
|---|---|---|---|
| `postprocessor.image.pullPolicy` | string | `"Always"` |  |
| `postprocessor.image.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-postprocessor"` |  |
| `postprocessor.image.tag` | string | `"latest-dev"` |  |
| `postprocessor.resources.limits.cpu` | string | `"8000m"` |  |
| `postprocessor.resources.limits.memory` | string | `"16Gi"` |  |
| `postprocessor.resources.requests.cpu` | string | `"2500m"` |  |
| `postprocessor.resources.requests.memory` | string | `"2Gi"` |  |
| `preprocessor.autoscaling.enabled` | bool | `false` | Enable/disable autoscaling. |
| `preprocessor.autoscaling.maxReplicas` | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled. |
| `preprocessor.autoscaling.minReplicas` | int | `1` | Minimum number of replicas that need to be deployed. |
| `preprocessor.autoscaling.pollingInterval` | int | `10` | Interval at which each trigger is checked, in seconds. |
| `preprocessor.autoscaling.scaleDown` | object | `{"stabilizationWindow":180}` | Scale-down configuration values. |
| `preprocessor.autoscaling.scaleDown.stabilizationWindow` | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale-down starts. |
| `preprocessor.autoscaling.scaleUp` | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | Scale-up configuration values. |
| `preprocessor.autoscaling.scaleUp.numberOfPods` | int | `1` | Number of pods that can be scaled in the defined period. |
| `preprocessor.autoscaling.scaleUp.period` | int | `30` | Interval in which the numberOfPods value is applied. |
| `preprocessor.autoscaling.scaleUp.stabilizationWindow` | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale-up starts. |
| `preprocessor.autoscaling.targetInputQueueSize` | int | `10` | Number of messages in the backlog that triggers scaling. |
| `preprocessor.image.pullPolicy` | string | `"Always"` |  |
| `preprocessor.image.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-preprocessor"` |  |
| `preprocessor.image.tag` | string | `"latest-dev"` |  |
| `preprocessor.replicaCount` | int | `1` |  |
| `preprocessor.resources.limits.cpu` | string | `"4000m"` |  |
| `preprocessor.resources.limits.memory` | string | `"4Gi"` |  |
| `preprocessor.resources.requests.cpu` | string | `"1000m"` |  |
| `preprocessor.resources.requests.memory` | string | `"1Gi"` |  |
| `preprocessorUnpacker.autoscaling.enabled` | bool | `false` | Enable/disable autoscaling. |
| `preprocessorUnpacker.autoscaling.maxReplicas` | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled. |
| `preprocessorUnpacker.autoscaling.minReplicas` | int | `1` | Minimum number of replicas that need to be deployed. |
| `preprocessorUnpacker.autoscaling.pollingInterval` | int | `10` | Interval at which each trigger is checked, in seconds. |
| `preprocessorUnpacker.autoscaling.scaleDown` | object | `{"stabilizationWindow":180}` | Scale-down configuration values. |
| `preprocessorUnpacker.autoscaling.scaleDown.stabilizationWindow` | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale-down starts. |
| `preprocessorUnpacker.autoscaling.scaleUp` | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | Scale-up configuration values. |
| `preprocessorUnpacker.autoscaling.scaleUp.numberOfPods` | int | `1` | Number of pods that can be scaled in the defined period. |
| `preprocessorUnpacker.autoscaling.scaleUp.period` | int | `30` | Interval in which the numberOfPods value is applied. |
| `preprocessorUnpacker.autoscaling.scaleUp.stabilizationWindow` | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale-up starts. |
| `preprocessorUnpacker.autoscaling.targetInputQueueSize` | int | `10` | Number of messages in the backlog that triggers scaling. |
| `preprocessorUnpacker.replicaCount` | int | `1` |  |
| `preprocessorUnpacker.resources.limits.cpu` | string | `"16000m"` |  |
| `preprocessorUnpacker.resources.limits.memory` | string | `"16Gi"` |  |
| `preprocessorUnpacker.resources.requests.cpu` | string | `"4000m"` |  |
| `preprocessorUnpacker.resources.requests.memory` | string | `"4Gi"` |  |
| `preprocessorUnpacker.scaling.prefetchCount` | int | `4` | Maximum number of individual files that can be processed simultaneously by a single instance of Spectra Core. When one file finishes processing, another from the queue enters the processing state. A value of 0 sets the maximum to the number of CPU cores on the system. |
| `processor.autoscaling.enabled` | bool | `false` | Enable/disable autoscaling. |
| `processor.autoscaling.maxReplicas` | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled. |
| `processor.autoscaling.minReplicas` | int | `1` | Minimum number of replicas that need to be deployed. |
| `processor.autoscaling.pollingInterval` | int | `10` | Interval at which each trigger is checked, in seconds. |
| `processor.autoscaling.scaleDown` | object | `{"stabilizationWindow":180}` | Scale-down configuration values. |
| `processor.autoscaling.scaleDown.stabilizationWindow` | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale-down starts. |
| `processor.autoscaling.scaleUp` | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | Scale-up configuration values. |
| `processor.autoscaling.scaleUp.numberOfPods` | int | `1` | Number of pods that can be scaled in the defined period. |
| `processor.autoscaling.scaleUp.period` | int | `30` | Interval in which the numberOfPods value is applied. |
| `processor.autoscaling.scaleUp.stabilizationWindow` | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale-up starts. |
| `processor.autoscaling.targetInputQueueSize` | int | `10` | Number of messages in the backlog that triggers scaling. |
| `processor.image.pullPolicy` | string | `"Always"` |  |
| `processor.image.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-processor"` |  |
| `processor.image.tag` | string | `"latest-dev"` |  |
| `processor.replicaCount` | int | `1` |  |
| `processor.resources.limits.cpu` | string | `"16000m"` |  |
| `processor.resources.limits.memory` | string | `"32Gi"` |  |
| `processor.resources.requests.cpu` | string | `"4000m"` |  |
| `processor.resources.requests.memory` | string | `"4Gi"` |  |
| `processor.scaling.prefetchCount` | int | `8` | Maximum number of individual files that can be processed simultaneously by a single instance of Spectra Core. When one file finishes processing, another from the queue enters the processing state. A value of 0 sets the maximum to the number of CPU cores on the system. |
| `processorRetry.autoscaling.enabled` | bool | `false` | Enable/disable autoscaling. |
| `processorRetry.autoscaling.maxReplicas` | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled. |
| `processorRetry.autoscaling.minReplicas` | int | `1` | Minimum number of replicas that need to be deployed. |
| `processorRetry.autoscaling.pollingInterval` | int | `10` | Interval at which each trigger is checked, in seconds. |
| `processorRetry.autoscaling.scaleDown` | object | `{"stabilizationWindow":180}` | Scale-down configuration values. |
| `processorRetry.autoscaling.scaleDown.stabilizationWindow` | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale-down starts. |
| `processorRetry.autoscaling.scaleUp` | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | Scale-up configuration values. |
| `processorRetry.autoscaling.scaleUp.numberOfPods` | int | `1` | Number of pods that can be scaled in the defined period. |
| `processorRetry.autoscaling.scaleUp.period` | int | `30` | Interval in which the numberOfPods value is applied. |
| `processorRetry.autoscaling.scaleUp.stabilizationWindow` | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale-up starts. |
| `processorRetry.autoscaling.targetInputQueueSize` | int | `10` | Number of messages in the backlog that triggers scaling. |
| `processorRetry.replicaCount` | int | `1` |  |
| `processorRetry.resources.limits.cpu` | string | `"16000m"` |  |
| `processorRetry.resources.limits.memory` | string | `"64Gi"` |  |
| `processorRetry.resources.requests.cpu` | string | `"4000m"` |  |
| `processorRetry.resources.requests.memory` | string | `"8Gi"` |  |
| `processorRetry.scaling.prefetchCount` | int | `1` | Maximum number of individual files that can be processed simultaneously by a single instance of Spectra Core. When one file finishes processing, another from the queue enters the processing state. A value of 0 sets the maximum to the number of CPU cores on the system. The recommended value for this type of processor is 1. |
| `rabbitmq.releaseName` | string | `""` | RabbitMQ release name; required when the deployment is not done using the umbrella Helm chart. |
| `receiver.autoscaling.enabled` | bool | `false` | Enable/disable autoscaling. |
| `receiver.autoscaling.maxReplicas` | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled. |
| `receiver.autoscaling.minReplicas` | int | `1` | Minimum number of replicas that need to be deployed. |
| `receiver.autoscaling.pollingInterval` | int | `10` | Interval at which each trigger is checked, in seconds. |
| `receiver.autoscaling.scaleDown` | object | `{"stabilizationWindow":180}` | Scale-down configuration values. |
| `receiver.autoscaling.scaleDown.stabilizationWindow` | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale-down starts. |
| `receiver.autoscaling.scaleUp` | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":30}` | Scale-up configuration values. |
| `receiver.autoscaling.scaleUp.numberOfPods` | int | `1` | Number of pods that can be scaled in the defined period. |
| `receiver.autoscaling.scaleUp.period` | int | `30` | Interval in which the numberOfPods value is applied. |
| `receiver.autoscaling.scaleUp.stabilizationWindow` | int | `30` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale-up starts. |
| `receiver.autoscaling.triggerCPUValue` | int | `75` | CPU percentage that triggers scaling when reached. The percentage is taken from the resources.limits.cpu value. Limits have to be set. |
| `receiver.image.pullPolicy` | string | `"Always"` |  |
| `receiver.image.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-receiver"` |  |
| `receiver.image.tag` | string | `"latest-dev"` |  |
| `receiver.initImage.pullPolicy` | string | `"Always"` |  |
| `receiver.initImage.repository` | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-utilities"` |  |
| `receiver.initImage.tag` | string | `"latest-dev"` |  |
| `receiver.replicaCount` | int | `1` |  |
| `receiver.resources.limits.cpu` | string | `"4000m"` |  |
| `receiver.resources.limits.memory` | string | `"8Gi"` |  |
| `receiver.resources.requests.cpu` | string | `"1500m"` |  |
| `receiver.resources.requests.memory` | string | `"1Gi"` |  |
| `report.autoscaling.enabled` | bool | `false` | Enable/disable autoscaling. |
| `report.autoscaling.maxReplicas` | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled. |
| `report.autoscaling.minReplicas` | int | `1` | Minimum number of replicas that need to be deployed. |
| `report.autoscaling.pollingInterval` | int | `10` | Interval at which each trigger is checked, in seconds. |
| `report.autoscaling.scaleDown` | object | `{"stabilizationWindow":180}` | Scale-down configuration values. |
| `report.autoscaling.scaleDown.stabilizationWindow` | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale-down starts. |
| `report.autoscaling.scaleUp` | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":30}` | Scale-up configuration values. |
| `report.autoscaling.scaleUp.numberOfPods` | int | `1` | Number of pods that can be scaled in the defined period. |
report.autoscaling.scaleUp.periodint30Interval in which the numberOfPods value is applied
report.autoscaling.scaleUp.stabilizationWindowint30Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started.
report.autoscaling.triggerCPUValueint75CPU value (in percentage), which will cause scaling when reached. The percentage is taken from the resource.limits.cpu value. Limits have to be set.
report.image.pullPolicystring"Always"
report.image.repositorystring"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-report"
report.image.tagstring"latest-dev"
report.resources.limits.cpustring"8000m"
report.resources.limits.memorystring"8Gi"
report.resources.requests.cpustring"2000m"
report.resources.requests.memorystring"2Gi"
reportTypesobject{}Contains key-value pairs where keys are the report type names and values are the report type definitions.
securityContext.privilegedboolfalse
tcScratchobject-tcScratch values configure generic ephemeral volume options for the Spectra Core /tc-scratch directory.
tcScratch.accessModeslist["ReadWriteOnce"]Access modes.
tcScratch.requestStoragestring"100Gi"Requested storage size for the ephemeral volume.
tcScratch.storageClassNamestringnilSets the storage class for the ephemeral volume. If not set, emptyDir is used instead of an ephemeral volume.
tclibs.autoscaling.enabledboolfalseEnable/disable autoscaling
tclibs.autoscaling.maxReplicasint8Maximum number of replicas that can be deployed when scaling in enabled
tclibs.autoscaling.minReplicasint1Minimum number of replicas that need to be deployed
tclibs.autoscaling.pollingIntervalint10Interval to check each trigger. In seconds.
tclibs.autoscaling.scaleDownobject{"stabilizationWindow":180}ScaleDown configuration values
tclibs.autoscaling.scaleDown.stabilizationWindowint180Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started.
tclibs.autoscaling.scaleUpobject{"numberOfPods":1,"period":30,"stabilizationWindow":30}ScaleUp configuration values
tclibs.autoscaling.scaleUp.numberOfPodsint1Number of pods that can be scaled in the defined period
tclibs.autoscaling.scaleUp.periodint30Interval in which the numberOfPods value is applied
tclibs.autoscaling.scaleUp.stabilizationWindowint30Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started.
tclibs.autoscaling.triggerCPUValueint75CPU value (in percentage), which will cause scaling when reached. The percentage is taken from the resource.limits.cpu value. Limits have to be set.
tclibs.image.pullPolicystring"Always"
tclibs.image.repositorystring"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-tclibs"
tclibs.image.tagstring"latest-dev"
tclibs.replicaCountint1
tclibs.resources.limits.cpustring"2000m"
tclibs.resources.limits.memorystring"2Gi"
tclibs.resources.requests.cpustring"1000m"
tclibs.resources.requests.memorystring"1Gi"
yaraSync.enabledboolfalse
yaraSync.image.pullPolicystring"Always"
yaraSync.image.repositorystring"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-yara-sync"
yaraSync.image.tagstring"latest-dev"
yaraSync.replicaCountint1
yaraSync.resources.limits.cpustring"2000m"
yaraSync.resources.limits.memorystring"2Gi"
yaraSync.resources.requests.cpustring"1000m"
yaraSync.resources.requests.memorystring"1Gi"
yaraSync.serverPortint8080
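
Taken together, the processor scaling keys map onto a values override like the following minimal sketch. Key paths are taken from the table above; the exact nesting should still be verified against the chart's values.yaml.

```yaml
# Minimal values.yaml sketch: autoscaling and prefetch for the processor.
processor:
  scaling:
    prefetchCount: 8             # up to 8 files in flight per Spectra Core instance; 0 = one per CPU core
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 8
    pollingInterval: 10          # seconds between trigger checks
    targetInputQueueSize: 10     # backlog size that triggers scaling
    scaleUp:
      numberOfPods: 1            # pods added per period
      period: 30
      stabilizationWindow: 15    # condition must hold this long before scale-up
    scaleDown:
      stabilizationWindow: 180   # condition must fail this long before scale-down
```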

Connector S3

Secrets

| Secret (fullNameOverride is set) | Secret (deployment with umbrella) | Secret (deployment without umbrella) | Type | Description |
|---|---|---|---|---|
| <fullNameOverride>-secret-<input.identifier> | <Release.Name>-connector-s3-secret-<input.identifier> | <Release.Name>-secret-<input.identifier> | Required | Authentication secret used to connect to AWS S3 or any S3-compatible storage system. |

Values

| Key | Type | Default | Description |
|---|---|---|---|
| configuration.dbCleanupPollInterval | int | 7200 | Interval, in seconds, at which the database cleanup runs. |
| configuration.dbCleanupSampleThresholdInDays | int | 21 | Number of past days for which data is preserved. |
| configuration.diskHighPercent | int | 0 | Disk high-usage threshold, in percent. |
| configuration.inputs | list | [] | Configuration for S3 File Storage Inputs (see the dedicated table below). |
| configuration.maxFileSize | int | 0 | The maximum sample size in bytes that will be transmitted from the connector to the appliance for analysis. A value of 0 disables the limit. |
| configuration.maxUploadDelayTime | int | 10000 | Delay in milliseconds. When the Worker cluster is under high load, this parameter is used to delay new uploads to it; the delay is multiplied by an internal factor determined by the load on the Worker cluster. |
| configuration.maxUploadRetries | int | 100 | Number of times the connector will attempt to upload a file to the processing appliance. Once the retry limit is reached, the file is saved to the error_files/ destination or discarded. |
| configuration.uploadTimeout | int | 10000 | Period (in milliseconds) between upload attempts when a sample is re-uploaded. |
| configuration.uploadTimeoutAlgorithm | string | "exponential" | Enum: "exponential", "linear". The algorithm used to manage delays between re-uploading samples to the processing appliance. With exponential backoff, the delay doubles on each retry, starting from the uploadTimeout value, up to a maximum of 5 minutes. Linear backoff always waits the uploadTimeout value between re-uploads. |
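
With the defaults above (uploadTimeout of 10000 ms), exponential backoff yields re-upload delays of roughly 10 s, 20 s, 40 s, 80 s, and so on, capped at 5 minutes, while linear backoff waits 10 s before every retry. A minimal values sketch of the retry settings follows; nesting under `configuration` mirrors the dotted key paths in the table, but should be confirmed against the chart's values.yaml.

```yaml
# Sketch of the connector upload/retry settings.
configuration:
  maxFileSize: 0                        # 0 disables the size limit
  maxUploadRetries: 100                 # after this, the file goes to error_files/ or is discarded
  uploadTimeout: 10000                  # ms between upload attempts
  uploadTimeoutAlgorithm: exponential   # or "linear"
  maxUploadDelayTime: 10000             # ms base delay when the Worker cluster is under load
```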

Configmap System Info values for Connector S3

| Key | Type | Default | Description |
|---|---|---|---|
| configuration.systemInfo | object | - | Configuration for S3 System Info. |
| configuration.systemInfo.centralLogging | bool | false | Enable central logging |
| configuration.systemInfo.diskHighPercent | int | 0 | Disk high percent |
| configuration.systemInfo.fetchChannelSize | int | 40 | Fetch channel size |
| configuration.systemInfo.hostUUID | string | "" | Host UUID |
| configuration.systemInfo.maxConnections | int | 10 | Max number of connections |
| configuration.systemInfo.maxSlowFetches | int | 12 | Max slow fetches |
| configuration.systemInfo.numberOfRetries | int | 300 | Number of retries |
| configuration.systemInfo.requestTimeout | int | 43200 | Timeout for requests |
| configuration.systemInfo.slowFetchChannelSize | int | 100 | Slow fetch channel size |
| configuration.systemInfo.slowFetchPause | int | 5 | Slow fetch pause |
| configuration.systemInfo.type | string | "tiscale" | Type |
| configuration.systemInfo.verifyCert | bool | false | Verify SSL certificate |
| configuration.systemInfo.version | string | "5.6.0" | Version |
| configuration.systemInfo.waitTimeout | int | 1000 | Wait timeout |
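
As a brief illustration, a systemInfo override would look like the sketch below; key names come from the table above, and only values that differ from the defaults need to be set.

```yaml
# Sketch of a systemInfo override.
configuration:
  systemInfo:
    centralLogging: true     # enable central logging
    verifyCert: true         # verify the SSL certificate
    maxConnections: 10
```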

Configmap Configuration values for the S3 connector - Configuration for S3 File Storage Input

| Key | Type | Default | Description |
|---|---|---|---|
| inputs[n] | list | - | Configmap Configuration for S3 File Storage Input. |
| inputs[n].awsEnableArn | bool | false | Enable/disable the usage of AWS IAM roles to access S3 buckets without sharing secret keys. |
| inputs[n].awsExternalRoleId | string | "" | The external ID of the role that will be assumed. This can be any string. |
| inputs[n].awsRoleArn | string | "" | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which in turn allows it to save to S3 buckets without a secret access key. |
| inputs[n].awsRoleSessionName | string | "ARNRoleSession" | Name of the session visible in AWS logs. Can be any string. |
| inputs[n].bucket | string | "" | Name of an existing S3 bucket which contains the samples to process. |
| inputs[n].deleteSourceFile | bool | false | When enabled, the connector deletes source files from S3 storage after they have been processed. Required if requireAnalyze or postActionsEnabled is true. |
| inputs[n].endpoint | string | "" | Custom S3 endpoint URL. Leave empty if using standard AWS S3. |
| inputs[n].folder | string | "" | The input folder inside the specified bucket which contains the samples to process. All other samples are ignored. |
| inputs[n].identifier | string | "" | Unique name of the S3 connection. Must contain only lowercase alphanumeric characters or hyphens (-), must start and end with an alphanumeric character, and must be between 3 and 49 characters long. |
| inputs[n].knownBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Goodware'. If empty, the input bucket will be used. |
| inputs[n].knownDestination | string | "goodware" | The folder into which the connector will store files classified as 'Goodware'. The folder is contained within the specified bucket field. |
| inputs[n].maliciousBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Malicious'. If empty, the input bucket will be used. |
| inputs[n].maliciousDestination | string | "malware" | The folder into which the connector will store files classified as 'Malicious'. The folder is contained within the specified bucket field. |
| inputs[n].paused | bool | false | Temporarily pause the continuous scanning of this Storage Input. This setting must be set to true to enable retro hunting. |
| inputs[n].postActionsEnabled | bool | false | Enable/disable post actions for S3 connectors. |
| inputs[n].priority | int | 5 | A higher priority makes it more likely that files from this bucket will be processed first. The supported range is from 1 (highest) to 5 (lowest); values outside that range are clamped to the nearest bound. Multiple buckets may share the same priority. |
| inputs[n].requireAnalyze | bool | false | Enable/disable the requirement that data processed by the connector is analyzed. |
| inputs[n].serverSideEncryptionCustomerAlgorithm | string | "" | Customer-provided encryption algorithm. |
| inputs[n].serverSideEncryptionCustomerKey | string | "" | Customer-provided encryption key. |
| inputs[n].suspiciousBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Suspicious'. If empty, the input bucket will be used. |
| inputs[n].suspiciousDestination | string | "suspicious" | The folder into which the connector will store files classified as 'Suspicious'. The folder is contained within the specified bucket field. |
| inputs[n].unknownBucket | string | "" | Specify the bucket into which the connector will store files classified as 'Unknown'. If empty, the input bucket will be used. |
| inputs[n].unknownDestination | string | "unknown" | The folder into which the connector will store files classified as 'Unknown'. The folder is contained within the specified bucket field. |
| inputs[n].verifySslCertificate | bool | true | Connect securely to the custom S3 instance. Set to false to accept untrusted certificates. Applicable only when using a custom S3 endpoint. |
| inputs[n].zone | string | "us-east-1" | AWS S3 region. |
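
A single input entry would look like the sketch below. The identifier, bucket, and folder names are illustrative placeholders; key names follow the table above, with entries nested under configuration.inputs.

```yaml
# Sketch of one S3 File Storage Input.
configuration:
  inputs:
    - identifier: s3-input-1        # 3-49 chars, lowercase alphanumerics and hyphens
      bucket: detect-samples        # existing bucket holding samples to process
      folder: incoming              # only samples under this folder are processed
      zone: us-east-1
      priority: 1                   # 1 = highest, 5 = lowest
      deleteSourceFile: false
      knownDestination: goodware
      maliciousDestination: malware
      suspiciousDestination: suspicious
      unknownDestination: unknown
      verifySslCertificate: true
```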

Other Values

| Key | Type | Default | Description |
|---|---|---|---|
| boltdb.claimName | string | nil | |
| enabled | bool | false | |
| fullNameOverride | string | "" | Overrides the connector-s3 chart full name. |
| image.imagePullPolicy | string | "Always" | |
| image.repository | string | "alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-integration-s3" | |
| image.tag | string | "latest-dev" | |
| imagePullSecrets[0] | string | "rl-registry-key" | |
| nameOverride | string | "" | Overrides the connector-s3 chart name. |
| persistence.accessModes | list | ["ReadWriteOnce"] | Access modes. |
| persistence.requestStorage | string | "10Gi" | Requested storage size. |
| persistence.storageClassName | string | "encrypted-gp2" | Storage class name. |
| receiver.baseUrl | string | nil | |
| receiver.service.httpPort | int | 80 | |
| tmp | object | - | tmp values configure generic ephemeral volume options for the connector's /data/connectors/connector-s3/tmp directory. |
| tmp.accessModes | list | ["ReadWriteOnce"] | Access modes. |
| tmp.requestStorage | string | "100Gi" | Requested storage size for the ephemeral volume. |
| tmp.storageClassName | string | nil | Sets the storage class for the ephemeral volume. If not set, emptyDir is used instead of an ephemeral volume. |
| worker.releaseName | string | nil | Sets the Spectra Detect Worker release for the connector to connect to. Required when not using the umbrella Helm chart. |
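
For a standalone (non-umbrella) deployment, these values combine roughly as in the sketch below; the release and storage class names are illustrative placeholders.

```yaml
# Sketch of a standalone connector-s3 values override.
enabled: true
worker:
  releaseName: detect-worker      # Worker release to connect to; required outside the umbrella chart
persistence:
  requestStorage: 10Gi
  storageClassName: encrypted-gp2
tmp:
  requestStorage: 100Gi
  storageClassName: null          # unset -> emptyDir instead of an ephemeral volume
```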

RabbitMQ

| Key | Type | Default | Description |
|---|---|---|---|
| affinity | object | {} | |
| createManagementAdminSecret | bool | true | A management admin secret will be created automatically with the given admin username and admin password; otherwise, the secret must already exist. |
| createUserSecret | bool | true | A user secret will be created automatically with the given username and password; otherwise, the secret must already exist. |
| global.umbrella | bool | false | |
| host | string | "" | Host for external RabbitMQ. When configured, the Detect RabbitMQ cluster won't be created. |
| managementAdminPassword | string | "" | Management admin password. If left empty, defaults to the password value. |
| managementAdminUrl | string | "" | Management admin URL. If empty, defaults to "http://<host>:15672". |
| managementAdminUsername | string | "" | Management admin username. If left empty, defaults to the username value. |
| password | string | "guest_11223" | Password |
| persistence.requestStorage | string | "5Gi" | |
| persistence.storageClassName | string | nil | |
| port | int | 5672 | RabbitMQ port |
| replicas | int | 1 | Number of replicas |
| resources.limits.cpu | string | "2" | |
| resources.limits.memory | string | "2Gi" | |
| resources.requests.cpu | string | "1" | |
| resources.requests.memory | string | "2Gi" | |
| useQuorumQueues | bool | false | Setting this to true defines queues as quorum type (recommended for multi-replica/HA setups); otherwise, queues are classic. |
| useSecureProtocol | bool | false | Setting this to true enables the secure AMQPS protocol for the RabbitMQ connection. |
| username | string | "guest" | Username |
| vhost | string | "" | Vhost. When empty, the default rl-detect vhost is used. |
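
Pointing Detect at an external RabbitMQ would look roughly like the sketch below. The host value is an illustrative placeholder, and whether these keys nest under a rabbitmq: parent depends on how the chart is consumed; verify against the chart's values.yaml.

```yaml
# Sketch: external RabbitMQ connection.
host: rabbitmq.example.internal   # setting a host skips creation of the Detect RabbitMQ cluster
port: 5672
username: guest
password: guest_11223
vhost: ""                         # empty -> default rl-detect vhost
useQuorumQueues: true             # recommended for multi-replica/HA setups
useSecureProtocol: false          # true enables AMQPS
```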

PostgreSQL

| Key | Type | Default | Description |
|---|---|---|---|
| affinity | object | {} | |
| createUserSecret | bool | true | A user secret will be created automatically with the given username and password; otherwise, the secret must already exist. |
| database | string | "tiscale" | Database name |
| global.umbrella | bool | false | |
| host | string | "" | Host for external PostgreSQL. When configured, the Detect PostgreSQL cluster won't be created. |
| image.repository | string | "ghcr.io/cloudnative-pg/postgresql" | |
| image.tag | string | "17.6" | |
| password | string | "tiscale_11223" | Password |
| persistence.accessModes[0] | string | "ReadWriteOnce" | |
| persistence.requestStorage | string | "5Gi" | |
| persistence.storageClassName | string | nil | |
| port | int | 5432 | PostgreSQL port. |
| replicas | int | 1 | Number of replicas. |
| resources.limits.cpu | string | nil | |
| resources.limits.memory | string | nil | |
| resources.requests.cpu | string | "500m" | |
| resources.requests.memory | string | "1Gi" | |
| username | string | "tiscale" | Username. Required if host is not set, because the Detect PostgreSQL cluster will be created and this user will be set as the database owner. |
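
Analogously to RabbitMQ, an external PostgreSQL connection would look roughly like the sketch below; the host value is an illustrative placeholder, and the nesting of these keys should be verified against the chart's values.yaml.

```yaml
# Sketch: external PostgreSQL connection.
host: postgres.example.internal   # setting a host skips creation of the Detect PostgreSQL cluster
port: 5432
database: tiscale
username: tiscale
password: tiscale_11223
```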