# Spectra Detect Documentation

> Enterprise file analysis platform documentation including deployment, API, and configuration. This file contains all documentation content in a single document following the llmstxt.org standard.

## Analysis Timeout Issues

File analysis timeouts can occur when processing complex or large files that require extensive analysis time. Understanding the causes and solutions helps ensure successful file processing.

## Common Causes

Analysis timeouts typically happen due to:

- **Large file sizes** - Files approaching or exceeding the size limits for your appliance tier
- **Deep nesting** - Archives containing multiple layers of compressed files
- **Extensive unpacking** - Files that trigger recursive decompression operations
- **Complex file structures** - Files with intricate internal structures requiring detailed parsing
- **Resource constraints** - Insufficient RAM or CPU allocation for the analysis workload

## Configuration Options

### Spectra Analyze

The analysis timeout can be adjusted in the appliance configuration:

1. Navigate to **Administration > Configuration**
2. Locate the analysis timeout setting
3. Increase the timeout value based on your file processing requirements
4. Save the configuration changes

### File Inspection Engine

Use the `--analysis-timeout` flag to control the per-file time limit:

```bash
rl-scan --analysis-timeout 300 /path/to/file
```

The timeout value is specified in seconds.

## Troubleshooting Steps

If analysis timeouts persist:

1. **Increase allocated resources** - Ensure the appliance or container has sufficient RAM (32 GB+ recommended) and CPU cores
2. **Check decompression ratio limits** - Verify that recursive unpacking isn't exceeding configured limits
3. **Review file characteristics** - Examine the file structure to identify potential issues
4. **Monitor system resources** - Check if the appliance is under heavy load from concurrent analyses
5. **Adjust timeout values** - Increase timeout settings for complex file processing workflows

## Related Topics

- [Platform Requirements](/General/DeploymentAndIntegration/PlatformRequirements) - Hardware specifications for different appliance tiers
- [How Spectra Core analysis works](/General/AnalysisAndClassification/SpectraCoreAnalysis) - Understanding the analysis process

---

## Antivirus Result Availability

When a sample is uploaded or rescanned in Spectra Intelligence, it will usually get new antivirus results **within 30 minutes**. When a sample has new antivirus results, these will be available in the relevant APIs, for example [TCA-0104 File analysis](/SpectraIntelligence/API/FileThreatIntel/tca-0104/).

---

## Certificate Revocation

ReversingLabs maintains a certificate revocation database that is updated with each [Spectra Core](/General/AnalysisAndClassification/SpectraCoreAnalysis) release. Because the database is offline, some recently revoked certificates may not appear as revoked until the next update.

Certificate Authority (CA) revocation alone is not sufficient to classify a sample as malicious. Most CAs backdate revocations to the certificate's issuance date, regardless of when or whether the certificate was abused. When additional context is available, ReversingLabs adjusts the revocation date to reflect the most appropriate point in time. If a certificate is whitelisted, this correction is not applied.

## Searching for Revoked Certificates

You can find samples signed with revoked certificates using **Advanced Search** with the `tag:cert-revoked` keyword. Advanced Search is available both through the [Spectra Analyze user interface](/SpectraAnalyze/search-page/) and as the [TCA-0320 Advanced Search](/SpectraIntelligence/API/MalwareHunting/tca-0320/) API.
---

## File Classification and Risk Scoring

File classification assigns a risk score (0-10) and a threat verdict (malicious, suspicious, goodware, or unknown) to every analyzed file using ReversingLabs Spectra Core. The classification algorithm combines YARA rules, machine learning, heuristics, certificate validation, and file similarity matching to determine security status. YARA rules take precedence as the most authoritative signal, followed by other detection methods that contribute to the final verdict.

The classification of a sample is based on a comprehensive assessment of its assigned risk factor, threat level, and trust factor; however, it can be manually or automatically overridden when necessary. Based on this evaluation, files are placed into one of the following buckets:

- No threats found (unclassified)
- Goodware/known
- Suspicious
- Malicious

The classification process weighs signals from all available sources to arrive at the most accurate verdict. Some signals are considered more authoritative than others and take priority. For example, [Spectra Core](/General/AnalysisAndClassification/SpectraCoreAnalysis) YARA rules always take precedence because they are written and curated by ReversingLabs analysts. These rules provide the highest degree of accuracy, as they target specific, named threats.

This does not mean that other classification methods are less important. Similarity matching, heuristics, and machine learning still contribute valuable signals and may produce additional matches. In cases where multiple detections apply, YARA rules simply serve as the deciding factor for the final classification.

## Risk score

A risk score is a value representing the trustworthiness or malicious severity of a sample. Risk score is expressed as a number from 0 to 10, with 0 indicating whitelisted samples from a reputable origin, and 10 indicating the most dangerous threats.
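As a quick illustration of how the 0-10 scale divides (a hedged sketch: the helper function and bucket labels below are ours, not part of any ReversingLabs API):

```python
# Illustrative only: maps a risk score onto the classification buckets
# described in this section. A file with no threats found has no risk
# score at all, so it is represented here as None.

def risk_score_bucket(risk_score):
    """Map a risk score (or None when no threats were found) to a bucket."""
    if risk_score is None:
        return "unclassified"             # no threats found -> no risk score
    if 0 <= risk_score <= 5:
        return "goodware/known"           # trustworthiness range
    if 6 <= risk_score <= 10:
        return "suspicious or malicious"  # severity range
    raise ValueError("risk score must be between 0 and 10")

print(risk_score_bucket(None))  # unclassified
print(risk_score_bucket(3))     # goodware/known
print(risk_score_bucket(10))    # suspicious or malicious
```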
At a glance:

- Files with no threats found don't get assigned a risk score and are therefore **unclassified**.
- Values from 0 to 5 are reserved for samples classified as **goodware/known**, and take into account the source and structural metadata of the file, among other things. Since goodware samples do not have threat names associated with them, they receive a description based on their risk score.
- Risk scores from 6 to 10 are reserved for **suspicious** and **malicious** samples, and express their severity. They are calculated by a ReversingLabs proprietary algorithm, based on many factors such as file origin, threat type, how frequently it occurs in the wild, YARA rules, and more. Lesser threats like adware get a risk score of 6, while ransomware and trojans always get a risk score of 10.

### Malware type and risk score

In cases where multiple threats are detected and there are no other factors (such as user overrides) involved, the final classification is always the one that presents the biggest threat. If they belong to the same risk score group, malware types are prioritized in this order:

| Risk score | Malware types |
|------------|---------------|
| 10 | EXPLOIT > BACKDOOR > RANSOMWARE > INFOSTEALER > KEYLOGGER > WORM > VIRUS > CERTIFICATE > PHISHING > FORMAT > TROJAN |
| 9 | ROOTKIT > COINMINER > ROGUE > BROWSER |
| 8 | DOWNLOADER > DROPPER > DIALER > NETWORK |
| 7 | SPYWARE > HYPERLINK > SPAM > MALWARE |
| 6 | ADWARE > HACKTOOL > PUA > PACKED |

## Threat level and trust factor

The [risk score table](#risk-score) describes the relationship between the risk score, and the threat level and trust factor used by the [File Reputation API](/SpectraIntelligence/API/FileThreatIntel/tca-0101).
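The prioritization table above can be sketched in code (a minimal illustration; the data structures and function names are hypothetical, not part of any ReversingLabs product):

```python
# Illustrative only: resolving the final classification when multiple
# malware types are detected, using the risk-score groups and in-group
# ordering from the table above (earlier entries in PRIORITY win).

RISK_SCORE = {
    "EXPLOIT": 10, "BACKDOOR": 10, "RANSOMWARE": 10, "INFOSTEALER": 10,
    "KEYLOGGER": 10, "WORM": 10, "VIRUS": 10, "CERTIFICATE": 10,
    "PHISHING": 10, "FORMAT": 10, "TROJAN": 10,
    "ROOTKIT": 9, "COINMINER": 9, "ROGUE": 9, "BROWSER": 9,
    "DOWNLOADER": 8, "DROPPER": 8, "DIALER": 8, "NETWORK": 8,
    "SPYWARE": 7, "HYPERLINK": 7, "SPAM": 7, "MALWARE": 7,
    "ADWARE": 6, "HACKTOOL": 6, "PUA": 6, "PACKED": 6,
}

PRIORITY = [
    "EXPLOIT", "BACKDOOR", "RANSOMWARE", "INFOSTEALER", "KEYLOGGER", "WORM",
    "VIRUS", "CERTIFICATE", "PHISHING", "FORMAT", "TROJAN",
    "ROOTKIT", "COINMINER", "ROGUE", "BROWSER",
    "DOWNLOADER", "DROPPER", "DIALER", "NETWORK",
    "SPYWARE", "HYPERLINK", "SPAM", "MALWARE",
    "ADWARE", "HACKTOOL", "PUA", "PACKED",
]

def final_classification(detected_types):
    """Pick the winning type: highest risk score first, then the
    in-group priority order from the table."""
    return min(detected_types,
               key=lambda t: (-RISK_SCORE[t], PRIORITY.index(t)))

print(final_classification(["ADWARE", "TROJAN"]))      # TROJAN (score 10 beats 6)
print(final_classification(["TROJAN", "RANSOMWARE"]))  # RANSOMWARE (earlier in the 10 group)
```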
The main difference is that the risk score maps all classifications onto one numerical scale (0-10), while the File Reputation API uses two different scales for different classifications.

### Nomenclature

The following classifications are equivalent:

| File Reputation API | Spectra Analyze | Spectra Detect Worker |
| ------------------- | --------------- | ------------------------ |
| known | goodware | 1 (in the Worker report) |

In the Worker report, the [risk score](#risk-score) is called `rca_factor`.

## Deciding sample priority

The [risk score table](#risk-score) highlights that a sample's risk score and its classification don't have a perfect correlation. This means that a sample's risk score cannot be interpreted on its own, and that the primary criterion in deciding a sample's priority is its classification.

Samples classified as suspicious can be a result of heuristics, or a possible early detection. A suspicious file may be declared malicious or known at a later time if new information is received that changes its threat profile, or if the user manually modifies its status.

The system always considers a malicious sample with a risk score of 6 a higher threat than a suspicious sample with a risk score of 10, meaning that samples classified as malicious always supersede suspicious samples, regardless of the calculated risk score. The reason for this is certainty: a malicious sample is decidedly malicious, while suspicious samples need more data to confirm the detected threat. ReversingLabs continually works to reduce the number of suspicious samples.

While a suspicious sample with a risk score of 10 does deserve user attention and shouldn't be ignored, a malicious sample with a risk score of 10 should be triaged as soon as possible.

## Malware naming standard

---

## Handling False Positives

A false positive occurs when a legitimate file is incorrectly classified as malicious.
While ReversingLabs strives for high accuracy, false positives can occasionally happen due to the complexity of malware detection across hundreds of file formats and millions of samples.

## What You Can Do

If you encounter a false positive, you have several options:

### 1. Local Classification Override

On Spectra Analyze, you can immediately override the classification using the classification override feature:

- Navigate to the file's Sample Details page
- Use the classification override option to manually set the file as goodware
- The override takes effect immediately on your appliance
- All users on the same appliance will see the updated classification

### 2. Spectra Intelligence Reclassification Request

Submit a reclassification request through Spectra Intelligence:

- The override propagates across all appliances connected to the same Spectra Intelligence account
- Other appliances in your organization will automatically receive the updated classification
- This is the recommended approach for organization-wide corrections

### 3. Goodware Overrides

Use Goodware Overrides to propagate trusted parent classifications to extracted child files:

- If a trusted parent file (e.g., from Microsoft or another reputable vendor) contains files that trigger false positives
- The parent's goodware classification can automatically override the child files
- This is particularly useful for legitimate installers that may contain components flagged by heuristics

## How ReversingLabs Handles False Positive Reports

If a customer reports a false positive (through Zendesk, or by contacting the Support team at support@reversinglabs.com), the first thing we do is re-scan the sample to make sure that the results are up to date. If the results are still malicious, our Threat Analysis team will:

1. Conduct our own research of the software and the vendor
2. Contact the AV scanners and notify them of the issue
3. Change the classification in our system (we do not wait for AVs to correct the issue)

If the file is confirmed to be a false positive, we begin by analyzing why the incorrect classification occurred. Then we try to correct the result by making adjustments related to file relationships, certificates, and AV product detection velocity (e.g. whether detections are being added or removed); we re-scan and reanalyze samples, adjust or add sources, and, if necessary, manually investigate the file. If these efforts do not yield a correct result, we have the ability to **manually override the classification**, but we only do so after thorough analysis confirms the file is benign.

---

## ReversingLabs malware naming standard

The ReversingLabs detection string consists of three main parts separated by dots. All three parts are mandatory and always appear:

```
platform-subplatform.type.familyname
```

1. The first part of the string indicates the **platform** targeted by the malware. This string is always one of the strings listed in the [Platform string](#platform-string) table. If the platform is Archive, Audio, ByteCode, Document, Image or Script, then it has a subplatform string. Platform and subplatform strings are divided by a hyphen (`-`). The lists of available strings for Archive, Audio, ByteCode, Document, Image and Script subplatforms can be found in their respective tables.
2. The second part of the detection string describes the **malware type**. Strings that appear as malware type descriptions are listed in the [Type string](#type-string) table.
3. The third and last part of the detection string represents the **malware family name**, i.e. the name given to a particular malware strain. Names "Agent", "Gen", "Heur", and other similar short generic names are not allowed. Names can't be shorter than three characters, and can't contain only numbers. Special characters (apart from `-`) must be avoided as well.
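The family-name rules can be checked mechanically. The following is a hedged sketch (the function is ours, not part of any ReversingLabs tooling, and it validates only the overall shape and family-name constraints, not the platform and type vocabularies):

```python
import re

# Generic names explicitly disallowed by the naming standard.
GENERIC_NAMES = {"Agent", "Gen", "Heur"}

def valid_detection_string(detection):
    """Check the platform[-subplatform].type.familyname shape and the
    family-name rules: no generic names, at least three characters,
    not digits-only, and '-' only in CVE/CAN exploit names."""
    parts = detection.split(".")
    if len(parts) != 3:
        return False
    platform, mtype, family = parts
    if not platform or not mtype:
        return False
    if family in GENERIC_NAMES or len(family) < 3:
        return False
    if family.isdigit():                  # can't contain only numbers
        return False
    if "-" in family:                     # '-' only allowed in CVE/CAN names
        return re.fullmatch(r"(CVE|CAN)-\d{4}-\d+", family) is not None
    return family.isalnum()               # no other special characters

print(valid_detection_string("Win32.Trojan.Adams"))                  # True
print(valid_detection_string("Win32.Dropper.Agent"))                 # False
print(valid_detection_string("Document-PDF.Exploit.CVE-2012-0158"))  # True
```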
The `-` character is only allowed in exploit (CVE/CAN) names (for example CVE-2012-0158).

#### Examples

If a trojan is designed for the Windows 32-bit platform and has the family name "Adams", its detection string will look like this:

```
Win32.Trojan.Adams
```

If some backdoor malware is a PHP script with the family name "Jones", the detection string will look like this:

```
Script-PHP.Backdoor.Jones
```

Some potentially unwanted application designed for Android that has the family name "Smith" will have the following detection string:

```
Android.PUA.Smith
```

Some examples of detections with invalid family names are:

```
Win32.Dropper.Agent
ByteCode-MSIL.Keylogger.Heur
Script-JS.Hacktool.Gen
Android.Backdoor.12345
Document-PDF.Exploit.KO
Android.Spyware.1a
Android.Spyware.Not-a-CVE
Win32.Trojan.Blue_Banana
Win32.Ransomware.Hydra:Crypt
Win32.Ransomware.HDD#Cryptor
```

#### Platform string

The platform string indicates the operating system that the malware is designed for. The following table contains the available strings and the operating systems for which they are used.

| String | Short description |
| ----------- | ----------------- |
| ABAP | SAP / R3 Advanced Business Application Programming environment |
| Android | Applications for Android OS |
| AOL | America Online environment |
| Archive | Archives. See [Archive subplatforms](#archive-subplatforms) for more information. |
| Audio | Audio. See [Audio subplatforms](#audio-subplatforms) for more information. |
| BeOS | Executable content for Be Inc. operating system |
| Boot | Boot, MBR |
| Binary | Binary native type |
| ByteCode | ByteCode, platform-independent. See [ByteCode subplatforms](#bytecode-subplatforms) for more information. |
| Blackberry | Applications for Blackberry OS |
| Console | Executables or applications for old consoles (e.g. Nintendo, Amiga, ...) |
| Document | Documents. See [Document subplatforms](#document-subplatforms) for more information. |
| DOS | DOS, Windows 16 bit based OS |
| EPOC | Applications for EPOC mobile OS |
| Email | Emails. See [Email subplatforms](#email-subplatforms) for more information. |
| Firmware | BIOS, Embedded devices (mp3 players, ...) |
| FreeBSD | Executable content for 32-bit and 64-bit FreeBSD platforms |
| Image | Images. See [Image subplatforms](#image-subplatforms) for more information. |
| iOS | Applications for Apple iOS (iPod, iPhone, iPad…) |
| Linux | Executable content for 32 and 64-bit Linux operating systems |
| MacOS | Executable content for Apple Mac OS, OS X |
| Menuet | Executable content for Menuet OS |
| Novell | Executable content for Novell OS |
| OS2 | Executable content for IBM OS/2 |
| Package | Software packages. See [Package subplatforms](#package-subplatforms) for more information. |
| Palm | Applications for Palm mobile OS |
| Script | Scripts. See [Script subplatforms](#script-subplatforms) for more information. |
| Shortcut | Shortcuts |
| Solaris | Executable content for Solaris OS |
| SunOS | Executable content for SunOS platform |
| Symbian | Applications for Symbian OS |
| Text | Text native type |
| Unix | Executable content for the UNIX platform |
| Video | Videos |
| WebAssembly | Binary format for executable code in Web pages |
| Win32 | Executable content for 32-bit Windows OS's |
| Win64 | Executable content for 64-bit Windows OS's |
| WinCE | Executable content for Windows Embedded Compact OS |
| WinPhone | Applications for Windows Phone |

##### Archive subplatforms

| String | Short description |
| ------ | ----------------- |
| ACE | WinAce archives |
| AR | AR archives |
| ARJ | ARJ (Archived by Robert Jung) archives |
| BZIP2 | Bzip2 archives |
| CAB | Microsoft Cabinet archives |
| GZIP | GNU Zip archives |
| ISO | ISO image files |
| JAR | JAR (Java ARchive) archives |
| LZH | LZH archives |
| RAR | RAR (Roshal Archive) archives |
| 7ZIP | 7-Zip archives |
| SZDD | Microsoft SZDD archives |
| TAR | Tar (tarball) archives |
| XAR | XAR (eXtensible ARchive) archives |
| ZIP | ZIP archives |
| ZOO | ZOO archives |
| *Other Archive identification* | All other valid [Spectra Core](/General/AnalysisAndClassification/SpectraCoreAnalysis) identifications of Archive type |

##### Audio subplatforms

| String | Short description |
| ------ | ----------------- |
| WAV | Wave Audio File Format |
| *Other Audio identification* | All other valid Spectra Core identifications of Audio type |

##### ByteCode subplatforms

| String | Short description |
| ------ | ----------------- |
| JAVA | Java bytecode |
| MSIL | MSIL bytecode |
| SWF | Adobe Flash |

##### Document subplatforms

| String | Short description |
| ------ | ----------------- |
| Access | Microsoft Office Access |
| CHM | Compiled HTML |
| Cookie | Cookie files |
| Excel | Microsoft Office Excel |
| HTML | HTML documents |
| Multimedia | Multimedia containers that aren't covered by other platforms (e.g. ASF) |
| Office | File that affects multiple Office components |
| OLE | Microsoft Object Linking and Embedding |
| PDF | PDF documents |
| PowerPoint | Microsoft Office PowerPoint |
| Project | Microsoft Office Project |
| Publisher | Microsoft Office Publisher |
| RTF | RTF documents |
| Visio | Microsoft Office Visio |
| XML | XML and XML metafiles (ASX) |
| Word | Microsoft Office Word |
| *Other Document identification* | All other valid Spectra Core identifications of Document type |

##### Email subplatforms

| String | Short description |
| ------ | ----------------- |
| MIME | Multipurpose Internet Mail Extensions |
| MSG | Outlook MSG file format |

##### Image subplatforms

| String | Short description |
| ------ | ----------------- |
| ANI | File format used for animated mouse cursors on Microsoft Windows |
| BMP | Bitmap images |
| EMF | Enhanced Metafile images |
| EPS | Adobe Encapsulated PostScript images |
| GIF | Graphics Interchange Format |
| JPEG | JPEG images |
| OTF | OpenType Font |
| PNG | Portable Network Graphics |
| TIFF | Tagged Image File Format |
| TTF | Apple TrueType Font |
| WMF | Windows Metafile images |
| *Other Image identification* | All other valid Spectra Core identifications of Image type |

##### Package subplatforms

| String | Short description |
| ------ | ----------------- |
| NuGet | NuGet packages |
| DEB | Debian Linux DEB packages |
| RPM | Linux RPM packages |
| WindowStorePackage | Packages for distributing and installing Windows apps |
| *Other Package identification* | All other valid Spectra Core identifications of Package type |

##### Script subplatforms

| String | Short description |
| ------ | ----------------- |
| ActiveX | ActiveX scripts |
| AppleScript | AppleScript scripts |
| ASP | ASP scripts |
| AutoIt | AutoIt scripts (Windows) |
| AutoLISP | AutoCAD LISP scripts |
| BAT | Batch scripts |
| CGI | CGI scripts |
| CorelDraw | CorelDraw scripts |
| Ferite | Ferite scripts |
| INF | INF Script, Windows installer scripts |
| INI | INI configuration file |
| IRC | IRC, mIRC, pIRC/Pirch Script |
| JS | Javascript, JScript |
| KiXtart | KiXtart scripts |
| Logo | Logo scripts |
| Lua | Lua scripts |
| Macro | Macro (e.g. VBA, AmiPro macros, Lotus123 macros) |
| Makefile | Makefile configuration |
| Matlab | Matlab scripts |
| Perl | Perl scripts |
| PHP | PHP scripts |
| PowerShell | PowerShell scripts, Monad (MSH) |
| Python | Python scripts |
| Registry | Windows Registry scripts |
| Ruby | Ruby scripts |
| Shell | Shell scripts |
| Shockwave | Shockwave scripts |
| SQL | SQL scripts |
| SubtitleWorkshop | SubtitleWorkshop scripts |
| WinHelp | WinHelp Script |
| WScript | Windows Scripting Host related scripts (can be VBScript, JScript, …) |
| *Other Script identification* | All other valid Spectra Core identifications of Script type |

#### Type string

This string is used to describe the general type of malware. The following table contains the available strings and describes what each malware type is capable of.

For a catalog of common software weaknesses that enable malware, see [CWE](https://cwe.mitre.org/) maintained by MITRE. CISA maintains advisories on actively exploited vulnerabilities at [cisa.gov/known-exploited-vulnerabilities](https://www.cisa.gov/known-exploited-vulnerabilities).
| String | Description |
| ----------- | ----------- |
| Adware | Presents unwanted advertisements |
| Backdoor | Bypasses device security and allows remote access |
| Browser | Browser helper objects, toolbars, and malicious extensions |
| Certificate | Classification derived from certificate data |
| Coinminer | Uses system resources for cryptocurrency mining without the user's permission |
| Dialer | Applications used for war-dialing and calling premium numbers |
| Downloader | Downloads other malware or components |
| Dropper | Drops malicious artifacts including other malware |
| Exploit | Exploits for various vulnerabilities, CVE/CAN entries |
| Format | Malformations of the file format. Classification derived from graylisting, validators on unpackers |
| Hacktool | Software used in hacking attacks that might also have a legitimate use |
| Hyperlink | Classifications derived from extracted URLs |
| Infostealer | Steals personal info, passwords, etc. |
| Keylogger | Records keystrokes |
| Malware | New and recently discovered malware not yet named by the research community |
| Network | Networking utilities, such as tools for DoS, DDoS, etc. |
| Packed | Packed applications (UPX, PECompact…) |
| Phishing | Email messages (or documents) that disguise themselves as coming from a trustworthy entity, created with the aim of misleading the victim into opening malicious links, disclosing personal information, or opening malicious files. |
| PUA | Potentially unwanted applications (hoax, joke, misleading...) |
| Ransomware | Malware which encrypts files and demands money for decryption |
| Rogue | Fraudulent AV installs and scareware |
| Rootkit | Provides undetectable administrator access to a computer or a mobile device |
| Spam | Other junk mail that does not unambiguously fall into the Phishing category, but contains unwanted or illegal content. |
| Spyware | Collects personal information and spies on users |
| Trojan | Allows remote access, hides in legit applications |
| Virus | Self-replicating file/disk/USB infectors |
| Worm | Self-propagating malware with exploit payloads |

---

## Risk score reference table

---

## SpectraDetect Appendix — Reference Materials and Open Source Licenses

---

## SpectraDetect Analysis Input — Connector Configuration for Email and S3

**Connectors** allow users to automatically retrieve a large number of files from external sources such as email or S3 buckets. Users can configure this service from the Appliance status page. Select a connected Hub, then select the **Connectors** button.

**Note: The Hub must belong to a Hub group with at least one Worker.**

If a connector is disabled or if it has not been previously configured on the appliance, the dialog contains only the **Enable connector** button. Click the button to start configuring the connector.

Never use the same folder for both input and output of files.

## Starting the Connector

When the configuration is complete, click **Start connector** at the bottom of the page. This will initiate the connector service on the appliance. After starting, the connector service connects to the configured user account, automatically retrieves files from it, and submits them for analysis on the appliance.

Each file retrieved via the connector has a set of User Tags automatically assigned to it. Those User Tags are based on the file metadata, and can contain information about the file source, the last modification time in the original location, file permissions, email subject, recipient and sender addresses, and more.

If advanced options are not enabled, the connector service will not perform any additional actions on the files retrieved from the server after the appliance finishes analyzing them. Users can see the analysis results for each file on its Sample Details page.
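As a hedged illustration of the kind of metadata-derived User Tags described above (this is not the connector's actual implementation; the tag names and formats here are invented for the example):

```python
import os
import stat
import time

def metadata_tags(path, source="connector-example"):
    """Build example tags from local file metadata: source, last
    modification date, and permissions. Hypothetical formats only."""
    info = os.stat(path)
    return [
        f"source:{source}",
        f"modified:{time.strftime('%Y-%m-%d', time.gmtime(info.st_mtime))}",
        f"permissions:{stat.filemode(info.st_mode)}",
    ]

print(metadata_tags(__file__))
```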
After providing the required information, click **Test connection** to verify that the appliance can access the configured service storage. When the button is clicked, the appliance attempts to connect and mount the service storage.

To remove all configured settings for the current service storage, click **Remove item**. To add another service storage, click **Add item**. Up to five units of storage can be added this way. If there are already five units of storage connected to the appliance, at least one must be removed by clicking **Remove item** before adding a new one. Note that for the S3 connector, this limit is 20.

## Pausing and Disabling the Connector

While the connector service is active, the *Start connector* button changes into **Pause connector**. Clicking this button temporarily halts the connector service. The connector service records the last state and is able to resume scanning when **Start connector** is clicked again.

While the connector is running, it is possible to modify its configuration and save it by clicking **Save changes** without having to pause or disable the connector.

If the connector service is active during a scheduled or manually executed Purge action, the system will automatically stop the service before performing the Purge action, and start it after the Purge action is complete.

To disable the entire connector service on the appliance, click **Disable connector** at the bottom of the page. When the connector is disabled, it will not be possible to reconfigure, start, or pause it until the service is enabled again. Note that the current connector configuration will be preserved and restored when the service is enabled again. Likewise, all files that have been retrieved and analyzed by Spectra Analyze will remain on the appliance.

All files retrieved from the server and analyzed on the appliance are accessible to Spectra Analyze users from the Submissions page. They are distinguished from other files by a unique username specific to each connector.

Spectra Detect Workers follow the default data retention policy: all processed files are deleted immediately after processing. If a file is queued but not processed within 9 hours, the processing task is canceled (and the file deleted), but the record of the unsuccessful task remains in the database for 24 hours. All file processing results are retained until deleted by the user, or for 9 hours after processing, whichever comes first.

## IMAP - MS Exchange - AbuseBox Connector

The IMAP - MS Exchange AbuseBox connector allows connecting to a Microsoft Exchange server and analyzing retrieved emails on the appliance.

### Requirements

- IMAP must be enabled on the Exchange server.
- A new user account must be configured on the mail server and its credentials provided to the connector in the configuration dialog.
- A dedicated email folder must be created in the Exchange user account, and its name provided to the connector in the configuration dialog. All emails forwarded to that folder are collected by the connector and automatically sent to the appliance for analysis.

When the analysis is complete, email samples with detected threats will get classified as malicious and, if the automatic message filing option is enabled, moved to the specified *Malware* folder. Emails with no detected malicious content do not get classified. They can optionally be moved to the specified *Unknown* folder on the configured Exchange user account.

To improve performance and minimize processing delays on Spectra Analyze, each email sample is analyzed and classified only once. When the *Automatic message filing* option is enabled, each email sample is moved only once, based on its first available classification.
Because of that, it is recommended to enable classification propagation and allow retrieving [Spectra Intelligence](/SpectraIntelligence/) classification information during sample analysis instead of after. Administrators can enable these two options in the **Administration ‣ Configuration ‣ Spectra Detect Processing Settings** dialog. This will improve classification of emails with malicious attachments. Workers do this by default and no configuration is necessary. The connector can be configured to automatically sort emails after analysis into user-defined email folders in the configured Exchange user account. ### Configuring the Exchange user account To configure the connection with the Exchange user account: - make sure the connector is enabled - fill in the fields in the *Exchange setup* section of the Email AbuseBox Connector dialog. | | | | | ---------------- | --------- | ------------------------------------------------------------ | | Server domain | Mandatory | Enter the domain or IP address of the Exchange server. The value should be FQDN, hostname or IP. This should not include the protocol (e.g., http) | | Email folder | Mandatory | Enter the name of the email folder from which the email messages will be collected for analysis. This folder must belong to the same Exchange user account for which the credentials are configured in this section. The folder name is case-sensitive. | | Connection Type | Mandatory | Supports IMAP (Basic Authentication) and Exchange (OAuth 2) methods of authentication. Depending on the selection, the next section of the form will ask for different user credentials: Basic Authentication asks for a username and password, OAuth 2 asks for a Client ID, Client Secret and Tenant ID. | | Email address | Mandatory | Enter the primary email address of the configured Exchange user account. | | Access Type | Mandatory | Delegate is used in environments where there’s a one-to-one relationship between users. 
Impersonation is used in environments where a single account needs to access many accounts. | | Connect securely | Optional | If selected, the connector will not accept connections to Exchange servers with untrusted or expired certificates. | Note that the connector will operate and analyze emails even if these advanced options are disabled. They only control the post-analysis activities. ## Network File Share Connector The Network File Share connector allows connecting up to five shared network resources to the appliance. Once the network shares are connected and mounted to the appliance, it can automatically scan the network shares and submit files for analysis. After analyzing the files, the appliance can optionally sort the files into folders on the network share based on their classification status. The Network File Share connector supports SMB and NFS file sharing protocols. Currently, it is not possible to assign a custom name to each network share. The only way to distinguish between configured network shares is to look at their addresses. If there are 3 configured network shares, and the network share 2 is removed, the previous network share 3 will automatically "move up" in the list and become network share 2. ### Configuring Network Shares To add a new network share to the appliance, expand the *Shares* section in the Network File Share Connector dialog and fill in the relevant fields. | | | | | ------------ | ------------------ | ------------------------------------------------------------ | | Address | Mandatory | Enter the address of the shared network resource that will be mounted to the appliance. The address must include the protocol (SMB or NFS). Leading slashes are not required for NFS shares (example: *nfs:storage.example.lan*). The address can point to the entire network drive, or to a specific folder (example: *smb://storage.example.lan/samples/collection*). 
When the input folder and/or sorting folders are configured, their paths are treated as relative to the address configured here. **Note:** If the address contains special characters, it may not be possible to mount the share to the appliance. The comma character cannot be used in the address for SMB shares. Some combinations of `?` and `#` will result in errors when attempting to mount both SMB and NFS shares. | | Username | Optional, SMB only | Enter the username for authenticating to the SMB network share (if required). Usernames and passwords for SMB authentication can only use a limited range of characters (ASCII-printable characters excluding the comma). | | Password | Optional, SMB only | Enter the password for authenticating to the SMB network share (if required). Usernames and passwords for SMB authentication can only use a limited range of characters (ASCII-printable characters excluding the comma). | | Input folder | Optional | Specify the path to the folder on the network share containing the files to be analyzed by the appliance. The folder must exist on the network share. The path specified here is relative to the root (address of the network file share). If the input folder is not configured, the root is treated as the input folder. | ### Folder Naming Restrictions Folder names used in the **Address**, **Input folder** and **Automatic File Sorting** folder fields (described below) can include: - Alphanumeric characters (`A–Z`, `a–z`, `0–9`) - Spaces - The following special characters: `_`, `-`, `.`, `(`, `)` To ensure maximum compatibility across network file systems, avoid using special characters beyond underscore and hyphen. The following characters are **not allowed**: `/`, `\`, `:`, `*`, `?`, `"`, `<`, `>`, `|`, and the null byte (`\0`). When specifying subfolders in the **Input folder** or within the **Address**, each folder name must conform to these rules.
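The naming restrictions above can be expressed as a simple allow-list check. The following sketch is illustrative only (the function name and regex are not part of the product); it validates one path component at a time, so split subfolder paths on `/` before checking each part:

```python
import re

# Allowed characters per the Folder Naming Restrictions above:
# alphanumerics, spaces, underscore, hyphen, period, and parentheses.
# Everything else (including / \ : * ? " < > | and the null byte)
# is rejected implicitly by the allow-list.
_ALLOWED_NAME = re.compile(r"^[A-Za-z0-9 _\-.()]+$")

def is_valid_folder_name(name: str) -> bool:
    """Check a single folder name against the documented naming rules."""
    return bool(name) and _ALLOWED_NAME.match(name) is not None
```

Using an allow-list rather than a block-list mirrors the documentation's structure: the explicitly forbidden characters never need to be enumerated in code.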
--- The service will continually scan the network shares for new files (approximately every 5 minutes). If any of the existing files on the network share has changed since the last scan, it will be treated as a new file and analyzed again. ## Microsoft Cloud Storage: Azure Data Lake The Azure Data Lake connector allows connecting up to 20 Azure Data Lake Gen2 containers to the appliance. When the containers are connected and mounted to the appliance, it can automatically scan them and submit files for analysis. The files can be placed into the root of each container, or into an optional folder in each of the containers. **Important: This connector is not compatible with containers that have the Blob Soft Delete feature enabled.** After analyzing the files, the appliance can optionally sort the files into folders on the container based on their classification status. Currently, it is not possible to assign a custom name to each data lake input. The only way to distinguish between configured containers is to look at their names. If there are three configured data lake inputs, and input 2 is removed, the previous input 3 will automatically "move up" in the list and become input 2. ### Configuring Azure Data Lake containers To add a new Azure Data Lake container: - make sure the connector is enabled - expand the *Azure Data Lake Inputs* section in the Azure data lake dialog and fill in the relevant fields. | | | | | -------------------- | --------- | ------------------------------------------------------------ | | Storage account name | Mandatory | The name of the storage account. | | Storage access key | Mandatory | The access key used for Shared Key Authentication. This value should end in `==` | | Container | Mandatory | Specify the name of an existing Azure Data Lake container which contains the samples to process. The value must start and end with a letter or number and must contain only letters, numbers, and the dash (-) character. 
Consecutive dashes are not permitted. All letters must be lowercase. The value must have between 3 and 63 characters. | | Folder | Optional | The input folder inside the specified container which contains the samples to process. All other samples will be ignored. | ## Microsoft Cloud Storage: OneDrive / SharePoint Online The Microsoft Cloud Storage connector allows connecting up to five OneDrive or SharePoint storages to the appliance. When the storages are connected and mounted to the appliance, it can automatically scan them and submit files for analysis. The files can be placed into the root of each storage, or into an optional subfolder. After analyzing the files, the appliance can optionally sort the files into folders based on their classification status. ### Configuring File Storage Inputs To add a new File Storage Input: - make sure the connector is enabled - expand the *File Storage Inputs* section in the Microsoft Cloud Storage dialog and fill in the relevant fields. | | | | --------------- | ------------------------------------------------------------ | | Host | Host of the OAuth2 authentication server. | | Client ID | Identifier value of the OAuth2 client. | | Client Secret | The value used by the service to authenticate to the authorization server. | | Scopes | When the user is signed in, these values dictate the type of permissions the Microsoft Cloud Storage connector needs in order to function. Provide one or more OAuth2.0 scopes that should be requested during login. These values should be separated by commas. | | Auth URL | Authentication endpoint for OAuth2. | | Token URL | Endpoint for retrieving the OAuth2 access token after authorization. | | Resource | The server that hosts the needed resources. | | Source | A choice between **OneDrive** and **Online Sharepoint**. | | Drive ID | Identifier value for the drive when **OneDrive** source option is chosen. | | Sharepoint Site | A dropdown which appears when the **Online Sharepoint** source option is chosen.
| | Folder | The input folder which contains samples to be processed, while all other samples will be ignored. | ## AWS S3 Connector The S3 connector allows connecting up to 20 S3 buckets to the appliance. When the buckets are connected and mounted to the appliance, it can automatically scan the buckets and submit files for analysis. The files can be placed into the root of each bucket, or into an optional folder in each of the buckets. After analyzing the files, the appliance can optionally sort the files into folders on the S3 bucket based on their classification status. Currently, it is not possible to assign a custom name to each S3 file storage input. The only way to distinguish between configured buckets is to look at their names. If there are 3 configured S3 file storage inputs, and input 2 is removed, the previous input 3 will automatically "move up" in the list and become input 2. To ensure that all files in AWS S3 buckets can be reprocessed, users can use the **Clear All Processed Files** button. This option resets the connector’s tracking of previously processed files. As a result, all files in the buckets will be treated as new and reprocessed when the connector is restarted. No files in the buckets are deleted. ### Configuring S3 Buckets To add a new S3 bucket: - make sure the connector is enabled - expand the *S3 File Storage Inputs* section in the S3 dialog and fill in the relevant fields. | | | | | --------------------------- | --------------------------------------------------- | ------------------------------------------------------------ | | AWS S3 access key ID | Mandatory | The access key ID for AWS S3 account authentication. In cases where the appliance is hosted by ReversingLabs and Role ARN is used, this value will be provided by ReversingLabs. | | AWS S3 secret access key | Mandatory | The secret access key for AWS S3 account authentication. 
In cases where the appliance is hosted by ReversingLabs and Role ARN is used, this value will be provided by ReversingLabs. | | AWS S3 region | Mandatory | Specify the correct AWS geographical region where the S3 bucket is located. This parameter is ignored for non-AWS setups. | | Enable Role ARN | Optional | Enables or disables authentication using an external AWS role. This allows the customers to use the connector without forwarding their access keys between services. The IAM role which will be used to obtain temporary tokens has to be created for the connector in the AWS console. These temporary tokens allow ingesting files from S3 buckets without using the customer secret access key. If enabled, it will expose more configuration options below. | | Role ARN | Mandatory and visible only if `Role ARN` is enabled | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows the appliance to obtain a temporary token, which then allows it to connect to S3 buckets without using the customer secret access key. | | External ID | Optional, visible only if `Role ARN` is enabled | The external ID of the role that will be assumed. Usually, it’s an ID provided by the entity which uses (but doesn’t own) an S3 bucket. The owner of that bucket takes the external ID and creates an ARN with it. In non-production or test environments, you can enter a placeholder value for the External ID if your use case doesn't require a real one. This is useful when you do not want to enforce the External ID requirement while testing configurations. However, it is strongly recommended to use a valid External ID in production environments to maintain security. | | Role session name | Mandatory and visible only if `Role ARN` is enabled | Name of the session visible in AWS logs. Can be any string. | | ARN token duration | Mandatory and visible only if `Role ARN` is enabled | How long before the authentication token expires and is refreshed. 
The minimum value is 900 seconds. | | AWS S3 bucket | Mandatory | Specify the name of an existing S3 bucket which contains the samples to process. The bucket name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes. Each label in the bucket name must start with a lowercase letter or number. The bucket name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods. The bucket name cannot be formatted as an IP address. | | Processing Priority | Mandatory | Assign a priority for processing files from an S3 bucket on a scale of 1 (highest) to 5 (lowest). Multiple buckets may share the same priority. The default value is 5. | | AWS S3 folder | Optional | The input folder inside the specified bucket which contains the samples to process. All other samples will be ignored. The folder name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes. Each label in the folder name must start with a lowercase letter or number. The folder name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods. The folder name cannot be formatted as an IP address. If the folder is not configured, the root of the bucket is treated as the input folder. | | S3 endpoint URL | Optional | Enter a custom S3 endpoint URL. Specifying the protocol is optional. Leave empty if using standard AWS S3. | | Server Side Encryption Type | Optional | Specify the server-side encryption method managed by AWS for your S3 bucket. You can choose either “AES256” to enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) or “aws:kms” to enable Server-Side Encryption with AWS Key Management Service (SSE-KMS). This setting should be left blank unless your bucket policy requires SSE headers to be sent to S3. 
It is mutually exclusive with the Customer Encryption settings, meaning you should not configure this option alongside Customer Encryption Algorithm or Customer Encryption Key. | | Customer Encryption Algorithm | Optional | Defines the encryption algorithm used when you provide your own encryption keys. The only valid value for this field is “AES256”. It must be used in conjunction with the Customer Encryption Key field and cannot be used simultaneously with the Server Side Encryption Type. This option is intended for users who prefer to manage their own encryption keys rather than relying on AWS-managed keys. | | Customer Encryption Key | Optional | Provide a customer-managed encryption key for encrypting and decrypting objects in your S3 bucket. The key must be a valid Base64-encoded AES256 key. It must be used together with the Customer Encryption Algorithm field and is mutually exclusive with the Server Side Encryption Type. | | Connect securely | Optional | If selected, the connector will not accept connections to S3 buckets with untrusted or expired certificates. This setting only applies when a custom S3 endpoint is used. | | Enable Selection Criteria Using Metadata | Optional | If selected, the connector will fetch and process samples that either match the specified metadata or have no associated metadata. This option can be used to target specific samples for further ingestion and processing via Spectra Analyze. | | Classification | Optional | Specify classifications to ensure that only samples matching the selected classification criteria, or samples that have no associated metadata are considered for processing. The available classifications are `Unknown`, `Goodware`, `Suspicious`, `Malicious`. | | Threat Name | Optional | Specify only the samples containing any of the specified threat names in their metadata for further processing. Multiple threat names can be specified using the enter or tab key. 
| ## SMTP Connector The SMTP connector allows analyzing incoming email traffic on the appliance to protect users from malicious content. When enabled, the connector service collects emails (with attachments) and uploads them to the appliance for analysis. Each email message is saved as one file. If email uploading fails for any reason, the connector automatically retries the upload. When the analysis is complete, each email message receives a classification status from the appliance. In this operating mode, the connector acts as an SMTP Relay. Therefore, the connector should not be used as a front-end service for accepting raw email traffic, but only as a system inside an already established secure processing pipeline for SMTP email. To allow the SMTP connector to inspect and collect email traffic, users must ensure that the SMTP traffic in their network is diverted to port 25/TCP prior to configuring the connector on the appliance. **Warning: Additional port configuration may be required on the appliance. Because it involves manually modifying configuration files, this action can cause the appliance to malfunction. Contact [ReversingLabs Support](mailto:support@reversinglabs.com) for instructions and guidance.** **Profiles** There are two profiles for this connector: *Default* and *Strict*. These two profiles correspond to different Postfix configuration files. Clicking the underlined **See How It Affects Postfix Config** text will display a pop-up modal with the detailed Postfix configuration for the *Default* and *Strict* profiles. In the *Default* profile, you don’t enforce TLS and you accept any SMTP client.
This corresponds to the following Postfix configuration: ``` mynetworks = 0.0.0.0/0 [::]/0 smtpd_tls_security_level = may smtp_tls_security_level = may ``` In the *Strict* profile, you do enforce TLS and you can also specify trusted SMTP clients (the `mynetworks` line in the example below; see [Postfix docs](https://www.postfix.org/postconf.5.html#mynetworks) for the specific syntax). The relevant portion of the configuration looks like this in *Strict* mode: ``` mynetworks = 0.0.0.0/0 [::]/0 smtpd_tls_security_level = encrypt smtp_tls_security_level = encrypt smtpd_tls_mandatory_ciphers = high smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1 smtpd_tls_mandatory_exclude_ciphers = aNULL, MD5 ``` **Starting the connector** After the connector is enabled, click the **Start connector** button. This will initiate the connector service on the appliance. **Pausing and disabling the connector** While the connector service is active, the *Start connector* button changes into **Pause connector**. Clicking this button temporarily halts the connector service, which in turn stops receiving and analyzing new email traffic. The connector service records the last state and is able to resume scanning when **Start connector** is clicked again. If the connector service is active during a scheduled or manually executed Purge action, the system will automatically stop the service before performing the Purge action, and start it after the Purge action is complete. To disable the entire connector service on the appliance, click **Disable connector** at the bottom of the page. When the connector is disabled, it will not be possible to reconfigure, start, or pause it until the service is enabled again. ### Use Case: Accept Email from Any Email Client or Server To configure the service to accept email from any email client or server, follow these steps: 1. **Enable the SMTP connector** 2.
**Contact [ReversingLabs Support](mailto:support@reversinglabs.com)** with: - Your intended email forwarding address range - A formal request to enable email forwarding 3. After receiving your request, ReversingLabs will: - Create an MX DNS record for your hosted service - Configure receiver restrictions to accept email only from your domain 4. Once ReversingLabs has added the MX record to the DNS for your hosted service, **create email forwarding rules** in your email system to automatically forward emails to the hosted service for processing. ## Citrix ShareFile Connector The Citrix ShareFile connector allows integration with ShareFile to scan and classify files stored on the platform. It supports advanced options for sorting files based on classification (`Goodware`, `Malware`, `Suspicious`, `Unknown`) into designated folders on ShareFile. Additionally, users can enable automatic deletion of source files post-analysis. ### Configuring Citrix ShareFile To add a new Citrix ShareFile input: - make sure the connector is enabled. - expand the *Citrix ShareFile Inputs* section in the Citrix ShareFile dialog and fill in the relevant fields. | Input | Description | |---------------|---------------------------------------------------------------------------------------------------------------------------------------------| | Hostname | The hostname used to access ShareFile servers. Usually `.sharefile.com`. | | Client ID | Identifier value of the OAuth2 client. | | Client Secret | The value used by the service to authenticate to the authorization server. | | Username | The username for authenticating to the ShareFile servers. | | Password | The password for authenticating to the ShareFile servers. | | Root Folder | Root folder used for scanning, needs to be defined as GUID/Item Id in format `foxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`, and available on Citrix ShareFile. 
If left empty, the connector will scan all the shared folders assigned to the user account, including the account's home folder, if it has one. | ## ICAP Server **The ICAP Connector acts as an ICAP server** and processes files sent through ICAP-enabled systems. It applies rules based on classifications, file size, and processing time, ensuring files meet the specified criteria before being analyzed by Spectra Detect. ### Supported Topologies The ICAP Server Connector is designed to integrate seamlessly in any ICAP deployment model, including but not limited to: - Application Delivery Controllers (ADCs) - Forward Proxy - Ingress Controller - Intrusion Prevention System - Load Balancer - Managed File Transfer - Next-Generation Firewall - Protecting Enterprise Storage - Reverse Proxies - Secure Remote Access - SSL Inspection and Termination - Web Application Firewall (WAF) - Web Gateway - Web Traffic Security ### Configuration Guides - [Spectra Detect integration with Kiteworks](/Integrations/ICAP/kiteworks/). ### Request and Response Modes The ICAP Server Connector supports two workflows: request mode (REQMOD) and response mode (RESPMOD). Configure your ICAP client to use one mode depending on your requirements: #### Request Mode (REQMOD) - Processes outgoing client requests before they reach the destination server. - Example: Validating uploaded files to ensure they meet security requirements. #### Response Mode (RESPMOD) - Processes server responses before they are delivered to the client. - Example: Scanning downloaded files for malware or sanitizing content before delivery. ### ICAP Response Codes The table below lists the ICAP response codes and their meanings: | ICAP Response Code | Description | |--------------------|-------------| | **100 Continue** | Signals the client to continue sending data to the ICAP server after the ICAP Preview message. | | **200 OK** | The request was successfully processed. This status is returned whether the file is blocked or not. 
If the file was blocked, the `X-Blocked` header will be set to `True`. In RESPMOD, a blocked file will result in a 403 HTTP status code. | | **204 No modification needed** | Returned when the file was not blocked and the client sent an `Allow: 204` header in the request. | | **400 Bad request** | Indicates a failure during file submission, analysis processing, or delivering results back to the client. | | **404 Not found** | The ICAP service could not be found. | | **405 Method Not Allowed** | The requested method is not supported on the endpoint. | | **500 Internal Server Error** | A generic server-side error occurred. | ### Configuring ICAP To configure the ICAP Server connector, ensure that it is enabled and adjust the settings in the ICAP Server configuration section. | | | | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Max File Size | Specify the maximum file size (in megabytes) that the ICAP Connector will process. Files exceeding this size will not be analyzed. Default: `0 (unlimited)`. | | Allow Classifications | Select which classifications to allow. Available options: `goodware`, `unknown`, `suspicious`, `malicious`. Files not matching the selected classifications will be blocked. | | Service Alias | The base service name is `spectraconnector` and cannot be changed. Here you may define any additional aliases your ICAP client can use to connect. Do not re-enter `spectraconnector` here. | | Timeout | Set the timeout period (in seconds) for processing requests. If a file is not processed within this time, the request will be terminated. 
Default: `300`. | | REQMOD Block Page URL | Enter the full URL your ICAP clients will fetch when a request is blocked. **To use the default block page, set it to `https://{RL_APPLIANCE_IP}/icap-block-page` and replace `{RL_APPLIANCE_IP}` with your appliance’s actual IP or hostname.** If you host a custom block page, enter its URL here and ensure it's accessible to ICAP clients. **Note:** In some deployments, `RL_APPLIANCE_IP` may refer to a virtual IP or a load-balancer IP. | | RESPMOD Block Page | Upload a custom file that will replace the blocked HTTP response content. This page will be served to the client instead of the original response. **Maximum file size:** 0.5 MB. | | Use TLS | Select the checkbox to use a secure connection (TLS v1.3) when communicating with the ICAP server. | | ICAP server listen port | Specify the network port on which the ICAP server listens for incoming requests. The default is `11344` when TLS is enabled, and `1344` when TLS is disabled. | | Scan Raw Data | Enable this to extract the raw HTTP message body and send it to the ReversingLabs analysis engine unmodified. | ## SFTP Connector The SFTP Connector allows the appliance to automatically retrieve files from a remote SFTP server for analysis. Users can configure authentication using either a password or a public key. Once started, the connector continuously scans the specified folder and submits new files for analysis. Classification results can be viewed on the appliance after processing. | | | |---------------------|---------------------------------------------------------------------------------------------------| | Host/Server Address | Hostname or IP address of the target SFTP server. | | Port | Port used for the SFTP connection. Defaults to `22`. | | Username | Account name used to authenticate with the SFTP server. | | Authentication Type | Method used to authenticate with the SFTP server. Supported methods: `Password` or `Public Key`.
| | Password | Password associated with the specified username. | | Input Folder | Path to remote directory from which files will be retrieved. Example: `/incoming/data` | **Tip:** Key pairs can also be created using APIs. Refer to the API documentation on the appliance, specifically the **Appliances > Generate key pair** and **Appliances > Download public key** endpoints. ## Using Advanced Connector Options In addition to the main connector options for every connector service, users can set advanced options. Advanced options for a connector refer to actions that the connector service can perform on files after the appliance finishes analyzing them. Specifically, the connector can be configured to automatically sort files into user-defined sorting folders on the connector user account. Files are sorted into folders based on the classification status they receive during analysis (malicious, suspicious, known, no threats found). For Azure Data Lake, Network File Share, and S3 Connectors, advanced options can be configured for every storage unit individually. This means that the sorting criteria, folder names, and folder paths can be different on each configured storage unit. ### IMAP - MS Exchange - AbuseBox Connector | | | | ------------------------------- | ------------------------------------------------------------ | | Enable automatic message filing | Selecting the checkbox will allow the connector to move analyzed emails and sort them into email folders in the configured Exchange email user account. This checkbox toggles the availability of other options in the Advanced Options section. | | Malware folder | Specify the name of the email folder into which the connector will store emails classified as "Malicious" (malware). This folder will be created if it doesn’t exist. This field is mandatory when *Enable automatic message filing* is selected.
| | Unknown folder | Specify the name of the email folder into which the connector will store emails with no malicious content detected (classified as Known, or not classified at all, i.e. Unknown). This folder will be created if it doesn’t exist. This field is mandatory when *Enable automatic message filing* is selected. | | Allow suspicious | When selected, emails classified as "Suspicious" will be moved to the configured Unknown folder. If this checkbox is not selected, files classified as "Suspicious" will by default be sorted into the configured Malware folder. | ### Azure Data Lake, Microsoft Cloud Storage, S3 Connectors | | | | ------------------------------------------------ | ------------------------------------------------------------ | | Enable Same Hash Rescan | Selecting the checkbox will force the connector to rescan samples that share the same hash. **This checkbox can only be enabled for the S3 connector.** | | Delete source files | Selecting the checkbox will allow the connector to delete source files on the connector storage after they have been processed. | | Enable automatic file sorting | Selecting the checkbox will allow the connector to store analyzed files and sort them into folders based on their classification. Usually, the connector skips already uploaded files. If this option is enabled and some files have already been uploaded, they will be uploaded to the Worker again. | | Sort Malware detected by Microsoft Cloud Storage | If enabled, the samples which are identified as Malware by Microsoft Cloud Storage will be moved to the Malware folder. These samples are not processed by Spectra Detect. **This checkbox can only be enabled for Microsoft Cloud Storage connector.** | | Goodware folder | Specify the path to the folder into which the connector will store files classified as "Known" (goodware). This field is mandatory when *Enable automatic file sorting* is selected.
The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the container, it will be automatically created after saving the configuration. | | Malware folder | Specify the path to the folder into which the connector will store files classified as "Malicious" (malware). This field is mandatory when *Enable automatic file sorting* is selected. The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the container, it will be automatically created after saving the configuration. | | Unknown folder | Specify the path to the folder into which the connector will store files without classification ("Unknown" status). The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the container, it will be automatically created after saving the configuration. Files stored in the Unknown folder are regularly rescanned.| | Suspicious folder | Specify the path to the folder into which the connector will store files classified as "Suspicious". The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the container, it will be automatically created after saving the configuration. | ### Network File Share Connector | | | | ----------------------------- | ------------------------------------------------------------ | | Delete source files | Selecting the checkbox will allow the connector to delete source files on the network share after they have been processed. | | Enable automatic file sorting | Selecting the checkbox will allow the connector to store analyzed files and sort them into folders on the network share based on their classification status. This checkbox toggles the availability of other options in the Advanced Options section.
| |Rescan Unknowns|If enabled, the connector rescans samples previously classified as unknown in intervals defined by the Rescan Unknowns Interval value (in days). If Allow Unknown is enabled, unknown files will be stored in the Goodware folder, and will not be rescanned.| | Goodware folder | Specify the path to the folder into which the connector will store files classified as "Known" (goodware). This field is mandatory when *Enable automatic file sorting* is selected. The path specified here is relative to the address of the network file share. If the folder doesn’t already exist on the network share, it will be automatically created after saving the configuration. | | Malware folder | Specify the path to the folder into which the connector will store files classified as "Malicious" (malware). This field is mandatory when *Enable automatic file sorting* is selected. The path specified here is relative to the address of the network file share. If the folder doesn’t already exist on the network share, it will be automatically created after saving the configuration. | | Unknown folder | Specify the path to the folder into which the connector will store files without classification ("Unknown" status). The path specified here is relative to the address of the network file share. If this field is left empty, unknown files will be stored either to the Goodware or to the Malware folder, depending on the "Allow unknown" setting. | | Known threshold | Files classified as "Known" (goodware) with the trust factor value higher than the one configured here will be stored into the configured Malware folder. "Known" files with the trust factor less than or equal to the value configured here will be stored into the configured Goodware folder. Supported values are 0 to 5. Default is 5 (saves all to the Goodware folder). This field is mandatory when *Enable automatic file sorting* is selected.
| | Allow unknown | When selected, files with the "Unknown" classification status are stored into the configured Goodware folder. If this checkbox is not selected, files with the "Unknown" status are stored either into the Unknown folder (if the Unknown folder is configured), or into the Malware folder (if the Unknown folder is not configured). | | Allow suspicious | When selected, files classified as "Suspicious" will be stored into the configured Goodware folder. If this checkbox is not selected, files classified as "Suspicious" will be stored into the configured Malware folder. | ## Global Configuration In addition to every connector service having specific configuration settings, there is a **Global Configuration** section at the bottom of every connector page. These settings apply to all configured connectors. | | | | -------------------------------------------------------- | ------------------------------------------------------------ | | Save files that had encountered errors during processing | Original files that were not successfully uploaded will be saved to `/data/connectors/connector-[CONNECTOR_SHORTNAME]/error_files/` | | Max upload retries | The number of times the connector will attempt to upload a file to the processing appliance. Upon reaching the retry limit, the file will be saved in the `error_files/` destination or discarded | | Max upload timeout | The period (in seconds) between upload attempts when a sample is re-uploaded. | | Upload algorithm | The algorithm used for managing delays between attempts to re-upload samples. With **Exponential backoff**, the delay is calculated by multiplying the *Max upload timeout* parameter by 2 on each attempt, until reaching the maximum value of 5 minutes. **Linear backoff** always uses the *Max upload timeout* value as the timeout period between re-uploads. | | Max upload delay | If the Worker cluster is under high load, this parameter is used to delay any new upload to the cluster.
The delay parameter will be multiplied by an internal factor determined by the load on the appliance. If set to 0, the delay will be disabled. | | Database cleanup period | Specifies the number of days for which the data will be preserved. | | Database cleanup interval | Specifies the interval (in seconds) at which the database cleanup will be performed. | | Max File Size (MB) | The maximum file size the connector will transfer to the appliance for analysis. Setting it to 0 disables the option. (Available for AWS S3 and Azure Data Lake Connectors) | ## Unique Usernames | Connector | Unique Username | | ---------------------------- | ------------------------- | | Email AbuseBox Connector | abusebox_connector | | Azure Data Lake Connector | azure-data-lake_connector | | Network File Share Connector | fileshare_connector | | Microsoft Cloud Storage | graph-api-connector | | S3 Connector | s3_connector | | SMTP Connector | smtp_connector | | SFTP Connector | sftp_connector | --- ## Spectra Detect Appliance Configuration — Central Configuration and Hub Groups Spectra Detect Manager allows users to modify configuration settings on Spectra Detect appliances directly from the Manager interface. The Central Configuration feature makes it easier to configure appliances remotely, and to ensure that the settings are consistent and correct across multiple appliances. The appliances must first be connected and authorized on the Manager instance. **Note:** Spectra Analyze appliances can be managed using the Spectra Detect Manager APIs. To start working with this feature, access the Central Configuration page by selecting **Central Configuration** in the upper right corner of the Manager interface. The Central Configuration page contains different configuration modules that users can enable. Different combinations of modules and their configuration values can be saved as **configuration groups** or **Hub groups**.
For example, users can create a configuration group for Worker appliances that should be connected to [Spectra Intelligence](/SpectraIntelligence/), and a Hub group for Hub and Worker appliances that should be connected to T1000. In addition to options described below, appliance groups containing a Hub instance provide more configuration options (such as Connector services) if they are configured as a [Spectra Detect Hub group](./Redundancy.md#hub-groups) rather than a regular configuration group. When appliances are added to a group, all settings configured in the modules are applied to them, overwriting their previous settings. Generally, the Central Configuration workflow includes the following steps: 1. Create a configuration group or edit an existing one. 2. Select which appliances should be in the group. 3. Modify settings in configuration modules. 4. Save changes 5. Apply modified settings to all appliances in the group. ## Central Configuration Page The Central Configuration page contains the *Select configuration group* pull-down list at the top, allowing users to switch between existing groups. There are also buttons for creating a new group and deleting the currently selected group. If there are no configuration groups created on the Manager instance, the `default` group is displayed on the Central Configuration page. Users can manage appliances and modify settings in the default group, or create their own groups. All configuration modules supported by the Manager are listed in the sidebar on the left. Selecting a module opens its configuration dialog on the right side of the page. ![Central Configuration page with configuration group selection and modules sidebar](../images/central-configuration.png) If the selected group is a [Spectra Detect Hub group](./Redundancy.md#hub-groups), an additional section is present at the top of the page. The section indicates which Hub instance in the group is the primary, and which is the fallback node. 
Clicking **Details** in this section displays more information about both Hub instances, such as their router IDs and configured priority values. To see the list of appliances that can be added to the currently selected configuration group, select *Appliances* in the sidebar. Appliances that are already in the current group have a tick in the checkbox next to their name. Appliances that are in other configuration groups have an indicator next to the appliance name. Users can save and/or apply configurations to appliances in the group by clicking on the **Save** button. This opens a pop-up dialog with the options to **Save** or **Save and Apply** the configuration to all appliances in the group. To apply the configurations to specific appliances, select their checkboxes in the appliance list below, and click the **Apply** button at the top of the list. Note that applying configuration removes any configurations that aren’t defined in the current configuration group, preventing mismatches caused by values from previous groups or manual changes. ### Configuration status The configuration status of appliances can be one of the following: - Applied - Not Applied - Pending - Error - Out of Sync **Note: Older appliances (i.e. before Spectra Detect v5.2) will show different status messages.** ## Adding Appliances to a Configuration Group Appliances that can be added to the current configuration group are listed in the *Appliances* section. Select the checkbox next to the appliance(s) that should be added to the group, and click **Save**. This opens a dialog with the options to save the selected appliances to the group, and to optionally apply the current group configuration. An appliance cannot be in two configuration groups at the same time. If an appliance is already in another configuration group, the Manager displays a warning message after clicking **Save**. Confirming the change will move the appliance from the previous configuration group to the current one. 
When an appliance is successfully added to a configuration group, the group’s configuration has to be manually applied to it either by clicking the **Save** button and selecting the **Save and Apply** option, or by selecting its checkbox in the Apply Configuration list and clicking the **Apply** button. The appliance will restart and start using the new configuration. The configuration dialogs on the appliance will indicate that the settings are being managed by the Manager. Although the configuration values modified in the group will still be editable in the appliance’s configuration dialogs, any changes saved on the appliance will not be applied as long as the appliance is managed through Central Configuration. If an appliance is added to a group and the configuration is applied, but the appliance is offline or unreachable by the Manager at that time, its settings will be modified when it becomes reachable. **Warning: Moving a Worker between Hub groups can result in configuration mismatches if the Worker’s settings do not align with the target Hub group. Applying configuration changes to the affected Worker, either through the GUI or the API, is expected to resolve the mismatches.** ## Creating Configuration Groups To create a new configuration group, click **Add new group** at the top of the Central Configuration page. **Tip:** It’s also possible to create a configuration group by clicking **Add new group** on the Appliance Management tab on the Dashboard page. Group names are case-sensitive, so "example" and "Example" are treated as two different groups. Supported characters for group names are `a-z`, `A-Z`, `0-9`, and the underscore ( `_` ). If the group name contains an unsupported character, an error message is displayed. Likewise, a warning is displayed when trying to create a configuration group with a name that is already taken by another group. The dialog also requires selecting the group type. Two types are supported: 1.
Configuration group (for Spectra Detect Worker appliances without a Hub), 2. Hub group (for [setting up a high-availability cluster](./Redundancy.md#hub-groups)). Select the first type ("Configuration Group") and click **Add** to confirm changes and create a new configuration group. The newly created group will not contain any appliances, and there won’t be any active configuration modules. **Important: Some configuration modules and options apply only to specific appliance types or versions. For example, the "Splunk" configuration module and its options apply only to the Worker. Read more in the [configuration modules](#configuration-modules) section.** To enable a configuration module, select it in the sidebar on the Central Configuration page and modify the options in it. The indicator in the sidebar warns about unsaved changes in the module. Unsaved changes are lost if the user navigates away from the Central Configuration page without clicking **Save** first. Configuration modules that are not enabled do not have any indicators in the sidebar. Those that are enabled and properly configured have a green indicator. If there are issues with the configuration of a module, the indicator changes to red. Save changes in the module by clicking **Save**. The indicator in the sidebar shows whether the module is configured properly. Repeat this procedure for every configuration module that needs to be enabled in the configuration group. To disable a configuration module, select it in the sidebar and click **Reset to Default**. This action removes all current configuration entries and restores the default settings in the UI. The changes are temporary until you **save** and then click **Apply**. Applying pushes the new settings to all connected appliances. The full list of supported configuration modules and options for all appliance types is available in the [configuration modules](#configuration-modules) section. 
## Managing Configuration Groups The following changes can be made to any configuration group on the Manager: - enable and disable configuration modules - change options in enabled configuration modules - add and remove appliances from a group - move appliances from one group to another - delete the entire group (does not apply to the *default* group, which cannot be deleted) **Depending on the type of change, appliances may be automatically restarted. Only applying new configurations to an appliance will trigger a restart of that specific appliance. Adding an appliance to a group, removing it from a group, moving it between groups, or deleting a group will not restart the appliances.** Depending on the type of appliance, the process of restarting and reloading configuration values might take some time. Spectra Detect Worker appliances generally take longer to restart. ## Configuration modules The configuration modules listed in this section can be enabled on the Central Configuration page, and their options can be modified. Some configuration modules support all versions of Spectra Detect appliances, but specific options within the modules apply only to specific versions. Such options are indicated by a comment in the Manager interface. ### General Root SSH login can be enabled for use with password management systems. The relevant checkboxes are not available by default. To enable them, do the following: 1. Log in via SSH to the Manager. 2. Run `sudo rlapp configure --sshd-control-enable`. This will enable the checkboxes on the Manager. 3. In the browser, go to *Spectra Detect Manager > Central configuration*, select the Hub group which will have root SSH login enabled, then go to *General > SSH*. 4. Enable *Permit SSH configuration*. 5. Enable *Permit root SSH login*. Note that this can only be applied to Hub groups. For SSH credentials, contact [ReversingLabs Support](mailto:support@reversinglabs.com).
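The CLI step from the procedure above can be run in one line on the Manager; the browser-based steps that follow it are still required. The command below is taken directly from the procedure:

```bash
# Run on the Spectra Detect Manager over SSH.
# Enables the SSH checkboxes under Central Configuration > General > SSH
# (applicable to Hub groups only; the remaining steps are done in the browser).
sudo rlapp configure --sshd-control-enable
```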
This section also includes the option to disable the use of swap memory. Swap memory is disk space used as RAM. Note that this option isn’t applicable if the appliances are deployed as Docker images. ### [Spectra Core](/General/AnalysisAndClassification/SpectraCoreAnalysis) This section lists configuration settings related to static analysis performed by Spectra Core. #### Processing Settings This setting determines which file formats will be unpacked by Spectra Core for detailed analysis. "Best" fully processes all formats supported by the appliance. "Fast" processes a limited set of file formats. The Fast option does not support unpacking and/or validation of several file formats, providing only minimal information for: - Archives (ActiveMimeMSO, ARC (.arc, .ark), ARSC, BLZ, CGBI, CRTF, DICOM (.dicom, .dcm, .dic), PE .Net Resources, LZ4, LZIP, LZMA, LZOP, MAR, NuGet, PAK, PCAP (http, smtp), PYZ, SQX, TIFF, WARC, XAR, ZOO) - Documents (bplist, Certutil (.crt, .cert, .pem), CHM, HTML (.html, .htm, .xhtml, .xht), IQY, SettingContent (.xml), SYLK, URL) - Mobile Applications (Android (.apk), iOS (.ipa), Windows Phone (.xap), APPX) - Multimedia (BMP, GIF, JPEG, PNG, SWF) - File System/Firmware (cramfs, HFSP, jffs2, squashfs, yaffs) - Web Applications (Google Chrome (.crx), Opera (.oex), Mozilla Firefox (.xpi)) - Quarantine formats (KasperskyKLQ, SymantecQBD, SymantecVBN) - Emails (UUE, YENC) - Disk Images (VHD, WIM (.wim, .swm)) - ...and others (CxMacro, Docker, PyInstaller, SMTP, sqlite3 (.db, .sqlite), VBE (.vbe, .jse)). Additionally, the report metadata will no longer include overlay and resource hashes, storyteller descriptions, Spectra Intelligence XREF data, Mitre ATT&CK mappings, IoC reasons, as well as mobile, browser and media details. #### CEF Messages Configuration Spectra Detect can log events using the Common Event Format (CEF) to ensure compatibility with security information and event management (SIEM) software products.
CEF is an extensible, text-based logging and auditing format that uses a standard header and a variable extension, formatted as key-value pairs. Select the checkbox to enable sending CEF messages to a syslog receiver. **CEF Message Hash Type**: The hash type to use for CEF messages. #### String Extraction Configuration **Note: Changing the strings configuration can impact classification.** Spectra Core can extract information from binaries in the form of strings. While useful in some contexts, this metadata can also be very extensive. **Minimum String Length**: The minimum length of extracted strings that make it into the analysis report. Default is 4. **Maximum String Length**: The maximum length of extracted strings that make it into the analysis report. Default is 32768. A value of 0 is interpreted as unlimited length, and can increase processing memory requirements. Additionally, the following options can be enabled: Unicode Printable, UTF-8 Encoding, UTF-16LE Encoding, UTF-16BE Encoding, UTF-32LE Encoding, UTF-32BE Encoding. #### MWP-related settings **MWP goodware factor**: The value configured here determines the threshold at which the KNOWN classification for a file (from the Malware Presence algorithm) will change to the Spectra Core Goodware classification. By default, all KNOWN classifications are converted to Goodware. Lowering the value reduces the number of files classified as goodware. Files with a trust factor higher than the configured value are considered to have no threats. Supported values are 0 - 5. The default is 2. **Extended MWP Metadata**: Select the checkbox to include extended malware presence metadata in Worker analysis reports for files analyzed with AV engines in the Spectra Intelligence system. Spectra Detect Worker must be connected to Spectra Intelligence, and the user account must have appropriate access rights for this feature to work. 
Note that extended metadata is displayed in the report only for those files that have been analyzed by AV engines at some point. #### Decompression configuration The maximum decompression factor is a decimal value between 0 and 999.9. It protects against intentional or unintentional archive bombs by terminating decompression if the size of unpacked content exceeds a set quota. The maximum allowed decompression ratio is calculated as: ``` MaximumDecompressionFactor * (1000 / ln(1 + InputFileSize * pow(10, -5))) ``` The `InputFileSize` must be in bytes. To calculate the maximum decompressed file size, multiply this ratio by the `InputFileSize`. Unpacking will stop once the size of all extracted content exceeds the theoretical maximum of the best-performing compression algorithm. Setting the factor to 0 will disable decompression management. ReversingLabs recommends against disabling decompression management. #### Propagation When propagation is enabled, files can be classified based on the content extracted from them. This means that files containing a malicious or suspicious file will also be considered malicious or suspicious. Goodware overrides ensure that any files extracted from a parent file and whitelisted by certificate, source or user override can no longer be classified as malicious or suspicious. Additionally, this goodware classification can be propagated from extracted files to their parent files in order to prevent and suppress possible false positives within highly trusted software packages. Goodware overrides will apply to all files with a trust factor value equal to or lower than the value configured here. The trust factor is expressed as a number from 0 to 5, with 0 representing the best trust factor (highest confidence that a file contains goodware). #### Enable Classification Scanners Fine-tune which scanners are used in the static analysis performed by Workers.
- Images: heuristic image classifier - PECOFF: Heuristic Windows executable classifier - Documents: Document format threat detection - Certificates: Checks whether the file certificate passes the certificate validation in addition to checking white and black certificate lists - Hyperlinks: Embedded hyperlink threat detection - Emails: Phishing and email threat detection #### ML Model Configuration Configure the behavior of machine learning models used in static analysis. Each model can be set to one of the following options: - **Ignored**: The model's output is not considered in the classification decision. - **Disabled**: The model is not executed during analysis. - **Suspicious**: Detections from this model result in a suspicious classification. - **Malicious**: Detections from this model result in a malicious classification. The default option for all models is **Malicious**. **Important: General models are early warning detectors that can identify novel malware. Being both predictive and more aggressive than specialized models, they can result in unwanted detections for legitimate software interacting with low-level system functions.** ##### Scripts | Option | Description | Example Detection | | ------ | ----------- | ----------------- | | Python | ML model for detecting malicious Python scripts. | Script-Python.Malware.Heuristic | | Visual Basic | ML model for detecting malicious Visual Basic scripts. | Script-Macro.Malware.Heuristic | | PowerShell | ML model for detecting malicious PowerShell scripts. | Script-PowerShell.Malware.Heuristic | | AutoIT | ML model for detecting malicious AutoIT scripts. | Script-AutoIt.Malware.Heuristic | | Excel | ML model for detecting malicious Excel macros. | Script-Macro.Malware.Heuristic | ##### Windows | Option | Description | Example Detection | | ------ | ----------- | ----------------- | | General | General ML model for detecting malicious Windows executables. 
| Win[32\|64].Malware.Heuristic | | Backdoor | ML model for detecting backdoor threats in Windows executables. | Win[32\|64].Backdoor.Heuristic | | Downloader | ML model for detecting downloader threats in Windows executables. | Win[32\|64].Downloader.Heuristic | | InfoStealer | ML model for detecting information stealer threats in Windows executables. | Win[32\|64].Infostealer.Heuristic | | Keylogger | ML model for detecting keylogger threats in Windows executables. | Win[32\|64].Keylogger.Heuristic | | Ransomware | ML model for detecting ransomware threats in Windows executables. | Win[32\|64].Ransomware.Heuristic | | Riskware | ML model for detecting riskware threats in Windows executables. | Win[32\|64].PUA.Heuristic | | Worm | ML model for detecting worm threats in Windows executables. | Win[32\|64].Worm.Heuristic | ##### Linux | Option | Description | Example Detection | | ------ | ----------- | ----------------- | | General | General ML model for detecting malicious Linux executables. | Linux.Malware.Heuristic | #### Ignore the Following Threat Types Selected threat types will be excluded from final classification decision. The classification returned will be Goodware with reason Graylisting. - Adware - Packer - Riskware - Hacktool - Spyware - Spam #### Password List Appliances use these passwords to decrypt password-protected compressed files submitted for analysis. Prior to submitting password-protected compressed files to the appliance, users can add the password for each file to this list (one password per line). Passwords can also be provided on each upload using the optional `user_data` request field. ### Spectra Intelligence **Applies to Spectra Detect Worker** | Option | Description | | -------------------------------------------------- | ------------------------------------------------------------ | | Enable Spectra Intelligence | Receive additional classification from the Spectra Intelligence cloud. By default, it is false. 
| | Spectra Intelligence URL | The host address for the Spectra Intelligence cloud. Click the *Test connection* button to test the connectivity. The default URL is *https://appliance-api.reversinglabs.com* | | Username | Spectra Intelligence username | | Password | Spectra Intelligence password | | Timeout | Default Spectra Intelligence connection timeout in seconds (maximum 1000). | | Enable proxy | Enables the configuration of an optional proxy connection. By default, it is false. | | Proxy host | Proxy host name for routing requests from the appliance to Spectra Intelligence (e.g., 192.168.1.15). | | Proxy port | Proxy port number (e.g., 1080). | | Proxy username | User name for proxy authentication. | | Proxy password | Password for proxy authentication. | Cache Spectra Intelligence results to preserve quota and bandwidth when analyzing sets of samples containing a lot of duplicates or identical extracted files. | Option | Description | |---------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Enable | Enable or disable the caching feature. Default: False | | Cache max size (%) | Maximum cache size expressed as a percentage of the total allocated RAM on the Worker. Default: 6.25, Range: 5 - 15 | | Cache cleanup window | How often to run the cache cleanup process, in minutes. It is advisable for this value to be lower than or equal to the TTL value. Default: 10, Range: 5 - 60 | | Maximum number of idle upstream connections | The maximum number of idle upstream connections. Default: 50, Range: 10 - 50 | | Cache entry TTL | Time to live for cached records, in minutes. Default: 240, Range: 1 - 3600 | Deep Cloud Analysis enables uploading files to Spectra Intelligence for scanning, making the results available in real-time on the Spectra Detect Dashboard.
You can enable or disable Deep Cloud Analysis in the **Administration > Spectra Detect Manager > Dashboard Configuration** section by clicking the "Enable Deep Cloud Analysis" checkbox. | Option | Description | |--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Wait for Deep Cloud Analysis Results | Delays report generation until the latest AV scan completes, ensuring updated AV metadata is included. This option is disabled by default. | | Deep Cloud Analysis Timeout | Maximum time in minutes AV waits for a scan to complete before timing out, ensuring system performance isn't impacted by long delays. Default: 240. Minimum value: 1. Maximum value: 1440. | | Scan Unpacked Files | Sends unpacked files to Deep Cloud Analysis for scanning. | | Rescan Files on Submission | Rescans files upon submission based on the configured interval to include the latest AV results in the reports. This option is enabled by default. | | Rescan Interval | Set the interval in days for triggering an AV rescan. If the last scan is older than the specified value, a rescan will be initiated. A value of 0 means files will be rescanned with each submission. Default: 3. | | File type filter | Select which file types should be excluded from Deep Cloud Analysis. `PE` maps to `PE16`, `PE`, `PE+`, `ELF` maps to `ELF32 Little`, `ELF32 Big`, `ELF64 Little`, `ELF64 Big`, `MachO` maps to `MachO32 Little`, `MachO32 Big`, `MachO64 Little`, `MachO64 Big`, `Package/JAR` maps to `Package/Java/JAR` and `Binary/Archive/JAR`. | | File name filter | Define which file name patterns should be excluded from Deep Cloud Analysis. Use wildcard (*) to match multiple characters. 
| ### T1000 File Reputation Appliance **Applies to Spectra Detect Worker** | Option | Description | | -------------- | ------------------------------------------------------------ | | Enable T1000 | When enabled, the Worker is configured to integrate with a ReversingLabs T1000 instance to receive additional classification information. By default, it is false. | | T1000 URL | The host address for the on-premises T1000 File Reputation appliance. | | Username | T1000 user name for authentication. Note: this user name needs to be created via the T1000 Web administration application. | | Password | T1000 password for authentication. | | Timeout | Default T1000 service connection timeout in seconds (maximum 60). | | Enable proxy | Enables the configuration of an optional proxy connection. By default, it is false. | | Proxy host | Proxy host name for routing requests from the appliance to T1000 (e.g., 192.168.1.15). | | Proxy port | Proxy port number (e.g., 1080). | | Proxy username | User name for proxy authentication. | | Proxy password | Password for proxy authentication. | ### SNMP **Applies to Spectra Detect Worker** | Option | Description | | -------------------- | ------------------------------------------------------------ | | Enable SNMP service | Select the checkbox to enable the Simple Network Management Protocol service. | | Community | Enter the name of an SNMP community list for authentication. A community is a list of SNMP clients authorized to make requests. The SNMP service will not function properly if this field is not configured. | | Enable trap sink | Select the checkbox to enable sending SNMP traps to the sink server. Traps are asynchronous, unsolicited SNMP messages sent by the SNMP agent to notify about important events on the appliances. | | Trap community | Enter the SNMP trap community string. If the *Enable SNMP service* and *Enable trap sink* checkboxes are selected, then this field is required.
| | Trap sink server | Enter the host name or the IP address of the trap sink server. The sink server is the location to which SNMP traps will be sent. If the *Enable SNMP service* and *Enable trap sink* checkboxes are selected, then this field is required. | | SNMP trap thresholds | A set of configuration fields allowing the user to set the thresholds (values that will trigger an SNMP trap) for supported types of events. Thresholds can be configured for average system load in 1, 5, and 10 minutes (as a percentage), used memory and used disk space (as a percentage), the size of Spectra Detect queues (maximum value is 20000) and the size of the classifications queue (maximum value is 20000). | ### System Time **Applies to Spectra Detect Worker** | Option | Description | | ----------------------------------- | ------------------------------------------------------------ | | Enable network time synchronization | Select the checkbox to enable server clock synchronization via NTP, which uses port 123. | | NTP servers | A list of servers, one per line, to use for system clock synchronization. | ### Spectra Detect Worker Configuration #### General ##### Limits It is possible to set up limits on file processing: - maximum file size - number of daily uploads File size is in MB. The daily limit includes files uploaded through a connector and resets at midnight. **Large Report Size Limit (MB)** - Reports over this size will be handled by optimizing memory consumption (RAM), which may result in longer processing and post-processing times. Views are not supported for the full report; they can only be used with the split report option. Use this option when minimizing memory usage is important. Setting to 0 disables this option. ##### Health Monitoring **Processing and Postprocessing Service Status Check** *Processing* and *postprocessing* service status fields can be used to configure how often the services will be checked for timeouts.
If any issues are detected, the process will be restarted. The default for both fields is 720 minutes. Setting to 0 will disable this option. **Monit Memory Threshold** *Monit Memory Threshold* is the percentage of memory, between 50 and 100, that services can use. If memory usage reaches the number configured here, the system will restart services. **If this number is set to 100, the restart will be disabled.** **Health Thresholds** Set the health thresholds to true or false to enable or disable the thresholds functionality. - **Disk High Threshold**: Specify the highest allowed percentage of hard disk usage on the system. If it exceeds the configured value, the appliance will start rejecting traffic. - **Queue High Threshold**: Specify the maximum number of items allowed in the queue. If it exceeds the configured value, the appliance will start rejecting traffic. Default: 100. ##### Cleanup *All values are in minutes* - File age limit How long an unprocessed file is present on the appliance before being deleted. Processed files are deleted immediately after processing. Default: 1440. - Task age limit How long before the record of a completed processing task is deleted. Default: 90. - Unprocessed task limit How long before an incomplete processing task is cancelled. Default: 1440. ##### Spectra Analyze Configuration - **Spectra Analyze IP address or FQDN**: Specify the hostname or IP address of Spectra Analyze appliance associated with the Worker. This address will be referenced in Splunk reports to enable retrieving additional processing information. #### File Processing ##### Processing - **Processing Mode**: Choose the processing mode of the Worker instance to improve pressure balancing. Supported modes are *standard* and *advanced*. In advanced mode, files larger than the threshold specified below are processed individually. 
- **Large File Threshold**: If advanced mode is selected, files larger than the threshold specified here will be processed individually, one by one. If standard mode is enabled, this parameter is ignored. The threshold value is expressed in MB. Default is 1000. Limit is 5000.
- **Unpacking Depth**: Select how "deep" a file is unpacked. For example, if a file contains other files, each of those containing other files, and so on, then by default (when this value is set to zero) Workers will unpack everything until no more files can be unpacked. Setting this value to a non-zero number specifies the depth of recursion, which can be useful for quicker (but shallower) analyses.
- **Processing Timeout**: Specify how many seconds the Worker should wait for a file to process before terminating the task. The default is 28800 seconds (8 hours). The minimum allowed value is 1.

##### Caching

- **Enable caching**: When caching is enabled, the SHA1 of file contents is used to determine if there have been recent analysis reports for the same file, and if those reports can be reused instead of processing the file again.
- **Cache Timeout**: If file processing caching is enabled, this parameter specifies how long the analysis reports should be preserved in the cache and reused before they expire (in seconds). Restarting the Worker or changing the configuration will clear the cache. Setting the value to 0 will use the timeout of 600 seconds.

##### Scaling

- Processing: Specify how many Spectra Core instances to run. Changing this setting from the default is not recommended.
- Post-processing: Specify how many report post-processing instances to run. These instances will then modify and save reports as specified by the user. Increasing this value can increase throughput for servers with extra available cores. Default: 1.
- Preprocessing Unpacker: Specify how many Spectra Core instances are used to unpack samples for Deep Cloud Analysis.
This setting only has effect if Deep Cloud Analysis is enabled with the Scan Unpacked Files capability.

*Applies only to Spectra Detect Worker v5.4.1 and higher*

- Load size: Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file is processed, another from the queue enters the processing state. Default is zero (0), which sets the maximum number of files to be processed to the number of CPU cores on the system.
- Concurrency Count: Defines the number of concurrent threads per Spectra Detect instance that should be used for processing. Default is zero (0), which sets the number of threads to the number of CPU cores on the system. Modifying this option may cause issues with the system. Consult with ReversingLabs Support before making any changes to this parameter.

#### Analysis Report

##### Default Report Settings

- **Strings**: Select the checkbox to enable extracting strings from files during Spectra Detect file analysis.
- **Relationships**: When enabled, the relationships section of the report lists hashes of files that are found within a given file.
- **Relationships for First Report Only**: If disabled, the reports for samples that contain child files will include the relationships of all their descendants. This can lead to a lot of redundant information in the report. If enabled, relationship metadata will be included only for the root parent file.
- **Network Reputation Report**: If enabled, Spectra Detect Worker (4.1+) file analysis reports will contain a new section, `network_reputation`, with reputation information on any network resources found within the file. This feature is unavailable if **Spectra Core > Processing Settings** is set to `Fast`, as it relies on interesting strings extracted during analysis.

##### API Report Settings

This section configures the default report view applied to a Spectra Detect report if no other view has been applied elsewhere.
Specify the report type that should be applied to the Worker analysis report. Report types are results of filtering the full report. In other words, fields can be included or excluded as required.

- **Report Type**: Available report types are *extended_small*, *small*, *medium*, and *large*, as well as *classification*, *classification_tags*, *extended*, *mobile_detections* and *short_cert*, which contain metadata equivalent to views with the same name. Click the *Upload* button to submit a custom report type to the appliance.
- **Report View**: Apply a view for transforming report data to the *large* report type to ensure maximum compatibility. See **Spectra Detect Product Documentation > Analysis and Classification > Customizing Analysis Reports** for detailed information about how report types and views work. Enable the **Top Container Only** option to only include metadata for the top container. Reports for unpacked files will not be generated. Enable the **Malicious Only** option for the report to contain only malicious and suspicious children.
- **Timeout (Seconds)**: Specify the timeout for API reports in seconds. The limit is 432000. Applies only to synchronous API calls.
- **Number of Concurrent Connections**: Specify the number of concurrent connections for API reports. The limit is 5000. Applies only to synchronous API calls.

##### Additional hashes

- CRC32
- MD5
- SHA384
- SHA512
- SSDEEP

Spectra Core calculates file hashes during analysis and includes them in the analysis report. Select which additional hash types should be calculated for files analyzed on connected Worker appliances. MD5 is selected by default. SHA1 and SHA256 hashes are always included, and therefore aren’t configurable. Note that selecting additional hash types may slow down report generation.

#### Authentication

##### Tokens

Specify tokens required for authorizing to the listed Spectra Detect Worker endpoints.
Every token must be a string of alphanumeric characters between 16 and 128 characters in length. ### Egress Integrations After analysis, Spectra Detect can save: - original files - unpacked files - file analysis reports These are forwarded to one or more external storage providers: - AWS S3 - Network file share (SMB, NFS) - Microsoft Cloud storage (Azure Data Lake, OneDrive, SharePoint) #### Spectra Analyze Integration *Applies only to Spectra Detect Worker v5.4 and higher* Spectra Analyze integration allows Spectra Detect to upload processed samples to a configured Spectra Analyze instance. This can be used for sharing samples between Spectra Detect and Spectra Analyze, and for further analysis within a configured Spectra Analyze instance. - Enable Spectra Analyze Integration A checkbox to enable the Spectra Analyze integration. - Spectra Analyze Instance An instance of Spectra Analyze to which the Worker will upload samples. - Enable Global Filter A checkbox to enable the global filter set in the [Filter Management](../Config/FilterManagement.md) section. When enabled, the Worker will only upload samples that pass the global filter criteria to Spectra Analyze. - Filter Name A filter that will be used to determine which samples are uploaded to Spectra Analyze. Samples uploaded to Spectra Analyze will have tags and comments visible on the Spectra Analyze Sample Summary page. - Supported tags: `filter_name`, `source_address`, `connector_name`, and `hostname`. - Supported comments: `full_file_path`. **Note:** If the upload is performed via the API, `source_address`, `connector_name`, and `full_file_path` will not be shown on the Spectra Analyze Sample Summary page. Clicking the `Create` button opens a filter creation dialog. The filter creation dialog follows the [Filter Management](./FilterManagement.md) workflow. #### AWS S3 There are two ways to connect to an output bucket: 1. Using your own S3 credentials. 2. Using an **IAM role**. 
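The token format rule in the **Authentication > Tokens** section above (a string of 16 to 128 alphanumeric characters) is easy to check before configuring a token. A minimal illustrative sketch (not part of the product; the function name is hypothetical):

```python
import re

# Token rule from the Authentication > Tokens section:
# a string of 16 to 128 alphanumeric characters.
TOKEN_RE = re.compile(r"[A-Za-z0-9]{16,128}")

def is_valid_worker_token(token: str) -> bool:
    """Return True if the token satisfies the documented format rule."""
    return TOKEN_RE.fullmatch(token) is not None
```

For example, a 40-character hex digest passes, while a token containing dashes, or one shorter than 16 characters, is rejected.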
The **AWS S3 Access Key ID** and **AWS S3 Secret Access Key** (in the *General* tab) must be provided in **both cases**. If ReversingLabs hosts the appliance and you use an IAM role, we will provide the access key and secret key. **Bucket naming conventions**: regardless of the type of storage (files, unpacked files, reports), input fields for S3 buckets expect the bucket names to conform to specific rules. The bucket name can be between 3 and 63 characters long, and can contain only lowercase characters, numbers, periods, and dashes. Each label in the bucket name must start with a lowercase letter or number. The bucket name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods. The bucket name cannot be formatted as an IP address. ##### General - AWS S3 Access Key ID Specify your access key ID. - AWS S3 Secret Access Key Specify your secret access key. - AWS S3 Endpoint URL Enter S3 endpoint URL if you want to use S3 over HTTP. Only required in non-AWS setups in order to store files to an S3-compatible server. When this parameter is left blank, the default value is used ([https://aws.amazonaws.com](https://aws.amazonaws.com/)). Supported pattern(s): https?://.+ - SSL Verify This checkbox enables SSL verification in case of an `https` connection. - CA Path Path to the certificate file for SSL verification. If this parameter is left blank or not configured, SSL verification is disabled. By default, it is set to **/etc/pki/tls/certs/ca-bundle.crt**. - AWS S3 Region The default value is `us-east-1`. - AWS S3 Signature Used to authenticate requests to the S3 service. In most AWS regions, only Signature Version 4 is supported. For AWS regions other than `us-east-1` , the value `s3v4` must be configured here. (*Deprecated*) - AWS S3 Number of Upload Retries Maximum number of retries when saving a report to an S3-compatible server. 
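The bucket naming conventions above can be checked programmatically before saving the configuration. The sketch below is illustrative only (it is not the validation code the appliance uses) and implements exactly the rules stated in this section:

```python
import ipaddress
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the naming rules documented above."""
    if not 3 <= len(name) <= 63:                     # 3-63 characters long
        return False
    if not re.fullmatch(r"[a-z0-9.-]+", name):       # lowercase, digits, periods, dashes only
        return False
    if name.endswith("-"):                           # cannot end with a dash
        return False
    if ".." in name or ".-" in name or "-." in name: # no consecutive periods, no dash next to period
        return False
    # every dot-separated label must start with a lowercase letter or number
    if any(not label or not label[0].isalnum() for label in name.split(".")):
        return False
    try:                                             # cannot be formatted as an IP address
        ipaddress.ip_address(name)
        return False
    except ValueError:
        return True
```

Under these rules, `my-logs.2024` is accepted, while `My-Bucket`, `bucket_name`, `bucket-`, and `192.168.0.1` are all rejected.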
##### AWS IAM Role Settings Using S3 storage in a way where the customer secret key isn’t shared with the Spectra Detect system requires setting up an IAM role for Spectra Detect in the AWS console. This requires setting up the Amazon Resource Name (ARN) for Workers, which they can use to obtain temporary tokens. These temporary tokens allow saving files to S3 buckets without the customer secret access key. For this setup, an **external ID** is also required. This is provided by the entity which owns an S3 bucket. The owner of that bucket takes the AWS Account ID of the account that owns the appliance and builds an ARN with it (in the hosted Spectra Detect deployment, we provide our AWS Account ID). - ARN Session Name Name of the session visible in AWS logs. - Token duration How long the authentication token lasts before it expires and is refreshed. The minimum value is 900 seconds. - Refresh Buffer Number of seconds defined to fetch a new ARN token before the token timeout is reached. This must be a positive number, and the default value is 5. ##### File Storage This section configures how **original** files are stored. - Enable S3 File Storage A checkbox to enable storing files in an S3 bucket. - Store metadata This option stores the analysis metadata to the uploaded S3 object as key-value pairs. Metadata in S3 is used to attach additional information to objects. It can also be used to forward the data to another connector, appliance group, or Spectra Analyze for further processing. For example, you can use this metadata as a filter for only retrieving malicious files. By default, this option is enabled. For more details, see [AWS S3 Using Metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html). - Enable Global Filter A checkbox to enable the global filter set in the [Filter Management](../Config/FilterManagement.md) section. When enabled, the Worker will only upload samples that pass the global filter criteria. 
- Filter Name: A filter that will be used to determine which samples are uploaded. Clicking the `Create` button opens a filter creation dialog. The filter creation dialog follows the [Filter Management](./FilterManagement.md) workflow.
- Output Bucket Destination: There are two "modes" for setting the output bucket destination:
  1. **Only Default Bucket**: storing files in the default bucket by inputting the bucket name.
  2. **Map to Input Bucket**: storing files in buckets that are mapped to specific input buckets.
     - **Input Bucket**: A name of the input bucket that will be mapped to the output bucket.
     - **Output Bucket**: A name of the output bucket where the samples will be stored. If the output bucket is empty, any sample uploaded from the specified input bucket will be ignored.
     - **Mapping Filter name**: A filter that will be used to determine which samples are uploaded. Clicking the `Create` button opens a filter creation dialog. The filter creation dialog follows the [Filter Management](./FilterManagement.md) workflow.

  After configuring the output to input bucket mappings, a table will be displayed with the columns *Input Bucket*, *Output Bucket*, *Filter*, and *Actions* (edit or delete), where users can view and manage the mappings.

- Connection Method for Output Buckets: Used to set individual AWS connection methods for target buckets. Clicking the `Add New Mapping` button opens a dialog where users can set up a new mapping. Multiple mappings can be set up. The dialog contains the following fields:
  - Buckets: A list of input buckets that will be mapped to the output bucket.
  - Connection Strategy:
    - **Standard AWS**: requires setting the *AWS S3 Access Key ID* and *AWS S3 Secret Access Key*, with additional options for setting the *AWS S3 Endpoint URL*, *AWS S3 Region*, *AWS S3 Signature* (deprecated in Spectra Detect version 5.4.0+), *AWS S3 Number of Upload Retries*, *Spectra Detect Worker AWS S3 Server-Side Encryption Algorithm*, verifying the HTTPS connection against the CA bundle, and setting the *CA Path*.
    - **ARN Connection**: requires selecting a strategy for role assumption: *Default AWS* or *Custom AWS Connection to Assume ARN Role*. Based on the selected role assumption strategy, input the configurations.
      - **Default AWS**: *Role ARN*, *External ID*, *ARN Session Name*, *Token Duration*, *Refresh Buffer*, *Spectra Detect Worker AWS S3 Server-Side Encryption Algorithm*, *verify the HTTPS connection against the CA bundle*, and the *CA Path*.
      - **Custom AWS Connection to Assume ARN Role**: *AWS S3 Access Key ID*, *AWS S3 Secret Access Key*, *AWS S3 Endpoint URL*, *AWS S3 Region*, *AWS S3 Signature* (deprecated in Spectra Detect version 5.4.0+), *AWS S3 Number of Upload Retries*, *Role ARN*, *External ID*, *ARN Session Name*, *Token Duration*, *Refresh Buffer*, *Spectra Detect Worker AWS S3 Server-Side Encryption Algorithm*, *verify the HTTPS connection against the CA bundle*, and the *CA Path*.

  After configuring the output to input bucket mappings, a table will be displayed with the columns *Input Bucket*, *Output Bucket*, *Filter*, and *Actions* (edit or delete), where users can view and manage the mappings. Clicking the `Delete` button removes the selected mapping.

- Server-Side Encryption Algorithm: The server-side encryption algorithm can be any server-side encryption configured on the target default bucket (such as `aws:kms` or `AES256`). Clicking the `Test Connection` button will attempt to verify the selected server-side encryption.
- Folder: Folder where samples will be stored in the given S3 bucket.
The folder can be up to 1024 bytes long when encoded in UTF-8. It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. A folder key containing relative path elements ("../") is valid if, when parsed left to right, the cumulative count of relative path segments never exceeds the number of non-relative path elements encountered.

**Note: Storage age policy**

Spectra Detect retains data stored in customer S3 buckets for **12 months**. After this period, the data is automatically deleted from the S3 storage.

##### Unpacked Files Storage

This section configures how **unpacked** files are stored. There is just one **output mode**: a default bucket that needs to be specified in the *Bucket Name* field (with an optional *Folder* name).

Unpacked files are saved in **two possible formats**:

1. Individual unpacked files are saved in subfolders, which can be named in three different ways:

   | Method | Format |
   | ------------------- | ---------------------------------------------- |
   | Date-based | `YYYY/mm/dd/HH` |
   | Date-and-time-based | `YYYY/mm/dd/HH/MM/SS` |
   | SHA-1-based | first four characters of a sample’s SHA-1 hash |

2. Unpacked files are saved as ZIP archives, which can optionally be password-protected.

##### Report Storage

This section configures how **analysis reports** are stored.

- Enable S3 Report Storage: A checkbox to enable storing reports in an S3 bucket.
- Enable Global Filter: A checkbox to enable the global filter set in the [Filter Management](../Config/FilterManagement.md) section. When enabled, mapping filters will be automatically disabled.
- Filter Name: A filter that will be used to determine which samples are uploaded. Clicking the `Create` button opens a filter creation dialog. The filter creation dialog follows the [Filter Management](./FilterManagement.md) workflow.
- Output Bucket Destination: There are three "modes" for setting the output bucket destination: 1.
**Map to Input Bucket**: storing reports in buckets that are mapped to specific input buckets.
   - **Input Bucket**: A name of the input bucket that will be mapped to the output bucket.
   - **Output Bucket**: A name of the output bucket where the samples will be stored. If the output bucket is empty, any sample uploaded from the specified input bucket will be ignored.
   - **Mapping Filter name**: A filter that will be used to determine which samples are uploaded. Clicking the `Create` button opens a filter creation dialog. The filter creation dialog follows the [Filter Management](./FilterManagement.md) workflow.

   After configuring the output to input bucket mappings, a table will be displayed with the columns *Input Bucket*, *Output Bucket*, *Filter*, and *Actions* (edit or delete), where users can view and manage the mappings.

2. **Same as Input Bucket**: storing reports in the same bucket as the input bucket.
3. **Only Default Bucket**: storing reports in the default bucket by inputting the bucket name.

- Connection Method for Output Buckets: Used to set individual AWS connection methods for target buckets. Clicking the `Add New Mapping` button opens a dialog where users can set up a new mapping. Multiple mappings can be set up. The dialog contains the following fields:
  - Buckets: A list of input buckets that will be mapped to the output bucket.
  - Connection Strategy:
    - **Standard AWS**: requires setting the *AWS S3 Access Key ID* and *AWS S3 Secret Access Key*, with additional options for setting the *AWS S3 Endpoint URL*, *AWS S3 Region*, *AWS S3 Signature* (deprecated in Spectra Detect version 5.4.0+), *AWS S3 Number of Upload Retries*, *Spectra Detect Worker AWS S3 Server-Side Encryption Algorithm*, verifying the HTTPS connection against the CA bundle, and setting the *CA Path*.
    - **ARN Connection**: requires selecting a strategy for role assumption: *Default AWS* or *Custom AWS Connection to Assume ARN Role*.
Based on the selected role assumption strategy, input the configurations.
      - **Default AWS**: *Role ARN*, *External ID*, *ARN Session Name*, *Token Duration*, *Refresh Buffer*, *Spectra Detect Worker AWS S3 Server-Side Encryption Algorithm*, *verify the HTTPS connection against the CA bundle*, and the *CA Path*.
      - **Custom AWS Connection to Assume ARN Role**: *AWS S3 Access Key ID*, *AWS S3 Secret Access Key*, *AWS S3 Endpoint URL*, *AWS S3 Region*, *AWS S3 Signature* (deprecated in Spectra Detect version 5.4.0+), *AWS S3 Number of Upload Retries*, *Role ARN*, *External ID*, *ARN Session Name*, *Token Duration*, *Refresh Buffer*, *Spectra Detect Worker AWS S3 Server-Side Encryption Algorithm*, *verify the HTTPS connection against the CA bundle*, and the *CA Path*.

  After configuring the output to input bucket mappings, a table will be displayed with the columns *Input Bucket*, *Output Bucket*, *Filter*, and *Actions* (edit or delete), where users can view and manage the mappings. Clicking the `Delete` button removes the selected mapping.

- Folder: Folder where reports will be stored in the given S3 bucket. The folder can be up to 1024 bytes long when encoded in UTF-8. It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. A folder key containing relative path elements ("../") is valid if, when parsed left to right, the cumulative count of relative path segments never exceeds the number of non-relative path elements encountered.
- Report Type / View: Upload, manage and configure report types. See **Spectra Detect Product Documentation > Analysis and Classification > Customizing Analysis Reports** for detailed information about how report types and views work.
- Top Container Only: Enable to only include metadata for the top container. Reports for unpacked files will not be generated.
- Malicious Only: For the report to contain only malicious and suspicious children.
- Split Report Split reports of extracted files into individual reports. - Archive and Compress Split Report Files Enable sending a single, smaller archive of split report files to S3 instead of each file. Relevant only when the "Split report" option is used. - Archive Password If set, enables encryption of the archive file using this value as the password. Relevant only when the "Archive and compress split report files" option is used. - Subfolder Reports can be saved into subfolders, with specific naming formats: | Subfolder naming | Format | |---------------------|------------------------------------------------| | Date-based | `YYYY/mm/dd/HH` | | Date-and-time-based | `YYYY/mm/dd/HH/MM/SS` | | SHA-1-based | first four characters of a sample’s SHA-1 hash | - Enable Filename Timestamp This refers to the naming of the report file itself (and not the subfolder in which it is stored). A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow [the strftime specification](https://strftime.org/) and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. ##### SNS You can enable publishing notifications about file processing status and links to Worker analysis reports to an Amazon SNS (Simple Notification Service) topic. The configured AWS account must be given permission to publish to this topic. The topic name is limited to 1000 characters. #### Network Share Configuration Specify the protocol and the address of the network share. The supported protocols are NFS and SMB, so for an NFS share, the address would start with `nfs://`. The address should include the IP address or URL of the share, and should also include the full path to the shared directory. Note: the *username* and *password* fields are not required for NFS, only for SMB. #### Microsoft Cloud Storage Configuration ##### Azure To set up saving on Azure, specify the storage account name and storage account key. 
In order to use a custom server, specify the endpoint suffix for the address of your Azure Data Lake container, which defaults to **core.windows.net**. The Azure integration can save analyzed files, unpacked files, as well as file analysis reports. The integration **doesn’t** support saving unpacked files or split reports into a single ZIP archive. Regardless of the use, the container name must be a valid DNS name, conforming to the following naming rules: 1. Container names must start or end with a letter or number, and can contain only letters, numbers, and the dash (-) character. 2. Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names. 3. All letters in a container name must be lowercase. 4. Container names must be from 3 through 63 characters long. ##### OneDrive / SharePoint Online Microsoft Cloud Storage allows saving of reports and analyzed files on either SharePoint Online or OneDrive storage types. After configuring the *Client ID*, *Client Secret*, and the application’s *Custom Domain* (as configured in the Azure portal), select the desired storage type and fill in the appropriate additional credentials: OneDrive storage requires the OneDrive account *Username*, while SharePoint requires the *Site Hostname* and *Site Relative Path* (for example, `sharepoint.com` and `sites/test-public`). #### Saving Files and Reports Different integrations have different options available for saving analyzed files and reports, but all will require configuring the storage location (S3 bucket or buckets, Microsoft Cloud storage, Azure container, or network share) and a folder on that storage (optional). These settings relate to where Spectra Detect Workers appliances save files. ##### Report customization It’s possible to change the look of the report to further customize it. 
See **Spectra Detect Product Documentation > Analysis and Classification > Customizing Analysis Reports** for detailed information about how report types and views work.

##### Splitting reports

All storage options allow splitting the report into several JSON files, one report per extracted file. In addition to that, S3 and network share sections also allow saving these individual reports together in an archive (optionally password-protected).

##### Report name format

Reports are saved with their timestamp at the end of their file name. By default, they will end with an ISO 8601 datetime string (YYYY-MM-DDTHH:mm:ss.SSSSSS), but this can be modified following the Python `strftime()` syntax. For example, to save reports only with their year and month, set the *Filename Timestamp Format* to `%Y%m`. This field is editable only if the **Enable Filename Timestamp** option is turned on.

### Saving unpacked files

During analysis, a Worker can extract "children" from a parent file (the file initially submitted for analysis). Such child files can be saved to one of the external storage options (S3 bucket, Azure container, or network share). It’s also possible to sort them into subfolders based on date, date and time, or based on the first four characters of the sample’s SHA-1 hash. S3 and network share sections also allow saving these unpacked files as a ZIP archive instead of each individual file (*Archive and Compress Unpacked Files*).

#### Callback

Select the checkbox to enable sending file analysis results to an HTTP server ("callback"), and optionally return the analysis report in the response. Specify the full URL that will be used to send the callback POST request. Only HTTP(S) is supported. If this parameter is left blank, no requests will be sent. Additionally, specify the number of seconds to wait before the POST request times out. Default is 5 seconds.
In case of failure, the Worker will retry the request up to six times, increasing waiting time between requests after the second retry has failed. With the default timeout set, the total possible waiting time before a request finally fails is 159 seconds. ##### CA path If the Callback URL parameter is configured to use HTTPS, this field can be used to set the path to the certificate file. This automatically enables SSL verification. If this parameter is left blank or not configured, SSL verification will be disabled, and the certificate will not be validated. ##### Report options and views It’s possible to change the look of the report to further customize it. Click the *Upload* button to submit a custom report type, or select one of the default options. Names for views can only include alphanumerical characters, underscores and dashes. See **Spectra Detect Product Documentation > Analysis and Classification > Customizing Analysis Reports** for detailed information about how report types and views work. Enable the **Top Container Only** option to only include metadata for the top container. Reports for unpacked files will not be generated. Enable the **Malicious Only** option for the report to contain only malicious and suspicious children. Enable the **Split Report** option to split reports of extracted files into individual reports. Enable the **Include Full Report** option to retrieve the full report. By default, only the summary report is provided in the callback response. ### Archiving *only for S3 and Azure* Files can be stored either as-is or in a ZIP archive. This archive can further be password-protected and customized: - Zip Compression Level: 0 (no compression) to 9 (maximum compression). The default is 0. - Maximum Number of Files: Specify the maximum allowed number of files that can be stored in one ZIP archive. 
Allowed range: 1 … 65535 ### File filters **Tip: You can also configure [advanced file filters](../FilterManagement).** The file filter is used by the Worker to control which files won't be stored after processing. You can filter out files based on their classification, factor, file type, and file size. For a file to be filtered out by the Worker, at least one of the filters has to match. To enable the feature, select the *File filters* checkbox in the **Central Configuration ‣ Egress Integrations ‣ File Filters** dialog. Then, use the **Add new filter** button to create custom file filters. Every filter can be individually enabled or disabled by selecting or deselecting the *Active* checkbox, or the checkbox to the right of the filter name. All created filters are listed in the dialog. Every filter can be expanded by clicking the arrow to the left of the filter name. When a filter is expanded, users can modify any of the filtering criteria, or remove the entire filter by clicking **Delete**. #### File Filtering Criteria | CRITERIA | DESCRIPTION | | -------------- | ------------------------------------------------------------ | | Classification | Allows the user to filter out files by their classification. Supported values are "Known" and "Unknown". Both values can be provided at the same time. Malicious and suspicious files cannot be filtered out. | | Factor | Allows file filtering based on threat factor. When a file is processed, it is assigned a threat factor value, represented as a number from 0 to 5, with 5 indicating the most dangerous threats (highest severity). Enter one value from 0 to 5. The filter program will filter out files with the threat factor of N (entered value) or less. | | File Type | Spectra Detect Worker can identify the file type for every analyzed file. To filter out files by type, select one or more file types, or select the "All" checkbox. 
| | File Size | To filter out files by size, specify the file size in any of the supported units, and the file size condition (greater than or less than). The file size value should be provided as an integer; if it is not, it will automatically be rounded down to the nearest whole integer. | ### Splunk **Applies only to Spectra Detect Worker** | Option | Description | | ------------------------- | ------------------------------------------------------------ | | Enable | Select the checkbox to enable Splunk integration. | | Host | Specify the hostname or IP address of the Splunk server that should connect to the Worker appliance. | | Port | Specify the TCP port of the Splunk server’s HTTP Event Collector. Allowed range(s): 1 … 65535 | | Token | Specify the API token for authenticating to the Splunk server. Not mandatory. | | Use HTTPS | Select the checkbox to use the HTTPS protocol when sending information to Splunk. If it’s not selected, non-encrypted HTTP will be used. | | SSL require certificate | If HTTPS is enabled, selecting this checkbox will enable certificate verification. The Worker host needs to have correct certificates installed in order to successfully pass verification. | | Timeout - **Worker only** | Specify how many seconds to wait for a response from the Splunk server before the request is considered failed. If the request fails, the report will not be uploaded to the Splunk server, and an error will be logged. Default is 5 seconds | | Report type | Specify the report type that should be applied to the Worker analysis report before sending it to Splunk. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Click the *Upload* button to submit a custom report type to the appliance. Report types are stored in the `/etc/ts-report/report-types` directory on each Worker. 
| | Report view | Specify the name of an existing transformation view that should be applied to the Worker analysis report before sending it to Splunk. Views can be used to control the presentation format and the contents of the analysis report; for example, to flatten the JSON hierarchy, or to preserve only selected parts of the report. Allowed characters are alphanumeric characters, underscore and dash. Views are stored in the `/usr/libexec/ts-report-views.d` directory on each Worker. | | Top Container Only | When enabled, the report sent to Splunk contains only the report for the top container; reports for child files are excluded. | ### System Alerting **Applies to Spectra Detect Worker** | Option | Description | | ----------------------------------------- | ------------------------------------------------------------ | | **Syslog receiver** | | | Enable | Select the checkbox to send alerts about the status of critical system services to a syslog server. Read more about which services are supported in the table below. | | Host | Host address of the remote syslog server to send alerts to. | | Port | Port of the remote syslog server. | | Protocol | Communication protocol to use when sending alerts to the remote syslog server. Options are TCP (default) and UDP. | **System Alerting: Supported Services** Syslog notifications are sent when any of the services or operations meets the condition(s) defined in the table. 
| SYSTEM OPERATION OR SERVICE | NOTIFICATION TRIGGER | | ------------------------------- | ------------------------------------------- | | RAM | usage is over 90% for 10 minutes | | CPU | usage is over 40% for 2 minutes | | CPU wait (waiting for IO) | over 20% for 2 minutes | | Disk usage | over 90% for 10 minutes | | UWSGI service | down for 2 minutes | | NGINX service | down for 2 minutes | | RABBIT-MQ service | down for 2 minutes | | POSTGRES service | down for 2 minutes | | MEMCACHED service | down for 2 minutes | | CROND service | down for 2 minutes | | SSHD service | down for 2 minutes | | SUPERVISORD service | down for 2 minutes | | SMTP | if enabled, but stopped for 4 minutes | | NTPD | if enabled, but stopped for 4 minutes | | Any of the SUPERVISORD services | if it has crashed | | SCALE socket | not detected/does not exist for 4 minutes | | SCALE INPUT queue | receiving over 500 messages for 10 minutes | | SCALE RETRY queue | receiving over 100 messages for 10 minutes | | COLLECTOR queue | receiving over 1000 messages for 10 minutes | | CLASSIFICATION queue | receiving over 5000 messages for 10 minutes | In addition, the Manager sends syslog alerts for files that haven’t been processed in 24 hours and for file processing failures. ### Log Management **Applies to Spectra Detect Worker** Users can configure the [level](https://en.wikipedia.org/wiki/Syslog#Severity_level) of events that are sent to a syslog receiver (*Syslog log level*) or that are saved in internal logs (*TiScale log level*). It is not possible to save only high-severity events internally while sending lower-severity events to a syslog receiver: events must be saved before they can be sent. For this reason, the *TiScale log level* must always be set to a severity equal to or lower than (i.e., at least as verbose as) the *Syslog log level*. 
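To make the constraint concrete: using standard syslog severity numbering (0 = Emergency through 7 = Debug, where a lower number means higher severity), the internal save level must be at least as verbose as the forwarding level. The following Python sketch is purely illustrative and not part of the appliance:

```python
# Standard syslog severity numbers: lower number = higher severity.
SYSLOG_SEVERITIES = {
    "emergency": 0, "alert": 1, "critical": 2, "error": 3,
    "warning": 4, "notice": 5, "informational": 6, "debug": 7,
}

def levels_valid(tiscale_level: str, syslog_level: str) -> bool:
    """Events must be saved before they can be forwarded, so the
    TiScale (save) level must be at least as verbose as the
    Syslog (forward) level."""
    return SYSLOG_SEVERITIES[tiscale_level] >= SYSLOG_SEVERITIES[syslog_level]
```

For example, saving everything down to "debug" while forwarding only "error" and above is a valid combination, while saving only "error" internally but forwarding "informational" is not.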
--- ## Spectra Detect Certificate Management — Root CA Trust Store *Administration > Certificates* The **Root CA Trust Store Management** page enables administrators to manage Root CA certificates for Spectra Detect and [Spectra Analyze](/SpectraAnalyze/) appliances. Users can add, remove, trust, and distrust Root CA certificates through the web interface to customize the appliance's certificate validation behavior for secure communications. ## Certificate Scope Certificates managed through the Spectra Detect interface apply to: - **Spectra Detect Manager** - **Spectra Detect Hub** - **Spectra Detect Worker** - **Spectra Analyze** ## Accessing Certificate Management To access the certificate management interface: 1. Log into the appliance web interface as an administrator. 2. Navigate to **Administration > Certificates**. 3. The certificate management page displays all Root CA certificates currently in the Trust Store. ## Certificate List Overview The certificate management page displays a table with the following information for each certificate: | Column | Description | | -------------- | ------------------------------------------ | | **ID** | Ordinal number of the certificate. | | **Subject** | Certificate subject name. | | **Issuer** | Certificate issuer. | | **Valid From** | Certificate validity start date. | | **Valid To** | Certificate validity end date. | | **Filename** | Name of the `.pem` file. | | **Trusted** | Whether the certificate is currently trusted. | | **Actions** | Available operations for the certificate. | ## Adding Root CA Certificates To add a new Root CA certificate to the Trust Store: 1. Under **Administration > Certificates**, click **Add Certificate**. 2. Click **Browse Files**, and select a valid certificate file in `.pem` format. 3. Click **Upload** to add the certificate to the Trust Store. 4. The certificate appears in the list with the **Trusted** status set to *Yes*. ## Managing Certificates To distrust or blocklist a certificate: 1. 
Locate the certificate in the list. 2. Under **Actions**, click **Distrust**. 3. Confirm the change in the modal dialog. 4. The **Trusted** status changes to *No*. To re-trust a certificate: 1. Locate the certificate in the list. 2. Under **Actions**, click **Trust**. 3. Confirm the change in the modal dialog. 4. The **Trusted** status changes to *Yes*. To remove a certificate: 1. Locate the certificate in the list. 2. Under **Actions**, click **Remove**. 3. Confirm the change in the modal dialog. 4. The certificate is deleted from the Trust Store. **Info: Some certificates may be greyed out because you don't have the necessary permissions to modify them.** To reset the Trust Store: 1. Click **Remove All Added Certificates**. 2. Confirm the action in the modal dialog. 3. All user-added certificates are removed from the Trust Store. **Warning: Removal of certificates works for both trusted and distrusted certificates and completely deletes them from the system.** Clicking **Remove All Added Certificates** removes all certificates added by users and cannot be undone. --- ## Spectra Detect Filter Management — Advanced Egress Filtering Rules In addition to regular [file filtering on egress integrations](../ApplianceConfiguration#file-filters), you can set up an advanced filter. Advanced filters have more options and therefore allow more granularity. **Note: Advanced filters are currently available for [Spectra Analyze](/SpectraAnalyze/) Integration (Central Configuration > Egress Integrations > Spectra Analyze Integration), AWS S3 (Central Configuration > Egress Integrations > AWS S3 > File Storage/Unpacked Files Storage/Report Storage > Enable Filter), and Callback (Central Configuration > Egress Integrations > Callback > Enable Filter).** ### Types of filters Filters can be either **inclusive** or **exclusive**. If a filter is inclusive, all files that match your criteria will be saved to the configured destination (an S3 bucket, for example). 
If it's exclusive, you will save everything *except* the files that match your criteria. When creating a filter, you can also specify that it should only apply to top-level parent files: **Filter Applies Only to Container**. In this case, extracted files will not be saved. ### Conditions Conditions allow you to specify rules based on various attributes such as file type, file size, classification, and more. Conditions are grouped into categories: - **File**: filters related to one of the recognized file types and file size - **Classification**: filters related to [how we classify files](/General/AnalysisAndClassification/Classification) - **Identification**: filters for file format identification - **Behavior**: filters for network calls made by a file (see the [report schema](/SpectraDetect/report-schema#uri)) - **Document**: if a file is identified as a document, these filters allow you to narrow down what you save based on the document attributes, such as number of pages or word count - **Unpacking**: similar to how the "Identification" category of conditions relates to file format identification, this category contains filters based on file unpacking - **File statistics**: filters operating on statistics produced by an analysis (for example, the `count` of files of a particular type found in an analyzed file) - **Capabilities**: different capabilities detected in the analyzed file - **YARA matches**: see description of the filters [in the report schema](/SpectraDetect/report-schema/#yara) - **Indicators**: see [report schema](/SpectraDetect/report-schema/#indicator) - **Mitre**: one of the Mitre [techniques](https://attack.mitre.org/techniques/enterprise/) - **Tags**: the full list is available in the [report schema appendix](/SpectraDetect/report-schema/#appendix-c-spectra-core-tags) --- ## Spectra Detect Notifications — Classification Alerts and Delivery Rules Users can access the notifications page from the header by clicking the notifications icon, which 
will display unread notifications, providing a quick overview of alerts that require attention. Clicking the **See all notifications** link redirects users to the notifications page, where they can view all notifications. A table on the notifications page displays all notifications, separated in columns by *Type*, *Time*, and *Notification*. The *Type* column indicates the type of notification. The *Time* column displays the timestamp of the notification, indicating when the event occurred. The *Notification* column provides a brief description of the event, such as a classification change from unknown (no threats found) to malicious. Filtering options are available to help users quickly find relevant information. Notifications can be [filtered by period](#filter-by-period), allowing users to view alerts from the last hour, day, week, month, or all time. Users can also [filter by read status](#filter-by-read-status), distinguishing between read and unread notifications, [filter by notification type](#filter-by-notification-type), including Cloud Classification Changes and Classification Detection, or [filter by classification](#filter-by-classification), narrowing the results to cloud classification changes where a sample was marked as unknown (no threats found), malicious, suspicious, or goodware. Clicking on the hash value within the alert redirects users to the **Dashboard > Analytics > Detections Overview** table, providing additional context and information about the sample. The **Mark All as Read** button allows users to clear unread notifications by marking them as read. This can be used to quickly clear the notification list and focus on new alerts as they arrive. ## Filter by Period The filter by period option allows users to view notifications from the last hour, day, week, month, or all time. This can be used to quickly identify recent alerts and track changes in the classification status. 
## Filter by Read Status The filter by read status option allows users to distinguish between read and unread notifications. This can be used to quickly identify new alerts that require attention or to review previously read notifications for additional context. ## Filter by Notification Type The filter by notification type option allows users to distinguish between Cloud Classification Changes and Classification Detection notifications. ## Filter by Classification The filter by classification option allows users to narrow results based on the classification of the sample, including samples marked as Unknown (No Threats Found), Malicious, Suspicious, or Goodware. ## Notification Settings The notification settings page allows users to configure and manage custom notification rules for tracking cloud classification changes. The page provides an overview of existing notifications, displaying their *Name*, *Type*, associated *Alert* type, *Description*, and *Action*. Users can navigate through the list using pagination controls and adjust the number of rows displayed per page. If no notifications are configured, the table remains empty. A button labeled **Add Notification** in the upper-right corner allows users to create new notification rules. ### Adding a Notification To add a new notification, users must first specify a *name* and *description*, and select a *notification type*. When choosing cloud classification changes, users can define the conditions by selecting the original classification (*cloud classification changes from*) and the new classification (*cloud classification changes to*) that will be used to trigger the notifications. Users can also choose the delivery method, including *E-mail*, *Splunk*, or *Syslog*, to ensure alerts are sent through the appropriate channels. - *E-mail* delivery method requires users to enter the recipients' *email addresses* and select the desired *notification frequency*. 
- *Splunk* delivery method requires users to enter the Splunk *protocol* (`http` or `https`), *host*, *port*, and *token*. - *Syslog* delivery method requires users to enter the Syslog *server*, *port*, *protocol* (`UDP` or `TCP`), *tag*, and *priority* level. ### Manage Profiles The Profile section allows users to manage their personal information and credentials. It includes fields for *First Name*, *Last Name*, and *Email Address*. Users can update their password by entering a new password, repeating the new password, and providing their current password for verification. These options ensure users can securely manage their account settings within the notification system. --- ## Spectra Detect Redundancy — Manager Clustering and Failover Spectra appliances are designed for full redundancy. If one component fails, another automatically takes over using the same configuration settings. This chapter describes how redundancy works for **Spectra Detect Manager**. ## Creating Manager Redundancy **Important: A proxy or load balancer is REQUIRED for Spectra Detect Manager to function properly in a redundant configuration.** Use the following endpoint to identify the active/primary Spectra Detect Manager: `GET /api/cluster/check/` Before clustering, add the active/primary manager (the one you will use to initiate redundancy) to the proxy or load balancer. After redundancy setup completes, update the proxy to include both managers (this is described in [step 7](#7-update-the-proxy-or-load-balancer)). ### 1. Configure Allowed Hosts As admin, go to [Administration > Spectra Detect Manager > General > Network Settings](../Admin/ManagerSettings.md#general). - Verify that the **Application URL** is set to the proxy IP or domain. - In **Allowed Hosts**, ensure the following are listed: 1. Current Detect Manager IP address and domain 2. Proxy IP and domain 3. Secondary IP and domain 4. localhost and 127.0.0.1 ![Allowed Hosts](./images/redundancy_1.png) - Click **Save** on the primary manager. 
- Repeat for the secondary manager, but **only update the Allowed Hosts section**. Do not modify the Application URL on the secondary manager. It will be automatically replaced during clustering. **Note:** Changing the Application URL updates the configuration on all connected appliances (Hub, Workers, and Analyze). ### 2. Create Redundant Cluster Go to **Administration > Redundancy**, then click **Create redundant cluster**. ![Create Redundant Cluster](./images/redundancy_2.png) ### 3. Establish Connection ![Establish Connection](./images/redundancy_3.png) - Fill out the required fields. - A VPN is automatically established between redundant managers. - The user on the secondary manager must be an admin. - Set **Failover Timeout (seconds)** between 3 and 600; 30 is recommended. - Usually, the machines in a redundant cluster share the same TLS certificate. However, if you need individual TLS certificates for each machine, you can *Disable TLS Certificate Sharing*. - Click **Next**. ### 4. Check Prerequisites ![Check Prerequisites](./images/redundancy_4.png) - The system validates if your environment supports redundancy. - When all checks pass, the message "All checks successfully passed" appears. - Click **Next** to continue. ### 5. Run the Configuration Process ![Run the Configuration Process](./images/redundancy_5.png) - Click **Start Configuration**, then confirm in the popup. - **Do not refresh or navigate away** during setup; leaving the page interrupts the process and it cannot be reopened. - The system automatically switches to the maintenance screen while redundancy is configured. - When setup completes, both managers reboot and return online automatically. Wait until both are reachable before proceeding. 
**Important: When initiating redundancy on a Spectra Detect Manager with Central logging enabled and a large volume of stored sample data, the initial sync to the secondary manager may take longer than 30 minutes.** If you refresh the screen after 30 minutes, you may see a "Configuration failed" status with a "Rollback Configuration" button, even though clustering is still in progress. Do not roll back while data is syncing. To verify sync progress, SSH to each SDM and run the `top` command. If the `rsync` process is using ~70% CPU or more, the sync is still in progress. Wait until the `rsync` process on both the primary and secondary is under ~10% CPU before refreshing the GUI. - When the "Configuration finished successfully" message appears, click **Finish**. ![Finish the Configuration Process](./images/redundancy_6.png) ### 6. Verify Status - Open the **Cluster Configuration** tab to see which manager is primary and which is secondary. - The secondary manager is read-only while redundancy is active. - The **Status** tab provides redundancy health and logs. - You can initiate a switchover or remove the cluster configuration here. - Check the [Manage and View the Status of Redundancy](#manage-and-view-the-status-of-redundancy) section for more details. ### 7. Update the Proxy or Load Balancer - After the redundancy setup is complete, update the proxy or load balancer to include both manager IPs. - Access the Spectra Detect GUI via the proxy or load balancer; it automatically directs users to the primary node. - When you later remove redundancy, follow the guidance in the [Post De-clustering Actions](#post-de-clustering-actions) section below to keep only one manager active. ## Manage and View the Status of Redundancy ### Redundancy Status Go to **Administration > Redundancy > Cluster Configuration**. ![Cluster Configuration](./images/redundancy_7.png) - The **Cluster Configuration** tab shows which manager is primary or secondary. 
- Click the **Status** tab for detailed health checks and logs. - Three log views are available: RL Cluster, RL Daemon, and Corosync. ![Cluster Status](./images/redundancy_8.png) ### Managing Switchovers ![Manual Switchover](./images/redundancy_9.png) 1. Go to **Administration > Redundancy > Cluster Configuration**. 2. Click **Manual Switchover** to make the secondary manager primary. 3. Confirm the action when prompted. ![Manual Switchover](./images/redundancy_10.png) 4. The system enters maintenance mode for several minutes. 5. When complete, the proxy or load balancer automatically directs traffic to the new primary. ## Removing Manager Redundancy 1. Go to **Administration > Redundancy > Cluster Configuration**. ![Remove Cluster Configuration](./images/redundancy_11.png) 2. Click **Remove Cluster Configuration**, then confirm. - If the Application URL was set to the proxy IP or domain, no changes are sent to the connected appliances. - The system enters maintenance mode for a few minutes. - Once complete, you will have two independent Spectra Detect Managers. Review [Post De-clustering Actions](#post-de-clustering-actions) after this step to retire the unused manager. ### If the Secondary Is Unavailable During De-clustering - You can still remove the cluster following the directions in Step 2 above. If the offline secondary manager was permanently removed or destroyed, no additional cleanup is required after Step 2. - If you have access to the server, you can manually run the command below to clean up the de-clustered Spectra Detect Manager that was offline during the initial de-cluster action. ```bash sudo /bin/cluster_manage destroy ``` If the secondary comes back online after de-clustering, it must be manually de-clustered. Contact [ReversingLabs Support](mailto:support@reversinglabs.com) for help. 
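After a switchover or de-clustering, the `GET /api/cluster/check/` endpoint mentioned in Creating Manager Redundancy can be scripted against to confirm which node is currently primary. The sketch below is illustrative only; the `role` field name in the JSON response is an assumption, so adjust it to the actual response schema:

```python
import json
from urllib.request import urlopen

def parse_role(payload: dict) -> str:
    # "role" is an ASSUMED field name; check the real API response.
    return payload.get("role", "unknown")

def manager_role(base_url: str, timeout: int = 5) -> str:
    """Query GET /api/cluster/check/ on a manager and return its role."""
    url = base_url.rstrip("/") + "/api/cluster/check/"
    with urlopen(url, timeout=timeout) as resp:
        return parse_role(json.load(resp))
```

Running such a check against each manager in turn (or against the proxy address) makes it easy to verify that exactly one node reports itself as primary before directing traffic to it.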
## Post De-clustering Actions **Important: After de-clustering, shut down or remove one of the Spectra Detect Managers.** All connected appliances continue communicating through the proxy or load balancer, which remains configured to route to the active manager. Remove the inactive or retired manager from the proxy or load balancer configuration to prevent connection attempts to a non-existent node. If you plan to re-establish redundancy with a new secondary manager, repeat the steps in [Creating Manager Redundancy](#creating-manager-redundancy). --- ## Spectra Detect YARA Sync — Ruleset Synchronization Across Appliances The YARA Sync page (**Administration ‣ YARA Synchronization**) allows users to easily track the status of YARA ruleset synchronization between connected appliances, and trigger a manual synchronization if rules are not up to date. The Manager stores all synchronized rules in a local database and becomes the single source of truth for all connected appliances. When *YARA ruleset synchronization* is enabled, the YARA Sync page displays a table of all appliances connected to the Manager and their YARA ruleset synchronization status. Any connected [Spectra Analyze](/SpectraAnalyze/) appliances must have YARA Synchronization enabled (**Administration ‣ Spectra Detect Manager ‣ General ‣ Synchronization**) to properly display the current status and synchronize rulesets. Appliances can show one of the following statuses: - InSync - OutOfSync - Error - Unavailable - PendingNew - Disabled - NoRules Workers poll the Manager for rule changes every minute. Spectra Analyze appliances push new rules to the Manager as soon as they are created, and pull new rules every 5 minutes. Appliances that are out of sync can be manually synchronized at any time by clicking the *Start YARA Sync* button in the far-right column of the table. 
Rulesets created on Spectra Analyze appliances before YARA synchronization was enabled will not synchronize to the Manager until the user changes their status or modifies them in any way. Rules present on the Manager, however, will synchronize to newly connected Spectra Analyze appliances regardless of when they were created. Apart from new rulesets, changes in existing rulesets will be synchronized as well. If a ruleset is disabled or deleted on one appliance, its status will be distributed to other appliances. In the case of Workers, disabled rulesets will be removed until re-enabled on another appliance. When enabled again, rulesets will be synchronized on the Worker as if they had been newly created. Since all rulesets have owners, the owners' user accounts will be mirrored to other connected appliances, but those users won’t be able to log into the mirrored instance until an administrator enables their account by assigning it a password. ## YARA Ruleset Restrictions - Naming restrictions: - YARA ruleset names must be between 3 and 48 characters. - The underscore ( _ ) should be used instead of spaces, and any other special characters should be avoided. Ruleset names should only use numbers (0-9) and a-z/A-Z letters. - Ruleset size restrictions: - A ruleset file should not be larger than 4 MB. - A ruleset file should not contain more than 5000 individual rules. - A ruleset larger than 1 MB (1048576 bytes) cannot be saved and run in the [Spectra Intelligence](/SpectraIntelligence/) cloud. - File size restrictions: - YARA rulesets on Spectra Analyze are not applied to files larger than 700 MB. Only rules that have been successfully compiled on Spectra Analyze can be synchronized. 
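The naming and size restrictions above can be checked before a ruleset is uploaded. The following Python sketch is purely illustrative (a hypothetical helper, not a ReversingLabs tool), encoding the limits exactly as listed:

```python
import re

# Limits taken from the restrictions listed above.
NAME_PATTERN = re.compile(r"^[0-9A-Za-z_]{3,48}$")
MAX_RULESET_BYTES = 4 * 1024 * 1024   # 4 MB per ruleset file
MAX_RULE_COUNT = 5000                 # rules per ruleset file
MAX_CLOUD_BYTES = 1048576             # 1 MB limit for the Spectra Intelligence cloud

def check_ruleset(name, size_bytes, rule_count, for_cloud=False):
    """Hypothetical pre-upload check; returns a list of violations."""
    problems = []
    if not NAME_PATTERN.match(name):
        problems.append("name must be 3-48 chars of letters, digits, underscore")
    if size_bytes > MAX_RULESET_BYTES:
        problems.append("ruleset file exceeds 4 MB")
    if rule_count > MAX_RULE_COUNT:
        problems.append("ruleset contains more than 5000 rules")
    if for_cloud and size_bytes > MAX_CLOUD_BYTES:
        problems.append("ruleset exceeds 1 MB Spectra Intelligence limit")
    return problems
```

For example, a 2 MB ruleset with a valid name passes the appliance checks but fails the `for_cloud` check, matching the 1 MB cloud restriction above.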
--- ## Spectra Detect Configuration — Appliances, Integrations, YARA, and Settings --- ## Spectra Detect AWS EKS Config Reference — Secrets and ConfigMap Values ## Processing ### Configmap configuration secrets | Secret | Type | Description | Used in deployments (Pods) | | :---- | :---- | :---- | :---- | | \-secret-worker-api-token | Optional | Token secret which contains the token that is used to protect all endpoints with /api/ prefix, e.g. file upload. | Auth | | \-secret-worker-api-task-token | Optional | Token secret which contains the token that is used to protect /api/tiscale/v1/task endpoints. If left empty, the mentioned API is protected by \-secret-worker-api-token. | Auth | | \-secret-worker-cloud | Required when related feature is enabled | Basic authentication secret which contains username and password for Spectra Intelligence authentication. Required when Spectra Intelligence is enabled (configuration.cloud.enabled). | Processor, Retry Processor, Preprocessor, Postprocessor, Receiver | | \-secret-worker-cloud-proxy | Required when related feature is enabled | Basic authentication secret which contains username and password for Spectra Intelligence Proxy authentication. Required when Spectra Intelligence Proxy is enabled (configuration.cloud.proxy.enabled). | Processor, Retry Processor, Preprocessor, Postprocessor, Receiver, Cloud Cache | | \-secret-worker-aws | Required when related feature is enabled | Basic authentication secret which contains username and password for AWS authentication. Required if any type of S3 storage (File, SNS, Report, Unpacked) is enabled (configuration.s3.enabled, configuration.sns.enabled, configuration.reportS3.enabled, configuration.unpackedS3.enabled). | Postprocessor | | \-secret-worker-azure | Required when related feature is enabled | Basic authentication secret which contains username and password for Azure authentication. 
Required if any type of ADL storage (File, Report, Unpacked) is enabled (configuration.adl.enabled, configuration.reportAdl.enabled, configuration.unpackedAdl.enabled). | Postprocessor | | \-secret-worker-ms-graph | Required when related feature is enabled | Basic authentication secret which contains username and password for Microsoft Cloud Storage authentication. Required if any type of Microsoft Cloud storage (File, Report, Unpacked) is enabled (configuration.msGraph.enabled, configuration.reportMsGraph.enabled, configuration.unpackedMsGraph.enabled). | Postprocessor | | \-secret-unpacked-s3 | Optional | Secret which contains only the password. Used for encryption of the archive file. Relevant only when the configuration.unpackedS3.archiveUnpacked option is set to true. | Postprocessor | | \-secret-report-s3 | Optional | Secret which contains only the password. Used for encryption of the archive file. Relevant only when the configuration.reportS3.archiveSplitReport option is set to true. | Postprocessor | | \-secret-unpacked-adl | Optional | Secret which contains only the password. Used for encryption of the archive file. Relevant only when the configuration.unpackedAdl.archiveUnpacked option is set to true. | Postprocessor | | \-secret-report-adl | Optional | Secret which contains only the password. Used for encryption of the archive file. Relevant only when the configuration.reportAdl.archiveSplitReport option is set to true. | Postprocessor | | \-secret-unpacked-ms-graph | Optional | Secret which contains only the password. Used for encryption of the archive file. Relevant only when the configuration.unpackedMsGraph.archiveUnpacked option is set to true. | Postprocessor | | \-secret-report-ms-graph | Optional | Secret which contains only the password. Used for encryption of the archive file. Relevant only when the configuration.reportMsGraph.archiveSplitReport option is set to true. 
| Postprocessor | | \-secret-splunk | Optional | Token secret which contains the token for Splunk authentication. Relevant only if Splunk Integration is enabled (configuration.splunk.enabled). | Postprocessor | | \-secret-archive-zip | Optional | Secret which contains only the password. Relevant only when the configuration.archive.fileWrapper value is set to "zip" or "mzip". | Postprocessor | | \-secret-sa-integration-token | Required when related feature is enabled | Token secret which contains the token used in authentication on Spectra Analyze when Spectra Analyze Integration is enabled (configuration.spectraAnalyzeIntegration.enabled). This token should be created in Spectra Analyze. | Postprocessor | ### Configmap Configuration values for Worker pods | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | appliance.configMode | string | `"STANDARD"` | Configuration mode of the appliance. Allowed values: CONFIGMAP (Configuration is provided with configmap), STANDARD (configuration is provided over UI). | | configuration.a1000 | object | \- | Integration with Spectra Analyze appliance. | | configuration.a1000.host | string | `""` | The hostname or IP address of the A1000 appliance associated with the Worker. | | configuration.adl | object | \- | Settings for storing files in an Azure Data Lake container. | | configuration.adl.container | string | `""` | The hostname or IP address of the Azure Data Lake container that will be used for storage. Required when storing files in ADL is enabled. | | configuration.adl.enabled | bool | `false` | Enable or disable the storage of processed files. | | configuration.adl.folder | string | `""` | Specify the name of the folder on the container where files will be stored. | | configuration.apiServer | object | \- | Configures a custom Worker IP address which is included in the response when uploading a file to the Worker for processing. 
| | configuration.apiServer.host | string | `""` | Configures the hostname or IP address of the Worker. Only necessary if the default IP address or network interface is incorrect. | | configuration.archive | object | \- | After processing, files can be zipped before external storage. Available only for S3 and Azure. | | configuration.archive.fileWrapper | string | `""` | Specify whether the files should be compressed as a ZIP archive before uploading to external storage. Supported values are: zip, mzip. If this parameter is left blank, files will be uploaded in their original format. | | configuration.archive.zipCompress | int | `0` | ZIP compression level to use when storing files in a ZIP file. Allowed range: 0 (no compression) to 9 (maximum compression). | | configuration.archive.zipMaxfiles | int | `0` | Maximum allowed number of files that can be stored in one ZIP archive. Allowed range: 1-65535. 0 represents unlimited. | | configuration.authentication | object | \- | Authentication settings for Detect Worker. | | configuration.authentication.enabled | bool | `false` | Enable/disable authentication on Detect Worker ingress APIs. | | configuration.authentication.externalAuthUrl | string | `""` | If set, an external/custom authentication service will be used for authentication; otherwise, a simple Token service is deployed which protects paths with tokens defined in the secrets. | | configuration.aws | object | \- | Configuration of integration with AWS or AWS-compatible storage to be used for SNS, and for uploading files and analysis reports to S3. | | configuration.aws.caPath | string | `""` | Path on the file system pointing to the certificate of a custom (self-hosted) S3 server. | | configuration.aws.endpointUrl | string | `""` | Only required in non-AWS setups in order to store files to an S3-compatible server. When this parameter is left blank, the default is `https://aws.amazonaws.com`. Supported pattern(s): `https?://.+`. 
| | configuration.aws.maxReattempts | int | `5` | Maximum number of retries when saving a report to an S3-compatible server. | | configuration.aws.payloadSigningEnabled | bool | `false` | Specifies whether to include an SHA-256 checksum with Amazon Signature Version 4 payloads. | | configuration.aws.region | string | `"us-east-1"` | Specify the correct AWS geographical region where the S3 bucket is located. Required parameter, ignored for non-AWS setups. | | configuration.aws.serverSideEncryption | string | `""` | Specify the encryption algorithm used on the target S3 bucket (e.g. aws:kms or AES256). | | configuration.aws.sslVerify | bool | `false` | Enable/disable SSL verification. | | configuration.awsRole | object | \- | Configures the AWS IAM roles used to access S3 buckets without sharing secret keys. The IAM role which will be used to obtain temporary tokens has to be created in the AWS console. | | configuration.awsRole.enableArn | bool | `false` | Enables or disables this entire feature. | | configuration.awsRole.externalRoleId | string | `""` | The external ID of the role that will be assumed. This can be any string. Usually, it’s an ID provided by the entity which uses (but doesn’t own) an S3 bucket. The owner of that bucket takes that external ID and builds an ARN with it. | | configuration.awsRole.refreshBuffer | int | `5` | Number of seconds to fetch a new ARN token before the token timeout is reached. | | configuration.awsRole.roleArn | string | `""` | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which then allows it to save to S3 buckets without a secret access key. | | configuration.awsRole.roleSessionName | string | `""` | Name of the session visible in AWS logs. Can be any string. | | configuration.awsRole.tokenDuration | int | `900` | How long before the authentication token expires and is refreshed. The minimum value is 900 seconds. 
| | configuration.azure | object | \- | Configures integration with Azure Data Lake Gen2 for the purpose of storing processed files in Azure Data Lake containers. | | configuration.azure.endpointSuffix | string | `"core.windows.net"` | Specify the suffix for the address of your Azure Data Lake container. | | configuration.callback | object | \- | Settings for automatically sending file analysis reports via POST request. | | configuration.callback.advancedFilterEnabled | bool | `false` | Enable/disable the advanced filter. | | configuration.callback.advancedFilterName | string | `""` | Name of the advanced filter. | | configuration.callback.caPath | string | `""` | If the url parameter is configured to use HTTPS, this parameter can be used to set the path to the certificate file. This automatically enables SSL verification. If this parameter is left blank or not configured, SSL verification will be disabled, and the certificate will not be validated. | | configuration.callback.enabled | bool | `false` | Enable/disable connection. | | configuration.callback.maliciousOnly | bool | `false` | When set, the report will only contain malicious and suspicious children. | | configuration.callback.reportType | string | `"medium"` | Specifies which report\_type is returned. By default, or when empty, only the medium (summary) report is provided in the callback response. Set to extended\_small, small, medium or large to view results of filtering the full report. | | configuration.callback.splitReport | bool | `false` | By default, reports contain information on parent files and all extracted children files. If set to true, reports for extracted files will be separated from the full report and saved as standalone files. If any user-defined data was appended to the analyzed parent file, it will be included in every split child report. 
| | configuration.callback.sslVerify | bool | `false` | Enable/disable SSL verification. | | configuration.callback.timeout | int | `5` | Specify the number of seconds to wait before the POST request times out. In case of failure, the Worker will retry the request up to six times, increasing the waiting time between requests after the second retry has failed. With the default timeout set, the total possible waiting time before a request finally fails is 159 seconds. | | configuration.callback.topContainerOnly | bool | `false` | If set to true, the reports will only contain metadata for the top container. Reports for unpacked files will not be generated. | | configuration.callback.url | string | `""` | Specify the full URL that will be used to send the callback POST request. Both HTTP and HTTPS are supported. If this parameter is left blank, reports will not be sent, and the callback feature will be disabled. Supported pattern(s): `https?://.+` | | configuration.callback.view | string | `""` | Specifies whether a custom report view should be applied to the report. | | configuration.cef | object | \- | Configures Common Event Format (CEF) settings. CEF is an extensible, text-based logging and auditing format that uses a standard header and a variable extension, formatted as key-value pairs. | | configuration.cef.cefMsgHashType | string | `"md5"` | Specify the type of hash that will be included in CEF messages. Supported values are: md5, sha1, sha256. | | configuration.cef.enableCefMsg | bool | `false` | Enable or disable sending CEF messages to syslog. Defaults to `false` to avoid flooding. | | configuration.classify | object | \- | Configure settings for Worker analysis and classification of files using the Spectra Core static analysis engine. | | configuration.classify.certificates | bool | `true` | Enable checking whether a file’s certificate passes certificate validation, in addition to checking certificate whitelists and blacklists. 
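The callback parameters above combine as in this sketch; the endpoint URL and certificate path are hypothetical examples:

```yaml
configuration:
  callback:
    enabled: true
    url: "https://siem.example.internal/reports"  # hypothetical receiver; both HTTP and HTTPS are accepted
    caPath: "/etc/ssl/certs/siem-ca.pem"          # example path; setting it automatically enables SSL verification
    reportType: "medium"      # or extended_small, small, large
    maliciousOnly: false      # true limits reports to malicious/suspicious children
    timeout: 5                # seconds per POST attempt; the Worker retries up to six times with backoff
```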
| | configuration.classify.documents | bool | `true` | Enable document format threat detection. | | configuration.classify.emails | bool | `true` | Enable detection of phishing and other email threats. | | configuration.classify.hyperlinks | bool | `true` | Enable embedded hyperlinks detection. | | configuration.classify.ignoreAdware | bool | `false` | When set to true, classification results that match adware will be ignored. | | configuration.classify.ignoreHacktool | bool | `false` | When set to true, classification results that match hacktool will be ignored. | | configuration.classify.ignorePacker | bool | `false` | When set to true, classification results that match packer will be ignored. | | configuration.classify.ignoreProtestware | bool | `false` | When set to true, classification results that match protestware will be ignored. | | configuration.classify.ignoreRiskware | bool | `false` | When set to true, classification results that match riskware will be ignored. | | configuration.classify.ignoreSpam | bool | `false` | When set to true, classification results that match spam will be ignored. | | configuration.classify.ignoreSpyware | bool | `false` | When set to true, classification results that match spyware will be ignored. | | configuration.classify.images | bool | `true` | When true, the heuristic image classifier for supported file formats is used. | | configuration.classify.pecoff | bool | `true` | When true, the heuristic Windows executable classifier for supported PE file formats is used. | | configuration.cleanup | object | \- | Configures how often the Worker file system is cleaned up. | | configuration.cleanup.fileAgeLimit | int | `1440` | Time before an unprocessed file present on the appliance is deleted, in minutes. | | configuration.cleanup.taskAgeLimit | int | `90` | Time before analysis reports and records of processed tasks are deleted, in minutes. 
| | configuration.cleanup.taskUnprocessedLimit | int | `1440` | Time before an incomplete processing task is canceled, in minutes. | | configuration.cloud | object | \- | Configures integration with the Spectra Intelligence service or a T1000 instance to receive additional classification information. | | configuration.cloud.enabled | bool | `false` | Enable/disable connection. | | configuration.cloud.proxy | object | \- | Configure an optional proxy connection. | | configuration.cloud.proxy.enabled | bool | `false` | Enable/disable proxy server. | | configuration.cloud.proxy.port | int | `8080` | Specify the TCP port number if using an HTTP proxy. Allowed range(s): 1 … 65535\. Required only if proxy is used. | | configuration.cloud.proxy.server | string | `""` | Proxy hostname or IP address for routing requests from the appliance to Spectra Intelligence. Required only if proxy is used. | | configuration.cloud.server | string | `"https://appliance-api.reversinglabs.com"` | Hostname or IP address of the Spectra Intelligence server. Required if Spectra Intelligence integration is enabled. Format: `https://`. | | configuration.cloud.timeout | int | `6` | Specify the number of seconds to wait when connecting to Spectra Intelligence before terminating the connection request. | | configuration.cloudAutomation | object | \- | Configures the Worker to automatically submit files to Spectra Intelligence for antivirus scanning (in addition to local static analysis and remote reputation lookup (from previous antivirus scans)). | | configuration.cloudAutomation.dataChangeSubscribe | bool | `false` | Subscribe to the Spectra Intelligence data change notification mechanism. | | configuration.cloudAutomation.spexUpload | object | \- | Scanning settings. | | configuration.cloudAutomation.spexUpload.enabled | bool | `false` | Enable/disable this feature. 
| | configuration.cloudAutomation.spexUpload.rescanEnabled | bool | `true` | Enable/disable rescan of files upon submission based on the configured interval to include the latest AV results in the reports. | | configuration.cloudAutomation.spexUpload.rescanThresholdInDays | int | `3` | Set the interval in days for triggering an AV rescan. If the last scan is older than the specified value, a rescan will be initiated. A value of 0 means files will be rescanned with each submission. | | configuration.cloudAutomation.spexUpload.scanUnpackedFiles | bool | `false` | Enable/disable sending unpacked files to Deep Cloud Analysis for scanning. Consumes roughly double the processing resources compared to standard analysis. | | configuration.cloudAutomation.waitForAvScansTimeoutInMinutes | int | `240` | Sets the maximum wait time (in minutes) for Deep Cloud Analysis to complete. If the timeout is reached, the report will be generated without the latest AV results. | | configuration.cloudAutomation.waitForAvScansToFinish | bool | `false` | If set to true, delays report generation until Deep Cloud Analysis completes, ensuring the latest AV results are included. | | configuration.cloudCache.cacheMaxSizePercentage | float | `6.25` | Maximum cache size expressed as a percentage of the total allocated RAM on the Worker. Allowed range: 5 \- 15\. | | configuration.cloudCache.cleanupWindow | int | `10` | How often to run the cache cleanup process, in minutes. It is advisable for this value to be lower than, or equal to, the TTL value. Allowed range: 5 \- 60\. | | configuration.cloudCache.enabled | bool | `true` | Enable or disable the caching feature. | | configuration.cloudCache.maxIdleUpstreamConnections | int | `50` | The maximum number of idle upstream connections. Allowed range: 10 \- 50\. | | configuration.cloudCache.ttl | int | `240` | Time to live for cached records, in minutes. Allowed range: 1 \- 7200\. 
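The cache settings above fit together as follows; values shown are the documented defaults, reproduced here only to show the nesting:

```yaml
configuration:
  cloudCache:
    enabled: true
    cacheMaxSizePercentage: 6.25    # percentage of Worker RAM; allowed range 5-15
    ttl: 240                        # minutes a cached record lives; allowed range 1-7200
    cleanupWindow: 10               # cleanup interval in minutes; keep it lower than or equal to ttl
    maxIdleUpstreamConnections: 50  # allowed range 10-50
```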
| | configuration.general.maxUploadSizeMb | int | `2048` | The largest file (in MB) that Worker will accept and start processing. Ignored if Spectra Intelligence is connected and file upload limits are set there. | | configuration.general.postprocessingCheckThresholdMins | int | `720` | How often the postprocessing service will be checked for timeouts. If any issues are detected, the process will be restarted. | | configuration.general.tsWorkerCheckThresholdMins | int | `720` | How often the processing service will be checked for timeouts. If any issues are detected, the process will be restarted. | | configuration.general.uploadSizeLimitEnabled | bool | `false` | Whether or not the upload size filter is active. Ignored if Spectra Intelligence is connected and file upload limits are set there. | | configuration.hashes | object | \- | Spectra Core calculates file hashes during analysis and includes them in the analysis report. The following options configure which additional hash types should be calculated and included in the Worker report. SHA1 and SHA256 are always included and therefore aren’t configurable. Selecting additional hash types (especially SHA384 and SHA512) may slow report generation. | | configuration.hashes.enableCrc32 | bool | `false` | Include CRC32 hashes in reports. | | configuration.hashes.enableMd5 | bool | `true` | Include MD5 hashes in reports. | | configuration.hashes.enableSha384 | bool | `false` | Include SHA384 hashes in reports. | | configuration.hashes.enableSha512 | bool | `false` | Include SHA512 hashes in reports. | | configuration.hashes.enableSsdeep | bool | `false` | Include SSDEEP hashes in reports. | | configuration.hashes.enableTlsh | bool | `false` | Include TLSH hashes in reports. | | configuration.health | object | \- | Configures the system health check. 
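As a quick reference, the optional hash toggles above can be set like this; which hashes to enable depends entirely on your downstream tooling:

```yaml
configuration:
  hashes:
    enableMd5: true      # SHA1 and SHA256 are always included and are not configurable
    enableCrc32: false
    enableSsdeep: true   # example: enable a similarity hash; extra hash types may slow report generation
    enableTlsh: false
    enableSha384: false  # SHA384/SHA512 are the most expensive to add
    enableSha512: false
```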
| | configuration.health.disk\_high | int | `95` | Threshold for high disk usage | | configuration.health.enabled | bool | `true` | Enable/disable system health check. | | configuration.health.queue\_high | int | `2000` | Specify the maximum number of items allowed in the queue. If it exceeds the configured value, the appliance will start rejecting traffic. Allowed range(s): 10+ | | configuration.logging | object | \- | Configures the minimum severity at which events will be logged or sent to a remote syslog server. Severity can be: INFO, WARNING, or ERROR. | | configuration.logging.tiscaleLogLevel | string | `"INFO"` | Events below this level will not be saved to logs (/var/log/messages and /var/log/tiscale/\*.log). | | configuration.msGraph.enabled | bool | `false` | Turns the Microsoft Cloud Storage file integration on or off. | | configuration.msGraph.folder | string | `""` | Folder where samples will be stored in Microsoft Cloud Storage. | | configuration.msGraphGeneral | object | \- | Configures the general options for the Microsoft Cloud Storage integration. | | configuration.msGraphGeneral.customDomain | string | `""` | Application’s custom domain configured in the Azure portal. | | configuration.msGraphGeneral.siteHostname | string | `""` | Used only if `storageType` is set to SharePoint. This is the SharePoint hostname. | | configuration.msGraphGeneral.siteRelativePath | string | `""` | SharePoint Online site relative path. Only used when `storageType` is set to SharePoint. | | configuration.msGraphGeneral.storageType | string | `"onedrive"` | Specifies the storage type. Supported values are: onedrive or sharepoint. | | configuration.msGraphGeneral.username | string | `""` | Used only if `storageType` is set to OneDrive. Specifies which user’s drive will be used. | | configuration.processing | object | \- | Configure the Worker file processing capabilities to improve performance and load balancing. 
| | configuration.processing.cacheEnabled | bool | `false` | Enable/disable caching. When enabled, Spectra Core can skip reprocessing the same files (duplicates) if uploaded consecutively in a short period. | | configuration.processing.cacheTimeToLive | int | `0` | If file processing caching is enabled, specify how long (in seconds) the analysis reports should be preserved in the cache before they expire. A value of 0 uses the default. Default: 600\. Maximum: 86400\. | | configuration.processing.depth | int | `0` | Specifies how "deep" a file is unpacked. By default, when set to 0, Workers will unpack files recursively until no more files can be unpacked. Setting a value greater than 0 limits the depth of recursion, which can speed up analyses but provide less detail. | | configuration.processing.largefileThreshold | int | `100` | If advanced mode is enabled, files larger than this threshold (in MB) will be processed individually, one by one. This parameter is ignored in standard mode. | | configuration.processing.mode | int | `2` | Configures the Worker processing mode to improve load balancing. Supported modes are standard (1) and advanced (2). | | configuration.processing.timeout | int | `28800` | Specifies how many seconds the Worker should wait for a file to process before terminating the task. Default: 28800\. Maximum: 259200\. | | configuration.propagation | object | \- | Configure advanced classification propagation options supported by the Spectra Core static analysis engine. When Spectra Core classifies files, the classification of a child file can be applied to the parent file. | | configuration.propagation.enabled | bool | `true` | Enable/disable the classification propagation feature. When propagation is enabled, files can be classified based on the content extracted from them. This means that files containing a malicious or suspicious file will also be considered malicious or suspicious. 
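The processing options described above combine as follows; the values reproduce the documented defaults except for the cache settings, which are shown enabled for illustration:

```yaml
configuration:
  processing:
    mode: 2                  # 1 = standard, 2 = advanced
    depth: 0                 # 0 = unpack recursively until nothing more can be unpacked
    largefileThreshold: 100  # MB; in advanced mode, larger files are processed one by one
    timeout: 28800           # seconds per file before the task is terminated (max 259200)
    cacheEnabled: true       # skip reprocessing duplicates uploaded in a short period
    cacheTimeToLive: 600     # seconds cached reports are kept; 0 uses the default (600), max 86400
```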
| | configuration.propagation.goodwareOverridesEnabled | bool | `true` | Enable/disable goodware overrides. When enabled, any files extracted from a parent file and whitelisted by certificate, source or user override can no longer be classified as malicious or suspicious. This is an advanced goodware whitelisting technique that can be used to reduce the amount of false positive detections. | | configuration.propagation.goodwareOverridesFactor | int | `1` | When goodware overrides are enabled, this parameter must be configured to determine the factor to which overrides will be applied. Supported values are 0 to 5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Overrides will apply to files with a trust factor equal to or lower than the value configured here. | | configuration.report | object | \- | Configure the contents of the Spectra Detect file analysis report. | | configuration.report.firstReportOnly | bool | `false` | If disabled, the reports for samples with child files will include relationships for all descendant files. Enabling this setting will only include relationship metadata for the root parent file to reduce redundancy. | | configuration.report.includeStrings | bool | `false` | When enabled, strings are included in the file analysis report. Spectra Core can extract strings from binaries. This can be useful but may result in extensive metadata. To reduce noise, the types of included strings can be customized in the strings section. | | configuration.report.networkReputation | bool | `false` | If enabled, analysis reports include a top-level `network_reputation` object with reputation information for every extracted network resource. For this feature, Spectra Intelligence must be configured on the Worker, and the `ticore.processingMode` option must be set to "best". | | configuration.report.relationships | bool | `false` | Includes sample relationship metadata in the file analysis report. 
When enabled, the relationships section lists the hashes of files found within the given file. | | configuration.reportAdl | object | \- | Settings to configure how reports saved to Azure Data Lake are formatted. | | configuration.reportAdl.archiveSplitReport | bool | `true` | Enable sending a single, smaller archive of split report files to ADL instead of each file. Relevant only when the 'Split report' option is used. | | configuration.reportAdl.container | string | `""` | Container where reports will be stored. Required when this feature is enabled. | | configuration.reportAdl.enabled | bool | `false` | Enable/disable storing file processing reports to ADL. | | configuration.reportAdl.filenameTimestampFormat | string | `""` | File naming pattern for the report itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. | | configuration.reportAdl.folder | string | `""` | Specify the name of a folder where analysis reports will be stored. If the folder name is not provided, files are stored into the root of the configured container. | | configuration.reportAdl.folderOption | string | `"date_based"` | Select the naming pattern that will be used when automatically creating subfolders for storing analysis reports. Supported options are: date\_based (YYYY/mm/dd/HH), datetime\_based (YYYY/mm/dd/HH/MM/SS), and sha1\_based (using the first 4 characters of the file hash). | | configuration.reportAdl.maliciousOnly | bool | `false` | When set, the report will only contain malicious and suspicious children. | | configuration.reportAdl.reportType | string | `"large"` | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. 
Report types are stored in the /etc/ts-report/report-types directory. | | configuration.reportAdl.splitReport | bool | `false` | By default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files. | | configuration.reportAdl.timestampEnabled | bool | `true` | Enable/disable appending a timestamp to the report name. | | configuration.reportAdl.topContainerOnly | bool | `false` | When enabled, the file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated. | | configuration.reportAdl.view | string | `""` | Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker. | | configuration.reportApi | object | \- | Configures the settings applied to the file analysis report fetched using the GET endpoint. | | configuration.reportApi.maliciousOnly | bool | `false` | Report contains only malicious and suspicious children. | | configuration.reportApi.reportType | string | `"large"` | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. | | configuration.reportApi.topContainerOnly | bool | `false` | When enabled, the file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated. 
| | configuration.reportApi.view | string | `""` | Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker. | | configuration.reportMsGraph | object | \- | Settings to configure how reports saved to OneDrive or SharePoint are formatted. | | configuration.reportMsGraph.archiveSplitReport | bool | `true` | Enable sending a single, smaller archive of split report files to Microsoft Cloud Storage instead of each file. Relevant only when the "Split Report" option is used. | | configuration.reportMsGraph.enabled | bool | `false` | Enable/disable storing file processing reports. | | configuration.reportMsGraph.filenameTimestampFormat | string | `""` | This refers to the naming of the report file itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. | | configuration.reportMsGraph.folder | string | `""` | Folder where report files will be stored on the Microsoft Cloud Storage. If the folder name is not provided, files are stored into the root of the configured container. | | configuration.reportMsGraph.folderOption | string | `"date_based"` | Select the naming pattern that will be used when automatically creating subfolders for storing analysis reports. Supported options are: date\_based (YYYY/mm/dd/HH), datetime\_based (YYYY/mm/dd/HH/MM/SS), and sha1\_based (using the first 4 characters of the file hash). | | configuration.reportMsGraph.maliciousOnly | bool | `false` | When set, the report will only contain malicious and suspicious children. 
| | configuration.reportMsGraph.reportType | string | `"large"` | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. | | configuration.reportMsGraph.splitReport | bool | `false` | By default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files. | | configuration.reportMsGraph.topContainerOnly | bool | `false` | When enabled, the file analysis report will only include metadata for the top container, and subreports for unpacked files will not be generated. | | configuration.reportMsGraph.view | string | `""` | Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker. | | configuration.reportS3 | object | \- | Settings to configure how reports saved to S3 buckets are formatted. | | configuration.reportS3.advancedFilterEnabled | bool | `false` | Enable/disable usage of the advanced filter. | | configuration.reportS3.advancedFilterName | string | `""` | Name of the advanced filter. | | configuration.reportS3.archiveSplitReport | bool | `true` | Enable sending a single, smaller archive of split report files to S3 instead of each file. Relevant only when the 'Split report' option is used. | | configuration.reportS3.bucketName | string | `""` | Name of the S3 bucket where processed files will be stored. Required when this feature is enabled. 
| | configuration.reportS3.enabled | bool | `false` | Enable/disable storing file processing reports to S3. | | configuration.reportS3.filenameTimestampFormat | string | `""` | This refers to the naming of the report file itself. A timestamp is appended to the SHA1 hash of the file. The timestamp format must follow the strftime specification and be enclosed in quotation marks. If not specified, the ISO 8601 format is used. | | configuration.reportS3.folder | string | `""` | Folder where report files will be stored in the given S3 bucket. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers and special characters: "\!", "-", "\_", ".", "\*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. | | configuration.reportS3.folderOption | string | `"date_based"` | Select the naming pattern used when automatically creating subfolders for storing analysis reports. Supported options are: date\_based (YYYY/mm/dd/HH), datetime\_based (YYYY/mm/dd/HH/MM/SS), and sha1\_based (using the first 4 characters of the file hash). | | configuration.reportS3.maliciousOnly | bool | `false` | When set, the report will only contain malicious and suspicious children. | | configuration.reportS3.reportType | string | `"large"` | Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. In other words, fields can be included or excluded as required. Report types are stored in the /etc/ts-report/report-types directory. | | configuration.reportS3.splitReport | bool | `false` | By default, reports contain information on parent files and all extracted children files. When this option is enabled, analysis reports for extracted files are separated from their parent file report, and saved as individual report files. 
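Tying the `reportS3` options together, a sketch of report storage to S3; the bucket and folder names are hypothetical:

```yaml
configuration:
  reportS3:
    enabled: true
    bucketName: "detect-reports"   # hypothetical bucket name; required when enabled
    folder: "worker-01"            # optional; must not start/end with a slash or contain "//"
    folderOption: "date_based"     # or datetime_based, sha1_based
    reportType: "large"
    splitReport: false             # true saves child-file reports as standalone files
    maliciousOnly: false           # true limits reports to malicious/suspicious children
    timestampEnabled: true         # append a timestamp to the report name
```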
| | configuration.reportS3.timestampEnabled | bool | `true` | Enable/disable appending a timestamp to the report name. | | configuration.reportS3.topContainerOnly | bool | `false` | When enabled, the file analysis report will only include metadata for the top container and subreports for unpacked files will not be generated. | | configuration.reportS3.view | string | `""` | Apply a view for transforming report data to the “large” report type to ensure maximum compatibility. Several existing views are also available as report types, which should be used as a view substitute due to performance gains. Custom views can be defined by placing the scripts in the “/usr/libexec/ts-report-views.d” directory on Spectra Detect Worker. | | configuration.s3 | object | \- | Settings for storing a copy of all files uploaded for analysis on Worker to an S3 or a third-party, S3-compatible server. | | configuration.s3.advancedFilterEnabled | bool | `false` | Enable/disable usage of the advanced filter. | | configuration.s3.advancedFilterName | string | `""` | Name of the advanced filter. | | configuration.s3.bucketName | string | `""` | Name of the S3 bucket where processed files will be stored. Required when this feature is enabled. | | configuration.s3.enabled | bool | `false` | Enable/disable storing processed files on S3. | | configuration.s3.folder | string | `""` | Specify the name of a folder where analyzed files will be stored. If the folder name is not provided, files are stored into the root of the configured bucket. | | configuration.s3.storeMetadata | bool | `true` | When true, analysis metadata will be stored to the uploaded S3 object. | | configuration.scaling | object | \- | Configures the number of concurrent processes and the number of files analyzed concurrently. Parameters in this section can be used to optimize the file processing performance on Worker. | | configuration.scaling.postprocessing | int | `1` | Specify how many post-processing instances to run. 
Post-processing instances will then modify and save reports or upload processed files to external storage. Increasing this value can increase throughput for servers with extra available cores. Maximum: 256\. | | configuration.scaling.preprocessingUnpacker | int | `1` | Specify how many copies of Spectra Core are used to unpack samples for Deep Cloud Analysis. This setting only has effect if Deep Cloud Analysis is enabled with Scan Unpacked Files capability. | | configuration.scaling.processing | int | `1` | Specify how many copies of Spectra Core engine instances to run. Each instance starts threads to process files. Maximum: 256\. | | configuration.sns | object | \- | Configures settings for publishing notifications about file processing status and links the reports to an Amazon SNS (Simple Notification Service) topic. | | configuration.sns.enabled | bool | `false` | Enable/disable publishing notifications to Amazon SNS. | | configuration.sns.topic | string | `""` | Specify the SNS topic ARN that the notifications should be published to. Prerequisite: the AWS account in the AWS settings must be given permission to publish to this topic. Required when this feature is enabled. | | configuration.spectraAnalyzeIntegration | object | \- | Configuration settings to upload processed samples to configured Spectra Analyze. | | configuration.spectraAnalyzeIntegration.address | string | `""` | Spectra Analyze address. Required when this feature is enabled. Has to be in the following format: https://\. | | configuration.spectraAnalyzeIntegration.advancedFilterEnabled | bool | `true` | Enable/disable the advanced filter. | | configuration.spectraAnalyzeIntegration.advancedFilterName | string | `"default_filter"` | Name of the advanced filter. | | configuration.spectraAnalyzeIntegration.enabled | bool | `false` | Enable/disable integration with Spectra Analyze. 
| | configuration.splunk | object | \- | Configures integration with Splunk, a logging server that can receive Spectra Detect file analysis reports. | | configuration.splunk.caPath | string | `""` | Path to the certificate. | | configuration.splunk.chunkSizeMb | int | `0` | The maximum size (MB) of a single request sent to Splunk. If an analysis report exceeds this size, it will be split into multiple parts. The report is split into its subreports (for child files). A request can contain one or multiple subreports, as long as its total size doesn’t exceed this limit. The report is never split by size alone \- instead, complete subreports are always preserved and sent to Splunk. Default: 0 (disabled) | | configuration.splunk.enabled | bool | `false` | Enable/disable Splunk integration. | | configuration.splunk.host | string | `""` | Specify the hostname or IP address of the Splunk server that the Worker appliance should connect to. | | configuration.splunk.https | bool | `true` | If set to true, HTTPS will be used for sending information to Splunk. If set to false, HTTP is used. | | configuration.splunk.port | int | `8088` | Specify the TCP port of the Splunk server’s HTTP Event Collector. | | configuration.splunk.reportType | string | `"large"` | Specifies which report\_type is sent to Splunk. Set to small, medium or large to view results of filtering the full report. | | configuration.splunk.sslVerify | bool | `false` | If HTTPS is enabled, setting this to true will enable certificate verification. | | configuration.splunk.timeout | int | `5` | Specify how many seconds to wait for a response from the Splunk server before the request fails. If the request fails, the report will not be uploaded to the Splunk server, and an error will be logged. The timeout value must be greater than or equal to 1, and not greater than 999\. 
| | configuration.splunk.topContainerOnly | bool | `false` | Whether or not Splunk should receive the report for the top (parent) file only. If set to true, no subreports will be sent. | | configuration.splunk.view | string | `""` | Specifies whether a custom Report View should be applied to the file analysis report and returned in the response. | | configuration.strings | object | \- | Configure the output of strings extracted from files during Spectra Core static analysis. | | configuration.strings.enableStringExtraction | bool | `false` | If set to true, user-provided criteria for string extraction will be used. | | configuration.strings.maxLength | int | `32768` | Maximum number of characters in strings. | | configuration.strings.minLength | int | `4` | Minimum number of characters in strings. Strings shorter than this value are not extracted. | | configuration.strings.unicodePrintable | bool | `false` | Specify whether strings are Unicode printable or not. | | configuration.strings.utf16be | bool | `true` | Allow/disallow extracting UTF-16BE strings. | | configuration.strings.utf16le | bool | `true` | Allow/disallow extracting UTF-16LE strings. | | configuration.strings.utf32be | bool | `false` | Allow/disallow extracting UTF-32BE strings. | | configuration.strings.utf32le | bool | `false` | Allow/disallow extracting UTF-32LE strings. | | configuration.strings.utf8 | bool | `true` | Allow/disallow extracting UTF-8 strings. | | configuration.ticore | object | \- | Configures options supported by Spectra Core. | | configuration.ticore.maxDecompressionFactor | float | `1.0` | Decimal value between 0 and 999.9. If multiple decimals are given, it will be rounded to one decimal. Used to protect the user from intentional or unintentional archive bombs, terminating decompression if size of unpacked content exceeds a set quota. | | configuration.ticore.mwpExtended | bool | `false` | Enable/disable information from antivirus engines in Spectra Intelligence. 
Requires Spectra Intelligence to be configured. | | configuration.ticore.mwpGoodwareFactor | int | `2` | Determines when a file classified as KNOWN in Spectra Intelligence Cloud is classified as Goodware by Spectra Core. By default, all KNOWN cloud classifications are converted to Goodware. Supported values are 0 \- 5, where zero represents the best trust factor (highest confidence that a sample contains goodware). Lowering the value reduces the number of samples classified as goodware. Samples with a trust factor above the configured value are considered UNKNOWN. Requires Spectra Intelligence to be configured. | | configuration.ticore.processingMode | string | `"best"` | Determines which file formats are unpacked by Spectra Core for detailed analysis. "best" fully processes all supported formats; "fast" processes a limited set. | | configuration.ticore.useXref | bool | `false` | Enabling the XREF service will enrich analysis reports with cross-reference metadata like AV scanner results. Requires Spectra Intelligence to be configured. | | configuration.unpackedAdl | object | \- | Settings for storing extracted files in an Azure Data Lake container. | | configuration.unpackedAdl.archiveUnpacked | bool | `true` | Enable sending a single, smaller archive of unpacked files to ADL instead of each unpacked file. | | configuration.unpackedAdl.container | string | `""` | Specify the name of the Azure Data Lake container where extracted files will be saved. Required when this feature is enabled. | | configuration.unpackedAdl.enabled | bool | `false` | Enable/disable storing extracted files to ADL. | | configuration.unpackedAdl.folder | string | `""` | Specify the name of a folder in the configured Azure container where extracted files will be stored. If the folder name is not provided, files are stored into the root of the configured container. 
| | configuration.unpackedAdl.folderOption | string | `"date_based"` | Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date\_based (YYYY/mm/dd/HH), datetime\_based (YYYY/mm/dd/HH/MM/SS), and sha1\_based (using the first 4 characters of the file hash). | | configuration.unpackedMsGraph | object | \- | Settings for storing extracted files to Microsoft Cloud Storage. | | configuration.unpackedMsGraph.archiveUnpacked | bool | `true` | Enable sending a single, smaller archive of unpacked files to Microsoft Cloud Storage instead of each unpacked file. | | configuration.unpackedMsGraph.enabled | bool | `false` | Enable/disable storing extracted files. | | configuration.unpackedMsGraph.folder | string | `""` | Folder where unpacked files will be stored on the Microsoft Cloud Storage. If the folder name is not provided, files are stored into the root of the configured container. | | configuration.unpackedMsGraph.folderOption | string | `"date_based"` | Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date\_based (YYYY/mm/dd/HH), datetime\_based (YYYY/mm/dd/HH/MM/SS), and sha1\_based (using the first 4 characters of the file hash). | | configuration.unpackedS3 | object | \- | Settings for storing extracted files to S3 container. | | configuration.unpackedS3.advancedFilterEnabled | bool | `false` | Enable/disable the use of advanced filters. | | configuration.unpackedS3.advancedFilterName | string | `""` | Name of the advanced filter. | | configuration.unpackedS3.archiveUnpacked | bool | `true` | Enable sending a single, smaller archive of unpacked files to S3 instead of each unpacked file. | | configuration.unpackedS3.bucketName | string | `""` | Specify the name of the S3 container where extracted files will be saved. Required when this feature is enabled. 
| | configuration.unpackedS3.enabled | bool | `false` | Enable/disable storing extracted files in S3. | | configuration.unpackedS3.folder | string | `""` | The name of a folder in the configured S3 container where extracted files will be stored. If the folder name is not provided, files are stored into the root of the configured container. The folder can be up to 1024 bytes long when encoded in UTF-8, and can contain letters, numbers and special characters: "\!", "-", "\_", ".", "\*", "'", "(", ")", "/". It must not start or end with a slash or contain leading or trailing spaces. Consecutive slashes ("//") are not allowed. | | configuration.unpackedS3.folderOption | string | `"date_based"` | Select the naming pattern that will be used when automatically creating subfolders for storing analyzed files. Supported options are: date\_based (YYYY/mm/dd/HH), datetime\_based (YYYY/mm/dd/HH/MM/SS), and sha1\_based (using the first 4 characters of the file hash). | | configuration.wordlist | list | \- | List of passwords for protected files. | ### Other Values | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | advancedFilters | object | `{}` | Contains key-value pairs in which keys are filter names and values are the filter definitions. 
| | auth.image.pullPolicy | string | `"Always"` | | | auth.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-auth"` | | | auth.image.tag | string | `"latest-dev"` | | | auth.resources.limits.cpu | string | `"4000m"` | | | auth.resources.limits.memory | string | `"256Mi"` | | | auth.resources.requests.cpu | string | `"500m"` | | | auth.resources.requests.memory | string | `"128Mi"` | | | auth.serverPort | int | `8080` | | | authReverseProxy.image.pullPolicy | string | `"Always"` | | | authReverseProxy.image.repository | string | `"nginx"` | | | authReverseProxy.image.tag | string | `"stable"` | | | authReverseProxy.resources.limits.cpu | string | `"2000m"` | | | authReverseProxy.resources.limits.memory | string | `"512Mi"` | | | authReverseProxy.resources.requests.cpu | string | `"250m"` | | | authReverseProxy.resources.requests.memory | string | `"128Mi"` | | | authReverseProxy.serverPort | int | `80` | | | cleanup.failedJobsHistoryLimit | int | `1` | Number of failed finished jobs to keep. | | cleanup.image.pullPolicy | string | `"Always"` | | | cleanup.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-utilities"` | | | cleanup.image.tag | string | `"latest-dev"` | | | cleanup.resources.limits.cpu | string | `"2000m"` | | | cleanup.resources.limits.memory | string | `"2Gi"` | | | cleanup.resources.requests.cpu | string | `"1000m"` | | | cleanup.resources.requests.memory | string | `"1Gi"` | | | cleanup.startingDeadlineSeconds | int | `180` | Deadline (in seconds) for starting the Job, if that Job misses its scheduled time for any reason. After missing the deadline, the CronJob skips that instance of the Job. | | cleanup.successfulJobsHistoryLimit | int | `1` | Number of successful finished jobs to keep. 
| | cloudCache.image.pullPolicy | string | `"Always"` | | | cloudCache.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-cloud-cache"` | | | cloudCache.image.tag | string | `"latest-dev"` | | | cloudCache.resources.limits.cpu | string | `"4000m"` | | | cloudCache.resources.limits.memory | string | `"4Gi"` | | | cloudCache.resources.requests.cpu | string | `"1000m"` | | | cloudCache.resources.requests.memory | string | `"1Gi"` | | | cloudCache.serverPort | int | `8080` | | | global.umbrella | bool | `false` | | | imagePullSecrets\[0\] | string | `"rl-registry-key"` | | | ingress.annotations | object | `{}` | | | ingress.className | string | `"nginx"` | | | ingress.enabled | bool | `true` | | | ingress.host | string | `""` | | | ingress.paths\[0\].path | string | `"/"` | | | ingress.paths\[0\].pathType | string | `"Prefix"` | | | ingress.tls.certificateArn | string | `""` | | | ingress.tls.issuer | string | `""` | | | ingress.tls.issuerKind | string | `"Issuer"` | | | ingress.tls.secretName | string | `"tls-tiscale-worker"` | | | monitoring.enabled | bool | `false` | Enable/disable monitoring with Prometheus | | monitoring.prometheusReleaseName | string | `"kube-prometheus-stack"` | Prometheus release name | | persistence | object | \- | Persistence values configure options for `PersistentVolumeClaim` used for storing samples and reports | | persistence.accessModes | list | `["ReadWriteMany"]` | Access mode. When autoscaling or multiple worker is used should be set to `[ "ReadWriteMany" ]` | | persistence.requestStorage | string | `"10Gi"` | Request Storage | | persistence.storageClassName | string | `nil` | Storage class name. When autoscaling or multiple worker is used storage class should support "ReadWriteMany" | | postgres.releaseName | string | `""` | Postgres release name, required when deployment is not done with the umbrella Helm chart. 
| | postprocessor.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | postprocessor.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | postprocessor.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | postprocessor.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. | | postprocessor.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | postprocessor.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. | | postprocessor.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | ScaleUp configuration values | | postprocessor.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | postprocessor.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | postprocessor.autoscaling.scaleUp.stabilizationWindow | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. | | postprocessor.autoscaling.targetInputQueueSize | int | `10` | Number of messages in backlog to trigger scaling on. 
| | postprocessor.image.pullPolicy | string | `"Always"` | | | postprocessor.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-postprocessor"` | | | postprocessor.image.tag | string | `"latest-dev"` | | | postprocessor.resources.limits.cpu | string | `"8000m"` | | | postprocessor.resources.limits.memory | string | `"16Gi"` | | | postprocessor.resources.requests.cpu | string | `"2500m"` | | | postprocessor.resources.requests.memory | string | `"2Gi"` | | | preprocessor.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | preprocessor.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | preprocessor.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | preprocessor.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. | | preprocessor.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | preprocessor.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. | | preprocessor.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | ScaleUp configuration values | | preprocessor.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | preprocessor.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | preprocessor.autoscaling.scaleUp.stabilizationWindow | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. | | preprocessor.autoscaling.targetInputQueueSize | int | `10` | Number of messages in backlog to trigger scaling on. 
| | preprocessor.image.pullPolicy | string | `"Always"` | | | preprocessor.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-preprocessor"` | | | preprocessor.image.tag | string | `"latest-dev"` | | | preprocessor.replicaCount | int | `1` | | | preprocessor.resources.limits.cpu | string | `"4000m"` | | | preprocessor.resources.limits.memory | string | `"4Gi"` | | | preprocessor.resources.requests.cpu | string | `"1000m"` | | | preprocessor.resources.requests.memory | string | `"1Gi"` | | | preprocessorUnpacker.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | preprocessorUnpacker.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | preprocessorUnpacker.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | preprocessorUnpacker.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. | | preprocessorUnpacker.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | preprocessorUnpacker.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. | | preprocessorUnpacker.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | ScaleUp configuration values | | preprocessorUnpacker.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | preprocessorUnpacker.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | preprocessorUnpacker.autoscaling.scaleUp.stabilizationWindow | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. 
| | preprocessorUnpacker.autoscaling.targetInputQueueSize | int | `10` | Number of messages in backlog to trigger scaling on. | | preprocessorUnpacker.replicaCount | int | `1` | | | preprocessorUnpacker.resources.limits.cpu | string | `"16000m"` | | | preprocessorUnpacker.resources.limits.memory | string | `"16Gi"` | | | preprocessorUnpacker.resources.requests.cpu | string | `"4000m"` | | | preprocessorUnpacker.resources.requests.memory | string | `"4Gi"` | | | preprocessorUnpacker.scaling.prefetchCount | int | `4` | Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file is processed, another from the queue enters the processing state. Value 0 sets the maximum number of files to be processed to the number of CPU cores on the system. | | processor.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | processor.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | processor.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | processor.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. | | processor.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | processor.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. 
| | processor.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | ScaleUp configuration values | | processor.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | processor.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | processor.autoscaling.scaleUp.stabilizationWindow | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. | | processor.autoscaling.targetInputQueueSize | int | `10` | Number of messages in backlog to trigger scaling on. | | processor.image.pullPolicy | string | `"Always"` | | | processor.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-processor"` | | | processor.image.tag | string | `"latest-dev"` | | | processor.replicaCount | int | `1` | | | processor.resources.limits.cpu | string | `"16000m"` | | | processor.resources.limits.memory | string | `"32Gi"` | | | processor.resources.requests.cpu | string | `"4000m"` | | | processor.resources.requests.memory | string | `"4Gi"` | | | processor.scaling.prefetchCount | int | `8` | Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file is processed, another from the queue enters the processing state. Value 0 sets the maximum number of files to be processed to the number of CPU cores on the system. | | processorRetry.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | processorRetry.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | processorRetry.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | processorRetry.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. 
| | processorRetry.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | processorRetry.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. | | processorRetry.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":15}` | ScaleUp configuration values | | processorRetry.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | processorRetry.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | processorRetry.autoscaling.scaleUp.stabilizationWindow | int | `15` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. | | processorRetry.autoscaling.targetInputQueueSize | int | `10` | Number of messages in backlog to trigger scaling on. | | processorRetry.replicaCount | int | `1` | | | processorRetry.resources.limits.cpu | string | `"16000m"` | | | processorRetry.resources.limits.memory | string | `"64Gi"` | | | processorRetry.resources.requests.cpu | string | `"4000m"` | | | processorRetry.resources.requests.memory | string | `"8Gi"` | | | processorRetry.scaling.prefetchCount | int | `1` | Defines the maximum number of individual files that can simultaneously be processed by a single instance of Spectra Core. When one file is processed, another from the queue enters the processing state. Value 0 sets the maximum number of files to be processed to the number of CPU cores on the system. Recommended value for this type of processor is 1\. | | rabbitmq.releaseName | string | `""` | Rabbitmq release name, required when deployment is not done using the umbrella Helm chart. 
| | receiver.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | receiver.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | receiver.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | receiver.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. | | receiver.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | receiver.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. | | receiver.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":30}` | ScaleUp configuration values | | receiver.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | receiver.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | receiver.autoscaling.scaleUp.stabilizationWindow | int | `30` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. | | receiver.autoscaling.triggerCPUValue | int | `75` | CPU value (in percentage), which will cause scaling when reached. The percentage is taken from the resource.limits.cpu value. Limits have to be set up. 
| | receiver.image.pullPolicy | string | `"Always"` | | | receiver.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-receiver"` | | | receiver.image.tag | string | `"latest-dev"` | | | receiver.initImage.pullPolicy | string | `"Always"` | | | receiver.initImage.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-utilities"` | | | receiver.initImage.tag | string | `"latest-dev"` | | | receiver.replicaCount | int | `1` | | | receiver.resources.limits.cpu | string | `"4000m"` | | | receiver.resources.limits.memory | string | `"8Gi"` | | | receiver.resources.requests.cpu | string | `"1500m"` | | | receiver.resources.requests.memory | string | `"1Gi"` | | | report.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | report.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | report.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | report.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. | | report.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | report.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. When this is reached, scale down is started. | | report.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":30}` | ScaleUp configuration values | | report.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | report.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | report.autoscaling.scaleUp.stabilizationWindow | int | `30` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. 
| | report.autoscaling.triggerCPUValue | int | `75` | CPU value (in percentage), which will cause scaling when reached. The percentage is taken from the resource.limits.cpu value. Limits have to be set. | | report.image.pullPolicy | string | `"Always"` | | | report.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-report"` | | | report.image.tag | string | `"latest-dev"` | | | report.resources.limits.cpu | string | `"8000m"` | | | report.resources.limits.memory | string | `"8Gi"` | | | report.resources.requests.cpu | string | `"2000m"` | | | report.resources.requests.memory | string | `"2Gi"` | | | reportTypes | object | `{}` | Contains key-value pairs where keys are the report type names and values are the report type definitions. | | securityContext.privileged | bool | `false` | | | tcScratch | object | \- | tcScratch values configure generic ephemeral volume options for the Spectra Core `/tc-scratch` directory. | | tcScratch.accessModes | list | `["ReadWriteOnce"]` | Access modes. | | tcScratch.requestStorage | string | `"100Gi"` | Requested storage size for the ephemeral volume. | | tcScratch.storageClassName | string | `nil` | Sets the storage class for the ephemeral volume. If not set, `emptyDir` is used instead of an ephemeral volume. | | tclibs.autoscaling.enabled | bool | `false` | Enable/disable autoscaling | | tclibs.autoscaling.maxReplicas | int | `8` | Maximum number of replicas that can be deployed when scaling is enabled | | tclibs.autoscaling.minReplicas | int | `1` | Minimum number of replicas that need to be deployed | | tclibs.autoscaling.pollingInterval | int | `10` | Interval to check each trigger. In seconds. | | tclibs.autoscaling.scaleDown | object | `{"stabilizationWindow":180}` | ScaleDown configuration values | | tclibs.autoscaling.scaleDown.stabilizationWindow | int | `180` | Number of continuous seconds in which the scaling condition is not met. 
When this is reached, scale down is started. | | tclibs.autoscaling.scaleUp | object | `{"numberOfPods":1,"period":30,"stabilizationWindow":30}` | ScaleUp configuration values | | tclibs.autoscaling.scaleUp.numberOfPods | int | `1` | Number of pods that can be scaled in the defined period | | tclibs.autoscaling.scaleUp.period | int | `30` | Interval in which the numberOfPods value is applied | | tclibs.autoscaling.scaleUp.stabilizationWindow | int | `30` | Number of continuous seconds in which the scaling condition is met. When this is reached, scale up is started. | | tclibs.autoscaling.triggerCPUValue | int | `75` | CPU value (in percentage), which will cause scaling when reached. The percentage is taken from the resource.limits.cpu value. Limits have to be set. | | tclibs.image.pullPolicy | string | `"Always"` | | | tclibs.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-tclibs"` | | | tclibs.image.tag | string | `"latest-dev"` | | | tclibs.replicaCount | int | `1` | | | tclibs.resources.limits.cpu | string | `"2000m"` | | | tclibs.resources.limits.memory | string | `"2Gi"` | | | tclibs.resources.requests.cpu | string | `"1000m"` | | | tclibs.resources.requests.memory | string | `"1Gi"` | | | yaraSync.enabled | bool | `false` | | | yaraSync.image.pullPolicy | string | `"Always"` | | | yaraSync.image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-detect-yara-sync"` | | | yaraSync.image.tag | string | `"latest-dev"` | | | yaraSync.replicaCount | int | `1` | | | yaraSync.resources.limits.cpu | string | `"2000m"` | | | yaraSync.resources.limits.memory | string | `"2Gi"` | | | yaraSync.resources.requests.cpu | string | `"1000m"` | | | yaraSync.resources.requests.memory | string | `"1Gi"` | | | yaraSync.serverPort | int | `8080` | | ## Connector S3 ### Secrets | Secret (fullNameOverride is set) | Secret (deployment with umbrella) | Secret (deployment 
without umbrella) | Type | Description | | :---- | :---- | :---- | :---- | :---- | | \-secret-\ | \-connector-s3-secret-\ | \-secret-\ | required | Authentication secret used to connect to AWS S3 or any S3-compatible storage system. | ### Values | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | configuration.dbCleanupPollInterval | int | `7200` | Specifies the interval in seconds at which the database cleanup will be run. | | configuration.dbCleanupSampleThresholdInDays | int | `21` | Number of previous days that the data will be preserved. | | configuration.diskHighPercent | int | `0` | Disk high percent. | | configuration.inputs | list | `[]` | Configuration for S3 File Storage Input. [S3 input](#configmap-configuration-values-for-s3-connector---configuration-for-s3-file-storage-input). | | configuration.maxFileSize | int | `0` | The maximum sample size in bytes that will be transmitted from the connector to the appliance for analysis. Setting it to 0 will disable the option. | | configuration.maxUploadDelayTime | int | `10000` | Delay in milliseconds. In case the Worker cluster is under high load, this parameter is used to delay any new upload to the Worker cluster. The delay parameter will be multiplied by the internal factor determined by the load on the Worker cluster. | | configuration.maxUploadRetries | int | `100` | Number of times the connector will attempt to upload the file to the processing appliance. Upon reaching the retry limit, the file will be saved in the error\_files/ destination or discarded. | | configuration.uploadTimeout | int | `10000` | Period (in milliseconds) between upload attempts when a sample is re-uploaded. | | configuration.uploadTimeoutAlgorithm | string | `"exponential"` | Enum: "exponential" "linear". The algorithm used for managing delays between re-uploading the samples into the processing appliance. 
With the exponential algorithm, the delay is repeatedly multiplied by 2, up to a maximum value of 5 minutes. The linear algorithm always uses the max upload timeout value for the timeout period between re-uploads. | #### Configmap System Info values for Connector S3 | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | configuration.systemInfo | object | \- | Configuration for S3 System Info. [S3 System Info](#configmap-system-info-values-for-connector-s3). | | configuration.systemInfo.centralLogging | bool | `false` | Enable central logging | | configuration.systemInfo.diskHighPercent | int | `0` | Disk high percent | | configuration.systemInfo.fetchChannelSize | int | `40` | Fetch channel size | | configuration.systemInfo.hostUUID | string | `""` | Host UUID | | configuration.systemInfo.maxConnections | int | `10` | Max number of connections | | configuration.systemInfo.maxSlowFetches | int | `12` | Max slow fetches | | configuration.systemInfo.numberOfRetries | int | `300` | Number of retries | | configuration.systemInfo.requestTimeout | int | `43200` | Timeout for requests | | configuration.systemInfo.slowFetchChannelSize | int | `100` | Slow fetch channel size. | | configuration.systemInfo.slowFetchPause | int | `5` | Slow fetch pause | | configuration.systemInfo.type | string | `"tiscale"` | Type | | configuration.systemInfo.verifyCert | bool | `false` | Verify SSL certificate | | configuration.systemInfo.version | string | `"5.6.0"` | Version | | configuration.systemInfo.waitTimeout | int | `1000` | Wait timeout | #### Configmap Configuration values for S3 connector \- Configuration for S3 File Storage Input | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | inputs\[n\] | list | \- | Configmap Configuration for S3 File Storage Input. | | inputs\[n\].awsEnableArn | bool | `false` | Enable/disable the usage of AWS IAM roles to access S3 buckets without sharing secret keys. 
| | inputs\[n\].awsExternalRoleId | string | `""` | The external ID of the role that will be assumed. This can be any string. | | inputs\[n\].awsRoleArn | string | `""` | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows a Worker to obtain a temporary token, which then allows it to save to S3 buckets without a secret access key. | | inputs\[n\].awsRoleSessionName | string | `"ARNRoleSession"` | Name of the session visible in AWS logs. Can be any string. | | inputs\[n\].bucket | string | `""` | Name of an existing S3 bucket which contains the samples to process. | | inputs\[n\].deleteSourceFile | bool | `false` | When enabled, the connector deletes source files from S3 storage after they have been processed. Required if 'require\_analyze' or 'post\_actions\_enabled' is true | | inputs\[n\].endpoint | string | `""` | Custom S3 endpoint URL. Leave empty if using standard AWS S3. | | inputs\[n\].folder | string | `""` | The input folder inside the specified bucket which contains the samples to process. All other samples will be ignored. | | inputs\[n\].identifier | string | `""` | Unique name of S3 connection. Must contain only lowercase alphanumeric characters or hyphen (-). Must start and end with an alphanumeric character. Identifier length must be between 3 and 49 characters. | | inputs\[n\].knownBucket | string | `""` | Specify the bucket into which the connector will store files classified as 'Goodware'. If empty, the input bucket will be used. | | inputs\[n\].knownDestination | string | `"goodware"` | The folder into which the connector will store files classified as 'Goodware'. The folder is contained within the specified bucket field. | | inputs\[n\].maliciousBucket | string | `""` | Specify the bucket into which the connector will store files classified as 'Malicious'. If empty, the input bucket will be used. 
| | inputs\[n\].maliciousDestination | string | `"malware"` | The folder into which the connector will store files classified as 'Malicious'. The folder is contained within the specified bucket field. | | inputs\[n\].paused | bool | `false` | Temporarily pause the continuous scanning of this Storage Input. This setting must be set to true to enable retro hunting. | | inputs\[n\].postActionsEnabled | bool | `false` | Disable/enable post actions for S3 connectors | | inputs\[n\].priority | int | `5` | A higher Priority makes it more likely that files from this bucket will be processed first. The supported range is from 1 (highest) to 5 (lowest). Values outside of those minimum and maximum values will be replaced by the minimum or maximum, respectively. Multiple buckets may share the same priority. | | inputs\[n\].requireAnalyze | bool | `false` | Disable/enable the requirement for analysis of data processed by connector | | inputs\[n\].serverSideEncryptionCustomerAlgorithm | string | `""` | Customer provided encryption algorithm. | | inputs\[n\].serverSideEncryptionCustomerKey | string | `""` | Customer provided encryption key. | | inputs\[n\].suspiciousBucket | string | `""` | Specify the bucket into which the connector will store files classified as 'Suspicious'. If empty, the input bucket will be used. | | inputs\[n\].suspiciousDestination | string | `"suspicious"` | The folder into which the connector will store files classified as 'Suspicious'. The folder is contained within the specified bucket field. | | inputs\[n\].unknownBucket | string | `""` | Specify the bucket into which the connector will store files classified as "Unknown". If empty, the input bucket will be used. | | inputs\[n\].unknownDestination | string | `"unknown"` | The folder into which the connector will store files classified as 'Unknown'. The folder is contained within the specified bucket field. 
| | inputs\[n\].verifySslCertificate | bool | `true` | Connect securely to the custom S3 instance. Deselect this to accept untrusted certificates. Applicable only when using a custom S3 endpoint. | | inputs\[n\].zone | string | `"us-east-1"` | AWS S3 region | #### Other Values | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | boltdb.claimName | string | `nil` | | | enabled | bool | `false` | | | fullNameOverride | string | `""` | overrides connector-s3 chart full name | | image.imagePullPolicy | string | `"Always"` | | | image.repository | string | `"alt-artifactory-prod.rl.lan/appliances-docker-test/detect/images/components/rl-integration-s3"` | | | image.tag | string | `"latest-dev"` | | | imagePullSecrets\[0\] | string | `"rl-registry-key"` | | | nameOverride | string | `""` | overrides connector-s3 chart name | | persistence.accessModes | list | `["ReadWriteOnce"]` | Access mode | | persistence.requestStorage | string | `"10Gi"` | Request Storage | | persistence.storageClassName | string | `"encrypted-gp2"` | Storage class name | | receiver.baseUrl | string | `nil` | | | receiver.service.httpPort | int | `80` | | | tmp | object | \- | tmp values configure generic ephemeral volume options for the connectors `/data/connectors/connector-s3/tmp` directory. | | tmp.accessModes | list | `["ReadWriteOnce"]` | Access modes. | | tmp.requestStorage | string | `"100Gi"` | Requested storage size for the ephemeral volume. | | tmp.storageClassName | string | `nil` | Sets the storage class for the ephemeral volume. If not set, `emptyDir` is used instead of an ephemeral volume. | | worker.releaseName | string | `nil` | Set Spectra Detect Worker release for connector to connect to. It is required if not using the umbrella Helm chart. 
| ## RabbitMQ | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | affinity | object | `{}` | | | createManagementAdminSecret | bool | `true` | A user management admin secret will be created automatically with given admin username and admin password; otherwise, the secret must already exist. | | createUserSecret | bool | `true` | A user secret will be created automatically with the given username and password; otherwise, the secret must already exist. | | global.umbrella | bool | `false` | | | host | string | `""` | Host for external RabbitMQ. When configured, the Detect RabbitMQ cluster won't be created. | | managementAdminPassword | string | `""` | Management admin password. If left empty, defaults to `password`. | | managementAdminUrl | string | `""` | Management Admin URL. If empty, defaults to "http://:15672". | | managementAdminUsername | string | `""` | Management admin username. If left empty, defaults to `username`. | | password | string | `"guest_11223"` | Password | | persistence.requestStorage | string | `"5Gi"` | | | persistence.storageClassName | string | `nil` | | | port | int | `5672` | RabbitMQ port | | replicas | int | `1` | Number of replicas | | resources.limits.cpu | string | `"2"` | | | resources.limits.memory | string | `"2Gi"` | | | resources.requests.cpu | string | `"1"` | | | resources.requests.memory | string | `"2Gi"` | | | useQuorumQueues | bool | `false` | Setting this to true defines queues as quorum type (recommended for multi-replica/HA setups); otherwise, queues are classic. | | useSecureProtocol | bool | `false` | Setting this to true enables the secure AMQPS protocol for the RabbitMQ connection. | | username | string | `"guest"` | Username | | vhost | string | `""` | Vhost. When empty, the default `rl-detect` vhost is used. 
| ## PostgreSQL | Key | Type | Default | Description | | :---- | :---- | :---- | :---- | | affinity | object | `{}` | | | createUserSecret | bool | `true` | A user secret will be created automatically with the given username and password; otherwise, the secret must already exist. | | database | string | `"tiscale"` | Database name | | global.umbrella | bool | `false` | | | host | string | `""` | Host for external PostgreSQL. When configured, the Detect PostgreSQL cluster won't be created. | | image.repository | string | `"ghcr.io/cloudnative-pg/postgresql"` | | | image.tag | string | `"17.6"` | | | password | string | `"tiscale_11223"` | Password | | persistence.accessModes\[0\] | string | `"ReadWriteOnce"` | | | persistence.requestStorage | string | `"5Gi"` | | | persistence.storageClassName | string | `nil` | | | port | int | `5432` | PostgreSQL port. | | replicas | int | `1` | Number of replicas. | | resources.limits.cpu | string | `nil` | | | resources.limits.memory | string | `nil` | | | resources.requests.cpu | string | `"500m"` | | | resources.requests.memory | string | `"1Gi"` | | | username | string | `"tiscale"` | Username. Required if `host` is not set, because the Detect PostgreSQL cluster will be created and this user will be set as the database owner. | --- ## SpectraDetect AWS EKS Microservices Deployment — Helm, KEDA, and RabbitMQ ## Introduction **Warning: PREVIEW** This deployment and its interfaces are under active development and subject to change. Compatibility is not guaranteed across minor updates during the beta period. Scope: **Non‑production use** This document describes how the Spectra Detect platform is deployed and operated on Kubernetes, providing high-volume, high-speed file processing and analysis that seamlessly integrates into existing infrastructure and effectively scales with business needs. 
The platform is packaged as container images and deployment is managed with Helm charts, which package Kubernetes manifests and configuration into versioned releases for consistent installs, upgrades, and rollbacks. Configuration is externalized via [ConfigMaps](./config-reference.md) and [Secrets](./config-reference.md) so behavior can be adjusted without rebuilding images, and sensitive data is stored separately with controlled access. Horizontal Pod Autoscaling may adjust replica counts based on metrics such as CPU utilization or queue size. ## Requirements - EKS version 1.34, Amazon Linux 2023 with cgroupsv2 (tested) - `Persistent Volume` provisioner supporting `ReadWriteMany` (e.g. Amazon EFS CSI). - Helm 3 or above ## Operators and Tools ### Keda (autoscaling - optional) For Spectra Detect to autoscale Workers, [Keda](https://keda.sh/) needs to be installed on the cluster. Keda can be deployed following the official [Deploying Keda documentation](https://keda.sh/docs/2.17/deploy/). It is not required to have Keda installed to run Spectra Detect on K8s, but it is required to utilize Worker autoscaling features. ### Prometheus Operator All pods expose metrics in Prometheus format. Prometheus monitoring can be enabled by setting: ```yaml # Can be enabled with the following worker: monitoring: # -- Enable/disable monitoring with Prometheus enabled: true # -- Use actual release name prometheusReleaseName: "${PROMETHEUS_RELEASE_NAME}" ``` ### RabbitMQ Broker Spectra Detect Helm charts support using external RabbitMQ Brokers (like AmazonMQ), as well as deploying and using RabbitMQ cluster resources as part of a Detect deployment installed in the same namespace. Choose which option to use based on the business requirements. - **External RabbitMQ Broker** (deployed and managed outside of Spectra Detect Helm charts) External/existing RabbitMQ Broker needs to be set up as per the broker installation guides. 
As an example, please check the [Amazon MQ instructions](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/getting-started-rabbitmq.html). - **RabbitMQ Operator** (deployed and managed by Spectra Detect Helm charts) Cloud native brokers can be deployed and managed by Spectra Detect Helm charts. RabbitMQ Operator needs to be installed in the K8s cluster. ```bash kubectl apply --wait -f \ https://github.com/rabbitmq/cluster-operator/releases/download/v2.6.0/cluster-operator.yml ``` | Secret | Type | Description | |:----|:----|:----| | `-rabbitmq-secret` | required | Basic authentication secret which contains the RabbitMQ username and password. Secret is either created manually (rabbitmq chart) or already exists. | | `-rabbitmq-secret-admin` | optional | Basic authentication secret which contains the RabbitMQ Admin username and password. Secret is either created manually (rabbitmq chart) or already exists. If missing, credentials from `-rabbitmq-secret` are used. | ### PostgreSQL Server Spectra Detect Helm charts support using external PostgreSQL Clusters (like Amazon RDS), as well as deploying and using PostgreSQL cluster resources as part of a Detect deployment. - **External PostgreSQL Server** (deployed and managed outside of Spectra Detect Helm charts) External/existing PostgreSQL server needs to be set up as per the PostgreSQL server guide. As an example, please check the Amazon RDS instructions. - **CloudNativePG Operator** (deployed and managed by Spectra Detect Helm charts) CloudNativePG Operator needs to be installed in the K8s cluster. 
```bash # PostgreSQL Operator - CloudNativePG (CNPG) kubectl apply --wait -f \ https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.21/releases/cnpg-1.21.1.yaml ``` | Secret (deployment with Detect chart) | Type | Description | |---------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------| | `-postgres-secret` | required | Basic authentication secret which contains the database username and password. Secret is either created manually (postgres chart) or already exists. | ## Storage ### Persistent volume The `/scratch` folder is implemented as a persistent volume. Multiple services have access to the volume: - Cleanup - Preprocessor - Processor - Postprocessor - Receiver - Rabbit (if deployed) - Postgres (if deployed) - Connector S3 (if deployed) Since multiple services (pods) are accessing the volume, the access mode of that volume has to be `ReadWriteMany`. In AWS, it is recommended to use EFS storage since it supports the requested access for pods. #### Amazon EFS Remote Storage If you are running Kubernetes on Amazon EKS, you can use Amazon EFS storage for the shared storage. You will need to: 1. Install Amazon [EFS CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html) on the cluster to use EFS 2. Create EFS file system via Amazon EFS console or [command line](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/efs-create-filesystem.md) 3. Set Throughput mode to Elastic or Provisioned for higher throughput levels 4. Add [mount targets](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html) for the node's subnets 5. 
Create a [storage class](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/dynamic_provisioning/README.md#dynamic-provisioning) for Amazon EFS ### Ephemeral volume The `/tc-scratch` folder is used as an ephemeral volume for temporary processing files. - Processor The requested access mode is `ReadWriteOnce`, and any type of storage that supports it can be used (e.g. encrypted gp2 volumes in AWS). The storage class name is empty by default (value `tcScratch.storageClassName`). If not overridden, `emptyDir` will be used for storage. ## Getting Started ReversingLabs Spectra Detect Helm charts and container images are available at **registry.reversinglabs.com**. To connect to the registry, you need a ReversingLabs Spectra Intelligence account. ```bash helm registry login registry.reversinglabs.com -u "${RL_SPECTRA_INTELLIGENCE_USERNAME}" ``` If you want to see which versions of the charts are available in the registry, you can use a tool like [Skopeo](https://github.com/containers/skopeo) to log in to the registry and list the versions: ```bash skopeo login registry.reversinglabs.com -u "${RL_SPECTRA_INTELLIGENCE_USERNAME}" skopeo list-tags docker://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform # Example output: 
{ "Repository": "registry.reversinglabs.com/detect/charts/detect-suite/detect-platform", "Tags": [ "5.7.0-0.beta.3" ] } ``` ### List of Detect Images | Image | Mandatory/Optional | Scaling * (see [Appendix](#appendix)) | |---------------------------|--------------------|---------------------------------------| | `rl-detect-auth` | Optional | N/A | | `rl-detect-receiver` | Mandatory | CPU | | `rl-detect-preprocessor` | Optional | QUEUE | | `rl-processor` | Mandatory | QUEUE | | `rl-tclibs` | Mandatory | CPU | | `rl-detect-postprocessor` | Optional | QUEUE | | `rl-integration-s3` | Optional | N/A | | `rl-report` | Optional | CPU | | `rl-cloud-cache` | Optional | N/A | | `rl-detect-utilities` | Mandatory | N/A | ### Deploying Detect using Helm Charts 1. Create a namespace or use an existing one ```bash kubectl create namespace detect # Namespace name is arbitrary ``` 2. Set up the Registry Secret to allow Kubernetes to pull container images. The Kubernetes secret `rl-registry-key` containing the user's Spectra Intelligence credentials needs to be created in the namespace where Detect will be installed. 2.1. The secret can either be created via Detect Helm chart - `registry.createRegistrySecret`: needs to be set to true (default) - `registry.authSecretName`: value needs to be the Spectra Intelligence account username. - `registry.authSecretPassword`: value needs to be the Spectra Intelligence account password. 2.2. Can be managed outside the Helm release - `registry.createRegistrySecret`: value should be set to false. You can create the secret manually by using the following command ```bash kubectl apply -n "detect" -f - <<EOF apiVersion: v1 kind: Secret metadata: name: "rl-registry-key" type: kubernetes.io/dockerconfigjson data: .dockerconfigjson: $(echo -n '{"auths": {"registry.reversinglabs.com": {"auth": "'$(echo -n "${SPECTRA_INTELLIGENCE_USERNAME}:${SPECTRA_INTELLIGENCE_PASSWORD}" | base64)'"}}}' | base64 | tr -d '\n') EOF ``` 3. 
Install a Spectra Detect Chart Choose any release name and run the following command. For more details regarding values, please reference the [Appendix](#appendix). ```bash helm install "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform \ --version "${DETECT_HELM_CHART_VERSION}" --namespace "${NAMESPACE}" -f values.yaml ``` 4. Configure Ingress Controller To access Spectra Detect endpoints from outside the cluster, and for Worker pods to connect to the Spectra Detect Manager, an Ingress Controller (like AWS ALB or Nginx Controller) must be configured on the K8s cluster. Follow the official installation guides for the controllers: - [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.13/deploy/installation/) - [Nginx Controller](https://docs.nginx.com/nginx-ingress-controller/installation/installing-nic/) This example shows how to configure Ingress using [AWS ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/) and how to use [External DNS](https://kubernetes-sigs.github.io/external-dns/v0.15.0/) to automatically create DNS records in AWS Route53. 
> Ingress values configuration example using AWS ALB Controller ```yaml worker: ingress: annotations: alb.ingress.kubernetes.io/backend-protocol: HTTP alb.ingress.kubernetes.io/certificate-arn: <> alb.ingress.kubernetes.io/group.name: detect alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15" alb.ingress.kubernetes.io/healthcheck-path: /readyz alb.ingress.kubernetes.io/healthcheck-port: "80" alb.ingress.kubernetes.io/healthcheck-protocol: HTTP alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5" alb.ingress.kubernetes.io/healthy-threshold-count: "2" alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600 alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true" alb.ingress.kubernetes.io/scheme: internal alb.ingress.kubernetes.io/security-groups: <> alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10 alb.ingress.kubernetes.io/success-codes: 200,301,302,404 alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=1200 alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/unhealthy-threshold-count: "2" external-dns.alpha.kubernetes.io/hostname: detect-platform.example.com external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only className: alb enabled: true host: detect-platform.example.com ``` ### Updating Detect using Helm Charts Before running the upgrade, you can use the helm `diff upgrade` command to see the changes that will occur in the Kubernetes manifest files. The [Helm Diff Plugin](https://github.com/databus23/helm-diff) must be installed to utilize the `diff` feature. 
You can install Helm Diff using the following command: ```bash # Install plugin helm plugin install https://github.com/databus23/helm-diff # Run diff command helm diff upgrade "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform \ --version "${DETECT_CHART_VERSION}" \ --namespace "${NAMESPACE}" \ -f values.yaml # Check the Helm chart readme beforehand if you want helm show readme oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform --version "${DETECT_CHART_VERSION}" # Run upgrade helm upgrade "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform \ --version "${DETECT_CHART_VERSION}" \ --namespace "${NAMESPACE}" \ -f values.yaml ``` ### Uninstalling Detect ```bash helm uninstall detect -n "${NAMESPACE}" ``` ## Authentication Authentication is achieved by leveraging [Authentication based on SubRequest Result](https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/). This is natively supported by the Nginx Ingress Controller. In case a different ingress controller is used (e.g. ALB on AWS), an additional Nginx reverse proxy is deployed to support the authentication mechanism. Authentication can be configured in the following ways: 1. Using an external authentication service by specifying the `externalAuthUrl` in the configuration If an external authentication service is enabled, all header values from the incoming request will be forwarded to the external authentication service. The external authentication service needs to return the following responses in order to support this authentication mechanism: - `HTTP 200`: authentication successful (ingress will forward the traffic to the backend service) - `HTTP 401` or `HTTP 403`: authentication failed 2. Using a simple authentication service deployed in the cluster The authentication service supports a simple token check based on the API path. 
The token needs to be included in the "Authorization" header with the "Token" prefix/scheme. ```bash curl -H "Authorization: Token " ``` The tokens are configured as secrets with the following behavior: | Secret | Type | Description | Used in deployments (Pods) | |:----|:----|:----|:----| | `-secret-worker-api-token` | Optional | Token secret containing a token that is used to protect all endpoints with /api/ prefix, e.g. file upload. | Auth | | `-secret-worker-api-task-token` | Optional | Token secret containing a token that is used to protect `/api/tiscale/v1/task` endpoints. If left empty, the mentioned API is protected by `-secret-worker-api-token`. | Auth | The authentication service won’t be deployed in the cluster if `externalAuthUrl` is defined: ```yaml # Example for enabling authentication worker: configuration: authentication: enabled: true externalAuthUrl: "" ``` ## Appendix ### Set Report Types - Report types can be added with the `--set-file` option by providing the name of the report type (added as a new key to `reportTypes`) and path to the file in which the filter is defined in JSON format - Report type name must match the one defined in the given file - Report types can be deleted by defining the name of the `report.type` and setting it to the value "" instead of adding the filepath - **Limitation**: max size of the report type file is 3MiB ### Set Report Types Example ```bash # Example of adding two report types helm upgrade -i "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform -f custom_values.yaml \ --set-file worker.reportTypes.some_report_type=/some_report-type.json --set-file worker.reportTypes.extendedNoTags=/extendedNoTags.json # Example of adding the new 
report type and removing the existing report type helm upgrade -i "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform -f custom_values.yaml \ --set-file worker.reportTypes.some_report_type=/some_report-type.json \ --set-file worker.reportTypes.extendedNoTags="" \ ############### Example of the report type file content ############### { "name": "exampleReportType", "fields": { "info" : { "statistics" : true, "unpacking" : true, "package" : true, "file" : true, "identification" : true }, "story": true, "tags" : true, "indicators" : true, "classification" : true, "relationships" : true, "metadata" : true } } ``` ### Use Report Type Example Uploading a report type using the method above only makes the report type available to the system. To actually use the custom report type, you must configure it on the appropriate egress integration. For example, to use a custom report type with S3 storage: ```yaml # Specify the report type that should be applied to the Worker analysis report before storing it. Report types are results of filtering the full report. 
worker: configuration: reportS3: reportType: "custom_report_type" ``` ### Set YARA Rules - YARA rules can be added with the `--set-file` option by providing the name of the rule file (added as a new key to `yaraRules`) and path to the file in which the rule is defined - The rule file name must follow camelCase format - YARA rules can be deleted by defining the rule file name and setting it to the value "" instead of adding the filepath - **Limitation**: max size of YARA ruleset file is 45MiB ### Set YARA Rules Example ```bash # Example of adding YARA rules helm upgrade -i "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform -f custom_values.yaml \ --set-file worker.yaraRules.rule1=/someYaraRule.yara \ --set-file worker.yaraRules.rule2=/other_yara_rule.yara # Example of adding a new YARA rule and removing an existing YARA rule helm upgrade -i "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform -f custom_values.yaml \ --set-file worker.yaraRules.rule1=/someYaraRule.yara \ --set-file worker.yaraRules.rule2="" ############### Example of the YARA rule file content ############### rule ExampleRule : tc_detection malicious { meta: tc_detection_type = "Adware" tc_detection_name = "EXAMPLEYARA" tc_detection_factor = 5 strings: $1 = "example" $2 = "eeeeeee" condition: $1 or $2 } ``` ### Set Advanced Filter - Advanced filters can be added with the `--set-file` option by providing the name of the filter (added as a new key to `advancedFilters`) and path to the file in which the filter is defined in YAML format. 
Filter name must match the one defined in the given file - Advanced filters can be deleted by defining the name of the filter and setting it to the value "" instead of adding the filepath ```bash # Example of adding Advanced Filters helm upgrade -i "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform -f custom_values.yaml \ --set-file worker.advancedFilters.filter1=/some_filter.yaml \ --set-file worker.advancedFilters.filter2=/other_filter.yaml # Example of adding a new Advanced Filter and removing an existing one helm upgrade -i "${RELEASE_NAME}" oci://registry.reversinglabs.com/detect/charts/detect-suite/detect-platform -f custom_values.yaml \ --set-file worker.advancedFilters.filter1=/some_filter.yaml \ --set-file worker.advancedFilters.filter2="" ``` Example of the filter file content: ```yaml name: some_filter description: Custom filter for Spectra Analyze integration scope: container type: filter_in condition: and: - range: info.file.size: gt: 50 lt: 20000 - one_of: classification.classification: - 3 - 2 ``` ### [Configuration Reference](config-reference.md) **Info: View the [Configuration Reference](config-reference.md) for more information about the configuration options.** ## Scaling Scaling of the services is done in one of the following ways: - Scaling based on CPU usage - Scaling based on the number of messages waiting in the queue ### Scaling based on CPU usage Scaling based on CPU usage is implemented in the following way: - Users can provide `triggerCPUValue`, which represents a percentage of the given CPU resources. The service is scaled up when this threshold is reached and scaled down when CPU usage drops below it. 
- CPU resources are defined with the `resources.limits.cpu` value, which represents the maximum CPU that can be given to the pod - **Default values**: - scaling disabled by default - when CPU usage reaches 75%, scaling is triggered - scaling delay is 30 seconds (scaling is triggered when the scaling condition is met for 30 seconds) - one pod is created every 30 seconds - maximum number of replicas is 8 - the average CPU usage is checked every 10 seconds - **Services with CPU scaling**: - receiver - report - tclibs ### Scaling based on the number of messages waiting in the queue Scaling based on the number of messages waiting in the queue is implemented in the following way: - Scaling disabled by default - The user can provide `targetInputQueueSize`, which represents the number of messages in the queue at which scaling starts - Unacknowledged messages are excluded from this calculation - Each scaled service has a different queue that is observed and scaled on - **Default values**: - scaling is triggered when 10 or more messages have been waiting in the queue for a sustained period - scaling delay is 15 seconds (scaling is triggered when the scaling condition is met for 15 seconds) - one pod is created every 30 seconds - maximum number of replicas is 8 - the queue status is checked every 10 seconds - **Services with Queue based scaling**: - processor: number of messages in the `tiscale.hagent_input` queue is used for scaling - processor-retry: number of messages in the `tiscale.hagent_retry` queue is used for scaling - postprocessor: number of messages in the `tiscale.hagent_result` queue is used for scaling - preprocessor: number of messages in the `tiscale.preprocessing` queue is used for scaling - preprocessor-unpacker: number of messages in the `tiscale.preprocessing_unpacker` queue is used for scaling ### Scaling configuration | Value | Description | 
|:----|:----| | `enabled` | enables/disables auto-scaling | | `maxReplicas` | maximum number of replicas that can be deployed when scaling is enabled | | `minReplicas` | minimum number of replicas that need to be deployed | | `pollingInterval` | interval to check each trigger (in seconds) | | `scaleUp` | configuration values for scaling up | | `stabilizationWindow` | number of continuous seconds in which the scaling condition is met (when reached, scale up is started) | | `numberOfPods` | number of pods that can be scaled in the defined period | | `period` | interval in which the numberOfPods value is applied | | `scaleDown` | configuration values for scaling down | | `stabilizationWindow` | number of continuous seconds in which the scaling condition is not met (when reached, scale down is started) | | **CPU scaling** | | | `triggerCPUValue` | CPU value (in percentage), which will cause scaling when reached; the percentage is taken from the `resources.limits.cpu` value; limits must be set | | **Queue scaling** | | | `targetInputQueueSize` | Number of waiting messages to trigger scaling on; unacknowledged messages are excluded | --- ## SpectraDetect Central SDM — Federated Manager Monitoring and YARA Policy Sync The Central Spectra Detect Manager (CSDM) is a management overlay that operates on top of deployed SDM instances. It provides a high-level, aggregated operational dashboard across all deployed SDM environments and ensures consistent security policies are maintained across the entire federated SDM deployment topology. ## Overview The current Spectra Detect Manager (SDM) implementation supports both the Open Virtual Appliance (OVA) monolith and microservice deployment models, but cannot manage both simultaneously within a single SDM instance. 
This limitation arises because the configuration switches that govern the instance's deployment behavior are implemented as global parameters. The CSDM addresses this limitation by providing: - **Centralized monitoring**: Aggregated operational dashboard across all deployed SDM environments - **Policy synchronization**: Consistent YARA rule distribution across the entire federated SDM deployment topology - **Multi-deployment support**: Ability to manage both OVA and Kubernetes deployments from a single interface, supporting migration from legacy deployment models ### What Central SDM does not do The CSDM operates as a monitoring and YARA rule management plane. It does not replace the Spectra Detect Manager for day-to-day configuration tasks. The following operations remain on each local SDM: - Connector configuration (ingress and egress) - User management and access control - Quota settings - Classification change notifications - Central Configuration settings Local SDMs continue to serve as the only integration point for Worker and Spectra Analyze appliances. The CSDM is not a direct integration endpoint. ## Requirements - All local SDMs registered with the Central SDM must run version 5.7.2 or later - OpenID Connect (OIDC) authentication is strongly recommended for Single Sign-On (SSO) functionality ## Architecture The CSDM uses a centralized approach where a single SDM instance manages all deployments, providing a unified view over the entire system while respecting the specifics of different deployment types. The CSDM transparently aggregates groups from different environments. ### Design approach The CSDM architecture is based on an API-driven model where the CSDM acts as a lightweight orchestrator rather than a complete data repository. It relies on APIs to communicate with local SDMs, sending requests to relevant instances when information is needed or changes must be made. Each local SDM remains the true source of its own data. 
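The API-driven model can be sketched as a thin dispatcher that looks up the owning local SDM for a resource and builds the forwarded request. This is a minimal illustration under stated assumptions: `LocalSdm`, `REGISTRY`, and `route_request` are hypothetical names, not part of the product API.

```python
# Sketch of the CSDM's API-driven model: the CSDM keeps only a registry of
# local SDMs and forwards requests to the instance that owns the data.
# All names here are hypothetical, not product APIs.
from dataclasses import dataclass

@dataclass
class LocalSdm:
    sdm_id: str
    url: str
    deployment_type: str  # "ova" or "kubernetes"

# Registry mirroring the kind of information held in the `sdmList` configuration
REGISTRY = {
    "vm-1": LocalSdm("vm-1", "https://10.200.1.1", "ova"),
    "k8s-1": LocalSdm("k8s-1", "https://fourth.detect-dev.reversinglabs.com/", "kubernetes"),
}

def route_request(sdm_id: str, endpoint: str) -> str:
    """Build the target URL for a forwarded request; the local SDM remains
    the source of truth, so nothing is cached on the CSDM side."""
    sdm = REGISTRY[sdm_id]
    return sdm.url.rstrip("/") + "/" + endpoint.lstrip("/")
```

Because the CSDM holds only this routing information, removing or adding a local SDM is a registry change rather than a data migration.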
All communication from the CSDM to local SDMs is initiated by the CSDM. This design simplifies network configuration and avoids the need for complex, direct connections between components.

### Local Spectra Detect Manager registration

Local Spectra Detect Manager (SDM) instances are registered with the Central SDM through configuration. This registration specifies the necessary parameters for the SDM's integration into the broader architecture.

### Data ingestion and M2M communication

The Central SDM collects operational data and telemetry from the registered local SDMs using the standard SDM API. Authentication for this Machine-to-Machine (M2M) communication is secured through a token-based mechanism. The CSDM operates solely as a centralized monitoring and management plane.

### Communication patterns

#### API requests

The CSDM sends API requests to local SDMs when configuration changes or information retrieval is needed. For Kubernetes deployments, a Manager component serves as a proxy between the CSDM and individual appliances. The appliance ID is included as part of the request, and the Manager forwards the request to the specific hub or worker.

#### Appliance events

Events from appliances are collected by local SDMs and stored in persistent storage. The CSDM pulls events from local SDMs on a regular polling schedule, reversing the typical communication direction for event handling.

### YARA rule synchronization

#### Centralized rule authority

The CSDM maintains the master copy and serves as the single source of truth for all YARA rules. The CSDM is responsible for aggregating and resolving rule conflicts from all connected Detect and Analyze appliances.

#### Distribution mechanism

The CSDM pushes YARA rules to local SDMs. For Kubernetes deployments, local SDMs serve as repositories from which workers pull rules. This maintains the standard worker behavior while ensuring centralized rule management.
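The version comparison behind this synchronization model can be illustrated with a short sketch. The state names are the ones used elsewhere in this document; `resolve_sync_state` itself is a hypothetical helper, not the product's implementation.

```python
# Illustrative helper: resolve a Worker's YARA synchronization state by
# comparing rule-set versions. Not the product implementation.
def resolve_sync_state(csdm_version, worker_version):
    """Compare the CSDM's master rule-set version with a Worker's version."""
    if worker_version is None:
        return "NoRules"      # the Worker has not received any rules yet
    if worker_version == csdm_version:
        return "InSync"       # latest rule set fetched and applied
    return "OutOfSync"        # the Worker is on an older rule-set version
```

A local SDM could evaluate a check like this for each of its registered Workers and report the aggregate state upstream.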
#### Upgrade path and initial data seeding

An existing cluster or set of clusters can be switched to the CSDM. During this transition, the existing single SDM is temporarily designated as the initial source, and its active YARA rules are imported into the CSDM repository. After import, all local SDMs are required to use the YARA rules managed by the CSDM.

#### Request forwarding

Spectra Analyze appliances continue to direct requests to their respective local SDMs. However, the local SDMs are configured to forward all YARA-related requests from both Spectra Analyze appliances and Workers to the CSDM for centralized processing. Sample-specific operations such as egress configuration and retro hunting remain local.

Worker synchronization is also forwarded to the CSDM. The local SDM resolves synchronization status by fetching the rule versions from the CSDM and comparing them with the Workers' statuses.

## Configuration

### Central SDM

The Central SDM (CSDM) pulls operational telemetry from local SDM instances. Local SDM discovery information is configured in the `multiRegion` section of the CSDM's ConfigMap. Local SDMs are enumerated here, and the overall multi-region polling feature is activated by setting the `enabled` flag to `true`.

```yaml
multiRegion:
  enabled: true
  initialYaraSyncPullId: vm-1
  sdmList:
    - id: vm-1
      name: "OVA First"
      url: https://10.200.1.1
      type: "ova"
    - id: vm-2
      name: "OVA Second"
      url: https://10.200.2.2/
      type: "ova"
    - id: k8s-2
      name: "AWS Kubernetes Third"
      url: "https://third.detect-dev.reversinglabs.com/"
      type: "kubernetes"
    - id: k8s-1
      name: "AWS Kubernetes Fourth"
      url: "https://fourth.detect-dev.reversinglabs.com/"
      type: "kubernetes"
```

#### Configuration parameters

- **enabled**: Activates or deactivates Central SDM multi-region polling
- **initialYaraSyncPullId**: Specifies which local SDM environment serves as the initial source of YARA rules.
This must match one of the `id` values in the `sdmList`.
- **sdmList**: Defines all local SDMs that the Central SDM monitors
  - **id**: Internal identifier for the SDM (used by APIs and UI routing)
  - **name**: Display name shown in the Central SDM UI
  - **url**: Base URL of the local SDM
  - **type**: Deployment type (`ova` or `kubernetes`)

#### Initial YARA rule synchronization

When configuring Central SDM, exactly one environment is designated as the initial YARA rule source via the `initialYaraSyncPullId` parameter. This environment's Spectra Analyze appliances should contain the YARA rulesets that will be distributed to all other environments.

All other Spectra Analyze appliances must start without existing rulesets. They will receive their initial rulesets through synchronization from the designated source environment. After the initial synchronization is complete, the Central SDM maintains a consistent ruleset across all environments.

### Local SDM

The local Spectra Detect Manager (SDM) generally operates independently of the CSDM. The exception to this independence is the YARA rule synchronization process. The local SDM assumes a simple consumer role, fetching only the master YARA rule set from the CSDM.

This behavior is configured exclusively on the CSDM side via an API operation. Refer to the Central SDM API documentation for endpoint details. This design decision ensures compatibility with deployment environments, such as OVA-based clusters, where configuration through a ConfigMap is not available.

Local SDMs are registered with the Central SDM. No separate configuration on each local SDM is required beyond standard YARA synchronization with Spectra Analyze and workers.

## Using Central SDM

### Authentication

Authentication for the Central SDM is configured to be consistent with the approach used for the local Spectra Detect Manager (SDM) instances.
It is strongly recommended that OpenID Connect (OIDC) be deployed as the primary authentication mechanism. Using OIDC enables Single Sign-On (SSO), allowing users to seamlessly transition between the CSDM interface and local SDM management consoles without re-authenticating.

For machine-to-machine communication, local SDMs are authenticated with the CSDM using token-based authentication. Workers are authenticated with their local SDM instances as needed.

### Landing page

The CSDM UI is similar to the regular SDM UI, but with reduced functionality, most of which is located on a single landing screen. On the left side, it shows a list of the configured SDMs together with their type icon (VM/Kubernetes), name, and status.

![image](./images/CSDM.png)

SDM statuses are:

- **Unknown**: The status has not yet been determined
- **Online**: Everything is operating normally
- **Degraded**: Some hubs or workers are down
- **Error**: Some component reported an error
- **Offline**: Secondary unavailable
- **Maintenance**: Maintenance or restart in progress
- **Unauthorized**: Invalid primary authentication configuration

### Appliances tab

The Appliances tab on the CSDM interface largely mirrors the core status presentation found in a standalone local SDM. However, it incorporates CSDM-specific controls that act as a centralized gateway for managing individual instances. Added controls:

- **Manage button**: Launches the UI of the specific local SDM in a new browser tab. This is an authenticated launch (preferably via SSO) for direct configuration and troubleshooting.

### YARA synchronization

The overview screen provides a granular status display, detailing the current sync state for each appliance.
Possible synchronization states include:

- **InSync**: The local SDM has successfully fetched and applied the latest rule set
- **OutOfSync**: The local SDM is using an older version of the rules
- **Error**: The last attempted synchronization failed
- **Unavailable**: The CSDM cannot establish communication with the local SDM endpoint
- **PendingNew**: A new rule set is available on the CSDM, awaiting local SDM retrieval
- **Disabled**: Synchronization for this local SDM has been administratively deactivated
- **NoRules**: The local SDM has not yet received any rules from the CSDM

## Upgrading to Central SDM

Upgrading from a single existing local SDM to Central SDM is the supported migration path. The process involves designating an existing SDM as the initial YARA rule source and importing its active rules into the Central SDM.

### Migration considerations

- All SDMs in a Central SDM deployment share the same YARA rules
- The upgrade requires a maintenance window during which YARA synchronization is paused and rules are migrated
- Environments with multiple pre-existing SDMs that have independent rulesets require manual alignment before migration

### Migration procedure

1. Prepare the existing SDM that will serve as the initial rules source:
   - Place the SDM into maintenance mode
   - Stop YARA synchronization between Spectra Analyze and the SDM
   - Export active YARA rules from this SDM
2. Import rules into the Central SDM
3. Configure the local SDM to consume rules from the Central SDM
4. Re-enable synchronization (Central SDM to local SDM to workers)
5. Add additional SDMs to the Central SDM configuration.
These will automatically adopt the centralized ruleset

## Limitations and considerations

### Functional scope

The Central SDM UI does not include the following features, which remain available only on each local Spectra Detect Manager instance:

- Central Configuration page
- Connector settings (ingress and egress configuration)
- Classification change notifications
- Local user and access management

### Operational considerations

- The Central SDM does not push configuration changes to local SDMs. It serves as a monitoring and YARA rule authority only
- If you need to decommission the Central SDM or remove an instance from it, coordinate the change with support to avoid interruptions in rule management
- All connected SDMs must share a single, unified YARA ruleset managed by the Central SDM

---

## SpectraDetect Deployment — OVA, AMI, Kubernetes EKS, and Multi-Region

Spectra Detect appliances are available as OVA, AMI, or container images. Use your platform’s standard deployment process, and refer to the guides below for detailed instructions.

Additionally, you can deploy Spectra Detect using Kubernetes. Kubernetes is an open source platform installed on Linux servers according to its official instructions; it supports several Linux distributions, including Ubuntu 24.04 LTS and Debian 12.

## Deployment Guides

---

## SpectraDetect Multi-Region Deployment — High Availability and Data Residency

A multi-region deployment distributes Spectra Detect across two or more geographic locations to achieve high availability, disaster recovery, and data residency compliance. Files are analyzed in the region where they are submitted, and a centralized SDM cluster provides unified management across all regions.
## Architecture ![Spectra Detect multi-region deployment](../images/spectra-detect-multi-region.png) A single load balancer sits in front of both regions and receives all incoming traffic from file sources (email gateways, web proxies, endpoint agents, file storage, and cloud buckets). It routes each request to the active region based on health, geography, or routing policy. Within each region, an SDM node and a Hub node run as an active/standby pair. Region A runs the active SDM and the active Hub; Region B runs the standby SDM and standby Hub, kept in sync via state replication. Workers run as a cluster in each region and process files submitted by the local Hub. A worker in Region B handles overflow or failover from Region A when needed. Analysis results flow from the Worker clusters through the Hubs and out to the configured output destinations — SOAR, SIEM, EDR, sandbox, and threat intelligence platforms — regardless of which region performed the analysis. ## When to use multi-region deployment Use a multi-region deployment when you need: - **Data residency** — files must be analyzed in a specific jurisdiction and must not leave that region. - **Low latency** — file sources in different geographies connect to the nearest Hub rather than routing across continents. - **High availability** — a regional outage does not stop file analysis in other regions. - **Disaster recovery** — the SDM failover ensures management continuity even if one data center is lost. 
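The failover behavior described above can be sketched as a simple routing decision: pick the first healthy region in priority order, as a global load balancer would. The function and region names are hypothetical; in practice this logic lives in the load balancer, not in application code.

```python
# Illustrative failover routing decision: first healthy region in priority
# order wins. Hypothetical names; real routing is performed by the load
# balancer (health checks, geography, or latency policies).
def pick_region(health, priority):
    """Return the first healthy region, or None if all regions are down."""
    for region in priority:
        if health.get(region, False):
            return region
    return None

# Region A is down, so traffic fails over to Region B.
pick_region({"region-a": False, "region-b": True}, ["region-a", "region-b"])
# → 'region-b'
```

Geographic and latency-based routing follow the same shape, with the priority list computed per client instead of being fixed.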
## Components | Component | Role in multi-region | |---|---| | SDM load balancer | Routes SDM API/UI traffic to the active SDM node; fails over to standby across regions | | SDM cluster (active + standby) | Central control plane; state-synced between nodes | | Hub load balancer | Distributes file submission traffic to regional Hub clusters by geography or policy | | Regional Hub cluster (active + standby) | Ingests files and distributes them to Workers in the same region | | Regional Worker cluster | Performs file analysis locally; scales independently per region | ## Prerequisites Before deploying across multiple regions: - An SDM instance must be reachable from all Hub clusters (consider network peering or a transit gateway between regions). - Each regional Hub and Worker cluster must be deployed and registered with SDM before enabling cross-region routing. - TLS certificates must cover all regional endpoints, or a wildcard/SAN certificate must be issued for the load balancer and regional hostnames. **Load balancer requirements by platform:** | Platform | SDM load balancer | Hub load balancer | |---|---|---| | On-premises data center | Customer-provided hardware or software load balancer (for example, F5 BIG-IP, HAProxy, or NGINX) with health-check and failover support | Customer-provided load balancer with geographic or policy-based routing between data center sites | | AWS | AWS Global Accelerator or Application Load Balancer (ALB) | AWS Global Accelerator with endpoint groups per region, or Route 53 latency-based or geolocation routing | | Microsoft Azure | Azure Front Door or Azure Load Balancer with Traffic Manager | Azure Front Door with origin groups per region, or Azure Traffic Manager with geographic routing | | Google Cloud | Google Cloud Load Balancing (global external HTTP(S) load balancer) | Google Cloud Load Balancing with backend services per region, or Cloud DNS with geolocation routing policies | ## Deployment steps ### 1. 
Deploy the SDM cluster Deploy SDM in your primary region following the appropriate [deployment guide](./index.md) for your platform (OVA, AMI, or Kubernetes). Configure the standby SDM node in a second region and enable state synchronization between them. Place a load balancer (or DNS failover) in front of both SDM nodes so that API and UI clients use a single hostname regardless of which node is active. ### 2. Deploy regional Hub and Worker clusters For each region: 1. Deploy a Hub cluster (active + standby) following the appropriate [deployment guide](./index.md) for your platform. 2. Deploy a Worker pool in the same region and register it with the local Hub. 3. Register the Hub cluster with SDM using the SDM management API or UI. Repeat for each region. Each region should have its own Hub cluster and Worker pool that operate independently. ### 3. Configure the Hub load balancer Configure your global load balancer to route incoming file submission traffic to the correct regional Hub cluster. Common routing strategies: - **Geographic routing** — route requests to the nearest region based on the client's location. - **Latency-based routing** — route to the region with the lowest measured latency from the client. - **Failover routing** — designate a primary region and fail over to a secondary if the primary becomes unhealthy. ### 4. Verify cross-region operation After deployment: 1. Submit a file from each region and confirm it is analyzed by Workers in the correct regional cluster. 2. Check the SDM dashboard and confirm all regional Hubs and Workers appear as connected and healthy. 3. Test SDM failover by stopping the active SDM node and confirming the standby takes over without interrupting Hub connectivity. 4. Confirm YARA rules deployed through SDM propagate to Workers in all regions. ## Considerations **YARA rule synchronization** — SDM pushes YARA rule updates to all registered Workers across all regions. 
Ensure network connectivity between SDM and every regional Worker cluster. **License** — each Worker node consumes a license seat regardless of region. Contact your ReversingLabs account team to confirm your license covers the total Worker count across all regions. **Analysis results** — by default, each regional Hub stores results locally. Configure S3 output or webhook forwarding if you need results aggregated centrally. **SDM availability** — if the SDM cluster is unreachable, Hubs and Workers continue processing files using their last-known configuration. YARA rule updates and license checks are paused until SDM connectivity is restored. --- ## Spectra Detect File Analysis — Worker API, Reports, and Spectra Core Results Spectra Detect Worker analyzes files submitted via the [Worker API](../API/UsageAPI.md) and produces a detailed analysis report for every file using the built-in Spectra Core static analysis engine. Analysis reports can be retrieved in several ways, depending on the Worker configuration. It is also possible to control the contents of the report to an extent. ## Retrieving Analysis Reports There are two ways to get the file analysis report(s): 1. The [Get information about a processing task](../API/UsageAPI.md#get-information-about-a-processing-task-taskinfo) endpoint. Sending a GET request to the endpoint with the task ID returns the analysis report in the response. 2. Saved on one of the configured integrations. - **S3** - for hosted deployments, this is the only supported integration - Microsoft services (Azure Storage, SharePoint, OneDrive) - file shares (NFS, SMB) - Splunk - callback server ## Adding Custom Data to the Report Users can also save any custom data in the analysis report by submitting it in the file upload request. The `custom_data` field accepts any user-defined data as a JSON-encoded payload. 
This data is included in all file analysis reports (Worker API, Callback, AWS S3, Azure Data Lake and Splunk, if enabled) in its original, unchanged form. The `custom_data` field will not be returned in the [Get information about a processing task](../API/UsageAPI.md#get-information-about-a-processing-task-taskinfo) endpoint response if the file has not been processed yet. Users should avoid using `request_id` as a key in their `custom_data`, as that value is used internally by the appliance. **Example - Submitting a file with the custom_data parameter to store user-defined information in the report** ```bash curl https://tiscale-worker-01/api/tiscale/v1/upload -H 'Authorization: Token 94a269285acbcc4b37a0ad335d221fab804a1d26' -F file=@Classification.pdf -F 'custom_data={"file_source":{"uploader":"malware_analyst", "origin":"example"}}' ``` ## Customizing Analysis Reports There are several different ways of customizing an analysis report: 1. through report configuration 2. through report **types** 3. through report **views** These methods are not mutually exclusive and are applied in the order above (configuration first, then report type, then report view). For example, to even be present for later filtering/transforming, **strings** found in a file must be included in the report. **Report types** are results of *filtering* the **full** report. In other words, fields can be included or excluded as required. On the other hand, **report views** are results of *transforming* parts of the report, such as field names or the structure of the report. Historically, views could also be used to just filter out certain fields without any transformations, and this functionality has been maintained for backward compatibility. However, filtering-only views should be replaced by their equivalent report types as they are much faster. As previously mentioned, filtering and transforming actions are not mutually exclusive. 
You can filter out some fields (using a report type), and then perform a transformation on what remains (using a report view). However, not all report views are compatible with all report types, because some report views expect certain fields to be present.

### Report Types

Report types are JSON configuration files with the following format:

```json5
{
  "name": "string",
  "exclude_fields": true,
  "fields": {
    "example_field": false,
    "another_example": {
      "example_subfield": false,
      "another_subfield": false
    }
  }
}
```

The following report types are available by default:

- `small` - Contains only the classification of the file and some information about the file.
- `extended_small` - Contains information about file classification, information about the file, the `story` field, `tags`, and `interesting_strings`.
- `medium` - This is the **default report** that’s served when there are no additional query parameters (in other words, it’s not necessary to specifically request this report, as it’s sent by default).
  - It is equivalent to the previous "summary" report, with some small differences:
    - each subreport contains an `index` and `parent` field
    - if `metadata.application.capabilities` is 0, this field is not present in the report
  - Changes in this report:
    - excludes the entire `relationships` section
    - excludes certain fields under the `info` section, such as `warnings` and `errors`
    - many `metadata` fields are not present, such as those related to certificates
    - there are no strings, no story, and no tags
- `large` - Includes every single field present in the analysis report. It is equivalent to the previous "full" report (`?full=true`).

**Note:** You can upload custom report types to the appliance through **Central Configuration > Egress Integrations**, where you can also manage other settings for each integration.
Custom report types are visible for all integrations, regardless of which section was used to upload them.

---

Report types that replace report views with the same name:

- `classification` - Returns only the classification of the file, the story, and the info. It has no metadata except the `attack` field.
- `classification_tags` - Same as the `classification` type, with the addition of Spectra Core tags.
- `extended` - Compared to the default (`medium`) type, additionally contains: all metadata, relationships, and tags; the `story` field; and, under `info`, statistics and unpacking information.
- `mobile_detections` - Contains mobile-related metadata, as well as classification and story.
- `mobile_detections_v2` - Contains more narrowly defined mobile metadata, with an exclusive focus on Android. Also contains classification and story.
- `short_cert` - Contains certificate- and signature-related metadata, as well as indicators and some classification info.

The `name` of the report type is the string you’ll refer to when calling the [Get information about a processing task](../API/UsageAPI.md#get-information-about-a-processing-task-taskinfo) endpoint (or the one passed to the relevant configuration command). For example, if the name of your report type is `my-custom-report-type`, you would include it in the query parameters as follows: `?report_type=my-custom-report-type`.

The `exclude_fields` field defines the behavior of report filtering. This field is optional and `false` by default, which means that the report fields under `fields` will be **included** (you explicitly say which fields you **want**). Conversely, if this value is set to `true`, the report fields under `fields` will be **excluded** (you explicitly say which fields you **don’t want**).

The `fields` nested dictionary contains the fields that are either included or excluded (depending on the value of `exclude_fields`).
If a subfield is set to a boolean value (`true`/`false`), the inclusion/exclusion applies to **that section and all sections under it**. For example, `small.json`:

```json
{
  "name": "small",
  "fields": {
    "info": {
      "file": true,
      "identification": true
    },
    "classification": true
  }
}
```

In this configuration, we’re explicitly **including** fields (`exclude_fields` was not set, so it’s `false` by default). Setting individual fields to `true` will make them (and their subfields) appear in the final report. In other words, the only sections in the final report will be the entire `classification` section and the `file` and `identification` fields from the `info` section. Everything else will not be present.

Or, `exclude-example.json`:

```json
{
  "name": "exclude-example",
  "exclude_fields": true,
  "fields": {
    "relationships": false,
    "info": {
      "statistics": false,
      "binary_layer": false
    }
  }
}
```

In this configuration, the entire `relationships` section is excluded, as well as `statistics` and `binary_layer` from the `info` section. Everything else will be present in the report.

#### Limitations

- `info.file` cannot be excluded and is always present.
- **Lists cannot be partially filtered.** If you include a list field (such as `indicators` or `interesting_strings`), all items in that list are included with all their primitive fields. You cannot selectively include or exclude individual fields within list items. For example, if you want to include `indicators` but only show the `priority` field:

```json
{
  "name": "custom-report",
  "fields": {
    "indicators": {
      "priority": true
    }
  }
}
```

This will **not** work as expected. The entire `indicators` list will be included with all fields (`priority`, `category`, `description`, `reasons`, etc.).
This limitation applies to all list types in the report schema, including: - `indicators` (struct-list) - `interesting_strings` (struct-list) - `strings` (struct-list) - `scan_results` (struct-list) - `tags` (list) - `story` (struct-list) You can include or exclude these lists as a whole, but you cannot selectively filter individual fields within list items. This applies regardless of whether you use include mode (`exclude_fields: false`) or exclude mode (`exclude_fields: true`). - **Primitive fields cannot be excluded if they share a level with an included field**. Primitive types (*string*, *number*, *boolean*, *null*) on the same level as an included field, or above it, will always be included. Only complex objects (structs) and lists can be excluded at those levels. Example structure: ```json { "classification": { "propagated": false, "classification": 3, "factor": 2, "result": "Win32.Trojan.Generic", "rca_factor": 7, "scan_results": [ { "name": "TitaniumCore RHA1", "classification": 3 } ], "propagation_source": { "sha1": "a94a8fe5ccb19ba61c4c0873d391e987982fbbd3" } } } ``` If you include only `result` (the threat name): - `propagated`, `classification`, `factor`, `rca_factor` are included (primitives on the same level as `result`) - `scan_results` and `propagation_source` are excluded (complex types not explicitly included) Filtering result: ```json { "classification": { "propagated": false, "classification": 3, "factor": 2, "result": "Win32.Trojan.Generic", "rca_factor": 7 } } ``` ### Report Views Views are transformations of the JSON analysis output produced by the Worker. For example, views can be used to change the names of some sections in the analysis report. There are also deprecated views that allow filtering fields in or out, but this functionality is covered by report types (see above). 
The following views are present by default (deprecated views are excluded): - `classification_top_container_only` - Returns a report view equivalent to the `classification` report **type** (see above), but for the top-level container (parent file). - `flat` - "Flattens" the JSON structure. Without flattening: ```json "tc_report": [ { "info": { "file": { "file_type": "Binary", "file_subtype": "Archive", "file_name": "archive.zip", "file_path": "archive.zip", "size": 20324816, "entropy": 7.9999789976332245, ``` With flattening: ```json "tc_report": [ { "info_file_entropy": 7.9999789976, "info_file_file_name": "archive.zip", "info_file_file_path": "archive.zip", "info_file_file_subtype": "Archive", "info_file_file_type": "Binary", ``` - `flat-one` - Returns the `flat` report, but only for the parent file. - `no_goodware` - Returns a short version of the report for the top-level container, and any children files that are suspicious or malicious (goodware files are filtered out). This view is not compatible with split reports. - `no_email_indicator_reasons` - Strips potential PII (personally identifiable information) from some fields in analysis reports for email messages, and replaces it with a placeholder string. - `splunk-mod-v1` - Transforms the report so that it’s better suited for indexing by Splunk. The changes are as follows: - if `classification` is 0 or 1, `factor` becomes `confidence` - if `classification` is 2 or 3, `factor` becomes `severity` - a `string_status` field is added with the overall classification (UNKNOWN, GOODWARE, SUSPICIOUS, MALICIOUS) - scanner `name` becomes `reason` - scanner `result` becomes `threat` Views can generally be applied to both split (available in self-hosted deployments) and non-split reports. If none of these views satisfy your use case, contact [ReversingLabs Support](mailto:support@reversinglabs.com) to get help with building a new custom view. 
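The transformation performed by the `flat` view can be illustrated with a short sketch that joins nested keys with underscores. This is a toy re-implementation for dict fields only, not the Worker's actual code: it ignores lists and the float rounding visible in the example output above.

```python
# Toy sketch of a flat-style transformation: nested dict keys are joined
# with underscores. Not the Worker's implementation; lists and float
# rounding are intentionally ignored here.
def flatten(report, prefix=""):
    flat = {}
    for key, value in report.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))  # descend with extended prefix
        else:
            flat[name] = value
    return flat

flatten({"info": {"file": {"file_name": "archive.zip", "size": 20324816}}})
# → {'info_file_file_name': 'archive.zip', 'info_file_size': 20324816}
```

Flat keys like `info_file_file_name` are what make the report convenient for tools that expect a single-level key/value structure.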
## Interpreting the report After sending files to be processed, you will receive a link to a JSON report. It contains a `tc_report` field, which looks something like this: ```json "tc_report": [ { "info": { "file": { "file_type": "Text", "file_subtype": "Shell", "file_name": "test.sh", "file_path": "test.sh", "size": 35, "entropy": 3.7287244452691413, "hashes": [] } }, "classification": { "propagated": false, "classification": 0, "factor": 0, "scan_results": [ {} ], "rca_factor": 0 } } ] ``` ### High-level overview The key information here is the classification value (`tc_report[].classification.classification`), which will be a number from 0 to 3: | `classification` | description | |------------------|------------------| | 0 | no threats found | | 1 | goodware | | 2 | suspicious | | 3 | malicious | ### More information For more information, use the `tc_report[].classification.rca_factor` field. The higher its value, the more dangerous the threat, except for files that weren’t classified (their `classification` is 0). In that case, `rca_factor` will be 0 and will not signal trustworthiness. For even more information on why a file was given a certain classification, look at the `scan_results`. This field contains the individual scanners which processed the file (`name`), as well as their reason for classifying a file a given way (`result`). The following table maps the `classification` value to the old "trust factor" and "threat level" values, and the new "RCA factor" value which replaces them. It also provides a mapping to a color-coded severity value, and provides general commentary with examples regarding the origin of any given classification. ### Risk tolerance profiles Based on your risk tolerance, you can set up the downstream systems (for example, a SIEM/SOAR) to act on different values in the classification field. 
Here are two example risk tolerance profiles - adapt them to your own use case: #### Low risk tolerance - receive alerts on all possible threats (maximum number of matches) - act on both suspicious and malicious verdicts ```python classification = report['tc_report'][0]['classification']['classification'] if classification >= 2: alert_SOC() ``` #### High risk tolerance - receive alerts only on highly impactful threats ```python classification = report['tc_report'][0]['classification']['classification'] risk_score = report['tc_report'][0]['classification']['rca_factor'] if classification == 3 and risk_score >= 7: alert_SOC() ``` ## Spectra Core Spectra Detect Worker relies on the built-in Spectra Core static analysis engine to classify files and produce the analysis report. The file classification system can produce the following classification states: *goodware*, *suspicious*, *malicious*. With this classification system, any file found to contain a malicious file will be considered malicious itself if classification propagation has been enabled in the configuration. In the default configuration, propagation is enabled. Multiple technologies are used for file classification, such as: format identification (malware packers), signatures (byte pattern matches), file structure validation (format exploits), extracted file hierarchy, file similarity (RHA1), certificates, machine learning (for Windows executables), heuristics (scripts and fileless malware) and YARA rules. These are shipped with the static analysis engine, and their coverage varies based on threat and file format type. ## Classifying Files with Cloud-Enabled Spectra Core Spectra Core can be connected to Spectra Intelligence to use file reputation data. This data is not based solely on antivirus scanning results, but on the interpretation of the accuracy of those results by ReversingLabs, as well as on the analyst-provided (manual) classification overrides. 
Note that the only information Spectra Core submits to the Spectra Intelligence cloud is the file hash. Connecting Spectra Core to the cloud will add threat reputation to the `scan_results` in the report, for example: ```json { "ignored": false, "type": "av", "classification": 0, "factor": 0, "name": "", "rca_factor": 0 } ``` To connect Spectra Core to Spectra Intelligence, in the Manager, go to *Central Configuration > Spectra Intelligence*. ### How Spectra Intelligence Enhances Spectra Core Classification **Threat Naming Accuracy** When classifying files, Spectra Core takes all engines listed in the analysis report (Spectra Intelligence included) into consideration. Based on their responses, it selects the technology that provides the most accurate threat naming. More specific methods that identify the malware family more accurately are given precedence. Generic and heuristic classification are picked last, and only if there is no better-named classification response. Spectra Intelligence generally returns specific threat names, and it will be selected as authoritative if a better option is not available. It can also enhance Spectra Core classification results. For instance, Spectra Core machine learning can classify malware only with heuristic, non-named classification. If Spectra Core finds ransomware via machine learning, the threat name will appear as *Win32.Ransomware.Heuristic*. However, if Spectra Core is connected to Spectra Intelligence, the cloud response can change the threat name to something better-defined, such as *Win32.Ransomware.GandCrab*. This helps users understand exactly which malware family they are dealing with, as opposed to just the threat type (ransomware). 
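The naming precedence described above can be approximated as follows. This is a hedged sketch, not the actual selection algorithm: it assumes heuristic detections are recognizable by the `.Heuristic` suffix from the example above, and that each scanner entry carries its threat name in the `result` field.

```python
def pick_threat_name(scan_results):
    """Prefer a specifically named family (e.g. Win32.Ransomware.GandCrab)
    over a generic/heuristic detection (e.g. Win32.Ransomware.Heuristic)."""
    names = [r["result"] for r in scan_results if r.get("result")]
    specific = [n for n in names if not n.endswith(".Heuristic")]
    return (specific or names or [None])[0]

results = [
    {"name": "ml", "result": "Win32.Ransomware.Heuristic"},
    {"name": "cloud", "result": "Win32.Ransomware.GandCrab"},
]
print(pick_threat_name(results))  # Win32.Ransomware.GandCrab
```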
**Whitelisting and Goodware Overrides** When not connected to Spectra Intelligence, Spectra Core can classify files as goodware either based on digital certificates that were used to sign the files, or via graylisting - a system that can declare certain file types as goodware based on the lack of code detected within them. When connected to Spectra Intelligence, whitelisting can be expanded based on file reputation and origin information. As a result, the number of unknown files (files without classification) will be significantly reduced. Users also get an insight into the trustworthiness of whitelisted files measured through trust factor values. If classification propagation is enabled on Spectra Core, a whitelisted file can still be classified as suspicious or malicious if any of its extracted files is classified as suspicious or malicious. Goodware overrides is a feature designed to prevent this. When enabled, it ensures that any files extracted from a parent and whitelisted by certificate, source, or user override can no longer be classified as malicious or suspicious. Spectra Core automatically calculates the trust factor of the certificates before applying goodware overrides, and does not use certificates with the trust factor lower than the user-configured goodware override factor. With goodware overrides enabled, classification is suppressed on any files extracted from whitelisted containers. In this case, whitelisting done by either Spectra Intelligence or certificates will be the final classification outcome. Spectra Core will still report all malicious files it finds, but they won’t be displayed as the final file classification. This feature allows for more advanced heuristic classifications that have a chance of catching supply chain attacks. As those rules tend to be noisy, they can be suppressed by using this feature. The user can still see all engine classification results, and can use them to proactively hunt for possible supply chain attacks. 
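The interaction between classification propagation and goodware overrides can be sketched as follows. This is a deliberate simplification, not the actual engine logic; classification codes are as defined earlier (0 unknown, 1 goodware, 2 suspicious, 3 malicious), and certificate trust factor handling is omitted.

```python
def final_classification(container_cls, children_cls, overrides_enabled,
                         container_whitelisted):
    """Sketch of propagation vs. goodware overrides.

    Codes: 0 unknown, 1 goodware, 2 suspicious, 3 malicious.
    """
    worst_child = max(children_cls, default=0)
    if worst_child >= 2:          # a suspicious/malicious child propagates up...
        if overrides_enabled and container_whitelisted:
            return 1              # ...unless the whitelisted container is final
        return worst_child
    return container_cls

print(final_classification(1, [3], overrides_enabled=True, container_whitelisted=True))   # 1
print(final_classification(1, [3], overrides_enabled=False, container_whitelisted=True))  # 3
```

Even in the suppressed case, the malicious child would still be reported in the analysis details; only the final classification of the container is overridden.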
Note that the goodware overrides will not apply if the extracted file is found by itself (for example, if an extracted file is rescanned without its container). ### Mapping Spectra Intelligence and Spectra Core Classification States When Spectra Core is connected to Spectra Intelligence, it uses a combination of two classification systems - Spectra Core file classification and Malware Presence (MWP). While their *malicious* and *suspicious* classification states translate well from one to another, the mapping from MWP *known* to Spectra Core *goodware* does not. By default, Spectra Core converts all MWP *known* states to *goodware*. This can be problematic when false negative scans happen in the Spectra Intelligence cloud: the cloud declares a file as *known*, but in reality, the file is undiscovered malware. Such files generally have a low trust factor value. To resolve those issues, users can rescan files classified as non-malicious to confirm whether they are false negatives, or configure Spectra Core to map Spectra Intelligence *known* to *goodware* based on the trust factor. Users can configure the MWP Goodware Factor value, which defines the threshold for automatically converting MWP *known* to Spectra Core *goodware*. When this value is configured, instead of converting *known* to *goodware* in all cases, Spectra Core will only convert it when a file has a trust factor value equal to or lower than the configured one. By default, the value is set to 5 (convert all), and the recommended value is 2. With that setting, if a file classified as *known* in the cloud has a trust factor greater than 2, Spectra Core will not consider that file as *goodware*. It will be considered unclassified (with no threats found), and its cloud classification will not be present in the list of scanners in the Spectra Core analysis report. ## Spectra Detect Decision Matrix The following section explains how to interpret the classification data provided by Spectra Detect.
The intent is to maximize the effectiveness of malicious classifications, while reducing the negative impact false positive detections might have. Start the decision-making process by looking at the top-level file, found first in the Spectra Detect report. Perform the following checks in order. 1. **No Threats Found classification** If the value of `tc_report.classification.classification` is 0 (no threats found), Spectra Detect detected no threats and the file is not present in ReversingLabs Cloud (or in the T1000 database). The sample could receive a classification at a later date, or it can be uploaded for analysis to the Spectra Intelligence cloud. After cloud classification, the sample may be marked as 3 (Malicious), 2 (Suspicious) or 1 (Known/Goodware). ReversingLabs reserves the "Known/Goodware" classification only for samples that the classification algorithm deems as trustworthy. If a sample remains unclassified, analysis did not find malicious intent at that time, but the sample and its metadata are not trustworthy enough to be declared "Known/Goodware." 2. **Known classification** If the value of `tc_report.classification.classification` is 1 (known), the file has been analyzed and the analysis did not find any known threats. To determine if the file is goodware, trusted and clean, perform the following checks: 1. If the value of `tc_report.classification.factor` is 0 or 1, the file is goodware and it comes from a highly reputable source. 2. If the value of `tc_report.classification.factor` is 2 or 3, the file is clean and it comes from known public sources usually unrelated to malware infections. 3. If the value of `tc_report.classification.factor` is 4 or 5, the file is likely clean and it comes from known public sources, some of which have been known to carry malware in the past. Files with this factor can change to other classifications over time, or their factor can improve when they are found in better sources. 3. 
**Suspicious classification** If the value of `tc_report.classification.classification` is 2 (suspicious), the file has been analyzed and the analysis found a possibility that the file is a new type of threat. This classification category is reserved for static analysis and cloud reputation heuristics, and it can lead to false positives. Depending on the risk aversion profile, two approaches are advised: 1. High risk tolerance - suspicious classifications are allowed, as heuristics often trigger on files that were collected in a corrupted or truncated state before analysis. 2. Low risk tolerance - suspicious classifications are not allowed. However, filtering on specific reasons for suspicious classifications is still available via the `tc_report.classification.scan_results[0].name` field (the name reported by the first scanner in the list). 4. **Malicious classification** If the value of `tc_report.classification.classification` is 3 (malicious), the file has been analyzed and recognized as a known malware threat. Depending on the risk aversion profile, two approaches are advised: 1. High risk tolerance - malicious files are not allowed, but PUA (potentially unwanted applications) are. Lower-risk malware (PUA) has `tc_report.classification.factor` set to 1. If PUA are allowed, an additional filter that blocks files with a factor greater than 1 is advised. 2. Low risk tolerance - malicious files are not allowed regardless of classification reason. ## Classification Propagation Spectra Detect unpacks files during analysis, so it is possible to have a file that is classified based on its content. Any file that contains a malicious or suspicious file is also considered malicious or suspicious because of its content. ### Propagated Classification Suppression In some cases, unpacked files might contain files that are misclassified. These false positives are propagated to the top, and it may appear that the entire archive is malicious. 
Suppression of these classifications is possible, and is safe, under the following conditions: 1. The classification is caused by propagation. When this happens, the optional field `tc_report.classification.propagation_source` exists. 2. One or both of the following: - The top-level file has been found in a trusted source. Find the scanner named `av` in the `tc_report.classification.scan_results` scanner list. If it exists, and its classification is 1 (known), check its factor value. If the factor value is 0 or 1, the file is goodware and it comes from a highly reputable source. - The top-level file is signed by a trusted party. Find the scanner named "TitaniumCore Certificate Lists" in the `tc_report.classification.scan_results` scanner list. If it exists and its classification is 1 (known), the file is goodware regardless of its factor value. ### Propagation Suppression Example The following false positive scenario illustrates how the suppression logic is applied: 1. A file is considered whitelisted because it is signed by a trusted digital certificate. 2. It has been classified as known and highly trusted in ReversingLabs Cloud, and has no positive antivirus detections in ReversingLabs Cloud. 3. However, during extraction, one or more malicious files were detected inside the file, and one or more malicious detections have been declared as a false positive. The classification of the file that has been submitted to Spectra Detect is indicated in the top-level section. The presence of the `propagation_source` field indicates that the top-level file is considered malicious because it contains at least one malicious file. The SHA1 value in that field points to the malicious file, which is the origin of the final top-level classification. The scanners in the `scan_results` list enumerate all the factors that contributed to the final classification. For example, they might include: 1. 
Resulting classification from propagation: Classification is malicious ( `classification: 3` ) with `factor: 2` 2. Certificate whitelisting for the top-level file: Classification is goodware ( `classification: 1` ) with `factor: 0` 3. Cloud response for the top-level file: Classification is goodware ( `classification: 1` ) with `factor: 0` The suppression algorithm can be applied to this top-level file, as it is not only signed with a whitelisted certificate, but is also considered known and highly trusted in ReversingLabs Cloud. ## Deep Cloud Analysis Pipeline Diagram When Deep Cloud Analysis is enabled on Spectra Detect, the appliance uses the following pipeline to analyze file submissions. ```mermaid flowchart TD %% Compact layout with all-rectangle nodes; only YES, NO, SKIP, SCAN uppercase S[START] --> A[Is the hash in user_data hash block list?] A -->|YES| X1[SKIP] A -->|NO| B[Is the hash in the global hash block list?] B -->|YES| X2[SKIP] B -->|NO| C[Is full scan enabled?] C -->|YES| Y1[SCAN] C -->|NO| D[Is this an unpacked file?] D -->|YES| E[Is the file type blocked?] E -->|YES| X3[SKIP] E -->|NO| F[CONTINUE] D -->|NO| F F[Was the file previously scanned?] F -->|NO| Y2[SCAN] F -->|YES| G[Is rescanning enabled?] G -->|NO| X4[SKIP] G -->|YES| H[Is the previous scan result older than the configured rescan interval?] H -->|YES| Y2 H -->|NO| X5[SKIP] ``` --- ## Spectra Detect Dashboard — Manager UI, Navigation, and Status Indicators This short introductory section is intended to help with understanding the basic layout of the user interface, terminology and visual indicators that are used on the Manager and in the rest of this User Guide. ## Global Header Bar At the top of the Manager interface is the global header bar, containing the most commonly used options and the main appliance menu used to access all sections of the Manager. - **Quota**: Usage-based quota insights for all Spectra Intelligence accounts used by the connected appliances. 
Clicking the triangle icon expands the header and displays the limit and the license renewal dates for each account. Quota limit statuses are color-coded. Additionally, this menu contains the option to contact ReversingLabs Support. - **Dashboard**: The dashboard displays statistics related to the amount and type of files that have been submitted and processed on the appliance within a specified time range. - **Central Configuration**: The [Central Configuration Manager](../Config/ApplianceConfiguration.md) allows users to modify configuration settings on Spectra Detect appliances directly from the Manager interface. The Central Configuration feature makes it easier to configure appliances remotely, and to ensure that the settings are consistent and correct across multiple appliances. The appliances must first be connected and authorized on the Manager instance. **Spectra Analyze appliances can be configured using the Spectra Detect Manager APIs.** - **Administration**: Allows users to access and configure [Spectra Detect Manager Settings](../Admin/ManagerSettings.md), [YARA Synchronization](../Config/YARASync.md), [Redundancy](../Config/Redundancy.md), and [Email Alerting](../Admin/EmailAlerting.md). - **Help**: Contains an option to access Manager API Documentation and the online product documentation. - **Notifications**: Contains a bell icon that shows unread notifications for cloud classification changes. Clicking `See all notifications` redirects users to the [Notifications](../Config/Notifications.md) page, where they can view and manage all notifications. - **User menu**: Shows the username of the current user, contains a link to user details, and the option to Log Out from the appliance. - **Integrations Status Indicators**: Contains arrow labels providing information on which file input sources and file/report output sources are currently configured on the connected Hub groups. 
## Integrations Indicators The **Integrations Indicators** at the top-right of the interface contain `File Ingress Connectors` and `File Egress Connectors` providing information on which file input sources ([Connectors](../Config/AnalysisInput.md)) and file/report output sources ([Egress Integrations](../Config/ApplianceConfiguration.md#egress-integrations)) are currently configured on the connected Hub groups. Green icons indicate that the item has an existing configuration and that the service is connected (though not necessarily that the configuration is correct), whereas grey icons indicate that the item does not have an existing configuration. Hub groups with configured Connectors are listed below the Connector, and outside of parentheses. ## Central Logging The Manager dashboard has two modes: central logging *enabled*/*disabled*. If central logging is enabled (*Administration > Spectra Detect Manager > Central Logging*), users can access the **Analytics** tab, which shows various statistics for the processed files and is displayed before the appliance status. If it's not enabled, the default tab is **Appliance Management**, showing the status of the Manager and all connected appliances. When central logging is enabled, the Analytics tab on the dashboard shows a detailed breakdown of analyzed files according to classification state, file type, malware family, the total size of all processed files, and more. This classification data and error logs can be exported to CSV by clicking the `Export Classification Data` button at the top of the page. Note that some processed files in these .csv files appear with slightly changed names, e.g. `-` becomes `[-]`. This is a measure to ensure sanitized input. 
![](../images/c1000-dashboard-logging.png) Exported classification data CSV files contain the following information: sample (container), classification, rca_factor, malware_platform, malware_type, malware_family, filename, file_size (bytes), file_type_group, file_type, file_subtype, identification, processed_at, hostname. ## Detections Overview The Detections Overview table is a list of files analyzed on connected Workers. It updates at 15-second intervals, and shows sample classification, file name, size, file type, threat name, scan date, AV detections, SHA1 and SHA256 hashes, source, source tag, and detailed analysis. All columns, apart from File, can be enabled or disabled using the gear icon menu in the top right of the table section. The table can be sorted by clicking the column headers, and filtered using the `Show Table Filters` button which reveals text field filters above the table columns. Results can additionally be filtered by the time they were analyzed using the drop-down menu at the top of the table. Clicking the `Showing LIVE Results` button stops the table from automatically updating, which can be useful while inspecting a specific set of results. Automatic updating is also paused when the user navigates away from the first page of results. Cloud analysis results do not affect the dashboard classification. However, if a newer cloud classification differs from the existing dashboard classification, a red indicator is displayed in the *Classification* column to signal that reprocessing is recommended. The *AV Detections* column in the Detections Overview table displays results from the Deep Cloud Analysis (Multi-Scanning) Service, combining Worker Static Analysis and Spectra Intelligence analysis. It can be expanded to show the names of AV scanners and any detected threats, reflecting the analysis conducted by Spectra Intelligence, where files are sent to AV scanners in the Cloud, with the results then exposed on the Manager dashboard. 
To enable the Deep Cloud Analysis (Multi-Scanning) Service, navigate to *Administration > Spectra Detect Manager > Dashboard Configuration*, and select the `Enable Multi-Scanning` checkbox. When Multi-Scanning is enabled, Workers upload samples to the Cloud only if the sample does not already exist in the Cloud and passes the filtering criteria (up to 2 GB in size). If the sample already exists in the Cloud, the Manager will monitor for any changes in the data and update the Manager dashboard accordingly. Read more details in the Multi-Scanning section of the [Spectra Detect Manager Settings](../Admin/ManagerSettings.md) chapter. The `RESCAN` button sends a request to the Cloud to check for any updates associated with the submitted samples, and updates the information accordingly. The `See All Scans` button opens a new page with all available Spectra Intelligence and AV Detections information. ### Product integration with Spectra Analyze The `Detailed Analysis` column in the Detections Overview table allows users to import a sample analyzed on the Spectra Detect platform into Spectra Analyze for a deeper insight into the analyzed file. For easier filtering, files imported to the Spectra Analyze appliance using this feature will automatically be tagged with the `spectra_detect` tag. For this integration to be enabled, the Manager must be connected to at least one instance of Spectra Analyze 8.1 or higher, and central logging (*Administration > Spectra Detect Manager > Dashboard Configuration > Central Logging*) must be enabled. In addition, the sample must be stored either in an S3 bucket or on the Manager itself (Central Storage). If both of these features are enabled, Manager central storage takes priority (Spectra Analyze will download files from the Manager). 
When Spectra Analyze integration is enabled and files are uploaded, the Sample Details link is available in the `Detailed Analysis` column, allowing users to directly open the file analysis page on the configured Spectra Analyze instance. This link remains available for the same files even after disabling the integration and takes priority over the Central File Storage and AWS S3 import links when multiple options are enabled. ## Processing Timeline The Processing Timeline section of the dashboard shows a graph of uploaded, processed and post-processed samples, and also the number of samples that failed to analyze. To retrieve a list of files that failed to analyze on the connected Spectra Detect appliances in the last 90 days, use the `Export Errors / Hashes` button. The exported CSV file contains the following information: - `host_uuid` - `hostname` - `time` - `event_type` - `task_id` - `sample` - `container` The `host_uuid` value is set automatically when the Worker connects to the Manager, and is obtainable using the `conf_centralmanager` command on the Worker. Note that exported error logs might contain duplicate entries for some errors. For example, if a file processing task fails, causing a failed report upload to S3, this is counted as two errors, despite being one event. ## Malware Types / Malware Family Count The Malware Types and Malware Family Count charts show the analyzed samples categorized by Malware Type and Malware Family Count, respectively. Malware Type is presented as a percentage in a pie chart while Malware Families are displayed as a bar chart indicating the sample count per family. ## Appliance Management ![](../images/c1000-dashboard.png) System information about the Manager instance and connected appliances can be found on the **Appliance Management** tab and is updated every 5 minutes. The *Status* column indicates whether the appliance is online, offline, unlicensed, in error state, or unauthorized. 
If YARA ruleset synchronization is enabled on the Manager (*Administration > Spectra Detect Manager > General*) and on at least one connected appliance that supports it, a *YARA* column will show the current YARA ruleset synchronization status for each appliance. Possible YARA synchronization statuses are *In Sync*, *Not In Sync*, *Unknown*, *Please Update* and *Please Set To HTTPS*. See the [YARA Synchronization](../Config/YARASync.md) section for more details. The Appliance Management page can be configured to display up to 100 appliances per page, and can also be filtered using the `Show Table Filters` button at the top of the list. This displays filter input fields in a row above the appliances table. Each table column has its own filter input field, allowing simultaneous filtering by multiple criteria. Keep in mind that some actions (like configuration changes) will result in a system restart. Depending on the type of appliance, the process of restarting and reloading the configuration might take some time. Spectra Detect Worker appliances generally take longer to restart. The following table describes common management actions and their consequences. | ACTION | APPLIANCE RESTART | MANAGER RESTART | |------------------------------------------------|-------------------|------------------| | Update the Manager instance | NO | YES | | Modify settings on the Manager instance | NO | YES | | Connect an appliance to the Manager | NO | NO | | Authorize an appliance | NO | NO | | Update a connected appliance | YES | NO | | Modify settings on the Appliance Status page | YES | NO | | Disconnect an appliance from the Manager | NO | NO | ## Connecting Appliances to the Manager **Note: Adding the same appliance to multiple Managers is not supported. It can lead to misconfigurations and conflicts. Always remove the appliance from one Manager before adding it to another.** To add an appliance, click `Add new appliance` on the Appliance Management tab on the Dashboard. 
In the *Add new appliance* dialog, choose the appliance type, then enter a name and a URL for the new appliance. **All appliance URLs must use the HTTPS protocol.** Note that the Manager does not validate SSL certificates automatically, so users must ensure their certificates are valid prior to connecting appliances. The *SNMP community* field is required for the appliance to properly communicate with the Manager, and for the Manager to display accurate status information on the dashboard page. Note that the SNMP community string set here must match the string previously configured on the appliance itself. If the selected appliance type is *TiScale Hub*, an additional field called *Load Balancer password* becomes available in the dialog. If the password is provided here, the appliance status page will display a tab with Load Balancer (HAProxy) statistics. Note that the password must be previously configured directly on the Hub, and on all Workers connected to that Hub. Clicking `Add` for Spectra Analyze appliances will redirect to the appliance login page, where the appliance must be authorized in order to successfully connect to the Manager. If authorization does not complete successfully, the appliance will still be added to the Manager and displayed on the dashboard with *Unauthorized* status. It can be authorized at any time from the [Appliance Status](#appliance-status-page) page. Workers and Hubs are immediately added to the Manager after clicking `Add`, without the authorization step. 
In this case, users should check whether the SNMP community string is properly configured for the appliance on the Manager, and on the appliance itself. ![](../images/c1000-tabs.png) The information on the appliance status page is divided into tabs and refreshed every 5 minutes. The appliance type - Spectra Analyze, Spectra Detect Worker, Spectra Detect Hub - determines which tabs will be visible. | TAB | SUPPORTED ON | DESCRIPTION | |------------------|---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **System** | All appliance types | Displays the appliance type, status, version, name, description, and uptime. | | **CPU** | All appliance types | Displays the number of cores on the appliance, the average load over the last minute for each individual core and for all cores. | | **Storage** | All appliance types | Shows the total storage size (in GB), the amount of used storage (in GB and percentage) and allocation units for each partition on the appliance. | | **Network** | All appliance types | Provides an overview of network devices on the appliance, with information about their type, operational and administrative status (up/down), physical address, bandwidth (in Mb/s or Gb/s), and the total amount of traffic received and sent. | | **Queues** | All appliance types except Spectra Detect Hub | If supported by the appliance type, displays the state of queues on the appliance. Queues are used to communicate between various background services. 
The state of all queues should be “running”, and each queue should have at least one consumer (a service connected to the queue). | | **Processing** | Worker only | If supported by the appliance type, provides statistics on the amount and size of files submitted, processed, and unpacked on the selected Worker in each of the predefined time intervals. The *Queue AVG* column indicates the average number of files in the incoming queue for each of the predefined time intervals. | | **Load Balancer**| Hub only | If supported by the appliance type, provides real-time information on the status of the HAProxy service used for load balancing on the Hub. The data is updated every 10 seconds. Hovering over column names in the tables provides tooltips that clarify what each column refers to. This tab is visible only if the load balancer (HAProxy) password is provided in the appliance configuration dialog. | | **Metrics** | Worker only | If supported by the appliance type, displays additional file processing and post-processing statistics. The statistics track the following events: files successfully uploaded to Worker and sent for analysis; success and failure events for file processing on the Worker; success and failure events for uploading parent files to S3; success and failure events for uploading analysis reports to Splunk and Callback server (if those report uploading options are configured on Worker). The statistics are collected for the same predefined time intervals as in the *Processing* tab, and preserved for the maximum duration of 1 day (24 hours). The statistics are collected using HTTP calls to the Worker service API, but the SNMP community string must be set for this tab to be visible. Counts in each interval automatically adjust to indicate only the events that occurred within the exact interval, while all events exceeding it are removed from the count. 
Note that the count precision may be impacted if the system is under a heavier load, but it should improve within 2-3 minutes. Additionally, extracted files are not individually counted in S3 upload events - only their parent files are. This may cause discrepancies between the count of files processed on Worker versus files uploaded to S3. | ### RabbitMQ Queues Based on the appliance type and configuration, the following RabbitMQ queues may be available. #### Spectra Detect Worker | Queue | Description | | --- | ------------- | | `tiscale.preprocessing_dispatcher` | Waits for AV analyses to complete, then routes samples to `hagent_input`, retry, or preprocessing based on thresholds and Deep Cloud Analysis settings. | | `tiscale.hagent_result` | Stores successful analysis results from `ts-worker` for postprocessing. | | `tiscale.hagent_url` | Handles submit-url and submit-docker-image requests: downloads samples and routes them to `hagent_input`, retry, or preprocessing based on thresholds and Deep Cloud Analysis settings. | | `tiscale.hagent_error` | Stores failed analysis results from `ts-worker` for postprocessing. | | `tiscale.preprocessing` | Manages subscription and upload of samples for AV scanning when Deep Cloud Analysis is enabled. | | `tiscale.preprocessing_unpacker` | Unpacks samples before preprocessing when Deep Cloud Analysis is configured to scan child files. | | `tiscale.hagent_input` | Receives samples for analysis. Routes to `hagent_result` on success or `hagent_retry` on failure. Preprocessing occurs first if Deep Cloud Analysis is enabled. | | `tiscale.hagent_retry` | Handles retry attempts for failed analyses. Routes to `hagent_result` on success or `hagent_error` on final failure. | ### Additional Options The "Upload SSL Certificate" button allows the users to apply new HTTPS certificates to Workers, Hub, or Manager. The Actions > Download logs button downloads a support archive containing relevant system logs from all connected appliances. 
Appliance logs can also be downloaded using the [Download support logs endpoint](../API/ServiceAPI.md#download-support-logs). #### Spectra Detect Hub For Hubs that are configured as a Hub group (redundancy cluster), the appliance status page contains additional information above the tabs and a button to promote the secondary Hub into primary. The link *Redundant with other Hub instance* allows users to quickly access the other Hub instance in the cluster, and view its status page. ## Editing and Removing Appliances To edit an appliance, click on its row in the Dashboard to access the appliance status page. Click `Edit` to open the *Configure appliance* dialog. Here the name, host name, URL, and the SNMP (Simple Network Management Protocol) community string of the appliance can be modified. The HTTPS protocol is mandatory for the appliance URL. For Spectra Detect Hub appliances, it's also possible to provide the HAProxy password to enable HAProxy monitoring. Click `Save` to save changes, or `Cancel` to return to the appliance status page without saving. To remove an appliance, click `Remove` on the appliance status page. Confirm the removal in the popup dialog that appears at the top of the page. To safely remove an appliance, always use the *Remove* option on the appliance status page when the appliance is online. Attempting to remove or replace an appliance by changing its URL, or removing it while it is offline will result in an error. Appliances periodically check whether they are still connected to the Manager. If an appliance is removed improperly, the Manager will detect it, and the appliance will be automatically removed from the Manager. ## Authorizing Spectra Analyze While Workers and Hubs are authorized automatically, Spectra Analyze appliances are not. In that case, the *Status* column in the dashboard shows an "Unauthorized" message. 
The `Authorization` button is visible on the status page of an unauthorized appliance, and it redirects to the authorization page on Spectra Analyze. ### Enabling Appliances Search on Spectra Analyze Spectra Analyze appliances connected and authorized to the same Manager instance can be used to perform an Appliances Search for samples. The Appliances Search feature looks for samples on all connected and authorized Spectra Analyze appliances, and provides links to the results on each appliance from the Spectra Analyze Uploads page. Users can search for samples by file name and sample hash. Multi-hash search is supported, and different types of hashes (for example, MD5 and SHA1) can be submitted in one query. A notification message will appear if an appliance is not reachable or if the search cannot be performed on an appliance. To enable the Appliances Search feature on a Spectra Analyze appliance, access the *Administration > Configuration > Spectra Detect Manager* dialog and select the *Enable Appliances Search* checkbox. --- ## Spectra Detect YARA Hunting — Custom Rules, Modules, and Worker Sync ## Classifying Files with YARA Rules ## Using YARA with Spectra Detect Worker Default YARA rulesets on the appliance are automatically installed with the Spectra Core static analysis engine. With every engine update, these rulesets are updated as well. The rulesets cannot be saved to the Spectra Intelligence cloud or modified (edited, disabled, or deleted) in any way by any type of user. Additionally, ReversingLabs publishes open source YARA rules in [a public GitHub repository](https://github.com/reversinglabs/reversinglabs-yara-rules). These rules can be freely downloaded and imported into any Worker. In addition to default YARA rulesets, the Worker can use custom rulesets created by users. Custom rulesets reach the Worker by pulling them from other Spectra Analyze and Worker appliances using [the YARA Sync feature](../Config/YARASync.md) on Spectra Detect Manager.
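For illustration, a minimal custom ruleset file can be created as follows. The rule name, marker string, and file name below are invented for this sketch and carry no special meaning:

```shell
# Illustrative only: write a minimal custom YARA ruleset to a *.yara file.
# The rule name and marker string are invented for this example.
cat > example_custom_ruleset.yara <<'EOF'
rule example_marker_string
{
    strings:
        $marker = "EXAMPLE-MARKER"
    condition:
        $marker
}
EOF
```

The file name (without the extension) should satisfy the ruleset naming restrictions (length and allowed characters) covered in this section.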
### Rulesets and restrictions ReversingLabs products support the following YARA modules: - PE - ELF - Math - Hash - Time - Dotnet **Note: `import` and `include` statements are not supported.** Save custom YARA rulesets as files with the *.yara* extension. Naming restrictions: - YARA ruleset names must be between 3 and 48 characters. - Ruleset names should contain only letters (a-z, A-Z), digits (0-9), and underscores; use the underscore ( _ ) instead of spaces and avoid any other special characters. **Tip: For more information on writing YARA rulesets, consult one of the following sources:** - ReversingLabs publishes guidance for using YARA on the official blog. See the blog posts ["Level up your YARA game"](https://blog.reversinglabs.com/blog/level-up-your-yara-game), ["Writing detailed YARA rules for malware detection"](https://www.reversinglabs.com/blog/writing-detailed-yara-rules-for-malware-detection) and ["Five Uses of YARA"](https://blog.reversinglabs.com/blog/five-uses-of-yara) to learn more. - The official YARA documentation offers detailed advice on [how to write YARA rules](https://yara.readthedocs.io/en/stable/writingrules.html). - Use the Spectra Core rulesets present on the Spectra Analyze appliance as a reference. ## Synchronizing YARA Rulesets via Spectra Detect Manager To synchronize YARA rulesets, the Worker appliance must be connected to a Manager, and YARA syncing must be enabled on that Manager. The worker-c1000 section briefly explains how to connect a Worker to a Manager. If the Worker is connected to a Manager that has YARA synchronization enabled, rulesets from the Worker will be automatically synchronized with other appliances, and vice versa. Likewise, when YARA synchronization is disabled on a Manager that the Worker is connected to, it will be automatically disabled on the Worker as well. When YARA synchronization is enabled on a Manager, the Worker will poll it for new and updated rulesets once per minute.
The YARA Sync page on the Manager will display a table of all appliances connected to the Manager and their YARA ruleset synchronization status. Appliances can show one of the following statuses: - **In Sync** - The rulesets on the connected appliance match the rulesets on the Manager. - **Not In Sync** - The connected appliance doesn’t have the newest YARA rulesets. - **Unknown** - The connected appliance doesn’t have YARA synchronization enabled, or is unreachable. - **Please Update** - The connected appliance needs to be updated to a newer version before it can show the YARA synchronization status. - **Please Set To HTTPS** - The appliance is connected to the Manager using the unsupported HTTP protocol. The appliance URL must be updated to `https://` in the *Configure appliance* dialog on the Manager. Workers will poll the Manager for rule changes every minute. Spectra Analyze appliances will push new rules to the Manager as soon as they are created, and pull new rules every 5 minutes. **Example case: One Spectra Analyze appliance and one Worker attached to a Manager with YARA Synchronization enabled** Any YARA ruleset change made on Spectra Analyze will take up to 5 minutes to be synchronized to the Manager. Once the change reaches the Manager, it will take up to 1 minute for the change to be synchronized to the Worker. In total, it will take up to 6 minutes for a change to be synchronized from a Spectra Analyze appliance to a Worker. Appliances that are *Not In Sync* can be manually synchronized at any time by clicking the *Start YARA Sync* button in the far right column of the table. Rulesets created on Spectra Analyze appliances before YARA synchronization was enabled won’t synchronize to the Manager until the user changes their status or modifies them in any way. Rules present on the Manager, however, will synchronize to newly connected Spectra Analyze appliances regardless of when they were created. 
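The worst-case propagation delay in the example case above is simply the sum of the two polling intervals:

```shell
# Worst-case time (in minutes) for a ruleset change to travel from a
# Spectra Analyze appliance to a Worker, per the intervals described above.
analyze_to_manager=5   # changes reach the Manager within 5 minutes
manager_to_worker=1    # Workers poll the Manager every minute
echo "$(( analyze_to_manager + manager_to_worker )) minutes"
```

In practice the delay is usually shorter, since both polls rarely line up at their maximum.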
Apart from new rulesets, changes in existing rulesets will be synchronized as well. If a ruleset is disabled or deleted on one appliance, its status will be distributed to other appliances. In the case of Workers, disabled rulesets are removed until they are re-enabled on another appliance. When enabled again, those rulesets are synchronized to the Worker as if they had been newly created. This means that the Worker only contains enabled (synchronizable) rulesets at all times. ## Troubleshooting YARA Issues on the Worker If a Worker's YARA synchronization status remains incorrect, reconnecting the Worker to the Manager can resolve the issue: 1. From the Worker appliance status page on the Manager, disconnect the Worker by clicking the *Remove* button. 2. Access the dashboard page and click the *Add new appliance* button. 3. In the *Add new appliance* dialog that opens, select *Spectra Detect Worker* as the appliance type, and fill in the configuration fields with the data of the previously disconnected Worker instance. 4. Click *Submit* to connect the Worker instance to the Manager again. If the process completes successfully, the YARA Sync page on the Manager should display the status of the Worker instance. --- ## Spectra Detect Usage — Analysis, Dashboards, and YARA Management --- ## Getting Started with Spectra Detect — First Login, SDM Setup, and File Analysis # Getting started with Spectra Detect This guide walks you through logging in to Spectra Detect Manager (SDM), connecting to the Spectra Intelligence cloud, and running your first file analysis workflow.
**What you'll accomplish:** - Log in to Spectra Detect Manager - Connect to the Spectra Intelligence cloud - Configure your first scan input - Submit a file and view analysis results ## Prerequisites Before you begin: - Access to your organization's Spectra Detect deployment (or deployment credentials if setting up from scratch) - Familiarity with your chosen deployment model (OVA/AMI or K8s Micro) - For new deployments: access to the appropriate infrastructure (VMware/AWS for OVA/AMI, Kubernetes cluster for K8s deployments) **Tip: Initial credentials** Your initial SDM administrator username and password are provided by [ReversingLabs Support](mailto:support@reversinglabs.com). Change the default password after your first login. ## Step 1: Access your deployment Spectra Detect is available in two deployment models (OVA/AMI, K8s Micro). This guide assumes you have already deployed Spectra Detect or have access to an existing deployment. For deployment instructions, see: - [AWS EKS Micro Deployment](/SpectraDetect/Deployment/AWS-EKS-Deployment-Micro/) — microservices on AWS EKS - [On-Premises Deployment](/SpectraDetect/Deployment/) — hardware and VM options ## Step 2: Log in to Spectra Detect Manager Spectra Detect Manager (SDM) is the central web interface for configuring Workers, monitoring analysis activity, and managing the cluster. It is available in OVA/AMI deployments. 1. Open your browser and navigate to the SDM URL provided by your administrator (for example, `https://sdm.example.com`). 2. Log in with the administrator credentials provided by [ReversingLabs Support](mailto:support@reversinglabs.com). 3. Change the default password when prompted. **Note: SDM is not included in the K8s Micro deployment (v5.7).
Refer to the [AWS EKS Micro Deployment](./Deployment/AWS-EKS-Deployment-Micro/index.md) guide for configuration in that model.** ## Step 3: Configure the Spectra Intelligence cloud connection Spectra Detect Manager must be connected to [Spectra Intelligence](/SpectraIntelligence/) to receive system updates, appliance upgrades, and cloud-enriched classification data. Without this connection, software updates cannot be delivered automatically and Deep Cloud Analysis will not be available. 1. In SDM, navigate to **Administration > Spectra Detect Manager**. 2. Scroll to the **Spectra Intelligence** section. 3. Select the **Enable Spectra Intelligence** checkbox. 4. Enter your Spectra Intelligence **username** and **password**. 5. Select **Save**. SDM will restart and begin polling Spectra Intelligence every 60 minutes. If the connection fails, verify that your network allows outbound HTTPS (port 443) to `appliance-api.reversinglabs.com`. See [Network Ports](/SpectraDetect/Admin/ManagerSettings/#network-ports) for the full list of required ports. **Note: If you do not yet have Spectra Intelligence credentials, contact [ReversingLabs Support](mailto:support@reversinglabs.com).** ## Step 4: Configure a scan input Configure at least one file input source so Spectra Detect knows where to pick up files for analysis. 1. In SDM, navigate to **Configuration > Analysis Input**. 2. Select your input type (S3 bucket, ICAP, network share, or API submission). 3. Enter connection details and authentication credentials. 4. Select **Save**. For detailed input options, see [Analysis Input Configuration](/SpectraDetect/Config/AnalysisInput/). ## Step 5: Submit a file and view results Once a scan input is configured, files submitted through that input are automatically analyzed by [Spectra Core](/General/AnalysisAndClassification/SpectraCoreAnalysis). To verify analysis is working: 1. Submit a test file through your configured input. 2. 
Open the [Dashboard](/SpectraDetect/Usage/Dashboard/) in SDM. 3. Confirm the file appears with a classification (Malicious, Suspicious, Goodware, or Unknown). 4. Select the file to view its full analysis report. ## Step 6: Configure YARA rules (optional) Spectra Detect supports custom [YARA rules](/SpectraDetect/Usage/YARA/) for detection of specific file patterns. Rules can be uploaded through SDM and are automatically synchronized across all connected Workers. ## Next steps Now that you've configured your first scan input and verified analysis is working, explore the full capabilities of Spectra Detect: - **[Appliance Configuration](/SpectraDetect/Config/ApplianceConfiguration/)** — Network and system settings - **[Manager Settings](/SpectraDetect/Admin/ManagerSettings/)** — SDM configuration reference - **[Dashboard](/SpectraDetect/Usage/Dashboard/)** — Monitoring cluster status and throughput - **[Management API](/SpectraDetect/API/ManagementAPI/)** — Automate Spectra Detect via REST API - **[Updating](/SpectraDetect/Admin/Updating/)** — Software update procedures --- ## Spectra Detect — Enterprise File Analysis and Malware Detection Platform # Spectra Detect — Enterprise File Analysis and Malware Detection Spectra Detect is a file analysis system available in two deployment configurations: - **OVA/AMI deployment**: traditional virtual machine deployment with Workers, Hubs, and Spectra Detect Manager (SDM). - **K8s Micro deployment**: redesigned architecture where traditional Workers and Hubs are decomposed into specialized microservices. - Workers are broken down into individual processing components. - Hub functionality is currently supported only for S3 connector integration. - SDM is not included in this preview release. ## About Spectra Detect Spectra Detect uses a flexible cluster architecture that scales incrementally to support distributed or centralized file processing across physical and cloud environments. 
Spectra Detect is an enterprise-scale automated malware detection platform built for organizations that need to inspect millions of files per day across email gateways, file shares, S3 buckets, ICAP proxies, and other ingestion points — without creating bottlenecks or slowing down production workflows. Powered by [Spectra Core](/General/AnalysisAndClassification/SpectraCoreAnalysis), it performs deep static analysis on over 400 file formats, identifying malware, suspicious indicators, and embedded threats in seconds per file. Analysis results include full indicator extraction, threat names, risk scores, and mapping to [MITRE ATT&CK](https://attack.mitre.org/) tactics — all delivered without executing files. ## File analysis The platform scales horizontally from 100,000 to 100 million files per day by adding Worker nodes. In virtual machine deployments (OVA/AMI), Workers can be provisioned manually to match capacity needs. In Kubernetes deployments (K8s Micro), Workers auto-scale based on queue depth. Results feed directly into existing security infrastructure — SIEM, SOAR, EDR, and threat intelligence platforms — through webhooks, S3 output, and REST APIs. Enterprise YARA rule deployment and synchronization across all Workers is managed centrally through Spectra Detect Manager. In OVA/AMI deployments, every Worker contains an instance of [Spectra Core](https://www.reversinglabs.com/products/spectra-core), a platform for automated static decomposition and analysis of files. ## Architecture Spectra Detect uses a three-tier cluster architecture: a central manager for control and configuration, one or more Hubs for ingestion and load distribution, and horizontally scalable Workers that perform the analysis. In the K8s Micro deployment, the Spectra Core functionality is distributed across multiple microservice components that work together to analyze files.
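As a rough sketch of the Worker-scaling math, the number of Workers needed for a target daily volume can be estimated by dividing throughput targets; the per-Worker figure below is an invented illustration, not a published specification:

```shell
# Hypothetical capacity estimate. Replace per_worker_files_per_day with the
# throughput measured for your own file mix — the value here is invented.
target_files_per_day=1000000
per_worker_files_per_day=100000
# Round up so the last partial Worker is still provisioned.
workers_needed=$(( (target_files_per_day + per_worker_files_per_day - 1) / per_worker_files_per_day ))
echo "$workers_needed Workers"
```

Real throughput varies heavily with file size and nesting depth, so measure before sizing a production cluster.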
![Spectra Detect architecture](./images/spectra-detect-architecture.png) Spectra Core can automatically unpack and extract information from more than 300 PE packers, archives, installation packages, firmware images, documents, and mobile application formats. The extracted information includes metadata such as strings, format header details, function names, library dependencies, file segments, and capabilities. This information is contained in the Worker analysis report (JSON file). ### Components **Spectra Detect Manager (SDM)** SDM is the central control plane for the entire cluster. It provides a web UI and REST API for monitoring appliance health, managing licenses, deploying software updates, and synchronizing YARA rules across all connected Workers and Spectra Analyze appliances. SDM connects to [Spectra Intelligence](/SpectraIntelligence/) for cloud-enriched classifications and automatic update delivery. **Info: SDM is available in OVA/AMI deployments, but is not included in the K8s Micro deployment preview (v5.7).** **Hub** The Hub is the ingestion and distribution layer between file sources and Workers. It receives files from configured input connectors — S3 buckets, file shares, ICAP proxies, email gateways, and direct API submissions — and distributes them to available Workers for analysis. In high-availability configurations, two Hubs can be deployed for redundancy. **Workers** Workers perform the actual file analysis using [Spectra Core](/General/AnalysisAndClassification/SpectraCoreAnalysis). Each Worker unpacks and inspects submitted files, extracts indicators, assigns a risk score, and returns the result to the Hub.
Workers scale horizontally: additional Workers are added manually in OVA/AMI deployments or provisioned automatically by Kubernetes based on queue depth. ### Deployment models | Model | Description | Scaling | |---|---|---| | OVA/AMI | Virtual machine with Workers, Hubs, and SDM | Manual — provision new VMs | | K8s Micro | Microservices architecture on Kubernetes | Automatic — per-component scaling | The Manager functions as a mediator between ReversingLabs appliances connected to it. When YARA rulesets are uploaded to any of the connected appliances that support them, the Manager ensures the rulesets are synchronized across all applicable appliances. The Manager provides: - Status overview for multiple ReversingLabs product types - License management for connected Spectra Analyze and Spectra Detect Worker appliances - Control for upgrading Spectra Analyze, Spectra Detect Worker and Hub - Centralized YARA rules deployment and synchronization between Spectra Analyze and Spectra Detect Worker - Alerts for critical system services - Support for sample search across all connected and authorized Spectra Analyze appliances - Configuration modules for centralized management of Spectra Analyze, Spectra Detect Worker and Hub - Support for configuring the Connectors service on Spectra Analyze and Spectra Detect appliances ### Multi-region deployment In multi-region deployments, global load balancers are added in front of each tier to distribute traffic across geographic locations and provide fault tolerance. A global load balancer sits in front of the SDM cluster, routing management and API traffic to the active SDM instance regardless of which region handles the request. A separate load balancer sits in front of the Hub cluster, distributing file submission traffic across Hubs. Workers are deployed in two or more independent regional clusters — each region runs its own Hub-and-Worker pool, so file analysis stays local to the region where files are submitted.
The SDM cluster spans regions for centralized control while Worker clusters remain regionally isolated for performance and data residency. **Components added in multi-region deployments:** - **Global Load Balancer (SDM)** — routes SDM API and UI traffic to the active SDM node; handles failover between SDM instances across regions - **Global Load Balancer (Hub)** — distributes incoming file submissions across regional Hub clusters; can route by geography to keep analysis traffic local - **Regional Hub + Worker clusters** — each region runs an independent Hub-and-Worker pool; Workers in each region auto-scale independently based on local queue depth ## Documentation ### Getting started - [Getting started](/SpectraDetect/getting-started/) — first login, cloud connection, and first analysis ### Deployment - [AWS EKS Micro Deployment](/SpectraDetect/Deployment/AWS-EKS-Deployment-Micro/) — microservices architecture deployment on AWS EKS - [On-Premises Deployment](/SpectraDetect/Deployment/) — hardware and virtual machine deployment options ### Configuration - [Appliance Configuration](/SpectraDetect/Config/ApplianceConfiguration/) — network, storage, and system settings - [Analysis Input](/SpectraDetect/Config/AnalysisInput/) — configure file sources and input connectors - [YARA Sync](/SpectraDetect/Config/YARASync/) — synchronize YARA rules across appliances - [Certificate Management](/SpectraDetect/Config/CertificateManagement/) — TLS/SSL certificate configuration ### Usage - [Dashboard](/SpectraDetect/Usage/Dashboard/) — monitoring cluster status and analysis results - [Analysis](/SpectraDetect/Usage/Analysis/) — understanding analysis results and classifications - [YARA Rules](/SpectraDetect/Usage/YARA/) — managing and deploying YARA rulesets ### API - [Management API](/SpectraDetect/API/ManagementAPI/) — REST API for appliance management and automation ### Administration - [Manager Settings](/SpectraDetect/Admin/ManagerSettings/) — Spectra Detect Manager 
configuration reference - [Updating](/SpectraDetect/Admin/Updating/) — software update procedures - [Troubleshooting](/SpectraDetect/troubleshooting/) — common issues and solutions --- ## Spectra Detect Troubleshooting — Workers, Queues, YARA Sync, and Updates # Troubleshooting This guide covers common issues with [Spectra Detect](./index.md) deployments and the steps to resolve them. --- ## Worker node not appearing in SDM dashboard **Symptom** A newly deployed or restarted Worker node does not appear in the Spectra Detect Manager (SDM) appliance list, or its status shows as disconnected. **Cause** - The Worker is not configured with the correct SDM address or port. - A firewall is blocking the communication channel between the Worker and SDM. - The Worker has not been authorized in SDM. - TLS certificate mismatch between the Worker and SDM. **Solution** 1. On the Worker appliance, verify that the SDM address is correctly configured under [Appliance Configuration](./Config/ApplianceConfiguration.md). 2. Test network connectivity from the Worker to the SDM: ```bash curl -k https://<sdm-host>:<sdm-port>/api/v1/health-check/ ``` If this fails, check firewall rules between the Worker and SDM host. The required ports are documented in the deployment prerequisites. 3. In the SDM interface, navigate to **Appliances** and check whether the Worker appears as pending authorization. Newly connected Workers must be explicitly authorized before they appear as active. See [Manager Settings](./Admin/ManagerSettings.md). 4. If TLS certificates are custom, verify that the Worker trusts the SDM certificate. See [Certificate Management](./Config/CertificateManagement.md) for certificate configuration. 5. Check SDM logs for connection rejection messages to identify the specific failure.
The queue count in the SDM overview is increasing or not decreasing. **Cause** - The cluster does not have enough Worker nodes or CPU resources to keep up with the current ingestion rate. - A subset of Worker nodes is unhealthy or offline, reducing effective throughput. - Very large or deeply nested archives are occupying Worker instances for extended periods. - The input connector (S3, folder watch, or API) is ingesting files faster than Workers can process them. **Solution** 1. Check the status of all Worker nodes in the SDM dashboard. Confirm that all expected Workers are online and healthy. 2. Review Worker CPU and memory usage. Sustained CPU at 100% indicates the Workers are capacity-constrained. 3. For K8s deployments, scale the Worker deployment horizontally by increasing the replica count: ```bash kubectl scale deployment spectra-detect-worker \ --replicas=<count> -n <namespace> ``` --- ## YARA rules not syncing to Workers **Symptom** YARA rules uploaded to an appliance or to SDM do not appear on Worker nodes. Samples that should match a YARA rule do not produce expected results. The SDM YARA management page shows rules as uploaded but Workers do not reflect the changes. **Cause** - The YARA sync service on a Worker node is stopped or misconfigured. - A network connectivity issue is preventing SDM from pushing rules to the Worker. - A syntax error in a newly uploaded YARA rule is causing the entire ruleset to be rejected by the Worker. **Solution** 1. Verify that YARA sync is enabled and configured correctly on the Worker. See [YARA Sync](./Config/YARASync.md) for the required configuration parameters. 2. Check YARA sync logs on the Worker node for error messages: ```bash # For OVA/AMI deployments journalctl -u spectradetect-yarasync -n 100 # For K8s deployments kubectl logs <worker-pod> -c yara-sync -n <namespace> ``` 3. If the logs show rule validation errors, identify the specific rule causing the failure.
Invalid rules are typically logged with the rule name and a parse error message: ``` ERROR: YARA rule parse failed: rule "my_rule" at line 14 - unexpected token ``` Remove or correct the invalid rule and re-upload the ruleset. 4. Test SDM-to-Worker connectivity on the YARA sync port. Consult [Certificate Management](./Config/CertificateManagement.md) if TLS is involved. 5. After resolving connectivity or rule issues, trigger a manual YARA sync from the SDM interface or via the [Management API](./API/ManagementAPI.md). --- ## S3 connector not picking up files **Symptom** Files placed in the configured S3 bucket are not being submitted for analysis. The analysis queue remains empty even though new files are present in the bucket. **Cause** - The S3 connector credentials (AWS access key or IAM role) are incorrect or lack the required permissions. - The bucket name or prefix path in the connector configuration does not match the actual bucket. - The S3 connector service is stopped or has encountered an error during startup. - The bucket is in a different AWS region than expected. **Solution** 1. Review the S3 connector configuration under [Analysis Input](./Config/AnalysisInput.md). Confirm the bucket name, region, and prefix are correct. 2. Verify that the IAM role or access keys have the following minimum permissions on the bucket: - `s3:GetObject` - `s3:ListBucket` - `s3:DeleteObject` (if files should be removed after processing) ```bash # Test access from the connector host aws s3 ls s3://<bucket>/<prefix>/ --region <region> ``` 3. Check the connector service logs for authentication or connectivity errors: ```bash # For OVA/AMI deployments journalctl -u spectradetect-connector -n 100 # For K8s deployments kubectl logs <connector-pod> -n <namespace> ``` 4. Confirm the connector service is running. Restart it if it has crashed: ```bash # For OVA/AMI deployments sudo systemctl restart spectradetect-connector ``` 5. For K8s deployments, note that S3 connector support is provided through the Hub component.
Refer to your deployment documentation for Hub configuration details. --- ## SDM shows appliance as unreachable **Symptom** In the SDM overview, one or more connected appliances (Workers or Spectra Analyze instances) display an "Unreachable" status. Alerts may be generated for the affected appliances. **Cause** - The appliance is powered off or has lost network connectivity. - The SDM heartbeat check is failing due to a temporary network disruption. - The appliance's management interface has changed IP address. - A TLS certificate on the appliance has expired, causing the SDM connection to fail. **Solution** 1. Attempt to access the appliance web interface or SSH directly to confirm it is online: ```bash ssh admin@<appliance-address> ping -c 4 <appliance-address> ``` 2. In the SDM interface, navigate to **Manager Settings** and verify the registered IP address or hostname for the appliance. Update it if the IP has changed. See [Manager Settings](./Admin/ManagerSettings.md). 3. Check whether the appliance TLS certificate has expired. Certificate issues are logged in the appliance system logs and also visible on the Certificate Management page. See [Certificate Management](./Config/CertificateManagement.md) for certificate renewal steps. 4. If the appliance is accessible but SDM still shows it as unreachable, restart the SDM communication service on the appliance: ```bash sudo systemctl restart spectradetect-sdm-agent ``` 5. Review SDM logs for specific error messages related to the unreachable appliance to identify the root cause. --- ## Analysis results not appearing in dashboard **Symptom** Files are being submitted and Workers show activity (CPU usage, queue movement), but analysis results do not appear in the SDM [Dashboard](./Usage/Dashboard.md) or in the analysis results view. **Cause** - The results reporting service on the Worker is misconfigured and is not sending results back to the Hub or SDM.
- A downstream results consumer (webhook, SIEM integration, or S3 output bucket) is misconfigured, causing result delivery to fail silently.
- The SDM database is not receiving result records due to a connectivity issue between the Hub and SDM.

**Solution**

1. Check Worker logs for result delivery errors:

   ```bash
   kubectl logs <pod-name> -n <namespace> | grep -i "result\|output\|error"
   ```

2. Verify the output configuration under [Analysis Input](./Config/AnalysisInput.md). Confirm that the output destination (Hub address, S3 bucket, or webhook URL) is correct and reachable.
3. Test connectivity from the Worker to the Hub or SDM results endpoint.
4. For notification-based integrations (email alerts, webhooks), check the notification configuration. See [Notifications](./Config/Notifications.md) for configuration options.
5. If results are being generated locally but not forwarded, check for disk space issues on the Worker node that might be causing result queuing to back up locally rather than forwarding to SDM.

---

## Update fails or gets stuck

**Symptom**

A software update initiated from the SDM **Updating** page (or via the update CLI) does not complete. The update status page shows the process as "In Progress" for an extended period, or the update fails with an error message.

**Cause**

- Network connectivity between the appliance and the ReversingLabs update server is disrupted during the download.
- The appliance does not have sufficient disk space to stage the update package.
- An SDM-managed appliance was rebooted or lost connectivity to SDM mid-update.
- The update sequence is incompatible (for example, a required intermediate version was skipped).

**Solution**

1. Navigate to the [Updating](./Admin/Updating.md) page in SDM and review the update log for specific error messages.
2. Check available disk space on the appliance before retrying:

   ```bash
   df -h
   ```

   If disk space is insufficient, free space by purging old analysis results or logs before reattempting the update.
3. Test network connectivity to the ReversingLabs update infrastructure from the appliance:

   ```bash
   curl -I https://updates.reversinglabs.com
   ```

4. Do not reboot or restart the appliance while an update is in progress unless instructed to do so in the error message.
5. If the update process is hung (no log activity for more than 30 minutes), contact [ReversingLabs Support](mailto:support@reversinglabs.com) before attempting to cancel or retry the update, as incomplete updates may leave the system in a partially upgraded state.
6. For K8s deployments, follow the upgrade procedure in your deployment documentation rather than using the SDM update mechanism.

---

## High memory usage on Worker nodes

**Symptom**

Kubernetes Worker pods are being OOM-killed, or node memory usage is consistently above 90%. The SDM dashboard or cluster monitoring (Prometheus, CloudWatch) shows frequent memory pressure on Worker nodes.

**Cause**

- The number of concurrent analyses (`concurrency-limit`) is set too high relative to the available memory per Worker pod.
- Large files or archives with high decompression ratios are consuming more memory than typical workloads.
- Memory limits set in the Helm chart are too low for the configured number of Spectra Core instances.

**Solution**

1. Check if OOM events are occurring:

   ```bash
   kubectl describe pod <pod-name> -n <namespace> | grep -i "OOMKilled\|Reason"
   dmesg | grep -i "oom\|killed"
   ```

2. Review the Worker's Helm chart values and lower the concurrency limit or the number of Spectra Core instances (`number-of-regular-cores`) to reduce peak memory usage.
3. Increase the memory limits and requests for Worker pods in the Helm values if available node memory allows it. Refer to your deployment's Helm values reference documentation.
4. Consider configuring a dedicated large-file Worker group with higher memory limits and a separate concurrency limit, leaving the regular pool for smaller files.
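As a rough aid for tuning the concurrency limit against pod memory, the relationship can be sketched in shell arithmetic. The per-analysis peak and headroom figures below are assumptions for illustration; substitute values observed in your own monitoring.

```shell
#!/bin/sh
# suggest_concurrency: back-of-the-envelope concurrency limit for a
# Worker pod. Subtracts a fixed headroom for the OS and base processes,
# then divides the remainder by the observed peak memory per analysis.
suggest_concurrency() {
    pod_limit_mb="$1"      # memory limit of the Worker pod, in MB
    per_analysis_mb="$2"   # observed peak memory per concurrent analysis
    headroom_mb="$3"       # fixed reserve for OS/base processes
    usable=$((pod_limit_mb - headroom_mb))
    if [ "$usable" -le 0 ]; then
        echo 0
        return
    fi
    echo $((usable / per_analysis_mb))
}

# Example: a 16384 MB pod, 2048 MB peak per analysis, 4096 MB headroom
suggest_concurrency 16384 2048 4096   # prints 6
```

A result of 0 means the pod cannot safely run any analyses at the assumed per-analysis peak, which points to raising the pod's memory limit rather than the concurrency setting.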
Refer to your deployment documentation for Worker group customization options.
5. Review the [platform requirements](/General/DeploymentAndIntegration/PlatformRequirements) to confirm that the cluster nodes meet the recommended memory specifications for the configured number of Spectra Core instances.

---

## Cluster certificate errors

**Symptom**

Connections between cluster components fail with TLS errors. SDM reports appliances as unreachable. Browser access to the SDM web interface shows a certificate warning. Logs contain messages such as:

```
TLS handshake failed: x509: certificate has expired or is not yet valid
TLS handshake failed: x509: certificate signed by unknown authority
```

**Cause**

- Internal cluster certificates have expired and have not been renewed.
- A custom CA certificate used to sign internal certificates is not trusted by all components.
- The system clock on one or more nodes is incorrect, causing certificate validity window checks to fail.

**Solution**

1. Identify which certificate is causing the failure by examining the TLS error in detail:

   ```bash
   openssl s_client -connect <host>:<port> -showcerts 2>&1 | openssl x509 -noout -dates
   ```

2. Check certificate expiration across cluster components. See [Certificate Management](./Config/CertificateManagement.md) for the full list of certificates used and the renewal procedure.
3. Renew expired certificates following the procedure documented in [Certificate Management](./Config/CertificateManagement.md). After renewal, restart the affected services.
4. Verify that the system clock is synchronized on all nodes:

   ```bash
   timedatectl status
   chronyc tracking
   ```

   If clock drift is detected, synchronize with NTP and verify that `ntpd` or `chronyd` is running and configured.
5. If an "unknown authority" error is present, ensure that the custom CA certificate is distributed to and trusted by all cluster components and the SDM host.
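The clock check matters because certificate validity windows are compared against the local clock: a node only a few minutes fast can reject a freshly issued certificate as "not yet valid". The drift arithmetic can be sketched as below; in practice the reference timestamp would come from `chronyc tracking` or your NTP server, but the example uses the local clock so it stays self-contained.

```shell
#!/bin/sh
# drift_seconds: absolute difference between two epoch timestamps,
# e.g. the local clock and an NTP reference.
drift_seconds() {
    a="$1"; b="$2"
    d=$((a - b))
    if [ "$d" -lt 0 ]; then d=$((-d)); fi
    echo "$d"
}

local_epoch=$(date +%s)
# A real check would compare against an external reference here.
drift_seconds "$local_epoch" "$local_epoch"   # prints 0
```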
See [Certificate Management](./Config/CertificateManagement.md) for CA distribution steps.
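To catch the expired-certificate case before handshakes start failing, the `notAfter` date printed by `openssl x509 -noout -dates` can be converted into days remaining. A sketch assuming GNU `date` (the `days_remaining` helper is hypothetical, not a product command):

```shell
#!/bin/sh
# days_remaining: convert a certificate notAfter string into whole days
# until expiry (negative once the certificate has expired). Feed it the
# date portion of the openssl output, e.g.:
#   openssl x509 -noout -enddate -in cert.pem | cut -d= -f2
days_remaining() {
    not_after="$1"                      # e.g. "Jun  1 12:00:00 2030 GMT"
    end_epoch=$(date -d "$not_after" +%s)
    now_epoch=$(date +%s)
    echo $(( (end_epoch - now_epoch) / 86400 ))
}
```

Scheduling such a check ahead of certificate expiry leaves time for the renewal procedure in Certificate Management instead of discovering the problem through failed handshakes.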