Connectors
Spectra Analyze > Administration > Integrations & Connectors > Connectors
Overview
The Connectors service allows automatically retrieving a large number of files from external sources and analyzing them on the appliance. Events for the Connectors service are logged as CEF messages and can be monitored if System Alerting is enabled on the appliance.
Connectors can only be configured by the appliance administrator(s), not by regular users.
To manage settings for each connector, go to Administration > Integrations & Connectors > Connectors. The sidebar on the left lists all currently supported types of connectors.
Select a connector to access its configuration dialog. If a connector is disabled or if it has not been previously configured on the appliance, the dialog contains only the Enable connector button. Click the button to start configuring the connector.
The following connectors are currently supported:
- AWS S3
- Network File Share
- Azure Data Lake
- IMAP - MS Exchange
- SMTP
- ICAP Server
- CrowdStrike Falcon EDR
- Cortex XDR
The configuration of individual connectors is described below. You should also apply global settings for the Connectors service before you start or otherwise manage any of the connectors.
Global configuration
In addition to each connector's specific configuration settings, there is a Global Configuration section at the bottom of every connector page. These settings only need to be configured once, as they apply to all connectors.
| Field | Description |
|---|---|
| Save files that had encountered errors during processing | Original files that were not successfully uploaded are saved to /data/connectors/connector-[CONNECTOR_NAME]/error_files/. |
| Maximum upload retries | Number of times the connector attempts to upload a file to the processing appliance. When the retry limit is reached, the file is saved to the error_files/ destination or discarded. |
| Maximum upload timeout | The period in seconds between consecutive attempts to re-upload a sample. |
| Upload algorithm | The algorithm used for managing delays between attempts to re-upload samples. With Exponential back-off, the delay starts at the Maximum upload timeout value and doubles after each attempt, up to a maximum of five minutes. Linear back-off always uses the Maximum upload timeout value as the delay between reuploads. |
| Maximum upload delay | In case the appliance is under high load, this parameter is used to delay any new upload to the appliance. The delay parameter is multiplied by the internal factor determined by the load on the appliance. |
| Database cleanup period | Specifies the number of days for which the data is preserved. |
| Database cleanup interval | Specifies the time in seconds in which the database cleanup is performed. |
| Max file size (MB) | The maximum sample size in megabytes that is transmitted from the connector to the appliance for analysis. Setting it to 0 disables the option. |
| Disk high (%) | Disk high is used to compute the quota for the maximum amount of disk space that can be used by temporary files during transfer. Setting it to 0 disables the option and indicates that unlimited space is available. |
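The retry timing described in the table above can be sketched as follows. This is an illustrative model only, not the appliance's actual implementation; the five-minute cap on exponential back-off is taken from the Upload algorithm description.

```python
# Illustrative sketch of the retry delay schedules described above.
# Not the appliance's actual implementation; names are assumptions.

MAX_BACKOFF = 300  # exponential back-off is capped at five minutes (seconds)

def retry_delays(upload_timeout: int, max_retries: int, algorithm: str = "exponential"):
    """Yield the delay (in seconds) before each upload retry attempt."""
    delay = upload_timeout
    for _ in range(max_retries):
        yield delay
        if algorithm == "exponential":
            delay = min(delay * 2, MAX_BACKOFF)
        # linear back-off reuses the Maximum upload timeout value every time

print(list(retry_delays(30, 5)))            # exponential: 30, 60, 120, 240, 300
print(list(retry_delays(30, 3, "linear")))  # linear: 30, 30, 30
```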
Starting a connector
After configuring your connector settings:
- Test connection: if available, click to verify the appliance can access the configured item.
- Remove item: remove all configured settings for the current item.
- Add item: add another configuration for the current connector. Some connectors allow a limited number of configurations, in which case you must remove at least one configuration before adding another.
- Start connector: when all items are configured successfully, click Start connector at the bottom of the page to initiate the connector service on the appliance.
If advanced options are not enabled, the connector service does not perform any additional actions on the retrieved files after the Spectra Analyze appliance finishes analyzing them. Users can see the analysis results for each file on its Sample Details page.
All the files retrieved and analyzed on the appliance are accessible to Spectra Analyze users on the Search & Submissions page.
Each file retrieved via a connector has a set of User Tags automatically assigned to it, which are based on the file metadata, and can contain information about the file source, the last modification time in the original location, file permissions, and more. Additionally, the files are distinguished from other files by a unique username based on the connector:
- AWS S3: s3_connector
- Network File Share: fileshare_connector
- Azure Data Lake: azure-data-lake_connector
- IMAP: abusebox_connector
- CrowdStrike Falcon EDR: falcon_connector
Managing active connectors
With active connectors, you can perform the following actions:
- Pause connector: while active, the Start connector button becomes Pause connector. Pausing temporarily halts the service while preserving the current state. Click Start connector again to resume scanning.
- Disable connector: completely disable the service. While disabled, you cannot configure, start, or pause the connector. The configuration is preserved and restored when re-enabled. Previously analyzed files remain on the appliance.
Additionally, you can modify connector settings and click Save changes while the connector is running without pausing it.
If a purge action occurs while a connector is active, the system automatically stops the connector, performs the purge, then restarts it.
Folder naming restrictions
Folder names can only include:
- Alphanumeric characters (A–Z, a–z, 0–9)
- Spaces
- The following special characters: _, -, ., (, )
To ensure maximum compatibility, avoid using special characters beyond underscore and hyphen.
The following characters are not allowed:
/, \, :, *, ?, ", <, >, |, and the null byte (\0).
If specifying subfolders, each folder name must conform to these rules.
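These rules can be checked with a small validator like the sketch below. It assumes that subfolder paths use `/` as the separator and that each path component must independently satisfy the allowed character set; it is an illustration of the rules above, not code used by the appliance.

```python
import re

# Allowed characters per the folder naming restrictions above:
# alphanumerics, spaces, and _ - . ( )
ALLOWED = re.compile(r"^[A-Za-z0-9 _\-.()]+$")

def is_valid_folder_path(path: str) -> bool:
    """Check every folder component of a subfolder path against the rules."""
    components = [c for c in path.split("/") if c]
    return bool(components) and all(ALLOWED.fullmatch(c) for c in components)

print(is_valid_folder_path("analyzed/malware (2024)"))  # True
print(is_valid_folder_path("bad:name"))                 # False
```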
Connectors
AWS S3
The AWS S3 connector allows connecting up to five S3 buckets to the appliance. When the buckets are connected and mounted to the appliance, it can automatically scan the buckets and submit files for analysis. After analyzing the files, the appliance can place them into the root of each bucket, or, optionally, sort the files into folders based on their classification status.
Currently, it is not possible to assign a custom name to each S3 file storage input, and the way to distinguish between configured buckets is to look at their names.
Configuring S3 buckets
To add a new S3 bucket:
- Make sure the connector is enabled. If not, click the Enable connector button.
- Click Add item to expand the S3 File Storage Inputs section in the S3 dialog and fill in the following fields.
| Field | Mandatory | Description |
|---|---|---|
| Paused | Optional | Pauses the continuous scanning of this storage input. Applies only to Spectra Detect Hub 5.1.0 and higher. |
| AWS S3 access key ID | Mandatory | The access key ID for AWS S3 account authentication. Note: In cases where the appliance is hosted by ReversingLabs and Role ARN is used, this value is provided by ReversingLabs. |
| AWS S3 secret access key | Mandatory | The secret access key for AWS S3 account authentication. Note: In cases where the appliance is hosted by ReversingLabs and Role ARN is used, this value is provided by ReversingLabs. |
| AWS S3 region | Mandatory | Specify the correct AWS geographical region where the S3 bucket is located. This parameter is ignored for non-AWS setups. |
| Enable role ARN | Optional | Select to enable authentication using an external AWS role. This allows the customers to use the connector without forwarding their access keys between services. The IAM role used to obtain temporary tokens has to be created for the connector in the AWS console. These temporary tokens allow ingesting files from S3 buckets without using the customer secret access key. If enabled, it exposes more configuration options below. |
| Role ARN | Mandatory and visible only if Role ARN is enabled. | The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows the appliance to obtain a temporary token, which then allows it to connect to S3 buckets without using the customer secret access key. |
| External ID | Mandatory and visible only if Role ARN is enabled. | The external ID of the assumed role. Usually, it’s an ID provided by the entity which uses but doesn’t own an S3 bucket. The owner of that bucket takes the external ID and creates an ARN with it. It is strongly recommended to use a valid External ID in production environments to maintain security. However, in non-production or test environments, you can enter a placeholder value for the External ID if your use case doesn't require a real one. This is useful when you do not want to enforce the External ID requirement while testing configurations. |
| Role session name | Mandatory and visible only if Role ARN is enabled. | Name of the session visible in AWS logs. Can be any string. |
| ARN token duration | Mandatory and visible only if Role ARN is enabled. | How long before the authentication token expires and is refreshed. The minimum value is 900 seconds. |
| AWS S3 bucket | Mandatory | Specify the name of an existing S3 bucket which contains the samples to process. The name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes. Each label in the name must start with a lowercase letter or number. The name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods. The name cannot be formatted as an IP address. |
| Processing priority | Mandatory | Assign a priority for processing files from this bucket from highest (1) to lowest (5). Multiple buckets may share the same priority. Default is 5. |
| AWS S3 folder | Optional | The input folder inside the specified bucket which contains the samples to process. All other samples are ignored. The name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes. Each label in the name must start with a lowercase letter or number. The name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods. The name cannot be formatted as an IP address. If the folder is not configured, the root of the bucket is treated as the input folder. |
| S3 endpoint URL | Optional | Enter a custom S3 endpoint URL. Specifying the protocol is optional. Leave empty if using standard AWS S3. |
| Server Side Encryption Type | Optional | Specify the server-side encryption method managed by AWS for your S3 bucket. You can choose either AES256 to enable Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) or aws:kms to enable Server-Side Encryption with AWS Key Management Service (SSE-KMS). Note: This setting should be left blank unless your bucket policy requires SSE headers to be sent to S3. It is mutually exclusive and should not be configured alongside Customer Encryption Algorithm or Customer Encryption Key. |
| Customer Encryption Algorithm | Optional | Defines the encryption algorithm used when you provide your own encryption keys. The only valid value for this field is AES256. This option is intended for users who prefer to manage their own encryption keys rather than relying on AWS-managed keys. Note: It must be used in conjunction with Customer Encryption Key and cannot be used simultaneously with Server Side Encryption Type. |
| Customer Encryption Key | Optional | Provide a customer-managed encryption key for encrypting and decrypting objects in your S3 bucket. The key must be a valid Base64-encoded AES256 key. Note: It must be used together with Customer Encryption Algorithm and is mutually exclusive with Server Side Encryption Type. |
| Connect securely | Optional | If selected, the connector does not accept connections to S3 buckets with untrusted or expired certificates. This setting only applies when a custom S3 endpoint is used. |
| Enable Selection Criteria Using Metadata | Optional | If selected, the connector only fetches files that have certain object metadata saved. This metadata contains information about the classification of the file. This means that the files must have been pre-processed and saved with their metadata in Spectra Detect. The supported file metadata are Classification and Threat name. |
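The bucket naming rules from the AWS S3 bucket field above can be expressed as a pre-flight check. This sketch covers only the rules listed in the table (a subset of AWS's full naming rules) and is not part of the product:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Validate an S3 bucket name against the rules described above."""
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9.\-]+", name):
        return False  # only lowercase letters, numbers, periods, dashes
    if name.endswith("-") or ".." in name or ".-" in name or "-." in name:
        return False  # no trailing dash, consecutive periods, or dash next to a period
    if not all(label and label[0].isalnum() for label in name.split(".")):
        return False  # each period-separated label starts with a letter or number
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False  # must not be formatted as an IP address
    return True

print(is_valid_bucket_name("my-sample-bucket"))  # True
print(is_valid_bucket_name("192.168.0.1"))       # False
```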
AWS S3 connector advanced options
Advanced Options refer to actions that the connector service can perform on the files after the Spectra Analyze appliance finishes analyzing them. The connector operates and analyzes files even if these advanced options are disabled. They only control the post-analysis activities.
Advanced options can be configured for every S3 bucket individually, and the sorting criteria, folder names and folder paths can be different on each configured S3 bucket. The connector can be configured to automatically sort files into user-defined sorting folders on the S3 bucket. Files are sorted into folders based on the classification they receive during analysis.
| Field | Description |
|---|---|
| Enable same hash rescan | Allow any duplicate file hashes to be rescanned. |
| Delete source files | Allow the connector to delete files on the S3 bucket after they have been processed. |
| Enable automatic file sorting | Allow the connector to store analyzed files and sort them into folders on every configured S3 bucket based on their classification. Enabling any of these options also switches on Delete source files. For more information about naming restrictions, see Folder naming restrictions. |
| Sort Goodware files into following folder | Specify the path to the folder into which the connector stores files classified as Goodware/Known. The path specified here is relative to the address of the S3 bucket. If the folder doesn’t already exist on the S3 bucket, it is automatically created after saving the configuration. This field is mandatory when Enable automatic file sorting is selected. |
| Sort Malware files into following folder | Specify the path to the folder into which the connector stores files classified as Malicious. The path specified here is relative to the address of the S3 bucket. If the folder doesn’t already exist on the S3 bucket, it is automatically created after saving the configuration. This field is mandatory when Enable automatic file sorting is selected. |
| Sort Unknown files into following folder | Specify the path to the folder where the connector stores files marked as no threats found or unclassified. The path specified here is relative to the address of the S3 bucket. |
| Sort Suspicious files into following folder | Specify the path to the folder into which the connector stores files classified as Suspicious. The path specified here is relative to the address of the S3 bucket. If the folder doesn’t already exist on the S3 bucket, it is automatically created after saving the configuration. |
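The sorting behavior in the table above amounts to a mapping from classification to destination folder. The sketch below illustrates that mapping; the folder names are examples for illustration, not defaults shipped with the product:

```python
# Example sorting-folder configuration (illustrative names, not product defaults).
SORT_FOLDERS = {
    "goodware": "analyzed/goodware",
    "malicious": "analyzed/malware",
    "suspicious": "analyzed/suspicious",
    "unknown": "analyzed/unknown",
}

def destination_folder(classification: str) -> str:
    """Map an analysis classification to its configured sorting folder."""
    # Files with no threats found or no classification go to the Unknown folder.
    return SORT_FOLDERS.get(classification.lower(), SORT_FOLDERS["unknown"])

print(destination_folder("Malicious"))  # analyzed/malware
```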
Network File Share
The Network File Share connector supports SMB and NFS file-sharing protocols and allows connecting up to five shared network resources to the appliance. When the network shares are connected and mounted to the appliance, it can automatically scan the network shares and submit files for analysis. After analyzing the files, the appliance can optionally sort the files into folders based on their classification status.
Currently, it is not possible to assign a custom name to each network share, and the way to distinguish between configured network shares is to look at their addresses.
Configuring network shares
To add a new network share:
- Make sure the connector is enabled. If not, click the Enable connector button.
- Click Add item to expand the Shares section in the Network File Share Connector dialog and fill in the following fields.
| Field | Mandatory | Description |
|---|---|---|
| Address | Mandatory | Enter the address of the shared network resource to be mounted to the appliance. The address must include the protocol (SMB or NFS). Leading slashes are not required for NFS shares, for example: nfs:storage.example.lan. The address can point to the entire network drive, or to a specific folder, for example: smb://storage.example.lan/samples/collection. When the input folder and/or sorting folders are configured, their paths are treated as relative to the address configured here. Note: If the address contains special characters, it may not be possible to mount the share to the appliance. The comma character cannot be used in the address for SMB shares. Some combinations of ? and # result in errors when attempting to mount both the SMB and the NFS shares. For more information about naming restrictions, see Folder naming restrictions. |
| Username | Optional, SMB only | Enter the username for authenticating to the SMB network share if required. Usernames and passwords for SMB authentication can only use ASCII-printable characters excluding the comma. |
| Password | Optional, SMB only | Enter the password for authenticating to the SMB network share if required. Usernames and passwords for SMB authentication can only use ASCII-printable characters excluding the comma. |
| Input folder | Optional | Specify the path to the folder on the network share containing the files to be analyzed by Spectra Analyze. The folder must exist on the network share. The path specified here is relative to the root (address of the network file share). If the input folder is not configured, the root is treated as the input folder. For more information about naming restrictions, see Folder naming restrictions. |
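The address restrictions in the table above can be summarized as a pre-flight check before attempting a mount. This is a sketch of the documented restrictions only (protocol prefix required, no commas in SMB addresses, `?` and `#` can break mounting); the actual mount behavior is determined by the appliance:

```python
def check_share_address(address: str) -> list:
    """Return a list of problems with a share address, per the restrictions above."""
    problems = []
    if not (address.startswith("smb://") or address.startswith("nfs:")):
        problems.append("address must include the protocol (smb:// or nfs:)")
    if address.startswith("smb://") and "," in address:
        problems.append("commas are not allowed in SMB share addresses")
    if "?" in address or "#" in address:
        problems.append("'?' and '#' may prevent the share from mounting")
    return problems

print(check_share_address("nfs:storage.example.lan"))            # []
print(check_share_address("smb://storage.example.lan/a,b"))
```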
Network File Share connector advanced options
Advanced Options refer to actions that the connector service can perform on the files after the Spectra Analyze appliance finishes analyzing them. The connector operates and analyzes files even if these advanced options are disabled. They only control the post-analysis activities.
Advanced options can be configured for every network share individually, and the sorting criteria, folder names and folder paths can be different on each configured network share. The connector can be configured to automatically sort files into user-defined sorting folders on the network share. Files are sorted into folders based on the classification they receive during analysis.
| Field | Description |
|---|---|
| Delete source files | Allow the connector to delete files on the network share after they have been processed. |
| Enable automatic file sorting | Allow the connector to store analyzed files and sort them into folders on every configured network share based on their classification. Enabling any of these options also switches on Delete source files. For more information about naming restrictions, see Folder naming restrictions. |
| Sort Goodware files into following folder | Specify the path to the folder into which the connector stores files classified as Goodware/Known. The path specified here is relative to the address of the network file share. If the folder doesn’t already exist on the network share, it is automatically created after saving the configuration. |
| Sort Malware files into following folder | Specify the path to the folder into which the connector stores files classified as Malicious. The path specified here is relative to the address of the network file share. If the folder doesn’t already exist on the network share, it is automatically created after saving the configuration. |
| Sort Suspicious files into following folder | Specify the path to the folder into which the connector stores files classified as Suspicious. The path specified here is relative to the address of the network file share. If the folder doesn’t already exist on the network share, it is automatically created after saving the configuration. |
| Sort Unknown files into following folder | Specify the path to the folder where the connector stores files marked as no threats found or unclassified. The path specified here is relative to the address of the network file share. |
| Rescan Unknowns | If Sort Unknown files into following folder is selected, check this box to allow the connector to rescan samples that were classified as unknown. |
| Rescan Unknowns interval | If Rescan unknowns is selected, specify the interval in days between rescan attempts. Default is one day. |
Handling rescanned and renamed files
The Network File Share connector supports several scenarios involving rescanned and renamed files. The connector has the ability to automatically rename files, which allows it to handle duplicates and files manually renamed by the user. Advanced file sorting options must be configured for the connector to be able to move files after they are analyzed.
| Scenario | Result |
|---|---|
| A new file is analyzed, but a file with the same filename already exists in the output folder. Their hashes are identical. | The original file remains in the output folder. The last modified timestamp value in the file metadata is updated for the original file. Its filename remains unchanged. The new file is removed after analysis. |
| A new file with the same filename as an old file is analyzed. Their hashes are identical. However, the old file no longer exists in the output folder, or the new file has been uploaded for the first time. | The new file is saved to the output folder. Its filename remains unchanged. |
| A new file is analyzed, but a file with the same filename already exists in the output folder. Their hashes are different. | The new file is renamed and saved to the output folder. The file renaming pattern is to add (#) after the original file name. For example Name.extension would be saved as Name(1).extension, Name(2).extension, Name(3).extension, etc. |
| A file has been analyzed previously and moved into one output folder (A). Based on reanalysis, it should be moved to a different output folder (B). | The file is moved to a different output folder (B). Its filename remains unchanged. |
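The `Name(#).extension` renaming pattern from the third scenario can be sketched as follows. This is an illustration of the documented pattern, not the connector's actual code:

```python
def next_available_name(filename: str, existing: set) -> str:
    """Apply the Name(#).extension renaming pattern until a free name is found."""
    if filename not in existing:
        return filename
    stem, dot, ext = filename.rpartition(".")
    if not dot:                  # file has no extension
        stem, ext = filename, ""
    counter = 1
    while True:
        candidate = f"{stem}({counter}).{ext}" if dot else f"{stem}({counter})"
        if candidate not in existing:
            return candidate
        counter += 1

print(next_available_name("report.pdf", {"report.pdf", "report(1).pdf"}))
# report(2).pdf
```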
Azure Data Lake
The Azure Data Lake connector allows connecting up to five Azure Data Lake Gen2 containers to the appliance. When the containers are connected and mounted to the appliance, it can automatically scan them and submit files for analysis. After analyzing the files, the appliance can place them into the root of each container, or, optionally, sort the files into folders based on their classification status.
Currently, it is not possible to assign a custom name to each Azure Data Lake container, and the way to distinguish between configured containers is to look at their names.
This connector is not compatible with containers that have the Blob Soft Delete feature enabled.
Configuring Azure Data Lake containers
To add a new Azure Data Lake container:
- Make sure the connector is enabled. If not, click the Enable connector button.
- Click Add item to expand the Azure Data Lake Inputs section in the Azure Data Lake dialog and fill in the following fields.
| Field | Mandatory | Description |
|---|---|---|
| Storage account name | Mandatory | The name of the storage account. |
| Storage access key | Mandatory | The access key used for Shared Key Authentication. This value should end in ==. |
| Container name | Mandatory | Specify the name of an existing Azure Data Lake container which contains the samples to process. The name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, and dashes. Each label in the name must start with a lowercase letter or number. The name cannot contain consecutive dashes. |
| Folder | Optional | The input folder inside the specified container which contains the samples to process. All other samples are ignored. |
Azure Data Lake connector advanced options
Advanced Options refer to actions that the connector service can perform on the files after the Spectra Analyze appliance finishes analyzing them. The connector operates and analyzes files even if these advanced options are disabled. They only control the post-analysis activities.
Advanced options can be configured for every Azure Data Lake container individually, and the sorting criteria, folder names and folder paths can be different on each configured Azure Data Lake container. The connector can be configured to automatically sort files into user-defined sorting folders on the Azure Data Lake container. Files are sorted into folders based on the classification they receive during analysis.
| Field | Description |
|---|---|
| Delete source files | Allow the connector to delete files on the Azure Data Lake container after they have been processed. |
| Enable automatic file sorting | Allow the connector to store analyzed files and sort them into folders on every configured Azure Data Lake container based on their classification. Enabling any of these options also switches on Delete source files. For more information about naming restrictions, see Folder naming restrictions. |
| Sort Goodware files into following folder | Specify the path to the folder into which the connector stores files classified as Goodware/Known. The path specified here is relative to the address of the Azure Data Lake container. If the folder doesn’t already exist on the Azure Data Lake container, it is automatically created after saving the configuration. This field is mandatory when Enable automatic file sorting is selected. |
| Sort Malware files into following folder | Specify the path to the folder into which the connector stores files classified as Malicious. The path specified here is relative to the address of the Azure Data Lake container. If the folder doesn’t already exist on the Azure Data Lake container, it is automatically created after saving the configuration. This field is mandatory when Enable automatic file sorting is selected. |
| Sort Unknown files into following folder | Specify the path to the folder where the connector stores files marked as no threats found or unclassified. The path specified here is relative to the address of the Azure Data Lake container. |
| Sort Suspicious files into following folder | Specify the path to the folder into which the connector stores files classified as Suspicious. The path specified here is relative to the address of the Azure Data Lake container. If the folder doesn’t already exist on the Azure Data Lake container, it is automatically created after saving the configuration. |
IMAP - MS Exchange
The IMAP AbuseBox connector allows connecting to a Microsoft Exchange server and analyzing retrieved emails on the Spectra Analyze appliance.
To be able to use the IMAP AbuseBox connector, the following requirements must be met:
- IMAP must be enabled on the Exchange server.
- A new user account must be configured on the mail server and its credentials provided to the connector in the configuration dialog.
- A dedicated email folder must be created in the Exchange user account, and its name provided to the connector in the configuration dialog. All emails forwarded to that folder are collected by the connector and automatically sent to the appliance for analysis.
To improve performance and minimize processing delays, each email sample gets analyzed and classified only once. When automatic message filing is enabled, each email sample is moved only once, based on its first available classification.
Because of that, it is recommended you improve classification of emails with malicious attachments by enabling classification propagation and allowing the retrieval of Spectra Intelligence classification information during sample analysis instead of after. Administrators can enable these options under Administration > Configuration > Spectra Detect Processing Settings.
Configuring the Exchange user account
To configure a new connection with the Exchange user account, fill in the following fields.
| Field | Mandatory | Description |
|---|---|---|
| Server domain | Mandatory | Enter the domain or IP address of the Exchange server. The value should be an FQDN, hostname, or IP address, and must not include the protocol (for example, http). |
| Email folder | Mandatory | Enter the name of the email folder from which email messages are collected for analysis. This folder must belong to the same Exchange user account for which the credentials are configured in this section. The folder name is case-sensitive. |
| Connection type | Mandatory | Supports IMAP (Basic Authentication) and Exchange (OAuth2) methods of authentication. Depending on the selection, the next section of the form asks for different user credentials. IMAP (Basic Authentication) requires Username and Password, Exchange (OAuth2) requires Client ID, Client Secret and Tenant ID. |
| Email address | Mandatory | Enter the primary email address of the configured Exchange user account. |
| Access type | Mandatory | Delegate is used in environments where there’s a one-to-one relationship between users. Impersonation is used in environments where a single account needs to access many accounts. |
| Connect securely | Optional | Enabled by default. When enabled, the connector does not accept connections to Exchange servers with untrusted or expired certificates. |
IMAP connector advanced options
Advanced Options refer to actions that the connector service can perform on the files after the Spectra Analyze appliance finishes analyzing them. The connector operates and analyzes files even if these advanced options are disabled. They only control the post-analysis activities.
The connector can be configured to automatically sort emails into user-defined sorting folders on the Exchange user account. Emails are sorted into folders based on the classification they receive during analysis.
| Field | Description |
|---|---|
| Enable automatic message filing | Allow the connector to move analyzed emails and sort them into folders in the configured Exchange email user account. |
| Malware folder | Specify the name of the folder into which the connector stores emails classified as Malicious. If the folder doesn’t already exist on the Exchange user account, it is automatically created after saving the configuration. This field is mandatory when Enable automatic message filing is selected. Note: If Allow suspicious is not selected, emails classified as Suspicious are sorted into the Malware folder. |
| Unknown folder | Specify the name of the folder into which the connector stores emails with no malicious content detected. If the folder doesn’t already exist on the Exchange user account, it is automatically created after saving the configuration. This field is mandatory when Enable automatic message filing is selected. Note: If Allow suspicious is selected, emails classified as Suspicious are sorted into the Unknown folder. Emails classified as Goodware or No threats found are always sorted into the Unknown folder. |
| Allow suspicious | If selected, emails classified as Suspicious are sorted into the Unknown folder. If not selected, emails classified as Suspicious are sorted into the Malware folder. |
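The interaction between the Malware folder, the Unknown folder, and the Allow suspicious option can be sketched as follows. The classification strings and default folder names are assumptions drawn from the table above:

```python
def email_destination(classification: str, allow_suspicious: bool,
                      malware_folder: str = "Malware",
                      unknown_folder: str = "Unknown") -> str:
    """Choose the filing folder for an analyzed email, per the table above."""
    c = classification.lower()
    if c == "malicious":
        return malware_folder
    if c == "suspicious":
        # Allow suspicious diverts Suspicious emails to the Unknown folder.
        return unknown_folder if allow_suspicious else malware_folder
    # Goodware / No threats found always go to the Unknown folder.
    return unknown_folder

print(email_destination("Suspicious", allow_suspicious=False))  # Malware
```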
SMTP
The SMTP connector allows analyzing incoming email traffic on the appliance to protect users from malicious content. When enabled, the connector service collects emails with attachments and uploads them to the appliance for analysis. Each email message is saved as one file. If an email upload fails for any reason, the connector automatically retries it.
When the analysis is complete, each email message receives a classification status from the appliance. In this operating mode, the connector acts as an SMTP relay. Therefore, the connector should not be used as a front-end service for accepting raw email traffic, but only as a system inside an already established secure processing pipeline for SMTP email.
To allow the SMTP connector to inspect and collect email traffic, users must ensure that the SMTP traffic in their network is diverted to port 25/TCP prior to configuring the connector on the appliance.
Additional port configuration may be required on the appliance. Because it involves manually modifying configuration files, this action can cause the appliance to malfunction. Contact ReversingLabs Support for instructions and guidance.
Configuring SMTP
To configure SMTP, fill in the following fields.
| Field | Mandatory | Description |
|---|---|---|
| Profile | Mandatory | Select a profile for this connector. The available options are Default and Strict which correspond to different Postfix configuration files. For more information, see Profiles vs. Postfix configuration. |
| Authorized networks | Mandatory if Profile: Strict is selected. | With Profile: Default, this field cannot be customized; it is fixed to 0.0.0.0/0 [::]/0. With Profile: Strict, a custom value is required. For more information, see Profiles vs. Postfix configuration. |
Profiles vs. Postfix configuration
If Profile is set to Default, TLS is not enforced and any SMTP client is accepted. This corresponds to the following Postfix configuration:
mynetworks = 0.0.0.0/0 [::]/0
smtpd_tls_security_level = may
smtp_tls_security_level = may
If Profile is set to Strict, TLS is enforced and you must specify trusted SMTP clients in the Authorized networks field; its value populates the mynetworks parameter (line 1 in the example below). In Strict mode, the relevant portion of the configuration looks like this:
mynetworks = [your authorized networks]
smtpd_tls_security_level = encrypt
smtp_tls_security_level = encrypt
smtpd_tls_mandatory_ciphers = high
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_exclude_ciphers = aNULL, MD5
For specific syntax, see Postfix documentation.
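The difference between the two profiles can be checked programmatically. The following sketch is illustrative only (the helper is not part of the appliance): it parses a main.cf-style fragment and reports whether TLS is enforced in both directions, as the Strict profile requires.

```python
# Illustrative helper: parse a Postfix main.cf-style fragment and check
# whether it matches the Strict profile (TLS enforced on both sides).
# The parameter names are standard Postfix settings.

def parse_main_cf(text):
    """Return a dict of Postfix parameters from a main.cf-style fragment."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

def is_strict(params):
    """The Strict profile enforces TLS for both inbound and outbound SMTP."""
    return (params.get("smtpd_tls_security_level") == "encrypt"
            and params.get("smtp_tls_security_level") == "encrypt")

default_profile = """
mynetworks = 0.0.0.0/0 [::]/0
smtpd_tls_security_level = may
smtp_tls_security_level = may
"""

print(is_strict(parse_main_cf(default_profile)))  # False: Default only allows opportunistic TLS
```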
Use case: Accept email from any email client or server
To configure the service to accept email from any email client or server, do the following:
- Enable the SMTP connector.
- Contact ReversingLabs Support with:
- Your intended email forwarding address range.
- A formal request to enable email forwarding.
- After receiving your request, ReversingLabs does the following:
- Creates an MX DNS record for your hosted service.
- Configures receiver restrictions to accept email only from your domain.
- Once ReversingLabs has added the MX record to the DNS for your hosted service, create email forwarding rules in your email system to automatically forward emails to the hosted service for processing.
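Once forwarding is in place, you can verify the pipeline end to end by relaying a test message through the connector. The sketch below builds such a message with Python's standard library; the hostname, addresses, and attachment bytes are placeholders, and the actual send is left commented out.

```python
# Hypothetical smoke test: build an email with an attachment and (optionally)
# relay it through the SMTP connector. All names below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "analyst@example.com"
msg["To"] = "inbox@example.com"
msg["Subject"] = "Connector smoke test"
msg.set_content("Test message with an attachment for analysis.")
msg.add_attachment(b"\x00placeholder sample bytes\x00",
                   maintype="application", subtype="octet-stream",
                   filename="sample.bin")

# Uncomment to relay through the appliance (placeholder hostname, port 25/TCP):
# with smtplib.SMTP("spectra-analyze.example.com", 25) as smtp:
#     smtp.send_message(msg)
```

Because each email is saved as one file on the appliance, a single message like this should appear as a single sample after processing.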
ICAP Server
The ICAP Connector acts as an ICAP server and processes files sent through ICAP-enabled systems. Files are analyzed by Spectra Detect, and the connector applies rules based on classification, file size, and processing time to determine whether each file is allowed or blocked.
Supported topologies
The ICAP Server Connector is designed to integrate seamlessly into any ICAP deployment model, including but not limited to:
- Application Delivery Controllers (ADCs)
- Forward Proxy
- Ingress Controller
- Intrusion Prevention System
- Load Balancer
- Managed File Transfer
- Next-Generation Firewall
- Enterprise Storage Protection
- Reverse Proxy
- Secure Remote Access
- SSL Inspection and Termination
- Web Application Firewall (WAF)
- Web Gateway
- Web Traffic Security
Integration with ICAP clients
ICAP response codes
The table below lists the ICAP response codes and their meanings.
| ICAP response code | Description |
|---|---|
| 100 Continue | Signals the client to continue sending data to the ICAP server after the ICAP preview message. |
| 200 OK | The request was successfully processed. This status is returned whether the file is blocked or not. If the file was blocked, the X-Blocked header is set to True. In RESPMOD, a blocked file results in a 403 HTTP status code. |
| 204 No modification needed | Returned when the file was not blocked and the client sent an Allow: 204 header in the request. |
| 400 Bad request | Indicates a failure during file submission, analysis processing, or delivering results back to the client. |
| 404 Not found | The ICAP service could not be found. |
| 405 Method Not Allowed | The requested method is not supported on the endpoint. |
| 500 Internal Server Error | A generic server-side error occurred. |
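In practice your proxy or gateway generates the ICAP traffic, but it can help to see the shape of a request these response codes answer. The sketch below assembles a minimal RESPMOD request as raw bytes, following RFC 3507; the hostname and encapsulated HTTP response are placeholders, and the Allow: 204 header is what makes the 204 response above possible.

```python
# Illustrative only: the raw RESPMOD request an ICAP client (e.g. a proxy)
# would send to the connector's spectraconnector service. Host and file
# content are placeholders.
http_resp = b"HTTP/1.1 200 OK\r\nContent-Length: 4\r\n\r\n"
body = b"4\r\ntest\r\n0\r\n\r\n"          # chunked HTTP body, per RFC 3507

icap_request = (
    b"RESPMOD icap://appliance.example.com/spectraconnector ICAP/1.0\r\n"
    b"Host: appliance.example.com\r\n"
    b"Allow: 204\r\n"                     # permit 204 when no modification is needed
    b"Encapsulated: res-hdr=0, res-body=%d\r\n\r\n" % len(http_resp)
) + http_resp + body
```

If the file is blocked, the server responds 200 OK with the X-Blocked header set to True; if it is clean and Allow: 204 was sent, the server can answer 204 instead of echoing the body back.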
Configuring ICAP
To configure the ICAP Server connector:
- Make sure the connector is enabled. If not, click the Enable connector button.
- Fill in the following fields.
| Field | Description |
|---|---|
| Max file size | Specify the maximum file size in megabytes that the ICAP Connector processes. Files exceeding this size are not analyzed. Default is 0, which means no size limit. |
| Allow classifications | Select which classifications to allow. The available options are goodware, unknown, suspicious, malicious. Files not matching the selected classifications are blocked. |
| Service alias | The base service name is spectraconnector and cannot be changed. Here you may define any additional aliases your ICAP client can use to connect. Do not re-enter spectraconnector here. |
| Timeout | Set the timeout period in seconds for processing requests. If a file is not processed within this time, the request is terminated. Default is 300. |
| REQMOD block page URL | Request Mode (REQMOD) processes outgoing client requests before they reach the destination server. Use this mode to validate uploaded files to ensure they meet security requirements. Enter the full URL your ICAP clients fetch when a request is blocked. Note: To use the default block page, set it to https://{RL_APPLIANCE_IP}/icap-block-page and replace {RL_APPLIANCE_IP} with your appliance’s actual IP or hostname. If you host a custom block page, enter its URL here and ensure it's accessible to ICAP clients. In some deployments, RL_APPLIANCE_IP may refer to a virtual IP or a load-balancer IP. |
| RESPMOD block page | Response Mode (RESPMOD) processes server responses before they are delivered to the client. Use this mode to scan downloaded files for malware or sanitize content before delivery. Click Browse to upload a page to replace the content of the HTTP response. The uploaded file is served to the client instead of the original response from the web server. The file size must not exceed 0.5 MB. |
| Use TLS | Allow using a secure connection (TLS v1.3) when communicating with the ICAP server. |
| Scan raw data | Extract the raw HTTP message body and send it to the appliance for scanning as-is. |
CrowdStrike Falcon EDR
The CrowdStrike Falcon EDR connector integrates Spectra Analyze with your Falcon environment. It authenticates to Falcon Cloud, retrieves relevant file artifacts for analysis on the appliance, and can optionally write classification results back to Falcon events as comments.
Multiple Falcon inputs are supported per appliance.
Creating Falcon API credentials
Before configuring the connector, you must create API credentials in your Falcon console.
Go to Falcon Support and resources > API Clients and keys, click Create API client, and set the following API scopes:
| Scope | Read | Purpose | Write | Purpose |
|---|---|---|---|---|
| Alerts | ✅ | Get alert details. | ✅ | Append comment. |
| Quarantined Files | ✅ | Get quarantine file IDs, get quarantine file details. | ❌ | |
| Real Time Response (RTR) | ✅ | Open RTR session, list files in host quarantine, delete RTR session. | ✅ | Upload file from host quarantine to Falcon Cloud, get extracted file contents. |
| Event Streams | ✅ | List available event streams, subscribe to event stream. | ❌ | |
| Sensor Download | ✅ | Test connection. | ❌ | |
Once the API client is created, note its Client ID and Client Secret.
In addition to the API host, the connector needs access to CrowdStrike's firehose host.
For example, when us-2 is selected, ensure that the connector host has network access to the following:
- api.us-2.crowdstrike.com:443
- firehose.us-2.crowdstrike.com:443
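With the Client ID and Client Secret in hand, the connector authenticates via CrowdStrike's documented OAuth2 client-credentials flow. The sketch below builds (but deliberately does not send) that token request; region and credentials are placeholders.

```python
# Sketch of the OAuth2 token request made against Falcon Cloud's documented
# /oauth2/token endpoint. Region and credentials below are placeholders;
# the request is built but not sent here.
from urllib.parse import urlencode
from urllib.request import Request

region = "us-2"
token_request = Request(
    f"https://api.{region}.crowdstrike.com/oauth2/token",
    data=urlencode({"client_id": "YOUR_CLIENT_ID",
                    "client_secret": "YOUR_CLIENT_SECRET"}).encode(),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# Sending this request returns a JSON body whose "access_token" is used as
# a Bearer token on subsequent Falcon API calls.
```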
Configuring Falcon EDR
To add a new Falcon input:
- Make sure the connector is enabled. If not, click the Enable connector button.
- Click Add item to expand the Falcon EDR Inputs section in the Falcon EDR Connector dialog and fill in the following fields.
| Field | Mandatory | Description |
|---|---|---|
| Falcon cloud region | Mandatory | The region assigned to your CrowdStrike account. For example, us-1 or us-2. |
| Falcon client ID | Mandatory | The Client ID for your Falcon API credentials. |
| Falcon client secret | Mandatory | The Client Secret for your Falcon API credentials. |
| Falcon application ID | Mandatory | The Application ID used by this integration. Default is falcon-connector. If multiple connector instances are used under the same CrowdStrike tenant, each must have a unique Application ID. |
| Append Spectra Analyze results to EDR events | Mandatory | Enabled by default. When enabled, Spectra Analyze classification details are added as comments to related Falcon events. Analysts can view verdicts and risk information directly in CrowdStrike without switching tools. The connector operates and analyzes files even if this option is disabled. The option only controls whether results are written back to Falcon. |
Cortex XDR
Configuring Cortex XDR
Ensure the correct permissions are configured within Cortex XDR. For the Cortex XDR file retrieval integration, set the following permissions:
- Alerts & Incidents: View/Edit
- Action Center: View/Edit (File Retrieval, File Search)
- Endpoint Groups: View
To add a new Cortex XDR input:
- Make sure the connector is enabled. If not, click the Enable connector button.
- Click Add item to expand the Cortex XDR Inputs section in the Cortex XDR Connector dialog and fill in the following fields.
| Field | Description |
|---|---|
| Cortex XDR host | The Cortex XDR host instance. |
| Cortex XDR API key | The Cortex XDR API key. |
| Cortex XDR Auth ID | The Authentication ID for the Cortex XDR integration. |
| Endpoint groups | Add multiple groups by separating them with the enter or tab key. Leave empty to include all groups. |
| Endpoint names | Add multiple names by separating them with the enter or tab key. Leave empty to include all names. |
| Append Spectra Analyze results to Cortex XDR alerts | Enabled by default. When enabled, Spectra Analyze classification details are added as comments to related Cortex XDR alerts. |
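The API key and Auth ID fields above map onto Cortex XDR's documented standard-key authentication scheme: the key travels in the Authorization header and its numeric ID in x-xdr-auth-id. The sketch below shows those headers; the host, key, and endpoint path are placeholders.

```python
# Sketch (standard-key auth) of the headers Cortex XDR's public API expects.
# Host, key, and auth ID are placeholders; the path shown follows the
# public_api/v1 pattern used by Cortex XDR's documented endpoints.
API_KEY = "YOUR_API_KEY"
AUTH_ID = "1"                                      # the ID shown next to the key
host = "api-example.xdr.us.paloaltonetworks.com"   # placeholder tenant FQDN

endpoint = f"https://{host}/public_api/v1/endpoints/get_endpoint/"
headers = {
    "Authorization": API_KEY,
    "x-xdr-auth-id": AUTH_ID,
    "Content-Type": "application/json",
}
# e.g. requests.post(endpoint, headers=headers, json={"request_data": {}})
```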
SentinelOne EDR
Configuring SentinelOne EDR
To add a new SentinelOne EDR input:
- Make sure the connector is enabled. If not, click the Enable connector button.
- Click Add item to expand the SentinelOne EDR Inputs section in the SentinelOne EDR Connector dialog and fill in the following fields.
| Field | Description |
|---|---|
| SentinelOne EDR Host | The SentinelOne EDR host instance. |
| SentinelOne EDR API key | The SentinelOne EDR API key. |
| Agent Node Names | Add multiple Agent Node Names by separating them using the enter or tab key. Leave empty to include all agents. |
| Append Spectra Analyze results to SentinelOne EDR Alerts | When enabled, Spectra Analyze classification details are added as comments to the related SentinelOne EDR alerts. |
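The SentinelOne EDR API key field corresponds to the ApiToken scheme used by SentinelOne's documented management API. The sketch below shows the resulting header; the console host, key, and example endpoint are placeholders.

```python
# Sketch of the authentication header the SentinelOne management API uses:
# the API key is passed as "Authorization: ApiToken <key>". Host and key
# are placeholders; the threats endpoint follows SentinelOne's v2.1 API path.
API_KEY = "YOUR_API_KEY"
host = "example.sentinelone.net"                   # placeholder console host

endpoint = f"https://{host}/web/api/v2.1/threats"
headers = {"Authorization": f"ApiToken {API_KEY}"}
# e.g. requests.get(endpoint, headers=headers, params={"limit": 10})
```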