Connector Configuration

Connectors allow users to automatically retrieve a large number of files from external sources such as email or S3 buckets. Users can configure this service from the Appliance status page. Click on a connected Spectra Analyze or Spectra Detect Hub, then click the Connectors button.

Note

The Hub must belong to a Hub group with at least one Worker.

If a connector is disabled or if it has not been previously configured on the appliance, the dialog contains only the Enable connector button. Click the button to start configuring the connector. Never use the same folder for both input and output of files.

Starting the Connector

When the configuration is complete, click Start connector at the bottom of the page. This will initiate the connector service on the appliance.

After starting, the connector service connects to the configured user account, automatically retrieves files from it, and submits them for analysis on the appliance.

Each file retrieved via the connector has a set of User Tags automatically assigned to it. Those User Tags are based on the file metadata, and can contain information about the file source, the last modification time in the original location, file permissions, email subject, recipient and sender addresses, and more.

If advanced options are not enabled, the connector service will not perform any additional actions on the files retrieved from the server after the appliance finishes analyzing them. Users can see the analysis results for each file on its Sample Details page.

After providing the required information, click Test connection to verify that the appliance can access the configured service storage. When the button is clicked, the appliance attempts to connect and mount the service storage.

To remove all configured settings for the current service storage, click Remove item.

To add another service storage, click Add item. Up to five units of storage can be added this way. If there are already five units of storage connected to the appliance, at least one must be removed by clicking Remove item before adding a new one. Note that for the S3 and Azure Data Lake connectors, this limit is 20.

Pausing and Disabling the Connector

While the connector service is active, the Start connector button changes into Pause connector. Clicking this button temporarily halts the connector service. The connector service records the last state and is able to resume scanning when Start connector is clicked again.

While the connector is running, it is possible to modify its configuration and save it by clicking Save changes without having to pause or disable the connector.

If the connector service is active during a scheduled or manually executed Purge action, the system will automatically stop the service before performing the Purge action, and start it after the Purge action is complete.

To disable the entire connector service on the appliance, click Disable connector at the bottom of the page. When the connector is disabled, it will not be possible to reconfigure, start, or pause it until the service is enabled again.

Note that the current connector configuration will be preserved and restored when the service is enabled again. Likewise, all files that have been retrieved and analyzed by Spectra Analyze will remain on the appliance.

All files retrieved from the server and analyzed on the appliance are accessible to Spectra Analyze users from the Submissions page. They are distinguished from other files by a unique username specific to each connector.

Spectra Detect Workers will follow the default data retention policy. All processed files are deleted immediately after processing. If a file is queued but not processed within 9 hours, the processing task will be canceled (and the file deleted) but the record of the unsuccessful task will still be present in the database for 24 hours. All file processing results are retained until deleted by the user, or for 9 hours after processing (whichever comes first).

IMAP - MS Exchange - AbuseBox Connector

The IMAP - MS Exchange - AbuseBox connector allows connecting to a Microsoft Exchange server and analyzing retrieved emails on the appliance.

Requirements

  • IMAP must be enabled on the Exchange server.

  • A new user account must be configured on the mail server and its credentials provided to the connector in the configuration dialog.

  • A dedicated email folder must be created in the Exchange user account, and its name provided to the connector in the configuration dialog. All emails forwarded to that folder are collected by the connector and automatically sent to the appliance for analysis.

When the analysis is complete, email samples with detected threats will get classified as malicious and, if the automatic message filing option is enabled, moved to the specified Malware folder.

Emails with no detected malicious content do not get classified. They can optionally be moved to the specified Unknown folder on the configured Exchange user account.

To improve performance and minimize processing delays on Spectra Analyze, each email sample will get analyzed and classified only once. When the Automatic message filing option is enabled, each email sample is moved only once, based on its first available classification.

Because of that, it is recommended to enable classification propagation and allow retrieving Spectra Intelligence classification information during sample analysis instead of after. Administrators can enable these two options in the Administration ‣ Configuration ‣ Spectra Detect Processing Settings dialog. This will improve classification of emails with malicious attachments. Workers do this by default and no configuration is necessary.

The connector can be configured to automatically sort emails after analysis into user-defined email folders in the configured Exchange user account.

Configuring the Exchange user account

To configure the connection with the Exchange user account:

  • make sure the connector is enabled

  • fill in the fields in the Exchange setup section of the Email AbuseBox Connector dialog.

Exchange setup

Server domain

Mandatory

Enter the domain or IP address of the Exchange server. The value should be an FQDN, hostname, or IP address. It should not include the protocol (e.g., http).

Email folder

Mandatory

Enter the name of the email folder from which the email messages will be collected for analysis. This folder must belong to the same Exchange user account for which the credentials are configured in this section. The folder name is case-sensitive.

Connection Type

Mandatory

Supports IMAP (Basic Authentication) and Exchange (OAuth 2) methods of authentication. Depending on the selection, the next section of the form will ask for different user credentials: Basic Authentication asks for a username and password, OAuth 2 asks for a Client ID, Client Secret and Tenant ID.

Email address

Mandatory

Enter the primary email address of the configured Exchange user account.

Access Type

Mandatory

Delegate is used in environments where there’s a one-to-one relationship between users. Impersonation is used in environments where a single account needs to access many accounts.

Connect securely

Optional

If selected, the connector will not accept connections to Exchange servers with untrusted or expired certificates.

Note that the connector will operate and analyze emails even if these advanced options are disabled. They only control the post-analysis activities.

Network File Share Connector

Supported on Spectra Analyze and Spectra Detect Hub appliances

The Network File Share connector allows connecting up to five shared network resources to the appliance. Once the network shares are connected and mounted to the appliance, it can automatically scan the network shares and submit files for analysis. After analyzing the files, the appliance can optionally sort the files into folders on the network share based on their classification status.

The Network File Share connector supports SMB and NFS file sharing protocols.

Currently, it is not possible to assign a custom name to each network share. The only way to distinguish between configured network shares is to look at their addresses. If there are three configured network shares, and network share 2 is removed, the previous network share 3 will automatically “move up” in the list and become network share 2.


Configuring Network Shares

To add a new network share to the appliance, expand the Shares section in the Network File Share Connector dialog and fill in the relevant fields.

SHARES

Address

Mandatory

Enter the address of the shared network resource that will be mounted to the appliance. The address must include the protocol (SMB or NFS). Leading slashes are not required for NFS shares (example: nfs:storage.example.lan). The address can point to the entire network drive, or to a specific folder (example: smb://storage.example.lan/samples/collection). When the input folder and/or sorting folders are configured, their paths are treated as relative to the address configured here.


Note: If the address contains special characters, it may not be possible to mount the share to the appliance. The comma character cannot be used in the address for SMB shares. Some combinations of ? and # will result in errors when attempting to mount both the SMB and the NFS shares.
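The address rules above can be captured in a small pre-flight check. The helper below is a hypothetical sketch for illustration, not part of the appliance:

```python
def check_share_address(address: str) -> bool:
    """Hypothetical pre-flight check for network share addresses.

    Mirrors the documented rules: the address must include the protocol
    (SMB or NFS), leading slashes are optional for NFS shares, and the
    comma character cannot be used in SMB addresses.
    """
    if address.startswith("smb://"):
        # SMB: protocol required, commas not allowed in the address.
        return "," not in address[len("smb://"):]
    if address.startswith("nfs:"):
        # NFS: leading slashes after the protocol are optional.
        return True
    # No recognized protocol prefix.
    return False
```

For example, smb://storage.example.lan/samples/collection and nfs:storage.example.lan both pass, while an address without a protocol does not.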

Username

Optional, SMB only

Enter the username for authenticating to the SMB network share (if required). Usernames and passwords for SMB authentication can only use a limited range of characters (ASCII-printable characters excluding the comma).

Password

Optional, SMB only

Enter the password for authenticating to the SMB network share (if required). Usernames and passwords for SMB authentication can only use a limited range of characters (ASCII-printable characters excluding the comma).

Input folder

Optional

Specify the path to the folder on the network share containing the files to be analyzed by the appliance. The folder must exist on the network share. The path specified here is relative to the root (address of the network file share). If the input folder is not configured, the root is treated as the input folder.

The service will continually scan the network shares for new files (approximately every 5 minutes). If any of the existing files on the network share has changed since the last scan, it will be treated as a new file and analyzed again.
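The scan behaviour described above (a modified file is treated as new) can be sketched with a modification-time cache. The `find_new_or_changed` helper is illustrative only and assumes the modification time is an adequate change signal:

```python
import os

def find_new_or_changed(root: str, last_seen: dict) -> list:
    """Sketch of the documented scan behaviour: a file is picked up if it
    was not seen before, or if its modification time changed since the
    last scan. `last_seen` maps paths to mtimes and is updated in place.
    """
    picked = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if last_seen.get(path) != mtime:
                picked.append(path)
                last_seen[path] = mtime
    return picked
```

Running this periodically (the appliance scans approximately every 5 minutes) yields only the files that need to be submitted for analysis.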

Microsoft Cloud Storage: Azure Data Lake

The Azure Data Lake connector allows connecting up to 20 Azure Data Lake Gen2 containers to the appliance. When the containers are connected and mounted to the appliance, it can automatically scan them and submit files for analysis. The files can be placed into the root of each container, or into an optional folder in each of the containers.

Important

This connector is not compatible with containers that have the Blob Soft Delete feature enabled.

After analyzing the files, the appliance can optionally sort the files into folders on the container based on their classification status.

Currently, it is not possible to assign a custom name to each data lake input. The only way to distinguish between configured containers is to look at their names. If there are three configured data lake inputs, and input 2 is removed, the previous input 3 will automatically “move up” in the list and become input 2.

Configuring Azure Data Lake containers

To add a new Azure Data Lake container:

  • make sure the connector is enabled

  • expand the Azure Data Lake Inputs section in the Azure data lake dialog and fill in the relevant fields.

Azure Data Lake Inputs

Storage account name

Mandatory

The name of the storage account.

Storage access key

Mandatory

The access key used for Shared Key Authentication. This value should end in ==

Container

Mandatory

Specify the name of an existing Azure Data Lake container which contains the samples to process. The value must start and end with a letter or number and must contain only letters, numbers, and the dash (-) character. Consecutive dashes are not permitted. All letters must be lowercase. The value must have between 3 and 63 characters.
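The naming rules above can be expressed as a single regular expression. The `is_valid_container_name` helper below is an illustrative sketch of those constraints, not appliance code:

```python
import re

# One leading alphanumeric, then alphanumerics optionally preceded by a
# single dash (this also rules out consecutive dashes and a trailing dash).
_CONTAINER_RE = re.compile(r"^[a-z0-9](?:-?[a-z0-9])*$")

def is_valid_container_name(name: str) -> bool:
    """Sketch of the documented container naming rules: 3-63 characters,
    lowercase letters, numbers, and non-consecutive dashes only, starting
    and ending with a letter or number."""
    return 3 <= len(name) <= 63 and bool(_CONTAINER_RE.match(name))
```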

Folder

Optional

The input folder inside the specified container which contains the samples to process. All other samples will be ignored.

Microsoft Cloud Storage: OneDrive / SharePoint Online

The Microsoft Cloud Storage connector allows connecting up to five OneDrive or SharePoint storages to the appliance. When the storages are connected and mounted to the appliance, it can automatically scan them and submit files for analysis. The files can be placed into the root of each storage, or into an optional subfolder.

After analyzing the files, the appliance can optionally sort the files into folders based on their classification status.

Configuring File Storage Inputs

To add a new File Storage Input:

  • make sure the connector is enabled

  • expand the File Storage Inputs section in the Microsoft Cloud Storage dialog and fill in the relevant fields.

OAuth2 Configuration

Host

Host of the OAuth2 authentication server.

Client ID

Identifier value of the OAuth2 client.

Client Secret

The value used by the service to authenticate to the authorization server.

Scopes

When the user is signed in, these values dictate the type of permissions the Microsoft Cloud Storage connector needs in order to function. Provide one or more OAuth 2.0 scopes that should be requested during login. Separate multiple values with commas.

Auth URL

Authentication endpoint for OAuth2.

Token URL

Link to access token for OAuth2 after the authorization.

Resource

The server that hosts the needed resources.

Source

A choice between OneDrive and Online Sharepoint.

Drive ID

Identifier value for the drive when OneDrive source option is chosen.

Sharepoint Site

A dropdown which appears when the Online Sharepoint source option is chosen.

Folder

The input folder which contains the samples to process. All other samples will be ignored.

AWS S3 Connector

The S3 connector allows connecting up to 20 S3 buckets to the appliance. When the buckets are connected and mounted to the appliance, it can automatically scan the buckets and submit files for analysis. The files can be placed into the root of each bucket, or into an optional folder in each of the buckets.

After analyzing the files, the appliance can optionally sort the files into folders on the S3 bucket based on their classification status.

Currently, it is not possible to assign a custom name to each S3 file storage input. The only way to distinguish between configured buckets is to look at their names. If there are three configured S3 file storage inputs, and input 2 is removed, the previous input 3 will automatically “move up” in the list and become input 2.

Configuring S3 Buckets

To add a new S3 bucket:

  • make sure the connector is enabled

  • expand the S3 File Storage Inputs section in the S3 dialog and fill in the relevant fields.

S3 File Storage Inputs

AWS S3 access key ID

Mandatory

The access key ID for AWS S3 account authentication. In cases where the appliance is hosted by ReversingLabs and Role ARN is used, this value will be provided by ReversingLabs.

AWS S3 secret access key

Mandatory

The secret access key for AWS S3 account authentication. In cases where the appliance is hosted by ReversingLabs and Role ARN is used, this value will be provided by ReversingLabs.

AWS S3 region

Mandatory

Specify the correct AWS geographical region where the S3 bucket is located. This parameter is ignored for non-AWS setups.

Enable Role ARN

Optional

Enables or disables authentication using an external AWS role. This allows customers to use the connector without forwarding their access keys between services. The IAM role which will be used to obtain temporary tokens has to be created for the connector in the AWS console. These temporary tokens allow ingesting files from S3 buckets without using the customer secret access key. If enabled, more configuration options are exposed below.

Role ARN

Mandatory and visible only if Role ARN is enabled

The role ARN created using the external role ID and an Amazon ID. In other words, the ARN which allows the appliance to obtain a temporary token, which then allows it to connect to S3 buckets without using the customer secret access key.

External ID

Mandatory and visible only if Role ARN is enabled

The external ID of the role that will be assumed. Usually, it’s an ID provided by the entity which owns the S3 bucket. The owner of that bucket takes the AWS Account ID of the hosting ReversingLabs account and builds an ARN with it.

Role session name

Mandatory and visible only if Role ARN is enabled

Name of the session visible in AWS logs. Can be any string.

ARN token duration

Mandatory and visible only if Role ARN is enabled

How long before the authentication token expires and is refreshed. The minimum value is 900 seconds.
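To illustrate how the Role ARN fields fit together, the hypothetical helper below assembles them into the shape of an AWS STS AssumeRole request (the kind made, for example, via boto3's sts.assume_role) and enforces the documented 900-second minimum. It performs no AWS call and is not part of the connector:

```python
def build_assume_role_params(role_arn: str, external_id: str,
                             session_name: str, duration_s: int) -> dict:
    """Map the Role ARN configuration fields onto STS AssumeRole
    parameters. The ARN token duration has a documented minimum of
    900 seconds."""
    if duration_s < 900:
        raise ValueError("ARN token duration must be at least 900 seconds")
    return {
        "RoleArn": role_arn,            # Role ARN field
        "ExternalId": external_id,      # External ID field
        "RoleSessionName": session_name, # Role session name (visible in AWS logs)
        "DurationSeconds": duration_s,  # ARN token duration field
    }
```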

AWS S3 bucket

Mandatory

Specify the name of an existing S3 bucket which contains the samples to process. The bucket name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes. Each label in the bucket name must start with a lowercase letter or number. The bucket name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods. The bucket name cannot be formatted as an IP address.
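As a sketch, the bucket naming rules above can be checked with a label-based regular expression. `is_valid_bucket_name` is hypothetical validation code, not part of the connector:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Sketch of the documented S3 bucket naming rules."""
    if not 3 <= len(name) <= 63:
        return False
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False  # must not be formatted as an IP address
    # Each dot-separated label starts with a lowercase letter or number,
    # contains only lowercase letters, numbers, and dashes, and does not
    # end with a dash (which also rules out dashes adjacent to periods).
    # Empty labels are impossible, so consecutive periods are rejected.
    label = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
    return re.fullmatch(rf"{label}(?:\.{label})*", name) is not None
```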

Processing Priority

Mandatory

Assign a priority for processing files from an S3 bucket on a scale of 1 (highest) to 5 (lowest). Multiple buckets may share the same priority. The default value is 5.

AWS S3 folder

Optional

The input folder inside the specified bucket which contains the samples to process. All other samples will be ignored. The folder name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes. Each label in the folder name must start with a lowercase letter or number. The folder name cannot contain underscores, end with a dash, have consecutive periods, or use dashes adjacent to periods. The folder name cannot be formatted as an IP address. If the folder is not configured, the root of the bucket is treated as the input folder.

S3 endpoint URL

Optional

Enter a custom S3 endpoint URL. Specifying the protocol is optional. Leave empty if using standard AWS S3.

Server Side Encryption Type

Optional

Leave blank unless the bucket policy enforces SSE headers to be sent to S3. Valid options are “AES256” and “aws:kms”.

Connect securely

Optional

If selected, the connector will not accept connections to S3 buckets with untrusted or expired certificates. This setting only applies when a custom S3 endpoint is used.

SMTP Connector

The SMTP connector allows analyzing incoming email traffic on the appliance to protect users from malicious content. When enabled, the connector service collects emails (with attachments) and uploads them to the appliance for analysis. Each email message is saved as one file. If email uploading fails for any reason, the connector automatically retries to upload it to the appliance.

When the analysis is complete, each email message receives a classification status from the appliance. In this operating mode, the connector acts as an SMTP Relay. Therefore, the connector should not be used as a front-end service for accepting raw email traffic, but only as a system inside an already established secure processing pipeline for SMTP email.

To allow the SMTP connector to inspect and collect email traffic, users must ensure that the SMTP traffic in their network is diverted to port 25/TCP prior to configuring the connector on the appliance.

Warning

Additional port configuration may be required on the appliance. Because it involves manually modifying configuration files, this action can cause the appliance to malfunction. Contact ReversingLabs Support for instructions and guidance.

Profiles

There are two profiles for this connector: Default and Strict. These two profiles correspond to different Postfix configuration files. Clicking the underlined See How It Affects Postfix Config text will display a pop-up modal with a detailed Default and Strict profile Postfix configuration.

The Default profile does not enforce TLS traffic and accepts any SMTP client. This corresponds to the following Postfix configuration:

mynetworks = 0.0.0.0/0 [::]/0
smtpd_tls_security_level = may
smtp_tls_security_level = may

The Strict profile enforces TLS and also lets you specify trusted SMTP clients via the mynetworks line in the example below (see the Postfix documentation for the specific syntax). The relevant portion of the configuration looks like this in Strict mode:

mynetworks = 0.0.0.0/0 [::]/0
smtpd_tls_security_level = encrypt
smtp_tls_security_level = encrypt
smtpd_tls_mandatory_ciphers = high
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_exclude_ciphers = aNULL, MD5

Starting the Connector

After the connector is enabled, click the Start connector button. This will initiate the connector service on the appliance.

Pausing and Disabling the Connector

While the connector service is active, the Start connector button changes into Pause connector. Clicking this button temporarily halts the connector service, which in turn stops receiving and analyzing new email traffic. The connector service records the last state and is able to resume scanning when Start connector is clicked again.

If the connector service is active during a scheduled or manually executed Purge action, the system will automatically stop the service before performing the Purge action, and start it after the Purge action is complete.

To disable the entire connector service on the appliance, click Disable connector at the bottom of the page. When the connector is disabled, it will not be possible to reconfigure, start, or pause it until the service is enabled again.

Using Advanced Connector Options

In addition to the main options for every connector service, users can set advanced options.

Advanced options for a connector refer to actions that the connector service can perform on files after the appliance finishes analyzing them.

Specifically, the connector can be configured to automatically sort files into user-defined sorting folders on the connector user account. Files are sorted into folders based on the classification status they receive during analysis (malicious, suspicious, known, unknown).

For Azure Data Lake, Network File Share, and S3 Connectors, advanced options can be configured for every storage unit individually. This means that the sorting criteria, folder names, and folder paths can be different on each configured storage unit.

IMAP - MS Exchange - AbuseBox Connector

Advanced options (AbuseBox)

Enable automatic message filing

Selecting the checkbox will allow the connector to move analyzed emails and sort them into email folders in the configured Exchange email user account. This checkbox toggles the availability of other options in the Advanced Options section.

Malware folder

Specify the name of the email folder into which the connector will store emails classified as “Malicious” (malware). This folder will be created if it doesn’t exist. This field is mandatory when Enable automatic message filing is selected.

Unknown folder

Specify the name of the email folder into which the connector will store emails with no malicious content detected (classified as Known, or not classified at all = Unknown). This folder will be created if it doesn’t exist. This field is mandatory when Enable automatic message filing is selected.

Allow suspicious

When selected, emails classified as “Suspicious” will be moved to the configured Unknown folder. If this checkbox is not selected, files classified as “Suspicious” will by default be sorted into the configured Malware folder.

Azure Data Lake, Microsoft Cloud Storage, S3 Connectors

Advanced options (Azure Data Lake, Microsoft Cloud Storage, S3)

Enable Same Hash Rescan

Selecting the checkbox will force the connector to rescan samples that share the same hash. This checkbox can only be enabled for the S3 connector.

Delete source files

Selecting the checkbox will allow the connector to delete source files on the connector storage after they have been processed.

Enable automatic file sorting

Selecting the checkbox will allow the connector to store analyzed files and sort them into folders based on their classification. The connector usually skips files that have already been uploaded; if this option is enabled, such files will be uploaded to the Worker again. On Spectra Analyze, files are skipped in both cases.

Sort Malware detected by Microsoft Cloud Storage

If enabled, the samples identified as malware by Microsoft Cloud Storage will be moved to the Malware folder. These samples are not processed by Spectra Detect. This checkbox can only be enabled for the Microsoft Cloud Storage connector.

Goodware folder

Specify the path to the folder into which the connector will store files classified as “Known” (goodware). This field is mandatory when Enable automatic file sorting is selected. The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the service, it will be automatically created after saving the configuration.

Malware folder

Specify the path to the folder into which the connector will store files classified as “Malicious” (malware). This field is mandatory when Enable automatic file sorting is selected. The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the service, it will be automatically created after saving the configuration.

Unknown folder

Specify the path to the folder into which the connector will store files without classification (“Unknown” status). The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the service, it will be automatically created after saving the configuration.

Suspicious folder

Specify the path to the folder into which the connector will store files classified as “Suspicious”. The path specified here is relative to the address of the connector storage unit. If the folder doesn’t already exist on the service, it will be automatically created after saving the configuration.

Network File Share Connector

Advanced options (Network File Share)

Delete source files

Selecting the checkbox will allow the connector to delete source files on the network share after they have been processed.

Enable automatic file sorting

Selecting the checkbox will allow the connector to store analyzed files and sort them into folders on every configured network share based on their classification status. This checkbox toggles the availability of other options in the Advanced Options section.

Goodware folder

Specify the path to the folder into which the connector will store files classified as “Known” (goodware). This field is mandatory when Enable automatic file sorting is selected. The path specified here is relative to the address of the network file share. If the folder doesn’t already exist on the network share, it will be automatically created after saving the configuration.

Malware folder

Specify the path to the folder into which the connector will store files classified as “Malicious” (malware). This field is mandatory when Enable automatic file sorting is selected. The path specified here is relative to the address of the network file share. If the folder doesn’t already exist on the network share, it will be automatically created after saving the configuration.

Unknown folder

Specify the path to the folder into which the connector will store files without classification (“Unknown” status). The path specified here is relative to the address of the network file share. If this field is left empty, unknown files will be stored either to the Goodware or to the Malware folder, depending on the “Allow unknown” setting.

Known threshold

Files classified as “Known” (goodware) with a trust factor higher than the value configured here will be stored into the configured Malware folder. “Known” files with a trust factor less than or equal to the configured value will be stored into the configured Goodware folder. Supported values are 0 to 5. The default is 5 (saves all to the Goodware folder). This field is mandatory when Enable automatic file sorting is selected.

Allow unknown

When selected, files with the “Unknown” classification status are stored into the configured Goodware folder. If this checkbox is not selected, files with the “Unknown” status are either stored into the Unknown folder (if the Unknown folder is configured), or to the Malware folder (if the Unknown folder is not configured).

Allow suspicious

When selected, files classified as “Suspicious” will be stored into the configured Goodware folder. If this checkbox is not selected, files classified as “Suspicious” will be stored into the configured Malware folder.
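Taken together, the sorting options above amount to a small decision table. The `sorting_folder` function below is an illustrative sketch of that logic, not appliance code:

```python
def sorting_folder(classification: str, trust_factor: int = 0,
                   known_threshold: int = 5, allow_unknown: bool = False,
                   allow_suspicious: bool = False,
                   unknown_folder_configured: bool = False) -> str:
    """Return the destination folder for a file under the documented
    Network File Share sorting rules."""
    if classification == "malicious":
        return "Malware"
    if classification == "known":
        # Trust factor above the Known threshold goes to Malware.
        return "Malware" if trust_factor > known_threshold else "Goodware"
    if classification == "suspicious":
        return "Goodware" if allow_suspicious else "Malware"
    # Unknown classification:
    if allow_unknown:
        return "Goodware"
    return "Unknown" if unknown_folder_configured else "Malware"
```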

Global Configuration

In addition to every connector service having specific configuration settings, there is a Global Configuration section at the bottom of every connector page. These settings apply to all configured connectors.

Global Configuration

Save files that had encountered errors during processing

Original files that were not successfully uploaded will be saved to /data/connectors/connector-[CONNECTOR_SHORTNAME]/error_files/

Max upload retries

The number of times the connector will attempt to upload a file to the processing appliance. Upon reaching the retry limit, the file will either be saved to the error_files/ destination or discarded.

Max upload timeout

The period (in seconds) between upload attempts for a sample being re-uploaded.

Upload algorithm

The algorithm used for managing delays between attempts to reupload samples. With Exponential backoff, the delay starts at the Max upload timeout value and doubles with each attempt, up to a maximum of 5 minutes. Linear backoff always uses the Max upload timeout value as the period between reuploads.
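One plausible reading of the two algorithms is sketched below; `retry_delays` is illustrative and assumes the first delay equals the Max upload timeout value before doubling begins:

```python
def retry_delays(max_upload_timeout: int, retries: int,
                 algorithm: str = "exponential") -> list:
    """Compute the sequence of reupload delays (in seconds) for the
    documented backoff algorithms, capping each delay at 5 minutes."""
    delays = []
    delay = max_upload_timeout
    for _ in range(retries):
        delays.append(min(delay, 300))
        if algorithm == "exponential":
            # Double the delay, up to the 5-minute (300 s) maximum.
            delay = min(delay * 2, 300)
    return delays
```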

Max upload delay

In case the Worker cluster is under high load, this parameter is used to delay any new upload to the cluster. The delay parameter is multiplied by an internal factor determined by the load on the appliance. If set to 0, the delay is disabled.

Database cleanup period

Specifies the number of days for which the data will be preserved.

Database cleanup interval

Specifies the interval (in seconds) at which the database cleanup is performed.

Max File Size (MB)

The maximum file size the connector will transfer to the appliance for analysis. Setting it to 0 disables the option. (Available for AWS S3 and Azure Data Lake Connectors)

Unique Usernames

Unique Usernames

Connector                        Unique Username

Email AbuseBox Connector         abusebox_connector
Azure Data Lake Connector        azure-data-lake_connector
Network File Share Connector     fileshare_connector
Microsoft Cloud Storage          graph-api-connector
S3 Connector                     s3_connector
SMTP Connector                   smtp_connector