Troubleshooting
This guide covers common issues with File Inspection Engine (FIE) and the steps to resolve them.
Container exits immediately on startup
Symptom
The FIE container starts and exits within a few seconds. docker ps shows it in an Exited state. The container never becomes ready to accept requests.
Cause
- A fatal configuration error occurred during initialization, such as a missing or malformed RL_LICENSE environment variable.
- A required CLI flag has an invalid value (for example, an invalid duration format for --timeout).
- The container cannot bind to the configured HTTP port because it is already in use.
- The static analysis engine (Spectra Core) failed to initialize due to insufficient resources.
Solution
- Inspect the container logs immediately after exit:
docker logs <container-name>
or for a Kubernetes pod:
kubectl logs <pod-name> --previous -n <namespace>
- Look for startup error messages. A missing license produces:
FATAL: License validation failed
Confirm that the RL_LICENSE environment variable is set and contains the full license file content:
docker run -e RL_LICENSE="$(cat /path/to/license.lic)" \
registry.reversinglabs.com/fie/file-inspection-engine:<version>
- Check for port conflicts if the log shows a bind error:
Error: listen tcp :8000: bind: address already in use
Change the host port mapping or stop the process occupying the port:
docker run -p 9000:8000 ... # Map to a different host port
- Review the Configuration Reference to verify that all provided flags use correct formats. Boolean flags require =true or =false when explicitly set (for example, --cloud-updates=false, not --cloud-updates false).
- For Kubernetes deployments, check the Helm values for misconfigured environment variables or resource limits that are too low. See the Helm Values Reference.
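When the container exits before logging anything useful, the exit code recorded by Docker narrows the cause. A minimal triage sketch, based on the standard Docker exit-code conventions (137 = SIGKILL, often the OOM killer; 125 = docker run itself failed); the helper name is illustrative:

```shell
# Fetch the code first with: docker inspect --format '{{.State.ExitCode}}' <container-name>
explain_exit_code() {
  case "$1" in
    0)   echo "clean exit" ;;
    125) echo "docker run itself failed (bad flag or image name)" ;;
    137) echo "SIGKILL (128+9), often the kernel OOM killer" ;;
    143) echo "SIGTERM (128+15), stopped externally" ;;
    *)   echo "application error, check the container logs" ;;
  esac
}
```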
/readyz returns 503 — not ready
Symptom
After starting the container, polling the readiness endpoint returns a non-200 status:
curl http://localhost:8000/readyz
# Returns 503 or another 5xx/4xx status
The container is running but not accepting file submissions.
Cause
- Threat data has not finished downloading. FIE requires the threat database to be available before it becomes ready, as described in Starting the File Inspection Engine.
- All Spectra Core instances are currently busy (high load or concurrency limit reached).
- The license is invalid or has expired, preventing the engine from completing initialization.
Solution
- Check the container logs for readiness-related messages:
docker logs <container-name> -f
On first startup, you will see threat data download progress. Wait for the download to complete. The container logs Instance is ready when at least one analysis instance is available:
{"level":"info","process":"fie","instance_id":"core-regular-0.abc12","message":"Instance is ready"}
- Check the /status endpoint for more detail on the current state:
curl http://localhost:8000/status
Review the license.valid_until field and the spectra_core.available_regular_cores field. If available_regular_cores shows 0/N, all instances are busy or failed to initialize.
- If the license has expired, update the RL_LICENSE environment variable and restart the container. See License validation error on startup.
- For the /readyz endpoint behavior when under load, see Request Rejection. The endpoint returns a non-200 status when memory or concurrency limits are exceeded — this is expected behavior, not a fault.
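In scripts and CI pipelines, polling /readyz by hand is tedious. A small wait-loop sketch, assuming only that /readyz returns HTTP 200 once ready; the function name and parameters are illustrative:

```shell
# Poll a readiness URL until it returns HTTP 200, or give up after
# `attempts` tries, sleeping `delay` seconds between them.
wait_ready() {
  url="$1"; attempts="$2"; delay="$3"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example: wait_ready http://localhost:8000/readyz 60 5
```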
Threat database download fails or is slow
Symptom
The container starts but remains in a not-ready state for a long time. Logs show repeated download failures or slow progress:
{"level":"warn","process":"fie","message":"Failed to download threat data segment, retrying"}
Or the threat data download does not start at all.
Cause
- Outbound HTTPS connectivity from the container to the ReversingLabs update infrastructure is blocked by a firewall or requires a proxy.
- The license does not include the appropriate threat data entitlement.
- The --without-malicious-threat-data flag is not set, but network access to the update server is unavailable.
- The container's DNS is not resolving the update server hostname.
Solution
- Test outbound connectivity from inside the container:
docker exec <container-name> curl -I https://updates.reversinglabs.com
If this fails, the container cannot reach the update server. Check firewall egress rules and ensure that outbound HTTPS (port 443) is permitted.
- If a proxy is required, configure it using the --proxy-address flag or the RL_PROXY_ADDRESS environment variable:
docker run -e RL_PROXY_ADDRESS="http://proxy.company.internal:8080" \
-e RL_LICENSE="..." \
registry.reversinglabs.com/fie/file-inspection-engine:<version>
- For air-gapped environments, use the offline threat data download process. See Air-Gapped Kubernetes Deployment for the procedure to pre-load threat data without internet connectivity.
- The RL_RETRY_COUNT environment variable controls how many times FIE retries failed segment downloads (default: 3). For flaky connections, increase this value:
docker run -e RL_RETRY_COUNT=10 ...
- If you want to run FIE without downloading malicious threat data (relying on static analysis only), set --without-malicious-threat-data=true. See the Configuration Reference for the implications of this option.
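To separate DNS failures from blocked egress before digging into proxy settings, a quick preflight sketch run from the host (or via docker exec, if the container image includes these tools); the helper name is illustrative, and the update-server hostname is the one used in the connectivity test above:

```shell
# Check name resolution for the update server, then (optionally) HTTPS
# reachability. Usage: preflight updates.reversinglabs.com
preflight() {
  host="$1"
  if ! getent hosts "$host" >/dev/null 2>&1; then
    echo "DNS lookup failed for $host"
    return 1
  fi
  echo "DNS OK for $host"
  # Uncomment to also test egress (honors proxy env vars such as https_proxy):
  # curl -fsI "https://$host" >/dev/null && echo "HTTPS OK" || echo "HTTPS blocked"
}
```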
Analysis returns 503 Service Unavailable
Symptom
POST /scan requests return:
HTTP/1.1 503 Service Unavailable
or:
HTTP/1.1 429 Too Many Requests
{"error":"The concurrency limit has been reached"}
or:
HTTP/1.1 429 Too Many Requests
{"error":"Analysis not accepted due to high processing load"}
Cause
- All Spectra Core instances are busy processing other files (high load).
- The concurrency limit configured with --concurrency-limit has been reached.
- Memory usage has exceeded the --processing-unavailable-at-memory-percent threshold.
Solution
- Review the response status codes. A 429 with "The concurrency limit has been reached" means too many simultaneous requests are active; retry after a short delay.
- Monitor the /status endpoint to see current instance availability:
curl http://localhost:8000/status | python3 -m json.tool | grep -A4 "spectra_core"
The available_regular_cores and available_large_cores fields show how many instances are currently free.
- Implement retry logic with backoff in your client for 429 responses. Do not retry at a constant rate under load — this worsens congestion.
- Increase the number of Spectra Core instances (--number-of-regular-cores) to handle higher concurrency, subject to available CPU and memory. See the Configuration Reference.
- Check logs for the high-load indicators described in Logging:
{"level":"warn","process":"core","message":"High processing load"}
Wait for "High processing load over" before resuming normal submission rates.
- For sustained high throughput, consider deploying multiple FIE instances behind a load balancer, with each instance's /readyz endpoint used as the health check.
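The backoff advice above can be sketched as a small curl-based client. Assumptions: /scan accepts a multipart upload (the form field name file is illustrative), and 429/503 are the only retryable statuses:

```shell
# Submit a file to /scan, retrying on 429/503 with exponential backoff
# (1 s, 2 s, 4 s, ...). Prints the final HTTP status on success.
submit_with_retry() {
  url="$1"; file="$2"; max="$3"
  n=0; delay=1
  while [ "$n" -lt "$max" ]; do
    status=$(curl -s -o /dev/null -w '%{http_code}' -F "file=@$file" "$url")
    case "$status" in
      429|503) sleep "$delay"; delay=$((delay * 2)) ;;
      *) echo "$status"; return 0 ;;
    esac
    n=$((n + 1))
  done
  echo "gave up after $max attempts"
  return 1
}

# Example: submit_with_retry http://localhost:8000/scan ./sample.bin 5
```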
Port binding conflict
Symptom
The container fails to start with an error in the logs:
Error: listen tcp :8000: bind: address already in use
or Docker reports:
docker: Error response from daemon: driver failed programming external connectivity:
Bind for 0.0.0.0:8000 failed: port is already allocated.
Cause
- Another process on the host is already using port 8000 (the default FIE HTTP port).
- A previous FIE container is still running and holding the port.
- The Docker daemon has reserved the port range that includes 8000.
Solution
- Identify what is using the port:
sudo lsof -i :8000
sudo ss -tlnp | grep 8000
- If an old FIE container is occupying the port, stop it:
docker ps -a | grep fie
docker stop <old-container-id>
docker rm <old-container-id>
- Map the container to a different host port without changing the internal port:
docker run -p 9001:8000 \
-e RL_LICENSE="..." \
registry.reversinglabs.com/fie/file-inspection-engine:<version>
- To change the port the FIE process listens on internally, use the --http-address flag:
docker run -p 9001:9001 \
-e RL_HTTP_ADDRESS=":9001" \
-e RL_LICENSE="..." \
registry.reversinglabs.com/fie/file-inspection-engine:<version>
See the Configuration Reference for the --http-address option.
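To script the conflict check before starting a container, the listener test can be factored so it reads ss output from stdin, which keeps it easy to verify; the function name is illustrative:

```shell
# Succeeds when no listener on the given port appears in `ss -tln` output
# (the local address:port is the fourth column).
# Usage: ss -tln | port_free_in 8000
port_free_in() {
  ! awk '{print $4}' | grep -q ":$1$"
}
```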
Out of memory (OOM) — container killed
Symptom
The container is killed abruptly during analysis. Docker events or Kubernetes events show:
OOMKilled
or the host dmesg contains:
Out of memory: Kill process <pid> (fie) score <N> or sacrifice child
Cause
- The container memory limit is too low for the number of Spectra Core instances and the file types being analyzed.
- Files with very high decompression ratios (deeply nested archives) are consuming more memory than expected.
- The temporary directory is mounted as tmpfs, which counts toward container memory usage.
Solution
- Increase the container memory limit. As a general guideline, allocate at least 1–2 GB of memory per Spectra Core instance, plus overhead for the FIE process itself.
For Docker:
docker run --memory="8g" ...
For Kubernetes, update the resource limits in the Helm values. See the Helm Values Reference.
- If tmpfs is used as the temporary directory, its contents count toward container memory. Consider switching to a host-mounted volume for temporary files to avoid this.
- Enable the memory threshold check using --processing-unavailable-at-memory-percent. This causes FIE to reject new submissions when memory usage is high, returning errors instead of being OOM-killed:
docker run -e RL_PROCESSING_UNAVAILABLE_AT_MEMORY_PERCENT=85 ...
When memory exceeds 85%, the engine logs:
{"level":"warn","message":"Memory use is above the threshold of 85%"}
and starts returning HTTP 429 to new submissions. See Memory Usage.
- Reduce the number of concurrent Spectra Core instances (--number-of-regular-cores) to lower peak memory consumption.
- Review the platform requirements for recommended memory allocations based on instance count and expected file types.
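The sizing guideline above can be turned into a quick calculation. The 2 GB-per-instance figure and the fixed overhead are assumptions taken from that guideline, not exact requirements:

```shell
# Suggested container memory limit in GB:
# ~2 GB per Spectra Core instance plus overhead for the FIE process itself.
suggest_memory_gb() {
  cores="$1"; overhead_gb="${2:-2}"
  echo $((cores * 2 + overhead_gb))
}

# Example: docker run --memory="$(suggest_memory_gb 4)g" ...
```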
Large files time out during analysis
Symptom
Analysis of files above a certain size returns:
HTTP/1.1 524
{"error": "The analysis could not be completed within the configured maximum analysis time"}
The container logs show:
{"level":"warn","message":"Analysis aborted due to a timeout"}
{"level":"warn","message":"Analysis has timed out"}
Cause
- The --timeout value is too short for the complexity of the file being analyzed.
- A very large or deeply nested archive requires more time to unpack and analyze than the timeout allows.
- After a timeout, the Spectra Core instance handling the file is restarted, temporarily reducing available capacity.
Solution
- Increase the analysis timeout using the --timeout flag. Duration values use s, m, or h suffixes:
docker run -e RL_TIMEOUT="5m" ...
Note: very short timeout values are not recommended because instance restarts after a timeout can cause cascading delays. See Timeouts.
- After a timeout, the affected instance restarts automatically. Monitor logs for "Instance is ready" to confirm recovery:
{"level":"info","message":"Instance is ready"}
- For predictably large files, configure a dedicated large-file instance pool using --number-of-large-cores and --large-file-threshold. These instances process one file at a time, and their separate timeout can be tuned independently:
docker run \
-e RL_NUMBER_OF_LARGE_CORES=2 \
-e RL_LARGE_FILE_THRESHOLD=50 \
-e RL_TIMEOUT="10m" \
...
See the Configuration Reference for all large-file pool options.
- Use the Check for Hard Timeout procedure to distinguish regular timeouts from hard timeouts caused by Spectra Core process termination.
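When deciding whether a submission will land in the large-file pool, it can help to compare the file size against the configured threshold up front. This sketch assumes --large-file-threshold is expressed in megabytes (confirm the unit in the Configuration Reference):

```shell
# Succeeds when the file is larger than the threshold (in MB), i.e. it
# would be routed to the large-file instance pool under our assumption.
exceeds_threshold() {
  file="$1"; threshold_mb="$2"
  size=$(wc -c < "$file")
  [ "$size" -gt $((threshold_mb * 1024 * 1024)) ]
}
```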
License validation error on startup
Symptom
The container exits immediately or the /readyz endpoint returns a non-200 status. Container logs contain:
FATAL: License validation failed
or:
License expired
The /status endpoint returns a valid_until date in the past.
Cause
- The RL_LICENSE environment variable is not set.
- The license file content is truncated, incorrectly formatted, or was copied with extra whitespace or line breaks.
- The license has reached its expiration date.
- For network-validated licenses, the container cannot reach the ReversingLabs license server.
Solution
- Confirm the RL_LICENSE environment variable is set. Pass the license as the entire file contents:
# Using a license file on disk
docker run -e RL_LICENSE="$(cat /path/to/rl-license.lic)" \
registry.reversinglabs.com/fie/file-inspection-engine:<version>
For Kubernetes, store the license as a Secret and reference it in the pod spec:
kubectl create secret generic fie-license \
--from-file=RL_LICENSE=/path/to/rl-license.lic
- Verify the license has not expired using the /status endpoint:
curl http://localhost:8000/status | python3 -m json.tool | grep valid_until
- If the license is expired, contact your ReversingLabs account manager or support@reversinglabs.com to obtain a renewed license.
- Note that RL_LICENSE is only available as an environment variable, not as a CLI flag. See the Configuration Reference for the RL_LICENSE parameter notes.
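Truncation and stray line endings in the license content are easy to check before starting the container. A sketch with an illustrative helper name:

```shell
# Basic sanity checks on license content: non-empty, and no CR (Windows)
# line endings, which can corrupt the value when copied between systems.
check_license() {
  lic="$1"
  if [ -z "$lic" ]; then
    echo "empty"; return 1
  fi
  if printf '%s' "$lic" | grep -q "$(printf '\r')"; then
    echo "contains CR line endings"; return 1
  fi
  echo "looks ok"
}

# Example: check_license "$(cat /path/to/rl-license.lic)"
```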
Analysis results show OK for all files
Symptom
All files submitted to /scan return "classification": "OK" regardless of file type, and no malicious verdicts are produced even for files known to be malicious.
Cause
- The --without-malicious-threat-data=true flag is set, which disables downloading of malicious threat data and prevents malicious classifications from threat data matching.
- Threat data has not yet downloaded successfully, so the engine is operating without a populated database.
- The threat database timestamp is very old (stale), indicating updates have not been applied for an extended period.
Solution
- Check the current threat data configuration and status using /status:
curl http://localhost:8000/status | python3 -m json.tool
Review the threat_data.enabled_classifications field. If it shows an empty array ([]), malicious classification from threat data is disabled. The version.threat_data field shows when the database was last updated.
- If enabled_classifications is empty, check whether --without-malicious-threat-data=true is set in your configuration. Remove this flag (or set it to false) if you want malicious threat data to be used:
docker run -e RL_WITHOUT_MALICIOUS_THREAT_DATA=false ...
- If threat data is enabled but stale, verify that cloud updates are working. Check that --cloud-updates is not set to false and that the container can reach the update server. See Threat database download fails or is slow.
- Note that with --without-malicious-threat-data=false (the default), FIE still classifies files using Spectra Core static analysis, so some malicious files will be detected even without threat data. However, threat data significantly improves detection coverage.
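Instead of eyeballing the full /status dump, the relevant field can be pulled out with the same python3 tooling used above. The field path follows the threat_data.enabled_classifications field described in this section; the wrapper name is illustrative:

```shell
# Print threat_data.enabled_classifications from a /status JSON document
# read on stdin.
# Usage: curl -s http://localhost:8000/status | enabled_classifications
enabled_classifications() {
  python3 -c '
import json, sys
doc = json.load(sys.stdin)
print(doc.get("threat_data", {}).get("enabled_classifications", []))
'
}
```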
/status endpoint shows zero available instances
Symptom
The /status endpoint shows all Spectra Core instances as unavailable:
{
"spectra_core": {
"available_regular_cores": "0% (0/4)",
"available_large_cores": "0% (0/2)"
}
}
All /scan requests are being rejected with 429 or 503.
Cause
- All instances are busy processing files submitted simultaneously.
- One or more instances have timed out and are in the process of restarting.
- All instances failed to initialize during startup (for example, due to resource exhaustion).
Solution
- Wait briefly and re-check /status. Instances that are restarting after a timeout typically recover within a few seconds. Look for "Instance is ready" log messages:
docker logs <container-name> -f | grep "Instance is ready"
- If instances are busy (not restarting), reduce the rate of incoming requests and allow in-flight analyses to complete. Check the concurrency limit (concurrency_limit in /status) and compare it to the number of active instances.
- If instances failed during startup, check logs for initialization errors:
docker logs <container-name> 2>&1 | grep -i "error\|fatal\|failed"
- Check for OOM conditions — if instances are being killed by the kernel before they can finish initializing, the available count will remain at zero.
- For a persistent 0/N state where all instances are stuck, restart the container. If this state recurs, review the platform requirements to ensure the host has sufficient CPU and memory for the configured number of instances.
- For Kubernetes deployments, check whether the pod itself is in a degraded state:
kubectl describe pod <fie-pod-name> -n <namespace>
kubectl top pod <fie-pod-name> -n <namespace>
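To watch for recovery without scanning the whole /status payload, a sketch that extracts the two availability fields shown in the symptom above (field names taken from that example response):

```shell
# Print regular and large core availability from a /status JSON document
# read on stdin.
# Usage: curl -s http://localhost:8000/status | available_cores
available_cores() {
  python3 -c '
import json, sys
sc = json.load(sys.stdin).get("spectra_core", {})
print(sc.get("available_regular_cores", "?"), sc.get("available_large_cores", "?"))
'
}
```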