On-Prem Single-Node Installation Guide
This On-Prem Single-Node Installation Guide applies to version 3.4.
Overview
The single-node installation sets up all components on a single machine. This is recommended for:
Small deployments with limited data volume
Testing environments
Proof-of-concept

The following services are installed and run on a single node:
Service | Description |
|---|---|
OpenSearch | Indexes and stores findings data received from sensors |
OpenSearch Dashboards | Self-service tool to perform advanced searches, create custom visualizations, and build tailored reports for findings |
MongoDB | Stores operational and configuration data for the platform |
Kafka | Central communication backbone for the platform. It carries job requests, execution events, raw findings, processed findings, control messages, and system events for asynchronous processing across the platform |
Ingestion Service | Always-on ingestion endpoint for receiving data streams from sensors. It publishes data to the Kafka pipeline for further processing |
Manager Service | Always-on service responsible for job lifecycle management and post-scan orchestration |
Scheduler Service | Always-on orchestration service responsible for initiating jobs |
Secrets Management Service | Always-on service responsible for secure key management, token generation and encryption across the platform. |
Fluentd | Always-on service that consumes raw findings events from Kafka, applies transformations, and indexes them into the OpenSearch Findings Datastore |
Web UI | External web interface for the platform |
Web API | API services for supporting Web UI |
CBOM Exporter | Handles CBOM export jobs |
HAProxy | Internal load balancer and reverse proxy for routing traffic for all services |
Prerequisites
Operating System Requirements
The following Linux operating systems are supported:
Officially supported: Red Hat Enterprise Linux (RHEL) 8 and 9+
Compatible (expected to work): CentOS 8 or 9+, Alma Linux 8 or 9+
System requirements:
Requirements | Minimum |
|---|---|
CPU cores | 4 |
Memory | 32GB |
Disk space | 50GB |
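To quickly check the node against these minimums, you can run standard Linux commands, for example:
# CPU cores (expect at least 4)
nproc
# Total memory (expect at least 32GB)
free -g
# Free disk space on the target filesystem (expect at least 50GB)
df -h /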
Ports
The following default ports are used internally by various components. All ports are configurable.
Service | Default Port | Ingress |
|---|---|---|
HAProxy | 8443 | Yes |
Web UI | 3000 | No |
Web API | 7443 | No |
OpenSearch Dashboards | 5443 | No |
OpenSearch | 9200/9300 | No |
Ingestion Service | 4443 | No |
FluentD | 6443 | No |
Analytics Manager Service | 3443 | No |
Secrets Management Service | 2443 | No |
Scheduler Service | 1443 | No |
Kafka | 9092/9093/9094 | No |
MongoDB | 27017 | No |
CBOM Exporter | 11443 | No |
Firewall rules must allow ingress traffic on the externally exposed port (by default, the HAProxy port 8443).
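For example, on RHEL-based systems using firewalld, you might open the default ingress port as follows (adjust the port if you changed it):
# Allow external access to the HAProxy ingress port
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload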
Certificate Requirements
The platform uses mutual TLS (mTLS) for secure authenticated communication between all services and server components.
All required certificates can be generated and self-signed, or you can provide your own certificates. For detailed information and instructions on the various certificate options, see the On-Prem Certificates Management Guide.
The generate_certs.sh script generates the following certificates:
Type | Purpose | Files |
|---|---|---|
CA Cert | Used to self-sign the certificates generated by generate_certs.sh. Location: | |
External (user facing) certificate | TLS certificate for the frontend proxy. This certificate is presented to anyone accessing the AgileSec Platform and is used for all ingress TLS traffic to the platform endpoint. Ideally, it should be issued by a publicly trusted (public) Certificate Authority to ensure browser and client trust. Location: | |
Client certificates | mTLS client authentication for internal service-to-service communication. Set config setting. Location: | |
Internal server certificates | TLS certificates for all internal services. By default, a single wildcard certificate is generated and used by all internal services. Location: | |
Admin client certificates | Admin user certificates for OpenSearch and MongoDB post-install setup. Only required during installation or post-install setup. Location: | |
SM service keystore | Required by the SM service for storing encryption keys. Location: | |
IdP certificate | SAML Identity Provider (IdP) signing certificate. Location: | |
It is recommended to run generate_certs.sh to see the list of generated files and their locations.
Domain Name Requirements
The platform's external FQDN is determined by the following configuration settings.
A single FQDN is required. The following is recommended:
agilesec.<external domain>
For example: agilesec.dilithiumbank.com
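Before installing, you can confirm that the FQDN resolves to the node's IP address using standard lookup tools, for example:
# Checks both DNS and /etc/hosts
getent hosts agilesec.dilithiumbank.com
# Checks DNS only
dig +short agilesec.dilithiumbank.com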
Installation Steps
Step 1: Prepare the Environment
Ensure that you have the installer package and that your node meets the minimum memory and CPU requirements.
Download the installation zip archive from the Keyfactor download portal.
Extract the zip archive to your preferred location. We recommend using agilesec-analytics:
unzip -d <installer_directory> <installer_package>.zip
cd <installer_directory>
Ensure the installation script is executable:
cd <installer_directory>
chmod +x install_analytics.sh
Environment configuration is required before starting the installation. A sample configuration file for a single-node installation is available at
generate_envs/single_node_config.conf. You can use the default values for all settings. However, it is recommended to review and update the following settings for single-node installations:
Setting | Purpose | Default |
|---|---|---|
 | Organization name used by the platform | |
analytics_hostname | Primary external-facing hostname (FQDN host portion) for the platform | analytics |
analytics_domain | Primary external-facing domain for the platform | kf-agilesec.com |
analytics_port | External-facing port for accessing the platform | 8443 |
 | Base Distinguished Name (DN) used to generate server certificates | |
 | Base DN used to generate internal client certificates | |
 | Enables support for v2 sensors | |
Generate the config file for a single-node installation:
./generate_envs/generate_envs.sh -t single-node
generate_envs.sh will copy the env.single_node file to <installation_directory>/.env.
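To confirm the environment file is in place and review the key external-facing settings (variable names as referenced in the Troubleshooting section), you can run:
# Verify the generated environment file exists
ls -l <installation_directory>/.env
# Review the externally visible settings
grep -E 'analytics_(hostname|domain|port)' <installation_directory>/.env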
Generate certificates
You can generate and self-sign all required certificates using <installer_directory>/certificates/generate_certs.sh. Alternatively, you can use certificates issued by your own CA. For POCs and first-time installations, it is recommended that you generate all certificates using generate_certs.sh.
Run the following command to generate and self-sign all required certificates. The .env file is required to generate certificates; by default, the script looks for the .env file under <installation directory>. This command populates the certificate files under the certificates directory:
cd certificates/
./generate_certs.sh
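To sanity-check the generated certificates, you can inspect them with openssl. The paths below are placeholders; run generate_certs.sh or list the certificates directory to see the actual filenames and locations:
# List the generated certificate files (actual layout may differ)
find . -type f \( -name '*.crt' -o -name '*.pem' \) | sort
# Inspect a certificate's subject, issuer, and validity window
openssl x509 -in <path_to_certificate> -noout -subject -issuer -dates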
[Optional] Using your own certificates
If you are using your own certificates, perform the following steps:
Copy your CA cert chain to <installation_directory>/certificates/ca.
Copy server and client certificates to <installation_directory>/certificates/<analytics_internal_domain>.
Certificate filenames must match those listed under Certificate Requirements.
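As an illustration only (the filenames here are hypothetical; yours must match the Certificate Requirements section):
# Hypothetical example; substitute your actual certificate and key filenames
cp my-ca-chain.pem <installation_directory>/certificates/ca/
cp server.crt server.key client.crt client.key <installation_directory>/certificates/<analytics_internal_domain>/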
Step 2: Install the Platform
Ensure you have either a DNS entry for <analytics_hostname>.<external domain> (recommended) pointing to your node's IP address, or an entry in your /etc/hosts for <analytics_hostname>.<external domain> pointing to your node's IP address.
Make sure the settings file <installer_directory>/.env is present.
Run sudo ./scripts/tune.sh -u <username> to update the following:
System settings:
Sysctl setting | Recommended value |
|---|---|
vm.max_map_count | 262144 |
fs.file-max | 65536 |
Security settings in /etc/security/limits.conf for file descriptors and number of threads. These are needed by OpenSearch:
Setting |
|---|
- nofile 65536 |
- nproc 65536 |
soft memlock unlimited |
hard memlock unlimited |
/etc/hosts entries:
<private ip> <node 1 hostname>.<analytics_internal_domain>
Only entries for internal nodes are added to /etc/hosts.
Install the git binary.
Alternatively, you can perform the above steps manually, as sketched below.
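A manual equivalent might look like the following. This is a sketch only; tune.sh is the supported method and may perform additional steps. The file /etc/sysctl.d/99-analytics.conf and applying the limits to the installation user are assumptions for illustration:
# Apply the recommended kernel settings at runtime and persist them
sudo sysctl -w vm.max_map_count=262144
sudo sysctl -w fs.file-max=65536
printf 'vm.max_map_count=262144\nfs.file-max=65536\n' | sudo tee /etc/sysctl.d/99-analytics.conf
# Raise limits for the installation user (assumed to be the user passed via -u)
sudo tee -a /etc/security/limits.conf <<'EOF'
<username> - nofile 65536
<username> - nproc 65536
<username> soft memlock unlimited
<username> hard memlock unlimited
EOF
# Map the node's private IP to its internal hostname
echo "<private ip> <node 1 hostname>.<analytics_internal_domain>" | sudo tee -a /etc/hosts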
Run the following command to start the installation, then follow the prompts:
cd <installer_directory>
./install_analytics.sh -u <user> -p <installation-dir> -v
Note: <installation-dir> is a new, separate directory where the installed files will reside.
To see additional install options, run ./install_analytics.sh -l. If any required parameters are omitted, the script will prompt you to enter them interactively.
Post-Installation Verification
Verify Service Health
Run ./scripts/manage.sh status to check the status of all services. If any service shows Not running, try restarting it. See Managing Services for instructions on starting and restarting services.
You should see the following 13 services in Running status:
$ ./scripts/manage.sh status
SERVICE DESCRIPTION STATUS
------------------------ ---------------------------------------- --------------------
opensearch OpenSearch Search Engine Running (PID: 1450891)
opensearch-dashboards OpenSearch Dashboards Running (PID: 1452817)
td-agent Fluentd Data Collector Running (2 instances)
haproxy HAProxy Load Balancer Running (PID: 1451285)
mongodb MongoDB Server Running (PID: 1451281)
kafka Kafka Server Running (PID: 1451295)
webui Web UI Microservice Running (PID: 1451316)
api Web API Microservice Running (PID: 1452924)
cbom CBOM Exporter Microservice Running (PID: 1453074)
sm Security Manager Microservice Running (PID: 1453083)
analytics-manager Analytics Manager Microservice Running (PID: 1451408)
ingestion Ingestion Microservice Running (PID: 1451419)
scheduler Scheduler Microservice Running (PID: 1451450)
Access the Platform UI
To log in to the Web UI, use the URL displayed at the end of the installation:
Login URL: https://<analytics_hostname>.<analytics_domain>:<analytics_port>
Username: admin@<analytics_domain>
Password:
For example, using the default settings:
Login URL: https://analytics.kf-agilesec.com:8443
Username: admin@kf-agilesec.com
Password: HelloWorld123456!
You will see a login screen like this:

After logging in, the Overview Dashboard should show 0 across all charts, as shown in the screenshot below:

Run a Smoke Test
Follow these steps to run a quick smoke test and confirm the platform is working:
Step 1: In the Web UI, go to Sensors -> Network Scan

Step 2: On the network scan page, enter an HTTPS URL to scan (for example: https://www.google.com), then click Scan to start the scan.

Step 3: While the scan is running, you will see a screen similar to the following:

Step 4: Once the scan has completed successfully, you will see a screen similar to the following:

Step 5: At this point, the scan has completed and the pipeline is waiting for the policy execution to finish. Policy execution can take up to 45 seconds. Until policies run, all findings will show a Pending status under the Findings tab:

Step 6: Once policies have run, the Score column shows a risk score instead of Pending, as shown in the screenshot below:

Step 7: Shortly after policies run successfully, a backend process performs additional analysis on the findings. Once this process completes, you will see Successful statuses as shown in the screenshot below. This confirms the platform is working as expected.

Managing Services
After installation, you can manage services using the unified service manager script at ./scripts/manage.sh
The manage.sh script provides a centralized way to manage all platform services:
cd <installation-dir>/scripts
./manage.sh <action> [options] [services...]
Actions
Action | Description |
|---|---|
start | Start services |
stop | Stop services |
restart | Stop and then start services |
reload | Reload service configuration where supported |
status | Check status of services |
list | List available services |
help | Display help message |
Options
Option | Description |
|---|---|
-d, --debug | Enable debug mode (show service output in console) |
-h, --help | Display help message and exit |
Available Services
Service | Description |
|---|---|
haproxy | HAProxy Load Balancer |
opensearch | OpenSearch Search Engine |
opensearch-dashboards | OpenSearch Dashboards |
mongodb | MongoDB Server |
kafka | Kafka Server |
td-agent | Fluentd Data Collector |
webui | Web UI Microservice |
api | Web API Microservice |
sm | Security Manager Microservice |
analytics-manager | Analytics Manager Microservice |
ingestion | Ingestion Microservice |
scheduler | Scheduler Microservice |
cbom | CBOM Exporter Microservice |
If no specific services are specified, the action will be applied to all installed services.
Usage Examples
To start all services:
./manage.sh start
To start only OpenSearch with debug output:
./manage.sh start -d opensearch
To start multiple specific services:
./manage.sh start opensearch opensearch-dashboards
The script automatically resolves dependencies, starting OpenSearch first (as it's a dependency for other services) before starting any dependent services.
To stop all services:
# Stop all installed services
./manage.sh stop
To stop only specific services:
./manage.sh stop haproxy td-agent
The script stops services in reverse dependency order to ensure a clean shutdown.
To restart all installed services:
# Restart all installed services
./manage.sh restart
To restart only OpenSearch:
./manage.sh restart opensearch
To reload configuration:
./manage.sh reload haproxy
To list the status of all services:
./manage.sh status
To check status of specific services:
./manage.sh status opensearch td-agent
To list all available services:
./manage.sh list
If HAProxy is configured to use a privileged port (below 1024), you need root privileges to start or stop it. The script will display the appropriate command to run with sudo.
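For example, if HAProxy is bound to port 443, starting it would require elevated privileges (illustrative only; the script prints the exact command to use):
# Start HAProxy with root privileges when it binds to a privileged port
sudo ./manage.sh start haproxy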
Post Installation Configuration
SAML Setup
For detailed instructions on various SSO integration options, see Authentication and Access Control.
Only users with the Platform Admin privilege role for the organization can edit the SAML 2.0 configuration. To access the SAML setup page, do the following:
Step 1: Go to Settings -> Authentication Options

Step 2: Turn on SAML 2.0 Single Sign-On
Step 3: Open Settings for SAML 2.0 Configuration, then configure the SAML settings for your environment. Details for each configurable option are provided in the next section.

Configuration Options
Service Provider Information
Field | Description | Azure AD SSO configuration equivalent field |
|---|---|---|
Organization SAML ID | Organization's unique SAML internal identifier. | N/A |
Callback URL | The URL where the IdP should redirect/post to after authentication. | Reply URL (Assertion Consumer Service URL) |
SP Entity ID | Service Provider Entity ID | Identifier (Entity ID) |
Authorization Settings
Field | Description | Azure AD SSO configuration equivalent field |
|---|---|---|
Assertions Signed | Indicates if SAML assertions should be signed. | SAML Signing Certificate, Signing Option includes "Sign SAML assertion" |
Authentication Response Signed | Indicates if the SAML authentication response should be signed. | SAML Signing Certificate, Signing Option includes "Sign SAML response" |
Only the SHA256 signing algorithm is currently supported.
Identity Provider Configuration
Field | Description | Azure AD SSO configuration equivalent field |
|---|---|---|
IDP Metadata | Raw XML metadata for the Identity Provider. | SAML Certificates / Federation Metadata XML |
Metadata URL | URL to fetch the IdP metadata. | SAML Certificates / App Federation Metadata Url |
You must configure one of these two options.
Custom Attribute Configuration
Field | Description | Azure AD SSO configuration equivalent field |
|---|---|---|
Claim Name | Case-sensitive claim name provided by the SAML IdP used for role mapping. | Attributes & Claims. |
Role Mapping Configuration
Field | Description | Note |
|---|---|---|
IDP role | Role assigned by the IdP | Value of the mapped group claim |
ISG role | AgileSec role mapped to the IdP role | The AgileSec role you want to assign to that group |
Troubleshooting
If you encounter issues during installation or operation:
Installation Issues
Check the console output for specific error messages.
Verify that all prerequisites are met.
Ensure all certificate files are correctly placed and have the proper permissions and filenames.
Check the .env file to make sure the following are correct: private_ip, analytics_hostname, analytics_domain, analytics_port, cluster_frontend_node_ips, cluster_backend_node_ips.
Make sure the analytics FQDN <analytics_hostname>.<analytics_domain> is reachable on <analytics_port>, as shown in the check below.
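A quick reachability check using standard tools (substitute your actual host and port):
# Confirm the TLS endpoint is reachable and presents a certificate
curl -vk https://<analytics_hostname>.<analytics_domain>:<analytics_port> -o /dev/null
# Alternatively, inspect the TLS handshake directly
openssl s_client -connect <analytics_hostname>.<analytics_domain>:<analytics_port> </dev/null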
Service Issues
Check service logs in the <installation_path>/logs directory.
Verify port availability using the netstat or ss commands.
Ensure proper certificate permissions and ownership.
Check disk space with df -h.
Verify memory availability with free -h.
Example commands are shown below.
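For example (standard Linux commands; the ports and log filename are illustrative):
# Check whether any of the default ports are already in use
ss -tlnp | grep -E ':(8443|9200|27017)\b'
# Check free disk space and available memory
df -h
free -h
# Tail a service log for recent errors (log filename is illustrative)
tail -n 100 <installation_path>/logs/<service>.log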
Common Errors
Certificate Issues: Ensure all certificate DNs and CNs match the specifications in the Certificate Requirements section.
Port Conflicts: Verify no other services are using the required ports.
Permission Denied: Check file and directory permissions.
Memory Errors: Verify you have sufficient memory available (see the minimum in System Requirements).