On-Prem High Availability (HA) Guide
This guide describes how to configure the AgileSec Platform for high availability in on-premises environments. It covers the following scenarios:
Setting up a highly available AgileSec platform cluster with no single point of failure for a new installation.
Adding capacity to an existing cluster to eliminate single points of failure and achieve high availability.
In this context, high availability means that the cluster remains operational if any single node becomes unavailable.
Prerequisites
Ensure you have the installer package and that your nodes meet the minimum memory, CPU, and disk requirements.
Requirement | Backend (Minimum) | Backend (Production) | Scan Node | Frontend | Additional Frontends |
CPU cores | 4 | 8 | 2 | 4 | 2 |
Memory | 32 GB | 64 GB (small scan volume) | 8 GB | 16 GB | 16 GB |
Disk space | 50 GB | 100 GB (small scan volume) | 50 GB | 50 GB | 50 GB |
For convenient file transfers between nodes, it is recommended to set up SSH key-based passwordless authentication from backend-1 to all other nodes; manual file transfer methods can also be used. For details, refer to the Red Hat documentation: Using secure communications between two systems with OpenSSH. Certificates and configuration files are generated on backend-1 and copied to the other nodes, so use consistent file names and directory structures when copying files between machines.
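If you opt for key-based authentication, a minimal setup sketch from backend-1 could look like the following (assuming a login user <user> on every node; the IP placeholders are illustrative, not values from this guide):
ssh-keygen -t ed25519            # on backend-1; skip if a key pair already exists
ssh-copy-id <user>@<be-2-ip>     # repeat for each remaining node (fe-1, fe-2, any scan nodes)
ssh-copy-id <user>@<fe-1-ip>
ssh-copy-id <user>@<fe-2-ip>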
Familiarity with the basic 3-node (two backends, one frontend) On-Prem installation guide is recommended. The section Add a New Frontend to an Existing Cluster assumes you have a running 3-node cluster.
Four-Node Installation for New Cluster (2 Backends, 2 Frontends)
Note: It is recommended to build the cluster nodes in the following order.
Step 1: Setup Cluster Configuration on Backend-1 (be-1)
Configure the cluster topology by editing the generate_envs/multi_node_config.conf file with your environment-specific details. Add the private IPs of all your nodes (be-1, be-2, fe-1, fe-2).
cd <installer_directory>
vi generate_envs/multi_node_config.conf
Add the private IPs of your two backend nodes and two frontend nodes. Also, uncomment the following frontend2 entries in generate_envs/multi_node_config.conf:
frontend2_private_ip
frontend2_node_hostname
frontend2_node_id
frontend2_node_profile
After editing, your entries should look similar to the following:
grep -e '^frontend' -e '^backend' generate_envs/multi_node_config.conf
backend1_private_ip="X.X.X.X"
backend2_private_ip="X.X.X.X"
frontend1_private_ip="X.X.X.X"
frontend2_private_ip="X.X.X.X"
backend1_node_hostname="backend-1"
backend1_node_id=1
backend1_node_profile="PRIMARY_FULL_BACKEND"
backend2_node_hostname="backend-2"
backend2_node_id=2
backend2_node_profile="FULL_BACKEND"
frontend1_node_hostname="frontend-1"
frontend1_node_id=3
frontend1_node_profile="PRIMARY_FRONTEND"
frontend2_node_hostname="frontend-2"
frontend2_node_id=6
frontend2_node_profile="ADDITIONAL_FRONTEND"
Step 2: Generate Configuration Files for All Nodes
The following command will generate four configuration files, one for each node:
./generate_envs/generate_envs.sh -t multi-node
The generated files will be:
./generate_envs/generated_envs/env.backend-1
./generate_envs/generated_envs/env.backend-2
./generate_envs/generated_envs/env.frontend-1
./generate_envs/generated_envs/env.frontend-2
The generate_envs.sh script will also copy the env.backend-1 file to <installer_directory>/.env.
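As an optional sanity check (not required by the installer), you can list the generated files and confirm that the copied .env matches env.backend-1:
ls -1 generate_envs/generated_envs/
diff -q generate_envs/generated_envs/env.backend-1 .env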
Step 3: Setup Certificates
Run the <installer_directory>/certificates/generate_certs.sh script to generate self-signed certificates for all nodes. Alternatively, you can use certificates issued by your own CA. For POCs and first-time installations, it is recommended to generate all certificates using generate_certs.sh.
cd certificates/
./generate_certs.sh
The ./generate_certs.sh script will also create a file called kf-agilesec.internal-certs.tgz, which needs to be copied to all other nodes. This file conveniently contains the env.backend-1, env.backend-2, env.frontend-1, and env.frontend-2 files in addition to the certificates.
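If you want to see what the bundle contains before distributing it, you can list its contents without extracting (optional):
tar tzf kf-agilesec.internal-certs.tgz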
Step 4: Copy Files to All Other Nodes
Copy files to all other nodes from backend-1. For each node, copy the kf-agilesec.internal-certs.tgz file to <installer_directory>/certificates/.
scp kf-agilesec.internal-certs.tgz $BE-2_IP:<installer_directory>/certificates/
scp kf-agilesec.internal-certs.tgz $FE-1_IP:<installer_directory>/certificates/
scp kf-agilesec.internal-certs.tgz $FE-2_IP:<installer_directory>/certificates/
Step 5: External FQDN Setup (Optional)
DNS-based load balancing for frontends can provide an external stable FQDN for both installation and post-install operations. Below are the steps to configure your DNS using AWS Route53 as your DNS provider. Other DNS providers that support DNS-based load balancing can be configured similarly.
Assuming you own <external domain>, create a public hosted zone for <external domain> in AWS Route53 and follow the steps below to configure the external FQDN in that zone.
For each frontend IP address in your public hosted zone for <external domain>, create a DNS entry for <analytics_hostname>.<external domain> with "Record type = A" and "Routing policy = Multivalue Answer", using your frontend IP address as the value. You can set "Record ID" to any value, e.g., "fe-1" for frontend-1, "fe-2" for frontend-2.
Validate that your DNS configuration has propagated by using the following command. The number of entries returned should equal the number of your frontend nodes:
dig +short <analytics_hostname>.<external domain> A
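If you script your DNS changes with the AWS CLI instead of the console, one such record could be created as follows (an illustrative sketch; <zone-id> and <frontend-1-ip> are placeholders, and you would repeat this once per frontend with a distinct SetIdentifier):
aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "<analytics_hostname>.<external domain>",
      "Type": "A",
      "SetIdentifier": "fe-1",
      "MultiValueAnswer": true,
      "TTL": 60,
      "ResourceRecords": [{ "Value": "<frontend-1-ip>" }]
    }
  }]
}'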
Step 6: Install on Backend-1 (BE-1)
Ensure you have either a DNS entry for <analytics_hostname>.<external domain> (recommended for HA) pointing to the frontend IPs, or an entry in your /etc/hosts for <analytics_hostname>.<external domain> pointing to either the frontend-1 or frontend-2 IP address.
Since all environment files were generated on BE-1, there should already be a .env file at <installer_directory>/.env on BE-1. Run the following to install the software on BE-1:
cd <installer_directory>
sudo ./scripts/tune.sh -u <user>
./install_analytics.sh -u <user> -p <installation-dir>
Note: <installation-dir> is the directory where your installed files will reside.
Step 7: Install on Backend-2 (BE-2)
Ensure you have either a DNS entry for <analytics_hostname>.<external domain> (recommended for HA) pointing to the frontend IPs, or an entry in your /etc/hosts for <analytics_hostname>.<external domain> pointing to either the frontend-1 or frontend-2 IP address. If you have an upstream load balancer, it is better to use the IP of the load balancer.
On BE-2, follow these steps to unarchive the files, copy the .env file to the <installer_directory> root, and install the software:
cd <installer_directory>/certificates/
tar zxvf kf-agilesec.internal-certs.tgz
cp env.backend-2 ../.env
cd ../
sudo ./scripts/tune.sh -u <user>
./install_analytics.sh -u <user> -p <installation-dir>
Step 8: Install on Frontend-1 (FE-1)
If your DNS provider does not resolve the FQDN for FE-1, add the following entry to your /etc/hosts: $FE-1_IP agilesec.kf-agilesec.com
cd <installer_directory>/certificates/
tar zxvf kf-agilesec.internal-certs.tgz
cp env.frontend-1 ../.env
cd ../
sudo ./scripts/tune.sh -u <user>
./install_analytics.sh -u <user> -p <installation-dir> -v
At the end of the installation, the installer provides the following access details:
Access information for Web UI
Login URL
Admin username and password (as provided during installation)
Ingestion service endpoint for v3 unified sensor
Ingestion endpoint for v2 sensors
Step 9: Install on Frontend-2 (FE-2)
If your DNS provider does not resolve the FQDN for FE-2, add the following entry to your /etc/hosts: $FE-2_IP agilesec.kf-agilesec.com
cd <installer_directory>/certificates/
tar zxvf kf-agilesec.internal-certs.tgz
cp env.frontend-2 ../.env
cd ../
sudo ./scripts/tune.sh -u <user>
./install_analytics.sh -u <user> -p <installation-dir>
Add a New Frontend to an Existing Cluster
Assumption: This section assumes you already have an existing working cluster with at least one frontend node. For this specific example, we assume you have a 4-node (BE-1, BE-2, FE-1, FE-2) working cluster that you set up in the previous section. This example can also be applied if you have a 3-node working cluster with BE-1, BE-2, and FE-1 and are adding FE-2.
To add a new frontend node called frontend-3 (FE-3), follow these steps:
Step 1: On BE-1, Add a New Frontend-3 Configuration Block
A. Ensure that <installer_directory>/generate_envs/multi_node_config.conf has the following configurations added for FE-3. Add your private IP to the frontend3_private_ip field:
frontend3_node_hostname="frontend-3"
frontend3_private_ip="X.X.X.X"
frontend3_node_id=7
frontend3_node_profile="ADDITIONAL_FRONTEND"
B. After completing the above step, your frontend configurations should look like this:
$ grep -e '^frontend' -e '^backend' generate_envs/multi_node_config.conf
frontend1_node_hostname="frontend-1"
frontend1_private_ip="X.X.X.X"
frontend1_node_id=3
frontend1_node_profile="PRIMARY_FRONTEND"
frontend2_node_hostname="frontend-2"
frontend2_private_ip="X.X.X.X"
frontend2_node_id=6
frontend2_node_profile="ADDITIONAL_FRONTEND"
frontend3_node_hostname="frontend-3"
frontend3_private_ip="X.X.X.X"
frontend3_node_id=7
frontend3_node_profile="ADDITIONAL_FRONTEND"
Note: The difference between PRIMARY_FRONTEND and ADDITIONAL_FRONTEND is that PRIMARY_FRONTEND runs an additional MongoDB arbiter service.
Step 2: Generate Configuration Files for Each Node
A. Run ./generate_envs/generate_envs.sh -t multi-node to regenerate the following files:
<installer_directory>/generate_envs/generated_envs/env.backend-1
<installer_directory>/generate_envs/generated_envs/env.backend-2
<installer_directory>/generate_envs/generated_envs/env.frontend-1
<installer_directory>/generate_envs/generated_envs/env.frontend-2
<installer_directory>/generate_envs/generated_envs/env.frontend-3
Step 3: From BE-1, Copy All Frontend Configuration Files
Copy all frontend configuration files to their respective frontend machines:
scp <installer_directory>/generate_envs/generated_envs/env.frontend-1 \
$FE-1_IP:<installer_directory>/.env
scp <installer_directory>/generate_envs/generated_envs/env.frontend-2 \
$FE-2_IP:<installer_directory>/.env
scp <installer_directory>/generate_envs/generated_envs/env.frontend-3 \
$FE-3_IP:<installer_directory>/.env
Also, copy the certificates bundle kf-agilesec.internal-certs.tgz to FE-3:
scp <installer_directory>/certificates/kf-agilesec.internal-certs.tgz \
$FE-3_IP:<installer_directory>/certificates
Step 4: Install FE-3
If your DNS provider does not resolve the FQDN for FE-3, add the following entry to your /etc/hosts: $FE-3_IP agilesec.kf-agilesec.com.
Run the following to install FE-3:
cd <installer_directory>/certificates/
tar zxvf kf-agilesec.internal-certs.tgz
cd ../
sudo ./scripts/tune.sh -u <user>
./install_analytics.sh -u <user> -p <installation-dir>
Step 5: Patch the Existing Frontends (FE-1 and FE-2)
On FE-1:
cd <installer_directory>
./install_analytics.sh -u <user> -p <installation-dir> patch new-frontend -v
sudo ./scripts/tune.sh -u <user>
On FE-2:
cd <installer_directory>
./install_analytics.sh -u <user> -p <installation-dir> patch new-frontend -v
sudo ./scripts/tune.sh -u <user>
Add an Upstream Load Balancer to an Existing 4-Node Cluster
External upstream load balancing for frontend traffic has been tested using AWS NLB (Network Load Balancer) to forward TCP/TLS on port 8443 to HAProxy. The following steps demonstrate the configuration using AWS as an example. The same concepts apply to other cloud providers or on-premises load balancers with similar capabilities.
Note: This example uses port 8443, but frontends can be configured to use different ports (e.g., 443). Adjust the port numbers in the following steps according to your configuration.
Prerequisites on the Frontend Nodes
Confirm HAProxy is listening on 0.0.0.0:8443 (or the instance's private IP on 8443).
Ensure each node is reachable on port 8443 from within the network.
Decide how you want health checks to work:
Easiest: TCP health check on 8443 (checks if port is open)
Better: HTTP/HTTPS health check to an HAProxy endpoint (checks that HAProxy is actually serving). HAProxy frontends can respond to the /health-check endpoint; see the verification commands after this list.
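A quick way to verify these prerequisites (a sketch assuming port 8443 and the /health-check endpoint; adjust to your port). Run the first command on the frontend node and the second from another host in the network:
ss -tlnp | grep ':8443'                            # confirm HAProxy is listening on 8443
curl -vk https://<frontend-ip>:8443/health-check   # confirm reachability and that HAProxy answers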
Step 1: Create an AWS Target Group for the Frontend Nodes
In EC2 Console → Target Groups → Create target group:
Target type
Select Instances (typical for EC2) or IP (if you want to register IPs directly)
Protocol / Port
Protocol: TCP
Port: (e.g., 8443 or 443)
Health checks
Protocol: TCP (simple) or HTTP/HTTPS (preferred, since we have the /health-check URL)
Port: Traffic port (same as your frontend port)
Create the target group, then Register targets:
Add your frontend instances (or IPs)
After registering, check the Targets tab → Health status to ensure they become healthy
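If you automate your AWS setup, roughly equivalent AWS CLI calls are sketched below (the target group name, VPC ID, target group ARN, and instance IDs are placeholders, not values from this guide):
aws elbv2 create-target-group --name agilesec-frontends \
  --protocol TCP --port 8443 --vpc-id <vpc-id> --target-type instance \
  --health-check-protocol HTTPS --health-check-path /health-check
aws elbv2 register-targets --target-group-arn <target-group-arn> \
  --targets Id=<fe-1-instance-id> Id=<fe-2-instance-id>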
Step 2: Create the Network Load Balancer (NLB)
In EC2 Console → Load Balancers → Create load balancer → Network Load Balancer:
Scheme: Internet-facing (public) or Internal (private-only) based on your organizational policy and needs
IP address type: IPv4
Network mapping:
Select the VPC
Select the subnet in which your frontend VMs reside
Optional: Create/choose an NLB security group that allows inbound TCP traffic on your frontend port from the sources you want (0.0.0.0/0 for public, or your corporate CIDRs, etc.)
Step 3: Add the Listener and Attach the Target Group
While creating the NLB (or afterward):
Listener
Protocol: TCP
Port: (e.g., 8443 or 443)
Default action
Forward to the target group you created in the previous step
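The CLI equivalent of Steps 2 and 3 might look roughly like the following (again a sketch; the load balancer name, subnet ID, and ARNs are placeholders):
aws elbv2 create-load-balancer --name agilesec-nlb --type network \
  --scheme internet-facing --subnets <subnet-id>
aws elbv2 create-listener --load-balancer-arn <nlb-arn> \
  --protocol TCP --port 8443 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>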
Step 4: Lock Down the HAProxy Instances' Security Group (Important)
On the HAProxy instances' security group, ensure inbound rules allow:
TCP traffic on your frontend port (e.g., 8443 or 443) from the NLB security group (recommended), so clients cannot hit HAProxy directly
The health check port (same port if using traffic-port health checks)
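For example, allowing inbound traffic only from the NLB security group could be scripted like this (a sketch; both security group IDs are placeholders):
aws ec2 authorize-security-group-ingress --group-id <haproxy-sg-id> \
  --protocol tcp --port 8443 --source-group <nlb-sg-id>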
Step 5: Validate NLB-Based Flow
Either point your external FQDN to the NLB (recommended) or update your /etc/hosts to point to the NLB IP address for local testing.
Log in to https://<analytics_hostname>.<external domain>:<your_frontend_port> and run a network scan as a smoke test. For smoke test execution details, see either the single-node or multi-node installation guide.
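Before the smoke test, a quick check that traffic actually flows through the NLB might look like this (placeholders for the NLB DNS name and your external FQDN; adjust the port to your configuration):
dig +short <nlb-dns-name>                                                     # should return the NLB addresses
curl -vk https://<analytics_hostname>.<external domain>:8443/ -o /dev/null   # TLS handshake and response through the NLB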
Add a New Scan Node to an Existing Cluster
Scan nodes are asynchronous stateless worker nodes that subscribe to Kafka topics to get scan requests, execute the scan, and publish data back to Kafka. Scan nodes only run HAProxy and Scheduler services.
If you want to decouple or distribute scan operations on separate nodes, you can provision one or more scan nodes and run the following installation steps on each scan node. Scan nodes use the env.backend-1 file for configuration.
Step 1: From Backend-1, Copy Certificates and Configuration to Scan Node
scp <installer_directory>/certificates/kf-agilesec.internal-certs.tgz \
$SN-1_IP:<installer_directory>/certificates
Step 2: Install Scan Node
cd <installer_directory>/certificates/
tar zxvf kf-agilesec.internal-certs.tgz
cp env.backend-1 ../.env
cd ../
sudo ./scripts/tune.sh -u <user> -r scan
./install_analytics.sh -u <user> -p <installation-dir> -r scan
Note: Both tune.sh and install_analytics.sh require the special flag -r scan for scan node installations.
Once you have one or more scan nodes, you have the option to permanently disable the Scheduler service on backend-1 and backend-2.
Test Your HA Setup
Test Frontends
Stop one of the frontends and run a network scan through the UI. The scan should complete successfully, confirming that the remaining frontend(s) can handle all traffic.
Test Backends
Stop one of the backends and run a network scan through the UI. The scan should complete successfully, confirming that the cluster remains operational with a single backend node failure.
Note: For a truly highly available cluster, ensure you can lose any single node (frontend or backend) without service interruption.