
Appliance Update Paths

This page describes the recommended update paths for the Next Generation Hardware Appliances, both in cluster deployments and as standalone appliances, as well as the known limitations and failure scenarios associated with firmware updates (including HSM firmware).

Before considering a firmware update, it is important to understand and evaluate the scenarios listed below and their potential implications, in particular:

  • which failure patterns can occur if the recommended path is not followed (for example, broken HSM admin keys or blocked firmware updates),

  • what impact these issues have (including potential data loss),

  • which workarounds and recovery options are available (for example, factory reset, restore from backup).

The “known issues” described here ensure transparency, making it easy to identify:

  • which risks are realistic

  • under what conditions they occur

  • and what measures are necessary to achieve a secure and supported target system.

Update Matrix for the Next Generation Hardware Appliance
4.0.0 to 5.2.0

The tables below show the possible update scenarios:

  • “-” (green): no known issues.

  • #1, #2, #3 (yellow): the corresponding known issue described below applies.

  • Empty cell: not an update path (same version or downgrade).

Update u.trust cluster matrix

From \ To | 4.0.0 | 5.0.0 | 5.1.0 | 5.1.1 | 5.1.2 | 5.2.0
4.0.0     |       |   -   |  #1   |  #1   |  #2   |  #1
5.0.0     |       |       |  #1   |  #1   |  #2   |  #1
5.1.0     |       |       |       |  #1   |   -   |   -
5.1.1     |       |       |       |       |   -   |   -
5.1.2     |       |       |       |       |       |   -
5.2.0     |       |       |       |       |       |

Update Luna cluster matrix

From \ To | 5.0.0 | 5.1.0 | 5.1.1 | 5.1.2 | 5.2.0
5.0.0     |       |   -   |   -   |  #3   |   -
5.1.0     |       |       |   -   |  #3   |   -
5.1.1     |       |       |       |  #3   |   -
5.1.2     |       |       |       |       |   -
5.2.0     |       |       |       |       |

Update u.trust standalone matrix

From \ To | 4.0.0 | 5.0.0 | 5.1.0 | 5.1.1 | 5.1.2 | 5.2.0
4.0.0     |       |   -   |   -   |   -   |  #2   |   -
5.0.0     |       |       |   -   |   -   |  #2   |   -
5.1.0     |       |       |       |   -   |   -   |   -
5.1.1     |       |       |       |       |   -   |   -
5.1.2     |       |       |       |       |       |   -
5.2.0     |       |       |       |       |       |

Update Luna standalone matrix

From \ To | 5.0.0 | 5.1.0 | 5.1.1 | 5.1.2 | 5.2.0
5.0.0     |       |   -   |   -   |   -   |   -
5.1.0     |       |       |   -   |   -   |   -
5.1.1     |       |       |       |   -   |   -
5.1.2     |       |       |       |       |   -
5.2.0     |       |       |       |       |
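For scripted update planning, the matrices can also be encoded as data and queried per update step. The following sketch is illustrative only; neither the check_update_step helper nor the data layout is part of the appliance tooling. It encodes the Luna cluster matrix as an example, and the other matrices can be added in the same way.

```python
# Illustrative only: a hypothetical helper for reading the update matrices
# programmatically. It is not part of the appliance software or tooling.

# Known-issue markers per (from_version, to_version), taken from the
# "Update Luna cluster matrix" above; the other matrices can be encoded
# in the same way.
LUNA_CLUSTER_MATRIX = {
    ("5.0.0", "5.1.2"): "#3",
    ("5.1.0", "5.1.2"): "#3",
    ("5.1.1", "5.1.2"): "#3",
}


def check_update_step(matrix, from_version, to_version):
    """Return a short note on known issues for a single update step."""
    issue = matrix.get((from_version, to_version))
    if issue is not None:
        return f"{from_version} -> {to_version}: see known issue {issue} below"
    return f"{from_version} -> {to_version}: no known issues listed"


if __name__ == "__main__":
    print(check_update_step(LUNA_CLUSTER_MATRIX, "5.1.0", "5.1.2"))
    print(check_update_step(LUNA_CLUSTER_MATRIX, "5.1.2", "5.2.0"))
```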

Known issues in u.trust / Luna update paths

This section describes known issues that can occur when updating the Next Generation Hardware Appliance with u.trust or Luna HSMs, and how they are addressed in releases up to and including 5.2.0.

  • Issue #1 and Issue #4 can affect all versions below 5.1.2 and can only be resolved with a factory reset and backup restore on at least 5.1.2.

  • Issue #2 and Issue #3 are fixed in 5.2.0.

  • When following an update path to 5.2.0 or later, the issues below can still affect you during intermediate steps if the described preconditions are met. In particular, avoid cross-node KSP restore on 5.1.0/5.1.1 and always ensure you have a valid backup before upgrading.

Issue #1 – u.trust: HSM firmware update fails on secondary nodes and restored appliances

Affected variants
u.trust

Affected scenarios

Appliances with a u.trust HSM that are or were part of a cluster and have a node ID greater than 1, where all of the following conditions are met:

  • Appliance firmware is 5.1.0 or 5.1.1.

  • HSM firmware is 4.90.0.0.

  • You attempt to run the u.trust HSM firmware update:

    • on a cluster node other than node1, or

    • on an appliance that has been restored from a backup.

Tested scenarios include:

  • Restoring a backup and then attempting the HSM firmware update.

  • A two-node cluster where both nodes use HSM FW 4.90.0.0, and the HSM firmware update is started on the second node.
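The preconditions above can be summarized as a simple check. The helper below is an illustrative sketch of those conditions only; the function and its parameters are not part of the appliance software.

```python
# Hypothetical illustration of the Issue #1 preconditions listed above;
# the function name and parameters are not appliance APIs.
def is_affected_by_issue_1(appliance_fw, hsm_fw, node_id, restored_from_backup):
    """True if a u.trust HSM firmware update may run into Issue #1."""
    return (
        appliance_fw in ("5.1.0", "5.1.1")
        and hsm_fw == "4.90.0.0"
        and (node_id > 1 or restored_from_backup)
    )


# Example: second cluster node on 5.1.1 with HSM firmware 4.90.0.0
print(is_affected_by_issue_1("5.1.1", "4.90.0.0", node_id=2,
                             restored_from_backup=False))  # True
```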

Problem
The HSM firmware update fails on the affected nodes. Logs show errors during creation of the firmware-admin device user, for example:

CHAI_AUTH_FAIL: A device user could not be authenticated.

As long as the HSM firmware cannot be updated, further appliance firmware updates remain blocked.

The underlying issue is that the existing HSM admin user is not in a suitable state on secondary/restored nodes when the firmware update is executed.

Workaround

From an appliance/customer perspective, the effective workaround is:

  1. Ensure a recent backup of the appliance/cluster exists.

  2. Factory reset the affected node.

  3. Update the appliance(s) to at least 5.1.2.

  4. Initialize the u.trust HSM so the firmware is updated to 6.0.0.0.

  5. Reconfigure the node:

    • standalone: restore configuration from backup,

    • cluster: rejoin the node to the cluster and restore configuration as required.

Fixed in
5.1.2 and later.

Issue #2 – u.trust: HSM firmware update fails after appliance FW update from 5.0.0 to 5.1.2 (affects both standalone and cluster setups)

Affected variants
u.trust

Affected scenarios

  • Appliance with u.trust HSM.

  • Appliance firmware is updated directly from 5.0.0 to 5.1.2.

  • After the appliance firmware update, Webconf requires an HSM firmware update from 4.90.0.0 to 6.0.0.0.

Problem

When starting the HSM firmware update:

  • The update fails, leaving the HSM firmware at 4.90.0.0.

  • The appliance remains in a state where the HSM update step is required but cannot be completed, so the overall appliance update procedure is effectively stuck.

Workaround

A backup taken before the 5.1.2 update is required:

  1. Ensure a valid backup exists from before the update to 5.1.2.

  2. Factory reset the appliance.

  3. Restore the backup taken before the appliance update.

  4. After reboot, the appliance returns to a healthy state with HSM firmware 6.0.0.0.

  5. If you intend to enable FIPS mode, take a fresh backup and repeat the reset + restore procedure with “Enable FIPS” selected during restore.

Fixed in
5.2.0.

When planning an update path to 5.2.0 or later, use a path that includes the fixed code (for example, update directly to 5.2.0 where supported) to avoid this scenario. If you must update via 5.1.2, strictly follow the backup/restore guidance above.

Issue #3 – Missing FIPS flag migration blocks adding cluster nodes

Affected variants
u.trust, Luna

Affected scenarios

Appliances (u.trust or Luna) that:

  • Were initially installed on a firmware version lower than 5.1.2 (without FIPS support in configuration).

  • Were then updated to 5.1.2 and have HSM firmware 6.0.0.0 (often via the Issue #2 workaround: backup, reset, restore).

  • Are now used as the base for a cluster where a new node is added.

When adding a new cluster node, the operation fails with “Distribution of cluster configuration failed”.

Problem

  • Adding a cluster node fails with a server-side exception in Webconf.

  • On the joining node, logs show a stack trace ending in java.lang.NullPointerException in ClusterService.setClusterConfig(...).

  • The migration status flag fips is missing, which causes the clustering procedure to fail.
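As a minimal sketch of what the 5.2.0 migration conceptually does (see Workaround option 1 below), a missing fips flag can be filled with a safe default before the cluster configuration is distributed. The configuration keys and defaults shown here are assumptions for illustration, not the real configuration schema.

```python
# Hypothetical sketch of the 5.2.0-style migration described above: if the
# persisted configuration predates FIPS support, add the missing "fips"
# flag with a safe default before the cluster configuration is distributed.
# Keys and defaults are illustrative, not the real configuration schema.
def migrate_config(config):
    migrated = dict(config)
    migrated.setdefault("fips", False)  # flag is absent on pre-5.1.2 installs
    return migrated


old_config = {"hostname": "appliance-1"}  # installed before 5.1.2
print(migrate_config(old_config))         # {'hostname': 'appliance-1', 'fips': False}
```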

Workaround (on 5.1.2)

Two options:

  1. Update to a fixed release

    • Update the appliance to 5.2.0 or later, where a migration ensures the FIPS flag is set correctly.

    • Retry adding the cluster node.

  2. Manual correction (requires SSH and support involvement)

    • Have support access the appliance via SSH.

    • Manually set the missing FIPS configuration flag in the persisted configuration.

    • Restart the appliance.

    • Retry adding the cluster node.

Fixed in
5.2.0.

For update paths targeting 5.2.0 or later, this issue is automatically handled by the migration logic once 5.2.0 is installed. The risk exists only on 5.1.2 systems that originated from pre-5.1.2 installations and attempt to add cluster nodes before upgrading to 5.2.0.

Issue #4 – KSP restore between cluster nodes breaks KSP/backup operations

Affected variants
u.trust

Affected scenarios
An appliance on 5.1.0 or 5.1.1 restores a Key Storage Package (KSP) that was created on 5.1.0 or 5.1.1 on another cluster node. The HSM user DB backup that is part of such a KSP includes a user that is unique per appliance; restoring that user breaks the creation of HSM key backups.

Problem
The KSP restore between nodes fails. In 5.1.0 and 5.1.1 the slot user backup inside the KSP incorrectly included the user that is used to create key backups. This user is unique per appliance and restoring this user from a different appliance broke the key backup functionality. Observable effects include:

  • KSP restore operations failing with an authentication error (for example: “Authentication failed. You probably inserted the wrong card or provided the wrong PIN”).

  • Subsequent KSP operations (creating or restoring KSPs) and some cluster operations (such as adding nodes) will fail because the affected internal users are no longer usable.

This issue is not related to FIPS mode.

Workaround

  1. Ensure you have a backup taken before the KSP restore.

  2. Factory reset the affected appliance.

  3. Restore the backup.

  4. Do not reuse the problematic KSP file for further restores unless you have updated to 5.1.2 or later. 5.1.2 introduced a mechanism that automatically ignores the internal system user when a KSP containing the unwanted user is restored, so a KSP created on 5.1.0 or 5.1.1 can be restored on an appliance running 5.1.2 or later without hitting the issue.
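As an illustration of the 5.1.2 mechanism described in step 4 (ignoring the per-appliance internal user when a KSP user database is restored), a minimal sketch could look as follows. The record format and field names are hypothetical and do not reflect the actual KSP implementation.

```python
# Hypothetical sketch of the 5.1.2 behavior described in step 4: when
# restoring the slot user backup contained in a KSP, skip internal users
# that are unique per appliance. The record format and field names are
# illustrative and do not reflect the actual KSP implementation.
def filter_ksp_users(users):
    """Drop per-appliance internal users before restoring a KSP user DB."""
    return [u for u in users if not u.get("appliance_internal", False)]


users_from_ksp = [
    {"name": "application_user", "appliance_internal": False},
    {"name": "key_backup_user", "appliance_internal": True},  # unique per appliance
]
print([u["name"] for u in filter_ksp_users(users_from_ksp)])  # ['application_user']
```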

Fixed in
5.1.2 and later versions.

For update paths to 5.2.0, avoid cross-node KSP restore on 5.1.0/5.1.1 entirely.
