
Dividing etcd backup assembly

Commit 957418eaec (parent 677705007a)
Author: Laura Hinson, 2025-05-14 20:18:43 -04:00
Committed by: openshift-cherrypick-robot
31 changed files with 225 additions and 168 deletions


@@ -0,0 +1 @@
../_attributes/


@@ -0,0 +1,36 @@
//NOTE TO CONTRIBUTORS:
//
//If you update any of the content in this assembly file, be sure to also make the same changes in the assemblies in the following file: backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc.
:_mod-docs-content-type: ASSEMBLY
[id="etcd-backup"]
include::_attributes/common-attributes.adoc[]
= Backing up and restoring etcd data
:context: etcd-backup
toc::[]
As the key-value store for {product-title}, etcd persists the state of all resource objects.
Back up the etcd data for your cluster regularly and store it in a secure location, ideally outside the {product-title} environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup contains expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost.
Be sure to take an etcd backup before you update your cluster. This is important because, when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an {product-title} 4.17.5 cluster must use an etcd backup that was taken from 4.17.5.
[IMPORTANT]
====
Back up your cluster's etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host.
====
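The full procedure is in the module included below. As a minimal sketch of that single invocation of the backup script (`cluster-backup.sh`), assuming a placeholder control plane node name, the default backup path, and debug shell prompts that can differ by release, the backup typically looks like the following:

[source,terminal]
----
$ oc debug --as-root node/<control_plane_node>
sh-5.1# chroot /host
sh-5.1# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----

The script writes an etcd snapshot and a static pod resources archive into the target directory; both files from the same backup are needed for a later restore.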
After you have an etcd backup, you can xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state].
// Backing up etcd data
include::modules/backup-etcd.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../hosted_control_planes/hcp_high_availability/hcp-recovering-etcd-cluster.adoc#hcp-recovering-etcd-cluster[Recovering an unhealthy etcd cluster]
// Creating automated etcd backups
include::modules/etcd-creating-automated-backups.adoc[leveloffset=+1]
include::modules/creating-single-etcd-backup.adoc[leveloffset=+2]
include::modules/creating-recurring-etcd-backups.adoc[leveloffset=+2]


@@ -0,0 +1,81 @@
//NOTE TO CONTRIBUTORS:
//
//If you update any of the content in this assembly file, be sure to also make the same changes in the assemblies in the following directory: backup_and_restore/control_plane_backup_and_restore/disaster-recovery/.
:_mod-docs-content-type: ASSEMBLY
[id="etcd-disaster-recovery"]
include::_attributes/common-attributes.adoc[]
= Disaster recovery
:context: etcd-disaster-recovery
toc::[]
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their {product-title} cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state.
[IMPORTANT]
====
Disaster recovery requires you to have at least one healthy control plane host.
====
[id="etcd-dr-quorum"]
== Quorum restoration
You can use the `quorum-restore.sh` script to restore etcd quorum on clusters that are offline due to quorum loss. When quorum is lost, the {product-title} API becomes read-only. After quorum is restored, the {product-title} API returns to read/write mode.
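The detailed recovery steps are in the module included below. As a sketch only, assuming that you have already chosen a surviving control plane host to recover from and have SSH access to it, the restoration itself comes down to running the script on that host:

[source,terminal]
----
$ sudo -E /usr/local/bin/quorum-restore.sh
----

After quorum is restored on that host, recover or replace the remaining control plane nodes as the included procedure describes.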
// Restoring etcd quorum for high availability clusters
include::modules/dr-restoring-etcd-quorum-ha.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../installing/installing_bare_metal/upi/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal]
* xref:../../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]
[NOTE]
====
If you have a majority of your control plane nodes still available and have an etcd quorum, xref:../../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#replacing-unhealthy-etcd-member[replace a single unhealthy etcd member].
====
[id="etcd-dr-restore"]
== Restoring to a previous cluster state
To restore the cluster to a previous state, you must have previously backed up the etcd data by creating a snapshot. You use this snapshot to restore the cluster state. For more information, see "Backing up etcd data".
If applicable, you might also need to xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[recover from expired control plane certificates].
[WARNING]
====
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. Use this procedure only as a last resort.
Before performing a restore, see "About restoring to a previous cluster state" for more information on the impact to the cluster.
====
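The included modules cover the supported restore variants step by step. For orientation only, and assuming that a backup directory created as described in "Backing up etcd data" has been copied to the recovery control plane host, the core of the restore is a single script invocation on that host:

[source,terminal]
----
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup
----

The directory must contain both the etcd snapshot and the static pod resources archive from the same backup; the surrounding preparation and verification steps are covered in the included modules.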
// About restoring to a previous cluster state
include::modules/dr-restoring-cluster-state-about.adoc[leveloffset=+2]
// Restoring to a previous cluster state for a single node
include::modules/dr-restoring-cluster-state-sno.adoc[leveloffset=+2]
// Restoring to a previous cluster state
include::modules/dr-restoring-cluster-state.adoc[leveloffset=+2]
// Restoring a cluster from etcd backup manually
include::modules/manually-restoring-cluster-etcd-backup.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[Backing up etcd data]
* xref:../../installing/installing_bare_metal/upi/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal]
* xref:../../networking/accessing-hosts.adoc#accessing-hosts-on-aws_accessing-hosts[Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster]
* xref:../../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]
include::modules/dr-scenario-cluster-state-issues.adoc[leveloffset=+2]
// Recovering from expired control plane certificates
include::modules/dr-recover-expired-control-plane-certs.adoc[leveloffset=+1]
// Testing restore procedures
include::modules/dr-testing-restore-procedures.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state]


@@ -0,0 +1 @@
../images/


@@ -0,0 +1 @@
../modules/


@@ -0,0 +1,55 @@
//NOTE TO CONTRIBUTORS:
//
//If you update any of the content in this assembly file, be sure to also make the same changes in the assemblies in the following file: backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc.
:_mod-docs-content-type: ASSEMBLY
[id="replace-unhealthy-etcd-member"]
include::_attributes/common-attributes.adoc[]
= Replacing an unhealthy etcd member
:context: replace-unhealthy-etcd-member
toc::[]
The process to replace a single unhealthy etcd member depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or because the etcd pod is crashlooping.
[NOTE]
====
If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state] instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[recover from expired control plane certificates] instead of this procedure.
If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
====
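The modules included below walk through identifying the unhealthy member and determining its state. As an illustrative first check (the exact output wording can vary by release), the `EtcdMembersAvailable` status condition reports which members are unhealthy:

[source,terminal]
----
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}{end}'
----

A message such as `2 of 3 members are available, <member_name> is unhealthy` identifies the member to replace.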
// Identifying an unhealthy etcd member
include::modules/restore-identify-unhealthy-etcd-member.adoc[leveloffset=+1]
// Determining the state of the unhealthy etcd member
include::modules/restore-determine-state-etcd-member.adoc[leveloffset=+1]
== Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:
* Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
* Installing a primary control plane node on an unhealthy cluster
* Replacing an unhealthy etcd member whose etcd pod is crashlooping
* Replacing an unhealthy stopped baremetal etcd member
// Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
include::modules/restore-replace-stopped-etcd-member.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../machine_management/control_plane_machine_management/cpmso-troubleshooting.adoc#cpmso-ts-etcd-degraded_cpmso-troubleshooting[Recovering a degraded etcd Operator]
* link:https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2024/html/installing_openshift_container_platform_with_the_assisted_installer/expanding-the-cluster#installing-primary-control-plane-node-unhealthy-cluster_expanding-the-cluster[Installing a primary control plane node on an unhealthy cluster]
// Replacing an unhealthy etcd member whose etcd pod is crashlooping
include::modules/restore-replace-crashlooping-etcd-member.adoc[leveloffset=+2]
// Replacing an unhealthy baremetal stopped etcd member
include::modules/restore-replace-stopped-baremetal-etcd-member.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../machine_management/deleting-machine.adoc#machine-lifecycle-hook-deletion-etcd_deleting-machine[Quorum protection with machine lifecycle hooks]


@@ -0,0 +1 @@
../snippets


@@ -1,149 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="etcd-backup"]
include::_attributes/common-attributes.adoc[]
= Backing up and restoring etcd data
:context: etcd-backup
toc::[]
As the key-value store for {product-title}, etcd persists the state of all resource objects.
Back up the etcd data for your cluster regularly and store it in a secure location, ideally outside the {product-title} environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation, otherwise the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost.
Be sure to take an etcd backup before you update your cluster. Taking a backup before you update is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an {product-title} 4.17.5 cluster must use an etcd backup that was taken from 4.17.5.
[IMPORTANT]
====
Back up your cluster's etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host.
====
After you have an etcd backup, you can xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[restore to a previous cluster state].
// Backing up etcd data
include::modules/backup-etcd.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../hosted_control_planes/hcp_high_availability/hcp-recovering-etcd-cluster.adoc[Recovering an unhealthy etcd cluster for {hcp}]
// Creating automated etcd backups
include::modules/etcd-creating-automated-backups.adoc[leveloffset=+1]
// Creating a single etcd backup
include::modules/creating-single-etcd-backup.adoc[leveloffset=+2]
// Creating recurring etcd backups
include::modules/creating-recurring-etcd-backups.adoc[leveloffset=+2]
[id="replace-unhealthy-etcd-member_{context}"]
== Replacing an unhealthy etcd member
The process to replace a single unhealthy etcd member depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or because the etcd pod is crashlooping.
[NOTE]
====
If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[restore to a previous cluster state] instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to xref:../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[recover from expired control plane certificates] instead of this procedure.
If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
====
// Identifying an unhealthy etcd member
include::modules/restore-identify-unhealthy-etcd-member.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../etcd/etcd-backup.adoc#backing-up-etcd-data_etcd-backup[Backing up etcd data]
// Determining the state of the unhealthy etcd member
include::modules/restore-determine-state-etcd-member.adoc[leveloffset=+2]
// Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
include::modules/restore-replace-stopped-etcd-member.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../machine_management/control_plane_machine_management/cpmso-troubleshooting.adoc#cpmso-ts-etcd-degraded_cpmso-troubleshooting[Recovering a degraded etcd Operator]
* link:https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/expanding-the-cluster#installing-primary-control-plane-node-unhealthy-cluster_expanding-the-cluster[Installing a primary control plane node on an unhealthy cluster]
// Replacing an unhealthy etcd member whose etcd pod is crashlooping
include::modules/restore-replace-crashlooping-etcd-member.adoc[leveloffset=+3]
// Replacing an unhealthy baremetal stopped etcd member
include::modules/restore-replace-stopped-baremetal-etcd-member.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../machine_management/deleting-machine.adoc#machine-lifecycle-hook-deletion-etcd_deleting-machine[Quorum protection with machine lifecycle hooks]
[id="etcd-disaster-recovery_{context}"]
== Disaster recovery
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their {product-title} cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state.
[IMPORTANT]
====
Disaster recovery requires you to have at least one healthy control plane host.
====
xref:../etcd/etcd-backup.adoc#dr-restoring-etcd-quorum-ha_etcd-backup[Restoring etcd quorum for high availability clusters]:: This solution handles situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. This solution does not require an etcd backup.
+
[NOTE]
====
If you have a majority of your control plane nodes still available and have an etcd quorum, xref:../etcd/etcd-backup.adoc#replace-unhealthy-etcd-member_etcd-backup[replace a single unhealthy etcd member].
====
xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[Restoring to a previous cluster state]:: This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. If you have taken an etcd backup, you can restore your cluster to a previous state.
+
If applicable, you might also need to xref:../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[recover from expired control plane certificates].
+
[WARNING]
====
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort.
Before performing a restore, see xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[Restoring a cluster state] for more information on the impact to the cluster.
====
xref:../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[Recovering from expired control plane certificates]:: This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
// Testing restore procedures
include::modules/dr-testing-restore-procedures.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[Restoring to a previous cluster state]
// Restoring etcd quorum for high availability clusters
include::modules/dr-restoring-etcd-quorum-ha.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../installing/installing_bare_metal/upi/installing-bare-metal.adoc[Installing a user-provisioned cluster on bare metal]
* xref:../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]
// Restoring to a previous cluster state
include::modules/dr-restoring-cluster-state-about.adoc[leveloffset=+2]
// Restoring to a previous cluster state for a single node
include::modules/dr-restoring-cluster-state-sno.adoc[leveloffset=+3]
// Restoring to a previous cluster state
include::modules/dr-restoring-cluster-state.adoc[leveloffset=+3]
// Restoring a cluster from etcd backup manually
include::modules/manually-restoring-cluster-etcd-backup.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../etcd/etcd-backup.adoc#backing-up-etcd-data_etcd-backup[Backing up etcd data]
* xref:../installing/installing_bare_metal/upi/installing-bare-metal.adoc[Installing a user-provisioned cluster on bare metal]
* xref:../networking/accessing-hosts.adoc#accessing-hosts[Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster]
* xref:../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]
// Issues and workarounds for restoring a persistent storage state
include::modules/dr-scenario-cluster-state-issues.adoc[leveloffset=+3]
// Recovering from expired control plane certificates
include::modules/dr-recover-expired-control-plane-certs.adoc[leveloffset=+2]