// Module included in the following assemblies:
//
// * backup_and_recovery/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc
// * etcd/etcd-backup-restore/etcd-disaster-recovery.adoc

:_mod-docs-content-type: CONCEPT
[id="dr-scenario-2-restoring-cluster-state-about_{context}"]
= About restoring to a previous cluster state

To restore the cluster to a previous state, you must have previously backed up the `etcd` data by creating a snapshot. You will use this snapshot to restore the cluster state. For more information, see "Backing up etcd data".

You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:

* The cluster has lost the majority of control plane hosts (quorum loss).
* An administrator has deleted something critical and must restore to recover the cluster.

[WARNING]
====
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. Use it only as a last resort.

If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup. A quick way to check this is shown at the end of this section.
====

Restoring etcd effectively takes a cluster back in time, and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components such as kubelets, Kubernetes controller managers, persistent volume controllers, and {product-title} Operators, including the network Operator.

It can cause Operator churn when the content in etcd does not match the actual content on disk, causing the Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. Resolving these issues can require manual intervention.

In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.
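
As a quick check before you decide to restore, you can attempt a simple read through the Kubernetes API server. The commands below are only an illustrative sketch; any authenticated read serves the same purpose. If they return data, etcd is still serving requests and you should not restore from a backup.

[source,terminal]
----
$ oc get nodes
$ oc -n openshift-etcd get pods
----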