// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-disaster-recovery-oadp.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-dr-oadp-restore_{context}"]
= Restoring a hosted cluster into the same management cluster by using {oadp-short}

You can restore the hosted cluster by creating the `Restore` custom resource (CR).

* If you are using an _in-place_ update, InfraEnv does not need spare nodes. You need to re-provision the worker nodes from the management cluster.
* If you are using a _replace_ update, you need some spare nodes for InfraEnv to deploy the worker nodes.

[IMPORTANT]
====
After you back up your hosted cluster, you must destroy it to initiate the restore process. To initiate node provisioning, you must back up the workloads in the data plane before you delete the hosted cluster.
====

.Prerequisites

* You completed the steps in link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.15/html/clusters/cluster_mce_overview#remove-a-cluster-by-using-the-console[Removing a cluster by using the console] to delete your hosted cluster.
* You completed the steps in link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.15/html/clusters/cluster_mce_overview#removing-a-cluster-from-management-in-special-cases[Removing remaining resources after removing a cluster].

To monitor and observe the backup and restore process, see "Observing the backup and restore process".

.Procedure

. Verify that no pods and persistent volume claims (PVCs) are present in the hosted control plane namespace by running the following command:
+
[source,terminal]
----
$ oc get pod,pvc -n <hosted_control_plane_namespace>
----
+
.Expected output
[source,terminal]
----
No resources found
----

. Create a YAML file that defines the `Restore` CR:
+
.Example `restore-hosted-cluster.yaml` file
[source,yaml]
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore_resource_name> <1>
  namespace: openshift-adp
spec:
  backupName: <backup_resource_name> <2>
  restorePVs: true <3>
  existingResourcePolicy: update <4>
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
----
<1> Replace `<restore_resource_name>` with the name of your `Restore` resource.
<2> Replace `<backup_resource_name>` with the name of your `Backup` resource.
<3> Initiates the recovery of persistent volumes (PVs) and their pods.
<4> Ensures that existing objects are overwritten with the backed-up content.
+
[IMPORTANT]
====
You must create the `infraenv` resource in a separate namespace. Do not delete the `infraenv` resource during the restore process. The `infraenv` resource is required for the new nodes to be reprovisioned.
====
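+
Optionally, before you apply the `Restore` CR, confirm that the `infraenv` resource still exists. In the following example, `<infraenv_namespace>` is a placeholder for the namespace where you created the `infraenv` resource:
+
[source,terminal]
----
$ oc get infraenv -n <infraenv_namespace>
----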

. Apply the `Restore` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f restore-hosted-cluster.yaml
----
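+
If the restore operation does not progress, you can inspect the Velero logs. The following check assumes the default {oadp-short} configuration, where Velero runs as the `velero` deployment in the `openshift-adp` namespace:
+
[source,terminal]
----
$ oc logs -n openshift-adp deployment/velero
----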

. Verify that the value of `status.phase` of the `Restore` resource is `Completed` by running the following command:
+
[source,terminal]
----
$ oc get restore <restore_resource_name> -n openshift-adp \
  -o jsonpath='{.status.phase}'
----
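+
If the restore process finished successfully, the command returns the phase value:
+
.Example output
[source,terminal]
----
Completed
----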

. After the restore process is complete, start the reconciliation of the `HostedCluster` and `NodePool` resources that you paused when you backed up the control plane workload:

.. Start the reconciliation of the `HostedCluster` resource by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
  --type json \
  -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "false"}]'
----

.. Start the reconciliation of the `NodePool` resource by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  patch nodepool -n <hosted_cluster_namespace> <node_pool_name> \
  --type json \
  -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "false"}]'
----
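+
Optionally, confirm that reconciliation is no longer paused. The previous patch commands set the `spec.pausedUntil` field to `"false"`, so the following example check of the `HostedCluster` resource returns `false`; the same check applies to the `NodePool` resource:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  get hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
  -o jsonpath='{.spec.pausedUntil}'
----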

. Start the reconciliation of the Agent provider resources that you paused when you backed up the control plane workload:

.. Start the reconciliation of the `AgentCluster` resource by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  annotate agentcluster -n <hosted_control_plane_namespace> \
  cluster.x-k8s.io/paused- --overwrite=true --all
----

.. Start the reconciliation of the `AgentMachine` resource by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  annotate agentmachine -n <hosted_control_plane_namespace> \
  cluster.x-k8s.io/paused- --overwrite=true --all
----
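+
Optionally, confirm that the `cluster.x-k8s.io/paused` annotation is no longer present on the Agent provider resources. The following example lists the annotations of the `AgentMachine` resources:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  get agentmachine -n <hosted_control_plane_namespace> \
  -o jsonpath='{.items[*].metadata.annotations}'
----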

. Remove the `hypershift.openshift.io/skip-delete-hosted-controlplane-namespace` annotation from the `HostedCluster` resource so that you do not have to manually delete the hosted control plane namespace. Run the following command:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
  hypershift.openshift.io/skip-delete-hosted-controlplane-namespace- \
  --overwrite=true
----
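+
Optionally, confirm that the annotation was removed by listing the annotations on the `HostedCluster` resource:
+
[source,terminal]
----
$ oc --kubeconfig <management_cluster_kubeconfig_file> \
  get hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
  -o jsonpath='{.metadata.annotations}'
----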