mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

Merge pull request #79309 from openshift-cherrypick-robot/cherry-pick-78045-to-enterprise-4.17

[enterprise-4.17] #CNV38755 - new dr content
This commit is contained in:
Jeana Routh
2024-07-23 14:00:43 -04:00
committed by GitHub
5 changed files with 116 additions and 13 deletions


@@ -6,25 +6,19 @@
[id="virt-about-dr-methods_{context}"]
= About disaster recovery methods
For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the link:https://access.redhat.com/articles/7041594[Red{nbsp}Hat {VirtProductName} disaster recovery guide] in the Red{nbsp}Hat Knowledgebase.
The two primary DR methods for {VirtProductName} are Metropolitan Disaster Recovery (Metro-DR) and Regional-DR.
[id="metro-dr_{context}"]
== Metro-DR
Metro-DR uses synchronous replication. It writes to storage at both the primary and secondary sites so that the data is always synchronized between sites. Because the storage provider is responsible for ensuring that the synchronization succeeds, the environment must meet the throughput and latency requirements of the storage provider.
[id="regional-dr_{context}"]
== Regional-DR
Regional-DR uses asynchronous replication. The data in the primary site is synchronized with the secondary site at regular intervals. For this type of replication, you can have a higher latency connection between the primary and secondary sites.
:FeatureName: Regional-DR
include::snippets/technology-preview.adoc[]


@@ -0,0 +1,58 @@
// Module included in the following assemblies:
//
// * virt/backup_restore/virt-disaster-recovery.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-defining-apps-for-dr_{context}"]
= Defining applications for disaster recovery
Define applications for disaster recovery by using VMs that {rh-rhacm-first} manages or discovers.
[id="best-practices-{rh-rhacm}-managed-vm_{context}"]
== Best practices when defining an {rh-rhacm}-managed VM
An {rh-rhacm}-managed application that includes a VM must be created by using a GitOps workflow and by creating an {rh-rhacm} application or `ApplicationSet`.
There are several actions you can take to improve your experience and chance of success when defining an {rh-rhacm}-managed VM.
[discrete]
[id="use-a-pvc-and-populator_{context}"]
=== Use a PVC and populator to define storage for the VM
Because data volumes create persistent volume claims (PVCs) implicitly, data volumes and VMs with data volume templates do not fit as neatly into the GitOps model.
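An explicit PVC filled by a Containerized Data Importer (CDI) volume populator keeps the full storage definition in the Git repository, which fits GitOps better than an implicit data volume. The following is a minimal sketch, assuming hypothetical resource names and a placeholder image URL:

[source,yaml]
----
# Populator source that tells CDI where to import the disk image from
apiVersion: cdi.kubevirt.io/v1beta1
kind: VolumeImportSource
metadata:
  name: rhel9-import # hypothetical name
spec:
  source:
    http:
      url: "https://example.com/images/rhel9.qcow2" # placeholder URL
---
# PVC that references the populator source instead of an implicit data volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhel9-disk # hypothetical name
spec:
  dataSourceRef:
    apiGroup: cdi.kubevirt.io
    kind: VolumeImportSource
    name: rhel9-import
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
----

The VM definition can then reference `rhel9-disk` directly as a `persistentVolumeClaim` volume, so every object involved in the VM's storage is declared explicitly in Git.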
[discrete]
[id="use-import-method_{context}"]
=== Use the import method when choosing a population source for your VM disk
Use the import method to work around limitations in Regional-DR that prevent you from protecting VMs that use cloned PVCs.
To use the import method, select a {op-system-base} image from the software catalog. Red{nbsp}Hat recommends using a specific version of the image rather than a floating tag for consistent results. The KubeVirt community maintains container disks for other operating systems in a Quay repository.
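As a sketch of the import method, the registry source below imports a community container disk while pinning a specific version instead of a floating tag such as `:latest`. The resource name and image tag are illustrative:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: VolumeImportSource
metadata:
  name: fedora-import # hypothetical name
spec:
  source:
    registry:
      # Pin a specific version rather than a floating tag for consistent results
      url: "docker://quay.io/containerdisks/fedora:41" # illustrative tag
----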
[discrete]
[id="use-pull-node_{context}"]
=== Use `pullMethod: node`
Use the pod `pullMethod: node` when creating a data volume from a registry source to take advantage of the {product-title} pull secret, which is required to pull container images from the Red{nbsp}Hat registry.
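In a registry source, `pullMethod` is set alongside the image URL. A minimal sketch, with a hypothetical resource name and an illustrative image path:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: VolumeImportSource
metadata:
  name: rhel-import # hypothetical name
spec:
  source:
    registry:
      url: "docker://registry.redhat.io/rhel9/rhel-guest-image:9.4" # illustrative image path
      pullMethod: node # pull on the node so the cluster-wide pull secret is used
----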
[id="best-practices-{rh-rhacm}-discovered-vm_{context}"]
== Best practices when defining an {rh-rhacm}-discovered virtual machine
You can configure any VM in the cluster that is not an {rh-rhacm}-managed application as an {rh-rhacm}-discovered application. This includes VMs imported by using the Migration Toolkit for Virtualization (MTV), VMs created by using the {VirtProductName} web console, or VMs created by any other means, such as the CLI.
There are several actions you can take to improve your experience and chance of success when defining an {rh-rhacm}-discovered VM.
[discrete]
[id="protect-the-vm_{context}"]
=== Protect the VM when using MTV, the {VirtProductName} web console, or a custom VM
Because automatic labeling is not currently available, the application owner must manually label the components of the VM application when using MTV, the {VirtProductName} web console, or a custom VM.
After creating the VM, apply a common label to the following resources associated with the VM: `VirtualMachine`, `DataVolume`, `PersistentVolumeClaim`, `Service`, `Route`, `Secret`, and `ConfigMap`. Do not label virtual machine instances (VMIs) or pods, because {VirtProductName} creates and manages these automatically.
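For example, the same label is applied in the `metadata` section of each manifest. The label key and value are illustrative; any consistent key-value pair works as long as it is applied to every resource in the application:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: database-vm # hypothetical name
  labels:
    app: database-vm # apply this same label to the DataVolume, PVC, Service, Route, Secret, and ConfigMap
----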
[discrete]
[id="working-vm-contains_{context}"]
=== Include more than the `VirtualMachine` object in the VM
Working VMs typically also contain data volumes, persistent volume claims (PVCs), services, routes, secrets, `ConfigMap` objects, and `VirtualMachineSnapshot` objects.
[discrete]
[id="part-of-larger-app_{context}"]
=== Include the VM as part of a larger logical application
This includes other pod-based workloads and VMs.


@@ -0,0 +1,19 @@
// Module included in the following assemblies:
//
// * virt/backup_restore/virt-disaster-recovery.adoc
:_mod-docs-content-type: CONCEPT
[id="metro-dr-odf_{context}"]
= Metro-DR for {rh-storage-first}
{VirtProductName} supports the link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.14/html-single/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index#metro-dr-solution[Metro-DR solution for {rh-storage}], which provides two-way synchronous data replication between managed {VirtProductName} clusters installed on primary and secondary sites. This solution combines {rh-rhacm-first}, Red{nbsp}Hat Ceph Storage, and {rh-storage} components.
Use this solution during a site disaster to fail over applications from the primary to the secondary site, and to relocate the applications back to the primary site after restoring the disaster site.
This synchronous solution is only available to metropolitan distance data centers with a 10-millisecond latency or less.
For more information about using the Metro-DR solution for {rh-storage} with {VirtProductName}, see link:https://access.redhat.com/articles/7053115[the Red{nbsp}Hat Knowledgebase] or the {ibm-title} {rh-storage} Metro-DR documentation.
[role="_additional-resources-dr"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index[Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads]


@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * virt/backup_restore/virt-disaster-recovery.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-vm-behavior-dr_{context}"]
= VM behavior during disaster recovery scenarios
VMs typically act similarly to pod-based workloads during both relocate and failover disaster recovery flows.
[discrete]
[id="dr-relocate_{context}"]
== Relocate
Use relocate to move an application from the primary environment to the secondary environment when the primary environment is still accessible. During relocate, the VM is gracefully terminated, any unreplicated data is synchronized to the secondary environment, and the VM starts in the secondary environment.
Because the VM terminates gracefully, there is no data loss in this scenario. Therefore, the VM operating system does not need to perform crash recovery.
[discrete]
[id="dr-failover_{context}"]
== Failover
Use failover when there is a critical failure in the primary environment that makes it impractical or impossible to use relocation to move the workload to a secondary environment. When failover is executed, the storage is fenced from the primary environment, the I/O to the VM disks is abruptly halted, and the VM restarts in the secondary environment using the replicated data.
You should expect data loss due to failover. The extent of loss depends on whether you use Metro-DR, which uses synchronous replication, or Regional-DR, which uses asynchronous replication. Because Regional-DR uses snapshot-based replication intervals, the window of data loss is proportional to the replication interval length. When the VM restarts, the operating system might perform crash recovery.
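With the {rh-storage} DR solutions, failover is typically triggered by setting the action on the `DRPlacementControl` resource that protects the application. The following is a sketch only, with hypothetical names throughout; consult the {rh-rhacm} and {rh-storage} DR documentation for the authoritative procedure:

[source,yaml]
----
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: vm-app-drpc # hypothetical name
  namespace: vm-app # hypothetical namespace
spec:
  drPolicyRef:
    name: dr-policy # hypothetical DRPolicy name
  placementRef:
    kind: Placement
    name: vm-app-placement # hypothetical placement name
  pvcSelector:
    matchLabels:
      app: vm-app # common label applied to the application's PVCs
  action: Failover # Relocate is used instead for a planned move
  failoverCluster: secondary-cluster # hypothetical managed cluster name
----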


@@ -9,3 +9,10 @@ toc::[]
{VirtProductName} supports using disaster recovery (DR) solutions to ensure that your environment can recover after a site outage. To use these methods, you must plan your {VirtProductName} deployment in advance.
include::modules/virt-about-dr-methods.adoc[leveloffset=+1]
include::modules/virt-defining-apps-for-dr.adoc[leveloffset=+1]
include::modules/virt-vm-behavior-dr.adoc[leveloffset=+1]
include::modules/virt-metro-dr-odf.adoc[leveloffset=+1]
[role="_additional-resources-dr"]
.Additional resources
* link:https://docs.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.10[Red{nbsp}Hat Advanced Cluster Management for Kubernetes 2.10]