mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

delete unused OADP files in main

Signed-off-by: Shruti Deshpande <shdeshpa@redhat.com>
This commit is contained in:
Shruti Deshpande
2025-09-26 10:37:37 +05:30
parent 17d6c6ea15
commit acb69bddee
30 changed files with 0 additions and 1915 deletions


@@ -1,65 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="oadp-12-data-mover-ceph-doc"]
= Using OADP 1.2 Data Mover with Ceph storage
include::_attributes/common-attributes.adoc[]
:context: backing-up-applications
toc::[]
You can use OADP 1.2 Data Mover to back up and restore application data for clusters that use CephFS, CephRBD, or both.
OADP 1.2 Data Mover leverages Ceph features that support large-scale environments. One of these is the shallow copy method, which is available for {product-title} 4.12 and later. This feature supports backing up and restoring `StorageClass` and `AccessMode` resources other than those of the source persistent volume claim (PVC).
[IMPORTANT]
====
The CephFS shallow copy feature is a backup feature. It is not part of restore operations.
====
include::modules/oadp-ceph-prerequisites.adoc[leveloffset=+1]
[id="defining-crs-for-12-data-mover"]
== Defining custom resources for use with OADP 1.2 Data Mover
When you install {rh-storage-first}, it automatically creates default CephFS and CephRBD `StorageClass` and `VolumeSnapshotClass` custom resources (CRs). You must define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.
After you define the CRs, you must make several other changes to your environment before you can perform your backup and restore operations.
include::modules/oadp-ceph-preparing-cephfs-crs.adoc[leveloffset=+2]
include::modules/oadp-ceph-preparing-cephrbd-crs.adoc[leveloffset=+2]
include::modules/oadp-ceph-preparing-crs-additional.adoc[leveloffset=+2]
[id="oadp-ceph-back-up-restore-cephfs"]
== Backing up and restoring data using OADP 1.2 Data Mover and CephFS storage
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage by enabling the shallow copy feature of CephFS.
include::snippets/oadp-ceph-cr-prerequisites.adoc[]
:context: !backing-up-applications
:context: cephfs
include::modules/oadp-ceph-cephfs-back-up-dba.adoc[leveloffset=+2]
include::modules/oadp-ceph-cephfs-back-up.adoc[leveloffset=+2]
include::modules/oadp-ceph-cephfs-restore.adoc[leveloffset=+2]
[id="oadp-ceph-split"]
== Backing up and restoring data using OADP 1.2 Data Mover and split volumes (CephFS and Ceph RBD)
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data in an environment that has _split volumes_, that is, an environment that uses both CephFS and CephRBD.
include::snippets/oadp-ceph-cr-prerequisites.adoc[]
:context: !cephfs
:context: split
include::modules/oadp-ceph-split-back-up-dba.adoc[leveloffset=+2]
include::modules/oadp-ceph-cephfs-back-up.adoc[leveloffset=+2]
include::modules/oadp-ceph-cephfs-restore.adoc[leveloffset=+2]
:context: !split
:context: backing-up-applications
include::modules/oadp-deletion-policy-1-2.adoc[leveloffset=+1]


@@ -1,26 +0,0 @@
[id="oadp-cleaning-up-after-data-mover-1-1-backup-doc"]
= Cleaning up after a backup using OADP 1.1 Data Mover
include::_attributes/common-attributes.adoc[]
:context: datamover11
toc::[]
For OADP 1.1 Data Mover, you must perform a data cleanup after you perform a backup.
The cleanup consists of deleting the following resources:
* Snapshots in a bucket
* Cluster resources
* Volume snapshot backups (VSBs) after a backup procedure that is either run by a schedule or is run repetitively
include::modules/oadp-cleaning-up-after-data-mover-snapshots.adoc[leveloffset=+1]
[id="deleting-cluster-resources-data-mover"]
== Deleting cluster resources
OADP 1.1 Data Mover might leave cluster resources whether or not it successfully backs up your container storage interface (CSI) volume snapshots to a remote object store.
include::modules/oadp-deleting-cluster-resources-following-success.adoc[leveloffset=+2]
include::modules/oadp-deleting-cluster-resources-following-failure.adoc[leveloffset=+2]
include::modules/oadp-vsb-cleanup-after-scheduler.adoc[leveloffset=+1]


@@ -1,27 +0,0 @@
// Module included in the following assemblies:
// * backup_and_restore/application_backup_and_restore/troubleshooting.adoc
:_mod-docs-content-type: CONCEPT
[id="migration-combining-must-gather_{context}"]
= Combining options when using the must-gather tool
Currently, it is not possible to combine must-gather scripts, for example, by specifying a timeout threshold while permitting insecure TLS connections. In some situations, you can work around this limitation by setting internal variables on the must-gather command line, as in the following example:
[source,terminal,subs="attributes+"]
----
$ oc adm must-gather --image={must-gather-v1-4} -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>
----
In this example, the `skip_tls` variable is set before the `gather_with_timeout` script runs. The result combines the behavior of `gather_with_timeout` and `gather_without_tls`.
The only other variables that you can specify this way are the following:
* `logs_since`, with a default value of `72h`
* `request_timeout`, with a default value of `0s`
If the `DataProtectionApplication` custom resource (CR) is configured with `s3Url` and `insecureSkipTLS: true`, the `must-gather` tool cannot collect the necessary logs because a CA certificate is missing. To collect those logs, run the `must-gather` command with the following option:
[source,terminal,subs="attributes+"]
----
$ oc adm must-gather --image={must-gather-v1-4} -- /usr/bin/gather_without_tls true
----
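As a further illustration of the internal-variable pattern, the `logs_since` override can be combined with the timeout script in the same way. This is a sketch only; the image reference and timeout value below are placeholders, not verified defaults:

```shell
# Sketch: combine the logs_since variable with gather_with_timeout,
# following the same pattern as the skip_tls example above.
image="registry.example/oadp-must-gather:latest"   # placeholder image reference
timeout=600                                        # timeout in seconds
cmd="oc adm must-gather --image=${image} -- logs_since=24h /usr/bin/gather_with_timeout ${timeout}"
echo "$cmd"
```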


@@ -1,24 +0,0 @@
// Module included in the following assemblies:
//
// * migrating_from_ocp_3_to_4/migrating-applications-3-4.adoc
// * migration_toolkit_for_containers/migrating-applications-with-mtc
:_mod-docs-content-type: PROCEDURE
[id="creating-ca-bundle_{context}"]
= Creating a CA certificate bundle file for self-signed certificates
If you use a self-signed certificate to secure a cluster or a replication repository for the {mtc-first}, certificate verification might fail with the following error message: `Certificate signed by unknown authority`.
You can create a custom CA certificate bundle file and upload it in the {mtc-short} web console when you add a cluster or a replication repository.
.Procedure
Download a CA certificate from a remote endpoint and save it as a CA bundle file:
[source,terminal]
----
$ echo -n | openssl s_client -connect <host_FQDN>:<port> \ <1>
| sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> <2>
----
<1> Specify the host FQDN and port of the endpoint, for example, `api.my-cluster.example.com:6443`.
<2> Specify the name of the CA bundle file.
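The `sed` filter in the command above can be exercised locally against simulated `openssl s_client` output to confirm that only the PEM certificate block is kept. The sample output and certificate body below are placeholders:

```shell
# Simulated openssl s_client output; the certificate body is a placeholder.
sample='CONNECTED(00000003)
-----BEGIN CERTIFICATE-----
MIIBexampleonly
-----END CERTIFICATE-----
verify return:1'
# Keep only the lines between the BEGIN and END certificate markers,
# exactly as the sed expression in the procedure does.
bundle=$(printf '%s\n' "$sample" | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p')
printf '%s\n' "$bundle"
```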


@@ -1,20 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-2.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-backing-up-dpa-configuration-1-2-0_{context}"]
= Backing up the DPA configuration
You must back up your current `DataProtectionApplication` (DPA) configuration.
.Procedure
* Save your current DPA configuration by running the following command:
+
.Example
[source,terminal]
----
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
----
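As a quick sanity check, not part of the documented procedure, you can confirm that the saved file is non-empty and actually contains a DPA before making changes. The file content here is simulated; on a live cluster it comes from the `oc` command above:

```shell
# Simulate the saved backup file; on a cluster this is produced by
# oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
cat > dpa.orig.backup <<'EOF'
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
EOF
# Non-empty and contains a DPA document: treat the backup as valid.
[ -s dpa.orig.backup ] && grep -q 'kind: DataProtectionApplication' dpa.orig.backup && status=valid
echo "$status"
```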


@@ -1,19 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-3.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-backing-up-dpa-configuration-1-3-0_{context}"]
= Backing up the DPA configuration
You must back up your current `DataProtectionApplication` (DPA) configuration.
.Procedure
* Save your current DPA configuration by running the following command:
+
.Example
[source,terminal]
----
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
----


@@ -1,83 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-ceph-cephfs-back-up-dba_{context}"]
= Creating a DPA for use with CephFS storage
You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage.
.Procedure
. For the OADP 1.2 Data Mover, you must verify that the `deletionPolicy` field of the `VolumeSnapshotClass` CR is set to `Retain` by running the following command:
+
[source,terminal]
----
$ oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"Retention Policy: "}{.deletionPolicy}{"\n"}{end}'
----
. Verify that the labels of the `VolumeSnapshotClass` CR are set to `true` by running the following command:
+
[source,terminal]
----
$ oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"labels: "}{.metadata.labels}{"\n"}{end}'
----
. Verify that the `storageclass.kubernetes.io/is-default-class` annotation of the `StorageClass` CR is set to `true` by running the following command:
+
[source,terminal]
----
$ oc get storageClass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"annotations: "}{.metadata.annotations}{"\n"}{end}'
----
. Create a Data Protection Application (DPA) CR similar to the following example:
+
.Example DPA CR
+
[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
  namespace: openshift-adp
spec:
  backupLocations:
  - velero:
      config:
        profile: default
        region: us-east-1
      credential:
        key: cloud
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: <my_bucket>
        prefix: velero
      provider: aws
  configuration:
    restic:
      enable: false # <1>
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
      - vsm
  features:
    dataMover:
      credentialName: <restic_secret_name> # <2>
      enable: true # <3>
      volumeOptionsForStorageClasses: # <4>
        ocs-storagecluster-cephfs:
          sourceVolumeOptions:
            accessMode: ReadOnlyMany
            cacheAccessMode: ReadWriteMany
            cacheStorageClassName: ocs-storagecluster-cephfs
            storageClassName: ocs-storagecluster-cephfs-shallow
----
<1> There is no default value for the restic `enable` field. Valid values are `true` or `false`.
<2> Use the Restic `Secret` that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not use your Restic `Secret`, the CR uses the default value `dm-credential` for this parameter.
<3> There is no default value for the Data Mover `enable` field. Valid values are `true` or `false`.
<4> Optional parameter. You can define a different set of `VolumeOptionsForStorageClass` labels for each `storageClass` volume. This configuration provides a backup for volumes with different providers. The optional `VolumeOptionsForStorageClass` parameter is typically used with CephFS but can be used for any storage type.
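The verification steps at the start of this procedure can be scripted against captured command output. In the following sketch, `check_retain` is a hypothetical helper, and the input strings follow the `jsonpath` output format used in the commands above:

```shell
# Hypothetical helper: succeed only if captured "oc get volumesnapshotclass"
# output reports a Retain policy, matching the jsonpath template above.
check_retain() {
  case "$1" in
    *"Retention Policy: Retain"*) return 0 ;;
    *) return 1 ;;
  esac
}

check_retain "Name: ocs-storagecluster-cephfsplugin-snapclass Retention Policy: Retain" \
  && echo "deletionPolicy ok"
check_retain "Name: other-snapclass Retention Policy: Delete" \
  || echo "deletionPolicy must be Retain for OADP 1.2 Data Mover"
```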


@@ -1,67 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
ifeval::["{context}" == "cephfs"]
:cephfs:
endif::[]
ifeval::["{context}" == "split"]
:split:
endif::[]
:_mod-docs-content-type: PROCEDURE
[id="oadp-ceph-cephfs-back-up_{context}"]
ifdef::cephfs[]
= Backing up data using OADP 1.2 Data Mover and CephFS storage
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data using CephFS storage by enabling the shallow copy feature of CephFS storage.
endif::cephfs[]
ifdef::split[]
= Backing up data using OADP 1.2 Data Mover and split volumes
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data in an environment that has split volumes.
endif::split[]
.Procedure
. Create a `Backup` CR as in the following example:
+
.Example `Backup` CR
+
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup_name>
  namespace: <protected_ns>
spec:
  includedNamespaces:
  - <app_ns>
  storageLocation: velero-sample-1
----
. Monitor the progress of the `VolumeSnapshotBackup` CRs by completing the following steps:
.. To check the progress of all the `VolumeSnapshotBackup` CRs, run the following command:
+
[source,terminal]
----
$ oc get vsb -n <app_ns>
----
.. To check the progress of a specific `VolumeSnapshotBackup` CR, run the following command:
+
[source,terminal]
----
$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
----
. Wait several minutes until the `VolumeSnapshotBackup` CR has the status `Completed`.
. Verify that there is at least one snapshot in the object store that is given in the Restic `Secret`. You can check for this snapshot in your targeted `BackupStorageLocation` storage provider that has a prefix of `/<OADP_namespace>`.
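The wait in the previous steps can be automated with a small polling loop. In this sketch, `get_vsb_phase` is a stub standing in for the `oc get vsb ... -o jsonpath` command above, and the VSB and namespace names are placeholders:

```shell
# Stub standing in for:
#   oc get vsb "$1" -n "$2" -o jsonpath='{.status.phase}'
# On a real cluster the phase typically passes through InProgress first.
get_vsb_phase() { echo "Completed"; }

phase=""
for attempt in 1 2 3 4 5; do
  phase=$(get_vsb_phase my-vsb my-app-ns)
  [ "$phase" = "Completed" ] && break
  sleep 60   # wait between polls, as the procedure suggests
done
echo "final phase: $phase"
```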
ifeval::["{context}" == "cephfs"]
:!cephfs:
endif::[]
ifeval::["{context}" == "split"]
:!split:
endif::[]


@@ -1,83 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
ifeval::["{context}" == "cephfs"]
:cephfs:
endif::[]
ifeval::["{context}" == "split"]
:split:
endif::[]
:_mod-docs-content-type: PROCEDURE
[id="oadp-ceph-cephfs-restore_{context}"]
ifdef::cephfs[]
= Restoring data using OADP 1.2 Data Mover and CephFS storage
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data using CephFS storage if the shallow copy feature of CephFS storage was enabled for the backup procedure. The shallow copy feature is not used in the restore procedure.
endif::cephfs[]
ifdef::split[]
= Restoring data using OADP 1.2 Data Mover and split volumes
You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data in an environment that has split volumes, if the shallow copy feature of CephFS storage was enabled for the backup procedure. The shallow copy feature is not used in the restore procedure.
endif::split[]
.Procedure
. Delete the `VolumeSnapshotBackup` CRs from the application namespace by running the following command:
+
[source,terminal]
----
$ oc delete vsb -n <app_namespace> --all
----
. Delete any `VolumeSnapshotContent` CRs that were created during backup by running the following command:
+
[source,terminal]
----
$ oc delete volumesnapshotcontent --all
----
. Create a `Restore` CR as in the following example:
+
.Example `Restore` CR
+
[source,yaml]
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore_name>
  namespace: <protected_ns>
spec:
  backupName: <previous_backup_name>
----
. Monitor the progress of the `VolumeSnapshotRestore` CRs by doing the following:
.. To check the progress of all the `VolumeSnapshotRestore` CRs, run the following command:
+
[source,terminal]
----
$ oc get vsr -n <app_ns>
----
.. To check the progress of a specific `VolumeSnapshotRestore` CR, run the following command:
+
[source,terminal]
----
$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
----
. Verify that your application data has been restored by running the following command:
+
[source,terminal]
----
$ oc get route <route_name> -n <app_ns> -o jsonpath="{.spec.host}"
----
ifeval::["{context}" == "cephfs"]
:!cephfs:
endif::[]
ifeval::["{context}" == "split"]
:!split:
endif::[]


@@ -1,65 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-ceph-preparing-cephfs-crs_{context}"]
= Defining CephFS custom resources for use with OADP 1.2 Data Mover
When you install {rh-storage-first}, it automatically creates a default CephFS `StorageClass` custom resource (CR) and a default CephFS `VolumeSnapshotClass` CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.
.Procedure
. Define the `VolumeSnapshotClass` CR as in the following example:
+
.Example `VolumeSnapshotClass` CR
+
[source,yaml]
----
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: <deletion_policy_type> <1>
driver: openshift-storage.cephfs.csi.ceph.com
kind: VolumeSnapshotClass
metadata:
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true" <2>
  labels:
    velero.io/csi-volumesnapshot-class: "true" <3>
  name: ocs-storagecluster-cephfsplugin-snapclass
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
----
<1> OADP supports the `Retain` and `Delete` deletion policy types for CSI and Data Mover backup and restore. For the OADP 1.2 Data Mover, set the deletion policy type to `Retain`.
<2> Must be set to `true`.
<3> Must be set to `true`.
. Define the `StorageClass` CR as in the following example:
+
.Example `StorageClass` CR
+
[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ocs-storagecluster-cephfs
  annotations:
    description: Provides RWO and RWX Filesystem volumes
    storageclass.kubernetes.io/is-default-class: "true" <1>
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
----
<1> Must be set to `true`.


@@ -1,63 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-ceph-preparing-cephrbd-crs_{context}"]
= Defining CephRBD custom resources for use with OADP 1.2 Data Mover
When you install {rh-storage-first}, it automatically creates a default CephRBD `StorageClass` custom resource (CR) and a default CephRBD `VolumeSnapshotClass` CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.
.Procedure
. Define the `VolumeSnapshotClass` CR as in the following example:
+
.Example `VolumeSnapshotClass` CR
+
[source,yaml]
----
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: <deletion_policy_type> <1>
driver: openshift-storage.rbd.csi.ceph.com
kind: VolumeSnapshotClass
metadata:
  labels:
    velero.io/csi-volumesnapshot-class: "true" <2>
  name: ocs-storagecluster-rbdplugin-snapclass
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
----
<1> OADP supports the `Retain` and `Delete` deletion policy types for CSI and Data Mover backup and restore. For the OADP 1.2 Data Mover, set the deletion policy type to `Retain`.
<2> Must be set to `true`.
. Define the `StorageClass` CR as in the following example:
+
.Example `StorageClass` CR
+
[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ocs-storagecluster-ceph-rbd
  annotations:
    description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes'
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  imageFormat: '2'
  clusterID: openshift-storage
  imageFeatures: layering
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  pool: ocs-storagecluster-cephblockpool
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
----


@@ -1,65 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-ceph-preparing-crs-additional_{context}"]
= Defining additional custom resources for use with OADP 1.2 Data Mover
After you redefine the default `StorageClass` and CephRBD `VolumeSnapshotClass` custom resources (CRs), you must create the following CRs:
* A CephFS `StorageClass` CR defined to use the shallow copy feature
* A Restic `Secret` CR
.Procedure
. Create a CephFS `StorageClass` CR with the `backingSnapshot` parameter set to `true`, as in the following example:
+
.Example CephFS `StorageClass` CR with `backingSnapshot` set to `true`
+
[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ocs-storagecluster-cephfs-shallow
  annotations:
    description: Provides RWO and RWX Filesystem volumes
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  clusterID: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  backingSnapshot: "true" <1>
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
----
<1> Must be set to `true`.
+
[IMPORTANT]
====
Ensure that the CephFS `VolumeSnapshotClass` and `StorageClass` CRs have the same value for `provisioner`.
====
. Configure a Restic `Secret` CR as in the following example:
+
.Example Restic `Secret` CR
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  namespace: <namespace>
type: Opaque
stringData:
  RESTIC_PASSWORD: <restic_password>
----
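Because the example uses `stringData`, you supply the Restic password in plain text and the API server stores it base64-encoded under `data`. The round trip can be checked locally; the password below is a placeholder:

```shell
# stringData accepts the plaintext value; Kubernetes stores it
# base64-encoded under data. Demonstrate the round trip locally.
plain='my-restic-password'   # placeholder password
encoded=$(printf '%s' "$plain" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```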


@@ -1,16 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: CONCEPT
[id="oadp-ceph-prerequisites_{context}"]
= Prerequisites for using OADP 1.2 Data Mover with Ceph storage
The following prerequisites apply to all backup and restore operations of data using {oadp-first} 1.2 Data Mover in a cluster that uses Ceph storage:
* You have installed {product-title} 4.12 or later.
* You have installed the OADP Operator.
* You have created a secret named `cloud-credentials` in the `openshift-adp` namespace.
* You have installed {rh-storage-first}.
* You have installed the latest VolSync Operator by using Operator Lifecycle Manager.


@@ -1,67 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-ceph-split-back-up-dba_{context}"]
= Creating a DPA for use with split volumes
You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using split volumes.
.Procedure
* Create a Data Protection Application (DPA) CR as in the following example:
+
.Example DPA CR for environment with split volumes
[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-sample
  namespace: openshift-adp
spec:
  backupLocations:
  - velero:
      config:
        profile: default
        region: us-east-1
      credential:
        key: cloud
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: <my-bucket>
        prefix: velero
      provider: aws
  configuration:
    restic:
      enable: false
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
      - vsm
  features:
    dataMover:
      credentialName: <restic_secret_name> # <1>
      enable: true
      volumeOptionsForStorageClasses: # <2>
        ocs-storagecluster-cephfs:
          sourceVolumeOptions:
            accessMode: ReadOnlyMany
            cacheAccessMode: ReadWriteMany
            cacheStorageClassName: ocs-storagecluster-cephfs
            storageClassName: ocs-storagecluster-cephfs-shallow
        ocs-storagecluster-ceph-rbd:
          sourceVolumeOptions:
            storageClassName: ocs-storagecluster-ceph-rbd
            cacheStorageClassName: ocs-storagecluster-ceph-rbd
          destinationVolumeOptions:
            storageClassName: ocs-storagecluster-ceph-rbd
            cacheStorageClassName: ocs-storagecluster-ceph-rbd
----
<1> Use the Restic `Secret` that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not use your Restic `Secret`, the CR uses the default value `dm-credential` for this parameter.
<2> Optional parameter. You can define a different set of `VolumeOptionsForStorageClass` labels for each `storageClass` volume. This configuration provides a backup for volumes with different providers. The optional `VolumeOptionsForStorageClass` parameter is typically used with CephFS but can be used for any storage type.


@@ -1,16 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-cleaning-up-after-data-mover-snapshots_{context}"]
= Deleting snapshots in a bucket
OADP 1.1 Data Mover might leave one or more snapshots in a bucket after a backup. You can either delete all the snapshots or delete individual snapshots.
.Procedure
* To delete all snapshots in your bucket, delete the `/<protected_namespace>` folder that is specified in the Data Protection Application (DPA) `.spec.backupLocation.objectStorage.bucket` resource.
* To delete an individual snapshot:
. Browse to the `/<protected_namespace>` folder that is specified in the DPA `.spec.backupLocation.objectStorage.bucket` resource.
. Delete the appropriate folders that are prefixed with `/<volumesnapshotcontent_name>-pvc`, where `<volumesnapshotcontent_name>` is the name of the `VolumeSnapshotContent` CR that Data Mover creates for each PVC.
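The per-PVC folder layout can be simulated in a local directory to illustrate which folders the individual-snapshot step removes; all paths and names here are illustrative, not real bucket contents:

```shell
# Simulate the bucket layout: a protected-namespace folder containing
# per-PVC snapshot folders (suffix -pvc) plus unrelated data.
workdir=$(mktemp -d)
mkdir -p "$workdir/openshift-adp/snapcontent-1234-pvc" \
         "$workdir/openshift-adp/snapcontent-5678-pvc" \
         "$workdir/openshift-adp/other-data"
# Delete only the Data Mover per-PVC snapshot folders.
rm -rf "$workdir"/openshift-adp/*-pvc
remaining=$(ls "$workdir/openshift-adp")
rm -rf "$workdir"
echo "$remaining"
```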


@@ -1,55 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-3.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-converting-dpa-to-new-version-1-3-0_{context}"]
= Converting DPA to the new version
If you need to move backups off cluster with the Data Mover, reconfigure the `DataProtectionApplication` (DPA) manifest as follows.
.Procedure
. Click *Operators* → *Installed Operators* and select the OADP Operator.
. In the *Provided APIs* section, click *View more*.
. Click *Create instance* in the *DataProtectionApplication* box.
. Click *YAML View* to display the current DPA parameters.
+
.Example current DPA
[source,yaml]
----
spec:
  configuration:
    features:
      dataMover:
        enable: true
        credentialName: dm-credentials
    velero:
      defaultPlugins:
      - vsm
      - csi
      - openshift
# ...
----
. Update the DPA parameters:
* Remove the `features.dataMover` key and values from the DPA.
* Remove the VolumeSnapshotMover (VSM) plugin.
* Add the `nodeAgent` key and values.
+
.Example updated DPA
[source,yaml]
----
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - csi
      - openshift
# ...
----
. Wait for the DPA to reconcile successfully.
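After the DPA reconciles, a simple check is that the manifest no longer contains the `dataMover` or `vsm` entries and does enable the node agent. This sketch runs against a simulated local file; on a cluster you would inspect the output of `oc get dpa -n openshift-adp -o yaml` instead:

```shell
# Simulated converted DPA manifest (trimmed to the relevant stanza).
cat > dpa-updated.yaml <<'EOF'
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - csi
      - openshift
EOF
# Converted when nodeAgent is present and no dataMover/vsm entries remain.
if grep -q 'nodeAgent' dpa-updated.yaml && ! grep -Eq 'dataMover|vsm' dpa-updated.yaml; then
  result="converted"
fi
echo "$result"
rm -f dpa-updated.yaml
```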


@@ -1,56 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-2.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-converting-to-new-dpa-1-2-0_{context}"]
= Converting DPA to the new version
If you use the fields that were updated in the `spec.configuration.velero.args` stanza, you must configure your `DataProtectionApplication` (DPA) manifest to use the new parameter names.
.Procedure
. Click *Operators* → *Installed Operators* and select the OADP Operator.
. In the *Provided APIs* section, click *Create instance* in the *DataProtectionApplication* box.
. Click *YAML View* to display the current DPA parameters.
+
.Example current DPA
[source,yaml]
----
spec:
  configuration:
    velero:
      args:
        default-volumes-to-restic: true
        default-restic-prune-frequency: 6000
        restic-timeout: 600
# ...
----
. Update the DPA parameter names without changing their values:
.. Change the `default-volumes-to-restic` key to `default-volumes-to-fs-backup`.
.. Change the `default-restic-prune-frequency` key to `default-repo-maintain-frequency`.
.. Change the `restic-timeout` key to `fs-backup-timeout`.
+
.Example updated DPA
[source,yaml]
----
spec:
  configuration:
    velero:
      args:
        default-volumes-to-fs-backup: true
        default-repo-maintain-frequency: 6000
        fs-backup-timeout: 600
# ...
----
. Wait for the DPA to reconcile successfully.
[NOTE]
====
The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours.
====
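The three key renames can be illustrated with `sed` on a local copy of the `args` stanza. On a cluster you edit the DPA in the web console or with `oc edit`, so this is only a sketch of the mapping:

```shell
# Local copy of the old-style args stanza (keys only, flattened for clarity).
cat > dpa-args.yaml <<'EOF'
default-volumes-to-restic: true
default-restic-prune-frequency: 6000
restic-timeout: 600
EOF
# Apply the three renames without changing any values.
updated=$(sed -e 's/^default-volumes-to-restic:/default-volumes-to-fs-backup:/' \
              -e 's/^default-restic-prune-frequency:/default-repo-maintain-frequency:/' \
              -e 's/^restic-timeout:/fs-backup-timeout:/' dpa-args.yaml)
printf '%s\n' "$updated"
rm -f dpa-args.yaml
```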


@@ -1,78 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-deleting-cluster-resources-following-failure_{context}"]
= Deleting cluster resources following a partially successful or a failed backup and restore that used Data Mover
If your backup and restore operation that uses Data Mover fails or only partially succeeds, you must clean up any `VolumeSnapshotBackup` (VSB) or `VolumeSnapshotRestore` (VSR) custom resources (CRs) that remain in the application namespace, and clean up any extra resources that these controllers created.
.Procedure
. Clean up cluster resources that remain after a backup operation where you used Data Mover by entering the following commands:
.. Delete the VSB CRs in the application namespace, which is the namespace that contains the application PVCs to back up and restore:
+
[source,terminal]
----
$ oc delete vsb -n <app_namespace> --all
----
.. Delete `VolumeSnapshot` CRs:
+
[source,terminal]
----
$ oc delete volumesnapshot -A --all
----
.. Delete `VolumeSnapshotContent` CRs:
+
[source,terminal]
----
$ oc delete volumesnapshotcontent --all
----
.. Delete any PVCs in the protected namespace, which is the namespace where the Operator is installed:
+
[source,terminal]
----
$ oc delete pvc -n <protected_namespace> --all
----
.. Delete any `ReplicationSource` resources in the protected namespace:
+
[source,terminal]
----
$ oc delete replicationsource -n <protected_namespace> --all
----
. Clean up cluster resources that remain after a restore operation using Data Mover by entering the following commands:
.. Delete the VSR CRs:
+
[source,terminal]
----
$ oc delete vsr -n <app_namespace> --all
----
.. Delete `VolumeSnapshot` CRs:
+
[source,terminal]
----
$ oc delete volumesnapshot -A --all
----
.. Delete `VolumeSnapshotContent` CRs:
+
[source,terminal]
----
$ oc delete volumesnapshotcontent --all
----
.. Delete any `ReplicationDestination` resources in the protected namespace:
+
[source,terminal]
----
$ oc delete replicationdestination -n <protected_namespace> --all
----
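The backup-side cleanup steps above can be bundled into one helper for repeated use. In this sketch, `run` is a hypothetical wrapper that echoes each command instead of executing it, so the sequence can be previewed; replace `run` with direct execution on a live cluster:

```shell
# Hypothetical preview wrapper: print the command rather than run it.
run() { echo "+ $*"; }

# Bundle the post-backup cleanup steps in the order documented above.
cleanup_after_failed_backup() {
  app_ns=$1
  protected_ns=$2
  run oc delete vsb -n "$app_ns" --all
  run oc delete volumesnapshot -A --all
  run oc delete volumesnapshotcontent --all
  run oc delete pvc -n "$protected_ns" --all
  run oc delete replicationsource -n "$protected_ns" --all
}

cleanup_after_failed_backup my-app openshift-adp
```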


@@ -1,32 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-deleting-cluster-resources-following-success_{context}"]
= Deleting cluster resources following a successful backup and restore that used Data Mover
You can delete any `VolumeSnapshotBackup` or `VolumeSnapshotRestore` CRs that remain in your application namespace after a successful backup and restore where you used Data Mover.
.Procedure
. Delete cluster resources that remain in the application namespace, which is the namespace that contains the application PVCs to back up and restore, after a backup where you used Data Mover:
+
[source,terminal]
----
$ oc delete vsb -n <app_namespace> --all
----
. Delete cluster resources that remain after a restore where you use Data Mover:
+
[source,terminal]
----
$ oc delete vsr -n <app_namespace> --all
----
. If needed, delete any `VolumeSnapshotContent` resources that remain after a backup and restore where you use Data Mover:
+
[source,terminal]
----
$ oc delete volumesnapshotcontent --all
----

@@ -1,23 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
:_mod-docs-content-type: CONCEPT
[id="oadp-deletion-policy-1-2_{context}"]
= Deletion policy for OADP 1.2
The deletion policy determines rules for removing data from a system, specifying when and how deletion occurs based on factors such as retention periods, data sensitivity, and compliance requirements. It manages data removal effectively while meeting regulations and preserving valuable information.
[id="oadp-deletion-policy-guidelines-1-2_{context}"]
== Deletion policy guidelines for OADP 1.2
Review the following deletion policy guidelines for OADP 1.2:
* To use OADP 1.2.x Data Mover to back up and restore data, set the `deletionPolicy` field to `Retain` in the `VolumeSnapshotClass` custom resource (CR).
* In OADP 1.2.x, to use CSI backup and restore, you can set the `deletionPolicy` field to either `Retain` or `Delete` in the `VolumeSnapshotClass` CR.
[IMPORTANT]
====
OADP 1.2.x Data Mover backup and restore is a Technology Preview feature and is not supported without a support exception.
====
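As a sketch of the first guideline, you can set the `deletionPolicy` field on an existing `VolumeSnapshotClass` CR with a merge patch. `<volume_snapshot_class_name>` is a placeholder for your class name, for example the default class that {rh-storage-first} creates:

[source,terminal]
----
$ oc patch volumesnapshotclass <volume_snapshot_class_name> \
  --type merge -p '{"deletionPolicy": "Retain"}'
----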

@@ -1,70 +0,0 @@
// Module included in the following assemblies:
// oadp-features-plugins-known-issues
// * backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc
// * backup_and_restore/application_backup_and_restore/troubleshooting.adoc
:_mod-docs-content-type: CONCEPT
[id="oadp-features-plugins-known-issues_{context}"]
= OADP plugins known issues
The following section describes known issues in {oadp-first} plugins:
[id="velero-plugin-panic_{context}"]
== Velero plugin panics during imagestream backups due to a missing secret
When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the DPA reconciliation, which is performed by the OADP controller, does not create the relevant `oadp-<BSL Name>-<BSL Provider>-registry-secret`.
When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with a panic error:
[source,terminal]
----
2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
----
[id="velero-plugin-panic-workaround_{context}"]
=== Workaround to avoid the panic error
To avoid the Velero plugin panic error, perform the following steps:
. Label the custom BSL with the relevant label:
+
[source,terminal]
----
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
----
. After the BSL is labeled, wait until the DPA reconciles.
+
[NOTE]
====
You can force the reconciliation by making any minor change to the DPA itself.
====
. When the DPA reconciles, confirm that the relevant `oadp-<BSL Name>-<BSL Provider>-registry-secret` has been created and that the correct registry data has been populated into it:
+
[source,terminal]
----
$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
----
[id="openshift-adp-controller-manager-seg-fault_{context}"]
== OpenShift ADP Controller segmentation fault
If you configure a DPA with both `cloudstorage` and `restic` enabled, the `openshift-adp-controller-manager` pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.
You can have either `velero` or `cloudstorage` defined, because they are mutually exclusive fields.
* If you have both `velero` and `cloudstorage` defined, the `openshift-adp-controller-manager` fails.
* If you have neither `velero` nor `cloudstorage` defined, the `openshift-adp-controller-manager` fails.
For more information about this issue, see link:https://issues.redhat.com/browse/OADP-1054[OADP-1054].
[id="openshift-adp-controller-manager-seg-fault-workaround_{context}"]
=== OpenShift ADP Controller segmentation fault workaround
You must define either `velero` or `cloudstorage` when you configure a DPA. If you define both APIs in your DPA, the `openshift-adp-controller-manager` pod fails with a crash loop segmentation fault.
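The following is a minimal sketch of a DPA that defines its backup location through the `cloudstorage` (bucket) API only, with no conflicting `velero` definition for the same location. The names `dpa-sample`, `my-bucket-cr`, and `cloud-credentials` are assumed examples, not values from this known issue:

[source,terminal]
----
# Sketch only: write a cloudstorage-based DPA manifest with assumed example names.
cat << 'EOF' > /tmp/dpa-cloudstorage.yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  backupLocations:
  - bucket:
      cloudStorageRef:
        name: my-bucket-cr
      credential:
        key: cloud
        name: cloud-credentials
      prefix: velero
    default: true
EOF
# Apply it to a cluster with:
# oc apply -f /tmp/dpa-cloudstorage.yaml
----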

@@ -1,381 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.adoc
// * backup_and_restore/application_backup_and_restore/installing/installing-oadp-azure.adoc
// * backup_and_restore/application_backup_and_restore/installing/installing-oadp-gcp.adoc
// * backup_and_restore/application_backup_and_restore/installing/installing-oadp-mcg.adoc
// * backup_and_restore/application_backup_and_restore/installing/installing-oadp-ocs.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-installing-dpa-1-2-and-earlier_{context}"]
= Installing the Data Protection Application 1.2 and earlier
You install the Data Protection Application (DPA) by creating an instance of the `DataProtectionApplication` API.
.Prerequisites
* You must install the OADP Operator.
* You must configure object storage as a backup location.
* If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
* If the backup and snapshot locations use the same credentials, you must create a `Secret` with the default name, `{credentials}`.
ifdef::installing-oadp-azure,installing-oadp-gcp,installing-oadp-mcg,installing-oadp-ocs[]
* If the backup and snapshot locations use different credentials, you must create two `Secrets`:
** `Secret` with a custom name for the backup location. You add this `Secret` to the `DataProtectionApplication` CR.
** `Secret` with another custom name for the snapshot location. You add this `Secret` to the `DataProtectionApplication` CR.
endif::[]
ifdef::installing-oadp-aws[]
* If the backup and snapshot locations use different credentials, you must create a `Secret` with the default name, `{credentials}`, which contains separate profiles for the backup and snapshot location credentials.
endif::[]
+
[NOTE]
====
If you do not want to specify backup or snapshot locations during the installation, you can create a default `Secret` with an empty `credentials-velero` file. If there is no default `Secret`, the installation will fail.
====
+
[NOTE]
====
Velero creates a secret named `velero-repo-credentials` in the OADP namespace, which contains a default backup repository password.
You can update the secret with your own password encoded as base64 *before* you run your first backup targeted to the backup repository. The value of the key to update is `Data[repository-password]`.
After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is `velero-repo-credentials`, which contains either the default password or the one you replaced it with.
If you update the secret password *after* the first backup, the new password will not match the password in `velero-repo-credentials`, and therefore, Velero will not be able to connect with the older backups.
====
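The password replacement described in the note above can be sketched as follows. The password value is an assumed example; the secret name `velero-repo-credentials` and the `repository-password` key are taken from the note. Run the patch before your first backup to the repository:

[source,terminal]
----
# Sketch: encode a replacement backup repository password as base64.
NEW_PASSWORD='my-repo-password'  # assumed example value
ENCODED=$(printf '%s' "$NEW_PASSWORD" | base64)
echo "$ENCODED"
# Apply it to the secret before the first backup (requires a cluster):
# oc -n openshift-adp patch secret velero-repo-credentials --type merge \
#   -p "{\"data\":{\"repository-password\":\"$ENCODED\"}}"
----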
.Procedure
. Click *Operators* -> *Installed Operators* and select the OADP Operator.
. Under *Provided APIs*, click *Create instance* in the *DataProtectionApplication* box.
. Click *YAML View* and update the parameters of the `DataProtectionApplication` manifest:
ifdef::installing-oadp-aws[]
+
[source,yaml,subs="attributes+"]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp
spec:
configuration:
velero:
defaultPlugins:
- openshift # <1>
- aws
resourceTimeout: 10m # <2>
restic:
enable: true # <3>
podConfig:
nodeSelector: <node_selector> # <4>
backupLocations:
- name: default
velero:
provider: {provider}
default: true
objectStorage:
bucket: <bucket_name> # <5>
prefix: <prefix> # <6>
config:
region: <region>
profile: "default"
s3ForcePathStyle: "true" # <7>
s3Url: <s3_url> # <8>
credential:
key: cloud
name: {credentials} # <9>
snapshotLocations: # <10>
- velero:
provider: {provider}
config:
region: <region> # <11>
profile: "default"
credential:
key: cloud
name: {credentials} # <12>
----
<1> The `openshift` plugin is mandatory.
<2> Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
<3> Set this value to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding `spec.defaultVolumesToFsBackup: true` to the `Backup` CR. In OADP version 1.1, add `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<4> Specify on which nodes Restic is available. By default, Restic runs on all nodes.
<5> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
<6> Specify a prefix for Velero backups, for example, `velero`, if the bucket is used for multiple purposes.
<7> Specify whether to force path style URLs for S3 objects (Boolean). Not required for AWS S3. Required only for S3 compatible storage.
<8> Specify the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3 compatible storage.
<9> Specify the name of the `Secret` object that you created. If you do not specify this value, the default name, `{credentials}`, is used. If you specify a custom name, the custom name is used for the backup location.
<10> Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs.
<11> The snapshot location must be in the same region as the PVs.
<12> Specify the name of the `Secret` object that you created. If you do not specify this value, the default name, `{credentials}`, is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the `credentials-velero` file.
endif::[]
ifdef::installing-oadp-azure[]
+
[source,yaml,subs="attributes+"]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp
spec:
configuration:
velero:
defaultPlugins:
- azure
- openshift # <1>
resourceTimeout: 10m # <2>
restic:
enable: true # <3>
podConfig:
nodeSelector: <node_selector> # <4>
backupLocations:
- velero:
config:
resourceGroup: <azure_resource_group> # <5>
storageAccount: <azure_storage_account_id> # <6>
subscriptionId: <azure_subscription_id> # <7>
storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY
credential:
key: cloud
name: {credentials} # <8>
provider: {provider}
default: true
objectStorage:
bucket: <bucket_name> # <9>
prefix: <prefix> # <10>
snapshotLocations: # <11>
- velero:
config:
resourceGroup: <azure_resource_group>
subscriptionId: <azure_subscription_id>
incremental: "true"
name: default
provider: {provider}
credential:
key: cloud
name: {credentials} # <12>
----
<1> The `openshift` plugin is mandatory.
<2> Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
<3> Set this value to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding `spec.defaultVolumesToFsBackup: true` to the `Backup` CR. In OADP version 1.1, add `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<4> Specify on which nodes Restic is available. By default, Restic runs on all nodes.
<5> Specify the Azure resource group.
<6> Specify the Azure storage account ID.
<7> Specify the Azure subscription ID.
<8> If you do not specify this value, the default name, `{credentials}`, is used. If you specify a custom name, the custom name is used for the backup location.
<9> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
<10> Specify a prefix for Velero backups, for example, `velero`, if the bucket is used for multiple purposes.
<11> You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
<12> Specify the name of the `Secret` object that you created. If you do not specify this value, the default name, `{credentials}`, is used. If you specify a custom name, the custom name is used for the backup location.
endif::[]
ifdef::installing-oadp-gcp[]
+
[source,yaml,subs="attributes+"]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp
spec:
configuration:
velero:
defaultPlugins:
- gcp
- openshift # <1>
resourceTimeout: 10m # <2>
restic:
enable: true # <3>
podConfig:
nodeSelector: <node_selector> # <4>
backupLocations:
- velero:
provider: {provider}
default: true
credential:
key: cloud # <5>
name: {credentials} # <6>
objectStorage:
bucket: <bucket_name> # <7>
prefix: <prefix> # <8>
snapshotLocations: # <9>
- velero:
provider: {provider}
default: true
config:
project: <project>
snapshotLocation: us-west1 # <10>
credential:
key: cloud
name: {credentials} # <11>
----
<1> The `openshift` plugin is mandatory.
<2> Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
<3> Set this value to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding `spec.defaultVolumesToFsBackup: true` to the `Backup` CR. In OADP version 1.1, add `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<4> Specify on which nodes Restic is available. By default, Restic runs on all nodes.
<5> Secret key that contains credentials. For Google workload identity federation cloud authentication use `service_account.json`.
<6> Secret name that contains credentials. If you do not specify this value, the default name, `{credentials}`, is used.
<7> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
<8> Specify a prefix for Velero backups, for example, `velero`, if the bucket is used for multiple purposes.
<9> Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs.
<10> The snapshot location must be in the same region as the PVs.
<11> Specify the name of the `Secret` object that you created. If you do not specify this value, the default name, `{credentials}`, is used. If you specify a custom name, the custom name is used for the snapshot location.
endif::[]
ifdef::installing-oadp-mcg[]
+
[source,yaml,subs="attributes+"]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp
spec:
configuration:
velero:
defaultPlugins:
- aws # <1>
- openshift # <2>
resourceTimeout: 10m # <3>
restic:
enable: true # <4>
podConfig:
nodeSelector: <node_selector> # <5>
backupLocations:
- velero:
config:
profile: "default"
region: <region_name> # <6>
s3Url: <url> # <7>
insecureSkipTLSVerify: "true"
s3ForcePathStyle: "true"
provider: {provider}
default: true
credential:
key: cloud
name: {credentials} # <8>
objectStorage:
bucket: <bucket_name> # <9>
prefix: <prefix> # <10>
----
<1> An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is `aws`. For {azure-short} and {gcp-short} object stores, the `azure` or `gcp` plugin is required.
<2> The `openshift` plugin is mandatory.
<3> Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
<4> Set this value to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding `spec.defaultVolumesToFsBackup: true` to the `Backup` CR. In OADP version 1.1, add `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<5> Specify on which nodes Restic is available. By default, Restic runs on all nodes.
<6> Specify the region, following the naming convention of the documentation of your object storage server.
<7> Specify the URL of the S3 endpoint.
<8> If you do not specify this value, the default name, `{credentials}`, is used. If you specify a custom name, the custom name is used for the backup location.
<9> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
<10> Specify a prefix for Velero backups, for example, `velero`, if the bucket is used for multiple purposes.
endif::[]
ifdef::installing-oadp-ocs[]
+
[source,yaml,subs="attributes+"]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp
spec:
configuration:
velero:
defaultPlugins:
- aws # <1>
- kubevirt # <2>
- csi # <3>
- openshift # <4>
resourceTimeout: 10m # <5>
restic:
enable: true # <6>
podConfig:
nodeSelector: <node_selector> # <7>
backupLocations:
- velero:
provider: {provider} # <8>
default: true
credential:
key: cloud
name: <default_secret> # <9>
objectStorage:
bucket: <bucket_name> # <10>
prefix: <prefix> # <11>
----
<1> An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is `aws`. For {azure-short} and {gcp-short} object stores, the `azure` or `gcp` plugin is required.
<2> Optional: The `kubevirt` plugin is used with {VirtProductName}.
<3> Specify the `csi` default plugin if you use CSI snapshots to back up PVs. The `csi` plugin uses the link:https://{velero-domain}/docs/main/csi/[Velero CSI beta snapshot APIs]. You do not need to configure a snapshot location.
<4> The `openshift` plugin is mandatory.
<5> Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
<6> Set this value to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding `spec.defaultVolumesToFsBackup: true` to the `Backup` CR. In OADP version 1.1, add `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<7> Specify on which nodes Restic is available. By default, Restic runs on all nodes.
<8> Specify the backup provider.
<9> Specify the correct default name for the `Secret`, for example, `cloud-credentials-gcp`, if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a `Secret` name, the default name is used.
<10> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
<11> Specify a prefix for Velero backups, for example, `velero`, if the bucket is used for multiple purposes.
endif::[]
. Click *Create*.
[id="verifying-oadp-installation-1-2_{context}"]
.Verification
. Verify the installation by viewing the {oadp-first} resources by running the following command:
+
[source,terminal]
----
$ oc get all -n openshift-adp
----
+
.Example output
+
----
NAME READY STATUS RESTARTS AGE
pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s
pod/restic-9cq4q 1/1 Running 0 94s
pod/restic-m4lts 1/1 Running 0 94s
pod/restic-pv4kr 1/1 Running 0 95s
pod/velero-588db7f655-n842v 1/1 Running 0 95s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/restic 3 3 3 3 3 <none> 96s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s
deployment.apps/velero 1/1 1 1 96s
NAME DESIRED CURRENT READY AGE
replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s
replicaset.apps/velero-588db7f655 1 1 1 96s
----
. Verify that the `DataProtectionApplication` (DPA) is reconciled by running the following command:
+
[source,terminal]
----
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
----
+
.Example output
[source,yaml]
----
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
----
. Verify the `type` is set to `Reconciled`.
. Verify the backup storage location and confirm that the `PHASE` is `Available` by running the following command:
+
[source,terminal]
----
$ oc get backupstoragelocations.velero.io -n openshift-adp
----
+
.Example output
[source,terminal]
----
NAME PHASE LAST VALIDATED AGE DEFAULT
dpa-sample-1 Available 1s 3d16h true
----

@@ -1,165 +0,0 @@
// Module included in the following assemblies:
//
// * rosa_backing_up_and_restoring_applications/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-installing-oadp-rosa-sts_{context}"]
= Installing the OADP Operator and providing the IAM role
AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. {product-title} (ROSA) with STS is the recommended credential mode for ROSA clusters. This document describes how to install OpenShift API for Data Protection (OADP) on ROSA with AWS STS.
[IMPORTANT]
====
Restic and Kopia are not supported in the OADP on ROSA with AWS STS environment. Make sure that the Restic/Kopia node agent is disabled.
For backing up volumes, OADP on ROSA with AWS STS supports only native snapshots and CSI snapshots. See _Known Issues_ for more information.
====
[IMPORTANT]
====
In a ROSA cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported.
The Data Mover feature is not currently supported in ROSA clusters. You can use native AWS S3 tools for moving data.
====
.Prerequisites
* A {openshift-rosa} cluster with the required access and tokens. For instructions, see the procedure in "Preparing AWS credentials". If you plan to use two different clusters for backing up and restoring, you need to prepare AWS credentials, including `ROLE_ARN`, for each cluster.
.Procedure
. Create an OpenShift secret from your AWS token file by entering the following commands.
.. Create the credentials file:
+
[source,terminal]
----
$ cat <<EOF > ${SCRATCH}/credentials
[default]
role_arn = ${ROLE_ARN}
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF
----
.. Create a namespace for OADP:
+
[source,terminal]
----
$ oc create namespace openshift-adp
----
.. Create the OpenShift secret:
+
[source,terminal]
----
$ oc -n openshift-adp create secret generic cloud-credentials \
--from-file=${SCRATCH}/credentials
----
+
[NOTE]
====
In {product-title} versions 4.15 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM)
and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the preceding
secret; you only need to supply the role ARN during link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/operators/user-tasks#olm-installing-from-operatorhub-using-web-console_olm-installing-operators-in-namespace[the installation of OLM-managed operators via the {product-title} web console].
The secret is then created automatically by the CCO.
====
. Install the OADP Operator.
.. In the {product-title} web console, navigate to Operators *->* OperatorHub.
.. Search for the OADP Operator, then click *Install*.
. Create AWS cloud storage using your AWS credentials:
+
[source,terminal]
----
$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: CloudStorage
metadata:
name: ${CLUSTER_NAME}-oadp
namespace: openshift-adp
spec:
creationSecret:
key: credentials
name: cloud-credentials
enableSharedConfig: true
name: ${CLUSTER_NAME}-oadp
provider: aws
region: $REGION
EOF
----
. Create the `DataProtectionApplication` resource, which is used to configure the connection to the storage where the backups and volume snapshots are stored:
+
[source,terminal]
----
$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: ${CLUSTER_NAME}-dpa
namespace: openshift-adp
spec:
backupLocations:
- bucket:
cloudStorageRef:
name: ${CLUSTER_NAME}-oadp
credential:
key: credentials
name: cloud-credentials
prefix: velero
default: true
config:
region: ${REGION}
configuration:
velero:
defaultPlugins:
- openshift
- aws
nodeAgent: <1>
enable: false
uploaderType: restic
snapshotLocations:
- velero:
config:
credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials <2>
enableSharedConfig: "true" <3>
profile: default <4>
region: ${REGION} <5>
provider: aws
EOF
----
<1> See the first note below.
<2> The `credentialsFile` field is the mounted location of the bucket credential on the pod.
<3> The `enableSharedConfig` field allows the `snapshotLocations` to share or reuse the credential defined for the bucket.
<4> Use the profile name set in the AWS credentials file.
<5> Specify `region` as your AWS region. This must be the same as the cluster region.
+
You are now ready to back up and restore OpenShift applications, as described in the link:https://docs.openshift.com/container-platform/4.11/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.html[OADP documentation].
[NOTE]
====
The `enable` parameter of `restic` is set to `false` in this configuration because OADP does not support Restic in ROSA environments.
If you are using OADP 1.2, replace this configuration:
[source,terminal]
----
nodeAgent:
enable: false
uploaderType: restic
----
with the following:
[source,terminal]
----
restic:
enable: false
----
====
[NOTE]
====
If you want to use two different clusters for backing up and restoring, the two clusters must have identical AWS S3 storage names in both the `CloudStorage` CR and the OADP `DataProtectionApplication` configuration.
====

@@ -1,171 +0,0 @@
// Module included in the following assemblies:
//
// * rosa_backing_up_and_restoring_applications/backing-up-applications.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-preparing-aws-credentials_{context}"]
= Preparing AWS credentials
An AWS account must be ready to accept an OADP installation.
.Procedure
. Create the following environment variables by running the following commands:
+
[NOTE]
====
Change the cluster name to match your ROSA cluster, and ensure that you are logged in to the cluster as an administrator. Ensure that all fields are output correctly before continuing.
====
+
[source,terminal]
----
$ export CLUSTER_NAME=my-cluster <1>
export ROSA_CLUSTER_ID=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .id)
export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export CLUSTER_VERSION=$(rosa describe cluster -c ${CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.')
export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
mkdir -p ${SCRATCH}
echo "Cluster ID: ${ROSA_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
----
+
<1> Replace `my-cluster` with your ROSA cluster name.
. On the AWS account, create an IAM policy to allow access to S3.
.. Check to see if the policy exists by running the following command:
+
[source,terminal]
----
$ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text) <1>
----
+
<1> Replace `RosaOadp` with your policy name.
.. Use the following command to create the policy JSON file and then create the policy in ROSA.
+
[NOTE]
====
If the policy ARN is not found, the command will create the policy. If the policy ARN already exists, the `if` statement will intentionally skip the policy creation.
====
+
[source,terminal]
----
$ if [[ -z "${POLICY_ARN}" ]]; then
cat << EOF > ${SCRATCH}/policy.json <1>
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:PutBucketTagging",
"s3:GetBucketTagging",
"s3:PutEncryptionConfiguration",
"s3:GetEncryptionConfiguration",
"s3:PutLifecycleConfiguration",
"s3:GetLifecycleConfiguration",
"s3:GetBucketLocation",
"s3:ListBucket",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucketMultipartUploads",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts",
"ec2:DescribeSnapshots",
"ec2:DescribeVolumes",
"ec2:DescribeVolumeAttribute",
"ec2:DescribeVolumesModifications",
"ec2:DescribeVolumeStatus",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:CreateSnapshot",
"ec2:DeleteSnapshot"
],
"Resource": "*"
}
]}
EOF
POLICY_ARN=$(aws iam create-policy --policy-name "RosaOadpVer1" \
--policy-document file:///${SCRATCH}/policy.json --query Policy.Arn \
--tags Key=rosa_openshift_version,Value=${CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp \
--output text)
fi
----
+
<1> `SCRATCH` is a name for a temporary directory created for the environment variables.
.. View the policy ARN by running the following command:
+
[source,terminal]
----
$ echo ${POLICY_ARN}
----
. Create an IAM role trust policy for the cluster:
.. Create the trust policy file by running the following command:
+
[source,terminal]
----
$ cat <<EOF > ${SCRATCH}/trust-policy.json
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${OIDC_ENDPOINT}:sub": [
"system:serviceaccount:openshift-adp:openshift-adp-controller-manager",
"system:serviceaccount:openshift-adp:velero"]
}
}
}]
}
EOF
----
.. Create the role by running the following command:
+
[source,terminal]
----
$ ROLE_ARN=$(aws iam create-role --role-name \
  "${ROLE_NAME}" \
  --assume-role-policy-document file://${SCRATCH}/trust-policy.json \
  --tags Key=rosa_cluster_id,Value=${ROSA_CLUSTER_ID} \
  Key=rosa_openshift_version,Value=${CLUSTER_VERSION} \
  Key=rosa_role_prefix,Value=ManagedOpenShift \
  Key=operator_namespace,Value=openshift-adp \
  Key=operator_name,Value=openshift-oadp \
  --query Role.Arn --output text)
----
.. View the role ARN by running the following command:
+
[source,terminal]
----
$ echo ${ROLE_ARN}
----
. Attach the IAM policy to the IAM role by running the following command:
+
[source,terminal]
----
$ aws iam attach-role-policy --role-name "${ROLE_NAME}" \
--policy-arn ${POLICY_ARN}
----
.Next steps
* Continue to _Installing the OADP Operator and providing the IAM role_.
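When you install the Operator, the IAM role is typically supplied through an STS-style AWS credentials file stored in a secret. The following is a sketch only, assuming the `ROLE_ARN` value created in the previous steps; the exact file name and secret are defined in the installation procedure:

```ini
# Sketch of an STS-style credentials profile; ${ROLE_ARN} is the role ARN
# created earlier in this procedure.
[default]
role_arn = ${ROLE_ARN}
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
```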
@@ -1,28 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-3.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-upgrade-from-oadp-data-mover-1-2-0_{context}"]
= Upgrading from OADP 1.2 Technology Preview Data Mover
{oadp-first} 1.2 Data Mover backups *cannot* be restored with OADP 1.3. To prevent a gap in the data protection of your applications, complete the following steps before upgrading to OADP 1.3:
.Procedure
. If your cluster backups are sufficient and Container Storage Interface (CSI) storage is available,
back up the applications with a CSI backup.
. If you require off-cluster backups:
.. Back up the applications with a file system backup that uses the `--default-volumes-to-fs-backup=true` option or the `backup.spec.defaultVolumesToFsBackup` field.
.. Back up the applications with your object storage plugins, for example, `velero-plugin-for-aws`.
[NOTE]
====
The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours.
====
[IMPORTANT]
====
To restore an OADP 1.2 Data Mover backup, you must uninstall OADP 1.3, and then install and configure OADP 1.2.
====
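The file system backup described above can also be expressed declaratively. The following `Backup` custom resource is a sketch, assuming a hypothetical backup name and an application namespace `<namespace>`; the `defaultVolumesToFsBackup` field corresponds to the `--default-volumes-to-fs-backup=true` CLI option:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: pre-upgrade-fs-backup   # hypothetical name
  namespace: openshift-adp
spec:
  defaultVolumesToFsBackup: true
  includedNamespaces:
  - <namespace>
```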
@@ -1,17 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-2.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-upgrading-dpa-operator-1-2-0_{context}"]
= Upgrading the OADP Operator
Use the following sequence when upgrading the {oadp-first} Operator.
.Procedure
. Change your subscription channel for the OADP Operator from `stable-1.1` to `stable-1.2`.
. Allow time for the Operator and containers to update and restart.
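The channel change in step 1 can be made in the web console or from the CLI. The following command is a sketch that assumes the default `Subscription` name `redhat-oadp-operator` in the `openshift-adp` namespace; verify the name first with `oc get subscription -n openshift-adp`:

```shell
oc patch subscription redhat-oadp-operator -n openshift-adp \
  --type merge -p '{"spec":{"channel":"stable-1.2"}}'
```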
@@ -1,17 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-3.adoc
:_mod-docs-content-type: PROCEDURE
[id="oadp-upgrading-dpa-operator-1-3-0_{context}"]
= Upgrading the OADP Operator
Use the following sequence when upgrading the {oadp-first} Operator.
.Procedure
. Change your subscription channel for the OADP Operator from `stable-1.2` to `stable-1.3`.
. Allow time for the Operator and containers to update and restart.
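The channel change in step 1 can be made in the web console or from the CLI. The following command is a sketch that assumes the default `Subscription` name `redhat-oadp-operator` in the `openshift-adp` namespace; verify the name first with `oc get subscription -n openshift-adp`:

```shell
oc patch subscription redhat-oadp-operator -n openshift-adp \
  --type merge -p '{"spec":{"channel":"stable-1.3"}}'
```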
@@ -1,98 +0,0 @@
// Module included in the following assemblies:
//
// * backup_and_restore/oadp-release-notes-1-3.adoc
:_mod-docs-content-type: PROCEDURE
[id="verifying-upgrade-1-3-0_{context}"]
= Verifying the upgrade
Use the following procedure to verify the upgrade.
.Procedure
. Verify the installation by viewing the {oadp-first} resources by running the following command:
+
[source,terminal]
----
$ oc get all -n openshift-adp
----
+
.Example output
[source,terminal]
----
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/oadp-operator-controller-manager-67d9494d47-6l8z8   2/2     Running   0          2m8s
pod/node-agent-9cq4q                                    1/1     Running   0          94s
pod/node-agent-m4lts                                    1/1     Running   0          94s
pod/node-agent-pv4kr                                    1/1     Running   0          95s
pod/velero-588db7f655-n842v                             1/1     Running   0          95s

NAME                                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140   <none>        8443/TCP   2m8s
service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.10.0     <none>        8085/TCP   8h

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-agent   3         3         3       3            3           <none>          96s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oadp-operator-controller-manager   1/1     1            1           2m9s
deployment.apps/velero                             1/1     1            1           96s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/oadp-operator-controller-manager-67d9494d47   1         1         1       2m9s
replicaset.apps/velero-588db7f655                             1         1         1       96s
----
. Verify that the `DataProtectionApplication` (DPA) is reconciled by running the following command:
+
[source,terminal]
----
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
----
+
.Example output
[source,terminal]
----
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
----
. Verify that the `type` is set to `Reconciled`.
. Verify the backup storage location and confirm that the `PHASE` is `Available` by running the following command:
+
[source,terminal]
----
$ oc get backupstoragelocations.velero.io -n openshift-adp
----
+
.Example output
[source,terminal]
----
NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
----
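The `Reconciled` condition from the steps above can also be checked from a script. The following is a minimal sketch that parses the status JSON locally; the `STATUS` value is hard-coded for illustration, whereas on a live cluster it would come from `oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'`:

```shell
# Hypothetical script-level check of a DPA status document.
# STATUS is hard-coded here; on a cluster it would be the jsonpath output.
STATUS='{"conditions":[{"message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}'
echo "$STATUS" | python3 -c '
import json, sys
conds = json.load(sys.stdin).get("conditions", [])
ok = any(c.get("type") == "Reconciled" and c.get("status") == "True" for c in conds)
print("reconciled" if ok else "not reconciled")
'
# prints "reconciled"
```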
In OADP 1.3, you can enable data movement off cluster for each backup, instead of creating a `DataProtectionApplication` (DPA) configuration.
.Example
[source,terminal]
----
$ velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true
----
.Example
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup
  namespace: openshift-adp
spec:
  snapshotMoveData: true
  includedNamespaces:
  - mysql-persistent
  storageLocation: dpa-sample-1
  ttl: 720h0m0s
# ...
----
@@ -1,18 +0,0 @@
// Module included in the following assemblies:
// * backup_and_restore/application_backup_and_restore/troubleshooting.adoc
:_mod-docs-content-type: PROCEDURE
[id="support-insecure-tls-connections_{context}"]
= Using must-gather with insecure TLS connections
If a custom CA certificate is used, the `must-gather` pod fails to collect the output of the `velero logs` and `velero describe` commands. To use the `must-gather` tool with insecure TLS connections, you can pass the `gather_without_tls` flag to the `must-gather` command.
.Procedure
* Pass the `gather_without_tls` flag, with value set to `true`, to the `must-gather` tool by using the following command:
+
[source,terminal,subs="attributes+"]
----
$ oc adm must-gather --image={must-gather-v1-4} -- /usr/bin/gather_without_tls <true/false>
----
By default, the flag value is set to `false`. Set the value to `true` to allow insecure TLS connections.