// Module included in the following assemblies:
//
// * migrating_from_ocp_3_to_4/migrating-applications-3-4.adoc
// * migration_toolkit_for_containers/migrating-applications-with-mtc.adoc

:_mod-docs-content-type: PROCEDURE
[id="migration-creating-migration-plan-cam_{context}"]
= Creating a migration plan in the {mtc-short} web console

You can create a migration plan in the {mtc-first} web console.

.Prerequisites

* You must be logged in as a user with `cluster-admin` privileges on all clusters.
* You must ensure that the same {mtc-short} version is installed on all clusters.
* You must add the clusters and the replication repository to the {mtc-short} web console.
* If you want to use the _move_ data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume.
* If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the {mtc-short} web console or by updating the `MigCluster` custom resource manifest.
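+
For example, the route can be set in the `MigCluster` manifest of the source cluster. The following is a minimal sketch with placeholder values:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigCluster
metadata:
  name: <source_cluster>
  namespace: openshift-migration
spec:
  isHostCluster: false
  url: <source_cluster_api_url>
  serviceAccountSecretRef:
    name: <sa_token_secret>
    namespace: openshift-config
  exposedRegistryPath: <exposed_registry_route> # host of the exposed image registry route
----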

.Procedure

. In the {mtc-short} web console, click *Migration plans*.
. Click *Add migration plan*.
. Enter the *Plan name*.
+
The migration plan name must not exceed 253 characters, must contain only lower-case alphanumeric characters (`a-z`, `0-9`), and must not contain spaces or underscores (`_`).

. Select a *Source cluster*, a *Target cluster*, and a *Repository*.
. Click *Next*.
. Select the projects for migration.
. Optional: Click the edit icon beside a project to change the target namespace.
+
[WARNING]
====
{mtc-full} 1.8.6 and later versions do not support multiple migration plans for a single namespace.
====
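+
These selections are stored in the `MigPlan` custom resource that the web console creates for the plan. The following is a simplified sketch with placeholder names; the second `namespaces` entry shows the optional `<source>:<target>` mapping syntax:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: <migration_plan>
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: <source_cluster>
    namespace: openshift-migration
  destMigClusterRef:
    name: <target_cluster>
    namespace: openshift-migration
  migStorageRef:
    name: <replication_repository>
    namespace: openshift-migration
  namespaces:
  - <project_1>
  - <project_2>:<target_namespace> # migrates <project_2> into a different target namespace
----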
. Click *Next*.
. Select a *Migration type* for each PV:

* The *Copy* option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster.
* The *Move* option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using.
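+
In the underlying `MigPlan` resource, this choice is recorded per PV in the `persistentVolumes` section, as in the following simplified excerpt with placeholder names:
+
[source,yaml]
----
spec:
  persistentVolumes:
  - name: <pv_name>
    selection:
      action: copy # "copy" for the Copy option, "move" for the Move option
----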

. Click *Next*.
. Select a *Copy method* for each PV:

* *Snapshot copy* backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than *Filesystem copy*.
* *Filesystem copy* backs up the files on the source cluster and restores them on the target cluster.
+
The *Filesystem copy* method is required for direct volume migration.

. Optional: Select *Verify copy* to verify data migrated with *Filesystem copy*. Data is verified by generating a checksum for each source file and comparing the checksum after restoration. Data verification significantly reduces performance.

. Select a *Target storage class*.
+
If you selected *Filesystem copy*, you can change the target storage class.
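+
Together, the copy method, verification, and storage class selections map to the per-PV `selection` block of the `MigPlan` resource, as in the following simplified excerpt:
+
[source,yaml]
----
spec:
  persistentVolumes:
  - name: <pv_name>
    selection:
      action: copy
      copyMethod: filesystem # or "snapshot"
      verify: true # checksum verification; applies to filesystem copies
      storageClass: <target_storage_class>
----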

. Click *Next*.
. On the *Migration options* page, the *Direct image migration* option is selected if you specified an exposed image registry route for the source cluster. The *Direct PV migration* option is selected if you are migrating data with *Filesystem copy*.
+
The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster.
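+
In the `MigPlan` resource, these options correspond to boolean fields. When the fields are set to `false` and the prerequisites are met, the direct methods are used, as in the following simplified excerpt:
+
[source,yaml]
----
spec:
  indirectImageMigration: false # false: copy images directly to the target registry
  indirectVolumeMigration: false # false: copy files directly to the target cluster
----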

. Click *Next*.
. Optional: Click *Add Hook* to add a hook to the migration plan.
+
A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step.

.. Enter the name of the hook to display in the web console.
.. If the hook is an Ansible playbook, select *Ansible playbook* and click *Browse* to upload the playbook or paste the contents of the playbook in the field.
.. Optional: Specify an Ansible runtime image if you are not using the default hook image.
.. If the hook is not an Ansible playbook, select *Custom container image* and specify the image name and path.
+
A custom container image can include Ansible playbooks.

.. Select *Source cluster* or *Target cluster*.
.. Enter the *Service account name* and the *Service account namespace*.
.. Select the migration step for the hook:

* *preBackup*: Before the application workload is backed up on the source cluster
* *postBackup*: After the application workload is backed up on the source cluster
* *preRestore*: Before the application workload is restored on the target cluster
* *postRestore*: After the application workload is restored on the target cluster

.. Click *Add*.
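+
The web console stores each hook as a `MigHook` resource and references it from the migration plan. The following is a simplified sketch, assuming the field names of the {mtc-short} API, with placeholder values:
+
[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigHook
metadata:
  name: <hook_name>
  namespace: openshift-migration
spec:
  custom: false # false: Ansible playbook hook; true: custom container image
  image: <hook_image> # default Ansible runtime image or a custom image
  playbook: <base64_encoded_playbook>
  targetCluster: source # or "destination"
----
+
The corresponding `MigPlan` entry selects the migration step and the service account:
+
[source,yaml]
----
spec:
  hooks:
  - executionNamespace: <namespace>
    phase: PreBackup # PreBackup, PostBackup, PreRestore, or PostRestore
    reference:
      name: <hook_name>
      namespace: openshift-migration
    serviceAccount: <service_account>
----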

. Click *Finish*.
+
The migration plan is displayed in the *Migration plans* list.
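+
When the plan is ready to run, the `MigPlan` resource reports a `Ready` condition in its `status` block, as in the following simplified example:
+
[source,yaml]
----
status:
  conditions:
  - type: Ready
    status: "True"
    message: The migration plan is ready.
----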