mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00
Avital Pinnick
2020-09-29 11:14:33 +03:00
parent e189af6b5b
commit d3aa43581b
40 changed files with 208 additions and 181 deletions


@@ -1450,7 +1450,7 @@ Topics:
- Name: Recovering from expired control plane certificates
File: scenario-3-expired-certs
---
Name: Migration
Name: Migration Toolkit for Containers
Dir: migration
Distros: openshift-enterprise,openshift-webscale,openshift-origin
Topics:
@@ -1463,13 +1463,13 @@ Topics:
File: planning-migration-3-to-4
- Name: Migration tools and prerequisites
File: migrating-application-workloads-3-4
- Name: Deploying the Cluster Application Migration tool
- Name: Deploying the Migration Toolkit for Containers
File: deploying-cam-3-4
- Name: Configuring a replication repository
File: configuring-replication-repository-3-4
- Name: Migrating applications with the CAM web console
- Name: Migrating your applications
File: migrating-applications-with-cam-3-4
- Name: Migrating control plane settings with the Control Plane Migration Assistant
- Name: Migrating your control plane settings
File: migrating-with-cpma
- Name: Troubleshooting
File: troubleshooting-3-4
@@ -1479,11 +1479,11 @@ Topics:
Topics:
- Name: Migration tools and prerequisites
File: migrating-application-workloads-4-1-4
- Name: Deploying the Cluster Application Migration tool
- Name: Deploying the Migration Toolkit for Containers
File: deploying-cam-4-1-4
- Name: Configuring a replication repository
File: configuring-replication-repository-4-1-4
- Name: Migrating applications with the CAM web console
- Name: Migrating your applications
File: migrating-applications-with-cam-4-1-4
- Name: Troubleshooting
File: troubleshooting-4-1-4
@@ -1493,11 +1493,11 @@ Topics:
Topics:
- Name: Migration tools and prerequisites
File: migrating-application-workloads-4-2-4
- Name: Deploying the Cluster Application Migration tool
- Name: Deploying the Migration Toolkit for Containers
File: deploying-cam-4-2-4
- Name: Configuring a replication repository
File: configuring-replication-repository-4-2-4
- Name: Migrating applications with the CAM web console
- Name: Migrating your applications
File: migrating-applications-with-cam-4-2-4
- Name: Troubleshooting
File: troubleshooting-4-2-4

Binary file not shown (image: 130 KiB before, 136 KiB after).


@@ -15,5 +15,5 @@ Learn about the differences between {product-title} versions 3 and 4. Prior to t
xref:../../migration/migrating_3_4/migrating-application-workloads-3-4.adoc#migrating-application-workloads-3-4[Performing your migration]::
Learn about and use the tools to perform your migration:
* Cluster Application Migration (CAM) tool to migrate your application workloads
* {mtc-first} to migrate your application workloads
* Control Plane Migration Assistant (CPMA) to migrate your control plane


@@ -6,9 +6,9 @@ include::modules/common-attributes.adoc[]
toc::[]
You must configure an object storage to use as a replication repository. The Cluster Application Migration tool copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
You must configure an object storage to use as a replication repository. The {mtc-first} copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
The CAM tool supports the xref:../../migration/migrating_3_4/migrating-application-workloads-3-4.adoc#migration-understanding-data-copy-methods_migrating-3-4[file system and snapshot data copy methods] for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
{mtc-full} supports the xref:../../migration/migrating_3_4/migrating-application-workloads-3-4.adoc#migration-understanding-data-copy-methods_migrating-3-4[file system and snapshot data copy methods] for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
The following storage providers are supported:


@@ -1,26 +1,26 @@
[id='deploying-cam-3-4']
= Deploying and upgrading the Cluster Application Migration tool
= Deploying the Migration Toolkit for Containers
include::modules/common-attributes.adoc[]
:context: migrating-3-4
:migrating-3-4:
toc::[]
You can install the Cluster Application Migration Operator on an {product-title} {product-version} target cluster and an {product-title} 3 source cluster. The Cluster Application Migration Operator installs the Cluster Application Migration (CAM) tool on the target cluster by default.
You can deploy the {mtc-first} on an {product-title} {product-version} target cluster and an {product-title} 3 source cluster by installing the {mtc-operator}. The {mtc-operator} installs the {mtc-first} on the target cluster by default.
[NOTE]
====
Optional: You can configure the Cluster Application Migration Operator to install the CAM tool link:https://access.redhat.com/articles/5064151[on an {product-title} 3 cluster or on a remote cluster].
Optional: You can configure the {mtc-operator} to install the {mtc-short} link:https://access.redhat.com/articles/5064151[on an {product-title} 3 cluster or on a remote cluster].
====
In a restricted environment, you can install the Cluster Application Migration Operator from a local mirror registry.
In a restricted environment, you can install the {mtc-operator} from a local mirror registry.
After you have installed the Cluster Application Migration Operator on your clusters, you can launch the CAM tool.
After you have installed the {mtc-operator} on your clusters, you can launch the {mtc-short} web console.
[id='installing-cam-operator_{context}']
== Installing the Cluster Application Migration Operator
== Installing the {mtc-operator}
You can install the Cluster Application Migration Operator with the Operator Lifecycle Manager (OLM) on an {product-title} {product-version} target cluster and manually on an {product-title} 3 source cluster.
You can install the {mtc-operator} with the Operator Lifecycle Manager (OLM) on an {product-title} {product-version} target cluster and manually on an {product-title} 3 source cluster.
include::modules/migration-installing-cam-operator-ocp-4.adoc[leveloffset=+2]
include::modules/migration-installing-cam-operator-ocp-3.adoc[leveloffset=+2]
@@ -29,19 +29,22 @@ include::modules/migration-installing-cam-operator-ocp-3.adoc[leveloffset=+2]
:context: disconnected-3-4
:disconnected-3-4:
[id='installing-cam-operator_{context}']
== Installing the Cluster Application Migration Operator in a restricted environment
== Installing the {mtc-operator} in a restricted environment
You can install the Cluster Application Migration Operator with the Operator Lifecycle Manager (OLM) on an {product-title} {product-version} target cluster and manually on an {product-title} 3 source cluster.
You can install the {mtc-operator} with the Operator Lifecycle Manager (OLM) on an {product-title} {product-version} target cluster and manually on an {product-title} 3 source cluster.
For {product-title} {product-version}, you can build a custom Operator catalog image, push it to a local mirror image registry, and configure OLM to install the Cluster Application Migration Operator from the local registry. A `mapping.txt` file is created when you run the `oc adm catalog mirror` command.
For {product-title} {product-version}, you can build a custom Operator catalog image, push it to a local mirror image registry, and configure OLM to install the {mtc-operator} from the local registry. A `mapping.txt` file is created when you run the `oc adm catalog mirror` command.
On the {product-title} 3 cluster, you can create a manifest file based on the Operator image and edit the file to point to your local image registry. The `image` value in the manifest file uses the `sha256` value from the `mapping.txt` file. Then, you can use the local image to create the Cluster Application Migration Operator.
On the {product-title} 3 cluster, you can create a manifest file based on the Operator image and edit the file to point to your local image registry. The `image` value in the manifest file uses the `sha256` value from the `mapping.txt` file. Then, you can use the local image to create the {mtc-operator}.
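As a hedged illustration of the step above, the `sha256` digest can be pulled out of the `mapping.txt` file with standard text tools. The mapping line format, registry host, and digest below are assumptions for the sketch, not values taken from this commit:

```shell
# Hypothetical mapping.txt entry of the form produced for a mirrored image
# (source-image@digest=mirror-image); the digest here is made up:
cat > mapping.txt <<'EOF'
registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator@sha256:e18f0a0b2c3d=mirror.example.com:5000/rhcam-1-2/openshift-migration-rhel7-operator
EOF

# Extract the sha256 digest to use as the `image` value in the manifest file
digest=$(grep 'openshift-migration-rhel7-operator' mapping.txt \
  | sed 's/.*@\(sha256:[^=]*\)=.*/\1/')
echo "$digest"   # → sha256:e18f0a0b2c3d
```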
ifeval::["{product-version}" != "4.2"]
.Additional resources
* xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks]
include::modules/olm-building-operator-catalog-image.adoc[leveloffset=+2]
endif::[]
include::modules/olm-restricted-networks-configuring-operatorhub.adoc[leveloffset=+2]
include::modules/migration-installing-cam-operator-ocp-4.adoc[leveloffset=+2]
include::modules/migration-installing-cam-operator-ocp-3.adoc[leveloffset=+2]


@@ -6,19 +6,19 @@ include::modules/common-attributes.adoc[]
toc::[]
You can migrate application workloads from {product-title} 3.7, 3.9, 3.10, and 3.11 to {product-title} {product-version} with the Cluster Application Migration (CAM) tool. The CAM tool enables you to control the migration and to minimize application downtime.
You can migrate application workloads from {product-title} 3.7, 3.9, 3.10, and 3.11 to {product-title} {product-version} with the {mtc-first}. {mtc-short} enables you to control the migration and to minimize application downtime.
The CAM tool's web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful application workloads at the granularity of a namespace.
The {mtc-short} web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful application workloads at the granularity of a namespace.
The CAM tool supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
{mtc-short} supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.
[NOTE]
====
The service catalog is deprecated in {product-title} 4. You can migrate workload resources provisioned with the service catalog from {product-title} 3 to 4 but you cannot perform service catalog actions, such as `provision`, `deprovision`, or `update`, on these workloads after migration.
The service catalog is deprecated in {product-title} 4. You can migrate workload resources provisioned with the service catalog from {product-title} 3 to 4 but you cannot perform service catalog actions such as `provision`, `deprovision`, or `update` on these workloads after migration.
The CAM tool displays a message about service catalog resources, for example, `ClusterServiceClass`, `ServiceInstance`, or `ServiceBinding`, that cannot be migrated.
The {mtc-short} web console displays a message if the service catalog resources cannot be migrated.
====
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane. The CPMA processes the {product-title} 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by {product-title} {product-version} Operators.


@@ -1,12 +1,12 @@
[id='migrating-applications-with-cam_{context}']
= Migrating applications with the CAM web console
= Migrating your applications
include::modules/common-attributes.adoc[]
:context: migrating-3-4
:migrating-3-4:
toc::[]
You can migrate application workloads by adding your clusters and replication repository to the CAM web console. Then, you can create and run a migration plan.
You must add your clusters and a replication repository to the {mtc-short} web console. Then, you can create and run a migration plan.
If your cluster or replication repository is secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.
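A minimal sketch of the CA bundle option, assuming the self-signed CA certificates are already available as local PEM files (the file names and contents below are placeholders): a bundle file is simply the PEM certificates concatenated together.

```shell
# Hypothetical PEM files standing in for the cluster's and the replication
# repository's self-signed CA certificates
cat > cluster-ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
...cluster CA (placeholder)...
-----END CERTIFICATE-----
EOF
cat > repo-ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
...repository CA (placeholder)...
-----END CERTIFICATE-----
EOF

# Concatenate the certificates into one bundle file, which can then be
# uploaded in the web console when adding the cluster or repository
cat cluster-ca.pem repo-ca.pem > ca-bundle.pem
grep -c 'BEGIN CERTIFICATE' ca-bundle.pem   # → 2
```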


@@ -1,5 +1,5 @@
[id='migrating-with-cpma']
= Migrating control plane settings with the Control Plane Migration Assistant (CPMA)
= Migrating your control plane settings
include::modules/common-attributes.adoc[]
:context: migrating-3-4
:migrating-3-4:


@@ -8,7 +8,7 @@ toc::[]
You must configure an object storage to use as a replication repository. The Cluster Application Migration tool copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
The CAM tool supports the xref:../../migration/migrating_4_1_4/migrating-application-workloads-4-1-4.adoc#migration-understanding-data-copy-methods_migrating-4-1-4[file system and snapshot data copy methods] for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
{mtc-short} supports the xref:../../migration/migrating_4_1_4/migrating-application-workloads-4-1-4.adoc#migration-understanding-data-copy-methods_migrating-4-1-4[file system and snapshot data copy methods] for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
The following storage providers are supported:


@@ -1,21 +1,21 @@
[id='deploying-cam-4-1-4']
= Deploying and upgrading the Cluster Application Migration tool
= Deploying the Migration Toolkit for Containers
include::modules/common-attributes.adoc[]
:context: migrating-4-1-4
:migrating-4-1-4:
toc::[]
You can install the Cluster Application Migration Operator on your {product-title} {product-version} target cluster and 4.1 source cluster. The Cluster Application Migration Operator installs the Cluster Application Migration (CAM) tool on the target cluster by default.
You can install the Cluster Application Migration Operator on your {product-title} {product-version} target cluster and 4.1 source cluster. The Cluster Application Migration Operator installs the {mtc-first} on the target cluster by default.
[NOTE]
====
Optional: You can configure the Cluster Application Migration Operator to install the CAM tool link:https://access.redhat.com/articles/5064151[on an {product-title} 3 cluster or on a remote cluster].
Optional: You can configure the Cluster Application Migration Operator to install the {mtc-short} link:https://access.redhat.com/articles/5064151[on an {product-title} 3 cluster or on a remote cluster].
====
In a restricted environment, you can install the Cluster Application Migration Operator from a local mirror registry.
After you have installed the Cluster Application Migration Operator on your clusters, you can launch the CAM tool.
After you have installed the Cluster Application Migration Operator on your clusters, you can launch the {mtc-short} web console.
== Installing the Cluster Application Migration Operator
@@ -40,11 +40,14 @@ include::modules/migration-installing-cam-operator-ocp-4.adoc[leveloffset=+2]
You can build a custom Operator catalog image for {product-title} 4, push it to a local mirror image registry, and configure the Operator Lifecycle Manager to install the Cluster Application Migration Operator from the local registry.
ifeval::["{product-version}" != "4.2"]
.Additional resources
* xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks]
include::modules/olm-building-operator-catalog-image.adoc[leveloffset=+2]
endif::[]
include::modules/olm-restricted-networks-configuring-operatorhub.adoc[leveloffset=+2]
:!disconnected-4-1-4:


@@ -6,16 +6,16 @@ include::modules/common-attributes.adoc[]
toc::[]
You can migrate application workloads from {product-title} 4.1 to {product-version} with the Cluster Application Migration (CAM) tool. The CAM tool enables you to control the migration and to minimize application downtime.
You can migrate application workloads from {product-title} 4.1 to {product-version} with the {mtc-first}. {mtc-short} enables you to control the migration and to minimize application downtime.
[NOTE]
====
You can migrate between {product-title} clusters of the same version, for example, from 4.1 to 4.1, as long as the source and target clusters are configured correctly.
====
The CAM tool's web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful and stateless application workloads at the granularity of a namespace.
The {mtc-short} web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful and stateless application workloads at the granularity of a namespace.
The CAM tool supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
{mtc-short} supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.


@@ -1,12 +1,12 @@
[id='migrating-applications-with-cam-4-1-4']
= Migrating applications with the CAM web console
= Migrating your applications
include::modules/common-attributes.adoc[]
:context: migrating-4-1-4
:migrating-4-1-4:
toc::[]
You can migrate application workloads by adding your clusters and replication repository to the CAM web console. Then, you can create and run a migration plan.
You must add your clusters and a replication repository to the {mtc-short} web console. Then, you can create and run a migration plan.
If your cluster or replication repository is secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.


@@ -8,7 +8,7 @@ toc::[]
You must configure an object storage to use as a replication repository. The Cluster Application Migration tool copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
The CAM tool supports the xref:../../migration/migrating_4_2_4/migrating-application-workloads-4-2-4.adoc#migration-understanding-data-copy-methods_migrating-4-2-4[file system and snapshot data copy methods] for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
{mtc-short} supports the xref:../../migration/migrating_4_2_4/migrating-application-workloads-4-2-4.adoc#migration-understanding-data-copy-methods_migrating-4-2-4[file system and snapshot data copy methods] for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
The following storage providers are supported:


@@ -1,21 +1,21 @@
[id='deploying-cam-4-2-4']
= Deploying and upgrading the Cluster Application Migration tool
= Deploying the Migration Toolkit for Containers
include::modules/common-attributes.adoc[]
:context: migrating-4-2-4
:migrating-4-2-4:
toc::[]
You can install the Cluster Application Migration Operator on your {product-title} {product-version} target cluster and 4.2 source cluster. The Cluster Application Migration Operator installs the Cluster Application Migration (CAM) tool on the target cluster by default.
You can install the Cluster Application Migration Operator on your {product-title} {product-version} target cluster and 4.2 source cluster. The Cluster Application Migration Operator installs the {mtc-first} on the target cluster by default.
[NOTE]
====
Optional: You can configure the Cluster Application Migration Operator to install the CAM tool link:https://access.redhat.com/articles/5064151[on an {product-title} 3 cluster or on a remote cluster].
Optional: You can configure the Cluster Application Migration Operator to install the {mtc-short} link:https://access.redhat.com/articles/5064151[on an {product-title} 3 cluster or on a remote cluster].
====
In a restricted environment, you can install the Cluster Application Migration Operator from a local mirror registry.
After you have installed the Cluster Application Migration Operator on your clusters, you can launch the CAM tool.
After you have installed the Cluster Application Migration Operator on your clusters, you can launch the {mtc-short} web console.
== Installing the Cluster Application Migration Operator
@@ -40,11 +40,14 @@ include::modules/migration-installing-cam-operator-ocp-4.adoc[leveloffset=+2]
You can build a custom Operator catalog image for {product-title} 4, push it to a local mirror image registry, and configure OLM to install the Operator from the local registry.
ifeval::["{product-version}" != "4.2"]
.Additional resources
* xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks]
include::modules/olm-building-operator-catalog-image.adoc[leveloffset=+2]
endif::[]
include::modules/olm-restricted-networks-configuring-operatorhub.adoc[leveloffset=+2]
:!disconnected-4-2-4:


@@ -6,16 +6,16 @@ include::modules/common-attributes.adoc[]
toc::[]
You can migrate application workloads from {product-title} 4.2 and later to {product-version} with the Cluster Application Migration (CAM) tool. The CAM tool enables you to control the migration and to minimize application downtime.
You can migrate application workloads from {product-title} 4.2 to {product-version} with the {mtc-first}. {mtc-short} enables you to control the migration and to minimize application downtime.
[NOTE]
====
You can migrate between {product-title} clusters of the same version, for example, from 4.2 to 4.2 or from 4.3 to 4.3, as long as the source and target clusters are configured correctly.
====
The CAM tool's web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful and stateless application workloads at the granularity of a namespace.
The {mtc-short} web console and API, based on Kubernetes Custom Resources, enable you to migrate stateful and stateless application workloads at the granularity of a namespace.
The CAM tool supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
{mtc-short} supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
You can use migration hooks to run Ansible playbooks at certain points during the migration. The hooks are added when you create a migration plan.


@@ -1,12 +1,12 @@
[id='migrating-applications-with-cam-4-2-4']
= Migrating applications with the CAM web console
= Migrating your applications
include::modules/common-attributes.adoc[]
:context: migrating-4-2-4
:migrating-4-2-4:
toc::[]
You can migrate application workloads by adding your clusters and replication repository to the CAM web console. Then, you can create and run a migration plan.
You must add your clusters and a replication repository to the {mtc-short} web console. Then, you can create and run a migration plan.
If your cluster or replication repository is secured with self-signed certificates, you can create a CA certificate bundle file or disable SSL verification.


@@ -25,14 +25,13 @@ endif::[]
:rh-virtualization-first: Red Hat Virtualization (RHV)
:rh-virtualization: RHV
:launch: image:app-launcher.png[title="Application Launcher"]
// for CAM rebranding as MTC
// :mtc-short: MTC
:mtc-short: CAM
// :mtc-full: Migration Toolkit for Containers
// :mtc-first: Migration Toolkit for Containers ({mtc-short})
// :mtc-operator: Migration Toolkit for Containers Operator
:mtc-version: 1.2
:mtc-version-z: 1.2.5
:mtc-rhcam-folder: rhcam-1-2
:mtc-ocp3-download: registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator:v1.2
:mtc-must-gather: registry.redhat.io/rhcam-1-2/openshift-migration-must-gather-rhel8
:mtc-short: MTC
:mtc-full: Migration Toolkit for Containers
:mtc-first: Migration Toolkit for Containers ({mtc-short})
:mtc-operator: MTC Operator
:mtc-version: 1.3
:mtc-version-z: 1.3.0
:mtc-rhcam-folder: rhcam-1-3
// image names to be confirmed
:mtc-ocp3-download: registry.redhat.io/rhcam-1-3/openshift-migration-rhel7-operator:v1.3
:mtc-must-gather: registry.redhat.io/rhcam-1-3/openshift-migration-must-gather-rhel8


@@ -4,9 +4,9 @@
// * migration/migrating_4_1_4/migrating-applications-with-cam-4-1-4.adoc
// * migration/migrating_4_2_4/migrating-applications-with-cam-4-2-4.adoc
[id='migration-adding-cluster-to-cam_{context}']
= Adding a cluster to the CAM web console
= Adding a cluster to the {mtc-short} web console
You can add a cluster to the CAM web console.
You can add a cluster to the {mtc-short} web console.
.Prerequisites
@@ -32,7 +32,7 @@ $ oc sa get-token migration-controller -n openshift-migration
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
----
. Log in to the CAM web console.
. Log in to the {mtc-short} web console.
. In the *Clusters* section, click *Add cluster*.
. Fill in the following fields:
@@ -45,4 +45,4 @@ eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiw
. Click *Add cluster*.
+
The cluster appears in the *Clusters* section.
The cluster appears in the *Clusters* section of the {mtc-short} web console.


@@ -4,9 +4,9 @@
// * migration/migrating_4_1_4/migrating-applications-with-cam-4-1-4.adoc
// * migration/migrating_4_2_4/migrating-applications-with-cam-4-2-4.adoc
[id='migration-adding-replication-repository-to-cam_{context}']
= Adding a replication repository to the CAM web console
= Adding a replication repository to the {mtc-short} web console
You can add an object storage bucket as a replication repository to the CAM web console.
You can add an object storage bucket as a replication repository to the {mtc-short} web console.
.Prerequisites
@@ -14,13 +14,13 @@ You can add an object storage bucket as a replication repository to the CAM web
.Procedure
. Log in to the CAM web console.
. Log in to the {mtc-short} web console.
. In the *Replication repositories* section, click *Add repository*.
. Select a *Storage provider type* and fill in the following fields:
* *AWS* for AWS S3, MCG, and generic S3 providers:
** *Replication repository name*: Specify the replication repository name in the CAM web console.
** *Replication repository name*: Specify the replication repository name in the {mtc-short} web console.
** *S3 bucket name*: Specify the name of the S3 bucket you created.
** *S3 bucket region*: Specify the S3 bucket region. *Required* for AWS S3. *Optional* for other S3 providers.
** *S3 endpoint*: Specify the URL of the S3 service, not the bucket, for example, `\https://<s3-storage.apps.cluster.com>`. *Required* for a generic S3 provider. You must use the `https://` prefix.
@@ -32,13 +32,13 @@ You can add an object storage bucket as a replication repository to the CAM web
* *GCP*:
** *Replication repository name*: Specify the replication repository name in the CAM web console.
** *Replication repository name*: Specify the replication repository name in the {mtc-short} web console.
** *GCP bucket name*: Specify the name of the GCP bucket.
** *GCP credential JSON blob*: Specify the string in the `credentials-velero` file.
* *Azure*:
** *Replication repository name*: Specify the replication repository name in the CAM web console.
** *Replication repository name*: Specify the replication repository name in the {mtc-short} web console.
** *Azure resource group*: Specify the resource group of the Azure Blob storage.
** *Azure storage account name*: Specify the Azure Blob storage account name.
** *Azure credentials - INI file contents*: Specify the string in the `credentials-velero` file.


@@ -99,8 +99,9 @@ $ cat > velero-s3-policy.json <<EOF
}
EOF
----
<1> To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify `*` instead of a bucket name:
<1> To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify `*` instead of a bucket name as in the following example:
+
.Example output
[source,terminal]
----
"Resource": [
@@ -142,4 +143,4 @@ $ aws iam create-access-key --user-name velero
}
}
----
<1> Record the `AWS_SECRET_ACCESS_KEY` and the `AWS_ACCESS_KEY_ID` for adding the AWS repository to the CAM web console.
<1> Record the `AWS_SECRET_ACCESS_KEY` and the `AWS_ACCESS_KEY_ID` for adding the AWS repository to the {mtc-short} web console.


@@ -83,7 +83,7 @@ $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
----
. Save the service principal's credentials in the `credentials-velero` file:
. Save the service principal credentials in the `credentials-velero` file:
+
[source,terminal]
----


@@ -53,7 +53,7 @@ $ gsutil mb gs://$BUCKET/
$ PROJECT_ID=$(gcloud config get-value project)
----
. Create a `velero` service account:
. Create a `velero` IAM service account:
+
[source,terminal]
----
@@ -61,7 +61,7 @@ $ gcloud iam service-accounts create velero \
--display-name "Velero Storage"
----
. Set the `SERVICE_ACCOUNT_EMAIL` variable to the service account's email address:
. Create the `SERVICE_ACCOUNT_EMAIL` variable:
+
[source,terminal]
----
@@ -70,7 +70,7 @@ $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
--format 'value(email)')
----
. Grant permissions to the service account:
. Create the `ROLE_PERMISSIONS` variable:
+
[source,terminal]
----
@@ -84,20 +84,35 @@ $ ROLE_PERMISSIONS=(
compute.snapshots.delete
compute.zones.get
)
----
gcloud iam roles create velero.server \
. Create the `velero.server` custom role:
+
[source,terminal]
----
$ gcloud iam roles create velero.server \
--project $PROJECT_ID \
--title "Velero Server" \
--permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/velero.server
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
----
. Save the service account's keys to the `credentials-velero` file in the current directory:
. Add IAM policy binding to the project:
+
[source,terminal]
----
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/velero.server
----
. Grant the IAM service account the `objectAdmin` role for the bucket:
+
[source,terminal]
----
$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
----
. Save the IAM service account keys to the `credentials-velero` file in the current directory:
+
[source,terminal]
----

@@ -139,7 +139,7 @@ spec:
additionalConfig:
bucketclass: mcg-pv-pool-bc
----
<1> Record the bucket name for adding the replication repository to the CAM web console.
<1> Record the bucket name for adding the replication repository to the {mtc-short} web console.
. Create the `ObjectBucketClaim` object:
+
@@ -157,7 +157,7 @@ $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'
+
This process can take five to ten minutes.
. Obtain and record the following values, which are required when you add the replication repository to the CAM web console:
. Obtain and record the following values, which are required when you add the replication repository to the {mtc-short} web console:
* S3 endpoint:
+
@@ -168,6 +168,7 @@ $ oc get route -n openshift-storage s3
* S3 provider access key:
+
[source,terminal]
----
$ oc get secret -n openshift-storage migstorage -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 -d
----
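The `go-template` output above is the base64-encoded secret value, which is why the command pipes it through `base64 -d`. A small self-contained sketch of that decode step (`AKIAEXAMPLEKEY` is a placeholder value, not a real key):

```shell
# Kubernetes secrets store field values base64-encoded; decoding the
# printed field recovers the original value.
encoded="$(printf '%s' 'AKIAEXAMPLEKEY' | base64)"
decoded="$(printf '%s' "$encoded" | base64 -d)"
echo "$decoded"   # → AKIAEXAMPLEKEY
```

`printf '%s'` avoids the trailing newline that `echo` would add before encoding.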

@@ -7,7 +7,7 @@
If you use a self-signed certificate to secure a cluster or a replication repository, certificate verification might fail with the following error message: `Certificate signed by unknown authority`.
You can create a custom CA certificate bundle file and upload it in the CAM web console when you add a cluster or a replication repository.
You can create a custom CA certificate bundle file and upload it in the {mtc-short} web console when you add a cluster or a replication repository.
.Procedure

@@ -4,24 +4,24 @@
// * migration/migrating_4_1_4/migrating-applications-with-cam-4-1-4.adoc
// * migration/migrating_4_2_4/migrating-applications-with-cam-4-2-4.adoc
[id='migration-creating-migration-plan-cam_{context}']
= Creating a migration plan in the CAM web console
= Creating a migration plan in the {mtc-short} web console
You can create a migration plan in the CAM web console.
You can create a migration plan in the {mtc-short} web console.
.Prerequisites
* The CAM web console must contain the following:
* The {mtc-short} web console must contain the following:
** Source cluster
** Target cluster, which is added automatically during the CAM tool installation
** Target cluster
** Replication repository
* The source and target clusters must have network access to each other and to the replication repository.
* If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and in the same region.
* If you use snapshots to copy data, the source and target clusters must run on the same cloud provider (AWS, GCP, or Azure) and be located in the same region.
.Procedure
. Log in to the CAM web console.
. Log in to the {mtc-short} web console.
. In the *Plans* section, click *Add plan*.
. Enter the *Plan name* and click *Next*.
+
@@ -34,7 +34,7 @@ The *Plan name* can contain up to 253 lower-case alphanumeric characters (`a-z,
* *Copy* copies the data in a source cluster's PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.
+
Optional: You can verify data copied with the filesystem method by selecting *Verify copy*. This option, which generates a checksum for each source file and checks it after restoration, significantly reduces performance.
Optional: You can verify data copied with the file system method by selecting *Verify copy*. This option generates a checksum for each source file and checks it after restoration. The operation significantly reduces performance.
* *Move* unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
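The *Verify copy* checksum behavior described above can be illustrated in miniature. This is a hypothetical sketch of the before/after digest comparison, not the tool's actual Velero/Restic implementation:

```shell
# Hash a file before copying, re-hash the copy, and compare the digests;
# a mismatch would indicate corruption during transfer.
src="$(mktemp)"
dst="$(mktemp)"
printf 'pv-data' > "$src"
cp "$src" "$dst"
before="$(sha256sum "$src" | cut -d' ' -f1)"
after="$(sha256sum "$dst" | cut -d' ' -f1)"
[ "$before" = "$after" ] && echo "copy verified"
```

Computing a digest per file is why the option carries the performance cost noted above.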

@@ -6,11 +6,11 @@
[id='migration-downloading-logs_{context}']
= Downloading migration logs
You can download the Velero, Restic, and Migration controller logs in the CAM web console to troubleshoot a failed migration.
You can download the Velero, Restic, and Migration controller logs in the {mtc-short} web console to troubleshoot a failed migration.
.Procedure
. Log in to the CAM console.
. Log in to the {mtc-short} console.
. Click *Plans* to view the list of migration plans.
. Click the *Options* menu {kebab} of a specific migration plan and select *Logs*.
. Click *Download Logs* to download the logs of the Migration controller, Velero, and Restic for all clusters.
@@ -18,7 +18,7 @@ You can download the Velero, Restic, and Migration controller logs in the CAM we
.. Specify the log options:
* *Cluster*: Select the source, target, or CAM host cluster.
* *Cluster*: Select the source, target, or {mtc-short} host cluster.
* *Log source*: Select *Velero*, *Restic*, or *Controller*.
* *Pod source*: Select the Pod name, for example, `controller-manager-78c469849c-v6wcf`.
+

@@ -5,7 +5,7 @@
[id='migration-excluding-resources_{context}']
= Excluding resources from a migration plan
You can exclude resources, for example, imagestreams, persistent volumes (PVs), or subscriptions, from a migration plan.
You can exclude resources, for example, ImageStreams, persistent volumes (PVs), or subscriptions, from a migration plan in order to reduce the load or to migrate images or PVs with a different tool.
.Procedure

@@ -3,21 +3,21 @@
// * migration/migrating_3_4/deploying-cam-3-4.adoc
[id="migration-installing-cam-operator-ocp-3_{context}"]
ifdef::migrating-3-4[]
= Installing the Cluster Application Migration Operator on an {product-title} 3 source cluster
= Installing the {mtc-operator} on an {product-title} 3 source cluster
You can install the Cluster Application Migration Operator manually on an {product-title} 3 source cluster.
You can install the {mtc-operator} manually on an {product-title} 3 source cluster.
endif::[]
ifdef::disconnected-3-4[]
= Installing the Cluster Application Migration Operator on an {product-title} 3 source cluster in a restricted environment
= Installing the {mtc-operator} on an {product-title} 3 source cluster in a restricted environment
You can create a manifest file based on the Cluster Application Migration Operator image and edit the manifest to point to your local image registry. Then, you can use the local image to create the Cluster Application Migration Operator on an {product-title} 3 source cluster.
You can create a manifest file based on the {mtc-operator} image and edit the manifest to point to your local image registry. Then, you can use the local image to create the {mtc-operator} on an {product-title} 3 source cluster.
endif::[]
[IMPORTANT]
====
You must install the same {mtc-short} version on the {product-title} 3 and 4 clusters. The {mtc-short} Operator on the {product-title} 4 cluster is updated automatically by the Operator Lifecycle Manager.
To ensure that you have the same version on the {product-title} 3 cluster, download the `operator.yml` and `controller-3.yml` files when you are ready to create and run the migration plan.
To ensure that you have the latest version on the {product-title} 3 cluster, download the `operator.yml` and `controller-3.yml` files when you are ready to create and run the migration plan.
====
.Prerequisites
@@ -109,7 +109,7 @@ $ oc run test --image registry.redhat.io/ubi8 --command sleep infinity
----
endif::[]
. Create the Cluster Application Migration Operator CR object:
. Create the {mtc-operator} CR object:
+
[source,terminal]
----
@@ -132,7 +132,7 @@ rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists <1
Error from server (AlreadyExists): error when creating "./operator.yml":
rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
----
<1> You can ignore `Error from server (AlreadyExists)` messages. They are caused by the Cluster Application Migration Operator creating resources for earlier versions of {product-title} 3 that are provided in later releases.
<1> You can ignore `Error from server (AlreadyExists)` messages. They are caused by the {mtc-operator} creating resources for earlier versions of {product-title} 3 that are provided in later releases.
. Create the Migration Controller CR object:
+

@@ -5,39 +5,39 @@
// * migration/migrating_4_2_4/deploying-cam-4-2-4.adoc
[id="migration-installing-cam-operator-ocp-4_{context}"]
ifdef::source-4-1-4[]
= Installing the Cluster Application Migration Operator on an {product-title} 4.1 source cluster
= Installing the {mtc-operator} on an {product-title} 4.1 source cluster
endif::[]
ifdef::source-4-2-4[]
= Installing the Cluster Application Migration Operator on an {product-title} 4.2 source cluster
= Installing the {mtc-operator} on an {product-title} 4.2 source cluster
endif::[]
ifdef::disconnected-source-4-1-4[]
= Installing the Cluster Application Migration Operator on an {product-title} 4.1 source cluster in a restricted environment
= Installing the {mtc-operator} on an {product-title} 4.1 source cluster in a restricted environment
endif::[]
ifdef::disconnected-source-4-2-4[]
= Installing the Cluster Application Migration Operator on an {product-title} 4.2 source cluster in a restricted environment
= Installing the {mtc-operator} on an {product-title} 4.2 source cluster in a restricted environment
endif::[]
ifdef::migrating-3-4,target-4-1-4,target-4-2-4[]
= Installing the Cluster Application Migration Operator on an {product-title} {product-version} target cluster
= Installing the {mtc-operator} on an {product-title} {product-version} target cluster
endif::[]
ifdef::disconnected-3-4,disconnected-target-4-1-4,disconnected-target-4-2-4[]
= Installing the Cluster Application Migration Operator on an {product-title} {product-version} target cluster in a restricted environment
= Installing the {mtc-operator} on an {product-title} {product-version} target cluster in a restricted environment
endif::[]
ifdef::source-4-1-4,source-4-2-4,disconnected-source-4-1-4,disconnected-source-4-2-4[]
You can install the Cluster Application Migration Operator on an {product-title} 4 source cluster with the Operator Lifecycle Manager (OLM).
You can install the {mtc-operator} on an {product-title} 4 source cluster with the Operator Lifecycle Manager (OLM).
endif::[]
ifdef::migrating-3-4,target-4-1-4,target-4-2-4,disconnected-3-4,disconnected-target-4-1-4,disconnected-target-4-2-4[]
You can install the Cluster Application Migration Operator on an {product-title} {product-version} target cluster with the Operator Lifecycle Manager (OLM).
You can install the {mtc-operator} on an {product-title} {product-version} target cluster with the Operator Lifecycle Manager (OLM).
The Cluster Application Migration Operator installs the Cluster Application Migration tool on the target cluster by default.
The {mtc-operator} installs the {mtc-full} on the target cluster by default.
endif::[]
ifdef::disconnected-3-4,disconnected-target-4-1-4,disconnected-target-4-2-4,disconnected-source-4-1-4,disconnected-source-4-2-4[]
.Prerequisites
* You created a custom Operator catalog and pushed it to a mirror registry.
* You configured OLM to install the Cluster Application Migration Operator from the mirror registry.
* You have created a custom Operator catalog and pushed it to a mirror registry.
* You have configured OLM to install the {mtc-operator} from the mirror registry.
endif::[]
.Procedure
@@ -48,13 +48,14 @@ endif::[]
ifdef::source-4-1-4[]
. In the {product-title} web console, click *Catalog* -> *OperatorHub*.
endif::[]
. Use the *Filter by keyword* field (in this case, `Migration`) to find the *Cluster Application Migration Operator*.
. Select the *Cluster Application Migration Operator* and click *Install*.
. Use the *Filter by keyword* field (in this case, `Migration`) to find the *{mtc-operator}*.
. Select the *{mtc-operator}* and click *Install*.
. On the *Install Operator* page, click *Install*.
+
On the *Installed Operators* page, the *Cluster Application Migration Operator* appears in the *openshift-migration* project with the status *Succeeded*.
On the *Installed Operators* page, the *{mtc-operator}* appears in the *openshift-migration* project with the status *Succeeded*.
. Click *Cluster Application Migration Operator*.
. Click *{mtc-operator}*.
. Under *Provided APIs*, locate the *Migration Controller* tile, and click *Create Instance*.
ifdef::source-4-1-4[]

@@ -3,15 +3,15 @@
[id='migration-installing-cpma_{context}']
= Installing the Control Plane Migration Assistant
You can download the Control Plane Migration Assistant (CPMA) binary file from the Red Hat Customer Portal and install it on Linux, MacOSX, or Windows operating systems.
You can download the Control Plane Migration Assistant (CPMA) binary file from the Red Hat Customer Portal and install it on Linux, macOS, or Windows operating systems.
.Procedure
. In the link:https://access.redhat.com[Red Hat Customer Portal], navigate to *Downloads* -> *Red Hat {product-title}*.
. On the *Download Red Hat {product-title}* page, select *Red Hat {product-title}* from the *Product Variant* list.
. Select *CPMA 1.0 for RHEL 7* from the *Version* list. This binary works on RHEL 7 and RHEL 8.
. Click *Download Now* to download `cpma` for Linux or MacOSX or `cpma.exe` for Windows.
. Save the file in a directory defined as `$PATH` for Linux or MacOSX or `%PATH%` for Windows.
. Click *Download Now* to download `cpma` for Linux and macOS or `cpma.exe` for Windows.
. Save the file in a directory that is included in `$PATH` for Linux and macOS or `%PATH%` for Windows.
. For Linux, make the file executable:
+
[source,terminal]

@@ -8,7 +8,7 @@
This release has the following known issues:
* During migration, the Cluster Application Migration (CAM) tool preserves the following namespace annotations:
* During migration, the {mtc-first} preserves the following namespace annotations:
** `openshift.io/sa.scc.mcs`
** `openshift.io/sa.scc.supplemental-groups`
@@ -16,15 +16,15 @@ This release has the following known issues:
+
These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1748440[*BZ#1748440*])
* If an AWS bucket is added to the CAM web console and then deleted, its status remains `True` because the MigStorage CR is not updated. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1738564[*BZ#1738564*])
* If an AWS bucket is added to the {mtc-short} web console and then deleted, its status remains `True` because the MigStorage CR is not updated. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1738564[*BZ#1738564*])
* Most cluster-scoped resources are not yet handled by the CAM tool. If your applications require cluster-scoped resources, you may have to create them manually on the target cluster.
* Most cluster-scoped resources are not yet handled by {mtc-short}. If your applications require cluster-scoped resources, you may have to create them manually on the target cluster.
* If a migration fails, the migration plan does not retain custom PV settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1784899[*BZ#1784899*])
* If a large migration fails because Restic times out, you can increase the `restic_timeout` parameter value (default: `1h`) in the Migration Controller CR.
* If you select the data verification option for PVs that are migrated with the filesystem copy method, performance is significantly slower. Velero generates a checksum for each file and checks it when the file is restored.
* If you select the data verification option for PVs that are migrated with the file system copy method, performance is significantly slower.
ifeval::["{mtc-version}" < "1.4"]

@@ -4,14 +4,14 @@
// * migration/migrating_4_1_4/migrating-applications-with-cam-4-1-4.adoc
// * migration/migrating_4_2_4/migrating-applications-with-cam-4-2-4.adoc
[id="migration-launching-cam_{context}"]
= Launching the CAM web console
= Launching the {mtc-short} web console
You can launch the CAM web console in a browser.
You can launch the {mtc-short} web console in a browser.
.Procedure
. Log in to the {product-title} cluster on which you have installed the CAM tool.
. Obtain the CAM web console URL by entering the following command:
. Log in to the {product-title} cluster on which you have installed {mtc-short}.
. Obtain the {mtc-short} web console URL by entering the following command:
+
[source,terminal]
----
@@ -20,11 +20,11 @@ $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec
+
The output resembles the following: `\https://migration-openshift-migration.apps.cluster.openshift.com`.
. Launch a browser and navigate to the CAM web console.
. Launch a browser and navigate to the {mtc-short} web console.
+
[NOTE]
====
If you try to access the CAM web console immediately after installing the Cluster Application Migration Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.
If you try to access the {mtc-short} web console immediately after installing the {mtc-operator}, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.
====
. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster's API server. The web page guides you through the process of accepting the remaining certificates.

@@ -22,7 +22,7 @@ The default value of `restic_timeout` is one hour. You can increase this paramet
.Procedure
. In the {product-title} web console, navigate to *Operators* -> *Installed Operators*.
. Click *Cluster Application Migration Operator*.
. Click *{mtc-operator}*.
. In the *MigrationController* tab, click *migration-controller*.
. In the *YAML* tab, update the following parameter value:
+
@@ -38,7 +38,7 @@ spec:
[id='restic-verification-error-migmigration_{context}']
== `ResticVerifyErrors` in the MigMigration Custom Resource
If data verification fails when migrating a PV with the filesystem data copy method, the following error appears in the MigMigration Custom Resource (CR).
If data verification fails when migrating a PV with the file system data copy method, the following error appears in the MigMigration Custom Resource (CR).
.Example output
[source,yaml]
@@ -113,5 +113,5 @@ The output identifies the Restic Pod that logged the errors.
+
[source,terminal]
----
$ oc logs -f restic-nr2v5
$ oc logs -f <restic_pod>
----

@@ -4,13 +4,13 @@
// * migration/migrating_4_1_4/migrating-applications-with-cam-4-1-4.adoc
// * migration/migrating_4_2_4/migrating-applications-with-cam-4-2-4.adoc
[id='migration-running-migration-plan-cam_{context}']
= Running a migration plan in the CAM web console
= Running a migration plan in the {mtc-short} web console
You can stage or migrate applications and data with the migration plan you created in the CAM web console.
You can stage or migrate applications and data with the migration plan you created in the {mtc-short} web console.
.Prerequisites
The CAM web console must contain the following:
The {mtc-short} web console must contain the following:
* Source cluster
* Target cluster
@@ -27,7 +27,7 @@ The CAM web console must contain the following:
$ oc adm prune images
----
. Log in to the CAM web console.
. Log in to the {mtc-short} web console.
. Select a migration plan.
. Click *Stage* to copy data from the source cluster to the target cluster without stopping the application.
+

@@ -4,29 +4,29 @@
// * migration/migrating_4_1_4/migrating-application-workloads-4-1-4.adoc
// * migration/migrating_4_2_4/migrating-application-workloads-4-2-4.adoc
[id='migration-understanding-cam_{context}']
= About the Cluster Application Migration tool
= About the {mtc-full}
The Cluster Application Migration (CAM) tool enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an {product-title} source cluster to an {product-title} {product-version} target cluster, using the CAM web console or the Kubernetes API.
The {mtc-first} enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an {product-title} source cluster to an {product-title} {product-version} target cluster, using the {mtc-short} web console or the Kubernetes API.
Migrating an application with the CAM web console involves the following steps:
Migrating an application with the {mtc-short} web console involves the following steps:
. Install the Cluster Application Migration Operator on all clusters.
. Install the {mtc-operator} on all clusters.
+
You can install the Cluster Application Migration Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.
You can install the {mtc-operator} in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry.
. Configure the replication repository, an intermediate object storage that the CAM tool uses to migrate data.
. Configure the replication repository, an intermediate object storage that {mtc-short} uses to migrate data.
+
The source and target clusters must have network access to the replication repository during migration. In a restricted environment, you can use an internally hosted S3 storage repository. If you use a proxy server, you must ensure that the replication repository is whitelisted.
. Add the source cluster to the CAM web console.
. Add the replication repository to the CAM web console.
. Add the source cluster to the {mtc-short} web console.
. Add the replication repository to the {mtc-short} web console.
. Create a migration plan, with one of the following data migration options:
* *Copy*: The CAM tool copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.
* *Copy*: {mtc-short} copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster.
+
image::migration-PV-copy.png[]
* *Move*: The CAM tool unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
* *Move*: {mtc-short} unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
+
[NOTE]
====

@@ -3,9 +3,9 @@
[id='migration-understanding-cpma_{context}']
= About the Control Plane Migration Assistant
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from {product-title} 3.7 (or later) to {product-version}. The CPMA processes the {product-title} 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by {product-title} {product-version} Operators.
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from {product-title} 3.7 (or later) to {product-version}. CPMA processes the {product-title} 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by {product-title} {product-version} Operators.
Because {product-title} 3 and 4 have significant configuration differences, not all parameters are processed. The CPMA can generate a report that describes whether features are supported fully, partially, or not at all.
Because {product-title} 3 and 4 have significant configuration differences, not all parameters are processed. CPMA can generate a report that describes whether features are supported fully, partially, or not at all.
.Configuration files

@@ -6,12 +6,12 @@
[id='migration-understanding-data-copy-methods_{context}']
= About data copy methods
The CAM tool supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
{mtc-short} supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
[id='file-system-copy-method_{context}']
== File system copy method
The CAM tool copies data files from the source cluster to the replication repository, and from there to the target cluster.
{mtc-short} copies data files from the source cluster to the replication repository, and from there to the target cluster.
[cols="1,1", options="header"]
.File system copy method summary
@@ -27,7 +27,7 @@ a|* Slower than the snapshot copy method
[id='snapshot-copy-method_{context}']
== Snapshot copy method
The CAM tool copies a snapshot of the source cluster's data to a cloud provider's object storage, configured as a replication repository. The data is restored on the target cluster.
{mtc-short} copies a snapshot of the source cluster's data to a cloud provider's object storage, configured as a replication repository. The data is restored on the target cluster.
AWS, Google Cloud Platform, and Microsoft Azure support the snapshot copy method.

@@ -3,39 +3,39 @@
// * migration/migrating_4_1_4/deploying-cam-4-1-4.adoc
// * migration/migrating_4_2_4/deploying-cam-4-2-4.adoc
[id='migration-upgrading-migration-tool_{context}']
= Upgrading the Cluster Application Migration Tool
= Upgrading the {mtc-full}
You can upgrade your Cluster Application Migration (CAM) tool on your source and target clusters.
You can upgrade the {mtc-first} on your source and target clusters.
If you are upgrading from CAM 1.1 to 1.2, you must update the service account token in the CAM web console.
If you are upgrading from {mtc-short} 1.1 to 1.2, you must update the service account token in the {mtc-short} web console.
[id='upgrading-cam-ocp-4_{context}']
== Upgrading the CAM tool on an {product-title} 4 cluster
== Upgrading {mtc-short} on an {product-title} 4 cluster
You can upgrade the CAM tool on an {product-title} 4 cluster with the Operator Lifecycle Manager.
You can upgrade {mtc-short} on an {product-title} 4 cluster with the Operator Lifecycle Manager.
If you selected the *Automatic* approval option when you subscribed to the Cluster Application Migration Operator, the CAM tool is updated automatically.
If you selected the *Automatic* approval option when you installed the {mtc-operator}, {mtc-short} is updated automatically.
The following procedure enables you to change the *Manual* approval option to *Automatic* or to change the release channel.
.Procedure
. In the {product-title} web console, navigate to *Operators* -> *Installed Operators*.
. Click *Cluster Application Migration Operator*.
. Click *{mtc-operator}*.
. In the *Subscription* tab, change the *Approval* option to *Automatic*.
. Optional: Edit the *Channel*.
+
Updating the subscription deploys the updated Cluster Application Migration Operator and updates the CAM tool components.
Updating the subscription deploys the updated {mtc-operator} and updates the {mtc-short} components.
ifdef::migrating-3-4[]
[id='upgrading-cam-ocp-3_{context}']
== Upgrading the CAM tool on an {product-title} 3 cluster
== Upgrading {mtc-short} on an {product-title} 3 cluster
You can upgrade the CAM tool on an {product-title} 3 cluster by downloading the latest `operator.yml` file and replacing the existing Cluster Application Migration Operator CR object.
You can upgrade {mtc-short} on an {product-title} 3 cluster by downloading the latest `operator.yml` file and replacing the existing {mtc-operator} CR object.
[NOTE]
====
If you remove and re-create the namespace, you must update the cluster's service account token in the CAM web console.
If you remove and re-create the namespace, you must update the cluster's service account token in the {mtc-short} web console.
====
.Procedure
@@ -51,12 +51,12 @@ $ sudo podman login registry.redhat.io
+
[source,terminal]
----
$ sudo podman cp $(sudo podman create registry.redhat.io/rhcam-1-2/openshift-migration-rhel7-operator:v1.2):/operator.yml ./
$ sudo podman cp $(sudo podman create {mtc-ocp3-download}):/operator.yml ./
----
. Log in to your {product-title} 3 cluster.
. Deploy the updated Cluster Application Migration Operator CR object:
. Deploy the updated {mtc-operator} CR object:
+
[source,terminal]
----
@@ -64,7 +64,7 @@ $ oc replace -f operator.yml
----
ifeval::["{mtc-version-z}" == "1.2.4"]
. If you are upgrading the {mtc-short} on an {product-title} 3.7 cluster, delete the Velero and Restic objects:
. If you are upgrading {mtc-short} on an {product-title} 3.7 cluster, delete the Velero and Restic objects:
+
[source,terminal]
----
@@ -95,9 +95,9 @@ endif::[]
[id='updating-service-account-token_{context}']
== Updating the service account token
If you are upgrading from CAM 1.1 to 1.2, you must update the service account token in the CAM web console.
If you are upgrading from {mtc-short} 1.1 to 1.2, you must update the service account token in the {mtc-short} web console.
CAM 1.1 uses the `mig` service account, while CAM 1.2 uses the `migration-controller` service account.
{mtc-short} 1.1 uses the `mig` service account, while {mtc-short} 1.2 uses the `migration-controller` service account.
.Procedure
@@ -108,7 +108,7 @@ CAM 1.1 uses the `mig` service account, while CAM 1.2 uses the `migration-contro
$ oc sa get-token -n openshift-migration migration-controller
----
. Log in to the CAM web console and click *Clusters*.
. Log in to the {mtc-short} web console and click *Clusters*.
. Click the *Options* menu {kebab} of the cluster and select *Edit*.
. Copy the new token to the *Service account token* field.
. Click *Update cluster* and then click *Close*.

@@ -5,7 +5,7 @@
The Control Plane Migration Assistant (CPMA) generates CR manifests, which are consumed by {product-title} {product-version} Operators, and a report that indicates which {product-title} 3 features are supported fully, partially, or not at all.
The CPMA can run in remote mode, retrieving the configuration files from the source cluster using SSH, or in local mode, using local copies of the source cluster's configuration files.
CPMA can run in remote mode, retrieving the configuration files from the source cluster using SSH, or in local mode, using local copies of the source cluster's configuration files.
.Prerequisites
@@ -35,6 +35,7 @@ $ cpma --manifests=false <1>
+
Each prompt requires you to provide input, as in the following example:
+
+.Example output
[source,terminal]
----
? Do you wish to save configuration for future use? true


@@ -6,15 +6,15 @@
[id='migration-viewing-migration-crs_{context}']
= Viewing migration Custom Resources
-The Cluster Application Migration (CAM) tool creates the following Custom Resources (CRs):
+The {mtc-first} creates the following Custom Resources (CRs):
image::migration-architecture.png[migration architecture diagram]
-image:darkcircle-1.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migcluster_types.go[MigCluster] (configuration, CAM cluster): Cluster definition
+image:darkcircle-1.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migcluster_types.go[MigCluster] (configuration, {mtc-short} cluster): Cluster definition
-image:darkcircle-2.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migstorage_types.go[MigStorage] (configuration, CAM cluster): Storage definition
+image:darkcircle-2.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migstorage_types.go[MigStorage] (configuration, {mtc-short} cluster): Storage definition
-image:darkcircle-3.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migplan_types.go[MigPlan] (configuration, CAM cluster): Migration plan
+image:darkcircle-3.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migplan_types.go[MigPlan] (configuration, {mtc-short} cluster): Migration plan
The MigPlan CR describes the source and target clusters, the replication repository, and the namespaces being migrated. It is associated with zero, one, or many MigMigration CRs.
@@ -23,11 +23,11 @@ The MigPlan CR describes the source and target clusters, repository, and namespa
Deleting a MigPlan CR deletes the associated MigMigration CRs.
====
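As an illustrative sketch of how these objects fit together, a minimal MigPlan manifest references its clusters, storage, and namespaces as follows; all resource names below are placeholders, and the field names follow the `migration.openshift.io/v1alpha1` API of mig-controller:

[source,yaml]
----
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: example-migplan
  namespace: openshift-migration
spec:
  srcMigClusterRef:        # source MigCluster
    name: example-source-cluster
    namespace: openshift-migration
  destMigClusterRef:       # target MigCluster
    name: host
    namespace: openshift-migration
  migStorageRef:           # MigStorage (replication repository)
    name: example-migstorage
    namespace: openshift-migration
  namespaces:              # namespaces to migrate
    - example-app
----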
-image:darkcircle-4.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/backup_storage_location.go[BackupStorageLocation] (configuration, CAM cluster): Location of Velero backup objects
+image:darkcircle-4.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/backup_storage_location.go[BackupStorageLocation] (configuration, {mtc-short} cluster): Location of Velero backup objects
-image:darkcircle-5.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/volume_snapshot_location.go[VolumeSnapshotLocation] (configuration, CAM cluster): Location of Velero volume snapshots
+image:darkcircle-5.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/volume_snapshot_location.go[VolumeSnapshotLocation] (configuration, {mtc-short} cluster): Location of Velero volume snapshots
-image:darkcircle-6.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migmigration_types.go[MigMigration] (action, CAM cluster): Migration, created during migration
+image:darkcircle-6.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migmigration_types.go[MigMigration] (action, {mtc-short} cluster): Migration, created during migration
A MigMigration CR is created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.
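Because each MigMigration CR is associated with a MigPlan CR, you can follow a migration from the CLI; a minimal sketch, assuming the default `openshift-migration` namespace (the migration name is a placeholder):

[source,terminal]
----
# List migration plans and the migrations created from them
$ oc get migplan,migmigration -n openshift-migration
# Inspect the status conditions of a specific migration
$ oc describe migmigration example-migmigration -n openshift-migration
----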