
Revert "OCP 3 to 4 migration"

This reverts commit 355ac609b5.
Avital Pinnick
2019-10-30 11:39:40 +02:00
parent 355ac609b5
commit cade1429a2
35 changed files with 0 additions and 1089 deletions

View File

@@ -996,13 +996,6 @@ Topics:
- Name: Recovering from expired control plane certificates
  File: scenario-3-expired-certs
---
Name: Migration
Dir: migration
Distros: openshift-enterprise
Topics:
- Name: Migrating OpenShift Container Platform 3 to 4
  File: migrating-openshift-3-to-4
---
Name: CLI tools
Dir: cli_reference
Distros: openshift-enterprise,openshift-origin,openshift-dedicated,openshift-online

Binary files not shown: 12 deleted images (eight icons of about 2 KiB each and four diagrams of 24 KiB to 130 KiB) referenced by the migration modules below.

View File

@@ -1 +0,0 @@
../images/

View File

@@ -1,85 +0,0 @@
[id="migrating-openshift-3-to-4"]
= Migrating {product-title} 3.x to 4.2
include::modules/common-attributes.adoc[]
:context: migrating-openshift-3-to-4
toc::[]
You can migrate application workloads from {product-title} 3.7 (and later) to {product-title} 4.2 with the Cluster Application Migration (CAM) tool. The CAM tool enables you to control the migration and to minimize application downtime.
The CAM tool's web console and API, based on Kubernetes custom resources, enable you to migrate stateful application workloads at the granularity of a namespace.
You can migrate data to a different storage class, for example, from Red Hat Gluster Storage or NFS storage on an {product-title} 3.x cluster to Red Hat Ceph Storage on an {product-title} 4.2 cluster.
Optionally, you can use the xref:migration-understanding-cpma_{context}[Control Plane Migration Assistant (CPMA)] to assist you in migrating control plane settings.
.Prerequisites
* The source cluster must be {product-title} 3.7 or later.
* The target cluster must be {product-title} 4.2.
* You must have xref:../cli_reference/openshift_cli/administrator-cli-commands.html#policy[`cluster-admin` privileges] on all clusters.
* You must have `podman` installed.
* You must have a replication repository that supports the S3 API and is accessible to the source and target clusters.
* If your application uses images from the `openshift` namespace, the required versions of the images must be present on the target cluster. If not, you must xref:../openshift_images/image-streams-manage.html#images-imagestreams-update-tag_image-streams-managing[update the `imagestreamtags` references] to use an available version that is compatible with your application.
+
[NOTE]
====
If the `imagestreamtags` cannot be updated, you can manually upload equivalent images to the application namespaces and update the applications to reference them.
The following `imagestreamtags` have been _removed_ from {product-title} 4.2:
* `dotnet:1.0`, `dotnet:1.1`, `dotnet:2.0`
* `dotnet-runtime:2.0`
* `mariadb:10.1`
* `mongodb:2.4`, `mongodb:2.6`
* `mysql:5.5`, `mysql:5.6`
* `nginx:1.8`
* `nodejs:0.10`, `nodejs:4`, `nodejs:6`
* `perl:5.16`, `perl:5.20`
* `php:5.5`, `php:5.6`
* `postgresql:9.2`, `postgresql:9.4`, `postgresql:9.5`
* `python:3.3`, `python:3.4`
* `ruby:2.0`, `ruby:2.2`
====
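If the target cluster has a newer compatible tag available, a minimal sketch of checking what exists and mirroring a tag into an application namespace with `oc tag` (the MySQL version and `<app_namespace>` are placeholders, not values taken from this procedure):
----
$ oc get istag -n openshift | grep mysql <1>
$ oc tag openshift/mysql:5.7 <app_namespace>/mysql:5.7 <2>
----
<1> List the image stream tags that are available in the `openshift` namespace on the target cluster.
<2> Mirror a compatible tag into the application namespace so that the application can reference it.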
include::modules/migration-understanding-cam.adoc[leveloffset=+1]
== Installing the CAM Operator
You must install the CAM Operator xref:installing-migration-operator-manually_{context}[manually on the {product-title} 3.x source cluster] and xref:installing-migration-operator-with-olm_{context}[with OLM on the {product-title} 4.2 target cluster].
include::modules/migration-installing-migration-operator-manually.adoc[leveloffset=+2]
include::modules/migration-installing-migration-operator-olm.adoc[leveloffset=+2]
// == Configuring cross-origin resource sharing
//
// You must configure cross-origin resource sharing on the source cluster to enable the CAM tool to communicate with the cluster's API server.
include::modules/migration-configuring-cors-3.adoc[leveloffset=+2]
// include::modules/migration-configuring-cors-4.adoc[leveloffset=+2]
== Migrating applications with the CAM web console
include::modules/migration-launching-cam.adoc[leveloffset=+2]
include::modules/migration-adding-cluster-to-cam.adoc[leveloffset=+2]
include::modules/migration-adding-replication-repository-to-cam.adoc[leveloffset=+2]
include::modules/migration-changing-migration-plan-limits.adoc[leveloffset=+2]
include::modules/migration-creating-migration-plan-cam.adoc[leveloffset=+2]
include::modules/migration-running-migration-plan-cam.adoc[leveloffset=+2]
== Migrating control plane settings with the Control Plane Migration Assistant
include::modules/migration-understanding-cpma.adoc[leveloffset=+2]
include::modules/migration-installing-cpma.adoc[leveloffset=+2]
include::modules/migration-using-cpma.adoc[leveloffset=+2]
== Troubleshooting a failed migration
You can view the migration custom resources (CRs) and download logs to troubleshoot a failed migration.
include::modules/migration-custom-resources.adoc[leveloffset=+2]
include::modules/migration-viewing-migration-crs.adoc[leveloffset=+2]
include::modules/migration-downloading-logs.adoc[leveloffset=+2]
include::modules/migration-restic-timeout.adoc[leveloffset=+2]
include::modules/migration-known-issues.adoc[leveloffset=+1]

View File

@@ -1 +0,0 @@
../modules/

View File

@@ -1,33 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-adding-cluster-to-cam_{context}']
= Adding a cluster to the CAM web console
You can add a source cluster to the CAM web console.
.Prerequisites
* Cross-origin resource sharing must be configured on the {product-title} 3 source cluster.
.Procedure
. Log in to the source cluster.
. Obtain the service account token:
+
----
$ oc sa get-token mig -n openshift-migration
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ
----
. Log in to the CAM web console on the {product-title} 4 cluster.
. In the *Clusters* section, click *Add cluster*.
. Fill in the following fields:
* *Cluster name*: May contain lower-case letters (`a-z`) and numbers (`0-9`). Must not contain spaces or international characters.
* *Url*: URL of the cluster's API server, for example, `\https://<master1.example.com>:8443`.
* *Service account token*: String that you obtained from the source cluster.
. Click *Add cluster*.
+
The cluster appears in the *Clusters* section.
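A minimal CLI check of the new cluster entry, assuming the default `openshift-migration` namespace on the {product-title} 4 cluster:
----
$ oc get migcluster -n openshift-migration
$ oc describe migcluster <cluster_name> -n openshift-migration <1>
----
<1> Inspect the status conditions to confirm that the CAM tool can reach the source cluster.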

View File

@@ -1,44 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-adding-replication-repository-to-cam_{context}']
= Adding a replication repository to the CAM web console
You can add a replication repository to the CAM web console.
.Prerequisites
* The replication repository must support the S3 API.
+
[NOTE]
====
You can deploy a local S3 object storage with the link:https://github.com/fusor/mig-operator/blob/release-1.0/docs/usage/ObjectStorage.md[upstream NooBaa project or AWS S3].
====
* The replication repository must be accessible to the source and target clusters.
.Procedure
. Log in to the CAM web console on the {product-title} 4 cluster.
. In the *Replication repositories* section, click *Add replication repository*.
. Fill in the following fields:
* *Replication repository name*
* *S3 bucket name*
* *S3 bucket region*: Required for AWS S3 if the bucket region is not *us-east-1*. Optional for a generic S3 repository.
* *S3 endpoint*: Required for a generic S3 repository. This is the URL of the S3 service, not the bucket, for example, `\http://s3-noobaa.apps.cluster.com`.
+
[NOTE]
====
Currently, `https://` is supported only for AWS. For other providers, use `http://`.
====
* *S3 provider access key*
* *S3 provider secret access key*
. Click *Add replication repository* and wait for connection validation.
. Click *Close*.
+
The replication repository appears in the *Replication repositories* section.
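The repository is stored as a MigStorage CR; a minimal CLI check, assuming the default `openshift-migration` namespace:
----
$ oc get migstorage -n openshift-migration
$ oc describe migstorage <repository_name> -n openshift-migration <1>
----
<1> Inspect the status conditions to confirm that the S3 endpoint and credentials were validated.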

View File

@@ -1,53 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-changing-migration-plan-limits_{context}']
= Changing migration plan limits for large migrations
You can change the migration plan limits for large migrations.
[IMPORTANT]
====
Changes should first be tested in your environment to avoid a failed migration.
====
A single migration plan has the following default limits:
* 10 namespaces
+
If this limit is exceeded, the CAM web console displays a *Namespace limit exceeded* error and you cannot create a migration plan.
* 100 Pods
+
If the Pod limit is exceeded, the CAM web console displays a warning message similar to the following example: *Plan has been validated with warning condition(s). See warning message. Pod limit: 3 exceeded, found: 4*.
* 100 persistent volumes
+
If the persistent volume limit is exceeded, the CAM web console displays a similar warning message.
.Procedure
. Edit the Migration controller CR:
+
----
$ oc get migrationcontroller -n openshift-migration
NAME                   AGE
migration-controller   5d19h
$ oc edit migrationcontroller -n openshift-migration
----
. Update the following parameters:
+
[source,yaml]
----
[...]
migration_controller: true
# This configuration is loaded into mig-controller, and should be set on the
# cluster where `migration_controller: true`
mig_pv_limit: 100
mig_pod_limit: 100
mig_namespace_limit: 10
[...]
----
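To confirm that the new limits were saved, a quick check from the CLI (a sketch, not part of the original procedure):
----
$ oc get migrationcontroller migration-controller -n openshift-migration -o yaml | grep _limit
----
The three `mig_*_limit` parameters show the values that you set.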

View File

@@ -1,66 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-configuring-cors-3_{context}']
= Configuring cross-origin resource sharing on an {product-title} 3 cluster
You must configure cross-origin resource sharing on the {product-title} 3 cluster to enable communication between the CAM tool on the target cluster and the source cluster's API server.
.Procedure
. Log in to the {product-title} 4 cluster.
. Obtain the value for the CAM tool's CORS configuration:
+
----
$ oc get -n openshift-migration route/migration -o go-template='(?i)//{{ .spec.host }}(:|\z){{ println }}' | sed 's,\.,\\.,g'
----
. Log in to the {product-title} 3 cluster.
. Add the CORS configuration value to the `corsAllowedOrigins` stanza in the `/etc/origin/master/master-config.yaml` configuration file:
+
----
corsAllowedOrigins:
- (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z) <1>
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//kubernetes\.default(:|\z)
----
<1> Replace with the CORS configuration value that you obtained for the CAM tool.
. Restart the API server and controller manager to apply the changes:
+
* In {product-title} 3.7 and 3.9, these components run as stand-alone host processes managed by `systemd` and are restarted by running the following command:
+
----
$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
----
* In {product-title} 3.10 and 3.11, these components run in static Pods managed by a kubelet and are restarted by running the following commands:
+
----
$ /usr/local/bin/master-restart api
$ /usr/local/bin/master-restart controller
----
. Verify the configuration:
+
----
$ curl -v -k -X OPTIONS \
"<cluster_url>/apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migclusters" \ <1>
-H "Access-Control-Request-Method: GET" \
-H "Access-Control-Request-Headers: authorization" \
-H "Origin: https://<CAM_tool_url>" <2>
----
<1> Update the source cluster URL.
<2> Update the CAM tool URL.
+
The output appears similar to the following:
+
----
< HTTP/2 204
< access-control-allow-credentials: true
< access-control-allow-headers: Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, X-Requested-With, If-Modified-Since
< access-control-allow-methods: POST, GET, OPTIONS, PUT, DELETE, PATCH
< access-control-allow-origin: https://migration-openshift-migration.apps.cluster
< access-control-expose-headers: Date
< cache-control: no-store
----

View File

@@ -1,78 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-configuring-cors-4_{context}']
= Configuring cross-origin resource sharing on {product-title} 4 clusters
If you are migrating to or from an {product-title} 4 cluster that does not have the CAM tool installed, you must configure cross-origin resource sharing so that the CAM tool can access the cluster's API server.
.Procedure
. Log in to the CAM tool cluster.
. Obtain the value for the CAM tool's CORS configuration:
+
----
$ oc get -n openshift-migration route/migration -o go-template='(?i)//{{ .spec.host }}(:|\z){{ println }}' | sed 's,\.,\\.,g'
----
. Log in to the {product-title} 4 cluster.
. Edit the API server CR:
+
----
$ oc edit authentication.operator cluster
----
. Add the CORS configuration value to the `additionalCORSAllowedOrigins` stanza:
+
[source,yaml]
----
spec:
  additionalCORSAllowedOrigins:
  - (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z) <1>
----
<1> Replace with the CORS configuration value that you obtained for the CAM tool.
. Save the file to apply the changes.
. Edit the Kubernetes API server CR:
+
----
$ oc edit kubeapiserver.operator cluster
----
. If you are configuring CORS on an {product-title} 4.1 cluster, add `corsAllowedOrigins` and the CORS configuration value to the `unsupportedConfigOverrides` stanza:
+
[source,yaml]
----
spec:
  unsupportedConfigOverrides:
    corsAllowedOrigins:
    - (?i)//migration-openshift-migration\.apps\.cluster\.com(:|\z) <1>
----
<1> Replace with the CORS configuration value that you obtained for the CAM tool.
. Save the file to apply the changes.
. Verify the configuration:
+
----
$ curl -v -k -X OPTIONS \
"<cluster_url>/apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migclusters" \ <1>
-H "Access-Control-Request-Method: GET" \
-H "Access-Control-Request-Headers: authorization" \
-H "Origin: https://<CAM_tool_url>" <2>
----
<1> `<cluster_url>`: Update the cluster URL.
<2> `<CAM_tool_url>`: Update the CAM tool URL.
+
The output appears similar to the following:
+
----
< HTTP/2 204
< access-control-allow-credentials: true
< access-control-allow-headers: Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, X-Requested-With, If-Modified-Since
< access-control-allow-methods: POST, GET, OPTIONS, PUT, DELETE, PATCH
< access-control-allow-origin: https://migration-openshift-migration.apps.cluster
< access-control-expose-headers: Date
< cache-control: no-store
----

View File

@@ -1,42 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-creating-migration-plan-cam_{context}']
= Creating a migration plan in the CAM web console
You can create a migration plan in the CAM web console.
.Prerequisites
The CAM web console must contain the following:
* Source cluster
* Target cluster, which is added automatically during the CAM tool installation
* Replication repository
.Procedure
. Log in to the CAM web console on the {product-title} 4 cluster.
. In the *Plans* section, click *Add plan*.
. Enter the *Plan name* and click *Next*.
+
The *Plan name* can contain up to 253 lower-case alphanumeric characters (`a-z, 0-9`). It must not contain spaces or underscores (`_`).
. Select a *Source cluster*.
. Select a *Target cluster*.
. Select a *Replication repository*.
. Select the projects to be migrated and click *Next*.
. Select *Copy* or *Move* for the PVs:
* *Copy* copies the data in a source cluster's PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster.
* *Move* unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
. Click *Next*.
. Select a *Storage class* for the PVs.
+
You can change the storage class, for example, from Red Hat Gluster Storage or NFS storage on an {product-title} 3.x cluster to Red Hat Ceph Storage on an {product-title} 4.2 cluster.
. Click *Next*.
. Click *Close*.
+
The migration plan appears in the *Plans* section.

View File

@@ -1,40 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-custom-resources_{context}']
= Understanding the migration custom resources
The CAM tool creates the following custom resources (CRs) for migration:
image::migration-architecture.png[migration architecture diagram]
image:darkcircle-1.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migcluster_types.go[MigCluster] (configuration, CAM cluster): Cluster definition
image:darkcircle-2.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migstorage_types.go[MigStorage] (configuration, CAM cluster): Storage definition
image:darkcircle-3.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migplan_types.go[MigPlan] (configuration, CAM cluster): Migration plan
The MigPlan CR describes the source and target clusters, repository, and namespace(s) being migrated. It is associated with 0, 1, or many MigMigration CRs.
[NOTE]
====
Deleting a MigPlan CR deletes the associated MigMigration CRs.
====
image:darkcircle-4.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/backup_storage_location.go[BackupStorageLocation] (configuration, CAM cluster): Location of Velero backup objects
image:darkcircle-5.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/volume_snapshot_location.go[VolumeSnapshotLocation] (configuration, CAM cluster): Location of Velero volume snapshots
image:darkcircle-6.png[20,20] link:https://github.com/fusor/mig-controller/blob/master/pkg/apis/migration/v1alpha1/migmigration_types.go[MigMigration] (action, CAM cluster): Migration, created during migration
A MigMigration CR is created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR.
image:darkcircle-7.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/backup.go[Backup] (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster:
* Backup CR #1 for Kubernetes objects
* Backup CR #2 for PV data
image:darkcircle-8.png[20,20] link:https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/restore.go[Restore] (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster:
* Restore CR #1 (using Backup CR #2) for PV data
* Restore CR #2 (using Backup CR #1) for Kubernetes objects
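To see these CRs on a running system, a minimal sketch (the configuration CRs live in the `openshift-migration` namespace on the CAM cluster):
----
$ oc get migcluster,migstorage,migplan,migmigration -n openshift-migration <1>
$ oc get backups.velero.io,restores.velero.io -n openshift-migration <2>
----
<1> Configuration and action CRs created by the CAM tool.
<2> Velero Backup CRs are created on the source cluster and Restore CRs on the target cluster.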

View File

@@ -1,26 +0,0 @@
// Module included in the following assemblies:
// migration/migrating-openshift-3-to-4.adoc
[id='migration-downloading-logs_{context}']
= Downloading migration logs
You can download the Velero, Restic, and Migration controller logs in the CAM web console to troubleshoot a failed migration.
.Procedure
. Click the *Options* menu {kebab} of a migration plan and select *Logs*.
. To download a specific log, select the following:
* *Cluster*: Source or target cluster
* *Log source*: Velero, Restic, or Migration controller
* *Pod source*: For example, `velero-7659c69dd7-ctb5x`
. Click *Download all logs* to download the Migration controller log and the Velero and Restic logs of the source and target clusters.
Optionally, you can access the logs by using the CLI, as in the following example:
----
$ oc get pods -n openshift-migration | grep controller
controller-manager-78c469849c-v6wcf   1/1     Running     0          4h49m
$ oc logs controller-manager-78c469849c-v6wcf -f -n openshift-migration
----
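The Velero and Restic logs can be retrieved the same way; a minimal sketch, assuming the default `openshift-migration` namespace:
----
$ oc get pods -n openshift-migration | grep -E 'velero|restic' <1>
$ oc logs <velero_pod> -n openshift-migration <2>
----
<1> List the Velero and Restic Pods on the source or target cluster.
<2> Replace `<velero_pod>` with a Pod name from the previous command. Add `-f` to follow the log.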

View File

@@ -1,19 +0,0 @@
// Module included in the following assemblies:
// migration/migrating-openshift-3-to-4.adoc
[id='migration-installing-cpma_{context}']
= Installing the Control Plane Migration Assistant
You can download the Control Plane Migration Assistant (CPMA) binary file from the Red Hat Customer Portal and install it on Linux, macOS, or Windows operating systems.
.Procedure
. In the link:https://access.redhat.com[Red Hat Customer Portal], navigate to *Downloads* -> *Red Hat {product-title}*.
. On the *Download Red Hat {product-title}* page, select *Red Hat {product-title}* from the *Product Variant* list.
. Select *CPMA 1.0 for RHEL 7* from the *Version* list. This binary works on RHEL 7 and RHEL 8.
. Click *Download Now* to download `cpma` for Linux or macOS, or `cpma.exe` for Windows.
. Save the file in a directory that is included in your `$PATH` (Linux or macOS) or `%PATH%` (Windows).
. For Linux, make the file executable:
+
----
$ sudo chmod +x cpma
----

View File

@@ -1,70 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id="installing-migration-operator-manually_{context}"]
= Installing the CAM Operator manually on {product-title} 3
You can install the CAM Operator manually on an {product-title} 3 source cluster, which does not support OLM.
// [NOTE]
// ====
// You can install the CAM Operator manually on an {product-title} 4 cluster, but it is normally installed with OLM.
// ====
.Prerequisites
* You must have `podman` installed.
.Procedure
. Download the `operator.yml` file:
+
----
$ podman cp $(podman create registry.redhat.io/rhcam-1-0/openshift-migration-rhel7-operator:v1.0 ):/operator.yml ./
----
. Download the `controller-3.yml` file:
//
// * {product-title} 3:
+
----
$ podman cp $(podman create registry.redhat.io/rhcam-1-0/openshift-migration-rhel7-operator:v1.0 ):/controller-3.yml ./
----
//
// * {product-title} 4:
// +
// ----
// $ podman cp $(podman create registry.redhat.io/rhcam-1-0/openshift-migration-rhel7-operator:v1.0 ):/controller-4.yml ./
// ----
// +
// The `controller-4.yml` file installs the Migration controller CR and CAM web console on the {product-title} 4 cluster by default. If you do not want to install them on this cluster, update the values of the following parameters:
// +
// [source,yaml]
// ----
// migration_controller: false
// migration_ui: false
// ----
. Create the CAM Operator CR object:
+
----
$ oc create -f operator.yml
----
. Create the Migration controller CR object:
//
// * {product-title} 3:
+
----
$ oc create -f controller-3.yml
----
//
// * {product-title} 4:
// +
// ----
// $ oc create -f controller-4.yml
// ----
. Use the `oc get pods` command to verify that Velero is running.
// +
// If you installed the Migration controller and CAM web console on the cluster, verify that the migration controller and migration UI are also running.
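For example, a minimal check, assuming that the Operator created its resources in the `openshift-migration` namespace defined in `operator.yml`:
----
$ oc get pods -n openshift-migration | grep velero
----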

View File

@@ -1,70 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id="installing-migration-operator-with-olm_{context}"]
= Installing the CAM Operator with OLM on {product-title} 4
You can install the CAM Operator on the {product-title} 4 target cluster with OLM.
The CAM Operator installs the Migration controller CR and the CAM web console on this cluster.
.Procedure
. In the {product-title} web console, click *Administration* -> *Namespaces*.
. On the *Namespaces* page:
.. Click *Create Namespace*.
.. Enter `openshift-migration` in the *Name* field and click *Create*.
. Click *Operators* -> *OperatorHub*.
. On the *OperatorHub* page:
.. Scroll or type a keyword into the *Filter by keyword* field (in this case, *Migration*) to find the *Cluster Application Migration Operator*.
.. Select the *Cluster Application Migration Operator* and click *Install*.
. On the *Create Operator Subscription* page:
.. Select the *openshift-migration* namespace if it is not already selected.
.. Select an *Automatic* or *Manual* approval strategy.
.. Click *Subscribe*.
. Click *Catalog* -> *Installed Operators*.
+
The *Cluster Application Migration Operator* is listed in the *openshift-migration* project with the status *InstallSucceeded*.
. On the *Installed Operators* page:
.. Under *Provided APIs*, click *View 12 more...*.
.. Click *Create New* -> *MigrationController*.
.. Click *Create*.
. Click *Workloads* -> *Pods* to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running.
// . Click *Networking* -> *Routes*.
// +
// In the `openshift-migration` namespace, the CAM tool URL is the URL listed under *Location*. You will use the CAM tool URL to configure cross-origin resource sharing on the source cluster and to launch the CAM web console.
// .. If you are installing the CAM Operator on an {product-title} _4.1_ cluster, add the following parameter:
// +
// [source,yaml]
// ----
// spec:
// deprecated_cors_configuration: true
// ----
//
// .. If you are installing the CAM Operator on an {product-title} 4 _source_ cluster, update the following parameter values so that the Migration controller and CAM web console are not installed:
// +
// [source,yaml]
// ----
// spec:
// ...
// migration_controller: false
// migration_ui: false
// ----
// .. Click *Create*.
// . If you installed the Migration controller and CAM web console on this cluster:
//
// . Click *Workloads* -> *Pods* to verify that the Controller manager, Migration UI, Restic, and Velero Pods are running.
// . Click *Networking* -> *Routes*. In the `openshift-migration` namespace, the CAM tool URL is the URL listed under *Location*. You will use the CAM tool URL to configure cross-origin resource sharing on the source cluster and to launch the CAM web console.
// . Click *Networking* -> *Routes*.
// +
// In the `openshift-migration` namespace, the CAM tool URL is the URL listed under *Location* if the CAM is running on this cluster. You will use the CAM tool URL to configure cross-origin resource sharing on the other clusters and to launch the CAM web console.
//
// . If you did not install the Migration controller and CAM web console, click *Workloads* -> *Pods* to verify that the Restic and Velero Pods are running.

View File

@@ -1,20 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-known-issues_{context}']
= Known issues
This release has the following known issues:
* During migration, the CAM tool preserves the following namespace annotations:
** `openshift.io/sa.scc.mcs`
** `openshift.io/sa.scc.supplemental-groups`
** `openshift.io/sa.scc.uid-range`
+
These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1748440[*BZ#1748440*])
* When adding an S3 endpoint to the CAM web console, `https://` is supported only for AWS. For other S3 providers, use `http://`.
* If an AWS bucket is added to the CAM web console and then deleted, its status remains `True` because the MigStorage CR is not updated. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1738564[*BZ#1738564*])
* Migration fails if the Migration controller is running on a cluster other than the target cluster. The `EnsureCloudSecretPropagated` phase is skipped with a logged warning. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1757571[*BZ#1757571*])
* Cluster-scoped resources, including Cluster Role Bindings and Security Context Constraints, are not yet handled by the CAM. If your applications require cluster-scoped resources, you must create them manually on the target cluster. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1759804[*BZ#1759804*])
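A minimal, hedged sketch of copying one such resource by hand (the resource name is a placeholder):
----
$ oc get clusterrolebinding <name> -o yaml > clusterrolebinding.yaml <1>
$ oc apply -f clusterrolebinding.yaml <2>
----
<1> Run against the source cluster. Review the file and remove cluster-specific metadata, such as `resourceVersion` and `uid`, before reusing it.
<2> Run against the target cluster after logging in to it.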

View File

@@ -1,31 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id="migration-launching-cam_{context}"]
= Launching the CAM web console
You can launch the CAM web console that is installed on the {product-title} 4.2 target cluster.
.Procedure
. Log in to the {product-title} 4.2 cluster.
. Obtain the CAM web console URL:
+
----
$ oc get -n openshift-migration route/migration -o go-template='(?i)//{{ .spec.host }}(:|\z){{ println }}' | sed 's,\.,\\.,g'
----
. If you are using self-signed certificates, launch a browser and accept the CA certificates manually for the following:
* CAM tool host's OAuth and API server, for example, `\https://<CAM_web_console_URL>:6443/.well-known/oauth-authorization-server`
* Source cluster's OAuth server, for example, `\https://<master1.cluster.com>/.well-known/oauth-authorization-server`. `<master1.cluster.com>` is the URL of the source cluster's master node.
* Source cluster's API server, for example, `\https://<master1.cluster.com>/api/v1/namespaces`. `<master1.cluster.com>` is the URL of the source cluster's master node.
. Navigate to the CAM web console URL.
+
[NOTE]
====
If you log in to the CAM web console immediately after installing the CAM Operator, the web console may not load because the Operator is still configuring the cluster and enabling cross-origin resource sharing. Wait a few minutes and retry.
====
. Log in with your {product-title} *username* and *password*.

View File

@@ -1,29 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-restic-timeout_{context}']
= Restic timeout error
If a migration fails because Restic times out, the following error appears in the Velero log:
----
level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1
----
The default value of `restic_timeout` is one hour. You can increase this for large migrations, keeping in mind that a higher value may delay the return of error messages.
.Procedure
. In the {product-title} web console, navigate to *Catalog* -> *Installed Operators*.
. Click *Cluster Application Migration Operator*.
. In the *MigrationController* tab, click *migration-controller*.
. In the *YAML* tab, update the following parameter value:
+
[source,yaml]
----
spec:
  restic_timeout: 1h <1>
----
<1> Valid units are `h` (hours), `m` (minutes), and `s` (seconds), for example, `3h30m15s`.
. Click *Save*.

View File

@@ -1,37 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-running-migration-plan-cam_{context}']
= Running a migration plan in the CAM web console
You can stage or migrate applications and data with the migration plan you created in the CAM web console.
.Prerequisites
The CAM web console must contain the following:
* Source cluster
* Target cluster, which is added automatically during the CAM tool installation
* Replication repository
* Valid migration plan
.Procedure
. Log in to the CAM web console on the {product-title} 4 cluster.
. Select a migration plan.
. Click *Stage* to copy data from the source cluster to the target cluster without stopping the application.
+
You can run *Stage* multiple times to reduce the actual migration time.
. When you are ready to migrate the application workload, click *Migrate*.
+
*Migrate* stops the application workload on the source cluster and recreates its resources on the target cluster.
. Optionally, in the *Migrate* window, you can select *Do not stop applications on the source cluster during migration*.
. Click *Migrate*.
. When the migration is complete, verify that the application migrated successfully in the {product-title} 4.2 web console:
.. Click *Home* -> *Projects*.
.. Click the migrated project to view its status.
.. In the *Routes* section, click *Location* to verify that the application is functioning.
.. Click *Storage* -> *Persistent volumes* to verify that the migrated persistent volume is correctly provisioned.
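You can make the same checks from the CLI; a minimal sketch (the project name is whatever you migrated):
----
$ oc get pods -n <migrated_project> <1>
$ oc get pvc -n <migrated_project> <2>
----
<1> Confirm that the application Pods are running on the target cluster.
<2> Confirm that the persistent volume claims are bound.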

View File

@@ -1,40 +0,0 @@
// Module included in the following assemblies:
//
// migration/migrating_openshift_3_to_4/migrating-openshift-3-to-4.adoc
[id='migration-understanding-cam_{context}']
= Understanding the Cluster Application Migration tool
The Cluster Application Migration (CAM) tool enables you to migrate Kubernetes resources, persistent volume data, and internal container images from an {product-title} 3.x source cluster to an {product-title} 4.2 target cluster, using the CAM web console or the Kubernetes API.
Migrating an application with the CAM web console involves the following steps:
. Installing the CAM Operator manually on the source cluster
. Installing the CAM Operator with OLM on the target cluster
. Configuring cross-origin resource sharing on the source cluster
. Launching the CAM web console
. Adding the source cluster to the CAM web console
. Adding a replication repository to the CAM web console
. Creating a migration plan, with one of the following options:
* *Copy* copies the data in a source cluster's PV to the replication repository and then restores it on a newly created PV, with similar characteristics, in the target cluster. You can change the storage class during migration.
+
image::migration-PV-copy.png[]
* *Move* unmounts a remote volume (for example, NFS) from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters.
+
[NOTE]
====
Although the replication repository does not appear in this diagram, it is required for the actual migration.
====
+
image::migration-PV-move.png[]
. Running the migration plan, with one of the following options:
* *Stage* (optional) copies data to the target cluster without stopping the application.
+
Staging can be run multiple times so that most of the data is copied to the target before migration. This minimizes the actual migration time and application downtime.
* *Migrate* stops the application workload on the source cluster and recreates its resources on the target cluster. Optionally, you can choose to keep the application running when you migrate the workload.
image::OCP_3_to_4_App_migration.png[]

View File

@@ -1,41 +0,0 @@
// Module included in the following assemblies:
// migration/migrating-openshift-3-to-4.adoc
[id='migration-understanding-cpma_{context}']
= Understanding the Control Plane Migration Assistant
The Control Plane Migration Assistant (CPMA) is a CLI-based tool that assists you in migrating the control plane from {product-title} 3.7 (or later) to {product-title} 4.2. The CPMA processes the {product-title} 3 configuration files and generates Custom Resource (CR) manifest files, which are consumed by {product-title} 4.2 Operators.
Because {product-title} 3 and 4 have significant configuration differences, not all parameters are processed. The CPMA can generate a report that describes whether features are supported fully, partially, or not at all.
.Configuration files
CPMA uses the Kubernetes and {product-title} APIs to access the following configuration files on an {product-title} 3 cluster:
* Master configuration file (default: `/etc/origin/master/master-config.yaml`)
* CRI-O configuration file (default: `/etc/crio/crio.conf`)
* etcd configuration file (default: `/etc/etcd/etcd.conf`)
* Image registries file (default: `/etc/containers/registries.conf`)
* Dependent configuration files:
** Password files (for example, HTPasswd)
** ConfigMaps
** Secrets
.CR Manifests
CPMA generates CR manifests for the following configurations:
* API server CA certificate: `100_CPMA-cluster-config-APISecret.yaml`
+
[NOTE]
====
If you are using an unsigned API server CA certificate, you must xref:../authentication/certificates/api-server.html#add-named-api-server_api-server-certificates[add the certificate manually] to the {product-title} 4.2 target cluster.
====
* CRI-O: `100_CPMA-crio-config.yaml`
* Cluster resource quota: `100_CPMA-cluster-quota-resource-x.yaml`
* Project resource quota: `100_CPMA-resource-quota-x.yaml`
* Portable image registry (`/etc/containers/registries.conf`) and portable image policy (`/etc/origin/master/master-config.yaml`): `100_CPMA-cluster-config-image.yaml`
* OAuth providers: `100_CPMA-cluster-config-oauth.yaml`
* Project configuration: `100_CPMA-cluster-config-project.yaml`
* Scheduler: `100_CPMA-cluster-config-scheduler.yaml`
* SDN: `100_CPMA-cluster-config-sdn.yaml`

View File

@@ -1,100 +0,0 @@
// Module included in the following assemblies:
// migration/migrating-openshift-3-to-4.adoc
[id='migration-using-cpma_{context}']
= Using the Control Plane Migration Assistant
The Control Plane Migration Assistant (CPMA) generates CR manifests, which are consumed by {product-title} 4.2 Operators, and a report that indicates which {product-title} 3 features are supported fully, partially, or not at all.
The CPMA can run in remote mode, retrieving the configuration files from the source cluster using SSH, or in local mode, using local copies of the source cluster's configuration files.
.Prerequisites
* The source cluster must be {product-title} 3.7 or later.
* The source cluster must be updated to the latest synchronous release.
* An environment health check must be run on the source cluster to confirm that there are no diagnostic errors or warnings.
* The CPMA binary must be executable.
* You must have `cluster-admin` privileges for the source cluster.
.Procedure
. Log in to the {product-title} 3 cluster:
+
----
$ oc login https://<master1.example.com> <1>
----
<1> {product-title} 3 master node. You must be logged in to the cluster to receive a token for the Kubernetes and {product-title} APIs.
. Run the CPMA. Each prompt requires you to provide input, as in the following example:
+
----
$ cpma --manifests=false <1>
? Do you wish to save configuration for future use? true
? What will be the source for OCP3 config files? Remote host <2>
? Path to crio config file /etc/crio/crio.conf
? Path to etcd config file /etc/etcd/etcd.conf
? Path to master config file /etc/origin/master/master-config.yaml
? Path to node config file /etc/origin/node/node-config.yaml
? Path to registries config file /etc/containers/registries.conf
? Do wish to find source cluster using KUBECONFIG or prompt it? KUBECONFIG
? Select cluster obtained from KUBECONFIG contexts master1-example-com:443
? Select master node master1.example.com
? SSH login root <3>
? SSH Port 22
? Path to private SSH key /home/user/.ssh/openshift_key
? Path to application data, skip to use current directory .
INFO[29 Aug 19 00:07 UTC] Starting manifest and report generation
INFO[29 Aug 19 00:07 UTC] Transform:Starting for - API
INFO[29 Aug 19 00:07 UTC] APITransform::Extract
INFO[29 Aug 19 00:07 UTC] APITransform::Transform:Reports
INFO[29 Aug 19 00:07 UTC] Transform:Starting for - Cluster
INFO[29 Aug 19 00:08 UTC] ClusterTransform::Transform:Reports
INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportQuotas
INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportPVs
INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportNamespaces
INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportNodes
INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportRBAC
INFO[29 Aug 19 00:08 UTC] ClusterReport::ReportStorageClasses
INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Crio
INFO[29 Aug 19 00:08 UTC] CrioTransform::Extract
WARN[29 Aug 19 00:08 UTC] Skipping Crio: No configuration file available
INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Docker
INFO[29 Aug 19 00:08 UTC] DockerTransform::Extract
INFO[29 Aug 19 00:08 UTC] DockerTransform::Transform:Reports
INFO[29 Aug 19 00:08 UTC] Transform:Starting for - ETCD
INFO[29 Aug 19 00:08 UTC] ETCDTransform::Extract
INFO[29 Aug 19 00:08 UTC] ETCDTransform::Transform:Reports
INFO[29 Aug 19 00:08 UTC] Transform:Starting for - OAuth
INFO[29 Aug 19 00:08 UTC] OAuthTransform::Extract
INFO[29 Aug 19 00:08 UTC] OAuthTransform::Transform:Reports
INFO[29 Aug 19 00:08 UTC] Transform:Starting for - SDN
INFO[29 Aug 19 00:08 UTC] SDNTransform::Extract
INFO[29 Aug 19 00:08 UTC] SDNTransform::Transform:Reports
INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Image
INFO[29 Aug 19 00:08 UTC] ImageTransform::Extract
INFO[29 Aug 19 00:08 UTC] ImageTransform::Transform:Reports
INFO[29 Aug 19 00:08 UTC] Transform:Starting for - Project
INFO[29 Aug 19 00:08 UTC] ProjectTransform::Extract
INFO[29 Aug 19 00:08 UTC] ProjectTransform::Transform:Reports
INFO[29 Aug 19 00:08 UTC] Flushing reports to disk
INFO[29 Aug 19 00:08 UTC] Report:Added: report.json
INFO[29 Aug 19 00:08 UTC] Report:Added: report.html
INFO[29 Aug 19 00:08 UTC] Successfully finished transformations
----
<1> `--manifests=false`: Without generating CR manifests
<2> `Remote host`: Remote mode
<3> `SSH login`: The SSH user must have `sudo` permissions on the {product-title} 3 cluster in order to access the configuration files.
+
The CPMA creates the following files and directory in the current directory if you did not specify an output directory:
* `cpma.yaml` file: Configuration options that you provided when you ran the CPMA
* `/master1.example.com`: Configuration files from the master node
* `report.json`: JSON-encoded report
* `report.html`: HTML-encoded report
. Open the `report.html` file in a browser to view the CPMA report.
. If you generated CR manifests, apply them to the {product-title} 4 cluster, as in the following example:
+
----
$ oc apply -f 100_CPMA-cluster-config-secret-htpasswd-secret.yaml
----
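If the CPMA generated several manifests, you can apply them together; a minimal sketch, assuming the generated files were collected in a `manifests/` directory:
----
$ oc apply -f manifests/ <1>
----
<1> `oc apply -f` accepts a directory and applies every manifest in it. Review the generated files before applying them.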

View File

@@ -1,156 +0,0 @@
// Module included in the following assemblies:
// migration/migrating-openshift-3-to-4.adoc
[id='migration-viewing-migration-crs_{context}']
= Viewing migration custom resources
To view a migration custom resource:
----
$ oc get <custom_resource> -n openshift-migration
NAME                                   AGE
88435fe0-c9f8-11e9-85e6-5d593ce65e10   6m42s
$ oc describe <custom_resource> 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration
----
.MigMigration example
----
$ oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration
Name:         88435fe0-c9f8-11e9-85e6-5d593ce65e10
Namespace:    openshift-migration
Labels:       <none>
Annotations:  touch: 3b48b543-b53e-4e44-9d34-33563f0f8147
API Version:  migration.openshift.io/v1alpha1
Kind:         MigMigration
Metadata:
  Creation Timestamp:  2019-08-29T01:01:29Z
  Generation:          20
  Resource Version:    88179
  Self Link:           /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10
  UID:                 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
Spec:
  Mig Plan Ref:
    Name:       socks-shop-mig-plan
    Namespace:  openshift-migration
  Quiesce Pods:  true
  Stage:         false
Status:
  Conditions:
    Category:              Advisory
    Durable:               true
    Last Transition Time:  2019-08-29T01:03:40Z
    Message:               The migration has completed successfully.
    Reason:                Completed
    Status:                True
    Type:                  Succeeded
  Phase:            Completed
  Start Timestamp:  2019-08-29T01:01:29Z
Events:  <none>
----
.Velero backup CR #2 example (PV data)
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.105.179:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6
  creationTimestamp: "2019-08-29T01:03:15Z"
  generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-
  generation: 1
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
    velero.io/storage-location: myrepo-vpzq9
  name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  namespace: openshift-migration
  resourceVersion: "87313"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7
  uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6
spec:
  excludedNamespaces: []
  excludedResources: []
  hooks:
    resources: []
  includeClusterResources: null
  includedNamespaces:
  - sock-shop
  includedResources:
  - persistentvolumes
  - persistentvolumeclaims
  - namespaces
  - imagestreams
  - imagestreamtags
  - secrets
  - configmaps
  - pods
  labelSelector:
    matchLabels:
      migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6
  storageLocation: myrepo-vpzq9
  ttl: 720h0m0s
  volumeSnapshotLocations:
  - myrepo-wv6fx
status:
  completionTimestamp: "2019-08-29T01:02:36Z"
  errors: 0
  expiration: "2019-09-28T01:02:35Z"
  phase: Completed
  startTimestamp: "2019-08-29T01:02:35Z"
  validationErrors: null
  version: 1
  volumeSnapshotsAttempted: 0
  volumeSnapshotsCompleted: 0
  warnings: 0
----
.Velero restore CR #2 example (Kubernetes resources)
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  annotations:
    openshift.io/migrate-copy-phase: final
    openshift.io/migrate-quiesce-pods: "true"
    openshift.io/migration-registry: 172.30.90.187:5000
    openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88
  creationTimestamp: "2019-08-28T00:09:49Z"
  generateName: e13a1b60-c927-11e9-9555-d129df7f3b96-
  generation: 3
  labels:
    app.kubernetes.io/part-of: migration
    migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88
    migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88
  name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  namespace: openshift-migration
  resourceVersion: "82329"
  selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx
  uid: 26983ec0-c928-11e9-825a-06fa9fb68c88
spec:
  backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f
  excludedNamespaces: null
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - resticrepositories.velero.io
  includedNamespaces: null
  includedResources: null
  namespaceMapping: null
  restorePVs: true
status:
  errors: 0
  failureReason: ""
  phase: Completed
  validationErrors: null
  warnings: 15
----