TELCODOCS-1576: Image-based update with Lifecycle Agent Operator

commit 32671a19f2, parent c0696bced0, committed by openshift-cherrypick-robot
@@ -279,3 +279,5 @@ endif::[]
:odf-full: Red Hat OpenShift Data Foundation
:odf-short: ODF
:rh-dev-hub: Red Hat Developer Hub
//IBU
:lcao: Lifecycle Agent
@@ -2965,6 +2965,8 @@ Topics:
      File: ztp-sno-additional-worker-node
    - Name: Pre-caching images for single-node OpenShift deployments
      File: ztp-precaching-tool
    - Name: Image-based upgrade for single-node OpenShift clusters
      File: ztp-image-based-upgrade
---
- Name: Reference design specifications
  Dir: telco_ref_design_specs
217 modules/ztp-image-based-upgrade-generate-seed-image.adoc (new file)
@@ -0,0 +1,217 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-image-based-upgrade-seed-generation_{context}"]
= Generating a seed image with the {lcao}

Use the {lcao} to generate the seed image with the `SeedGenerator` CR. The Operator checks for required system configurations, performs any necessary system cleanup before generating the seed image, and launches the image generation. The seed image generation includes the following tasks:

* Stopping cluster operators
* Preparing the seed image configuration
* Generating and pushing the seed image to the image repository specified in the `SeedGenerator` CR
* Restoring cluster operators
* Expiring seed cluster certificates
* Generating new certificates for the seed cluster
* Restoring and updating the `SeedGenerator` CR on the seed cluster

[NOTE]
====
The generated seed image does not include any site-specific data.
====

[IMPORTANT]
====
During the Developer Preview of this feature, when upgrading a cluster, any custom trusted certificates configured on the cluster are lost. As a temporary workaround, to preserve these certificates, you must use a seed image from a seed cluster that trusts the certificates.
====

.Prerequisites

* Deploy a {sno} cluster with a DU profile.
* Install the {lcao} on the seed cluster.
* Install the OADP Operator on the seed cluster.
* Log in as a user with `cluster-admin` privileges.
* The seed cluster has the same CPU topology as the target cluster.
* The seed cluster has the same IP version as the target cluster.

+
[NOTE]
====
Dual-stack networking is not supported in this release.
====

* If the target cluster has a proxy configuration, the seed cluster must have a proxy configuration too. The proxy configurations do not have to be identical.
* The seed cluster is registered as a managed cluster.
* The {lcao} deployed on the target cluster is compatible with the version in the seed image.
* The seed cluster has a separate partition for the container images that is shared between stateroots. For more information, see _Additional resources_.

[WARNING]
====
If the target cluster has multiple IP addresses and one of them belongs to the subnet that was used for creating the seed image, the upgrade fails unless the target cluster's node IP belongs to that subnet.
====

.Procedure

. Detach the seed cluster from the hub cluster, either manually or, if you are using GitOps ZTP, by removing the `SiteConfig` CR from the `kustomization.yaml` file.
Detaching the cluster removes any cluster-specific resources from the seed cluster that must not be in the seed image.

.. If you are using {rh-rhacm}, manually detach the seed cluster by running the following command:
+
[source,terminal]
----
$ oc delete managedcluster sno-worker-example
----

.. Wait until the `ManagedCluster` CR is removed. After the CR is removed, create the proper `SeedGenerator` CR. The {lcao} cleans up the {rh-rhacm} artifacts.
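+
If you script this step, one way to block until the CR is gone is `oc wait`; a minimal sketch, assuming the managed cluster name used above:
+
[source,terminal]
----
$ oc wait managedcluster sno-worker-example --for=delete --timeout=300s
----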

. If you are using GitOps ZTP, detach your cluster by removing the seed cluster's `SiteConfig` CR from the `kustomization.yaml` file:

.. Remove your seed cluster's `SiteConfig` CR from the `kustomization.yaml`:
+
[source,yaml]
----
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

generators:
#- example-seed-sno1.yaml
- example-target-sno2.yaml
- example-target-sno3.yaml
----

.. Commit the `kustomization.yaml` changes in your Git repository and push the changes.
+
The ArgoCD pipeline detects the changes and removes the managed cluster.

. Create the `Secret` resource:

.. Create the authentication file by running the following commands:
+
--
.Authentication file
[source,terminal]
----
$ MY_USER=myuserid
$ AUTHFILE=/tmp/my-auth.json
$ podman login --authfile ${AUTHFILE} -u ${MY_USER} quay.io/${MY_USER}
----

[source,terminal]
----
$ base64 -w 0 ${AUTHFILE} ; echo
----
--

.. Copy the output into the `seedAuth` field of a `Secret` YAML file named `seedgen` in the `openshift-lifecycle-agent` namespace:
+
--
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: seedgen <1>
  namespace: openshift-lifecycle-agent
type: Opaque
data:
  seedAuth: <encoded_AUTHFILE> <2>
----
<1> The `Secret` resource must have the `name: seedgen` and `namespace: openshift-lifecycle-agent` fields.
<2> Specifies a base64-encoded authentication file with write access to the registry for pushing the generated seed images.
--

.. Apply the `Secret` by running the following command:
+
[source,terminal]
----
$ oc apply -f secretseedgenerator.yaml
----
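+
Alternatively, you can let `oc` handle the base64 encoding instead of writing the YAML file by hand; a sketch, assuming the same authentication file path as above:
+
[source,terminal]
----
$ oc create secret generic seedgen -n openshift-lifecycle-agent \
  --from-file=seedAuth=/tmp/my-auth.json
----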

. Create the `SeedGenerator` CR:
+
--
[source,yaml]
----
apiVersion: lca.openshift.io/v1alpha1
kind: SeedGenerator
metadata:
  name: seedimage <1>
spec:
  seedImage: <seed_container_image> <2>
----
<1> The `SeedGenerator` CR must be named `seedimage`.
<2> Specify the container image URL, for example, `quay.io/example/seed-container-image:<tag>`. The recommended format is `<seed_cluster_name>:<ocp_version>`.
--

. Generate the seed image by running the following command:
+
[source,terminal]
----
$ oc apply -f seedgenerator.yaml
----
+
[IMPORTANT]
====
The cluster reboots and loses API capabilities while the {lcao} generates the seed image.
Applying the `SeedGenerator` CR stops the `kubelet` and CRI-O operations, and then starts the image generation.
====

After the image generation is complete, the cluster can be reattached to the hub cluster, and you can access it through the API.

If you want to generate more seed images, you must provision a new seed cluster with the version that you want to generate a seed image from.

.Verification

. After the cluster recovers and is available, check the status of the `SeedGenerator` CR by running the following command:
+
--
[source,terminal]
----
$ oc get seedgenerator -o yaml
----

.Example output
[source,yaml]
----
status:
  conditions:
  - lastTransitionTime: "2024-02-13T21:24:26Z"
    message: Seed Generation completed
    observedGeneration: 1
    reason: Completed
    status: "False"
    type: SeedGenInProgress
  - lastTransitionTime: "2024-02-13T21:24:26Z"
    message: Seed Generation completed
    observedGeneration: 1
    reason: Completed
    status: "True"
    type: SeedGenCompleted <1>
  observedGeneration: 1
----
<1> The seed image generation is complete.
--
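+
If you prefer a blocking check to inspecting the YAML by hand, `oc wait` can watch the same condition; a sketch, assuming the required `seedimage` CR name:
+
[source,terminal]
----
$ oc wait seedgenerator seedimage --for=condition=SeedGenCompleted --timeout=60m
----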

. Verify that the {sno} cluster is running and is attached to the {rh-rhacm} hub cluster:
+
--
[source,terminal]
----
$ oc get managedclusters sno-worker-example
----

.Example output
[source,terminal]
----
NAME                 HUB ACCEPTED   MANAGED CLUSTER URLS                                 JOINED   AVAILABLE   AGE
sno-worker-example   true           https://api.sno-worker-example.example.redhat.com   True     True        21h <1>
----
<1> The cluster is attached if the value is `True` for both `JOINED` and `AVAILABLE`.

[NOTE]
====
The cluster requires time to recover after restarting the `kubelet` operation.
====
--
@@ -0,0 +1,111 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="installing-lcao-using-cli_{context}"]
= Installing the {lcao} by using the CLI

You can use the OpenShift CLI (`oc`) to install the {lcao} from the 4.15 Operator catalog on both the seed and target clusters.

.Prerequisites

* Install the OpenShift CLI (`oc`).
* Log in as a user with `cluster-admin` privileges.

.Procedure

. Create a namespace for the {lcao}:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-lifecycle-agent
  annotations:
    workload.openshift.io/allowed: management
----

.. Create the `Namespace` CR by running the following command:
+
[source,terminal]
----
$ oc create -f lcao-namespace.yaml
----

. Create an Operator group for the {lcao}:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-lifecycle-agent
  namespace: openshift-lifecycle-agent
spec:
  targetNamespaces:
  - openshift-lifecycle-agent
----

.. Create the `OperatorGroup` CR by running the following command:
+
[source,terminal]
----
$ oc create -f lcao-operatorgroup.yaml
----

. Create a `Subscription` CR:

.. Define the `Subscription` CR and save the YAML file, for example, `lcao-subscription.yaml`:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-lifecycle-agent-subscription
  namespace: openshift-lifecycle-agent
spec:
  channel: "alpha"
  name: lifecycle-agent
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----

.. Create the `Subscription` CR by running the following command:
+
[source,terminal]
----
$ oc create -f lcao-subscription.yaml
----
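+
To check that OLM has resolved the subscription before you move on, you can inspect the installed CSV name; a sketch using standard OLM status fields:
+
[source,terminal]
----
$ oc get subscription openshift-lifecycle-agent-subscription -n openshift-lifecycle-agent -o jsonpath='{.status.installedCSV}{"\n"}'
----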

.Verification

. Verify that the installation succeeded by inspecting the CSV resource:
+
[source,terminal]
----
$ oc get csv -n openshift-lifecycle-agent
----
+
.Example output
[source,terminal,subs="attributes+"]
----
NAME                                   DISPLAY                     VERSION               REPLACES   PHASE
lifecycle-agent.v{product-version}.0   Openshift Lifecycle Agent   {product-version}.0              Succeeded
----

. Verify that the {lcao} is up and running:
+
[source,terminal]
----
$ oc get deploy -n openshift-lifecycle-agent
----
+
.Example output
[source,terminal]
----
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
lifecycle-agent-controller-manager   1/1     1            1           14s
----
@@ -0,0 +1,36 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="installing-lifecycle-agent-using-web-console_{context}"]
= Installing the {lcao} by using the web console

You can use the {product-title} web console to install the {lcao} from the 4.15 Operator catalog on both the seed and target clusters.

.Prerequisites

* Log in as a user with `cluster-admin` privileges.

.Procedure

. In the {product-title} web console, navigate to *Operators* → *OperatorHub*.
. Search for the *{lcao}* in the list of available Operators, and then click *Install*.
. On the *Install Operator* page, under *A specific namespace on the cluster*, select *openshift-lifecycle-agent*.
. Click *Install*.

.Verification

To confirm that the installation is successful:

. Navigate to the *Operators* → *Installed Operators* page.
. Ensure that the {lcao} is listed in the *openshift-lifecycle-agent* project with a *Status* of *InstallSucceeded*.

[NOTE]
====
During installation, an Operator might display a *Failed* status. If the installation later succeeds with an *InstallSucceeded* message, you can ignore the *Failed* message.
====

If the Operator is not installed successfully:

. Go to the *Operators* → *Installed Operators* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failures or errors under *Status*.
. Go to the *Workloads* → *Pods* page and check the logs of the pods in the *openshift-lifecycle-agent* project.
249 modules/ztp-image-based-upgrade-prep.adoc (new file)
@@ -0,0 +1,249 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-image-based-upgrade-prep_{context}"]
= Preparing the {sno} cluster for the image-based upgrade

When you deploy the {lcao} on a cluster, an `ImageBasedUpgrade` CR is automatically created.
You edit this CR to specify the image repository of the seed image and to move through the different stages.

.Prerequisites

* Install the {lcao} on the target cluster.
* Generate a seed image from a compatible seed cluster.
* Install the OADP Operator, the `DataProtectionApplication` CR, and its secret on the target cluster.
* Create an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see _Additional resources_.
* Create a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see _Additional resources_.

[WARNING]
====
If the target cluster has multiple IP addresses and one of them belongs to the subnet that was used for creating the seed image, the upgrade fails unless the target cluster's node IP belongs to that subnet.
====

.Procedure

This example procedure demonstrates how to back up and upgrade a cluster with applications that use persistent volumes.

[NOTE]
====
The target cluster does not need to be detached from the hub cluster.
====

. Create your OADP `Backup` and `Restore` CRs:

.. To filter for backup-specific CRs, use the `lca.openshift.io/apply-label` annotation in your `Backup` CRs. Based on which resources you define in the annotation, the {lcao} applies the `lca.openshift.io/backup: <backup_name>` label and adds the `labelSelector.matchLabels.lca.openshift.io/backup: <backup_name>` label selector to the specified resources when creating the `Backup` CRs.
+
--
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-acm-klusterlet
  namespace: openshift-adp
  annotations:
    lca.openshift.io/apply-label: rbac.authorization.k8s.io/v1/clusterroles/klusterlet,apps/v1/deployments/open-cluster-management-agent/klusterlet <1>
  labels:
    velero.io/storage-location: default
spec:
  includedNamespaces:
  - open-cluster-management-agent
  includedClusterScopedResources:
  - clusterroles
  includedNamespaceScopedResources:
  - deployments
----
<1> The value must be a list of comma-separated objects in the `group/version/resource/name` format for cluster-scoped resources, or in the `group/version/resource/namespace/name` format for namespace-scoped resources. It must be attached to the related `Backup` CR.

[IMPORTANT]
====
To use the `lca.openshift.io/apply-label` annotation for backing up specific resources, the resources listed in the annotation must also be included in the `spec` section.
If the `lca.openshift.io/apply-label` annotation is used in the `Backup` CR, only the resources listed in the annotation are backed up, regardless of whether other resource types are specified in the `spec` section.
====
--

.. Define the apply order for the OADP Operator in the `Backup` and `Restore` CRs by using the `lca.openshift.io/apply-wave` annotation:
+
--
[source,yaml]
----
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-acm-klusterlet
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "1"
spec:
  backupName: acm-klusterlet
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-example-app
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "3"
spec:
  backupName: example-app
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-localvolume
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "2"
spec:
  backupName: localvolume
----

[NOTE]
====
If you do not define the `lca.openshift.io/apply-wave` annotation in the `Backup` or `Restore` CRs, they are applied together.
====
--

.. Create a `kustomization.yaml` file that adds the CRs to a new `ConfigMap`:
+
[source,yaml]
----
configMapGenerator:
- name: oadp-cm-example
  namespace: openshift-adp
  files:
  - backup-acm-klusterlet.yaml
  - backup-localvolume.yaml
  - backup-example-app.yaml
  - restore-acm-klusterlet.yaml
  - restore-localvolume.yaml
  - restore-example-app.yaml
generatorOptions:
  disableNameSuffixHash: true <1>
----
<1> Disables the hash generation at the end of the `ConfigMap` filename, which allows the `ConfigMap` to be overwritten when a new one is generated with the same name.

.. Create the `ConfigMap`:
+
[source,terminal]
----
$ kustomize build ./ -o oadp-cm-example.yaml
----
+
.Example output
[source,yaml]
----
kind: ConfigMap
metadata:
  name: oadp-cm-example
  namespace: openshift-adp
[...]
----

.. Apply the `ConfigMap`:
+
[source,terminal]
----
$ oc apply -f oadp-cm-example.yaml
----
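+
A quick sanity check that the `ConfigMap` landed in the namespace that the `ImageBasedUpgrade` CR references later in this procedure; a sketch:
+
[source,terminal]
----
$ oc get configmap oadp-cm-example -n openshift-adp
----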

. (Optional) To keep your custom catalog sources after the upgrade, add them to the `spec.extraManifests` field in the `ImageBasedUpgrade` CR. For more information, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/operators/index#olm-catalogsource_olm-understanding-olm[Catalog source].

. Edit the `ImageBasedUpgrade` CR:
+
[source,yaml]
----
apiVersion: lca.openshift.io/v1alpha1
kind: ImageBasedUpgrade
metadata:
  name: example-upgrade
spec:
  stage: Idle
  seedImageRef:
    version: 4.15.2 <1>
    image: <seed_container_image> <2>
    pullSecretRef:
      name: <seed_pull_secret> <3>
  additionalImages:
    name: ""
    namespace: ""
  autoRollbackOnFailure: {} <4>
  # disabledForPostRebootConfig: "true" <5>
  # disabledForUpgradeCompletion: "true" <6>
  # disabledInitMonitor: "true" <7>
  # initMonitorTimeoutSeconds: 1800 <8>
  # extraManifests: <9>
  # - name: sno-extra-manifests
  #   namespace: openshift-lifecycle-agent
  oadpContent: <10>
  - name: oadp-cm-example
    namespace: openshift-adp
----
<1> Specify the target platform version. The value must match the version of the seed image.
<2> Specify the repository where the target cluster can pull the seed image from.
<3> Specify the reference to a secret with credentials to pull container images.
<4> By default, automatic rollback on failure is enabled throughout the upgrade.
<5> (Optional) If set to `true`, this option disables automatic rollback when the reconfiguration of the cluster fails upon the first reboot.
<6> (Optional) If set to `true`, this option disables automatic rollback after the {lcao} reports a failed upgrade upon completion.
<7> (Optional) If set to `true`, this option disables automatic rollback when the upgrade does not complete after reboot within the time frame specified in the `initMonitorTimeoutSeconds` field.
<8> (Optional) Specifies the time frame in seconds. If not defined or set to `0`, the default value of `1800` seconds (30 minutes) is used.
<9> (Optional) Specify the extra manifests to apply to the target cluster that are not part of the seed image. You can also add your custom catalog sources that you want to retain after the upgrade.
<10> Add the `oadpContent` section with the OADP `ConfigMap` information.

. Change the value of the `stage` field to `Prep` in the `ImageBasedUpgrade` CR:
+
[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io example-upgrade -p='{"spec": {"stage": "Prep"}}' --type=merge -n openshift-lifecycle-agent
----
+
The {lcao} checks the health of the cluster, creates a new `ostree` stateroot, and pulls the seed image to the target cluster.
Then, the Operator precaches all the required images on the target cluster.

.Verification

. Check the status of the `ImageBasedUpgrade` CR by running the following command:
+
[source,terminal]
----
$ oc get ibu -A -o yaml
----
+
.Example output
[source,yaml]
----
status:
  conditions:
  - lastTransitionTime: "2024-01-01T09:00:00Z"
    message: In progress
    observedGeneration: 2
    reason: InProgress
    status: "False"
    type: Idle
  - lastTransitionTime: "2024-01-01T09:00:00Z"
    message: 'Prep completed: total: 121 (pulled: 1, skipped: 120, failed: 0)'
    observedGeneration: 2
    reason: Completed
    status: "True"
    type: PrepCompleted
  - lastTransitionTime: "2024-01-01T09:00:00Z"
    message: Prep completed
    observedGeneration: 2
    reason: Completed
    status: "False"
    type: PrepInProgress
  observedGeneration: 2
----
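+
A blocking alternative to polling the CR is `oc wait` on the same condition; a sketch, assuming the CR name used in this procedure:
+
[source,terminal]
----
$ oc wait ibu example-upgrade --for=condition=PrepCompleted --timeout=30m
----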

// Troubleshooting?

71 modules/ztp-image-based-upgrade-rollback.adoc (new file)
@@ -0,0 +1,71 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-image-based-upgrade-rollback_{context}"]
= (Optional) Initiating rollback of the {sno} cluster after an image-based upgrade

You can manually roll back the changes if you encounter unresolvable issues after an upgrade.

By default, an automatic rollback is initiated in the following cases:

* If the reconfiguration of the cluster fails upon the first reboot.
* If the {lcao} reports a failed upgrade upon completing the process.
* If the upgrade does not complete within the time frame specified in the `initMonitorTimeoutSeconds` field after rebooting.

You can disable the automatic rollback configuration in the `ImageBasedUpgrade` CR at the `Prep` stage:

.Example ImageBasedUpgrade CR
[source,yaml]
----
apiVersion: lca.openshift.io/v1alpha1
kind: ImageBasedUpgrade
metadata:
  name: example-upgrade
spec:
  stage: Idle
  seedImageRef:
    version: 4.15.2
    image: <seed_container_image>
  additionalImages:
    name: ""
    namespace: ""
  autoRollbackOnFailure: {} <1>
  # disabledForPostRebootConfig: "true" <2>
  # disabledForUpgradeCompletion: "true" <3>
  # disabledInitMonitor: "true" <4>
  # initMonitorTimeoutSeconds: 1800 <5>
[...]
----
<1> By default, automatic rollback on failure is enabled throughout the upgrade.
<2> (Optional) If set to `true`, this option disables automatic rollback when the reconfiguration of the cluster fails upon the first reboot.
<3> (Optional) If set to `true`, this option disables automatic rollback after the {lcao} reports a failed upgrade upon completion.
<4> (Optional) If set to `true`, this option disables automatic rollback when the upgrade does not complete within the time frame specified in the `initMonitorTimeoutSeconds` field after reboot.
<5> (Optional) Specifies the time frame in seconds. If not defined or set to `0`, the default value of `1800` seconds (30 minutes) is used.

.Prerequisites

* Log in to the hub cluster as a user with `cluster-admin` privileges.

.Procedure

. Move to the rollback stage by changing the value of the `stage` field to `Rollback` in the `ImageBasedUpgrade` CR:
+
[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io example-upgrade -p='{"spec": {"stage": "Rollback"}}' --type=merge
----
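+
The {lcao} sets a `RollbackCompleted` condition when the rollback finishes; a sketch of a blocking check, assuming the same CR name:
+
[source,terminal]
----
$ oc wait ibu example-upgrade --for=condition=RollbackCompleted --timeout=30m
----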

. Wait for the {lcao} to reboot the cluster with the previously installed version of {product-title} and to restore the applications.

. Commit to the rollback by changing the value of the `stage` field to `Idle` in the `ImageBasedUpgrade` CR:
+
[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io example-upgrade -p='{"spec": {"stage": "Idle"}}' --type=merge -n openshift-lifecycle-agent
----

[WARNING]
====
If you move to the `Idle` stage after a rollback, the {lcao} cleans up resources that can be used to troubleshoot a failed upgrade.
====

212 modules/ztp-image-based-upgrade-share-container-directory.adoc (new file)
@@ -0,0 +1,212 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-image-based-upgrade-shared-container-directory_{context}"]
= Sharing the container directory between `ostree` stateroots

You must apply a `MachineConfig` to both the seed and target clusters at installation time to create a separate partition and share the `/var/lib/containers` directory between the two `ostree` stateroots that are used during the upgrade process.

[id="ztp-image-based-upgrade-shared-container-directory-acm_{context}"]
== Sharing the container directory between `ostree` stateroots when using {rh-rhacm}

When you are using {rh-rhacm}, you must apply a `MachineConfig` to both the seed and target clusters.

[IMPORTANT]
====
You must complete this procedure at installation time.
====

.Prerequisites

* Log in as a user with `cluster-admin` privileges.

.Procedure

. Apply a `MachineConfig` to create a separate partition:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 98-var-lib-containers-partitioned
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      disks:
      - device: /dev/disk/by-id/wwn-<root_disk> <1>
        partitions:
        - label: varlibcontainers
          startMiB: <start_of_partition> <2>
          sizeMiB: <partition_size> <3>
      filesystems:
      - device: /dev/disk/by-partlabel/varlibcontainers
        format: xfs
        mountOptions:
        - defaults
        - prjquota
        path: /var/lib/containers
        wipeFilesystem: true
    systemd:
      units:
      - contents: |-
          # Generated by Butane
          [Unit]
          Before=local-fs.target
          Requires=systemd-fsck@dev-disk-by\x2dpartlabel-varlibcontainers.service
          After=systemd-fsck@dev-disk-by\x2dpartlabel-varlibcontainers.service

          [Mount]
          Where=/var/lib/containers
          What=/dev/disk/by-partlabel/varlibcontainers
          Type=xfs
          Options=defaults,prjquota

          [Install]
          RequiredBy=local-fs.target
        enabled: true
        name: var-lib-containers.mount
----
<1> Specify the root disk.
<2> Specify the start of the partition in MiB. If the value is too small, the installation fails.
<3> Specify the size of the partition. If the value is too small, the deployments after installation fail.

[id="ztp-image-based-upgrade-shared-container-directory-ztp_{context}"]
== Sharing the container directory between `ostree` stateroots when using GitOps ZTP

When you are using the GitOps ZTP workflow, you can complete the following procedure to create a separate disk partition on both the seed and target clusters and to share the `/var/lib/containers` directory.

[IMPORTANT]
====
You must complete this procedure at installation time.
====

.Prerequisites

* Log in as a user with `cluster-admin` privileges.
* Install Butane.

.Procedure

. Create the `storage.bu` file:
+
[source,yaml]
----
variant: fcos
version: 1.3.0
storage:
  disks:
  - device: /dev/disk/by-id/wwn-<root_disk> <1>
    wipe_table: false
    partitions:
    - label: var-lib-containers
      start_mib: <start_of_partition> <2>
      size_mib: <partition_size> <3>
  filesystems:
  - path: /var/lib/containers
    device: /dev/disk/by-partlabel/var-lib-containers
    format: xfs
    wipe_filesystem: true
    with_mount_unit: true
    mount_options:
    - defaults
    - prjquota
----
<1> Specify the root disk.
<2> Specify the start of the partition in MiB. If the value is too small, the installation fails.
<3> Specify the size of the partition. If the value is too small, the deployments after installation fail.

. Convert the `storage.bu` file to an Ignition file by running the following command:
+
--
[source,terminal]
----
$ butane storage.bu
----

.Example output
[source,terminal]
----
{"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}}
----
--
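+
If you prefer a file over stdout, Butane can write the Ignition JSON directly; a sketch, assuming your installed Butane version supports the `-o` output flag:
+
[source,terminal]
----
$ butane storage.bu -o storage.ign
----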

. Copy the output into the `.spec.clusters.nodes.ignitionConfigOverride` field in the `SiteConfig` CR:
+
[source,yaml]
----
[...]
spec:
  clusters:
    - nodes:
        - ignitionConfigOverride: '{"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}}'
[...]
----

.Verification

. During or after installation, verify on the hub cluster that the `BareMetalHost` object shows the annotation:
+
--
[source,terminal]
----
$ oc get bmh -n my-sno-ns my-sno -o json | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"]'
----

.Example output
[source,terminal]
----
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}"
----
--

. After installation, check the {sno} disk status:
+
--
[source,terminal]
----
# lsblk
----

.Example output
[source,terminal]
----
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 446.6G  0 disk
├─sda1   8:1    0     1M  0 part
├─sda2   8:2    0   127M  0 part
├─sda3   8:3    0   384M  0 part /boot
├─sda4   8:4    0 243.6G  0 part /var
│                                /sysroot/ostree/deploy/rhcos/var
│                                /usr
│                                /etc
│                                /
│                                /sysroot
└─sda5   8:5    0 202.5G  0 part /var/lib/containers
----

[source,terminal]
----
# df -h
----

.Example output
[source,terminal]
----
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           126G   84K  126G   1% /dev/shm
tmpfs            51G   93M   51G   1% /run
/dev/sda4       244G  5.2G  239G   3% /sysroot
tmpfs           126G  4.0K  126G   1% /tmp
/dev/sda5       203G  119G   85G  59% /var/lib/containers
/dev/sda3       350M  110M  218M  34% /boot
tmpfs            26G     0   26G   0% /run/user/1000
----
--
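+
To confirm that the dedicated mount unit is what backs the directory, `findmnt` gives a one-line summary; a sketch:
+
[source,terminal]
----
# findmnt /var/lib/containers
----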
494 modules/ztp-image-based-upgrade-talm.adoc (new file)
@@ -0,0 +1,494 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-image-based-upgrade-with-talm_{context}"]
= Upgrading the {sno} cluster with the {lcao} through GitOps ZTP

You can upgrade your managed {sno} cluster with the image-based upgrade through GitOps ZTP.

[IMPORTANT]
====
During the Developer Preview of this feature, when upgrading a cluster, any custom trusted certificates configured on the cluster are lost. As a temporary workaround, to preserve these certificates, you must use a seed image from a seed cluster that trusts the certificates.
====

.Prerequisites

* Install {rh-rhacm} 2.9.2 or a later version.
* Install {cgu-operator}.
* Update GitOps ZTP to the latest version.
* Provision one or more managed clusters with GitOps ZTP.
* Log in as a user with `cluster-admin` privileges.
* Generate a seed image from a compatible seed cluster.
* Create an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see _Additional resources_.
* Create a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see _Additional resources_.

.Procedure

. Create a policy for the OADP `ConfigMap`, named `oadp-cm-common-policies`. For more information about how to create the `ConfigMap`, follow the first step in _Preparing the single-node OpenShift cluster for the image-based upgrade_ in _Additional resources_.
+
[IMPORTANT]
====
Depending on the {rh-rhacm} configuration, the `v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials` object must be backed up.
Check if your `MultiClusterHub` CR has the `spec.imagePullSecret` field defined and whether the secret exists in the `open-cluster-management-agent` namespace in your hub cluster. If the `spec.imagePullSecret` field does not exist, you can remove the `v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials` object from the `lca.openshift.io/apply-label` annotation.
====
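+
Both checks can be scripted from the hub cluster; a sketch, assuming a single `MultiClusterHub` CR:
+
[source,terminal]
----
$ oc get multiclusterhub -A -o jsonpath='{.items[0].spec.imagePullSecret}{"\n"}'
$ oc get secret open-cluster-management-image-pull-credentials -n open-cluster-management-agent
----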

. (Optional) Create a policy for the `ConfigMap` of your user-specific extra manifests that are not part of the seed image. The {lcao} does not automatically extract these extra manifests from the seed cluster, so you can add a `ConfigMap` resource of your user-specific extra manifests in the `spec.extraManifests` field in the `ImageBasedUpgrade` CR.

. (Optional) To keep your custom catalog sources after the upgrade, add them to the `spec.extraManifests` field in the `ImageBasedUpgrade` CR. For more information, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/operators/index#olm-catalogsource_olm-understanding-olm[Catalog source].

. Create a `PolicyGenTemplate` CR that contains policies for the `Prep` and `Upgrade` stages:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: group-ibu
  namespace: "ztp-group"
spec:
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  evaluationInterval: <1>
    compliant: 10s
    noncompliant: 10s
  sourceFiles:
  - fileName: ImageBasedUpgrade.yaml
    policyName: "prep-policy"
    spec:
      stage: Prep
      seedImageRef: <2>
        version: "4.15.0"
        image: "quay.io/user/lca-seed:4.15.0"
        pullSecretRef:
          name: "<seed_pull_secret>"
      oadpContent: <3>
      - name: "oadp-cm-common-policies"
        namespace: "openshift-adp"
      # extraManifests: <4>
      # - name: sno-extra-manifests
      #   namespace: openshift-lifecycle-agent
    status:
      conditions:
      - reason: Completed
        status: "True"
        type: PrepCompleted
  - fileName: ImageBasedUpgrade.yaml
    policyName: "upgrade-policy"
    spec:
      stage: Upgrade
    status:
      conditions:
      - reason: Completed
        status: "True"
        type: UpgradeCompleted
----
<1> The policy evaluation interval for compliant and non-compliant policies. Set them to `10s` to ensure that the policy status accurately reflects the current upgrade status.
<2> Define the seed image, {product-title} version, and pull secret for the upgrade in the `Prep` stage.
<3> Define the OADP `ConfigMap` resources required for backup and restore in the `Prep` stage.
<4> (Optional) Define the `ConfigMap` resource for your user-specific extra manifests in the `Prep` stage. You can also add your custom catalog sources that you want to retain after the upgrade.

. Create a `PolicyGenTemplate` CR for the default set of extra manifests:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: sno-ibu
spec:
  bindingRules:
    sites: "example-sno"
    du-profile: "4.15.0"
  mcp: "master"
  sourceFiles:
  - fileName: SriovNetwork.yaml
    policyName: "config-policy"
    metadata:
      name: "sriov-nw-du-fh"
      labels:
        lca.openshift.io/target-ocp-version: "4.15.0" <1>
    spec:
      resourceName: du_fh
      vlan: 140
  - fileName: SriovNetworkNodePolicy.yaml
    policyName: "config-policy"
    metadata:
      name: "sriov-nnp-du-fh"
      labels:
        lca.openshift.io/target-ocp-version: "4.15.0" <1>
    spec:
      deviceType: netdevice
      isRdma: false
      nicSelector:
        pfNames: ["ens5f0"]
      numVfs: 8
      priority: 10
      resourceName: du_fh
  - fileName: SriovNetwork.yaml
    policyName: "config-policy"
    metadata:
      name: "sriov-nw-du-mh"
      labels:
        lca.openshift.io/target-ocp-version: "4.15.0" <1>
    spec:
      resourceName: du_mh
      vlan: 150
  - fileName: SriovNetworkNodePolicy.yaml
    policyName: "config-policy"
    metadata:
      name: "sriov-nnp-du-mh"
      labels:
        lca.openshift.io/target-ocp-version: "4.15.0" <1>
    spec:
      deviceType: vfio-pci
      isRdma: false
      nicSelector:
        pfNames: ["ens7f0"]
      numVfs: 8
      priority: 10
      resourceName: du_mh
----
<1> Ensure that the `lca.openshift.io/target-ocp-version` label matches the target {product-title} version that is specified in the `seedImageRef.version` field of the `ImageBasedUpgrade` CR. The {lcao} only applies the CRs that match the specified version.

. Commit and push the created CRs to the GitOps ZTP Git repository.

.. Verify that the stage and status policies are created:
+
--
[source,terminal]
----
$ oc get policies -n spoke1 | grep -E "group-ibu"
----

.Example output
[source,terminal]
----
ztp-group.group-ibu-prep-policy      inform   NonCompliant   31h
ztp-group.group-ibu-upgrade-policy   inform   NonCompliant   31h
----
--

. To reflect the target platform version, update the `du-profile` or the corresponding policy-binding label in the `SiteConfig` CR:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: SiteConfig
[...]
spec:
  [...]
  clusterLabels:
    du-profile: "4.15.0"
----
+
[IMPORTANT]
====
Updating the labels to the target platform version unbinds the existing set of policies.
====

. Commit and push the updated `SiteConfig` CR to the GitOps ZTP Git repository.

. When you are ready to move to the `Prep` stage, create the `ClusterGroupUpgrade` CR with the `Prep` and OADP `ConfigMap` policies:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-ibu-prep
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - oadp-cm-common-policies
  - group-ibu-prep-policy
  # - user-spec-extra-manifests
  remediationStrategy:
    canaries:
      - spoke1
    maxConcurrency: 1
    timeout: 240
----

. Apply the `Prep` policy by running the following command:
+
[source,terminal]
----
$ oc apply -f cgu-ibu-prep.yml
----

.. Monitor the status and wait for the `cgu-ibu-prep` `ClusterGroupUpgrade` to report `Completed`:
+
--
[source,terminal]
----
$ oc get cgu -n default
----

.Example output
[source,terminal]
----
NAME           AGE   STATE       DETAILS
cgu-ibu-prep   31h   Completed   All clusters are compliant with all the managed policies
----
--

. When you are ready to move to the `Upgrade` stage, create the `ClusterGroupUpgrade` CR that references the `Upgrade` policy:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-ibu-upgrade
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - group-ibu-upgrade-policy
  remediationStrategy:
    canaries:
      - spoke1
    maxConcurrency: 1
    timeout: 240
----

. Apply the `Upgrade` policy by running the following command:
+
[source,terminal]
----
$ oc apply -f cgu-ibu-upgrade.yml
----

.. Monitor the status and wait for the `cgu-ibu-upgrade` `ClusterGroupUpgrade` to report `Completed`:
+
--
[source,terminal]
----
$ oc get cgu -n default
----

.Example output
[source,terminal]
----
NAME              AGE   STATE       DETAILS
cgu-ibu-prep      31h   Completed   All clusters are compliant with all the managed policies
cgu-ibu-upgrade   31h   Completed   All clusters are compliant with all the managed policies
----
--

. When you are satisfied with the changes and ready to finalize the upgrade, create a `PolicyGenTemplate` CR that finalizes the upgrade:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: group-ibu
  namespace: "ztp-group"
spec:
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  evaluationInterval:
    compliant: 10s
    noncompliant: 10s
  sourceFiles:
  - fileName: ImageBasedUpgrade.yaml
    policyName: "finalize-policy"
    spec:
      stage: Idle
    status:
      conditions:
      - status: "True"
        type: Idle
----

. Create a `ClusterGroupUpgrade` CR that references the policy that finalizes the upgrade:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-ibu-finalize
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - group-ibu-finalize-policy
  remediationStrategy:
    canaries:
      - spoke1
    maxConcurrency: 1
    timeout: 240
----

. Apply the policy by running the following command:
+
[source,terminal]
----
$ oc apply -f cgu-ibu-finalize.yml
----

.. Monitor the status and wait for the `cgu-ibu-finalize` `ClusterGroupUpgrade` to report `Completed`:
+
--
[source,terminal]
----
$ oc get cgu -n default
----

.Example output
[source,terminal]
----
NAME               AGE   STATE       DETAILS
cgu-ibu-finalize   30h   Completed   All clusters are compliant with all the managed policies
cgu-ibu-prep       31h   Completed   All clusters are compliant with all the managed policies
cgu-ibu-upgrade    31h   Completed   All clusters are compliant with all the managed policies
----
--

[id="ztp-image-based-upgrade-with-talm-rollback_{context}"]
== (Optional) Rolling back the upgrade with {cgu-operator}

If you encounter an issue after the upgrade, you can start a manual rollback.

.Procedure

. Update the `du-profile` or the corresponding policy-binding label with the original platform version in the `SiteConfig` CR:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: SiteConfig
[...]
spec:
  [...]
  clusterLabels:
    du-profile: "4.15.2"
----

. When you are ready to move to the `Rollback` stage, create a `PolicyGenTemplate` CR for the `Rollback` policies:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: group-ibu
  namespace: "ztp-group"
spec:
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  evaluationInterval:
    compliant: 10s
    noncompliant: 10s
  sourceFiles:
  - fileName: ImageBasedUpgrade.yaml
    policyName: "rollback-policy"
    spec:
      stage: Rollback
    status:
      conditions:
      - message: Rollback completed
        reason: Completed
        status: "True"
        type: RollbackCompleted
----

. Create a `ClusterGroupUpgrade` CR that references the `Rollback` policies:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-ibu-rollback
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - group-ibu-rollback-policy
  remediationStrategy:
    canaries:
      - spoke1
    maxConcurrency: 1
    timeout: 240
----

. Apply the `Rollback` policy by running the following command:
+
[source,terminal]
----
$ oc apply -f cgu-ibu-rollback.yml
----
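+
{cgu-operator} reports the rollback progress in the same way as the other stages; for example:
+
[source,terminal]
----
$ oc get cgu -n default
----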

. When you are satisfied with the changes and ready to finalize the rollback, create the `PolicyGenTemplate` CR:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: group-ibu
  namespace: "ztp-group"
spec:
  bindingRules:
    group-du-sno: ""
  mcp: "master"
  evaluationInterval:
    compliant: 10s
    noncompliant: 10s
  sourceFiles:
  - fileName: ImageBasedUpgrade.yaml
    policyName: "finalize-policy"
    spec:
      stage: Idle
    status:
      conditions:
      - status: "True"
        type: Idle
----

. Create a `ClusterGroupUpgrade` CR that references the policy that finalizes the rollback:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-ibu-finalize
  namespace: default
spec:
  clusters:
  - spoke1
  enable: true
  managedPolicies:
  - group-ibu-finalize-policy
  remediationStrategy:
    canaries:
      - spoke1
    maxConcurrency: 1
    timeout: 240
----

. Apply the policy by running the following command:
+
[source,terminal]
----
$ oc apply -f cgu-ibu-finalize.yml
----

158 modules/ztp-image-based-upgrade-with-backup.adoc (new file)
@@ -0,0 +1,158 @@
// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-image-based-upgrading-with-backup_{context}"]
= Upgrading the {sno} cluster with the {lcao}

After you generate the seed image and complete the `Prep` stage, you can upgrade the target cluster.
During the upgrade process, the OADP Operator creates a backup of the artifacts specified in the OADP CRs, and then the {lcao} upgrades the cluster.

If the upgrade fails or stops, an automatic rollback is initiated.
If you have an issue after the upgrade, you can initiate a manual rollback.
For more information about manual rollback, see "(Optional) Initiating rollback of the {sno} cluster after an image-based upgrade".

[IMPORTANT]
====
During the Developer Preview of this feature, when upgrading a cluster, any custom trusted certificates configured on the cluster are lost. As a temporary workaround, to preserve these certificates, you must use a seed image from a seed cluster that trusts the certificates.
====

.Prerequisites

* Complete the `Prep` stage.

.Procedure

. When you are ready, move to the upgrade stage by changing the value of the `stage` field to `Upgrade` in the `ImageBasedUpgrade` CR:
+
[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io example-upgrade -p='{"spec": {"stage": "Upgrade"}}' --type=merge
----
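+
A blocking check on the final condition is possible with `oc wait`, although the command can fail transiently while the cluster reboots; a sketch, assuming the same CR name:
+
[source,terminal]
----
$ oc wait ibu example-upgrade --for=condition=UpgradeCompleted --timeout=60m
----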
|
||||
. Check the status of the `ImageBasedUpgrade` CR:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc get ibu -A -oyaml
|
||||
----
|
||||
|
||||
+
|
||||
.Example output
|
||||
[source,yaml]
|
||||
----
|
||||
status:
|
||||
conditions:
|
||||
- lastTransitionTime: "2024-01-01T09:00:00Z"
|
||||
message: In progress
|
||||
observedGeneration: 2
|
||||
reason: InProgress
|
||||
status: "False"
|
||||
type: Idle
|
||||
- lastTransitionTime: "2024-01-01T09:00:00Z"
|
||||
message: 'Prep completed: total: 121 (pulled: 1, skipped: 120, failed: 0)'
|
||||
observedGeneration: 2
|
||||
reason: Completed
|
||||
status: "True"
|
||||
type: PrepCompleted
|
||||
- lastTransitionTime: "2024-01-01T09:00:00Z"
|
||||
message: Prep completed
|
||||
observedGeneration: 2
|
||||
reason: Completed
|
||||
status: "False"
|
||||
type: PrepInProgress
|
||||
- lastTransitionTime: "2024-01-01T09:00:00Z"
|
||||
message: Upgrade completed
|
||||
observedGeneration: 3
|
||||
reason: Completed
|
||||
status: "True"
|
||||
type: UpgradeCompleted
|
||||
----
|
||||
|
||||
. The OADP Operator creates a backup of the data specified in the OADP `Backup` and `Restore` CRs.
|
||||
|
||||
. The target cluster reboots.
|
||||
|
||||
. Monitor the status of the CR:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc get ibu -A -oyaml
|
||||
----
|
||||
|
||||
. The cluster reboots.
|
||||
|
||||
. Once you are satisfied with the upgrade, commit to the changes by changing the value of the `stage` field to `Idle` in the `ImageBasedUpgrade` CR:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc patch imagebasedupgrades.lca.openshift.io example-upgrade -p='{"spec": {"stage": "Idle"}}' --type=merge
|
||||
----
|
||||
|
||||
+
|
||||
[IMPORTANT]
|
||||
====
|
||||
You cannot roll back the changes once you move to the `Idle` stage after an upgrade.
|
||||
====
|
||||
|
||||
+
|
||||
--
|
||||
The {lcao} deletes all resources created during the upgrade process.
|
||||
--

.Verification

. Check the status of the `ImageBasedUpgrade` CR:
+
[source,terminal]
----
$ oc get ibu -A -oyaml
----
+
.Example output
[source,yaml]
----
status:
  conditions:
  - lastTransitionTime: "2024-01-01T09:00:00Z"
    message: In progress
    observedGeneration: 2
    reason: InProgress
    status: "False"
    type: Idle
  - lastTransitionTime: "2024-01-01T09:00:00Z"
    message: 'Prep completed: total: 121 (pulled: 1, skipped: 120, failed: 0)'
    observedGeneration: 2
    reason: Completed
    status: "True"
    type: PrepCompleted
  - lastTransitionTime: "2024-01-01T09:00:00Z"
    message: Prep completed
    observedGeneration: 2
    reason: Completed
    status: "False"
    type: PrepInProgress
  - lastTransitionTime: "2024-01-01T09:00:00Z"
    message: Upgrade completed
    observedGeneration: 3
    reason: Completed
    status: "True"
    type: UpgradeCompleted
----

. Check the status of the cluster restoration:
+
[source,terminal]
----
$ oc get restores -n openshift-adp -o custom-columns=NAME:.metadata.name,Status:.status.phase,Reason:.status.failureReason
----
+
.Example output
[source,terminal]
----
NAME            Status     Reason
acm-klusterlet  Completed  <none>
apache-app      Completed  <none>
localvolume     Completed  <none>
----
149
modules/ztp-image-based-upgrade.adoc
Normal file
@@ -0,0 +1,149 @@

// Module included in the following assemblies:
// * scalability_and_performance/ztp-image-based-upgrade.adoc

:_mod-docs-content-type: CONCEPT
[id="ztp-image-based-upgrade-concept_{context}"]
= Image-based upgrade for {sno} clusters with the {lcao}

From {product-title} 4.14.7, the {lcao} 4.15 provides you with an alternative way to upgrade the platform version of a {sno} cluster.
The image-based upgrade is faster than the standard upgrade method and allows you to directly upgrade from {product-title} <4.y> to <4.y+2>, and <4.y.z> to <4.y.z+n>.

This upgrade method uses an OCI image that is generated from a dedicated seed cluster and installed on the target {sno} cluster as a new `ostree` stateroot.
A seed cluster is a {sno} cluster deployed with the target {product-title} version, Day 2 Operators, and configurations that are common to all target clusters.

You can use the seed image, which is generated from the seed cluster, to upgrade the platform version on any {sno} cluster that has the same combination of hardware, Day 2 Operators, and cluster configuration as the seed cluster.

[IMPORTANT]
====
The image-based upgrade uses custom images that are specific to the hardware platform that the clusters are running on.
Each different hardware platform requires a separate seed image.
====

The {lcao} uses two custom resources (CRs) on the participating clusters to orchestrate the upgrade:

* On the seed cluster, the `SeedGenerator` CR enables seed image generation and specifies the repository to push the seed image to.
* On the target cluster, the `ImageBasedUpgrade` CR specifies the seed container image for the upgrade of the target cluster and the backup configurations for your workloads.

.Example SeedGenerator CR
[source,yaml]
----
apiVersion: lca.openshift.io/v1alpha1
kind: SeedGenerator
metadata:
  name: seedimage
spec:
  seedImage: <seed_container_image>
----

.Example ImageBasedUpgrade CR
[source,yaml]
----
apiVersion: lca.openshift.io/v1alpha1
kind: ImageBasedUpgrade
metadata:
  name: example-upgrade
spec:
  stage: Idle <1>
  seedImageRef: <2>
    version: <target_version>
    image: <seed_container_image>
    pullSecretRef: <seed_pull_secret>
  additionalImages:
    name: ""
    namespace: ""
  autoRollbackOnFailure: {} <3>
#    disabledForPostRebootConfig: "true" <4>
#    disabledForUpgradeCompletion: "true" <5>
#    disabledInitMonitor: "true" <6>
#    initMonitorTimeoutSeconds: 1800 <7>
#  extraManifests: <8>
#  - name: sno-extra-manifests
#    namespace: openshift-lifecycle-agent
  oadpContent: <9>
  - name: oadp-cm-example
    namespace: openshift-adp
----

<1> Defines the desired stage for the `ImageBasedUpgrade` CR. The value can be `Idle`, `Prep`, `Upgrade`, or `Rollback`.
<2> Defines the target platform version, the seed image to be used, and the secret required to access the image.
<3> Configures the automatic rollback. By default, automatic rollback on failure is enabled throughout the upgrade.
<4> (Optional) If set to `true`, this option disables automatic rollback when the reconfiguration of the cluster fails upon the first reboot.
<5> (Optional) If set to `true`, this option disables automatic rollback after the {lcao} reports a failed upgrade upon completion.
<6> (Optional) If set to `true`, this option disables automatic rollback when the upgrade does not complete after reboot within the time frame specified in the `initMonitorTimeoutSeconds` field.
<7> (Optional) Specifies the time frame in seconds. If not defined or set to `0`, the default value of `1800` seconds (30 minutes) is used.
<8> (Optional) Specifies the list of `ConfigMap` resources that contain the additional extra manifests that you want to apply to the target cluster. You can also add your custom catalog sources that you want to retain after the upgrade.
<9> Specifies the list of `ConfigMap` resources that contain the OADP `Backup` and `Restore` CRs.

After generating the seed image on the seed cluster, you can move through the stages on the target cluster by setting the `spec.stage` field to the following values in the `ImageBasedUpgrade` CR, as shown in the example after this list:

* `Idle`
* `Prep`
* `Upgrade`
* `Rollback` (Optional)
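
For example, the following is a minimal sketch of moving the target cluster to the `Prep` stage, assuming the `example-upgrade` CR name from the example above:

[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io example-upgrade -p='{"spec": {"stage": "Prep"}}' --type=merge
----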

[id="ztp-image-based-upgrade-concept-idle_{context}"]
== Idle stage

The {lcao} creates an `ImageBasedUpgrade` CR set to `stage: Idle` when the Operator is first deployed.
This is the default stage.
There is no ongoing upgrade and the cluster is ready to move to the `Prep` stage.

After a successful upgrade or a rollback, you commit to the change by patching the `stage` field to `Idle` in the `ImageBasedUpgrade` CR.
Changing to this stage ensures that the {lcao} cleans up resources, so the cluster is ready for upgrades again.

[id="ztp-image-based-upgrade-concept-prep_{context}"]
== Prep stage

[NOTE]
====
You can complete this stage before a scheduled maintenance window.
====

During the `Prep` stage, you specify the following upgrade details in the `ImageBasedUpgrade` CR:

* The seed image to use
* The resources to back up
* The extra manifests to apply after the upgrade

Then, based on what you specify, the {lcao} prepares for the upgrade without impacting the current running version.
This preparation includes checking that the target cluster meets certain conditions required to proceed to the `Upgrade` stage, and pulling the seed image to the target cluster together with any additional container images specified in the seed image.

You also prepare backup resources with the OADP Operator's `Backup` and `Restore` CRs.
These CRs are used in the `Upgrade` stage to reconfigure the cluster, register the cluster with {rh-rhacm}, and restore application artifacts.
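
The following is a minimal, illustrative sketch of an OADP `Backup` and `Restore` CR pair of the kind you might reference from the `oadpContent` field. The names, the backed-up namespace, and the backup name are assumptions for illustration, not a definitive configuration:

[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-example
  namespace: openshift-adp
spec:
  includedNamespaces:
  - example-app # namespace that contains the workload artifacts to back up
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-example
  namespace: openshift-adp
spec:
  backupName: backup-example # restores the artifacts captured by the Backup CR above
----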

[IMPORTANT]
====
The same version of the applications must function on both the current and the target release of {product-title}.
====

In addition to the OADP Operator, the {lcao} uses the `ostree` versioning system to create a backup, which allows complete cluster reconfiguration after both upgrade and rollback.

You can stop the upgrade process at this point by moving to the `Idle` stage, or you can start the upgrade by moving to the `Upgrade` stage in the `ImageBasedUpgrade` CR.
If you stop, the Operator performs cleanup operations.

[id="ztp-image-based-upgrade-concept-upgrade_{context}"]
== Upgrade stage

Just before the {lcao} starts the upgrade process, backups of the cluster resources that you specified in the `Prep` stage are created on a compatible object storage solution.
After the target cluster reboots with the new platform version, the Operator applies the cluster and application configurations defined in the `Backup` and `Restore` CRs, and applies any extra manifests that are specified in the referenced `ConfigMap` resource.
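
For example, you might package extra manifests into a `ConfigMap` with the generic `oc create configmap` command. This is a hedged sketch that reuses the `sno-extra-manifests` name and `openshift-lifecycle-agent` namespace from the commented `extraManifests` example above; the manifest file name is illustrative:

[source,terminal]
----
$ oc create configmap sno-extra-manifests -n openshift-lifecycle-agent \
    --from-file=example-extra-manifest.yaml
----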

The Operator also regenerates the seed image's cluster cryptography.
This ensures that each {sno} cluster upgraded with the same seed image has unique and valid cryptographic objects.

When you are satisfied with the changes, you can finalize the upgrade by moving to the `Idle` stage.
If you encounter issues after the upgrade, you can move to the `Rollback` stage for a manual rollback.

[id="ztp-image-based-upgrade-concept-rollback_{context}"]
== (Optional) Rollback stage

The `Rollback` stage can be initiated manually or automatically upon failure.
During the `Rollback` stage, the {lcao} sets the original `ostree` stateroot as default.
Then, the node reboots with the previous release of {product-title} and application configurations.
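
A manual rollback follows the same patch pattern as the other stage transitions. This minimal sketch assumes the `example-upgrade` CR name used earlier in this document:

[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io example-upgrade -p='{"spec": {"stage": "Rollback"}}' --type=merge
----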

By default, automatic rollback is enabled in the `ImageBasedUpgrade` CR.
The {lcao} can initiate an automatic rollback if the upgrade fails or if the upgrade does not complete within the specified time limit.
For more information about the automatic rollback configurations, see the _(Optional) Initiating rollback of the single-node OpenShift cluster after an image-based upgrade_ section.

[WARNING]
====
If you move to the `Idle` stage after a rollback, the {lcao} cleans up resources that can be used to troubleshoot a failed upgrade.
====

@@ -0,0 +1,73 @@
:_mod-docs-content-type: ASSEMBLY
[id="ztp-image-based-upgrade-for-sno"]
= Image-based upgrade for {sno} clusters
include::_attributes/common-attributes.adoc[]
:context: ztp-image-based-upgrade

toc::[]

You can use the {lcao} to perform an image-based upgrade on managed {sno} clusters.

:FeatureName: The Lifecycle Agent

// Lifecycle Agent (LCAO)

include::modules/ztp-image-based-upgrade.adoc[leveloffset=+1]

include::modules/ztp-image-based-upgrade-installing-lifecycle-agent-using-cli.adoc[leveloffset=+2]

include::modules/ztp-image-based-upgrade-installing-lifecycle-agent-using-web-console.adoc[leveloffset=+2]

include::modules/ztp-image-based-upgrade-share-container-directory.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* link:https://www.redhat.com/en/blog/a-guide-to-creating-a-separate-disk-partition-at-installation-time[A Guide for Creating a Separate-disk Partition at Installation Time]

include::modules/ztp-image-based-upgrade-generate-seed-image.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* link:https://www.redhat.com/en/blog/a-guide-to-creating-a-separate-disk-partition-at-installation-time[A Guide for Creating a Separate-disk Partition at Installation Time]

* xref:../../scalability_and_performance/ztp_far_edge/ztp-image-based-upgrade.adoc#ztp-image-based-upgrade-shared-container-directory_ztp-image-based-upgrade[Sharing the container directory between `ostree` stateroots]

* xref:../../backup_and_restore/application_backup_and_restore/installing/oadp-installing-operator.adoc[Installing the OADP Operator]

include::modules/ztp-image-based-upgrade-prep.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* link:https://www.redhat.com/en/blog/a-guide-to-creating-a-separate-disk-partition-at-installation-time[A Guide for Creating a Separate-disk Partition at Installation Time]

* xref:../../scalability_and_performance/ztp_far_edge/ztp-image-based-upgrade.adoc#ztp-image-based-upgrade-shared-container-directory_ztp-image-based-upgrade[Sharing the container directory between `ostree` stateroots]

* xref:../../backup_and_restore/application_backup_and_restore/installing/installing-oadp-ocs.adoc#oadp-about-backup-snapshot-locations_installing-oadp-ocs[About backup and snapshot locations and their secrets]

* xref:../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/oadp-creating-backup-cr.adoc[Creating a Backup CR]

* xref:../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/restoring-applications.adoc#oadp-creating-restore-cr_restoring-applications[Creating a Restore CR]

include::modules/ztp-image-based-upgrade-with-backup.adoc[leveloffset=+2]

include::modules/ztp-image-based-upgrade-rollback.adoc[leveloffset=+2]

include::modules/ztp-image-based-upgrade-talm.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* xref:../../scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the GitOps ZTP site configuration repository for version independence]

* xref:../../scalability_and_performance/ztp_far_edge/ztp-image-based-upgrade.adoc#ztp-image-based-upgrade-prep_ztp-image-based-upgrade[Preparing the single-node OpenShift cluster for the image-based upgrade]

* xref:../../scalability_and_performance/ztp_far_edge/ztp-image-based-upgrade.adoc#ztp-image-based-upgrade-shared-container-directory_ztp-image-based-upgrade[Sharing the container directory between `ostree` stateroots]

* xref:../../backup_and_restore/application_backup_and_restore/installing/installing-oadp-ocs.adoc#oadp-about-backup-snapshot-locations_installing-oadp-ocs[About backup and snapshot locations and their secrets]

* xref:../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/oadp-creating-backup-cr.adoc[Creating a Backup CR]

* xref:../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/restoring-applications.adoc#oadp-creating-restore-cr_restoring-applications[Creating a Restore CR]

@@ -59,6 +59,6 @@ include::modules/cnf-topology-aware-lifecycle-manager-creating-custom-resources.
[role="_additional-resources"]
.Additional resources

* For more information about the {cgu-operator} pre-caching workflow, see xref:../../scalability_and_performance/ztp_far_edge/cnf-talm-for-cluster-upgrades.adoc#talo-precache-feature-concept_cnf-topology-aware-lifecycle-manager[Using the container image pre-cache feature].
* For more information about the {cgu-operator} precaching workflow, see xref:../../scalability_and_performance/ztp_far_edge/cnf-talm-for-cluster-upgrades.adoc#talo-precache-feature-concept_cnf-topology-aware-lifecycle-manager[Using the container image precache feature].

include::modules/cnf-topology-aware-lifecycle-manager-autocreate-cgu-cr-ztp.adoc[leveloffset=+1]