diff --git a/modules/cli-installing-cli.adoc b/modules/cli-installing-cli.adoc index 2598734714..130cafa813 100644 --- a/modules/cli-installing-cli.adoc +++ b/modules/cli-installing-cli.adoc @@ -68,7 +68,7 @@ command-line interface. You can install `oc` on Linux, Windows, or macOS. ==== If you installed an earlier version of `oc`, you cannot use it to complete all of the commands in {product-title} {product-version}. Download and install the new version of `oc`. ifdef::restricted[] -If you are upgrading a cluster in a disconnected environment, install the `oc` version that you plan to upgrade to. +If you are updating a cluster in a disconnected environment, install the `oc` version that you plan to update to. endif::restricted[] ==== diff --git a/modules/machine-health-checks-pausing-web-console.adoc b/modules/machine-health-checks-pausing-web-console.adoc index 05e49e88f6..5638634d32 100644 --- a/modules/machine-health-checks-pausing-web-console.adoc +++ b/modules/machine-health-checks-pausing-web-console.adoc @@ -6,7 +6,7 @@ [id="machine-health-checks-pausing-web-console_{context}"] = Pausing a MachineHealthCheck resource by using the web console -During the upgrade process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster. +During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster. .Prerequisites diff --git a/modules/machine-health-checks-pausing.adoc b/modules/machine-health-checks-pausing.adoc index 61b6154275..7e7fd69e06 100644 --- a/modules/machine-health-checks-pausing.adoc +++ b/modules/machine-health-checks-pausing.adoc @@ -8,7 +8,7 @@ [id="machine-health-checks-pausing_{context}"] = Pausing a MachineHealthCheck resource -During the upgrade process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster. +During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster. 
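For reference, a minimal sketch of the pause mechanism that this procedure applies: the `cluster.x-k8s.io/paused=""` annotation stops a `MachineHealthCheck` resource from remediating nodes. The resource name `<mhc_name>` is a placeholder for your own resource.

[source,terminal]
----
$ oc -n openshift-machine-api annotate mhc <mhc_name> cluster.x-k8s.io/paused=""
----

After the cluster update completes, remove the annotation to resume remediation:

[source,terminal]
----
$ oc -n openshift-machine-api annotate mhc <mhc_name> cluster.x-k8s.io/paused-
----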
.Prerequisites diff --git a/modules/oc-mirror-image-set-config-examples.adoc b/modules/oc-mirror-image-set-config-examples.adoc index 118183a309..6193a193ba 100644 --- a/modules/oc-mirror-image-set-config-examples.adoc +++ b/modules/oc-mirror-image-set-config-examples.adoc @@ -12,9 +12,9 @@ The following `ImageSetConfiguration` file examples show the configuration for v // Moved to first; unchanged [discrete] [id="oc-mirror-image-set-examples-shortest-upgrade-path_{context}"] -== Use case: Including the shortest {product-title} upgrade path +== Use case: Including the shortest {product-title} update path -The following `ImageSetConfiguration` file uses a local storage backend and includes all {product-title} versions along the shortest upgrade path from the minimum version of `4.11.37` to the maximum version of `4.12.15`. +The following `ImageSetConfiguration` file uses a local storage backend and includes all {product-title} versions along the shortest update path from the minimum version of `4.11.37` to the maximum version of `4.12.15`. .Example `ImageSetConfiguration` file [source,yaml] diff --git a/modules/oc-mirror-imageset-config-params.adoc b/modules/oc-mirror-imageset-config-params.adoc index 7c388d69be..63a91b2dfa 100644 --- a/modules/oc-mirror-imageset-config-params.adoc +++ b/modules/oc-mirror-imageset-config-params.adoc @@ -156,7 +156,7 @@ operators: |String. For example: `5.2.3-31` |`mirror.operators.packages.channels.minBundle` -|The name of the minimum bundle to include, plus all bundles in the upgrade graph to the channel head. Set this field only if the named bundle has no semantic version metadata. +|The name of the minimum bundle to include, plus all bundles in the update graph to the channel head. Set this field only if the named bundle has no semantic version metadata. |String. For example: `bundleName` |`mirror.operators.packages.channels.minVersion` diff --git a/modules/rhel-compute-about-hooks.adoc b/modules/rhel-compute-about-hooks.adoc index cc7aa9fd83..4ff4b1595e 100644 --- a/modules/rhel-compute-about-hooks.adoc +++ b/modules/rhel-compute-about-hooks.adoc @@ -4,7 +4,7 @@ :_content-type: CONCEPT [id="rhel-compute-about-hooks_{context}"] -= About Ansible hooks for upgrades += About Ansible hooks for updates When you update {product-title}, you can run custom tasks on your Red Hat Enterprise Linux (RHEL) nodes during specific operations by using _hooks_. Hooks @@ -22,4 +22,4 @@ Hooks have the following important limitations: modified or removed in future {product-title} releases. - Hooks do not have error handling, so an error in a hook halts the update process. If you get an error, you must address the problem and then start the -upgrade again. \ No newline at end of file +update again. \ No newline at end of file diff --git a/modules/rhel-compute-updating.adoc b/modules/rhel-compute-updating.adoc index 8af96374d4..c59ef4cc4e 100644 --- a/modules/rhel-compute-updating.adoc +++ b/modules/rhel-compute-updating.adoc @@ -17,7 +17,7 @@ You can also update your compute machines to another minor version of {product-t [IMPORTANT] ==== -You cannot upgrade {op-system-base} 7 compute machines to {op-system-base} 8. You must deploy new {op-system-base} 8 hosts, and the old {op-system-base} 7 hosts should be removed. +You cannot update {op-system-base} 7 compute machines to {op-system-base} 8. You must deploy new {op-system-base} 8 hosts and remove the old {op-system-base} 7 hosts.
==== .Prerequisites @@ -59,7 +59,7 @@ By default, the base OS RHEL with "Minimal" installation option enables firewall + [IMPORTANT] ==== -As of {product-title} 4.11, the Ansible playbooks are provided only for {op-system-base} 8. If a {op-system-base} 7 system was used as a host for the {product-title} 4.10 Ansible playbooks, you must either upgrade the Ansible host to {op-system-base} 8, or create a new Ansible host on a {op-system-base} 8 system and copy over the inventories from the old Ansible host. +As of {product-title} 4.11, the Ansible playbooks are provided only for {op-system-base} 8. If a {op-system-base} 7 system was used as a host for the {product-title} 4.10 Ansible playbooks, you must either update the Ansible host to {op-system-base} 8, or create a new Ansible host on a {op-system-base} 8 system and copy over the inventories from the old Ansible host. ==== .. On the machine that you run the Ansible playbooks, update the Ansible package: @@ -119,7 +119,7 @@ $ ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml <1> + [NOTE] ==== -The `upgrade` playbook only upgrades the {product-title} packages. It does not update the operating system packages. +The `upgrade` playbook only updates the {product-title} packages. It does not update the operating system packages. ==== . After you update all of the workers, confirm that all of your cluster nodes have updated to the new version: diff --git a/modules/scheduling-virtual-hardware-update-on-vsphere.adoc b/modules/scheduling-virtual-hardware-update-on-vsphere.adoc index eabcfe473e..9b54e4463f 100644 --- a/modules/scheduling-virtual-hardware-update-on-vsphere.adoc +++ b/modules/scheduling-virtual-hardware-update-on-vsphere.adoc @@ -7,4 +7,4 @@ Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following link:https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-96C06236-C271-4CFE-857E-22D1FDEECC95.html[Schedule a Compatibility Upgrade for a Virtual Machine] in the VMware documentation. -When scheduling an upgrade prior to performing an upgrade of {product-title}, the virtual hardware update occurs when the nodes are rebooted during the course of the {product-title} upgrade. +If you schedule a virtual hardware update before you update {product-title}, the virtual hardware update occurs when the nodes are rebooted during the {product-title} update. diff --git a/modules/update-conditional-updates.adoc b/modules/update-conditional-updates.adoc index e076723329..6a826bad0d 100644 --- a/modules/update-conditional-updates.adoc +++ b/modules/update-conditional-updates.adoc @@ -5,10 +5,10 @@ :_content-type: PROCEDURE [id="update-conditional-upgrade-path{context}"] -= Updating along a conditional upgrade path += Updating along a conditional update path -You can update along a recommended conditional upgrade path using the web console or the OpenShift CLI (`oc`). -When a conditional update is not recommended for your cluster, you can update along a conditional upgrade path using the OpenShift CLI (`oc`) 4.10 or later. +You can update along a recommended conditional update path using the web console or the OpenShift CLI (`oc`). +When a conditional update is not recommended for your cluster, you can update along a conditional update path using the OpenShift CLI (`oc`) 4.10 or later.
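As a hedged sketch of the CLI flow that this module goes on to describe, the following commands first list conditional updates that are supported but not recommended, and then apply one anyway; the target version `4.12.15` is only an illustration:

[source,terminal]
----
$ oc adm upgrade --include-not-recommended

$ oc adm upgrade --allow-not-recommended --to 4.12.15
----

Review the risk text that `--include-not-recommended` prints for your cluster before forcing an update along a path that is not recommended.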
.Procedure diff --git a/modules/update-mirror-repository.adoc b/modules/update-mirror-repository.adoc index 269b76c9ca..c9a15071b5 100644 --- a/modules/update-mirror-repository.adoc +++ b/modules/update-mirror-repository.adoc @@ -24,7 +24,7 @@ endif::[] .Procedure -. Use the link:https://access.redhat.com/labs/ocpupgradegraph/update_channel[Red Hat {product-title} Upgrade Graph visualizer and update planner] to plan an update from one version to another. The OpenShift Upgrade Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions. +. Use the link:https://access.redhat.com/labs/ocpupgradegraph/update_channel[Red Hat {product-title} Update Graph visualizer and update planner] to plan an update from one version to another. The OpenShift Update Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions. . Set the required environment variables: .. Export the release version: @@ -145,7 +145,7 @@ $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mi + <1> For `REMOVABLE_MEDIA_PATH`, you must use the same path that you specified when you mirrored the images. -... Use `oc` command-line interface (CLI) to log in to the cluster that you are upgrading. +... Use the `oc` command-line interface (CLI) to log in to the cluster that you are updating. ... Apply the mirrored release image signature config map to the connected cluster: + diff --git a/modules/update-service-graph-data.adoc b/modules/update-service-graph-data.adoc index e5f5f73450..3ff24aab77 100644 --- a/modules/update-service-graph-data.adoc +++ b/modules/update-service-graph-data.adoc @@ -5,7 +5,7 @@ [id="update-service-graph-data_{context}"] = Creating the OpenShift Update Service graph data container image -The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the upgrade graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service. +The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the update graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service.
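As an illustration of such an init container image, the graph data can be packaged with a Dockerfile along the following lines. This is a minimal sketch: it assumes the graph data tarball is fetched from `https://api.openshift.com/api/upgrades_info/graph-data` at build time, that a UBI base image such as `ubi8/ubi:8.6` is acceptable, and that the service mounts the shared volume at `/var/lib/cincinnati/graph-data`.

[source,Dockerfile]
----
FROM registry.access.redhat.com/ubi8/ubi:8.6

# Fetch the Cincinnati graph data at build time (requires internet access).
RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data

# Unpack the graph data into the image.
RUN mkdir -p /var/lib/cincinnati-graph-data && \
    tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner

# During pod initialization, copy the local graph data into the volume
# shared with the OpenShift Update Service container.
CMD ["/bin/bash", "-c", "exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"]
----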
[NOTE] ==== diff --git a/modules/update-upgrading-cli.adoc b/modules/update-upgrading-cli.adoc index 8a7edda1da..e6353cee56 100644 --- a/modules/update-upgrading-cli.adoc +++ b/modules/update-upgrading-cli.adoc @@ -55,11 +55,11 @@ VERSION IMAGE + [NOTE] ==== -For details and information on how to perform an `EUS-to-EUS` channel upgrade, please refer to the +For information about how to perform an `EUS-to-EUS` channel update, see the _Preparing to perform an EUS-to-EUS upgrade_ page, listed in the Additional resources section. ==== -. Based on your organization requirements, set the appropriate upgrade channel. For example, you can set your channel to `stable-4.12`, `fast-4.12`, or `eus-4.12`. For more information about channels, refer to _Understanding update channels and releases_ listed in the Additional resources section. +. Based on your organization's requirements, set the appropriate update channel. For example, you can set your channel to `stable-4.12`, `fast-4.12`, or `eus-4.12`. For more information about channels, refer to _Understanding update channels and releases_ listed in the Additional resources section. + [source,terminal] ---- @@ -129,7 +129,7 @@ Cluster version is Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.10 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10) No updates available. You may force an upgrade to a specific release image, but doing so might not be supported and might result in downtime or data loss. ---- + [NOTE] @@ -141,7 +141,7 @@ NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.10.26 True True 24m Unable to apply 4.11.0-rc.7: an unknown error has occurred: MultipleErrors ---- ==== -. If you are upgrading your cluster to the next minor version, such as version X.y to X.(y+1), it is recommended to confirm that your nodes are upgraded before deploying workloads that rely on a new feature: +. If you are updating your cluster to the next minor version, such as version X.y to X.(y+1), confirm that your nodes are updated before you deploy workloads that rely on a new feature: + [source,terminal] ---- diff --git a/modules/update-upgrading-web.adoc b/modules/update-upgrading-web.adoc index 421ddb9c33..322e409b7f 100644 --- a/modules/update-upgrading-web.adoc +++ b/modules/update-upgrading-web.adoc @@ -55,7 +55,7 @@ The Input channel + [NOTE] ==== -If you are upgrading your cluster to the next minor version, like version 4.y to 4.(y+1), it is recommended to confirm your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the *Cluster Settings* page. +If you are updating your cluster to the next minor version, such as version 4.y to 4.(y+1), confirm that your nodes are updated before you deploy workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the *Cluster Settings* page. ==== . After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel.
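Tying together the CLI steps shown in these hunks (setting a channel, then starting the update), a minimal sequence might look like the following, with `stable-4.12` standing in for whichever channel fits your organization:

[source,terminal]
----
$ oc adm upgrade channel stable-4.12

$ oc adm upgrade --to-latest=true
----

Running `oc adm upgrade` with no arguments afterward shows the progress of the update and any remaining recommended updates.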
@@ -75,7 +75,7 @@ ifdef::rhel[] + [NOTE] ==== -When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the `NotReady` state for the cluster to finish updating. +When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the `upgrade` playbook against each RHEL machine as it enters the `NotReady` state for the cluster to finish updating. ==== endif::rhel[] diff --git a/modules/updating-eus-to-eus-layered-products.adoc b/modules/updating-eus-to-eus-layered-products.adoc index d515897582..d85d656f31 100644 --- a/modules/updating-eus-to-eus-layered-products.adoc +++ b/modules/updating-eus-to-eus-layered-products.adoc @@ -4,11 +4,11 @@ :_content-type: PROCEDURE [id="updating-eus-to-eus-olm-operators_{context}"] -= EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager += EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager In addition to the EUS-to-EUS update steps mentioned for the web console and CLI, there are additional steps to consider when performing EUS-to-EUS updates for clusters with the following: -* Layered products +* Layered products * Operators installed through Operator Lifecycle Manager (OLM) .What is a layered product? @@ -25,15 +25,15 @@ As an example, here are the steps to perform an EUS-to-EUS update from <4.y> to .Example workflow . Pause the worker machine pools. -. Upgrade OpenShift <4.y> -> OpenShift <4.y+1>. -. Upgrade ODF <4.y> -> ODF <4.y+1>. -. Upgrade OpenShift <4.y+1> -> OpenShift <4.y+2>. -. Upgrade to ODF <4.y+2>. +. Update OpenShift <4.y> -> OpenShift <4.y+1>. +. Update ODF <4.y> -> ODF <4.y+1>. +. Update OpenShift <4.y+1> -> OpenShift <4.y+2>. +. Update to ODF <4.y+2>. . Unpause the worker machine pools. [NOTE] ==== -The upgrade to ODF <4.y+2> can happen before or after worker machine pools have been unpaused. +The update to ODF <4.y+2> can happen before or after worker machine pools have been unpaused. ==== diff --git a/modules/updating-sno.adoc b/modules/updating-sno.adoc index 208170a0c6..a362628ec9 100644 --- a/modules/updating-sno.adoc +++ b/modules/updating-sno.adoc @@ -13,7 +13,7 @@ However, note the following limitations: * The prerequisite to pause the `MachineHealthCheck` resources is not required because there is no other node to perform the health check. -* Restoring a single-node {product-title} cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your upgrade fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup. +* Restoring a single-node {product-title} cluster using an etcd backup is not officially supported. However, it is good practice to perform an etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup. * Updating a single-node {product-title} cluster requires downtime and can include an automatic reboot.
The amount of downtime depends on the update payload, as described in the following scenarios: diff --git a/updating/updating_a_cluster/eus-eus-update.adoc b/updating/updating_a_cluster/eus-eus-update.adoc index 9a42a9db94..34f9c1921c 100644 --- a/updating/updating_a_cluster/eus-eus-update.adoc +++ b/updating/updating_a_cluster/eus-eus-update.adoc @@ -18,7 +18,7 @@ EUS-to-EUS updates are only viable between *even-numbered minor versions* of {pr There are a number of caveats to consider when attempting an EUS-to-EUS update. * EUS-to-EUS updates are only offered after updates between all versions involved have been made available in `stable` channels. -* If you encounter issues during or after upgrading to the odd-numbered minor version but before upgrading to the next even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward. +* If you encounter issues during or after updating to the odd-numbered minor version but before updating to the next even-numbered version, remediation of those issues might require that non-control plane hosts complete the update to the odd-numbered version before moving forward. * You can do a partial update by updating the worker or custom pool nodes to accommodate the time it takes for maintenance. * You can complete the update process during multiple maintenance windows by pausing at intermediate steps. However, plan to complete the entire update within 60 days. This is critical to ensure that normal cluster automation processes are completed. diff --git a/updating/updating_a_cluster/migrating-to-multi-payload.adoc b/updating/updating_a_cluster/migrating-to-multi-payload.adoc index 26dc126bad..0326fb4d5d 100644 --- a/updating/updating_a_cluster/migrating-to-multi-payload.adoc +++ b/updating/updating_a_cluster/migrating-to-multi-payload.adoc @@ -12,7 +12,7 @@ For information about configuring your multi-architecture compute machines, see [IMPORTANT] ==== -Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture upgrade payload. +Migration from a multi-architecture payload to a single-architecture payload is not supported. After a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture update payload. ==== include::modules/migrating-to-multi-arch-cli.adoc[leveloffset=+1] diff --git a/updating/updating_a_cluster/updating-cluster-cli.adoc b/updating/updating_a_cluster/updating-cluster-cli.adoc index 0e02b4c309..7c2efc9d11 100644 --- a/updating/updating_a_cluster/updating-cluster-cli.adoc +++ b/updating/updating_a_cluster/updating-cluster-cli.adoc @@ -18,18 +18,18 @@ You can update, or upgrade, an {product-title} cluster within a minor version by * Have access to the cluster as a user with `admin` privileges. See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions]. * Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state. -* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}.
Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install. +* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install. * Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information. * Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]. -* Ensure that you address all `Upgradeable=False` conditions so the cluster allows an update to the next minor version. An alert displays at the top of the *Cluster Settings* page when you have one or more cluster Operators that cannot be upgraded. You can still update to the next available patch update for the minor release you are currently on. +* Ensure that you address all `Upgradeable=False` conditions so the cluster allows an update to the next minor version. An alert displays at the top of the *Cluster Settings* page when you have one or more cluster Operators that cannot be updated. You can still update to the next available patch update for the minor release you are currently on. * Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13]. -* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. +* If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain, as in the sketch that follows this list.
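The following `PodDisruptionBudget` sketch illustrates the blocking configuration that the last prerequisite describes; all names are hypothetical:

[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb # hypothetical name
  namespace: example-namespace # hypothetical namespace
spec:
  minAvailable: 1 # eviction is blocked whenever only one matching pod is running
  selector:
    matchLabels:
      app: example-app
----

If the selected application runs a single replica, `minAvailable: 1` means that no pod can be evicted, so the drain that applies pending machine configs cannot complete until the budget is relaxed or the application is scaled up.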
[IMPORTANT] ==== * When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. -* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster. +* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. ==== [role="_additional-resources"] diff --git a/updating/updating_a_cluster/updating-cluster-rhel-compute.adoc b/updating/updating_a_cluster/updating-cluster-rhel-compute.adoc index 3d53e02919..67605a77a3 100644 --- a/updating/updating_a_cluster/updating-cluster-rhel-compute.adoc +++ b/updating/updating_a_cluster/updating-cluster-rhel-compute.adoc @@ -15,9 +15,9 @@ those machines. * Have access to the cluster as a user with `admin` privileges. See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions]. * Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state. -* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install. +* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install. * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]. -* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. +* If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process.
If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. [role="_additional-resources"] .Additional resources diff --git a/updating/updating_a_cluster/updating-cluster-web-console.adoc b/updating/updating_a_cluster/updating-cluster-web-console.adoc index 665d566e23..0b2e856d92 100644 --- a/updating/updating_a_cluster/updating-cluster-web-console.adoc +++ b/updating/updating_a_cluster/updating-cluster-web-console.adoc @@ -18,19 +18,19 @@ Use the web console or `oc adm upgrade channel _<channel>_` to change the update * Have access to the cluster as a user with `admin` privileges. See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions]. * Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state. -* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install. +* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install. * Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information. * Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. //remove this???^ or maybe just add another bullet that you can break up the update? * To accommodate the time it takes to update, you are able to do a partial update by updating the worker or custom pool nodes. You can pause and resume within the progress bar of each pool. * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]. * Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13].
-* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. +* If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. [IMPORTANT] ==== * When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. -* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster. +* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. ==== [role="_additional-resources"] diff --git a/updating/updating_a_cluster/updating-hosted-control-planes.adoc b/updating/updating_a_cluster/updating-hosted-control-planes.adoc index 0ce0868eca..c30c4f38f8 100644 --- a/updating/updating_a_cluster/updating-hosted-control-planes.adoc +++ b/updating/updating_a_cluster/updating-hosted-control-planes.adoc @@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[] toc::[] -On hosted control planes for {product-title}, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node upgrades. +On hosted control planes for {product-title}, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node updates.
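As a hedged illustration of that decoupling, the control plane and the nodes are moved to a new release by patching two different resources. The `clusters` namespace, the resource names, and the release image are placeholders, not values from this documentation:

[source,terminal]
----
$ oc -n clusters patch hostedcluster <cluster_name> --type=merge \
  -p '{"spec":{"release":{"image":"<new_release_image>"}}}'

$ oc -n clusters patch nodepool <nodepool_name> --type=merge \
  -p '{"spec":{"release":{"image":"<new_release_image>"}}}'
----

The first patch updates only the hosted control plane; each node pool is updated independently by the second patch, which is what allows the decoupled rollout described above.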
include::modules/updates-for-hosted-control-planes.adoc[leveloffset=+1] include::modules/updating-node-pools-for-hcp.adoc[leveloffset=+1] diff --git a/updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc b/updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc index ab5ce49aac..c5502afccd 100644 --- a/updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc +++ b/updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc @@ -23,11 +23,11 @@ See xref:../../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define * You must have a recent xref:../../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state]. * You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]. -* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. +* If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. [NOTE] ==== -If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. +If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. ==== include::modules/machine-health-checks-pausing.adoc[leveloffset=+1]
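To check the machine config pool prerequisite mentioned in the lists above, and to pause a pool for a canary rollout, the following commands are a minimal sketch that uses the default `worker` pool:

[source,terminal]
----
$ oc get machineconfigpools

$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'

$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'
----

Nodes in a paused pool are skipped during the update, so unpause every pool before you consider the update complete.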