Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00
Fixing upgrade wording in update docs
committed by openshift-cherrypick-robot
parent 14d9e5381b
commit d604d37693
@@ -68,7 +68,7 @@ command-line interface. You can install `oc` on Linux, Windows, or macOS.
 ====
 If you installed an earlier version of `oc`, you cannot use it to complete all of the commands in {product-title} {product-version}. Download and install the new version of `oc`.
 ifdef::restricted[]
-If you are upgrading a cluster in a disconnected environment, install the `oc` version that you plan to upgrade to.
+If you are updating a cluster in a disconnected environment, install the `oc` version that you plan to update to.
 endif::restricted[]
 ====
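For context, after installing the new `oc` you can confirm that the client version matches the release that you plan to update to. A minimal check (output format varies by release):

[source,terminal]
----
$ oc version --client
----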
@@ -6,7 +6,7 @@
 [id="machine-health-checks-pausing-web-console_{context}"]
 = Pausing a MachineHealthCheck resource by using the web console
 
-During the upgrade process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster.
+During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster.
 
 .Prerequisites
@@ -8,7 +8,7 @@
 [id="machine-health-checks-pausing_{context}"]
 = Pausing a MachineHealthCheck resource
 
-During the upgrade process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster.
+During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the `MachineHealthCheck` resources before updating the cluster.
 
 .Prerequisites
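For context, the module this hunk touches pauses a `MachineHealthCheck` resource with the `cluster.x-k8s.io/paused` annotation. A sketch, with a placeholder resource name; the trailing `-` form removes the annotation again after the update finishes:

[source,terminal]
----
$ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=""
$ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-
----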
@@ -12,9 +12,9 @@ The following `ImageSetConfiguration` file examples show the configuration for v
 // Moved to first; unchanged
 [discrete]
 [id="oc-mirror-image-set-examples-shortest-upgrade-path_{context}"]
-== Use case: Including the shortest {product-title} upgrade path
+== Use case: Including the shortest {product-title} update path
 
-The following `ImageSetConfiguration` file uses a local storage backend and includes all {product-title} versions along the shortest upgrade path from the minimum version of `4.11.37` to the maximum version of `4.12.15`.
+The following `ImageSetConfiguration` file uses a local storage backend and includes all {product-title} versions along the shortest update path from the minimum version of `4.11.37` to the maximum version of `4.12.15`.
 
 .Example `ImageSetConfiguration` file
 [source,yaml]
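The YAML example is truncated by the hunk boundary; a minimal sketch of an `ImageSetConfiguration` matching the described use case (the storage path is illustrative) could look like:

[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: /home/user/metadata  # local storage backend
mirror:
  platform:
    channels:
    - name: stable-4.12
      minVersion: 4.11.37
      maxVersion: 4.12.15
      shortestPath: true  # include only the shortest update path
----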
@@ -156,7 +156,7 @@ operators:
 |String. For example: `5.2.3-31`
 
 |`mirror.operators.packages.channels.minBundle`
-|The name of the minimum bundle to include, plus all bundles in the upgrade graph to the channel head. Set this field only if the named bundle has no semantic version metadata.
+|The name of the minimum bundle to include, plus all bundles in the update graph to the channel head. Set this field only if the named bundle has no semantic version metadata.
 |String. For example: `bundleName`
 
 |`mirror.operators.packages.channels.minVersion`
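For context, `minBundle` is set per channel in the `ImageSetConfiguration`. A hedged sketch; the catalog, package, and bundle names are placeholders:

[source,yaml]
----
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
    packages:
    - name: <package_name>
      channels:
      - name: stable
        minBundle: <bundle_name>  # only when the bundle has no semver metadata
----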
@@ -4,7 +4,7 @@
 
 :_content-type: CONCEPT
 [id="rhel-compute-about-hooks_{context}"]
-= About Ansible hooks for upgrades
+= About Ansible hooks for updates
 
 When you update {product-title}, you can run custom tasks on your Red Hat
 Enterprise Linux (RHEL) nodes during specific operations by using _hooks_. Hooks
@@ -22,4 +22,4 @@ Hooks have the following important limitations:
 modified or removed in future {product-title} releases.
 - Hooks do not have error handling, so an error in a hook halts the update
 process. If you get an error, you must address the problem and then start the
-upgrade again.
+update again.
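For context, hooks for RHEL node updates are assigned in the Ansible inventory file. A sketch, assuming the `openshift_node_*_hook` variable names documented for these playbooks; the file paths are hypothetical:

[source,ini]
----
[all:vars]
# Run before and after each node update; an error in either halts the update.
openshift_node_pre_upgrade_hook=/home/user/hooks/pre_node.yml
openshift_node_post_upgrade_hook=/home/user/hooks/post_node.yml
----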
@@ -17,7 +17,7 @@ You can also update your compute machines to another minor version of {product-t
 
 [IMPORTANT]
 ====
-You cannot upgrade {op-system-base} 7 compute machines to {op-system-base} 8. You must deploy new {op-system-base} 8 hosts, and the old {op-system-base} 7 hosts should be removed.
+You cannot update {op-system-base} 7 compute machines to {op-system-base} 8. You must deploy new {op-system-base} 8 hosts, and the old {op-system-base} 7 hosts should be removed.
 ====
 
 .Prerequisites
@@ -59,7 +59,7 @@ By default, the base OS RHEL with "Minimal" installation option enables firewall
 +
 [IMPORTANT]
 ====
-As of {product-title} 4.11, the Ansible playbooks are provided only for {op-system-base} 8. If a {op-system-base} 7 system was used as a host for the {product-title} 4.10 Ansible playbooks, you must either upgrade the Ansible host to {op-system-base} 8, or create a new Ansible host on a {op-system-base} 8 system and copy over the inventories from the old Ansible host.
+As of {product-title} 4.11, the Ansible playbooks are provided only for {op-system-base} 8. If a {op-system-base} 7 system was used as a host for the {product-title} 4.10 Ansible playbooks, you must either update the Ansible host to {op-system-base} 8, or create a new Ansible host on a {op-system-base} 8 system and copy over the inventories from the old Ansible host.
 ====
 
 .. On the machine that you run the Ansible playbooks, update the Ansible package:
@@ -119,7 +119,7 @@ $ ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml <1>
 +
 [NOTE]
 ====
-The `upgrade` playbook only upgrades the {product-title} packages. It does not update the operating system packages.
+The `upgrade` playbook only updates the {product-title} packages. It does not update the operating system packages.
 ====
 
 . After you update all of the workers, confirm that all of your cluster nodes have updated to the new version:
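For context, the confirmation step that this hunk ends on typically lists node versions; a sketch:

[source,terminal]
----
$ oc get node
----

Every node's `VERSION` column should report the new kubelet version before you deploy workloads that depend on new features.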
@@ -7,4 +7,4 @@
 
 Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following link:https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-96C06236-C271-4CFE-857E-22D1FDEECC95.html[Schedule a Compatibility Upgrade for a Virtual Machine] in the VMware documentation.
 
-When scheduling an upgrade prior to performing an upgrade of {product-title}, the virtual hardware update occurs when the nodes are rebooted during the course of the {product-title} upgrade.
+When scheduling an update prior to performing an update of {product-title}, the virtual hardware update occurs when the nodes are rebooted during the course of the {product-title} update.
@@ -5,10 +5,10 @@
 
 :_content-type: PROCEDURE
 [id="update-conditional-upgrade-path{context}"]
-= Updating along a conditional upgrade path
+= Updating along a conditional update path
 
-You can update along a recommended conditional upgrade path using the web console or the OpenShift CLI (`oc`).
-When a conditional update is not recommended for your cluster, you can update along a conditional upgrade path using the OpenShift CLI (`oc`) 4.10 or later.
+You can update along a recommended conditional update path using the web console or the OpenShift CLI (`oc`).
+When a conditional update is not recommended for your cluster, you can update along a conditional update path using the OpenShift CLI (`oc`) 4.10 or later.
 
 .Procedure
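For context, a sketch of the CLI path this procedure describes: `oc adm upgrade --include-not-recommended` lists conditional updates along with their risks, and `--allow-not-recommended` accepts a path that is not recommended (the version placeholder is illustrative):

[source,terminal]
----
$ oc adm upgrade --include-not-recommended
$ oc adm upgrade --allow-not-recommended --to <version>
----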
@@ -24,7 +24,7 @@ endif::[]
 
 .Procedure
 
-. Use the link:https://access.redhat.com/labs/ocpupgradegraph/update_channel[Red Hat {product-title} Upgrade Graph visualizer and update planner] to plan an update from one version to another. The OpenShift Upgrade Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions.
+. Use the link:https://access.redhat.com/labs/ocpupgradegraph/update_channel[Red Hat {product-title} Update Graph visualizer and update planner] to plan an update from one version to another. The OpenShift Update Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions.
 
 . Set the required environment variables:
 .. Export the release version:
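For context, the export step that this hunk ends on sets an environment variable; a sketch, assuming the `OCP_RELEASE` name used elsewhere in the mirroring procedures:

[source,terminal]
----
$ export OCP_RELEASE=<release_version>
----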
@@ -145,7 +145,7 @@ $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mi
 +
 <1> For `REMOVABLE_MEDIA_PATH`, you must use the same path that you specified when you mirrored the images.
 
-... Use `oc` command-line interface (CLI) to log in to the cluster that you are upgrading.
+... Use the `oc` command-line interface (CLI) to log in to the cluster that you are updating.
 
 ... Apply the mirrored release image signature config map to the connected cluster:
 +
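For context, logging in to the cluster before applying the signature config map can be done with a token; a minimal sketch with placeholder values:

[source,terminal]
----
$ oc login --token=<token> --server=https://api.<cluster_domain>:6443
----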
@@ -5,7 +5,7 @@
 [id="update-service-graph-data_{context}"]
 = Creating the OpenShift Update Service graph data container image
 
-The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the upgrade graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service.
+The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the update graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service.
 
 [NOTE]
 ====
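For context, after building the graph data container image described here, you push it to a registry that the OpenShift Update Service can reach; a sketch with a hypothetical registry and tag:

[source,terminal]
----
$ podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest .
$ podman push registry.example.com/openshift/graph-data:latest
----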
@@ -55,11 +55,11 @@ VERSION IMAGE
 +
 [NOTE]
 ====
-For details and information on how to perform an `EUS-to-EUS` channel upgrade, please refer to the
+For details and information on how to perform an `EUS-to-EUS` channel update, see the
 _Preparing to perform an EUS-to-EUS upgrade_ page, listed in the Additional resources section.
 ====
 
-. Based on your organization requirements, set the appropriate upgrade channel. For example, you can set your channel to `stable-4.12`, `fast-4.12`, or `eus-4.12`. For more information about channels, refer to _Understanding update channels and releases_ listed in the Additional resources section.
+. Based on your organization requirements, set the appropriate update channel. For example, you can set your channel to `stable-4.12`, `fast-4.12`, or `eus-4.12`. For more information about channels, refer to _Understanding update channels and releases_ listed in the Additional resources section.
 +
 [source,terminal]
 ----
@@ -129,7 +129,7 @@ Cluster version is <version>
 Upstream is unset, so the cluster will use an appropriate default.
 Channel: stable-4.10 (available channels: candidate-4.10, candidate-4.11, eus-4.10, fast-4.10, fast-4.11, stable-4.10)
 
-No updates available. You may force an upgrade to a specific release image, but doing so might not be supported and might result in downtime or data loss.
+No updates available. You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss.
 ----
 +
 [NOTE]
@@ -141,7 +141,7 @@ NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
 version 4.10.26 True True 24m Unable to apply 4.11.0-rc.7: an unknown error has occurred: MultipleErrors
 ----
 ====
-. If you are upgrading your cluster to the next minor version, such as version X.y to X.(y+1), it is recommended to confirm that your nodes are upgraded before deploying workloads that rely on a new feature:
+. If you are updating your cluster to the next minor version, such as version X.y to X.(y+1), it is recommended to confirm that your nodes are updated before deploying workloads that rely on a new feature:
 +
 [source,terminal]
 ----
@@ -55,7 +55,7 @@ The Input channel
 +
 [NOTE]
 ====
-If you are upgrading your cluster to the next minor version, like version 4.y to 4.(y+1), it is recommended to confirm your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the *Cluster Settings* page.
+If you are updating your cluster to the next minor version, like version 4.y to 4.(y+1), it is recommended to confirm your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the *Cluster Settings* page.
 ====
 
 . After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel.
@@ -75,7 +75,7 @@ ifdef::rhel[]
 +
 [NOTE]
 ====
-When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the `NotReady` state for the cluster to finish updating.
+When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the update playbook against each RHEL machine as it enters the `NotReady` state for the cluster to finish updating.
 ====
 
 endif::rhel[]
@@ -4,11 +4,11 @@
 
 :_content-type: PROCEDURE
 [id="updating-eus-to-eus-olm-operators_{context}"]
-= EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager
+= EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager
 
 In addition to the EUS-to-EUS update steps mentioned for the web console and CLI, there are additional steps to consider when performing EUS-to-EUS updates for clusters with the following:
 
-* Layered products
+* Layered products
 * Operators installed through Operator Lifecycle Manager (OLM)
 
 .What is a layered product?
@@ -25,15 +25,15 @@ As an example, here are the steps to perform an EUS-to-EUS update from <4.y> to
 
 .Example workflow
 . Pause the worker machine pools.
-. Upgrade OpenShift <4.y> -> OpenShift <4.y+1>.
-. Upgrade ODF <4.y> -> ODF <4.y+1>.
-. Upgrade OpenShift <4.y+1> -> OpenShift <4.y+2>.
-. Upgrade to ODF <4.y+2>.
+. Update OpenShift <4.y> -> OpenShift <4.y+1>.
+. Update ODF <4.y> -> ODF <4.y+1>.
+. Update OpenShift <4.y+1> -> OpenShift <4.y+2>.
+. Update to ODF <4.y+2>.
 . Unpause the worker machine pools.
 
 [NOTE]
 ====
-The upgrade to ODF <4.y+2> can happen before or after worker machine pools have been unpaused.
+The update to ODF <4.y+2> can happen before or after worker machine pools have been unpaused.
 ====
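For context, pausing and unpausing the worker machine pools in this workflow is done by patching the `MachineConfigPool`; a sketch:

[source,terminal]
----
$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'
$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'
----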
@@ -13,7 +13,7 @@ However, note the following limitations:
 
 * The prerequisite to pause the `MachineHealthCheck` resources is not required because there is no other node to perform the health check.
 
-* Restoring a single-node {product-title} cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your upgrade fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup.
+* Restoring a single-node {product-title} cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup.
 
 * Updating a single-node {product-title} cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios:
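For context, the etcd backup recommended here is taken on a control plane node; a hedged sketch of the documented backup script invocation, with a placeholder node name:

[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----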
@@ -18,7 +18,7 @@ EUS-to-EUS updates are only viable between *even-numbered minor versions* of {pr
 There are a number of caveats to consider when attempting an EUS-to-EUS update.
 
 * EUS-to-EUS updates are only offered after updates between all versions involved have been made available in `stable` channels.
-* If you encounter issues during or after upgrading to the odd-numbered minor version but before upgrading to the next even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward.
+* If you encounter issues during or after updating to the odd-numbered minor version but before updating to the next even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward.
 * You can do a partial update by updating the worker or custom pool nodes to accommodate the time it takes for maintenance.
 * You can complete the update process during multiple maintenance windows by pausing at intermediate steps. However, plan to complete the entire update within 60 days. This is critical to ensure that normal cluster automation processes are completed.
@@ -12,7 +12,7 @@ For information about configuring your multi-architecture compute machines, see
 
 [IMPORTANT]
 ====
-Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture upgrade payload.
+Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture update payload.
 ====
 
 include::modules/migrating-to-multi-arch-cli.adoc[leveloffset=+1]
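For context, migrating a cluster to the multi-architecture payload is a one-way CLI operation; a sketch, assuming the `--to-multi-arch` flag available in recent `oc` releases:

[source,terminal]
----
$ oc adm upgrade --to-multi-arch
----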
@@ -18,18 +18,18 @@ You can update, or upgrade, an {product-title} cluster within a minor version by
 * Have access to the cluster as a user with `admin` privileges.
 See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
 * Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state.
-* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
+* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
 * Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information.
 * Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
 * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
-* Ensure that you address all `Upgradeable=False` conditions so the cluster allows an update to the next minor version. An alert displays at the top of the *Cluster Settings* page when you have one or more cluster Operators that cannot be upgraded. You can still update to the next available patch update for the minor release you are currently on.
+* Ensure that you address all `Upgradeable=False` conditions so the cluster allows an update to the next minor version. An alert displays at the top of the *Cluster Settings* page when you have one or more cluster Operators that cannot be updated. You can still update to the next available patch update for the minor release you are currently on.
 * Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13].
-* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
+* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
 
 [IMPORTANT]
 ====
 * When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support.
-* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster.
+* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster.
 ====
 
 [role="_additional-resources"]
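For context, you can review pod disruption budgets that might block node drains before starting the update; a sketch (watch for budgets whose minimum-available value is 1):

[source,terminal]
----
$ oc get poddisruptionbudget --all-namespaces
----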
@@ -15,9 +15,9 @@ those machines.
 * Have access to the cluster as a user with `admin` privileges.
 See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
 * Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state.
-* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
+* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
 * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
-* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
+* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
 
 [role="_additional-resources"]
 .Additional resources
@@ -18,19 +18,19 @@ Use the web console or `oc adm upgrade channel _<channel>_` to change the update
 * Have access to the cluster as a user with `admin` privileges.
 See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
 * Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state.
-* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
+* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before updating to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
 * Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information.
 * Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
 //remove this???^ or maybe just add another bullet that you can break up the update?
 * To accommodate the time it takes to update, you are able to do a partial update by updating the worker or custom pool nodes. You can pause and resume within the progress bar of each pool.
 * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
 * Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13].
-* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
+* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
 
 [IMPORTANT]
 ====
 * When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support.
-* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster.
+* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster.
 ====
 
 [role="_additional-resources"]
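For context, the administrator acknowledgment for the removed Kubernetes 1.26 APIs is provided through the `admin-acks` config map; a hedged sketch, assuming the ack key follows the pattern used for the 4.12 to 4.13 update:

[source,terminal]
----
$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.12-kube-1.26-api-removals-in-4.13":"true"}}' --type=merge
----

You can also change the update channel here with the `oc adm upgrade channel <channel>` command cited in the hunk header above.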
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-On hosted control planes for {product-title}, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node upgrades.
+On hosted control planes for {product-title}, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node updates.
 
 include::modules/updates-for-hosted-control-planes.adoc[leveloffset=+1]
 include::modules/updating-node-pools-for-hcp.adoc[leveloffset=+1]
@@ -23,11 +23,11 @@ See xref:../../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define
 * You must have a recent xref:../../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
 * You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
 * If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
-* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
+* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
 
 [NOTE]
 ====
-If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
+If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
 ====
 
 include::modules/machine-health-checks-pausing.adoc[leveloffset=+1]
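For context, you can confirm that no machine config pool is paused before starting the update; a sketch using a JSONPath query:

[source,terminal]
----
$ oc get mcp -o jsonpath='{range .items[*]}{.metadata.name}{": paused="}{.spec.paused}{"\n"}{end}'
----

An empty or `false` value for each pool means its nodes will not be skipped during the update.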