
OSDOCS-5791: Creating Updating a cluster section

Author: Sebastian Kopacz
Date: 2023-06-13 16:59:34 -04:00
parent 03189817d8
commit 6ef5155c5d
58 changed files with 257 additions and 257 deletions

View File

@@ -602,47 +602,50 @@ Topics:
- Name: Preparing to update to OKD 4.13
File: updating-cluster-prepare
Distros: openshift-origin
- Name: Preparing to perform an EUS-to-EUS update
File: preparing-eus-eus-upgrade
Distros: openshift-enterprise
- Name: Performing a cluster update
Dir: updating_a_cluster
Topics:
- Name: Updating a cluster using the CLI
File: updating-cluster-cli
- Name: Updating a cluster using the web console
File: updating-cluster-web-console
- Name: Performing an EUS-to-EUS update
File: eus-eus-update
Distros: openshift-enterprise
- Name: Performing a canary rollout update
File: update-using-custom-machine-config-pools
- Name: Updating a cluster that includes RHEL compute machines
File: updating-cluster-rhel-compute
Distros: openshift-enterprise
- Name: Updating a cluster in a disconnected environment
Dir: updating_disconnected_cluster
Topics:
- Name: About cluster updates in a disconnected environment
File: index
- Name: Mirroring OpenShift Container Platform images
File: mirroring-image-repository
- Name: Updating a cluster in a disconnected environment using OSUS
File: disconnected-update-osus
Distros: openshift-enterprise
- Name: Updating a cluster in a disconnected environment without OSUS
File: disconnected-update
Distros: openshift-enterprise
- Name: Updating a cluster in a disconnected environment by using the CLI
File: disconnected-update
Distros: openshift-origin
- Name: Uninstalling OSUS from a cluster
File: uninstalling-osus
Distros: openshift-enterprise
- Name: Updating hardware on nodes running on vSphere
File: updating-hardware-on-nodes-running-on-vsphere
- Name: Migrating to a cluster with multi-architecture compute machines
File: migrating-to-multi-payload
- Name: Updating hosted control planes
File: updating-hosted-control-planes
- Name: Preparing to update a cluster with manually maintained credentials
File: preparing-manual-creds-update
- Name: Updating a cluster using the web console
File: updating-cluster-within-minor
- Name: Updating a cluster using the CLI
File: updating-cluster-cli
- Name: Migrating to a cluster with multi-architecture compute machines
File: migrating-to-multi-payload
- Name: Performing update using canary rollout strategy
File: update-using-custom-machine-config-pools
- Name: Updating a cluster that includes RHEL compute machines
File: updating-cluster-rhel-compute
Distros: openshift-enterprise
- Name: Updating a cluster in a disconnected environment
Dir: updating-restricted-network-cluster
Topics:
- Name: About cluster updates in a disconnected environment
File: index
- Name: Mirroring the OpenShift Container Platform image repository
File: mirroring-image-repository
- Name: Updating a cluster in a disconnected environment using OSUS
File: restricted-network-update-osus
Distros: openshift-enterprise
- Name: Updating a cluster in a disconnected environment without OSUS
File: restricted-network-update
Distros: openshift-enterprise
- Name: Updating a cluster in a disconnected environment by using the CLI
File: restricted-network-update
Distros: openshift-origin
- Name: Uninstalling OSUS from a cluster
File: uninstalling-osus
Distros: openshift-enterprise
- Name: Updating hardware on nodes running on vSphere
File: updating-hardware-on-nodes-running-on-vsphere
- Name: Preflight validation for Kernel Module Management (KMM) Modules
File: kmm-preflight-validation
- Name: Updating hosted control planes
File: updating-hosted-control-planes
# - Name: Troubleshooting an update
# File: updating-troubleshooting
---
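
For orientation, each entry in this topic map pairs a display `Name` with either a `File` (a single assembly) or a `Dir` of nested `Topics`. A minimal sketch of the new nesting, assuming the docs build derives output paths directly from this structure:

[source,yaml]
----
# Hypothetical fragment: how the new nesting is expressed.
- Name: Performing a cluster update
  Dir: updating_a_cluster            # publishes under updating/updating_a_cluster/
  Topics:
  - Name: Updating a cluster using the CLI
    File: updating-cluster-cli       # updating/updating_a_cluster/updating-cluster-cli
    Distros: openshift-enterprise    # optional; limits the entry to one distribution
----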

View File

@@ -104,7 +104,7 @@ be reviewed by cluster administrators and xref:../operators/admin/olm-adding-ope
* **xref:../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#scaling-cluster-monitoring-operator[Scale] and xref:../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[tune] clusters**: Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment.
* **xref:../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Understanding the OpenShift Update Service]**: Learn about installing and managing a local OpenShift Update Service for recommending {product-title} updates in disconnected environments.
* **xref:../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Using the OpenShift Update Service in a disconnected environment]**: Learn about installing and managing a local OpenShift Update Service for recommending {product-title} updates in disconnected environments.
* **xref:../monitoring/monitoring-overview.adoc#monitoring-overview[Monitor clusters]**:
Learn to xref:../monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[configure the monitoring stack].

View File

@@ -69,7 +69,7 @@ include::modules/oc-mirror-creating-image-set-config.adoc[leveloffset=+1]
* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#oc-mirror-imageset-config-params_installing-mirroring-disconnected[Image set configuration parameters]
* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#oc-mirror-image-set-examples_installing-mirroring-disconnected[Image set configuration examples]
* xref:../../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Using the OpenShift Update Service in a disconnected environment]
* xref:../../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Using the OpenShift Update Service in a disconnected environment]
[id="mirroring-image-set"]
== Mirroring an image set to a mirror registry
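
For context, the image set that gets mirrored is declared in an `ImageSetConfiguration` resource. A minimal sketch, with placeholder registry and channel values:

[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: mirror.example.com/metadata/oc-mirror  # placeholder metadata location
mirror:
  platform:
    channels:
    - name: stable-4.13   # illustrative channel
      type: ocp
----

You would then run something like `oc mirror --config=imageset-config.yaml docker://mirror.example.com` to push the set to the mirror registry.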
@@ -146,4 +146,4 @@ include::modules/oc-mirror-command-reference.adoc[leveloffset=+1]
[id="additional-resources_installing-mirroring-disconnected"]
== Additional resources
* xref:../../updating/updating-restricted-network-cluster/index.adoc#about-restricted-network-updates[About cluster updates in a disconnected environment]
* xref:../../updating/updating_a_cluster/updating_disconnected_cluster/index.adoc#about-restricted-network-updates[About cluster updates in a disconnected environment]

View File

@@ -27,8 +27,7 @@ include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]
* xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]
include::modules/mint-mode.adoc[leveloffset=+1]

View File

@@ -18,8 +18,7 @@ include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]
* xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]
[id="manually-creating-iam-azure-next-steps"]
== Next steps

View File

@@ -38,8 +38,8 @@ include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_installing-azure-stack-hub-network-customizations-cco"]
.Additional resources
* xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]
* xref:../../updating/updating_a_cluster/updating-cluster-web-console.adoc#manually-maintained-credentials-upgrade_updating-cluster-web-console[Updating a cluster using the web console]
* xref:../../updating/updating_a_cluster/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]
include::modules/azure-stack-hub-internal-ca.adoc[leveloffset=+1]

View File

@@ -22,8 +22,7 @@ include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]
* xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials]
include::modules/mint-mode.adoc[leveloffset=+1]

View File

@@ -46,7 +46,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
[id="installation-requirements-user-infra_{context}"]
== Requirements for a cluster with user-provisioned infrastructure

View File

@@ -34,7 +34,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
[id="installation-requirements-user-infra_{context}"]
== Requirements for a cluster with user-provisioned infrastructure

View File

@@ -38,7 +38,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
[id="installation-requirements-user-infra_{context}"]
== Requirements for a cluster with user-provisioned infrastructure

View File

@@ -45,7 +45,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party vSphere CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
include::modules/installation-vsphere-installer-infra-requirements.adoc[leveloffset=+1]

View File

@@ -49,7 +49,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party vSphere CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
[id="installation-requirements-user-infra_{context}"]
== Requirements for a cluster with user-provisioned infrastructure

View File

@@ -38,7 +38,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party vSphere CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
include::modules/installation-vsphere-installer-infra-requirements.adoc[leveloffset=+1]

View File

@@ -40,7 +40,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party vSphere CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
include::modules/installation-vsphere-installer-infra-requirements.adoc[leveloffset=+1]

View File

@@ -38,7 +38,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party vSphere CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
include::modules/installation-vsphere-installer-infra-requirements.adoc[leveloffset=+1]

View File

@@ -41,7 +41,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party vSphere CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
[id="installation-requirements-user-infra_{context}"]
== Requirements for a cluster with user-provisioned infrastructure

View File

@@ -41,7 +41,7 @@ include::modules/vmware-csi-driver-reqs.adoc[leveloffset=+1]
.Additional resources
* To remove a third-party vSphere CSI driver, see xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-csi-vsphere-install-issues_persistent-storage-csi-vsphere[Removing a third-party vSphere CSI Driver].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
* To update the hardware version for your vSphere nodes, see xref:../../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on nodes running in vSphere].
[id="installation-requirements-user-infra_{context}"]
== Requirements for a cluster with user-provisioned infrastructure

View File

@@ -24,7 +24,7 @@ include::modules/getting-cluster-version-status-and-update-details.adoc[leveloff
* See xref:../support/troubleshooting/troubleshooting-operator-issues.adoc#troubleshooting-operator-issues[Troubleshooting Operator issues] for information about investigating issues with Operators.
* See xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[Updating a cluster between minor versions] for more information on updating your cluster.
* See xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating a cluster using the web console] for more information on updating your cluster.
* See xref:../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases] for an overview about update release channels.

View File

@@ -69,7 +69,7 @@ For more information, see xref:../architecture/architecture-installation.adoc#in
In {product-title} 3.11, you upgraded your cluster by running Ansible playbooks. In {product-title} {product-version}, the cluster manages its own updates, including updates to {op-system-first} on cluster nodes. You can easily upgrade your cluster by using the web console or the `oc adm upgrade` command from the OpenShift CLI, and the Operators automatically upgrade themselves. If your {product-title} {product-version} cluster has {op-system-base} worker machines, you must still run an Ansible playbook to upgrade those worker machines.
For more information, see xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[Updating clusters].
For more information, see xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating clusters].
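
As a sketch of that CLI flow (the target version is a placeholder; available versions depend on your channel):

[source,terminal]
----
$ oc adm upgrade                # list the recommended update versions for the cluster
$ oc adm upgrade --to=4.13.4    # request an update to a specific recommended version
----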
[id="migration-considerations"]
== Migration considerations

View File

@@ -142,7 +142,7 @@ You use these resources to retrieve information about the cluster. Some configur
|`version`
|In {product-title} {product-version}, you must not customize the `ClusterVersion`
resource for production clusters. Instead, follow the process to
xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[update a cluster].
xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[update a cluster].
|`dns.config.openshift.io`
|`cluster`

View File

@@ -1,7 +1,7 @@
:_content-type: ASSEMBLY
:context: multi-architecture-configuration
[id="post-install-multi-architecture-configuration"]
= Configuring multi-architecture compute machines on an {product-title} cluster
include::_attributes/common-attributes.adoc[]
toc::[]
@@ -18,27 +18,27 @@ When there are nodes with multiple architectures in your cluster, the architectu
The Cluster Samples Operator is not supported on clusters with multi-architecture compute machines. Your cluster can be created without this capability. For more information, see xref:../post_installation_configuration/enabling-cluster-capabilities.adoc#enabling-cluster-capabilities[Enabling cluster capabilities]
====
For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see xref:../updating/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see xref:../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
include::modules/multi-architecture-verifying-cluster-compatibility.adoc[leveloffset=+1]
[role="_additional-resources"]
[role="_additional-resources"]
.Additional resources
* xref:../updating/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
* xref:../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
== Creating a cluster with multi-architecture compute machine on Azure
To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see xref:../installing/installing_azure/installing-azure-customizations.adoc[Installing a cluster on Azure with customizations]. You can then add an ARM64 compute machine set to your cluster to create a cluster with multi-architecture compute machines.
To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see xref:../installing/installing_azure/installing-azure-customizations.adoc#installing-azure-customizations[Installing a cluster on Azure with customizations]. You can then add an ARM64 compute machine set to your cluster to create a cluster with multi-architecture compute machines.
The following procedures explain how to generate an ARM64 boot image and create an Azure compute machine set that uses the ARM64 boot image. This adds ARM64 compute nodes to your cluster and deploys the number of ARM64 virtual machines (VMs) that you need.
include::modules/multi-architecture-creating-arm64-bootimage.adoc[leveloffset=+2]
include::modules/multi-architecture-modify-machine-set.adoc[leveloffset=+2]
[role="_additional-resources"]
[role="_additional-resources"]
.Additional resources
* xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc[Creating a compute machine set on Azure]
* xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#creating-machineset-azure[Creating a compute machine set on Azure]
== Creating a cluster with multi-architecture compute machines on AWS
@@ -46,19 +46,19 @@ To create an AWS cluster with multi-architecture compute machines, you must firs
include::modules/multi-architecture-modify-machine-set-aws.adoc[leveloffset=+2]
[role="_additional-resources"]
[role="_additional-resources"]
.Additional resources
* xref:../installing/installing_aws/installing-aws-customizations.adoc#installation-aws-arm-tested-machine-types_installing-aws-customizations[Tested instance types for AWS 64-bit ARM]
== Creating a cluster with multi-architecture compute machine on bare metal (Technology Preview)
To create a cluster with multi-architecture compute machines on bare metal, you must have an existing single-architecture bare metal cluster. For more information on bare metal installations, see xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[Installing a user provisioned cluster on bare metal]. You can then add 64-bit ARM compute machines to your {product-title} cluster on bare metal.
Before you can add 64-bit ARM nodes to your bare metal cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../updating/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
Before you can add 64-bit ARM nodes to your bare metal cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see xref:../updating/updating_a_cluster/migrating-to-multi-payload.adoc#migrating-to-multi-payload[Migrating to a cluster with multi-architecture compute machines].
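
As a sketch, the migration itself is a single CLI step, assuming the cluster already meets the version requirements:

[source,terminal]
----
$ oc adm upgrade --to-multi-arch   # move the cluster to the multi-architecture payload
$ oc adm upgrade                   # watch until the new payload finishes rolling out
----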
The following procedures explain how to create a {op-system} compute machine using an ISO image or network PXE booting. This will allow you to add ARM64 nodes to your bare metal cluster and deploy a cluster with multi-architecture compute machines.
:Featurename: Clusters with multi-architecture compute machines on bare metal user-provisioned installations
include::snippets/technology-preview.adoc[leveloffset=+2]
include::modules/machine-user-infra-machines-iso.adoc[leveloffset=+2]

View File

@@ -70,14 +70,14 @@ User-defined tags for Microsoft Azure was previously introduced as Technology Pr
=== OpenShift CLI (oc)
[id="oc-new-build-enhancement"]
==== Enhancement to oc new-build
A new oc command line interface (CLI) flag, `--import-mode`, has been added to the `oc new-build` command. With this enhancement, users can set the `--import-mode` flag to `Legacy` or `PreserveOriginal`, which lets users trigger builds from a single sub-manifest or from all manifests, respectively.
[id="oc-new-app-enhancement"]
==== Enhancement to oc new-app
A new oc command line interface (CLI) flag, `--import-mode`, has been added to the `oc new-app` command. With this enhancement, users can set the `--import-mode` flag to `Legacy` or `PreserveOriginal`, which lets users create new applications from a single sub-manifest or from all manifests, respectively.
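
A hedged sketch of both flags in use (the image reference is a placeholder):

[source,terminal]
----
$ oc new-build registry.example.com/team/app:latest --import-mode=PreserveOriginal  # import all sub-manifests
$ oc new-app registry.example.com/team/app:latest --import-mode=Legacy              # import a single sub-manifest
----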
[id="ocp-4-14-ibm-z"]
=== {ibmzProductName} and {linuxoneProductName}
@@ -98,14 +98,14 @@ Ingress Node Firewall Operator was made a technology preview in {product-title}
[id="ocp-4-14-networking-kernal-network-pinning"]
== Dynamic use of non-reserved CPUs for OVS
With this release, the Open vSwitch (OVS) networking stack can dynamically use non-reserved CPUs.
This dynamic use of non-reserved CPUs occurs by default in performance-tuned clusters with a CPU manager policy set to `static`.
The dynamic use of available, non-reserved CPUs maximizes compute resources for OVS and minimizes network latency for workloads during periods of high demand.
OVS remains unable to dynamically use isolated CPUs assigned to containers in `Guaranteed` QoS pods. This separation avoids disruption to critical application workloads.
[NOTE]
====
When the Node Tuning Operator recognizes the performance conditions to activate the use of non-reserved CPUs, there is a several second delay while OVN-Kubernetes configures the CPU affinity alignment of OVS daemons running on the CPUs. During this window, if a `Guaranteed` QoS pod starts, it can experience a latency spike.
====
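
For reference, a pod lands in the `Guaranteed` QoS class, and therefore keeps its CPUs isolated from OVS, when its requests equal its limits. A minimal sketch with illustrative names:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app    # hypothetical workload
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:latest
    resources:
      requests:
        cpu: "2"        # requests == limits on every resource -> Guaranteed QoS
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
----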
[id="ocp-4-14-storage"]
@@ -1117,7 +1117,7 @@ This section will continue to be updated over time to provide notes on enhanceme
[IMPORTANT]
====
For any {product-title} release, always review the instructions on xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[updating your cluster] properly.
For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly.
====
//Update with relevant advisory information

View File

@@ -44,7 +44,7 @@ ifndef::openshift-dedicated,openshift-rosa[]
[role="_additional-resources"]
.Additional resources
* See the xref:../../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[{product-title} update documentation] for more information about updating or upgrading a cluster.
* See the xref:../../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[{product-title} update documentation] for more information about updating or upgrading a cluster.
endif::[]
include::modules/telemetry-what-information-is-collected.adoc[leveloffset=+2]

View File

@@ -44,7 +44,7 @@ ifndef::openshift-dedicated[]
[role="_additional-resources"]
.Additional resources
* See the xref:../../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[{product-title} update documentation] for more information about updating or upgrading a cluster.
* See the xref:../../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[{product-title} update documentation] for more information about updating or upgrading a cluster.
endif::[]
include::modules/telemetry-what-information-is-collected.adoc[leveloffset=+2]

View File

@@ -40,73 +40,73 @@ include::modules/determining-upgrade-viability-cv-conditiontype.adoc[leveloffset
ifndef::openshift-origin[]
[id="updating-clusters-overview-prepare-eus-to-eus-update"]
== Preparing to perform an EUS-to-EUS update
xref:../updating/preparing-eus-eus-upgrade.adoc#preparing-eus-eus-upgrade[Preparing to perform an EUS-to-EUS update]: Due to fundamental Kubernetes design, all {product-title} updates between minor versions must be serialized. You must update from {product-title} 4.10 to 4.11, and then to 4.12. You cannot update from {product-title} 4.10 to 4.12 directly. However, if you want to update between two Extended Update Support (EUS) versions, you can do so by incurring only a single reboot of non-control plane hosts. For more information, see the following:
== Performing an EUS-to-EUS update
xref:../updating/updating_a_cluster/eus-eus-update.adoc#eus-eus-update[Performing an EUS-to-EUS update]: Due to fundamental Kubernetes design, all {product-title} updates between minor versions must be serialized. You must update from {product-title} 4.10 to 4.11, and then to 4.12. You cannot update from {product-title} 4.10 to 4.12 directly. However, if you want to update between two Extended Update Support (EUS) versions, you can do so by incurring only a single reboot of non-control plane hosts. For more information, see the following:
* xref:../updating/preparing-eus-eus-upgrade.adoc#updating-eus-to-eus-upgrade_eus-to-eus-upgrade[Updating EUS-to-EUS]
* xref:../updating/updating_a_cluster/eus-eus-update.adoc#updating-eus-to-eus-upgrade_eus-to-eus-update[EUS-to-EUS update]
endif::openshift-origin[]
[id="updating-clusters-overview-update-cluster-using-web-console"]
== Updating a cluster using the web console
xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[Updating a cluster using the web console]: You can update an {product-title} cluster by using the web console. The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions.
xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating a cluster using the web console]: You can update an {product-title} cluster by using the web console. The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions.
* xref:../updating/updating-cluster-within-minor.adoc#update-using-custom-machine-config-pools-canary_updating-cluster-within-minor[Performing a canary rollout update]
* xref:../updating/updating-cluster-within-minor.adoc#machine-health-checks-pausing-web-console_updating-cluster-within-minor[Pausing a MachineHealthCheck resource]
* xref:../updating/updating-cluster-within-minor.adoc#update-single-node-openshift_updating-cluster-within-minor[About updating {product-title} on a single-node cluster]
* xref:../updating/updating-cluster-within-minor.adoc#update-upgrading-web_updating-cluster-within-minor[Updating a cluster by using the web console]
* xref:../updating/updating-cluster-within-minor.adoc#update-changing-update-server-web_updating-cluster-within-minor[Changing the update server by using the web console]
* xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#update-using-custom-machine-config-pools-canary_updating-cluster-web-console[Performing a canary rollout update]
* xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#machine-health-checks-pausing-web-console_updating-cluster-web-console[Pausing a MachineHealthCheck resource]
* xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#update-single-node-openshift_updating-cluster-web-console[About updating {product-title} on a single-node cluster]
* xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#update-upgrading-web_updating-cluster-web-console[Updating a cluster by using the web console]
* xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#update-changing-update-server-web_updating-cluster-web-console[Changing the update server by using the web console]
[id="updating-clusters-overview-update-cluster-using-cli"]
== Updating a cluster using the CLI
xref:../updating/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]: You can update an {product-title} cluster within a minor version by using the OpenShift CLI (`oc`). The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions.
xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]: You can update an {product-title} cluster within a minor version by using the OpenShift CLI (`oc`). The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions.
* xref:../updating/updating-cluster-cli.adoc#machine-health-checks-pausing_updating-cluster-cli[Pausing a MachineHealthCheck resource]
* xref:../updating/updating-cluster-cli.adoc#update-single-node-openshift_updating-cluster-cli[About updating {product-title} on a single-node cluster]
* xref:../updating/updating-cluster-cli.adoc#update-upgrading-cli_updating-cluster-cli[Updating a cluster by using the CLI]
* xref:../updating/updating-cluster-cli.adoc#update-changing-update-server-cli_updating-cluster-cli[Changing the update server by using the CLI]
* xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#machine-health-checks-pausing_updating-cluster-cli[Pausing a MachineHealthCheck resource]
* xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#update-single-node-openshift_updating-cluster-cli[About updating {product-title} on a single-node cluster]
* xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#update-upgrading-cli_updating-cluster-cli[Updating a cluster by using the CLI]
* xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#update-changing-update-server-cli_updating-cluster-cli[Changing the update server by using the CLI]
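
As a sketch of the MachineHealthCheck pause step referenced above (the resource name is a placeholder):

[source,terminal]
----
$ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=""  # pause before updating
$ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-    # resume when done
----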
[id="updating-clusters-overview-perform-canary-rollout-update"]
== Performing a canary rollout update
xref:../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update]: By controlling the rollout of an update to the worker nodes, you can ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. Depending on your organizational needs, you might want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. This is referred to as a _canary_ update. Alternatively, you might also want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. You can perform the following procedures:
xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update]: By controlling the rollout of an update to the worker nodes, you can ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. Depending on your organizational needs, you might want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. This is referred to as a _canary_ update. Alternatively, you might also want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. You can perform the following procedures:
* xref:../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-mcp_update-using-custom-machine-config-pools[Creating machine configuration pools to perform a canary rollout update]
* xref:../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-pause_update-using-custom-machine-config-pools[Pausing the machine configuration pools]
* xref:../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-update_update-using-custom-machine-config-pools[Performing the cluster update]
* xref:../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-unpause_update-using-custom-machine-config-pools[Unpausing the machine configuration pools]
* xref:../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-mcp-remove_update-using-custom-machine-config-pools[Moving a node to the original machine configuration pool]
* xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-mcp_update-using-custom-machine-config-pools[Creating machine configuration pools to perform a canary rollout update]
* xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-pause_update-using-custom-machine-config-pools[Pausing the machine configuration pools]
* xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-update_update-using-custom-machine-config-pools[Performing the cluster update]
* xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-unpause_update-using-custom-machine-config-pools[Unpausing the machine configuration pools]
* xref:../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools-mcp-remove_update-using-custom-machine-config-pools[Moving a node to the original machine configuration pool]
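
A minimal sketch of the canary machinery described above: a custom pool that selects a labeled subset of workers, which you can pause and unpause independently (names are illustrative):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: workerpool-canary        # hypothetical canary pool
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values: [worker, workerpool-canary]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/workerpool-canary: ""
----

Pausing the pool is then a patch away: `oc patch mcp/workerpool-canary --type merge --patch '{"spec":{"paused":true}}'`.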
ifndef::openshift-origin[]
[id="updating-clusters-overview-update-cluster-with-rhel-compute-machines"]
== Updating a cluster that includes {op-system-base} compute machines
xref:../updating/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute[Updating a cluster that includes {op-system-base} compute machines]: If your cluster contains {op-system-base-full} machines, you must perform additional steps to update those machines. You can perform the following procedures:
xref:../updating/updating_a_cluster/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute[Updating a cluster that includes {op-system-base} compute machines]: If your cluster contains {op-system-base-full} machines, you must perform additional steps to update those machines. You can perform the following procedures:
* xref:../updating/updating-cluster-rhel-compute.adoc#update-upgrading-web_updating-cluster-rhel-compute[Updating a cluster by using the web console]
* xref:../updating/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute-hooks[Optional: Adding hooks to perform Ansible tasks on {op-system-base} machines]
* xref:../updating/updating-cluster-rhel-compute.adoc#rhel-compute-updating-minor_updating-cluster-rhel-compute[Updating {op-system-base} compute machines in your cluster]
* xref:../updating/updating_a_cluster/updating-cluster-rhel-compute.adoc#update-upgrading-web_updating-cluster-rhel-compute[Updating a cluster by using the web console]
* xref:../updating/updating_a_cluster/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute-hooks[Optional: Adding hooks to perform Ansible tasks on {op-system-base} machines]
* xref:../updating/updating_a_cluster/updating-cluster-rhel-compute.adoc#rhel-compute-updating-minor_updating-cluster-rhel-compute[Updating {op-system-base} compute machines in your cluster]
endif::openshift-origin[]
[id="updating-clusters-overview-update-restricted-network-cluster"]
== Updating a cluster in a disconnected environment
xref:../updating/updating-restricted-network-cluster/index.adoc#about-restricted-network-updates[About cluster updates in a disconnected environment]: If your mirror host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment. You can then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror host of a registry, you can directly push the release images to the local registry.
xref:../updating/updating_a_cluster/updating_disconnected_cluster/index.adoc#about-restricted-network-updates[About cluster updates in a disconnected environment]: If your mirror host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment. You can then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror host of a registry, you can directly push the release images to the local registry.
* xref:../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#updating-restricted-network-mirror-host[Preparing your mirror host]
* xref:../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#installation-adding-registry-pull-secret_mirroring-ocp-image-repository[Configuring credentials that allow images to be mirrored]
* xref:../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring the {product-title} image repository]
* xref:../updating/updating-restricted-network-cluster/restricted-network-update.adoc#update-restricted_updating-restricted-network-cluster[Updating the disconnected cluster]
* xref:../updating/updating-restricted-network-cluster/restricted-network-update.adoc#images-configuration-registry-mirror_updating-restricted-network-cluster[Configuring image registry repository mirroring]
* xref:../updating/updating-restricted-network-cluster/restricted-network-update.adoc#generating-icsp-object-scoped-to-a-registry_updating-restricted-network-cluster[Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots]
* xref:../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#update-service-install[Installing the OpenShift Update Service Operator]
* xref:../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#update-service-create-service[Creating an OpenShift Update Service application]
* xref:../updating/updating-restricted-network-cluster/uninstalling-osus.adoc#update-service-delete-service[Deleting an OpenShift Update Service application]
* xref:../updating/updating-restricted-network-cluster/uninstalling-osus.adoc#update-service-uninstall[Uninstalling the OpenShift Update Service Operator]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#updating-restricted-network-mirror-host[Preparing your mirror host]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#installation-adding-registry-pull-secret_mirroring-ocp-image-repository[Configuring credentials that allow images to be mirrored]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring {product-title} images]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc#update-restricted_updating-restricted-network-cluster[Updating the disconnected cluster]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc#images-configuration-registry-mirror_updating-restricted-network-cluster[Configuring image registry repository mirroring]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc#generating-icsp-object-scoped-to-a-registry_updating-restricted-network-cluster[Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update-osus.adoc#update-service-install[Installing the OpenShift Update Service Operator]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update-osus.adoc#update-service-create-service[Creating an OpenShift Update Service application]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/uninstalling-osus.adoc#update-service-delete-service[Deleting an OpenShift Update Service application]
* xref:../updating/updating_a_cluster/updating_disconnected_cluster/uninstalling-osus.adoc#update-service-uninstall[Uninstalling the OpenShift Update Service Operator]
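
For orientation, the repository mirroring steps above ultimately produce an `ImageContentSourcePolicy` that redirects release pulls to the mirror. A minimal sketch with a placeholder registry:

[source,yaml]
----
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: release-mirror
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.example.com/ocp/release               # placeholder local mirror
    source: quay.io/openshift-release-dev/ocp-release
----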
[id="updating-clusters-overview-vsphere-updating-hardware"]
== Updating hardware on nodes running in vSphere
xref:../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on vSphere]: You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 13 or later is supported for vSphere virtual machines in a cluster. For more information, see the following:
xref:../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-hardware-on-nodes-running-on-vsphere[Updating hardware on vSphere]: You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 13 or later is supported for vSphere virtual machines in a cluster. For more information, see the following:
* xref:../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-virtual-hardware-on-vsphere_updating-hardware-on-nodes-running-in-vsphere[Updating virtual hardware on vSphere]
* xref:../updating/updating-hardware-on-nodes-running-on-vsphere.adoc#scheduling-virtual-hardware-update-on-vsphere_updating-hardware-on-nodes-running-in-vsphere[Scheduling an update for virtual hardware on vSphere]
* xref:../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#updating-virtual-hardware-on-vsphere_updating-hardware-on-nodes-running-in-vsphere[Updating virtual hardware on vSphere]
* xref:../updating/updating_a_cluster/updating-hardware-on-nodes-running-on-vsphere.adoc#scheduling-virtual-hardware-update-on-vsphere_updating-hardware-on-nodes-running-in-vsphere[Scheduling an update for virtual hardware on vSphere]
[IMPORTANT]
====
@@ -116,7 +116,7 @@ Using hardware version 13 for your cluster nodes running on vSphere is now depre
[id="updating-clusters-overview-hosted-control-planes"]
== Updating hosted control planes
xref:../updating/updating-hosted-control-planes.adoc#updating-hosted-control-planes[Updating hosted control planes]: On hosted control planes for {product-title}, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node upgrades. For more information, see the following:
xref:../updating/updating_a_cluster/updating-hosted-control-planes.adoc#updating-hosted-control-planes[Updating hosted control planes]: On hosted control planes for {product-title}, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node upgrades. For more information, see the following:
* xref:../updating/updating-hosted-control-planes.adoc#updates-for-hosted-control-planes_updating-hosted-control-planes[Updates for hosted control planes]
* xref:../updating/updating-hosted-control-planes.adoc#updating-node-pools-for-hcp_updating-hosted-control-planes[Updating node pools for hosted control planes]
* xref:../updating/updating_a_cluster/updating-hosted-control-planes.adoc#updates-for-hosted-control-planes_updating-hosted-control-planes[Updates for hosted control planes]
* xref:../updating/updating_a_cluster/updating-hosted-control-planes.adoc#updating-node-pools-for-hcp_updating-hosted-control-planes[Updating node pools for hosted control planes]

View File

@@ -1,28 +0,0 @@
:_content-type: ASSEMBLY
[id="migrating-clusters-to-multi-payload"]
= Migrating to a cluster with multi-architecture compute machines
include::_attributes/common-attributes.adoc[]
:context: updating-clusters-overview
toc::[]
You can migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines by updating to a multi-architecture, manifest-listed payload. This allows you to add mixed architecture compute nodes to your cluster.
For information about configuring your multi-architecture compute machines, see _Configuring multi-architecture compute machines on an {product-title} cluster_.
[IMPORTANT]
====
Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture upgrade payload.
====
include::modules/migrating-to-multi-arch-cli.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../post_installation_configuration/multi-architecture-configuration.adoc#multi-architecture-creating-arm64-bootimage_multi-architecture-configuration[Configuring multi-architecture compute machines on an {product-title} cluster]
* xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../updating/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]
* xref:../updating/index.adoc#understanding-clusterversion-conditiontypes_updating-clusters-overview[Understanding cluster version condition types]
* xref:../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
* xref:../installing/installing-preparing.adoc#installing-preparing-install-manage[Selecting a cluster installation type]
* xref:../machine_management/deploying-machine-health-checks.adoc#machine-health-checks-about_deploying-machine-health-checks[About machine health checks]

View File

@@ -45,5 +45,5 @@ include::modules/update-duration-rhel-nodes.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../updating/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute[Updating RHEL compute machines]
* xref:../updating/updating_a_cluster/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute[Updating RHEL compute machines]
endif::openshift-origin[]

View File

@@ -49,5 +49,5 @@ include::modules/understanding-upgrade-channels.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../updating/updating-cluster-cli.adoc#update-conditional-upgrade-pathupdating-cluster-cli[Updating along a conditional upgrade path]
* xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#update-conditional-upgrade-path_updating-cluster-cli[Updating along a conditional upgrade path]
* xref:../updating/understanding-upgrade-channels-release.adoc#fast-stable-channel-strategies_{context}[Choosing the correct channel for your cluster]

View File

@@ -37,7 +37,7 @@ include::modules/update-common-terms.adoc[leveloffset=+1]
* xref:../../post_installation_configuration/machine-configuration-tasks.adoc#machine-config-overview-post-install-machine-configuration-tasks[Machine config overview]
ifdef::openshift-enterprise[]
* xref:../../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Using the OpenShift Update Service in a disconnected environment]
* xref:../../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Using the OpenShift Update Service in a disconnected environment]
endif::openshift-enterprise[]
* xref:../../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels_understanding-upgrade-channels-releases[Update channels]

View File

@@ -1 +0,0 @@
../_attributes/

View File

@@ -1 +0,0 @@
../images

View File

@@ -1 +0,0 @@
../modules

View File

@@ -1 +0,0 @@
../snippets

View File

@@ -0,0 +1 @@
../../_attributes/

View File

@@ -1,8 +1,8 @@
:_content-type: ASSEMBLY
[id="preparing-eus-eus-upgrade"]
= Preparing to perform an EUS-to-EUS update
[id="eus-eus-update"]
= Performing an EUS-to-EUS update
include::_attributes/common-attributes.adoc[]
:context: eus-to-eus-upgrade
:context: eus-to-eus-update
toc::[]
@@ -34,9 +34,9 @@ include::modules/updating-eus-to-eus-upgrade-console.adoc[leveloffset=+2]
[id="additional-resources_updating-eus-to-eus-upgrade-console"]
.Additional resources
* xref:../operators/admin/olm-upgrading-operators.adoc#olm-changing-update-channel_olm-upgrading-operators[Preparing for an Operator update]
* xref:../updating/updating-cluster-within-minor.adoc#update-upgrading-web_updating-cluster-within-minor[Updating a cluster by using the web console]
* xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
* xref:../../operators/admin/olm-upgrading-operators.adoc#olm-changing-update-channel_olm-upgrading-operators[Preparing for an Operator update]
* xref:../../updating/updating_a_cluster/updating-cluster-web-console.adoc#update-upgrading-web_updating-cluster-web-console[Updating a cluster by using the web console]
* xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
include::modules/updating-eus-to-eus-upgrade-cli.adoc[leveloffset=+2]
@@ -45,13 +45,13 @@ include::modules/updating-eus-to-eus-upgrade-cli.adoc[leveloffset=+2]
[id="additional-resources_updating-eus-to-eus-upgrade-cli"]
.Additional resources
* xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
* xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
include::modules/updating-eus-to-eus-layered-products.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="additional-resources_updating-eus-to-eus-layered-products"]
.Additional resources
* xref:../operators/admin/olm-upgrading-operators.adoc#olm-preparing-upgrade_olm-upgrading-operators[Preparing for an Operator update]
* xref:../updating/preparing-eus-eus-upgrade.adoc#updating-eus-to-eus-upgrade-console_eus-to-eus-upgrade[EUS-to-EUS update using the web console]
* xref:../updating/preparing-eus-eus-upgrade.adoc#updating-eus-to-eus-upgrade-cli_eus-to-eus-upgrade[EUS-to-EUS update using the CLI]
* xref:../../operators/admin/olm-upgrading-operators.adoc#olm-preparing-upgrade_olm-upgrading-operators[Preparing for an Operator update]
* xref:../../updating/updating_a_cluster/eus-eus-update.adoc#updating-eus-to-eus-upgrade-console_eus-to-eus-update[EUS-to-EUS update using the web console]
* xref:../../updating/updating_a_cluster/eus-eus-update.adoc#updating-eus-to-eus-upgrade-cli_eus-to-eus-update[EUS-to-EUS update using the CLI]
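At a high level, the CLI flow pauses the worker pools so that compute nodes reboot only once across both minor versions; a rough sketch, assuming a hypothetical `eus-4.12` channel and the default `worker` pool:
[source,terminal]
----
$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'
$ oc adm upgrade channel eus-4.12
$ oc adm upgrade --to-latest <1>
$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}'
----
<1> Repeat the update step for the intermediate and the target versions before unpausing the pool.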

View File

@@ -0,0 +1 @@
../../images/

View File

@@ -0,0 +1,28 @@
:_content-type: ASSEMBLY
[id="migrating-clusters-to-multi-payload"]
= Migrating to a cluster with multi-architecture compute machines
include::_attributes/common-attributes.adoc[]
:context: updating-clusters-overview
toc::[]
You can migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines by updating to a multi-architecture, manifest-listed payload. This allows you to add mixed-architecture compute nodes to your cluster.
For information about configuring your multi-architecture compute machines, see _Configuring multi-architecture compute machines on an {product-title} cluster_.
[IMPORTANT]
====
Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture upgrade payload.
====
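A minimal sketch of triggering the migration from the CLI, assuming your cluster satisfies the prerequisites described in the module that follows; the flag asks the Cluster Version Operator to move the cluster to the multi-architecture payload of its current version:
[source,terminal]
----
$ oc adm upgrade --to-multi-arch
$ oc adm upgrade <1>
----
<1> Monitor the migration until it completes.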
include::modules/migrating-to-multi-arch-cli.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../post_installation_configuration/multi-architecture-configuration.adoc#multi-architecture-configuration[Configuring multi-architecture compute machines on an {product-title} cluster]
* xref:../../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating a cluster using the web console]
* xref:../../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]
* xref:../../updating/index.adoc#understanding-clusterversion-conditiontypes_updating-clusters-overview[Understanding cluster version condition types]
* xref:../../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
* xref:../../installing/installing-preparing.adoc#installing-preparing-selecting-cluster-type[Selecting a cluster installation type]
* xref:../../machine_management/deploying-machine-health-checks.adoc#machine-health-checks-about_deploying-machine-health-checks[About machine health checks]

View File

@@ -0,0 +1 @@
../../modules/

View File

@@ -0,0 +1 @@
../../snippets/

View File

@@ -28,7 +28,7 @@ This scenario has not been tested and might result in an undefined cluster state
[IMPORTANT]
====
Pausing a machine config pool prevents the Machine Config Operator from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic CA rotation of the `kube-apiserver-to-kubelet-signer` CA certificate.
If the MCP is paused when the `kube-apiserver-to-kubelet-signer` CA certificate expires and the MCO attempts to automatically renew the certificate, the MCO cannot push the newly rotated certificates to those nodes. This causes failure in multiple `oc` commands, including `oc debug`, `oc logs`, `oc exec`, and `oc attach`. You receive alerts in the Alerting UI of the {product-title} web console if an MCP is paused when the certificates are rotated.
@@ -64,8 +64,8 @@ include::modules/update-using-custom-machine-config-pools-pause.adoc[leveloffset
When the MCPs enter the ready state, you can perform the cluster update. See one of the following update methods, as appropriate for your cluster:
* xref:../updating/updating-cluster-within-minor.adoc#update-upgrading-web_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../updating/updating-cluster-cli.adoc#update-upgrading-cli_updating-cluster-cli[Updating a cluster using the CLI]
* xref:../../updating/updating_a_cluster/updating-cluster-web-console.adoc#update-upgrading-web_updating-cluster-web-console[Updating a cluster using the web console]
* xref:../../updating/updating_a_cluster/updating-cluster-cli.adoc#update-upgrading-cli_updating-cluster-cli[Updating a cluster using the CLI]
After the update is complete, you can start to unpause the MCPs one by one.
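For reference, pausing and resuming a pool is a merge patch on its spec; a minimal sketch, assuming a hypothetical custom pool named `workerpool-canary`:
[source,terminal]
----
$ oc patch mcp/workerpool-canary --type merge --patch '{"spec":{"paused":false}}'
$ oc get machineconfigpools <1>
----
<1> Verify that the pool eventually reports UPDATED as True.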

View File

@@ -16,25 +16,25 @@ You can update, or upgrade, an {product-title} cluster within a minor version by
== Prerequisites
* Have access to the cluster as a user with `admin` privileges.
See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
* Have a recent xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
* Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information.
See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
* Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state.
* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
* Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information.
* Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
* Ensure that you address all `Upgradeable=False` conditions so the cluster allows an update to the next minor version. An alert displays at the top of the *Cluster Settings* page when you have one or more cluster Operators that cannot be upgraded. You can still update to the next available patch update for the minor release you are currently on.
* Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13].
* Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13].
* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain. See the sketch after the following notice.
[IMPORTANT]
====
* When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support.
* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster.
* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster.
====
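For illustration, a budget shaped like the following hypothetical example blocks eviction whenever only one matching pod remains, which can stall a node drain during the update:
[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example
----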
[role="_additional-resources"]
.Additional resources
* xref:../architecture/architecture-installation.adoc#unmanaged-operators_architecture-installation[Support policy for unmanaged Operators]
* xref:../../architecture/architecture-installation.adoc#unmanaged-operators_architecture-installation[Support policy for unmanaged Operators]
include::modules/machine-health-checks-pausing.adoc[leveloffset=+1]
@@ -43,15 +43,15 @@ include::modules/updating-sno.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For information on which machine configuration changes require a reboot, see the note in xref:../architecture/control-plane.html#understanding-machine-config-operator_control-plane[Understanding the Machine Config Operator].
* For information on which machine configuration changes require a reboot, see the note in xref:../../architecture/control-plane.adoc#about-machine-config-operator_control-plane[About the Machine Config Operator].
include::modules/update-upgrading-cli.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../updating/preparing-eus-eus-upgrade.adoc#preparing-eus-eus-upgrade[Preparing to perform an EUS-to-EUS update]
* xref:../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
* xref:../../updating/updating_a_cluster/eus-eus-update.adoc#eus-eus-update[Performing an EUS-to-EUS update]
* xref:../../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
include::modules/update-conditional-updates.adoc[leveloffset=+1]
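If your target release is only reachable along a conditional path, you can review the known risks and deliberately accept them from the CLI; a minimal sketch, assuming a hypothetical target version:
[source,terminal]
----
$ oc adm upgrade --include-not-recommended <1>
$ oc adm upgrade --allow-not-recommended --to=4.13.4
----
<1> Lists conditional updates together with their risk statements.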
@@ -62,7 +62,7 @@ ifndef::openshift-origin[]
[role="_additional-resources"]
.Additional resources
* xref:../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
* xref:../../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
endif::openshift-origin[]

View File

@@ -13,16 +13,17 @@ those machines.
== Prerequisites
* Have access to the cluster as a user with `admin` privileges.
See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
* Have a recent xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
* Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state.
* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
[role="_additional-resources"]
.Additional resources
* xref:../architecture/architecture-installation.adoc#unmanaged-operators_architecture-installation[Support policy for unmanaged Operators]
* xref:../../architecture/architecture-installation.adoc#unmanaged-operators_architecture-installation[Support policy for unmanaged Operators]
include::modules/update-upgrading-web.adoc[leveloffset=+1]

View File

@@ -1,8 +1,8 @@
:_content-type: ASSEMBLY
[id="updating-cluster-within-minor"]
[id="updating-cluster-web-console"]
= Updating a cluster using the web console
include::_attributes/common-attributes.adoc[]
:context: updating-cluster-within-minor
:context: updating-cluster-web-console
toc::[]
@@ -10,37 +10,37 @@ You can update, or upgrade, an {product-title} cluster by using the web console.
[NOTE]
====
Use the web console or `oc adm upgrade channel _<channel>_` to change the update channel. You can follow the steps in xref:../updating/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI] to complete the update after you change to a {product-version} channel.
Use the web console or `oc adm upgrade channel _<channel>_` to change the update channel. You can follow the steps in xref:../../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI] to complete the update after you change to a {product-version} channel.
====
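For example, switching to a {product-version} channel from the CLI is a single command; a minimal sketch, assuming the hypothetical `stable-4.13` channel:
[source,terminal]
----
$ oc adm upgrade channel stable-4.13
----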
== Prerequisites
* Have access to the cluster as a user with `admin` privileges.
See xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions].
* Have a recent xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 updates for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
* Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information.
See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
* Have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must restore your cluster to a previous state. See the sketch after the following notice.
* Support for {op-system-base}7 workers is removed in {product-title} {product-version}. You must replace {op-system-base}7 workers with {op-system-base}8 or {op-system} workers before upgrading to {product-title} {product-version}. Red Hat does not support in-place {op-system-base}7 to {op-system-base}8 upgrades for {op-system-base} workers; those hosts must be replaced with a clean operating system install.
* Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information.
* Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
* To accommodate the time it takes to update, you can perform a partial update by updating the worker or custom pool nodes. You can pause and resume within the progress bar of each pool.
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
* Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13].
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
* Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see xref:../../updating/updating-cluster-prepare.adoc#updating-cluster-prepare[Preparing to update to {product-title} 4.13].
* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
[IMPORTANT]
====
* When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support.
* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster.
* Using the `unsupportedConfigOverrides` section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster.
====
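For reference, a backup is typically taken by running the backup script on a control plane node; a minimal sketch, with the node name left as a placeholder:
[source,terminal]
----
$ oc debug node/<control_plane_node>
sh-4.4# chroot /host
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----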
[role="_additional-resources"]
.Additional resources
* xref:../architecture/architecture-installation.adoc#unmanaged-operators_architecture-installation[Support policy for unmanaged Operators]
* xref:../../architecture/architecture-installation.adoc#unmanaged-operators_architecture-installation[Support policy for unmanaged Operators]
include::modules/update-using-custom-machine-config-pools-canary.adoc[leveloffset=+1]
If you want to use the canary rollout update process, see xref:../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update].
If you want to use the canary rollout update process, see xref:../../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update].
include::modules/machine-health-checks-pausing-web-console.adoc[leveloffset=+1]
@@ -49,7 +49,7 @@ include::modules/updating-sno.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For information on which machine configuration changes require a reboot, see the note in xref:../architecture/control-plane.html#understanding-machine-config-operator_control-plane[Understanding the Machine Config Operator].
* For information on which machine configuration changes require a reboot, see the note in xref:../../architecture/control-plane.adoc#about-machine-config-operator_control-plane[About the Machine Config Operator].
include::modules/update-upgrading-web.adoc[leveloffset=+1]
@@ -58,4 +58,4 @@ include::modules/update-changing-update-server-web.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
* xref:../../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]

View File

@@ -32,6 +32,6 @@ include::modules/update-vsphere-virtual-hardware-on-template.adoc[leveloffset=+2
[role="_additional-resources"]
.Additional resources
* xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-evacuating_nodes-nodes-working[Understanding how to evacuate pods on nodes]
* xref:../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-evacuating_nodes-nodes-working[Understanding how to evacuate pods on nodes]
include::modules/scheduling-virtual-hardware-update-on-vsphere.adoc[leveloffset=+1]

View File

@@ -0,0 +1 @@
../../../_attributes/

View File

@@ -26,19 +26,19 @@ include::modules/disconnected-osus-overview.adoc[leveloffset=+1]
.Additional resources
* xref:../../updating/understanding_updates/intro-to-updates.adoc#update-service-about_understanding-openshift-updates[About the OpenShift Update Service]
* xref:../../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
* xref:../../../updating/understanding_updates/intro-to-updates.adoc#update-service-about_understanding-openshift-updates[About the OpenShift Update Service]
* xref:../../../updating/understanding-upgrade-channels-release.adoc#understanding-upgrade-channels-releases[Understanding update channels and releases]
[id="update-service-prereqs"]
== Prerequisites
* You must have the `oc` command-line interface (CLI) tool installed.
* You must provision a local container image registry with the container images for your update, as described in xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring the {product-title} image repository].
* You must provision a local container image registry with the container images for your update, as described in xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring {product-title} images].
[id="registry-configuration-for-update-service"]
== Configuring access to a secured registry for the OpenShift Update Service
If the release images are contained in a registry whose HTTPS X.509 certificate is signed by a custom certificate authority, complete the steps in xref:../../registry/configuring-registry-operator.adoc#images-configuration-cas_configuring-registry-operator[Configuring additional trust stores for image registry access] along with following changes for the update service.
If the release images are contained in a registry whose HTTPS X.509 certificate is signed by a custom certificate authority, complete the steps in xref:../../../registry/configuring-registry-operator.adoc#images-configuration-cas_configuring-registry-operator[Configuring additional trust stores for image registry access] along with the following changes for the update service.
The OpenShift Update Service Operator needs the config map key name `updateservice-registry` in the registry CA cert.
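A minimal sketch of such a config map; the map name is an assumption, and the certificate body is elided:
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-registry-ca
data:
  updateservice-registry: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
----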
@@ -71,7 +71,7 @@ To install the OpenShift Update Service, you must first install the OpenShift Up
[NOTE]
====
For clusters that are installed in disconnected environments, also known as disconnected clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. For more information, see xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks].
For clusters that are installed in disconnected environments, also known as disconnected clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. For more information, see xref:../../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks].
====
include::modules/update-service-install-web-console.adoc[leveloffset=+2]
@@ -81,7 +81,7 @@ include::modules/update-service-install-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/user/olm-installing-operators-in-namespace.adoc#olm-installing-operators-in-namespace[Installing Operators in your namespace].
* xref:../../../operators/user/olm-installing-operators-in-namespace.adoc#olm-installing-operators-in-namespace[Installing Operators in your namespace].
include::modules/update-service-graph-data.adoc[leveloffset=+1]
@@ -104,7 +104,7 @@ include::modules/update-service-configure-cvo.adoc[leveloffset=+3]
[NOTE]
====
See xref:../../networking/enable-cluster-wide-proxy.adoc#nw-proxy-configure-object[Enabling the cluster-wide proxy] to configure the CA to trust the update server.
See xref:../../../networking/enable-cluster-wide-proxy.adoc#enable-cluster-wide-proxy[Configuring the cluster-wide proxy] to configure the CA to trust the update server.
====
[id="next-steps_updating-restricted-network-cluster-osus"]
@@ -124,8 +124,8 @@ The release image signature config map allows the Cluster Version Operator (CVO)
After you configure your cluster to use the locally installed OpenShift Update Service and local mirror registry, you can use any of the following update methods:
** xref:../../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[Updating a cluster using the web console]
** xref:../../updating/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]
** xref:../../updating/preparing-eus-eus-upgrade.adoc#preparing-eus-eus-upgrade[Preparing to perform an EUS-to-EUS update]
** xref:../../updating/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update]
** xref:../../updating/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute[Updating a cluster that includes RHEL compute machines]
** xref:../../../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating a cluster using the web console]
** xref:../../../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]
** xref:../../../updating/updating_a_cluster/eus-eus-update.adoc#eus-eus-update[Performing an EUS-to-EUS update]
** xref:../../../updating/updating_a_cluster/update-using-custom-machine-config-pools.adoc#update-using-custom-machine-config-pools[Performing a canary rollout update]
** xref:../../../updating/updating_a_cluster/updating-cluster-rhel-compute.adoc#updating-cluster-rhel-compute[Updating a cluster that includes RHEL compute machines]

View File

@@ -17,12 +17,12 @@ Use the following procedures to update a cluster in a disconnected environment w
== Prerequisites
* You must have the `oc` command-line interface (CLI) tool installed.
* You must provision a local container image registry with the container images for your update, as described in xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring the {product-title} image repository].
* You must provision a local container image registry with the container images for your update, as described in xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring {product-title} images].
* You must have access to the cluster as a user with `admin` privileges.
See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
* You must have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
See xref:../../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
* You must have a recent xref:../../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
* You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../../updating/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
* If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If `minAvailable` is set to 1 in `PodDisruptionBudget`, the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the `PodDisruptionBudget` field can prevent the node drain.
[NOTE]
@@ -46,6 +46,6 @@ include::modules/generating-icsp-object-scoped-to-a-registry.adoc[leveloffset=+1
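The generated object has this general shape; a hypothetical sketch for a mirror registry at `mirror.example.com:5000`:
[source,yaml]
----
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example-icsp
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.example.com:5000/ocp4/openshift4
    source: quay.io/openshift-release-dev/ocp-release
----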
[role="_additional-resources"]
== Additional resources
* xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks]
* xref:../../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks]
* xref:../../post_installation_configuration/machine-configuration-tasks.adoc#machine-config-overview-post-install-machine-configuration-tasks[Machine Config Overview]
* xref:../../../post_installation_configuration/machine-configuration-tasks.adoc#machine-config-overview-post-install-machine-configuration-tasks[Machine Config Overview]

View File

@@ -0,0 +1 @@
../../../images/

View File

@@ -14,23 +14,23 @@ If the local container registry and the cluster are connected to the mirror regi
A single container image registry is sufficient to host mirrored images for several clusters in the disconnected network.
[id="about-disconnected-updates-mirroring"]
== Mirroring the {product-title} image repository
== Mirroring {product-title} images
To update your cluster in a disconnected environment, your cluster environment must have access to a mirror registry that has the necessary images and resources for your targeted update. The following page has instructions for mirroring images onto a repository in your disconnected cluster:
* xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring the {product-title} image repository]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring {product-title} images]
[id="about-disconnected-updates-update"]
== Performing a cluster update in a disconnected environment
You can use one of the following procedures to update a disconnected {product-title} cluster:
* xref:../../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#updating-restricted-network-cluster-OSUS[Updating a cluster in a disconnected environment using the OpenShift Update Service]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update-osus.adoc#updating-restricted-network-cluster-OSUS[Updating a cluster in a disconnected environment using the OpenShift Update Service]
* xref:../../updating/updating-restricted-network-cluster/restricted-network-update.adoc#updating-restricted-network-cluster[Updating a cluster in a disconnected environment without the OpenShift Update Service]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update.adoc#updating-restricted-network-cluster[Updating a cluster in a disconnected environment without the OpenShift Update Service]
[id="about-disconnected-updates-uninstalling-osus"]
== Uninstalling the OpenShift Update Service from a cluster
You can use the following procedure to uninstall a local copy of the OpenShift Update Service (OSUS) from your cluster:
* xref:../../updating/updating-restricted-network-cluster/uninstalling-osus.adoc#uninstalling-osus[Uninstalling the OpenShift Update Service from a cluster]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/uninstalling-osus.adoc#uninstalling-osus[Uninstalling the OpenShift Update Service from a cluster]

View File

@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="mirroring-ocp-image-repository"]
= Mirroring the {product-title} image repository
= Mirroring {product-title} images
include::_attributes/common-attributes.adoc[]
:context: mirroring-ocp-image-repository
@@ -19,7 +19,7 @@ The following steps outline the high-level workflow on how to mirror images to a
. Download the registry pull secret and add it to your cluster.
. If you use the xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-ocp-resources-ocmirror[oc-mirror OpenShift CLI (`oc`) plugin]:
. If you use the xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-ocp-resources-ocmirror[oc-mirror OpenShift CLI (`oc`) plugin]:
.. Install the oc-mirror plugin on all devices being used to retrieve and push release images.
@@ -31,7 +31,7 @@ The following steps outline the high-level workflow on how to mirror images to a
.. Repeat these steps as needed to update your mirror registry.
. If you use the xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#update-mirror-repository-adm-release-mirror_mirroring-ocp-image-repository[`oc adm release mirror` command]:
. If you use the xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#update-mirror-repository-adm-release-mirror_mirroring-ocp-image-repository[`oc adm release mirror` command]:
.. Set environment variables that correspond to your environment and the release images you want to mirror.
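For orientation, that step typically looks like the following sketch; the registry host, repository, and version values are assumptions:
[source,terminal]
----
$ export OCP_RELEASE=4.13.1
$ export ARCHITECTURE=x86_64
$ export LOCAL_REGISTRY='mirror.example.com:5000'
$ export LOCAL_REPOSITORY='ocp4/openshift4'
$ export LOCAL_SECRET_JSON='/path/to/pull-secret.json'
$ oc adm release mirror -a ${LOCAL_SECRET_JSON} \
  --from=quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE}-${ARCHITECTURE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
----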
@@ -57,7 +57,7 @@ Compared to using the `oc adm release mirror` command, the oc-mirror plugin has
If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay link:https://access.redhat.com/documentation/en-us/red_hat_quay/3.6/html/deploy_red_hat_quay_for_proof-of-concept_non-production_purposes/[for proof-of-concept purposes] or link:https://access.redhat.com/documentation/en-us/red_hat_quay/3.6/html/deploy_red_hat_quay_on_openshift_with_the_quay_operator/[by using the Quay Operator]. If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support.
====
+
If you do not have an existing solution for a container image registry, the xref:../../installing/disconnected_install/installing-mirroring-creating-registry.adoc#installing-mirroring-creating-registry[mirror registry for Red Hat OpenShift] is included in {product-title} subscriptions. The _mirror registry for Red Hat OpenShift_ is a small-scale container registry that you can use to mirror {product-title} container images in disconnected installations and updates.
If you do not have an existing solution for a container image registry, the xref:../../../installing/disconnected_install/installing-mirroring-creating-registry.adoc#installing-mirroring-creating-registry[mirror registry for Red Hat OpenShift] is included in {product-title} subscriptions. The _mirror registry for Red Hat OpenShift_ is a small-scale container registry that you can use to mirror {product-title} container images in disconnected installations and updates.
[id="updating-restricted-network-mirror-host"]
== Preparing your mirror host
@@ -69,7 +69,7 @@ include::modules/cli-installing-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../cli_reference/openshift_cli/extending-cli-plugins.adoc#cli-installing-plugins_cli-extend-plugins[Installing and using CLI plugins]
* xref:../../../cli_reference/openshift_cli/extending-cli-plugins.adoc#cli-installing-plugins_cli-extend-plugins[Installing and using CLI plugins]
include::modules/installation-adding-registry-pull-secret.adoc[leveloffset=+2]
@@ -90,7 +90,7 @@ include::modules/installation-about-mirror-registry.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* For information about viewing the CRI-O logs to view the image source, see xref:../../installing/validating-an-installation.adoc#viewing-the-image-pull-source_validating-an-installation[Viewing the image pull source].
* For information about viewing the CRI-O logs to view the image source, see xref:../../../installing/validating-an-installation.adoc#viewing-the-image-pull-source_validating-an-installation[Viewing the image pull source].
// Installing the oc-mirror OpenShift CLI plugin
include::modules/oc-mirror-installing-plugin.adoc[leveloffset=+2]
@@ -101,14 +101,14 @@ include::modules/oc-mirror-creating-image-set-config.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#oc-mirror-imageset-config-params_mirroring-ocp-image-repository[Image set configuration parameters]
* xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#oc-mirror-image-set-examples_mirroring-ocp-image-repository[Image set configuration examples]
* xref:../../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[About the OpenShift Update Service]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#oc-mirror-imageset-config-params_mirroring-ocp-image-repository[Image set configuration parameters]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#oc-mirror-image-set-examples_mirroring-ocp-image-repository[Image set configuration examples]
* xref:../../../updating/understanding_updates/intro-to-updates.adoc#update-service-about_understanding-openshift-updates[About the OpenShift Update Service]
[id="mirroring-image-set"]
=== Mirroring an image set to a mirror registry
You can use the oc-mirror CLI plugin to mirror images to a mirror registry in a xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-image-set-partial[partially disconnected environment] or in a xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-image-set-full[fully disconnected environment].
You can use the oc-mirror CLI plugin to mirror images to a mirror registry in a xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-image-set-partial[partially disconnected environment] or in a xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-image-set-full[fully disconnected environment].
The following procedures assume that you already have your mirror registry set up.
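As a rough sketch of the partially disconnected flow, assuming a hypothetical `imageset-config.yaml` that pins a release channel:
[source,yaml]
----
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: mirror.example.com:5000/mirror/oc-mirror-metadata
mirror:
  platform:
    channels:
    - name: stable-4.13
      type: ocp
----
You would then run the plugin against the target registry, for example:
[source,terminal]
----
$ oc mirror --config=imageset-config.yaml docker://mirror.example.com:5000
----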
@@ -123,7 +123,7 @@ include::modules/oc-mirror-mirror-to-mirror.adoc[leveloffset=+4]
[id="mirroring-image-set-full"]
==== Mirroring an image set in a fully disconnected environment
To mirror an image set in a fully disconnected environment, you must first xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#oc-mirror-mirror-to-disk_mirroring-ocp-image-repository[mirror the image set to disk], then xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#oc-mirror-disk-to-mirror_mirroring-ocp-image-repository[mirror the image set file on disk to a mirror].
To mirror an image set in a fully disconnected environment, you must first xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#oc-mirror-mirror-to-disk_mirroring-ocp-image-repository[mirror the image set to disk], then xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#oc-mirror-disk-to-mirror_mirroring-ocp-image-repository[mirror the image set file on disk to a mirror].
// Mirroring from mirror to disk
include::modules/oc-mirror-mirror-to-disk.adoc[leveloffset=+4]
@@ -150,10 +150,10 @@ include::modules/oc-mirror-differential-updates.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#oc-mirror-image-set-examples_mirroring-ocp-image-repository[Image set configuration examples]
* xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-image-set-partial[Mirroring an image set in a partially disconnected environment]
* xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#mirroring-image-set-full[Mirroring an image set in a fully disconnected environment]
* xref:../../updating/updating-restricted-network-cluster/mirroring-image-repository.adoc#oc-mirror-updating-cluster-manifests_mirroring-ocp-image-repository[Configuring your cluster to use the resources generated by oc-mirror]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#oc-mirror-image-set-examples_mirroring-ocp-image-repository[Image set configuration examples]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-image-set-partial[Mirroring an image set in a partially disconnected environment]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#mirroring-image-set-full[Mirroring an image set in a fully disconnected environment]
* xref:../../../updating/updating_a_cluster/updating_disconnected_cluster/mirroring-image-repository.adoc#oc-mirror-updating-cluster-manifests_mirroring-ocp-image-repository[Configuring your cluster to use the resources generated by oc-mirror]
// Performing a dry run
include::modules/oc-mirror-dry-run.adoc[leveloffset=+2]
@@ -164,7 +164,7 @@ include::modules/oc-mirror-oci-format.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[File-based catalogs]
* xref:../../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[File-based catalogs]
// Image set configuration parameters
include::modules/oc-mirror-imageset-config-params.adoc[leveloffset=+2]

View File

@@ -0,0 +1 @@
../../../modules/

View File

@@ -0,0 +1 @@
../../../snippets/

View File

@@ -21,7 +21,7 @@ include::modules/virt-about-workload-updates.adoc[leveloffset=+2]
include::modules/virt-about-eus-updates.adoc[leveloffset=+2]
Learn more about xref:../updating/preparing-eus-eus-upgrade.adoc#preparing-eus-eus-upgrade[preparing to perform an EUS-to-EUS update].
Learn more about xref:../updating/updating_a_cluster/eus-eus-update.adoc#eus-eus-update[performing an EUS-to-EUS update].
include::modules/virt-preventing-workload-updates-during-eus-update.adoc[leveloffset=+1]
@@ -48,7 +48,7 @@ Configure workload updates to ensure that VMIs update automatically.
[role="_additional-resources"]
== Additional resources
* xref:../updating/preparing-eus-eus-upgrade.adoc#preparing-eus-eus-upgrade[Preparing to perform an EUS-to-EUS update]
* xref:../updating/updating_a_cluster/eus-eus-update.adoc#eus-eus-update[Performing an EUS-to-EUS update]
* xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[What are Operators?]

View File

@@ -315,14 +315,9 @@ endif::[]
- **xref:../scalability_and_performance/recommended-performance-scale-practices/recommended-infrastructure-practices.adoc#scaling-cluster-monitoring-operator[Scale] and xref:../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[tune] clusters**: Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment.
- **Update a cluster**:
Use the Cluster Version Operator (CVO) to upgrade your {product-title} cluster. If an update is available from the OpenShift Update Service (OSUS), you apply that cluster update from either the {product-title} xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[web console] or the xref:../updating/updating-cluster-cli.adoc#updating-cluster-cli[OpenShift CLI] (`oc`).
Use the Cluster Version Operator (CVO) to upgrade your {product-title} cluster. If an update is available from the OpenShift Update Service (OSUS), you apply that cluster update from either the {product-title} xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[web console] or the xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[OpenShift CLI] (`oc`).
////
There is a separate process for
xref:../updating/updating-disconnected-cluster.adoc#updating-disconnected-cluster[updating a cluster on a restricted network].
////
- **xref:../updating/updating-restricted-network-cluster/restricted-network-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Understanding the OpenShift Update Service]**: Learn about installing and managing a local OpenShift Update Service for recommending {product-title} updates in disconnected environments.
- **xref:../updating/updating_a_cluster/updating_disconnected_cluster/disconnected-update-osus.adoc#update-service-overview_updating-restricted-network-cluster-osus[Using the OpenShift Update Service in a disconnected environment]**: Learn about installing and managing a local OpenShift Update Service for recommending {product-title} updates in disconnected environments.
- **xref:../nodes/clusters/nodes-cluster-worker-latency-profiles.adoc#nodes-cluster-worker-latency-profiles[Improving cluster stability in high latency environments by using worker latency profiles]**: If your network has latency issues, you can use one of three _worker latency profiles_ to help ensure that your control plane does not accidentally evict pods in case it cannot reach a worker node. You can configure or modify the profile at any time during the life of the cluster.

View File

@@ -66,7 +66,7 @@ Use the following sections to find content to help you learn about and use {prod
|
|
| xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[Updating a cluster]
| xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating a cluster]
|
|

View File

@@ -60,7 +60,7 @@ Easy, over-the-air upgrades for asynchronous z-stream releases of
OpenShift v4 are available. Cluster administrators can upgrade using the
*Cluster Settings* tab in the web console.
See
xref:../updating/updating-cluster-within-minor.adoc#updating-cluster-within-minor[Updating a cluster]
xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating a cluster]
for more information.
[id="ocp-operator-hub"]