diff --git a/ai_workloads/kueue/monitoring-pending-workloads.adoc b/ai_workloads/kueue/monitoring-pending-workloads.adoc index aff196c17a..096ede7b66 100644 --- a/ai_workloads/kueue/monitoring-pending-workloads.adoc +++ b/ai_workloads/kueue/monitoring-pending-workloads.adoc @@ -1,7 +1,7 @@ :_mod-docs-content-type: ASSEMBLY include::_attributes/common-attributes.adoc[] [id="monitoring-pending-workloads-install-kueue"] -= Monitoring pending workloads += Monitoring pending workloads :context: monitoring-pending-workloads toc::[] @@ -25,7 +25,7 @@ include::modules/kueue-providing-user-permissions.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/ai_workloads/red-hat-build-of-kueue#rbac-permissions[Configuring role-based permissions] +* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/ai_workloads/red-hat-build-of-kueue#rbac-permissions[Configuring role-based permissions] include::modules/kueue-monitoring-pending-workloads-on-demand.adoc[leveloffset=+1] @@ -35,4 +35,3 @@ include::modules/kueue-viewing-pending-workloads-clusterqueue.adoc[leveloffset=+ include::modules/kueue-viewing-pending-workloads-localqueue.adoc[leveloffset=+2] include::modules/kueue-modifying-monitoring-settings.adoc[leveloffset=+1] - diff --git a/hardware_accelerators/das-about-dynamic-accelerator-slicer-operator.adoc b/hardware_accelerators/das-about-dynamic-accelerator-slicer-operator.adoc index d73217c9c6..e9cf7ecdc6 100644 --- a/hardware_accelerators/das-about-dynamic-accelerator-slicer-operator.adoc +++ b/hardware_accelerators/das-about-dynamic-accelerator-slicer-operator.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="das-about-dynamic-accelerator-slicer-operator"] -= Dynamic Accelerator Slicer (DAS) Operator += Dynamic Accelerator Slicer (DAS) Operator include::_attributes/common-attributes.adoc[] :context: 
das-about-dynamic-accelerator-slicer-operator @@ -10,7 +10,7 @@ toc::[] include::snippets/technology-preview.adoc[] -The Dynamic Accelerator Slicer (DAS) Operator allows you to dynamically slice GPU accelerators in {product-title}, instead of relying on statically sliced GPUs defined when the node is booted. This allows you to dynamically slice GPUs based on specific workload demands, ensuring efficient resource utilization. +The Dynamic Accelerator Slicer (DAS) Operator allows you to dynamically slice GPU accelerators in {product-title}, instead of relying on statically sliced GPUs defined when the node is booted. This allows you to dynamically slice GPUs based on specific workload demands, ensuring efficient resource utilization. Dynamic slicing is useful if you do not know in advance all the accelerator partitions needed on every node in the cluster. @@ -45,7 +45,7 @@ include::modules/das-operator-installing-web-console.adoc[leveloffset=+2] ** xref:../hardware_enablement/psap-node-feature-discovery-operator.adoc#psap-node-feature-discovery-operator[Node Feature Discovery (NFD) Operator] ** link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html[NVIDIA GPU Operator] -** link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-web-console_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR] +** link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-web-console_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR] //Installing the Dynamic Accelerator Slicer Operator using the CLI include::modules/das-operator-installing-cli.adoc[leveloffset=+2] @@ -55,7 +55,7 @@ include::modules/das-operator-installing-cli.adoc[leveloffset=+2] *
xref:../security/cert_manager_operator/cert-manager-operator-install.adoc#cert-manager-operator-install[{cert-manager-operator}] * xref:../hardware_enablement/psap-node-feature-discovery-operator.adoc#psap-node-feature-discovery-operator[Node Feature Discovery (NFD) Operator] * link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html[NVIDIA GPU Operator] -* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-cli_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR] +* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-cli_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR] //Uninstalling the Dynamic Accelerator Slicer Operator include::modules/das-operator-uninstalling.adoc[leveloffset=+1] @@ -77,7 +77,3 @@ include::modules/das-operator-troubleshooting.adoc[leveloffset=+1] * link:https://github.com/kubernetes/kubernetes/issues/128043[Kubernetes issue #128043] * xref:../hardware_enablement/psap-node-feature-discovery-operator.adoc#psap-node-feature-discovery-operator[Node Feature Discovery Operator] * link:https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/troubleshooting.html[NVIDIA GPU Operator troubleshooting] - - - - diff --git a/modules/agent-installer-architectures.adoc b/modules/agent-installer-architectures.adoc index cfb3bf0e89..35b90751fb 100644 --- a/modules/agent-installer-architectures.adoc +++ b/modules/agent-installer-architectures.adoc @@ -27,7 +27,7 @@ $ ./openshift-install version .Example output [source,terminal] ---- -./openshift-install 4.20.0 +./openshift-install 4.21.0 built from commit abc123def456 release image quay.io/openshift-release-dev/ocp-release@sha256:123abc456def789ghi012jkl345mno678pqr901stu234vwx567yz0 release 
architecture amd64 diff --git a/modules/cnf-creating-nrop-cr-hosted-control-plane.adoc b/modules/cnf-creating-nrop-cr-hosted-control-plane.adoc index 2241289c99..baa8b3a204 100644 --- a/modules/cnf-creating-nrop-cr-hosted-control-plane.adoc +++ b/modules/cnf-creating-nrop-cr-hosted-control-plane.adoc @@ -39,7 +39,7 @@ $ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A [source,terminal] ---- NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE -clusters democluster-us-east-1a democluster 1 1 False False 4.20.0 False False +clusters democluster-us-east-1a democluster 1 1 False False 4.21.0 False False ---- + The `node-pool-name` is the `NAME` field in the output. In this example, the `node-pool-name` is `democluster-us-east-1a`. diff --git a/modules/creating-config-files-cluster-install-oci.adoc b/modules/creating-config-files-cluster-install-oci.adoc index f54b461053..22dab6f287 100644 --- a/modules/creating-config-files-cluster-install-oci.adoc +++ b/modules/creating-config-files-cluster-install-oci.adoc @@ -60,9 +60,9 @@ $ ./openshift-install version .Example output for a shared registry binary [source,terminal,subs="quotes"] ---- -./openshift-install 4.20.0 +./openshift-install 4.21.0 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca -release image registry.ci.openshift.org/origin/release:4.20ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 +release image registry.ci.openshift.org/origin/release:4.21ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 release architecture amd64 ---- ==== diff --git a/modules/external-auth-configuring.adoc b/modules/external-auth-configuring.adoc index 341b9e4b5a..1b8a637f7b 100644 --- a/modules/external-auth-configuring.adoc +++ b/modules/external-auth-configuring.adoc @@ -131,7 +131,7 @@ $ oc get co kube-apiserver [source,terminal] ---- NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE 
MESSAGE -kube-apiserver 4.20.0 True True False 85m NodeInstallerProgressing: 2 node are at revision 8; 1 node is at revision 10 +kube-apiserver 4.21.0 True True False 85m NodeInstallerProgressing: 2 nodes are at revision 8; 1 node is at revision 10 ---- + The message in the preceding example shows that one node has progressed to the new revision and two nodes have not yet updated. It can take 20 minutes or more to roll out the new revision to all nodes, depending on the size of your cluster. diff --git a/modules/external-auth-disabling.adoc b/modules/external-auth-disabling.adoc index f37d9a6ba9..4d2481b217 100644 --- a/modules/external-auth-disabling.adoc +++ b/modules/external-auth-disabling.adoc @@ -42,7 +42,7 @@ $ oc get co kube-apiserver [source,terminal] ---- NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -kube-apiserver 4.20.0 True True False 85m NodeInstallerProgressing: 2 node are at revision 12; 1 node is at revision 14 +kube-apiserver 4.21.0 True True False 85m NodeInstallerProgressing: 2 nodes are at revision 12; 1 node is at revision 14 ---- + The message in the preceding example shows that one node has progressed to the new revision and two nodes have not yet updated. It can take 20 minutes or more to roll out the new revision to all nodes, depending on the size of your cluster. diff --git a/modules/hcp-aws-hc-ext-dns.adoc b/modules/hcp-aws-hc-ext-dns.adoc index 4762ecbafc..8e490888b9 100644 --- a/modules/hcp-aws-hc-ext-dns.adoc +++ b/modules/hcp-aws-hc-ext-dns.adoc @@ -64,7 +64,7 @@ $ hcp create cluster aws \ <5> Specify the public hosted zone that the service consumer owns, for example, `service-consumer-domain.com`. <6> Specify the node replica count, for example, `2`. <7> Specify the path to your pull secret file. -<8> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. +<8> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`.
<9> Specify the public hosted zone that the service provider owns, for example, `service-provider-domain.com`. <10> Set as `PublicAndPrivate`. You can use external DNS with `Public` or `PublicAndPrivate` configurations only. <11> Specify the path to your {aws-short} STS credentials file, for example, `/home/user/sts-creds/sts-creds.json`. \ No newline at end of file diff --git a/modules/hcp-bm-hc-mirror.adoc b/modules/hcp-bm-hc-mirror.adoc index c5f6bb2be0..cf21f904aa 100644 --- a/modules/hcp-bm-hc-mirror.adoc +++ b/modules/hcp-bm-hc-mirror.adoc @@ -52,4 +52,4 @@ $ hcp create cluster agent \ <6> Specify the `icsp.yaml` file that defines ICSP and your mirror registries. <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`. <8> Specify your hosted cluster namespace. -<9> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest". \ No newline at end of file +<9> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`. If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest". \ No newline at end of file diff --git a/modules/hcp-bm-hc.adoc b/modules/hcp-bm-hc.adoc index 145c8eaab1..f0328bd314 100644 --- a/modules/hcp-bm-hc.adoc +++ b/modules/hcp-bm-hc.adoc @@ -7,7 +7,7 @@ [id="hcp-bm-hc_{context}"] = Creating a hosted cluster by using the CLI -On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. 
The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes. +On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes. .Prerequisites @@ -17,7 +17,7 @@ On bare-metal infrastructure, you can create or import a hosted cluster. After y - You cannot create a hosted cluster in the namespace of a {mce-short} managed cluster. -- For best security and management practices, create a hosted cluster separate from other hosted clusters. +- For best security and management practices, create a hosted cluster separate from other hosted clusters. - Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs). @@ -32,10 +32,10 @@ $ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1 + [source,terminal] ---- -$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2 +$ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2 ---- + -[source,terminal] +[source,terminal] ---- $ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3 ---- @@ -83,7 +83,7 @@ $ hcp create cluster agent \ <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`. <8> Specify your hosted cluster namespace. <9> Specify the availability policy for the hosted control plane components. Supported options are `SingleReplica` and `HighlyAvailable`. The default value is `HighlyAvailable`. -<10> Specify the supported {product-title} version that you want to use, such as `4.20.0-multi`. 
If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_. +<10> Specify the supported {product-title} version that you want to use, such as `4.21.0-multi`. If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_. <11> Specify the node pool replica count, such as `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, you do not create node pools. <12> After the `--ssh-key` flag, specify the path to the SSH key, such as `user/.ssh/id_rsa`. @@ -266,7 +266,7 @@ apiVersion: v1 kind: Service metadata: annotations: - metallb.universe.tf/address-pool: metallb + metallb.universe.tf/address-pool: metallb name: metallb-ingress namespace: openshift-ingress spec: diff --git a/modules/hcp-bm-ingress.adoc b/modules/hcp-bm-ingress.adoc index 4ee1d69965..adab0f8845 100644 --- a/modules/hcp-bm-ingress.adoc +++ b/modules/hcp-bm-ingress.adoc @@ -206,7 +206,7 @@ clusteroperator.config.openshift.io/console clusteroperator.config.openshift.io/ingress 4.x.y True False False 53m ---- + -Replace `<4.x.y>` with the supported {product-title} version that you want to use, for example, `4.20.0-multi`. +Replace `<4.x.y>` with the supported {product-title} version that you want to use, for example, `4.21.0-multi`. ifeval::["{context}" == "hcp-manage-non-bm"] diff --git a/modules/hcp-create-hc-arm64-aws.adoc b/modules/hcp-create-hc-arm64-aws.adoc index 77d5a10187..6dbf932170 100644 --- a/modules/hcp-create-hc-arm64-aws.adoc +++ b/modules/hcp-create-hc-arm64-aws.adoc @@ -37,5 +37,5 @@ $ hcp create cluster aws \ <4> Specify the path to your pull secret, for example, `/user/name/pullsecret`. 
<5> Specify the path to your AWS STS credentials file, for example, `/home/user/sts-creds/sts-creds.json`. <6> Specify the AWS region name, for example, `us-east-1`. -<7> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest". +<7> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`. If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest". <8> Specify the Amazon Resource Name (ARN), for example, `arn:aws:iam::820196288204:role/myrole`. \ No newline at end of file diff --git a/modules/hcp-deploy-openstack-create.adoc b/modules/hcp-deploy-openstack-create.adoc index 4188c1ccb2..fb1ae01525 100644 --- a/modules/hcp-deploy-openstack-create.adoc +++ b/modules/hcp-deploy-openstack-create.adoc @@ -27,7 +27,7 @@ $ hcp create cluster openstack \ --openstack-node-flavor m1.xlarge \ --base-domain example.com \ --pull-secret /path/to/pull-secret.json \ - --release-image quay.io/openshift-release-dev/ocp-release:4.20.0-x86_64 \ + --release-image quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 \ --node-pool-replicas 3 \ --etcd-storage-class lvms-etcd-class ---- diff --git a/modules/hcp-ibm-z-adding-reg-ca-hostedcluster.adoc b/modules/hcp-ibm-z-adding-reg-ca-hostedcluster.adoc index fa3e335792..0a825f13af 100644 --- a/modules/hcp-ibm-z-adding-reg-ca-hostedcluster.adoc +++ b/modules/hcp-ibm-z-adding-reg-ca-hostedcluster.adoc @@ -44,5 +44,5 @@ $ hcp create cluster agent \ <4> Replace the name with your base domain, for example, `example.com`. <5> Replace the etcd storage class name, for example, `lvm-storageclass`. <6> Replace the path to your SSH public key. 
The default file path is `~/.ssh/id_rsa.pub`. -<7> Replace with the supported {product-title} version that you want to use, for example, `4.20.0-multi`. +<7> Replace with the supported {product-title} version that you want to use, for example, `4.21.0-multi`. <8> Replace the path to Certificate Authority of mirror registry. \ No newline at end of file diff --git a/modules/hcp-non-bm-hc-mirror.adoc b/modules/hcp-non-bm-hc-mirror.adoc index 7b5d76781b..08d968ba8a 100644 --- a/modules/hcp-non-bm-hc-mirror.adoc +++ b/modules/hcp-non-bm-hc-mirror.adoc @@ -52,4 +52,4 @@ $ hcp create cluster agent \ <6> Specify the `icsp.yaml` file that defines ICSP and your mirror registries. <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`. <8> Specify your hosted cluster namespace. -<9> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_. +<9> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`. If you are using a disconnected environment, replace `` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_. diff --git a/modules/hcp-non-bm-hc.adoc b/modules/hcp-non-bm-hc.adoc index f4c76c1059..f5f0e731da 100644 --- a/modules/hcp-non-bm-hc.adoc +++ b/modules/hcp-non-bm-hc.adoc @@ -54,7 +54,7 @@ $ hcp create cluster agent \ <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`. <8> Specify your hosted cluster namespace. <9> Specify the availability policy for the hosted control plane components. Supported options are `SingleReplica` and `HighlyAvailable`. The default value is `HighlyAvailable`. 
-<10> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. +<10> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`. <11> Specify the node pool replica count, for example, `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, no node pools are created. .Verification diff --git a/modules/hcp-np-capacity-blocks.adoc b/modules/hcp-np-capacity-blocks.adoc index 5a46ba1f45..d55bb27af3 100644 --- a/modules/hcp-np-capacity-blocks.adoc +++ b/modules/hcp-np-capacity-blocks.adoc @@ -123,7 +123,7 @@ $ oc get np -n clusters [source,terminal] ---- NAMESPACE NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE -clusters cb-np cb-np-hcp 1 1 False False 4.20.0-0.nightly-2025-06-05-224220 False False +clusters cb-np cb-np-hcp 1 1 False False 4.21.0-0.nightly-2025-06-05-224220 False False ---- . Verify that your new compute nodes are created in the hosted cluster by running the following command: diff --git a/modules/hibernating-cluster-hibernate.adoc b/modules/hibernating-cluster-hibernate.adoc index 0f844dbf6c..4faebbd44e 100644 --- a/modules/hibernating-cluster-hibernate.adoc +++ b/modules/hibernating-cluster-hibernate.adoc @@ -57,14 +57,14 @@ $ oc get clusteroperators [source,terminal] ---- NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -authentication 4.20.0-0 True False False 51m -baremetal 4.20.0-0 True False False 72m -cloud-controller-manager 4.20.0-0 True False False 75m -cloud-credential 4.20.0-0 True False False 77m -cluster-api 4.20.0-0 True False False 42m -cluster-autoscaler 4.20.0-0 True False False 72m -config-operator 4.20.0-0 True False False 72m -console 4.20.0-0 True False False 55m +authentication 4.21.0-0 True False False 51m +baremetal 4.21.0-0 True False False 72m +cloud-controller-manager 4.21.0-0 True False False 75m +cloud-credential 4.21.0-0 True 
False False 77m +cluster-api 4.21.0-0 True False False 42m +cluster-autoscaler 4.21.0-0 True False False 72m +config-operator 4.21.0-0 True False False 72m +console 4.21.0-0 True False False 55m ... ---- + diff --git a/modules/hibernating-cluster-resume.adoc b/modules/hibernating-cluster-resume.adoc index 4365879500..10e94b43a1 100644 --- a/modules/hibernating-cluster-resume.adoc +++ b/modules/hibernating-cluster-resume.adoc @@ -104,14 +104,14 @@ $ oc get clusteroperators [source,terminal] ---- NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -authentication 4.20.0-0 True False False 51m -baremetal 4.20.0-0 True False False 72m -cloud-controller-manager 4.20.0-0 True False False 75m -cloud-credential 4.20.0-0 True False False 77m -cluster-api 4.20.0-0 True False False 42m -cluster-autoscaler 4.20.0-0 True False False 72m -config-operator 4.20.0-0 True False False 72m -console 4.20.0-0 True False False 55m +authentication 4.21.0-0 True False False 51m +baremetal 4.21.0-0 True False False 72m +cloud-controller-manager 4.21.0-0 True False False 75m +cloud-credential 4.21.0-0 True False False 77m +cluster-api 4.21.0-0 True False False 42m +cluster-autoscaler 4.21.0-0 True False False 72m +config-operator 4.21.0-0 True False False 72m +console 4.21.0-0 True False False 55m ... 
---- + diff --git a/modules/hosted-control-planes-version-support.adoc b/modules/hosted-control-planes-version-support.adoc index 638b771912..0dea0e7279 100644 --- a/modules/hosted-control-planes-version-support.adoc +++ b/modules/hosted-control-planes-version-support.adoc @@ -43,7 +43,7 @@ You can host different versions of control planes on the same management cluster ---- apiVersion: v1 data: - supported-versions: '{"versions":["4.20"]}' + supported-versions: '{"versions":["4.21"]}' kind: ConfigMap metadata: labels: diff --git a/modules/ibi-create-iso-for-bmh.adoc b/modules/ibi-create-iso-for-bmh.adoc index 256c511d6b..2661b2f5c6 100644 --- a/modules/ibi-create-iso-for-bmh.adoc +++ b/modules/ibi-create-iso-for-bmh.adoc @@ -59,8 +59,8 @@ kind: ImageBasedInstallationConfig metadata: name: example-image-based-installation-config # The following fields are required -seedImage: quay.io/openshift-kni/seed-image:4.20.0 -seedVersion: 4.20.0 +seedImage: quay.io/openshift-kni/seed-image:4.21.0 +seedVersion: 4.21.0 installationDisk: /dev/vda pullSecret: '' # networkConfig is optional and contains the network configuration for the host in NMState format. @@ -89,7 +89,7 @@ kind: ImageBasedInstallationConfig metadata: name: example-image-based-installation-config seedImage: quay.io/repo-id/seed:latest -seedVersion: "4.20.0" +seedVersion: "4.21.0" extraPartitionStart: "-240G" installationDisk: /dev/disk/by-id/wwn-0x62c... sshKey: 'ssh-ed25519 AAAA...' diff --git a/modules/ibi-extra-partition-ibi-install-iso.adoc b/modules/ibi-extra-partition-ibi-install-iso.adoc index a3577647d0..86e2b2d080 100644 --- a/modules/ibi-extra-partition-ibi-install-iso.adoc +++ b/modules/ibi-extra-partition-ibi-install-iso.adoc @@ -27,7 +27,7 @@ kind: ImageBasedInstallationConfig metadata: name: example-extra-partition seedImage: quay.io/repo-id/seed:latest -seedVersion: "4.20.0" +seedVersion: "4.21.0" installationDisk: /dev/sda pullSecret: '{"auths": ...}' # ... 
diff --git a/modules/insights-operator-one-time-gather.adoc b/modules/insights-operator-one-time-gather.adoc index cb4a337956..8a87696384 100644 --- a/modules/insights-operator-one-time-gather.adoc +++ b/modules/insights-operator-one-time-gather.adoc @@ -20,7 +20,7 @@ You must run a gather operation to create an {insights-operator} archive. + [source,yaml] ---- -include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.20/docs/gather-job.yaml[] +include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.21/docs/gather-job.yaml[] ---- . Copy your `insights-operator` image version: + diff --git a/modules/installation-arm-bootstrap.adoc b/modules/installation-arm-bootstrap.adoc index c92c2c19f2..3ca6dd7276 100644 --- a/modules/installation-arm-bootstrap.adoc +++ b/modules/installation-arm-bootstrap.adoc @@ -21,10 +21,10 @@ bootstrap machine that you need for your {product-title} cluster: [source,json] ---- ifndef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/04_bootstrap.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/04_bootstrap.json[] endif::ash[] ifdef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/04_bootstrap.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/04_bootstrap.json[] endif::ash[] ---- ==== diff --git a/modules/installation-arm-control-plane.adoc b/modules/installation-arm-control-plane.adoc index 2b06d58f5f..14d3c1a562 100644 --- a/modules/installation-arm-control-plane.adoc +++ b/modules/installation-arm-control-plane.adoc @@ -21,10 +21,10 @@ control plane machines that you need for your {product-title} cluster: [source,json] ---- ifndef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/05_masters.json[] 
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/05_masters.json[] endif::ash[] ifdef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/05_masters.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/05_masters.json[] endif::ash[] ---- ==== diff --git a/modules/installation-arm-dns.adoc b/modules/installation-arm-dns.adoc index c2619b8eb8..5d03f6aba2 100644 --- a/modules/installation-arm-dns.adoc +++ b/modules/installation-arm-dns.adoc @@ -22,10 +22,10 @@ cluster: [source,json] ---- ifndef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/03_infra.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/03_infra.json[] endif::ash[] ifdef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/03_infra.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/03_infra.json[] endif::ash[] ---- ==== diff --git a/modules/installation-arm-image-storage.adoc b/modules/installation-arm-image-storage.adoc index 739f2f368b..57cccba63b 100644 --- a/modules/installation-arm-image-storage.adoc +++ b/modules/installation-arm-image-storage.adoc @@ -21,10 +21,10 @@ stored {op-system-first} image that you need for your {product-title} cluster: [source,json] ---- ifndef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/02_storage.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/02_storage.json[] endif::ash[] ifdef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/02_storage.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/02_storage.json[] endif::ash[] ---- ==== diff --git 
a/modules/installation-arm-vnet.adoc b/modules/installation-arm-vnet.adoc index 21e690e064..24626dfbd8 100644 --- a/modules/installation-arm-vnet.adoc +++ b/modules/installation-arm-vnet.adoc @@ -21,10 +21,10 @@ VNet that you need for your {product-title} cluster: [source,json] ---- ifndef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/01_vnet.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/01_vnet.json[] endif::ash[] ifdef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/01_vnet.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/01_vnet.json[] endif::ash[] ---- ==== diff --git a/modules/installation-arm-worker.adoc b/modules/installation-arm-worker.adoc index 9f85ab2df6..c9fec08f8f 100644 --- a/modules/installation-arm-worker.adoc +++ b/modules/installation-arm-worker.adoc @@ -21,10 +21,10 @@ worker machines that you need for your {product-title} cluster: [source,json] ---- ifndef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/06_workers.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/06_workers.json[] endif::ash[] ifdef::ash[] -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/06_workers.json[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/06_workers.json[] endif::ash[] ---- ==== diff --git a/modules/installation-aws-arm-tested-machine-types.adoc b/modules/installation-aws-arm-tested-machine-types.adoc index d978c621fe..270745c185 100644 --- a/modules/installation-aws-arm-tested-machine-types.adoc +++ b/modules/installation-aws-arm-tested-machine-types.adoc @@ -22,5 +22,5 @@ Use the machine types included in the following charts for your AWS ARM instance .Machine types based on 64-bit ARM 
architecture [%collapsible] ==== -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/aws/tested_instance_types_aarch64.md[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/aws/tested_instance_types_aarch64.md[] ==== diff --git a/modules/installation-aws-tested-machine-types.adoc b/modules/installation-aws-tested-machine-types.adoc index 222998420b..42ed24a7e7 100644 --- a/modules/installation-aws-tested-machine-types.adoc +++ b/modules/installation-aws-tested-machine-types.adoc @@ -46,7 +46,7 @@ ifndef::local-zone,wavelength-zone,secretregion[] .Machine types based on 64-bit x86 architecture [%collapsible] ==== -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/aws/tested_instance_types_x86_64.md[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/aws/tested_instance_types_x86_64.md[] ==== endif::local-zone,wavelength-zone,secretregion[] ifdef::local-zone[] diff --git a/modules/installation-azure-arm-tested-machine-types.adoc b/modules/installation-azure-arm-tested-machine-types.adoc index a66b350629..24ead69464 100644 --- a/modules/installation-azure-arm-tested-machine-types.adoc +++ b/modules/installation-azure-arm-tested-machine-types.adoc @@ -18,5 +18,5 @@ The following Microsoft Azure ARM64 instance types have been tested with {produc .Machine types based on 64-bit ARM architecture [%collapsible] ==== -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/azure/tested_instance_types_aarch64.md[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/azure/tested_instance_types_aarch64.md[] ==== diff --git a/modules/installation-azure-tested-machine-types.adoc b/modules/installation-azure-tested-machine-types.adoc index 43a77c99bd..59272d05c2 100644 --- a/modules/installation-azure-tested-machine-types.adoc +++ 
b/modules/installation-azure-tested-machine-types.adoc @@ -17,5 +17,5 @@ The following Microsoft Azure instance types have been tested with {product-titl .Machine types based on 64-bit x86 architecture [%collapsible] ==== -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/azure/tested_instance_types_x86_64.md[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/azure/tested_instance_types_x86_64.md[] ==== diff --git a/modules/installation-cloudformation-bootstrap.adoc b/modules/installation-cloudformation-bootstrap.adoc index 1826f5c535..91f514e9e0 100644 --- a/modules/installation-cloudformation-bootstrap.adoc +++ b/modules/installation-cloudformation-bootstrap.adoc @@ -14,6 +14,6 @@ You can use the following CloudFormation template to deploy the bootstrap machin ==== [source,yaml] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/04_cluster_bootstrap.yaml[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/04_cluster_bootstrap.yaml[] ---- ==== diff --git a/modules/installation-cloudformation-control-plane.adoc b/modules/installation-cloudformation-control-plane.adoc index 2b60c13d2b..c6fe1fd19e 100644 --- a/modules/installation-cloudformation-control-plane.adoc +++ b/modules/installation-cloudformation-control-plane.adoc @@ -15,6 +15,6 @@ machines that you need for your {product-title} cluster. 
==== [source,yaml] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/05_cluster_master_nodes.yaml[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/05_cluster_master_nodes.yaml[] ---- ==== diff --git a/modules/installation-cloudformation-dns.adoc b/modules/installation-cloudformation-dns.adoc index 597954d4be..2a03dd2d10 100644 --- a/modules/installation-cloudformation-dns.adoc +++ b/modules/installation-cloudformation-dns.adoc @@ -15,7 +15,7 @@ objects and load balancers that you need for your {product-title} cluster. ==== [source,yaml] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/02_cluster_infra.yaml[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/02_cluster_infra.yaml[] ---- ==== diff --git a/modules/installation-cloudformation-security.adoc b/modules/installation-cloudformation-security.adoc index 66495708a6..a1a4aca558 100644 --- a/modules/installation-cloudformation-security.adoc +++ b/modules/installation-cloudformation-security.adoc @@ -15,6 +15,6 @@ that you need for your {product-title} cluster. ==== [source,yaml] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/03_cluster_security.yaml[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/03_cluster_security.yaml[] ---- ==== diff --git a/modules/installation-cloudformation-vpc.adoc b/modules/installation-cloudformation-vpc.adoc index b01410cdef..a62b7aa35b 100644 --- a/modules/installation-cloudformation-vpc.adoc +++ b/modules/installation-cloudformation-vpc.adoc @@ -15,6 +15,6 @@ you need for your {product-title} cluster. 
==== [source,yaml] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/01_vpc.yaml[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/01_vpc.yaml[] ---- ==== diff --git a/modules/installation-cloudformation-worker.adoc b/modules/installation-cloudformation-worker.adoc index b938227eab..5e6a706700 100644 --- a/modules/installation-cloudformation-worker.adoc +++ b/modules/installation-cloudformation-worker.adoc @@ -14,6 +14,6 @@ You can deploy the compute machines that you need for your {product-title} clust ==== [source,yaml] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/06_cluster_worker_node.yaml[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/06_cluster_worker_node.yaml[] ---- ==== \ No newline at end of file diff --git a/modules/installation-deployment-manager-bootstrap.adoc b/modules/installation-deployment-manager-bootstrap.adoc index 584b50ce21..fc4d61c813 100644 --- a/modules/installation-deployment-manager-bootstrap.adoc +++ b/modules/installation-deployment-manager-bootstrap.adoc @@ -15,6 +15,6 @@ machine that you need for your {product-title} cluster: ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/04_bootstrap.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/04_bootstrap.py[] ---- ==== diff --git a/modules/installation-deployment-manager-control-plane.adoc b/modules/installation-deployment-manager-control-plane.adoc index 8100a5fa99..1ea45801e2 100644 --- a/modules/installation-deployment-manager-control-plane.adoc +++ b/modules/installation-deployment-manager-control-plane.adoc @@ -15,6 +15,6 @@ plane machines that you need for your {product-title} cluster: ==== [source,python] ---- 
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/05_control_plane.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/05_control_plane.py[] ---- ==== diff --git a/modules/installation-deployment-manager-ext-lb.adoc b/modules/installation-deployment-manager-ext-lb.adoc index 0a1a96f2c7..64d0c8fe13 100644 --- a/modules/installation-deployment-manager-ext-lb.adoc +++ b/modules/installation-deployment-manager-ext-lb.adoc @@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the external loa ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/02_lb_ext.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/02_lb_ext.py[] ---- ==== diff --git a/modules/installation-deployment-manager-firewall-rules.adoc b/modules/installation-deployment-manager-firewall-rules.adoc index cc2d2e7814..d3cdad8f47 100644 --- a/modules/installation-deployment-manager-firewall-rules.adoc +++ b/modules/installation-deployment-manager-firewall-rules.adoc @@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the firewall rul ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/03_firewall.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/03_firewall.py[] ---- ==== diff --git a/modules/installation-deployment-manager-iam-shared-vpc.adoc b/modules/installation-deployment-manager-iam-shared-vpc.adoc index 3daf914af8..cd53770f8d 100644 --- a/modules/installation-deployment-manager-iam-shared-vpc.adoc +++ b/modules/installation-deployment-manager-iam-shared-vpc.adoc @@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the IAM roles th ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/03_iam.py[] 
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/03_iam.py[] ---- ==== diff --git a/modules/installation-deployment-manager-int-lb.adoc b/modules/installation-deployment-manager-int-lb.adoc index e59bab9760..b001936c1a 100644 --- a/modules/installation-deployment-manager-int-lb.adoc +++ b/modules/installation-deployment-manager-int-lb.adoc @@ -13,7 +13,7 @@ You can use the following Deployment Manager template to deploy the internal loa ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/02_lb_int.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/02_lb_int.py[] ---- ==== diff --git a/modules/installation-deployment-manager-private-dns.adoc b/modules/installation-deployment-manager-private-dns.adoc index ab06161aa6..1b9365d007 100644 --- a/modules/installation-deployment-manager-private-dns.adoc +++ b/modules/installation-deployment-manager-private-dns.adoc @@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the private DNS ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/02_dns.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/02_dns.py[] ---- ==== diff --git a/modules/installation-deployment-manager-vpc.adoc b/modules/installation-deployment-manager-vpc.adoc index 28beeb8a79..9b57f1df4f 100644 --- a/modules/installation-deployment-manager-vpc.adoc +++ b/modules/installation-deployment-manager-vpc.adoc @@ -15,6 +15,6 @@ you need for your {product-title} cluster: ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/01_vpc.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/01_vpc.py[] ---- ==== diff --git a/modules/installation-deployment-manager-worker.adoc 
b/modules/installation-deployment-manager-worker.adoc index 9ba2dd0924..2b3dc67114 100644 --- a/modules/installation-deployment-manager-worker.adoc +++ b/modules/installation-deployment-manager-worker.adoc @@ -15,6 +15,6 @@ that you need for your {product-title} cluster: ==== [source,python] ---- -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/06_worker.py[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/06_worker.py[] ---- ==== diff --git a/modules/installation-gcp-tested-machine-types-arm.adoc b/modules/installation-gcp-tested-machine-types-arm.adoc index b6ceac244c..6d019e9b36 100644 --- a/modules/installation-gcp-tested-machine-types-arm.adoc +++ b/modules/installation-gcp-tested-machine-types-arm.adoc @@ -18,5 +18,5 @@ The following {gcp-first} 64-bit ARM instance types have been tested with {produ .Machine series for 64-bit ARM machines [%collapsible] ==== -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/gcp/tested_instance_types_arm.md[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/gcp/tested_instance_types_arm.md[] ==== \ No newline at end of file diff --git a/modules/installation-gcp-tested-machine-types.adoc b/modules/installation-gcp-tested-machine-types.adoc index d4fcd9f2b5..ef5f0632e6 100644 --- a/modules/installation-gcp-tested-machine-types.adoc +++ b/modules/installation-gcp-tested-machine-types.adoc @@ -25,5 +25,5 @@ Some instance types require the use of Hyperdisk storage. 
If you use an instance .Machine series [%collapsible] ==== -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/gcp/tested_instance_types.md[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/gcp/tested_instance_types.md[] ==== diff --git a/modules/installation-ibm-cloud-tested-machine-types.adoc b/modules/installation-ibm-cloud-tested-machine-types.adoc index 4a5c8a72e4..82c779eaa9 100644 --- a/modules/installation-ibm-cloud-tested-machine-types.adoc +++ b/modules/installation-ibm-cloud-tested-machine-types.adoc @@ -14,5 +14,5 @@ The following {ibm-cloud-name} instance types have been tested with {product-tit .Machine series [%collapsible] ==== -include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/ibmcloud/tested_instance_types_x86_64.md[] +include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/ibmcloud/tested_instance_types_x86_64.md[] ==== \ No newline at end of file diff --git a/modules/installation-mirror-repository.adoc b/modules/installation-mirror-repository.adoc index 0f3e9a2f35..648ad5e087 100644 --- a/modules/installation-mirror-repository.adoc +++ b/modules/installation-mirror-repository.adoc @@ -48,7 +48,7 @@ endif::[] $ OCP_RELEASE= ---- + -For ``, specify the tag that corresponds to the version of {product-title} to install, such as `4.20.1`. +For ``, specify the tag that corresponds to the version of {product-title} to install, such as `4.21.1`. .. Export the local registry name and host port: + @@ -305,4 +305,3 @@ You must perform this step on a machine with an active internet connection. 
$ openshift-install ---- endif::openshift-rosa,openshift-dedicated[] - diff --git a/modules/installation-special-config-kmod.adoc b/modules/installation-special-config-kmod.adoc index 4490d81bf3..499ef039d4 100644 --- a/modules/installation-special-config-kmod.adoc +++ b/modules/installation-special-config-kmod.adoc @@ -239,10 +239,10 @@ $ sudo spkut 44 .Example output [source,terminal] ---- -KVC: wrapper simple-kmod for 4.20.0-147.3.1.el8_1.x86_64 +KVC: wrapper simple-kmod for 4.21.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged - simple-kmod-dd1a7d4:4.20.0-147.3.1.el8_1.x86_64 spkut 44 + simple-kmod-dd1a7d4:4.21.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 ---- diff --git a/modules/installation-special-config-rtkernel.adoc b/modules/installation-special-config-rtkernel.adoc index 7400cb0be8..9bbaa286b5 100644 --- a/modules/installation-special-config-rtkernel.adoc +++ b/modules/installation-special-config-rtkernel.adoc @@ -120,8 +120,8 @@ sh-4.4# uname -a .Example output [source,terminal] ---- -Linux 4.20.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT - Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux +Linux 4.21.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT + Wed Feb 25 18:29:55 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux ---- + The kernel name contains `rt` and text `PREEMPT RT` indicates that this is a diff --git a/modules/installation-user-infra-machines-iso.adoc b/modules/installation-user-infra-machines-iso.adoc index da6cfd1846..b3f78f0b09 100644 --- a/modules/installation-user-infra-machines-iso.adoc +++ b/modules/installation-user-infra-machines-iso.adoc @@ -88,10 +88,10 @@ $ openshift-install coreos print-stream-json | grep '\.iso[^.]' [source,terminal] ifndef::openshift-origin[] ---- -"location": "/art/storage/releases/rhcos-4.20-aarch64//aarch64/rhcos--live.aarch64.iso", -"location": 
"/art/storage/releases/rhcos-4.20-ppc64le//ppc64le/rhcos--live.ppc64le.iso", -"location": "/art/storage/releases/rhcos-4.20-s390x//s390x/rhcos--live.s390x.iso", -"location": "/art/storage/releases/rhcos-4.20//x86_64/rhcos--live.x86_64.iso", +"location": "/art/storage/releases/rhcos-4.21-aarch64//aarch64/rhcos--live.aarch64.iso", +"location": "/art/storage/releases/rhcos-4.21-ppc64le//ppc64le/rhcos--live.ppc64le.iso", +"location": "/art/storage/releases/rhcos-4.21-s390x//s390x/rhcos--live.s390x.iso", +"location": "/art/storage/releases/rhcos-4.21//x86_64/rhcos--live.x86_64.iso", ---- endif::openshift-origin[] ifdef::openshift-origin[] diff --git a/modules/installation-user-infra-machines-pxe.adoc b/modules/installation-user-infra-machines-pxe.adoc index 8c7d1505c2..bbdfaf944c 100644 --- a/modules/installation-user-infra-machines-pxe.adoc +++ b/modules/installation-user-infra-machines-pxe.adoc @@ -101,18 +101,18 @@ $ openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initra [source,terminal] ifndef::openshift-origin[] ---- -"/art/storage/releases/rhcos-4.20-aarch64//aarch64/rhcos--live-kernel-aarch64" -"/art/storage/releases/rhcos-4.20-aarch64//aarch64/rhcos--live-initramfs.aarch64.img" -"/art/storage/releases/rhcos-4.20-aarch64//aarch64/rhcos--live-rootfs.aarch64.img" -"/art/storage/releases/rhcos-4.20-ppc64le/49.84.202110081256-0/ppc64le/rhcos--live-kernel-ppc64le" -"/art/storage/releases/rhcos-4.20-ppc64le//ppc64le/rhcos--live-initramfs.ppc64le.img" -"/art/storage/releases/rhcos-4.20-ppc64le//ppc64le/rhcos--live-rootfs.ppc64le.img" -"/art/storage/releases/rhcos-4.20-s390x//s390x/rhcos--live-kernel-s390x" -"/art/storage/releases/rhcos-4.20-s390x//s390x/rhcos--live-initramfs.s390x.img" -"/art/storage/releases/rhcos-4.20-s390x//s390x/rhcos--live-rootfs.s390x.img" -"/art/storage/releases/rhcos-4.20//x86_64/rhcos--live-kernel-x86_64" -"/art/storage/releases/rhcos-4.20//x86_64/rhcos--live-initramfs.x86_64.img" 
-"/art/storage/releases/rhcos-4.20//x86_64/rhcos--live-rootfs.x86_64.img" +"/art/storage/releases/rhcos-4.21-aarch64//aarch64/rhcos--live-kernel-aarch64" +"/art/storage/releases/rhcos-4.21-aarch64//aarch64/rhcos--live-initramfs.aarch64.img" +"/art/storage/releases/rhcos-4.21-aarch64//aarch64/rhcos--live-rootfs.aarch64.img" +"/art/storage/releases/rhcos-4.21-ppc64le/49.84.202110081256-0/ppc64le/rhcos--live-kernel-ppc64le" +"/art/storage/releases/rhcos-4.21-ppc64le//ppc64le/rhcos--live-initramfs.ppc64le.img" +"/art/storage/releases/rhcos-4.21-ppc64le//ppc64le/rhcos--live-rootfs.ppc64le.img" +"/art/storage/releases/rhcos-4.21-s390x//s390x/rhcos--live-kernel-s390x" +"/art/storage/releases/rhcos-4.21-s390x//s390x/rhcos--live-initramfs.s390x.img" +"/art/storage/releases/rhcos-4.21-s390x//s390x/rhcos--live-rootfs.s390x.img" +"/art/storage/releases/rhcos-4.21//x86_64/rhcos--live-kernel-x86_64" +"/art/storage/releases/rhcos-4.21//x86_64/rhcos--live-initramfs.x86_64.img" +"/art/storage/releases/rhcos-4.21//x86_64/rhcos--live-rootfs.x86_64.img" ---- endif::openshift-origin[] ifdef::openshift-origin[] @@ -232,9 +232,9 @@ menuentry 'Install CoreOS' { } ---- + -where: +where: + -`coreos.live.rootfs_url`:: Specify the locations of the {op-system} files that you uploaded to your HTTP/TFTP server. +`coreos.live.rootfs_url`:: Specify the locations of the {op-system} files that you uploaded to your HTTP/TFTP server. `kernel`:: The `kernel` parameter value is the location of the `kernel` file on your TFTP server. The `coreos.live.rootfs_url` parameter value is the location of the `rootfs` file, and the `coreos.inst.ignition_url` parameter value is the location of the bootstrap Ignition config file on your HTTP Server. If you use multiple NICs, specify a single interface in the `ip` option. For example, to use DHCP on a NIC that is named `eno1`, set `ip=eno1:dhcp`. `initrd rhcos`:: Specify the location of the `initramfs` file that you uploaded to your TFTP server. 
diff --git a/modules/installing-a-cluster-with-multiarch-support.adoc b/modules/installing-a-cluster-with-multiarch-support.adoc index 3a9eca63f7..70afe5b3d1 100644 --- a/modules/installing-a-cluster-with-multiarch-support.adoc +++ b/modules/installing-a-cluster-with-multiarch-support.adoc @@ -22,7 +22,7 @@ $ ./openshift-install version .Example output [source,terminal] ---- -./openshift-install 4.20.0 +./openshift-install 4.21.0 built from commit abc123etc release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc release architecture multi diff --git a/modules/k8s-nmstate-uninstall-operator.adoc b/modules/k8s-nmstate-uninstall-operator.adoc index feb9aa11e2..25bf4fff06 100644 --- a/modules/k8s-nmstate-uninstall-operator.adoc +++ b/modules/k8s-nmstate-uninstall-operator.adoc @@ -38,14 +38,14 @@ $ oc get --namespace openshift-nmstate clusterserviceversion [source,terminal] ---- NAME DISPLAY VERSION REPLACES PHASE -kubernetes-nmstate-operator.v4.20.0 Kubernetes NMState Operator 4.20.0 Succeeded +kubernetes-nmstate-operator.v4.21.0 Kubernetes NMState Operator 4.21.0 Succeeded ---- . Delete the CSV resource. After you delete the file, {olm} deletes certain resources, such as `RBAC`, that it created for the Operator. + [source,terminal] ---- -$ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.20.0 +$ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.21.0 ---- . 
Delete the `nmstate` CR and any associated `Deployment` resources by running the following commands: diff --git a/modules/kmm-validation-kickoff.adoc b/modules/kmm-validation-kickoff.adoc index 6793e265a7..f83dbbb1ba 100644 --- a/modules/kmm-validation-kickoff.adoc +++ b/modules/kmm-validation-kickoff.adoc @@ -15,22 +15,22 @@ You can obtain the image by running one of the following commands in the cluster [source,terminal] ---- # For x86_64 image: -$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.20.0-x86_64 --image-for=driver-toolkit +$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 --image-for=driver-toolkit ---- + [source,terminal] ---- # For ARM64 image: -$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.20.0-aarch64 --image-for=driver-toolkit +$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-aarch64 --image-for=driver-toolkit ---- -`kernelVersion`:: Required field that provides the version of the kernel that the cluster is upgraded to. +`kernelVersion`:: Required field that provides the version of the kernel that the cluster is upgraded to. + You can obtain the version by running the following command in the cluster: + [source,terminal] ---- -$ podman run -it --rm $(oc adm release info quay.io/openshift-release-dev/ocp-release:4.20.0-x86_64 --image-for=driver-toolkit) cat /etc/driver-toolkit-release.json +$ podman run -it --rm $(oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 --image-for=driver-toolkit) cat /etc/driver-toolkit-release.json ---- `pushBuiltImage`:: If `true`, then the images created during the Build and Sign validation are pushed to their repositories. This field is `false` by default. 
diff --git a/modules/learning-deploying-application-scaling-manual-pod.adoc b/modules/learning-deploying-application-scaling-manual-pod.adoc index 39d032577b..b6c1cc967a 100644 --- a/modules/learning-deploying-application-scaling-manual-pod.adoc +++ b/modules/learning-deploying-application-scaling-manual-pod.adoc @@ -8,23 +8,23 @@ [role="_abstract"] You can manually scale your application's pods by using one of the following methods: -* Changing your ReplicaSet or deployment definition +* Changing your ReplicaSet or deployment definition * Using the command line * Using the web console -This workshop starts by using only one pod for the microservice. By defining a replica of `1` in your deployment definition, the Kubernetes Replication Controller strives to keep one pod alive. You then learn how to define pod autoscaling by using the link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler](HPA) which is based on the load and will scale out more pods when necessary. +This workshop starts by using only one pod for the microservice. By defining a replica of `1` in your deployment definition, the Kubernetes Replication Controller strives to keep one pod alive. You then learn how to define pod autoscaling by using the link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA), which scales out more pods based on the load when necessary. .Prerequisites -* An active {product-title} cluster +* An active {product-title} cluster * A deployed OSToy application -.Procedure +.Procedure . In the OSToy app, click the *Networking* tab in the navigational menu. . In the "Intra-cluster Communication" section, locate the box that randomly changes colors. Inside the box, you see the microservice's pod name.
There is only one box in this example because there is only one microservice pod. + image::deploy-scale-network.png[HPA Menu] -+ ++ . Confirm that there is only one pod running for the microservice by running the following command: + @@ -95,8 +95,8 @@ ostoy-microservice-6666dcf455-tqzmn 1/1 Running 0 26m $ oc scale deployment ostoy-microservice --replicas=2 ---- + -** From the navigational menu of the OpenShift web console UI, click *Workloads > Deployments > ostoy-microservice*. -** Locate the blue circle with a "3 Pod" label in the middle. +** From the navigational menu of the OpenShift web console UI, click *Workloads > Deployments > ostoy-microservice*. +** Locate the blue circle with a "3 Pod" label in the middle. ** Selecting the arrows next to the circle scales the number of pods. Select the down arrow to `2`. + image::deploy-scale-uiscale.png[UI Scale] diff --git a/modules/learning-deploying-application-scaling-pod-autoscaling.adoc b/modules/learning-deploying-application-scaling-pod-autoscaling.adoc index 46ddbc5ff3..e4b5ee9aed 100644 --- a/modules/learning-deploying-application-scaling-pod-autoscaling.adoc +++ b/modules/learning-deploying-application-scaling-pod-autoscaling.adoc @@ -6,10 +6,10 @@ = Pod autoscaling [role="_abstract"] -{product-title} offers a link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary. +{product-title} offers a link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary. 
.Prerequisites -* An active {product-title} cluster +* An active {product-title} cluster * A deployed OSToy application .Procedure @@ -27,11 +27,11 @@ $ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10 + This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice deployment. During deployment, HPA increases and decreases the number of replicas to keep the average CPU use across all pods at 80% and 40 millicores. -. On the *Pod Auto Scaling > Horizontal Pod Autoscaling* page, select *Increase the load*. +. On the *Pod Auto Scaling > Horizontal Pod Autoscaling* page, select *Increase the load*. + [IMPORTANT] ==== -Because increasing the load generates CPU intensive calculations, the page can become unresponsive. This is an expected response. Only click *Increase the Load* once. For more information about the process, see the link:https://github.com/openshift-cs/ostoy/blob/master/microservice/app.js#L32[microservice's GitHub repository]. +Because increasing the load generates CPU-intensive calculations, the page can become unresponsive. This is expected behavior. Only click *Increase the load* once. For more information about the process, see the link:https://github.com/openshift-cs/ostoy/blob/master/microservice/app.js#L32[microservice's GitHub repository]. ==== + After a few minutes, the new pods display on the page represented by colored boxes. @@ -67,17 +67,17 @@ ostoy-microservice-79894f6945-mgwk7 1/1 Running 0 4h24m ostoy-microservice-79894f6945-q925d 1/1 Running 0 3m14s ---- -* You can also verify autoscaling from the {cluster-manager} +* You can also verify autoscaling from the {cluster-manager}. + . In the OpenShift web console navigational menu, click *Observe > Dashboards*. . In the dashboard, select *Kubernetes / Compute Resources / Namespace (Pods)* and your namespace *ostoy*. + image::deploy-scale-hpa-metrics.png[Select metrics] + -.
A graph appears showing your resource usage across CPU and memory. The top graph shows recent CPU consumption per pod and the lower graph indicates memory usage. The following lists the callouts in the graph: -.. The load increased (A). -.. Two new pods were created (B and C). -.. The thickness of each graph represents the CPU consumption and indicates which pods handled more load. +. A graph appears showing your resource usage across CPU and memory. The top graph shows recent CPU consumption per pod and the lower graph indicates memory usage. The following lists the callouts in the graph: +.. The load increased (A). +.. Two new pods were created (B and C). +.. The thickness of each graph represents the CPU consumption and indicates which pods handled more load. .. The load decreased (D), and the pods were deleted. + image::deploy-scale-metrics.png[Select metrics] \ No newline at end of file diff --git a/modules/manually-maintained-credentials-upgrade-extract.adoc b/modules/manually-maintained-credentials-upgrade-extract.adoc index eaf9f1c4f5..d58e2c4c23 100644 --- a/modules/manually-maintained-credentials-upgrade-extract.adoc +++ b/modules/manually-maintained-credentials-upgrade-extract.adoc @@ -32,7 +32,7 @@ The output of this command includes pull specs for the available updates similar Recommended updates: VERSION IMAGE -4.20.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 +4.21.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 ... 
---- diff --git a/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc b/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc index a64ab80848..5a44347b4b 100644 --- a/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc +++ b/modules/monitoring-reviewing-monitoring-dashboards-developer.adoc @@ -14,7 +14,7 @@ include::snippets/snip-unified-perspective-web-console.adoc[] * You have access to the cluster as a developer or as a user. * You have view permissions for the project that you are viewing the dashboard for. -* A cluster administrator has link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/web_console/index#enabling-developer-perspective_web-console_web-console-overview[enabled the *Developer* perspective] in the web console. +* A cluster administrator has link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/web_console/index#enabling-developer-perspective_web-console_web-console-overview[enabled the *Developer* perspective] in the web console. 
.Procedure diff --git a/modules/network-flow-matrix.adoc b/modules/network-flow-matrix.adoc index 2bbb7f664c..9b5fe1b23b 100644 --- a/modules/network-flow-matrix.adoc +++ b/modules/network-flow-matrix.adoc @@ -23,13 +23,13 @@ Additionally, consider the following dynamic port ranges when managing ingress t To view or download the complete raw CSV content for an environment, see the following resources: -* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/bm.csv[{product-title} on bare metal] +* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/bm.csv[{product-title} on bare metal] -* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/none-sno.csv[{sno-caps} with other platforms] +* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/none-sno.csv[{sno-caps} with other platforms] -* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/aws.csv[{product-title} on {aws-short}] +* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/aws.csv[{product-title} on {aws-short}] -* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/aws-sno.csv[{sno-caps} on {aws-short}] +* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/aws-sno.csv[{sno-caps} on {aws-short}] [NOTE] ==== @@ -50,14 +50,14 @@ For base ingress flows to {sno} clusters, see the _Control plane node base flows .Control plane node base flows [%header,format=csv] |=== -include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/common-master.csv[] +include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/common-master.csv[] |=== [id="network-flow-matrix-worker_{context}"] .Worker node base flows [%header,format=csv] |=== 
-include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/common-worker.csv[] +include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/common-worker.csv[] |=== [id="network-flow-matrix-bm_{context}"] @@ -68,7 +68,7 @@ In addition to the base network flows, the following matrix describes the ingres .{product-title} on bare metal [%header,format=csv] |=== -include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/bm.csv[] +include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/bm.csv[] |=== [id="network-flow-matrix-sno_{context}"] @@ -79,7 +79,7 @@ In addition to the base network flows, the following matrix describes the ingres .{sno-caps} with other platforms [%header,format=csv] |=== -include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/none-sno.csv[] +include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/none-sno.csv[] |=== [id="network-flow-matrix-aws_{context}"] @@ -90,7 +90,7 @@ In addition to the base network flows, the following matrix describes the ingres .{product-title} on AWS [%header,format=csv] |=== -include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/aws.csv[] +include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/aws.csv[] |=== [id="network-flow-matrix-aws-sno_{context}"] @@ -101,5 +101,5 @@ In addition to the base network flows, the following matrix describes the ingres .{sno-caps} on AWS [%header,format=csv] |=== -include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/aws-sno.csv[] +include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/aws-sno.csv[] |=== \ No newline at end of file diff --git 
a/modules/nodes-delete-machine-unhealthy-etcd.adoc b/modules/nodes-delete-machine-unhealthy-etcd.adoc index 07cb04a46d..9426ee9306 100644 --- a/modules/nodes-delete-machine-unhealthy-etcd.adoc +++ b/modules/nodes-delete-machine-unhealthy-etcd.adoc @@ -21,7 +21,7 @@ $ oc get clusteroperator baremetal [source,terminal] ---- NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -baremetal 4.20.0 True False False 3d15h +baremetal 4.21.0 True False False 3d15h ---- . Save the `BareMetalHost` object of the affected node to a file for later use by running the following command: diff --git a/modules/nodes-nodes-viewing-listing.adoc b/modules/nodes-nodes-viewing-listing.adoc index ee9426d9f4..b6ab85850c 100644 --- a/modules/nodes-nodes-viewing-listing.adoc +++ b/modules/nodes-nodes-viewing-listing.adoc @@ -60,9 +60,9 @@ $ oc get nodes -o wide [source,terminal] ---- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME -master.example.com Ready master 171m v1.34.2 10.0.129.108 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.20.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev -node1.example.com Ready worker 72m v1.34.2 10.0.129.222 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.20.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev -node2.example.com Ready worker 164m v1.34.2 10.0.142.150 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.20.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev +master.example.com Ready master 171m v1.34.2 10.0.129.108 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev +node1.example.com Ready worker 72m v1.34.2 10.0.129.222 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev +node2.example.com Ready worker 164m v1.34.2
10.0.142.150 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev ---- * The following command lists information about a single node: diff --git a/modules/nw-dpu-operator-uninstall.adoc b/modules/nw-dpu-operator-uninstall.adoc index 49b65e197a..a0de7013c6 100644 --- a/modules/nw-dpu-operator-uninstall.adoc +++ b/modules/nw-dpu-operator-uninstall.adoc @@ -53,14 +53,14 @@ $ oc get csv -n openshift-dpu-operator [source,terminal] ---- NAME DISPLAY VERSION REPLACES PHASE -dpu-operator.v4.20.0-202503130333 DPU Operator 4.20.0-202503130333 Failed +dpu-operator.v4.21.0-202503130333 DPU Operator 4.21.0-202503130333 Failed ---- .. Delete the DPU Operator by running the following command: + [source,terminal] ---- -$ oc delete csv dpu-operator.v4.20.0-202503130333 -n openshift-dpu-operator +$ oc delete csv dpu-operator.v4.21.0-202503130333 -n openshift-dpu-operator ---- . Delete the namespace that was created for the DPU Operator by running the following command: @@ -78,4 +78,3 @@ $ oc delete namespace openshift-dpu-operator ---- $ oc get csv -n openshift-dpu-operator ---- - diff --git a/modules/oc-mirror-building-image-set-config-v2.adoc b/modules/oc-mirror-building-image-set-config-v2.adoc index b44a6d14ff..f4fb0e8ed8 100644 --- a/modules/oc-mirror-building-image-set-config-v2.adoc +++ b/modules/oc-mirror-building-image-set-config-v2.adoc @@ -26,12 +26,12 @@ apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: channels: - - name: stable-4.20 <1> - minVersion: 4.20.2 - maxVersion: 4.20.2 + - name: stable-4.21 <1> + minVersion: 4.21.2 + maxVersion: 4.21.2 graph: true <2> operators: - - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.20 <3> + - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.21 <3> packages: <4> - name: aws-load-balancer-operator - name: 3scale-operator diff --git a/modules/oc-mirror-image-set-config-examples.adoc
b/modules/oc-mirror-image-set-config-examples.adoc index 4c7b2d90b2..272aa0107f 100644 --- a/modules/oc-mirror-image-set-config-examples.adoc +++ b/modules/oc-mirror-image-set-config-examples.adoc @@ -298,12 +298,12 @@ mirror: architectures: - "multi" channels: - - name: stable-4.20 - minVersion: 4.20.0 - maxVersion: 4.20.1 + - name: stable-4.21 + minVersion: 4.21.0 + maxVersion: 4.21.1 type: ocp operators: - - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.20 + - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.21 packages: - name: multicluster-engine ---- \ No newline at end of file diff --git a/modules/oc-mirror-imageset-config-parameters-v2.adoc b/modules/oc-mirror-imageset-config-parameters-v2.adoc index e366c5a8e7..c4f679508b 100644 --- a/modules/oc-mirror-imageset-config-parameters-v2.adoc +++ b/modules/oc-mirror-imageset-config-parameters-v2.adoc @@ -343,7 +343,7 @@ The default value is `false` ---- mirror: operators: - - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.20 + - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.21 packages: - name: rhods-operator defaultChannel: fast diff --git a/modules/rosa-classic-cluster-terraform-file-creation.adoc b/modules/rosa-classic-cluster-terraform-file-creation.adoc index 349d5e28bc..2854b9ceb5 100644 --- a/modules/rosa-classic-cluster-terraform-file-creation.adoc +++ b/modules/rosa-classic-cluster-terraform-file-creation.adoc @@ -36,7 +36,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" version = ">= 4.20.0" } rhcs = { version = ">= 1.6.2" @@ -102,7 +102,7 @@ module "rosa-classic" { create_account_roles = true create_operator_roles = true # Optional: Configure a cluster administrator user \ <1> -# +# # Option 1: Default cluster-admin user # Create an administrator user (cluster-admin) and automatically # generate a password by uncommenting the following parameter: @@ -114,7 +114,7 @@ module "rosa-classic" { # by
uncommenting and editing the values of the following parameters: # admin_credentials_username = # admin_credentials_password = - + depends_on = [time_sleep.wait_60_seconds] } EOF diff --git a/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc b/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc index a5f7417672..453a038b45 100644 --- a/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc +++ b/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc @@ -43,10 +43,10 @@ ifdef::sts[] ---- I: Fetching account roles ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION -ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam:::role/ManagedOpenShift-ControlPlane-Role 4.20 -ManagedOpenShift-Installer-Role Installer arn:aws:iam:::role/ManagedOpenShift-Installer-Role 4.20 -ManagedOpenShift-Support-Role Support arn:aws:iam:::role/ManagedOpenShift-Support-Role 4.20 -ManagedOpenShift-Worker-Role Worker arn:aws:iam:::role/ManagedOpenShift-Worker-Role 4.20 +ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam:::role/ManagedOpenShift-ControlPlane-Role 4.21 +ManagedOpenShift-Installer-Role Installer arn:aws:iam:::role/ManagedOpenShift-Installer-Role 4.21 +ManagedOpenShift-Support-Role Support arn:aws:iam:::role/ManagedOpenShift-Support-Role 4.21 +ManagedOpenShift-Worker-Role Worker arn:aws:iam:::role/ManagedOpenShift-Worker-Role 4.21 ---- endif::sts[] ifdef::hcp[] @@ -55,9 +55,9 @@ ifdef::hcp[] ---- I: Fetching account roles ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION AWS Managed -ManagedOpenShift-HCP-ROSA-Installer-Role Installer arn:aws:iam:::role/ManagedOpenShift-HCP-ROSA-Installer-Role 4.20 Yes -ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam:::role/ManagedOpenShift-HCP-ROSA-Support-Role 4.20 Yes -ManagedOpenShift-HCP-ROSA-Worker-Role Worker arn:aws:iam:::role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.20 Yes +ManagedOpenShift-HCP-ROSA-Installer-Role Installer arn:aws:iam:::role/ManagedOpenShift-HCP-ROSA-Installer-Role 
4.21 Yes +ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam:::role/ManagedOpenShift-HCP-ROSA-Support-Role 4.21 Yes +ManagedOpenShift-HCP-ROSA-Worker-Role Worker arn:aws:iam:::role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.21 Yes ---- endif::hcp[] + diff --git a/modules/rosa-hcp-cluster-terraform-file-creation.adoc b/modules/rosa-hcp-cluster-terraform-file-creation.adoc index e834e861a7..562d19d9a7 100644 --- a/modules/rosa-hcp-cluster-terraform-file-creation.adoc +++ b/modules/rosa-hcp-cluster-terraform-file-creation.adoc @@ -35,7 +35,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" version = ">= 4.20.0" } rhcs = { version = ">= 1.6.3" @@ -98,7 +98,7 @@ module "rosa-hcp" { create_account_roles = true create_operator_roles = true # Optional: Configure a cluster administrator user \ <1> -# +# # Option 1: Default cluster-admin user # Create an administrator user (cluster-admin) and automatically # generate a password by uncommenting the following parameter: @@ -110,7 +110,7 @@ module "rosa-hcp" { # by uncommenting and editing the values of the following parameters: # admin_credentials_username = # admin_credentials_password = - + depends_on = [time_sleep.wait_60_seconds] } EOF diff --git a/modules/rosa-hcp-deleting-cluster.adoc b/modules/rosa-hcp-deleting-cluster.adoc index 2dc531b96a..fe899d413d 100644 --- a/modules/rosa-hcp-deleting-cluster.adoc +++ b/modules/rosa-hcp-deleting-cluster.adoc @@ -40,7 +40,7 @@ Display Name: test_cluster ID: <1> External ID: Control Plane: ROSA Service Hosted -OpenShift Version: 4.20.0 +OpenShift Version: 4.21.0 Channel Group: stable DNS: test_cluster.l3cn.p3.openshiftapps.com AWS Account: diff --git a/modules/rosa-hcp-sts-creating-a-cluster-external-auth-cluster-cli.adoc b/modules/rosa-hcp-sts-creating-a-cluster-external-auth-cluster-cli.adoc index 27353a20c6..1e4d01a304 100644 --- a/modules/rosa-hcp-sts-creating-a-cluster-external-auth-cluster-cli.adoc +++
b/modules/rosa-hcp-sts-creating-a-cluster-external-auth-cluster-cli.adoc @@ -56,7 +56,7 @@ Display Name: rosa-ext-test ID: External ID: Control Plane: ROSA Service Hosted -OpenShift Version: 4.20.0 +OpenShift Version: 4.21.0 Channel Group: stable DNS: AWS Account: diff --git a/modules/rosa-sts-account-wide-roles-and-policies.adoc b/modules/rosa-sts-account-wide-roles-and-policies.adoc index 6e18ff88c1..8a87561779 100644 --- a/modules/rosa-sts-account-wide-roles-and-policies.adoc +++ b/modules/rosa-sts-account-wide-roles-and-policies.adoc @@ -54,7 +54,7 @@ The account number present in the `sts_installer_trust_policy.json` and `sts_sup ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_trust_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_trust_policy.json[] ---- ==== @@ -63,7 +63,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_permission_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_permission_policy.json[] ---- ==== @@ -86,7 +86,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_controlplane_trust_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_controlplane_trust_policy.json[] ---- ==== @@ -95,7 +95,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- 
-include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_controlplane_permission_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_controlplane_permission_policy.json[] ---- ==== @@ -118,7 +118,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_worker_trust_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_worker_trust_policy.json[] ---- ==== @@ -127,7 +127,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_worker_permission_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_worker_permission_policy.json[] ---- ==== @@ -150,7 +150,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_support_trust_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_support_trust_policy.json[] ---- ==== @@ -159,7 +159,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_support_permission_policy.json[] 
+include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_support_permission_policy.json[] ---- ==== @@ -179,7 +179,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_ocm_trust_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_ocm_trust_policy.json[] ---- ==== @@ -199,7 +199,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_trust_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_trust_policy.json[] ---- ==== @@ -256,7 +256,7 @@ I: Attached policy 'arn:aws:iam::000000000000:policy/testrole-Worker-Role-Policy ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_ingress_operator_cloud_credentials_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_ingress_operator_cloud_credentials_policy.json[] ---- ==== @@ -276,7 +276,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json[] ---- ==== @@ -296,7 
+296,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_machine_api_aws_cloud_credentials_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_machine_api_aws_cloud_credentials_policy.json[] ---- ==== @@ -316,7 +316,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json[] ---- ==== @@ -336,7 +336,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs ==== [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_image_registry_installer_cloud_credentials_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_image_registry_installer_cloud_credentials_policy.json[] ---- ==== endif::openshift-rosa-hcp[] diff --git a/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc b/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc index 246111d099..b73b584d46 100644 --- a/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc +++ b/modules/rosa-sts-aws-requirements-attaching-boundary-policy.adoc @@ -40,7 +40,7 @@ This example procedure is applicable for an installer role and policy with the m The following example shows 
`sts_installer_core_permission_boundary_policy.json`: [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_core_permission_boundary_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_core_permission_boundary_policy.json[] ---- [IMPORTANT] @@ -61,7 +61,7 @@ To use the permission boundaries, you will need to prepare the permission bounda + [source,terminal] ---- -$ curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.20/sts_installer_core_permission_boundary_policy.json +$ curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.21/sts_installer_core_permission_boundary_policy.json ---- . Create the policy in AWS and gather its Amazon Resource Name (ARN) by entering the following command: @@ -124,12 +124,12 @@ The following example shows `sts_installer_privatelink_permission_boundary_polic + [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_privatelink_permission_boundary_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_privatelink_permission_boundary_policy.json[] ---- + The following example shows `sts_installer_vpc_permission_boundary_policy.json`: + [source,json] ---- -include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_vpc_permission_boundary_policy.json[] +include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_vpc_permission_boundary_policy.json[] ---- \ No newline at end of file diff --git 
a/modules/rosa-sts-creating-a-cluster-with-customizations-cli.adoc b/modules/rosa-sts-creating-a-cluster-with-customizations-cli.adoc index 2106a375cc..5179642d1a 100644 --- a/modules/rosa-sts-creating-a-cluster-with-customizations-cli.adoc +++ b/modules/rosa-sts-creating-a-cluster-with-customizations-cli.adoc @@ -258,7 +258,7 @@ I: Using arn:aws:iam:::role/ManagedOpenShift-Support-Role for th ? Disable Workload monitoring (optional): No I: Creating cluster '' I: To create this cluster again in the future, you can run: - rosa create cluster --cluster-name --role-arn arn:aws:iam:::role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam:::role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam:::role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam:::role/ManagedOpenShift-Worker-Role --operator-roles-prefix - --region us-east-1 --version 4.20.0 --additional-compute-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-infra-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-control-plane-security-group-ids sg-0e375ff0ec4a6cfa2 --replicas 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 <16> + rosa create cluster --cluster-name --role-arn arn:aws:iam:::role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam:::role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam:::role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam:::role/ManagedOpenShift-Worker-Role --operator-roles-prefix - --region us-east-1 --version 4.21.0 --additional-compute-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-infra-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-control-plane-security-group-ids sg-0e375ff0ec4a6cfa2 --replicas 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 <16> I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster '' has been created. 
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. @@ -267,7 +267,7 @@ I: Once the cluster is installed you will need to add an Identity Provider befor <1> Optional. When creating your cluster, you can customize the subdomain for your cluster on `*.openshiftapps.com` using the `--domain-prefix` flag. The value for this flag must be unique within your organization, cannot be longer than 15 characters, and cannot be changed after cluster creation. If the flag is not supplied, an autogenerated value is created that depends on the length of the cluster name. If the cluster name is fewer than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated to a 15 character string. <2> When creating your cluster, you can create a local administrator user (`cluster-admin`) for your cluster. This automatically configures an `htpasswd` identity provider for the `cluster-admin` user. <3> You can create a custom password for the `cluster-admin` user, or have the system generate a password. If you do not create a custom password, the generated password is displayed in the command-line output. If you specify a custom password, the password must be at least 14 characters (ASCII-standard) without any whitespace. When defined, the password is hashed and transported securely. -<4> When creating the cluster, the listed `OpenShift version` options include the major, minor, and patch versions, for example `4.20.0`. +<4> When creating the cluster, the listed `OpenShift version` options include the major, minor, and patch versions, for example `4.21.0`. <5> Optional: Specify `optional` to configure all EC2 instances to use both v1 and v2 endpoints of EC2 Instance Metadata Service (IMDS). This is the default value. Specify `required` to configure all EC2 instances to use IMDSv2 only. 
+ ifdef::openshift-rosa[] diff --git a/modules/running-insights-operator-gather-cli.adoc b/modules/running-insights-operator-gather-cli.adoc index d6270947bf..7153008a7c 100644 --- a/modules/running-insights-operator-gather-cli.adoc +++ b/modules/running-insights-operator-gather-cli.adoc @@ -7,7 +7,7 @@ [id="running-insights-operator-gather-openshift-cli_{context}"] = Gathering data on demand with the {insights-operator} from the OpenShift CLI -You can run a custom {insights-operator} gather operation on-demand from the {product-title} command-line interface (CLI). +You can run a custom {insights-operator} gather operation on demand from the {product-title} command-line interface (CLI). An on-demand `DataGather` operation is useful for one-off data collections that require a different configuration from the periodic data gathering (`InsightsDataGather`) specification. Use the following procedure to create a `DataGather` custom resource (CR), and then run the data gather operation on demand from the CLI. @@ -48,7 +48,7 @@ spec: apiVersion: insights.openshift.io/v1alpha2 kind: DataGather metadata: - name: + name: spec: # Gatherers configuration gatherers: @@ -82,7 +82,7 @@ spec: + [IMPORTANT] ==== -Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims]. +Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims].
==== + * To enable data obfuscation, define the `dataPolicy` key and required values. For example, to obfuscate IP addresses and workload names, add the following configuration: @@ -92,7 +92,7 @@ Ensure that the volume name specified matches the existing `PersistentVolumeClai apiVersion: insights.openshift.io/v1alpha2 kind: DataGather metadata: - name: + name: spec: dataPolicy: - ObfuscateNetworking diff --git a/modules/running-insights-operator-gather-web-console.adoc b/modules/running-insights-operator-gather-web-console.adoc index 8c8943cda5..5c7dc89a6b 100644 --- a/modules/running-insights-operator-gather-web-console.adoc +++ b/modules/running-insights-operator-gather-web-console.adoc @@ -52,7 +52,7 @@ spec: apiVersion: insights.openshift.io/v1alpha2 kind: DataGather metadata: - name: + name: spec: # Gatherers configuration gatherers: @@ -86,7 +86,7 @@ spec: + [IMPORTANT] ==== -Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims]. +Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims]. ==== * To enable data obfuscation, define the `dataPolicy` key and required values. 
For example, to obfuscate IP addresses and workload names, add the following configuration: + @@ -95,7 +95,7 @@ Ensure that the volume name specified matches the existing `PersistentVolumeClai apiVersion: insights.openshift.io/v1alpha2 kind: DataGather metadata: - name: + name: spec: dataPolicy: - ObfuscateNetworking diff --git a/modules/sd-understanding-process-id-limits.adoc b/modules/sd-understanding-process-id-limits.adoc index ad88765d09..a1dbf3ae66 100644 --- a/modules/sd-understanding-process-id-limits.adoc +++ b/modules/sd-understanding-process-id-limits.adoc @@ -43,7 +43,7 @@ endif::openshift-enterprise,openshift-origin[] ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * Maximum number of PIDs per node. + -The default value depends on link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.20/html-single/nodes/index#nodes-nodes-resources-configuring[node resources]. In {product-title}, this value is controlled by the link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[`--system-reserved`] parameter, which reserves PIDs on each node based on the total resources of the node. +The default value depends on link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/nodes/index#nodes-nodes-resources-configuring[node resources]. In {product-title}, this value is controlled by the link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[`--system-reserved`] parameter, which reserves PIDs on each node based on the total resources of the node. endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] When a pod exceeds the allowed maximum number of PIDs per pod, the pod might stop functioning correctly and might be evicted from the node. 
See link:https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals-and-thresholds[the Kubernetes documentation for eviction signals and thresholds] for more information. diff --git a/modules/serverless-quarkus-template.adoc b/modules/serverless-quarkus-template.adoc index 4b51c4696f..fd17caf223 100644 --- a/modules/serverless-quarkus-template.adoc +++ b/modules/serverless-quarkus-template.adoc @@ -45,7 +45,7 @@ Both `http` and `event` trigger functions have the same template structure: junit junit - 4.20 + 4.13.2 test diff --git a/modules/storage-persistent-storage-selinuxChangePolicy-testing-mountoption-RWO-RWX.adoc b/modules/storage-persistent-storage-selinuxChangePolicy-testing-mountoption-RWO-RWX.adoc index bc95a323c4..0e6009fc4a 100644 --- a/modules/storage-persistent-storage-selinuxChangePolicy-testing-mountoption-RWO-RWX.adoc +++ b/modules/storage-persistent-storage-selinuxChangePolicy-testing-mountoption-RWO-RWX.adoc @@ -7,7 +7,7 @@ [id="using_selinuxChangePolicy_testing-mountoption-rwo-rwx_{context}"] = Testing the RWO and RWX and SELinux mount option feature -In {product-title} 4.20, you can evaluate the mount option feature for RWO and RWX volumes as a Technology Preview feature. +In {product-title} 4.21, you can evaluate the mount option feature for RWO and RWX volumes as a Technology Preview feature.
:FeatureName: RWO/RWX SELinux mount include::snippets/technology-preview.adoc[] diff --git a/modules/support-log-gather-install-cli.adoc b/modules/support-log-gather-install-cli.adoc index 306dbbb52b..6cf978ee16 100644 --- a/modules/support-log-gather-install-cli.adoc +++ b/modules/support-log-gather-install-cli.adoc @@ -115,5 +115,5 @@ $ oc get csv -n must-gather-operator [source,terminal] ---- NAME DISPLAY VERSION REPLACES PHASE -support-log-gather-operator.v4.20.0 support log gather 4.20.0 Succeeded +support-log-gather-operator.v4.21.0 support log gather 4.21.0 Succeeded ---- \ No newline at end of file diff --git a/modules/update-upgrading-cli.adoc b/modules/update-upgrading-cli.adoc index 42e10c837f..31caa68974 100644 --- a/modules/update-upgrading-cli.adoc +++ b/modules/update-upgrading-cli.adoc @@ -58,13 +58,13 @@ ifdef::openshift-origin[] Upstream update service: https://amd64.origin.releases.ci.openshift.org/graph Channel: stable-scos-4 -Updates to 4.20: +Updates to 4.21: VERSION ISSUES - 4.20.0-okd-scos.ec.14 no known issues relevant to this cluster + 4.21.0-okd-scos.ec.14 no known issues relevant to this cluster -Updates to 4.19: +Updates to 4.20: VERSION ISSUES - 4.19.0-okd-scos.17 no known issues relevant to this cluster + 4.20.0-okd-scos.17 no known issues relevant to this cluster ---- endif::openshift-origin[] + diff --git a/modules/using-cluster-compare-telco-core.adoc b/modules/using-cluster-compare-telco-core.adoc index ee07631358..10228f37bb 100644 --- a/modules/using-cluster-compare-telco-core.adoc +++ b/modules/using-cluster-compare-telco-core.adoc @@ -39,7 +39,7 @@ $ mkdir -p ./out + [source,terminal] ---- -$ podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.20 | base64 -d | tar xv -C out +$ podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.21 | base64 -d | tar xv -C out ---- + You can view the reference configuration in the `out/telco-core-rds/configuration/reference-crs-kube-compare` 
directory by running the following command: diff --git a/modules/verifying-cluster-install-oci-agent-based.adoc b/modules/verifying-cluster-install-oci-agent-based.adoc index 13b1df6539..433d428420 100644 --- a/modules/verifying-cluster-install-oci-agent-based.adoc +++ b/modules/verifying-cluster-install-oci-agent-based.adoc @@ -100,9 +100,9 @@ $ oc get co [source,terminal] ---- NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE -authentication 4.20.0-0 True False False 6m18s -baremetal 4.20.0-0 True False False 2m42s -network 4.20.0-0 True True False 5m58s Progressing: … +authentication 4.21.0-0 True False False 6m18s +baremetal 4.21.0-0 True False False 2m42s +network 4.21.0-0 True True False 5m58s Progressing: … … ---- diff --git a/modules/virt-checking-storage-configuration.adoc b/modules/virt-checking-storage-configuration.adoc index aaa0702f04..05e82c1b71 100644 --- a/modules/virt-checking-storage-configuration.adoc +++ b/modules/virt-checking-storage-configuration.adoc @@ -169,11 +169,11 @@ data: status.failureReason: "" # <2> status.startTimestamp: "2023-07-31T13:14:38Z" # <3> status.completionTimestamp: "2023-07-31T13:19:41Z" # <4> - status.result.cnvVersion: 4.20.2 # <5> + status.result.cnvVersion: 4.21.2 # <5> status.result.defaultStorageClass: trident-nfs # <6> status.result.goldenImagesNoDataSource: # <7> status.result.goldenImagesNotUpToDate: # <8> - status.result.ocpVersion: 4.20.0 # <9> + status.result.ocpVersion: 4.21.0 # <9> status.result.pvcBound: "true" # <10> status.result.storageProfileMissingVolumeSnapshotClass: # <11> status.result.storageProfilesWithEmptyClaimPropertySets: # <12> diff --git a/modules/virt-updating-multiple-vms.adoc b/modules/virt-updating-multiple-vms.adoc index a5146e5b76..975b5c9c5a 100644 --- a/modules/virt-updating-multiple-vms.adoc +++ b/modules/virt-updating-multiple-vms.adoc @@ -54,7 +54,7 @@ spec: spec: containers: - name: kubevirt-api-lifecycle-automation - image:
quay.io/openshift-virtualization/kubevirt-api-lifecycle-automation:v4.20 <1> + image: quay.io/openshift-virtualization/kubevirt-api-lifecycle-automation:v4.21 <1> imagePullPolicy: Always env: - name: MACHINE_TYPE_GLOB <2> diff --git a/modules/zero-trust-manager-create-route-oidc.adoc b/modules/zero-trust-manager-create-route-oidc.adoc index fd921cc482..88ef9b2fe5 100644 --- a/modules/zero-trust-manager-create-route-oidc.adoc +++ b/modules/zero-trust-manager-create-route-oidc.adoc @@ -14,9 +14,9 @@ The managed route uses the External Route Certificate feature to set the `tls.ex * You have deployed the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and the SPIRE OIDC Discovery Provider operands in the cluster. -* You have installed the {cert-manager-operator}. For more information, link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#cert-manager-operator-install[Installing the cert-manager Operator for Red{nbsp}Hat OpenShift]. +* You have installed the {cert-manager-operator}. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#cert-manager-operator-install[Installing the cert-manager Operator for Red{nbsp}Hat OpenShift]. -* You have created a `ClusterIssuer` or `Issuer` configured with a publicly trusted CA service. For example, an Automated Certificate Management Environment (ACME) type `Issuer` with the "Let's Encrypt ACME" service. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#cert-manager-operator-issuer-acme[Configuring an ACME issuer] +* You have created a `ClusterIssuer` or `Issuer` configured with a publicly trusted CA service. For example, an Automated Certificate Management Environment (ACME) type `Issuer` with the "Let's Encrypt ACME" service.
For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#cert-manager-operator-issuer-acme[Configuring an ACME issuer]. .Procedure @@ -81,4 +81,3 @@ $ curl https://$JWT_ISSUER_ENDPOINT/.well-known/openid-configuration ] }% ---- - diff --git a/modules/zero-trust-manager-install-vault-oidc.adoc b/modules/zero-trust-manager-install-vault-oidc.adoc index 52777e8732..1413190a29 100644 --- a/modules/zero-trust-manager-install-vault-oidc.adoc +++ b/modules/zero-trust-manager-install-vault-oidc.adoc @@ -11,7 +11,7 @@ Before Vault is used as an OIDC, you need to install Vault. .Prerequisites -* Configure a route. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/ingress_and_load_balancing/configuring-routes#route-configuration[Route configuration] +* Configure a route. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/ingress_and_load_balancing/routes#nw-configuring-routes[Configuring routes]. * Helm is installed. @@ -114,7 +114,3 @@ $ curl -s $VAULT_ADDR/v1/sys/health | jq "cluster_id": "5e6f7a8b-9c0d-1e2f-3a4b-5c6d7e8f9a0b" } ----
+* You have installed the {cert-manager-operator}. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift[cert-manager Operator for Red Hat OpenShift]. * Your federation endpoints must be publicly accessible for certificate validation. @@ -556,6 +556,3 @@ spire-server-0 2/2 Running 0 10m ---- . Optional: Test cross-cluster workload authentication by deploying workloads with SPIFFE identities on different clusters and verifying they can authenticate to each other using the federated trust. - - - diff --git a/modules/ztp-image-based-upgrade-procedure-rollback.adoc b/modules/ztp-image-based-upgrade-procedure-rollback.adoc index 53979b05cc..0c3aef5432 100644 --- a/modules/ztp-image-based-upgrade-procedure-rollback.adoc +++ b/modules/ztp-image-based-upgrade-procedure-rollback.adoc @@ -34,8 +34,8 @@ spec: - spoke4 ibuSpec: seedImageRef: - image: quay.io/seed/image:4.20.0-rc.1 - version: 4.20.0-rc.1 + image: quay.io/seed/image:4.21.0-rc.1 + version: 4.21.0-rc.1 pullSecretRef: name: "" extraManifests: diff --git a/modules/ztp-sno-accelerated-ztp.adoc b/modules/ztp-sno-accelerated-ztp.adoc index f4c57c4c8e..09567b6803 100644 --- a/modules/ztp-sno-accelerated-ztp.adoc +++ b/modules/ztp-sno-accelerated-ztp.adoc @@ -34,7 +34,7 @@ spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" - clusterImageSetNameRef: "openshift-4.20" + clusterImageSetNameRef: "openshift-4.21" sshPublicKey: "ssh-rsa AAAA..." clusters: # ... 
diff --git a/modules/ztp-sno-siteconfig-config-reference.adoc b/modules/ztp-sno-siteconfig-config-reference.adoc index fb08dc0b45..e91c6b2260 100644 --- a/modules/ztp-sno-siteconfig-config-reference.adoc +++ b/modules/ztp-sno-siteconfig-config-reference.adoc @@ -49,7 +49,7 @@ a|Configure this field to enable disk encryption with Trusted Platform Module (T [NOTE] ==== -Configuring disk encryption by using the `diskEncryption` field in the `SiteConfig` CR is a Technology Preview feature in {product-title} 4.20. +Configuring disk encryption by using the `diskEncryption` field in the `SiteConfig` CR is a Technology Preview feature in {product-title} 4.21. ==== |`spec.clusters.diskEncryption.type` diff --git a/rosa_learning/deploying_application_workshop/learning-deploying-application-s2i-deployments.adoc b/rosa_learning/deploying_application_workshop/learning-deploying-application-s2i-deployments.adoc index 3da872358b..3635eb1bcc 100644 --- a/rosa_learning/deploying_application_workshop/learning-deploying-application-s2i-deployments.adoc +++ b/rosa_learning/deploying_application_workshop/learning-deploying-application-s2i-deployments.adoc @@ -6,7 +6,7 @@ include::_attributes/attributes-openshift-dedicated.adoc[] :context: cloud-experts-deploying-application-s2i-deployments toc::[] [role="_abstract"] -The integrated Source-to-Image (S2I) builder is one method to deploy applications in {OCP-short}. The S2I is a tool for building reproducible, Docker-formatted container images. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/overview/getting-started-openshift-common-terms_openshift-editions[Glossary of common terms for {OCP}]. You must have a deployed {product-title} cluster before starting this process. +The integrated Source-to-Image (S2I) builder is one method to deploy applications in {OCP-short}. S2I is a tool for building reproducible, Docker-formatted container images.
For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/overview/getting-started-openshift-common-terms_openshift-editions[Glossary of common terms for {OCP}]. You must have a deployed {product-title} cluster before starting this process. include::modules/learning-deploying-application-s2i-deployments-retrieving-login.adoc[leveloffset=+1] include::modules/learning-deploying-application-s2i-deployments-create-new-project.adoc[leveloffset=+1] diff --git a/security/external_secrets_operator/external-secrets-operator-release-notes.adoc b/security/external_secrets_operator/external-secrets-operator-release-notes.adoc index c205b32ec2..4969339863 100644 --- a/security/external_secrets_operator/external-secrets-operator-release-notes.adoc +++ b/security/external_secrets_operator/external-secrets-operator-release-notes.adoc @@ -39,20 +39,20 @@ Version 1.0.0 of the {external-secrets-operator} is based on the upstream extern With this release, the Operator API, `externalsecrets.operator.openshift.io`, has been renamed to `externalsecretsconfigs.operator.openshift.io` to avoid confusion with the external-secrets provided API that has the same name, but a different purpose. The external-secrets provided API has also been restructured and new features have been added. -For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-operator-api[External Secrets Operator for Red Hat OpenShift APIs]. +For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-operator-api[External Secrets Operator for Red Hat OpenShift APIs]. *Support to collect metrics of {external-secrets-operator-short}* With this release, the {external-secrets-operator} supports collecting metrics for both the Operator and operands.
This is optional and must be enabled. -For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-monitoring[Monitoring the External Secrets Operator for Red Hat OpenShift]. +For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-monitoring[Monitoring the External Secrets Operator for Red Hat OpenShift]. *Support to configure proxy for {external-secrets-operator-short}* With this release, the {external-secrets-operator} supports configuring proxy for both the Operator and operand. -For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-operator-proxy[About the egress proxy for the External Secrets Operator for Red Hat OpenShift]. +For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-operator-proxy[About the egress proxy for the External Secrets Operator for Red Hat OpenShift]. *Root filesystem is read-only for {external-secrets-operator} containers* @@ -62,7 +62,7 @@ With this release, to improve security, the {external-secrets-operator} and all With this release, {external-secrets-operator} includes pre-defined `NetworkPolicy` resources designed for enhanced security by governing ingress and egress traffic for operand components. These policies cover essential internal traffic, such as ingress to the metrics and webhook servers, and egress to the OpenShift API server and DNS server. Note that deployment of the `NetworkPolicy` is enabled by default and egress allow policies must be explicitly defined in the `ExternalSecretsConfig` custom resource for the `external-secrets` component to fetch secrets from external providers. 
-For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-operator-config-net-policy[Configuring network policy for the operand]. +For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-operator-config-net-policy[Configuring network policy for the operand]. [id="external-secrets-operator-release-notes-0-1-0_{context}"] diff --git a/security/zero_trust_workload_identity_manager/zero-trust-manager-release-notes.adoc b/security/zero_trust_workload_identity_manager/zero-trust-manager-release-notes.adoc index c88b684547..ecf3ca0932 100644 --- a/security/zero_trust_workload_identity_manager/zero-trust-manager-release-notes.adoc +++ b/security/zero_trust_workload_identity_manager/zero-trust-manager-release-notes.adoc @@ -243,7 +243,7 @@ Support for the managed OIDC Discovery Provider Route:: * The `managedRoute` field is a boolean and is set to `true` by default. If set to `false`, the Operator stops managing the route and the existing route is not deleted automatically. If set back to `true`, the Operator resumes managing the route. If a route does not exist, the Operator creates a new one. If a route already exists, the Operator overrides the user configuration if a conflict exists. -* The `externalSecretRef` references an externally managed Secret that has the TLS certificate for the `oidc-discovery-provider` Route host. When provided, this populates the route's `.Spec.TLS.ExternalCertificate` field.
For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/ingress_and_load_balancing/index#nw-ingress-route-secret-load-external-cert_secured-routes[Creating a route with externally managed certificate] +* The `externalSecretRef` references an externally managed Secret that has the TLS certificate for the `oidc-discovery-provider` Route host. When provided, this populates the route's `.Spec.TLS.ExternalCertificate` field. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/ingress_and_load_balancing/index#nw-ingress-route-secret-load-external-cert_secured-routes[Creating a route with externally managed certificate]. Enabling the custom Certificate Authority Time-To-Live for the SPIRE bundle:: diff --git a/updating/preparing_for_updates/updating-cluster-prepare.adoc b/updating/preparing_for_updates/updating-cluster-prepare.adoc index d2f85a7de7..3b141b1c28 100644 --- a/updating/preparing_for_updates/updating-cluster-prepare.adoc +++ b/updating/preparing_for_updates/updating-cluster-prepare.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="updating-cluster-prepare"] -= Preparing to update to {product-title} 4.20 += Preparing to update to {product-title} 4.21 include::_attributes/common-attributes.adoc[] :context: updating-cluster-prepare