OSDOCS-15085 Cross-check and update version number references and package names for 4.21
Commit b031f1f82b (parent e048ee9852), committed by openshift-cherrypick-robot.
@@ -1,7 +1,7 @@
 :_mod-docs-content-type: ASSEMBLY
 include::_attributes/common-attributes.adoc[]
 [id="monitoring-pending-workloads-install-kueue"]
 = Monitoring pending workloads
 :context: monitoring-pending-workloads

 toc::[]
@@ -25,7 +25,7 @@ include::modules/kueue-providing-user-permissions.adoc[leveloffset=+1]
 [role="_additional-resources"]
 .Additional resources

-* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/ai_workloads/red-hat-build-of-kueue#rbac-permissions[Configuring role-based permissions]
+* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/ai_workloads/red-hat-build-of-kueue#rbac-permissions[Configuring role-based permissions]

 include::modules/kueue-monitoring-pending-workloads-on-demand.adoc[leveloffset=+1]
@@ -35,4 +35,3 @@ include::modules/kueue-viewing-pending-workloads-clusterqueue.adoc[leveloffset=+
 include::modules/kueue-viewing-pending-workloads-localqueue.adoc[leveloffset=+2]

 include::modules/kueue-modifying-monitoring-settings.adoc[leveloffset=+1]
-
@@ -1,6 +1,6 @@
 :_mod-docs-content-type: ASSEMBLY
 [id="das-about-dynamic-accelerator-slicer-operator"]
 = Dynamic Accelerator Slicer (DAS) Operator
 include::_attributes/common-attributes.adoc[]
 :context: das-about-dynamic-accelerator-slicer-operator
@@ -10,7 +10,7 @@ toc::[]

 include::snippets/technology-preview.adoc[]

 The Dynamic Accelerator Slicer (DAS) Operator allows you to dynamically slice GPU accelerators in {product-title}, instead of relying on statically sliced GPUs that are defined when the node boots. This lets you slice GPUs based on specific workload demands, ensuring efficient resource utilization.

 Dynamic slicing is useful if you do not know in advance all the accelerator partitions that are needed on every node in the cluster.
@@ -45,7 +45,7 @@ include::modules/das-operator-installing-web-console.adoc[leveloffset=+2]
 ** xref:../hardware_enablement/psap-node-feature-discovery-operator.adoc#psap-node-feature-discovery-operator[Node Feature Discovery (NFD) Operator]
 ** link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html[NVIDIA GPU Operator]

-** link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-web-console_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR]
+** link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-web-console_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR]

 //Installing the Dynamic Accelerator Slicer Operator using the CLI
 include::modules/das-operator-installing-cli.adoc[leveloffset=+2]
@@ -55,7 +55,7 @@ include::modules/das-operator-installing-cli.adoc[leveloffset=+2]
 * xref:../security/cert_manager_operator/cert-manager-operator-install.adoc#cert-manager-operator-install[{cert-manager-operator}]
 * xref:../hardware_enablement/psap-node-feature-discovery-operator.adoc#psap-node-feature-discovery-operator[Node Feature Discovery (NFD) Operator]
 * link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html[NVIDIA GPU Operator]
-* link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-cli_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR]
+* link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator#creating-nfd-cr-cli_psap-node-feature-discovery-operator[NodeFeatureDiscovery CR]

 //Uninstalling the Dynamic Accelerator Slicer Operator
 include::modules/das-operator-uninstalling.adoc[leveloffset=+1]
@@ -77,7 +77,3 @@ include::modules/das-operator-troubleshooting.adoc[leveloffset=+1]
 * link:https://github.com/kubernetes/kubernetes/issues/128043[Kubernetes issue #128043]
 * xref:../hardware_enablement/psap-node-feature-discovery-operator.adoc#psap-node-feature-discovery-operator[Node Feature Discovery Operator]
 * link:https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/troubleshooting.html[NVIDIA GPU Operator troubleshooting]
-
-
-
-
@@ -27,7 +27,7 @@ $ ./openshift-install version
 .Example output
 [source,terminal]
 ----
-./openshift-install 4.20.0
+./openshift-install 4.21.0
 built from commit abc123def456
 release image quay.io/openshift-release-dev/ocp-release@sha256:123abc456def789ghi012jkl345mno678pqr901stu234vwx567yz0
 release architecture amd64
@@ -39,7 +39,7 @@ $ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A
 [source,terminal]
 ----
 NAMESPACE   NAME                     CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
-clusters    democluster-us-east-1a   democluster   1               1               False         False        4.20.0    False             False
+clusters    democluster-us-east-1a   democluster   1               1               False         False        4.21.0    False             False
 ----
 +
 The `node-pool-name` is the `NAME` field in the output. In this example, the `node-pool-name` is `democluster-us-east-1a`.
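When this lookup is scripted, the `NAME` field can be captured into a shell variable instead of being copied by hand; a minimal sketch, assuming the first listed node pool is the one you want:

[source,terminal]
----
$ NODE_POOL_NAME=$(oc --kubeconfig="$MGMT_KUBECONFIG" get np -A -o jsonpath='{.items[0].metadata.name}')
----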
@@ -60,9 +60,9 @@ $ ./openshift-install version
 .Example output for a shared registry binary
 [source,terminal,subs="quotes"]
 ----
-./openshift-install 4.20.0
+./openshift-install 4.21.0
 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca
-release image registry.ci.openshift.org/origin/release:4.20ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363
+release image registry.ci.openshift.org/origin/release:4.21ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363
 release architecture amd64
 ----
 ====
@@ -131,7 +131,7 @@ $ oc get co kube-apiserver
 [source,terminal]
 ----
 NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
-kube-apiserver   4.20.0    True        True          False      85m     NodeInstallerProgressing: 2 nodes are at revision 8; 1 node is at revision 10
+kube-apiserver   4.21.0    True        True          False      85m     NodeInstallerProgressing: 2 nodes are at revision 8; 1 node is at revision 10
 ----
 +
 The message in the preceding example shows that one node has progressed to the new revision and two nodes have not yet updated. It can take 20 minutes or more to roll out the new revision to all nodes, depending on the size of your cluster.
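To follow the rollout without rerunning the command, the ClusterOperator can be watched until `PROGRESSING` returns to `False`; a minimal sketch:

[source,terminal]
----
$ oc get co kube-apiserver -w
----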
@@ -42,7 +42,7 @@ $ oc get co kube-apiserver
 [source,terminal]
 ----
 NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
-kube-apiserver   4.20.0    True        True          False      85m     NodeInstallerProgressing: 2 nodes are at revision 12; 1 node is at revision 14
+kube-apiserver   4.21.0    True        True          False      85m     NodeInstallerProgressing: 2 nodes are at revision 12; 1 node is at revision 14
 ----
 +
 The message in the preceding example shows that one node has progressed to the new revision and two nodes have not yet updated. It can take 20 minutes or more to roll out the new revision to all nodes, depending on the size of your cluster.
@@ -64,7 +64,7 @@ $ hcp create cluster aws \
 <5> Specify the public hosted zone that the service consumer owns, for example, `service-consumer-domain.com`.
 <6> Specify the node replica count, for example, `2`.
 <7> Specify the path to your pull secret file.
-<8> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`.
+<8> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`.
 <9> Specify the public hosted zone that the service provider owns, for example, `service-provider-domain.com`.
 <10> Set as `PublicAndPrivate`. You can use external DNS with `Public` or `PublicAndPrivate` configurations only.
 <11> Specify the path to your {aws-short} STS credentials file, for example, `/home/user/sts-creds/sts-creds.json`.
@@ -52,4 +52,4 @@ $ hcp create cluster agent \
 <6> Specify the `icsp.yaml` file that defines ICSP and your mirror registries.
 <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.
 <8> Specify your hosted cluster namespace.
-<9> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest".
+<9> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest".
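For reference, a hedged sketch of extracting a release image digest with `oc` and `jq` (assuming the JSON output of `oc adm release info` carries a top-level `digest` field; the pull spec is illustrative):

[source,terminal]
----
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-multi -o json | jq -r '.digest'
----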
@@ -7,7 +7,7 @@
 [id="hcp-bm-hc_{context}"]
 = Creating a hosted cluster by using the CLI

 On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to {mce-short} and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes.

 .Prerequisites
@@ -17,7 +17,7 @@ On bare-metal infrastructure, you can create or import a hosted cluster. After y

 - You cannot create a hosted cluster in the namespace of a {mce-short} managed cluster.

 - For best security and management practices, create a hosted cluster separate from other hosted clusters.

 - Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).
@@ -32,10 +32,10 @@ $ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
 +
 [source,terminal]
 ----
 $ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
 ----
 +
 [source,terminal]
 ----
 $ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
 ----
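When several nodes need zone labels, a short loop avoids repeating the command; a minimal sketch, assuming node names of the form `compute-node-<n>`:

[source,terminal]
----
$ for i in 1 2 3; do oc label node "compute-node-${i}" "topology.kubernetes.io/zone=zone${i}"; done
----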
@@ -83,7 +83,7 @@ $ hcp create cluster agent \
 <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.
 <8> Specify your hosted cluster namespace.
 <9> Specify the availability policy for the hosted control plane components. Supported options are `SingleReplica` and `HighlyAvailable`. The default value is `HighlyAvailable`.
-<10> Specify the supported {product-title} version that you want to use, such as `4.20.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_.
+<10> Specify the supported {product-title} version that you want to use, such as `4.21.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_.
 <11> Specify the node pool replica count, such as `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, you do not create node pools.
 <12> After the `--ssh-key` flag, specify the path to the SSH key, such as `user/.ssh/id_rsa`.
@@ -266,7 +266,7 @@ apiVersion: v1
 kind: Service
 metadata:
   annotations:
     metallb.universe.tf/address-pool: metallb
   name: metallb-ingress
   namespace: openshift-ingress
 spec:
@@ -206,7 +206,7 @@ clusteroperator.config.openshift.io/console
 clusteroperator.config.openshift.io/ingress   4.x.y   True   False   False   53m
 ----
 +
-Replace `<4.x.y>` with the supported {product-title} version that you want to use, for example, `4.20.0-multi`.
+Replace `<4.x.y>` with the supported {product-title} version that you want to use, for example, `4.21.0-multi`.

 ifeval::["{context}" == "hcp-manage-non-bm"]
@@ -37,5 +37,5 @@ $ hcp create cluster aws \
 <4> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
 <5> Specify the path to your AWS STS credentials file, for example, `/home/user/sts-creds/sts-creds.json`.
 <6> Specify the AWS region name, for example, `us-east-1`.
-<7> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest".
+<7> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see "Extracting the {product-title} release image digest".
 <8> Specify the Amazon Resource Name (ARN), for example, `arn:aws:iam::820196288204:role/myrole`.
@@ -27,7 +27,7 @@ $ hcp create cluster openstack \
   --openstack-node-flavor m1.xlarge \
   --base-domain example.com \
   --pull-secret /path/to/pull-secret.json \
-  --release-image quay.io/openshift-release-dev/ocp-release:4.20.0-x86_64 \
+  --release-image quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 \
   --node-pool-replicas 3 \
   --etcd-storage-class lvms-etcd-class
 ----
@@ -44,5 +44,5 @@ $ hcp create cluster agent \
 <4> Replace the name with your base domain, for example, `example.com`.
 <5> Replace the etcd storage class name, for example, `lvm-storageclass`.
 <6> Replace the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.
-<7> Replace with the supported {product-title} version that you want to use, for example, `4.20.0-multi`.
+<7> Replace with the supported {product-title} version that you want to use, for example, `4.21.0-multi`.
 <8> Replace with the path to the certificate authority of your mirror registry.
@@ -52,4 +52,4 @@ $ hcp create cluster agent \
 <6> Specify the `icsp.yaml` file that defines ICSP and your mirror registries.
 <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.
 <8> Specify your hosted cluster namespace.
-<9> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_.
+<9> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`. If you are using a disconnected environment, replace `<ocp_release_image>` with the digest image. To extract the {product-title} release image digest, see _Extracting the {product-title} release image digest_.
@@ -54,7 +54,7 @@ $ hcp create cluster agent \
 <7> Specify the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.
 <8> Specify your hosted cluster namespace.
 <9> Specify the availability policy for the hosted control plane components. Supported options are `SingleReplica` and `HighlyAvailable`. The default value is `HighlyAvailable`.
-<10> Specify the supported {product-title} version that you want to use, for example, `4.20.0-multi`.
+<10> Specify the supported {product-title} version that you want to use, for example, `4.21.0-multi`.
 <11> Specify the node pool replica count, for example, `3`. You must specify the replica count as `0` or greater to create the same number of replicas. Otherwise, no node pools are created.

 .Verification
@@ -123,7 +123,7 @@ $ oc get np -n clusters
 [source,terminal]
 ----
 NAMESPACE   NAME    CLUSTER     DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION                              UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
-clusters    cb-np   cb-np-hcp   1               1               False         False        4.20.0-0.nightly-2025-06-05-224220   False             False
+clusters    cb-np   cb-np-hcp   1               1               False         False        4.21.0-0.nightly-2025-06-05-224220   False             False
 ----

 . Verify that your new compute nodes are created in the hosted cluster by running the following command:
@@ -57,14 +57,14 @@ $ oc get clusteroperators
 [source,terminal]
 ----
 NAME                                       VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
-authentication                             4.20.0-0   True        False         False      51m
-baremetal                                  4.20.0-0   True        False         False      72m
-cloud-controller-manager                   4.20.0-0   True        False         False      75m
-cloud-credential                           4.20.0-0   True        False         False      77m
-cluster-api                                4.20.0-0   True        False         False      42m
-cluster-autoscaler                         4.20.0-0   True        False         False      72m
-config-operator                            4.20.0-0   True        False         False      72m
-console                                    4.20.0-0   True        False         False      55m
+authentication                             4.21.0-0   True        False         False      51m
+baremetal                                  4.21.0-0   True        False         False      72m
+cloud-controller-manager                   4.21.0-0   True        False         False      75m
+cloud-credential                           4.21.0-0   True        False         False      77m
+cluster-api                                4.21.0-0   True        False         False      42m
+cluster-autoscaler                         4.21.0-0   True        False         False      72m
+config-operator                            4.21.0-0   True        False         False      72m
+console                                    4.21.0-0   True        False         False      55m
 ...
 ----
 +
@@ -104,14 +104,14 @@ $ oc get clusteroperators
 [source,terminal]
 ----
 NAME                                       VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
-authentication                             4.20.0-0   True        False         False      51m
-baremetal                                  4.20.0-0   True        False         False      72m
-cloud-controller-manager                   4.20.0-0   True        False         False      75m
-cloud-credential                           4.20.0-0   True        False         False      77m
-cluster-api                                4.20.0-0   True        False         False      42m
-cluster-autoscaler                         4.20.0-0   True        False         False      72m
-config-operator                            4.20.0-0   True        False         False      72m
-console                                    4.20.0-0   True        False         False      55m
+authentication                             4.21.0-0   True        False         False      51m
+baremetal                                  4.21.0-0   True        False         False      72m
+cloud-controller-manager                   4.21.0-0   True        False         False      75m
+cloud-credential                           4.21.0-0   True        False         False      77m
+cluster-api                                4.21.0-0   True        False         False      42m
+cluster-autoscaler                         4.21.0-0   True        False         False      72m
+config-operator                            4.21.0-0   True        False         False      72m
+console                                    4.21.0-0   True        False         False      55m
 ...
 ----
 +
@@ -43,7 +43,7 @@ You can host different versions of control planes on the same management cluster
 ----
 apiVersion: v1
 data:
-  supported-versions: '{"versions":["4.20"]}'
+  supported-versions: '{"versions":["4.21"]}'
 kind: ConfigMap
 metadata:
   labels:
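To confirm which versions a management cluster advertises, the ConfigMap can be read back; a minimal sketch, assuming the HyperShift Operator's default `hypershift` namespace:

[source,terminal]
----
$ oc get configmap supported-versions -n hypershift -o jsonpath='{.data.supported-versions}'
----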
@@ -59,8 +59,8 @@ kind: ImageBasedInstallationConfig
 metadata:
   name: example-image-based-installation-config
 # The following fields are required
-seedImage: quay.io/openshift-kni/seed-image:4.20.0
-seedVersion: 4.20.0
+seedImage: quay.io/openshift-kni/seed-image:4.21.0
+seedVersion: 4.21.0
 installationDisk: /dev/vda
 pullSecret: '<your_pull_secret>'
 # networkConfig is optional and contains the network configuration for the host in NMState format.
@@ -89,7 +89,7 @@ kind: ImageBasedInstallationConfig
 metadata:
   name: example-image-based-installation-config
 seedImage: quay.io/repo-id/seed:latest
-seedVersion: "4.20.0"
+seedVersion: "4.21.0"
 extraPartitionStart: "-240G"
 installationDisk: /dev/disk/by-id/wwn-0x62c...
 sshKey: 'ssh-ed25519 AAAA...'
@@ -27,7 +27,7 @@ kind: ImageBasedInstallationConfig
 metadata:
   name: example-extra-partition
 seedImage: quay.io/repo-id/seed:latest
-seedVersion: "4.20.0"
+seedVersion: "4.21.0"
 installationDisk: /dev/sda
 pullSecret: '{"auths": ...}'
 # ...
@@ -20,7 +20,7 @@ You must run a gather operation to create an {insights-operator} archive.
 +
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.20/docs/gather-job.yaml[]
+include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.21/docs/gather-job.yaml[]
 ----
 . Copy your `insights-operator` image version:
 +
@@ -21,10 +21,10 @@ bootstrap machine that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/04_bootstrap.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/04_bootstrap.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/04_bootstrap.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/04_bootstrap.json[]
 endif::ash[]
 ----
 ====
@@ -21,10 +21,10 @@ control plane machines that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/05_masters.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/05_masters.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/05_masters.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/05_masters.json[]
 endif::ash[]
 ----
 ====
@@ -22,10 +22,10 @@ cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/03_infra.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/03_infra.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/03_infra.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/03_infra.json[]
 endif::ash[]
 ----
 ====
@@ -21,10 +21,10 @@ stored {op-system-first} image that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/02_storage.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/02_storage.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/02_storage.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/02_storage.json[]
 endif::ash[]
 ----
 ====
@@ -21,10 +21,10 @@ VNet that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/01_vnet.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/01_vnet.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/01_vnet.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/01_vnet.json[]
 endif::ash[]
 ----
 ====
@@ -21,10 +21,10 @@ worker machines that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azure/06_workers.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azure/06_workers.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/azurestack/06_workers.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/azurestack/06_workers.json[]
 endif::ash[]
 ----
 ====
@@ -22,5 +22,5 @@ Use the machine types included in the following charts for your AWS ARM instance
 .Machine types based on 64-bit ARM architecture
 [%collapsible]
 ====
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/aws/tested_instance_types_aarch64.md[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/aws/tested_instance_types_aarch64.md[]
 ====
@@ -46,7 +46,7 @@ ifndef::local-zone,wavelength-zone,secretregion[]
 .Machine types based on 64-bit x86 architecture
 [%collapsible]
 ====
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/aws/tested_instance_types_x86_64.md[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/aws/tested_instance_types_x86_64.md[]
 ====
 endif::local-zone,wavelength-zone,secretregion[]
 ifdef::local-zone[]
@@ -18,5 +18,5 @@ The following Microsoft Azure ARM64 instance types have been tested with {produc
 .Machine types based on 64-bit ARM architecture
 [%collapsible]
 ====
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/azure/tested_instance_types_aarch64.md[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/azure/tested_instance_types_aarch64.md[]
 ====
@@ -17,5 +17,5 @@ The following Microsoft Azure instance types have been tested with {product-titl
 .Machine types based on 64-bit x86 architecture
 [%collapsible]
 ====
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/azure/tested_instance_types_x86_64.md[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/azure/tested_instance_types_x86_64.md[]
 ====
@@ -14,6 +14,6 @@ You can use the following CloudFormation template to deploy the bootstrap machin
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/04_cluster_bootstrap.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/04_cluster_bootstrap.yaml[]
 ----
 ====
@@ -15,6 +15,6 @@ machines that you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/05_cluster_master_nodes.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/05_cluster_master_nodes.yaml[]
 ----
 ====
@@ -15,7 +15,7 @@ objects and load balancers that you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/02_cluster_infra.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/02_cluster_infra.yaml[]
 ----
 ====
@@ -15,6 +15,6 @@ that you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/03_cluster_security.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/03_cluster_security.yaml[]
 ----
 ====
@@ -15,6 +15,6 @@ you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/01_vpc.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/01_vpc.yaml[]
 ----
 ====
@@ -14,6 +14,6 @@ You can deploy the compute machines that you need for your {product-title} clust
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/aws/cloudformation/06_cluster_worker_node.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/06_cluster_worker_node.yaml[]
 ----
 ====
@@ -15,6 +15,6 @@ machine that you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/04_bootstrap.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/04_bootstrap.py[]
 ----
 ====
@@ -15,6 +15,6 @@ plane machines that you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/05_control_plane.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/05_control_plane.py[]
 ----
 ====
@@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the external loa
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/02_lb_ext.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/02_lb_ext.py[]
 ----
 ====
@@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the firewall rul
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/03_firewall.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/03_firewall.py[]
 ----
 ====
@@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the IAM roles th
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/03_iam.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/03_iam.py[]
 ----
 ====
@@ -13,7 +13,7 @@ You can use the following Deployment Manager template to deploy the internal loa
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/02_lb_int.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/02_lb_int.py[]
 ----
 ====
@@ -13,6 +13,6 @@ You can use the following Deployment Manager template to deploy the private DNS
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/02_dns.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/02_dns.py[]
 ----
 ====
@@ -15,6 +15,6 @@ you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/01_vpc.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/01_vpc.py[]
 ----
 ====
@@ -15,6 +15,6 @@ that you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/upi/gcp/06_worker.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/gcp/06_worker.py[]
 ----
 ====
@@ -18,5 +18,5 @@ The following {gcp-first} 64-bit ARM instance types have been tested with {produ
 .Machine series for 64-bit ARM machines
 [%collapsible]
 ====
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/gcp/tested_instance_types_arm.md[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/gcp/tested_instance_types_arm.md[]
 ====
@@ -25,5 +25,5 @@ Some instance types require the use of Hyperdisk storage. If you use an instance
 .Machine series
 [%collapsible]
 ====
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/gcp/tested_instance_types.md[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/gcp/tested_instance_types.md[]
 ====
@@ -14,5 +14,5 @@ The following {ibm-cloud-name} instance types have been tested with {product-tit
 .Machine series
 [%collapsible]
 ====
-include::https://raw.githubusercontent.com/openshift/installer/release-4.20/docs/user/ibmcloud/tested_instance_types_x86_64.md[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.21/docs/user/ibmcloud/tested_instance_types_x86_64.md[]
 ====
@@ -48,7 +48,7 @@ endif::[]
 $ OCP_RELEASE=<release_version>
 ----
 +
-For `<release_version>`, specify the tag that corresponds to the version of {product-title} to install, such as `4.20.1`.
+For `<release_version>`, specify the tag that corresponds to the version of {product-title} to install, such as `4.21.1`.

 .. Export the local registry name and host port:
 +
@@ -305,4 +305,3 @@ You must perform this step on a machine with an active internet connection.
 $ openshift-install
 ----
 endif::openshift-rosa,openshift-dedicated[]
-
@@ -239,10 +239,10 @@ $ sudo spkut 44
 .Example output
 [source,terminal]
 ----
-KVC: wrapper simple-kmod for 4.20.0-147.3.1.el8_1.x86_64
+KVC: wrapper simple-kmod for 4.21.0-147.3.1.el8_1.x86_64
 Running userspace wrapper using the kernel module container...
 + podman run -i --rm --privileged
-simple-kmod-dd1a7d4:4.20.0-147.3.1.el8_1.x86_64 spkut 44
+simple-kmod-dd1a7d4:4.21.0-147.3.1.el8_1.x86_64 spkut 44
 simple-procfs-kmod number = 0
 simple-procfs-kmod number = 44
 ----
@@ -120,8 +120,8 @@ sh-4.4# uname -a
 .Example output
 [source,terminal]
 ----
-Linux <worker_node> 4.20.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT
-Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
+Linux <worker_node> 4.21.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT
+Wed Feb 25 18:29:55 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux
 ----
 +
 The kernel name contains `rt`, and the text `PREEMPT RT` indicates that this is a
@@ -88,10 +88,10 @@ $ openshift-install coreos print-stream-json | grep '\.iso[^.]'
 [source,terminal]
 ifndef::openshift-origin[]
 ----
-"location": "<url>/art/storage/releases/rhcos-4.20-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso",
-"location": "<url>/art/storage/releases/rhcos-4.20-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso",
-"location": "<url>/art/storage/releases/rhcos-4.20-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso",
-"location": "<url>/art/storage/releases/rhcos-4.20/<release>/x86_64/rhcos-<release>-live.x86_64.iso",
+"location": "<url>/art/storage/releases/rhcos-4.21-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso",
+"location": "<url>/art/storage/releases/rhcos-4.21-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso",
+"location": "<url>/art/storage/releases/rhcos-4.21-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso",
+"location": "<url>/art/storage/releases/rhcos-4.21/<release>/x86_64/rhcos-<release>-live.x86_64.iso",
 ----
 endif::openshift-origin[]
 ifdef::openshift-origin[]
@@ -101,18 +101,18 @@ $ openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initra
 [source,terminal]
 ifndef::openshift-origin[]
 ----
-"<url>/art/storage/releases/rhcos-4.20-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64"
-"<url>/art/storage/releases/rhcos-4.20-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img"
-"<url>/art/storage/releases/rhcos-4.20-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img"
-"<url>/art/storage/releases/rhcos-4.20-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le"
-"<url>/art/storage/releases/rhcos-4.20-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img"
-"<url>/art/storage/releases/rhcos-4.20-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img"
-"<url>/art/storage/releases/rhcos-4.20-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x"
-"<url>/art/storage/releases/rhcos-4.20-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img"
-"<url>/art/storage/releases/rhcos-4.20-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img"
-"<url>/art/storage/releases/rhcos-4.20/<release>/x86_64/rhcos-<release>-live-kernel-x86_64"
-"<url>/art/storage/releases/rhcos-4.20/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img"
-"<url>/art/storage/releases/rhcos-4.20/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img"
+"<url>/art/storage/releases/rhcos-4.21-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64"
+"<url>/art/storage/releases/rhcos-4.21-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img"
+"<url>/art/storage/releases/rhcos-4.21-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img"
+"<url>/art/storage/releases/rhcos-4.21-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le"
+"<url>/art/storage/releases/rhcos-4.21-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img"
+"<url>/art/storage/releases/rhcos-4.21-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img"
+"<url>/art/storage/releases/rhcos-4.21-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x"
+"<url>/art/storage/releases/rhcos-4.21-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img"
+"<url>/art/storage/releases/rhcos-4.21-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img"
+"<url>/art/storage/releases/rhcos-4.21/<release>/x86_64/rhcos-<release>-live-kernel-x86_64"
+"<url>/art/storage/releases/rhcos-4.21/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img"
+"<url>/art/storage/releases/rhcos-4.21/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img"
 ----
 endif::openshift-origin[]
 ifdef::openshift-origin[]
@@ -232,9 +232,9 @@ menuentry 'Install CoreOS' {
 }
 ----
 +
 where:
 +
 `coreos.live.rootfs_url`:: Specify the locations of the {op-system} files that you uploaded to your HTTP/TFTP server.
 `kernel`:: The `kernel` parameter value is the location of the `kernel` file on your TFTP server. The `coreos.live.rootfs_url` parameter value is the location of the `rootfs` file, and the `coreos.inst.ignition_url` parameter value is the location of the bootstrap Ignition config file on your HTTP server. If you use multiple NICs, specify a single interface in the `ip` option. For example, to use DHCP on a NIC that is named `eno1`, set `ip=eno1:dhcp`.
 `initrd rhcos`:: Specify the location of the `initramfs` file that you uploaded to your TFTP server.
@@ -22,7 +22,7 @@ $ ./openshift-install version
 .Example output
 [source,terminal]
 ----
-./openshift-install 4.20.0
+./openshift-install 4.21.0
 built from commit abc123etc
 release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc
 release architecture multi
@@ -38,14 +38,14 @@ $ oc get --namespace openshift-nmstate clusterserviceversion
 [source,terminal]
 ----
 NAME                                  DISPLAY                       VERSION   REPLACES   PHASE
-kubernetes-nmstate-operator.v4.20.0   Kubernetes NMState Operator   4.20.0               Succeeded
+kubernetes-nmstate-operator.v4.21.0   Kubernetes NMState Operator   4.21.0               Succeeded
 ----

 . Delete the CSV resource. After you delete the CSV, {olm} deletes certain resources, such as `RBAC`, that it created for the Operator.
 +
 [source,terminal]
 ----
-$ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.20.0
+$ oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.21.0
 ----

 . Delete the `nmstate` CR and any associated `Deployment` resources by running the following commands:
@@ -15,22 +15,22 @@ You can obtain the image by running one of the following commands in the cluster
 [source,terminal]
 ----
 # For x86_64 image:
-$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.20.0-x86_64 --image-for=driver-toolkit
+$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 --image-for=driver-toolkit
 ----
 +
 [source,terminal]
 ----
 # For ARM64 image:
-$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.20.0-aarch64 --image-for=driver-toolkit
+$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-aarch64 --image-for=driver-toolkit
 ----

 `kernelVersion`:: Required field that provides the version of the kernel that the cluster is upgraded to.
 +
 You can obtain the version by running the following command in the cluster:
 +
 [source,terminal]
 ----
-$ podman run -it --rm $(oc adm release info quay.io/openshift-release-dev/ocp-release:4.20.0-x86_64 --image-for=driver-toolkit) cat /etc/driver-toolkit-release.json
+$ podman run -it --rm $(oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 --image-for=driver-toolkit) cat /etc/driver-toolkit-release.json
 ----

 `pushBuiltImage`:: If `true`, then the images created during the Build and Sign validation are pushed to their repositories. This field is `false` by default.
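If only the kernel version string is needed from that JSON, it can be filtered with `jq`; a minimal sketch, assuming the `KERNEL_VERSION` field that the Driver Toolkit release file typically carries:

[source,terminal]
----
$ podman run -it --rm $(oc adm release info quay.io/openshift-release-dev/ocp-release:4.21.0-x86_64 --image-for=driver-toolkit) cat /etc/driver-toolkit-release.json | jq -r '.KERNEL_VERSION'
----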
@@ -8,23 +8,23 @@
 [role="_abstract"]
 You can manually scale your application's pods by using one of the following methods:

 * Changing your ReplicaSet or deployment definition
 * Using the command line
 * Using the web console

-This workshop starts by using only one pod for the microservice. By defining a replica of `1` in your deployment definition, the Kubernetes Replication Controller strives to keep one pod alive. You then learn how to define pod autoscaling by using the link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA), which scales out more pods based on load when necessary.
+This workshop starts by using only one pod for the microservice. By defining a replica of `1` in your deployment definition, the Kubernetes Replication Controller strives to keep one pod alive. You then learn how to define pod autoscaling by using the link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA), which scales out more pods based on load when necessary.

 .Prerequisites
 * An active {product-title} cluster
 * A deployed OSToy application

 .Procedure

 . In the OSToy app, click the *Networking* tab in the navigational menu.
 . In the "Intra-cluster Communication" section, locate the box that randomly changes colors. Inside the box, you see the microservice's pod name. There is only one box in this example because there is only one microservice pod.
 +
 image::deploy-scale-network.png[HPA Menu]
 +
 . Confirm that there is only one pod running for the microservice by running the following command:
 +
@@ -95,8 +95,8 @@ ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          26m
 $ oc scale deployment ostoy-microservice --replicas=2
 ----
 +
 ** From the navigational menu of the OpenShift web console UI, click *Workloads > Deployments > ostoy-microservice*.
 ** Locate the blue circle with a "3 Pod" label in the middle.
 ** Selecting the arrows next to the circle scales the number of pods. Select the down arrow to `2`.
 +
 image::deploy-scale-uiscale.png[UI Scale]
@@ -6,10 +6,10 @@
 = Pod autoscaling

 [role="_abstract"]
-{product-title} offers a link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.
+{product-title} offers a link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.

 .Prerequisites
 * An active {product-title} cluster
 * A deployed OSToy application

 .Procedure
@@ -27,11 +27,11 @@ $ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
 +
 This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice deployment. During deployment, the HPA increases and decreases the number of replicas to keep the average CPU use across all pods at 80% (40 millicores).

 . On the *Pod Auto Scaling > Horizontal Pod Autoscaling* page, select *Increase the load*.
 +
 [IMPORTANT]
 ====
 Because increasing the load generates CPU-intensive calculations, the page can become unresponsive. This is an expected response. Only click *Increase the Load* once. For more information about the process, see the link:https://github.com/openshift-cs/ostoy/blob/master/microservice/app.js#L32[microservice's GitHub repository].
 ====
 +
 After a few minutes, the new pods display on the page, represented by colored boxes.
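To confirm that the autoscaler exists and see its current and target metrics, a minimal sketch:

[source,terminal]
----
$ oc get hpa ostoy-microservice
----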
@@ -67,17 +67,17 @@ ostoy-microservice-79894f6945-mgwk7   1/1     Running   0          4h24m
 ostoy-microservice-79894f6945-q925d   1/1     Running   0          3m14s
 ----

 * You can also verify autoscaling from the {cluster-manager}.
 +
 . In the OpenShift web console navigational menu, click *Observe > Dashboards*.
 . In the dashboard, select *Kubernetes / Compute Resources / Namespace (Pods)* and your namespace *ostoy*.
 +
 image::deploy-scale-hpa-metrics.png[Select metrics]
 +
 . A graph appears showing your resource usage across CPU and memory. The top graph shows recent CPU consumption per pod and the lower graph indicates memory usage. The following lists the callouts in the graph:
 .. The load increased (A).
 .. Two new pods were created (B and C).
 .. The thickness of each graph represents the CPU consumption and indicates which pods handled more load.
 .. The load decreased (D), and the pods were deleted.
 +
 image::deploy-scale-metrics.png[Select metrics]
@@ -32,7 +32,7 @@ The output of this command includes pull specs for the available updates similar
 Recommended updates:

 VERSION   IMAGE
-4.20.0    quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
+4.21.0    quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
 ...
 ----
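A pull spec from this list can then be passed to the update command; a hedged sketch (the digest is the illustrative one from the example output above):

[source,terminal]
----
$ oc adm upgrade --to-image=quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
----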
@@ -14,7 +14,7 @@ include::snippets/snip-unified-perspective-web-console.adoc[]

 * You have access to the cluster as a developer or as a user.
 * You have view permissions for the project that you are viewing the dashboard for.
-* A cluster administrator has link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/web_console/index#enabling-developer-perspective_web-console_web-console-overview[enabled the *Developer* perspective] in the web console.
+* A cluster administrator has link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/web_console/index#enabling-developer-perspective_web-console_web-console-overview[enabled the *Developer* perspective] in the web console.

 .Procedure
@@ -23,13 +23,13 @@ Additionally, consider the following dynamic port ranges when managing ingress t

 To view or download the complete raw CSV content for an environment, see the following resources:

-* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/bm.csv[{product-title} on bare metal]
+* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/bm.csv[{product-title} on bare metal]

-* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/none-sno.csv[{sno-caps} with other platforms]
+* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/none-sno.csv[{sno-caps} with other platforms]

-* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/aws.csv[{product-title} on {aws-short}]
+* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/aws.csv[{product-title} on {aws-short}]

-* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/raw/aws-sno.csv[{sno-caps} on {aws-short}]
+* link:https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/raw/aws-sno.csv[{sno-caps} on {aws-short}]

 [NOTE]
 ====
@@ -50,14 +50,14 @@ For base ingress flows to {sno} clusters, see the _Control plane node base flows
 .Control plane node base flows
 [%header,format=csv]
 |===
-include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/common-master.csv[]
+include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/common-master.csv[]
 |===

 [id="network-flow-matrix-worker_{context}"]
 .Worker node base flows
 [%header,format=csv]
 |===
-include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/common-worker.csv[]
+include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/common-worker.csv[]
 |===

 [id="network-flow-matrix-bm_{context}"]
@@ -68,7 +68,7 @@ In addition to the base network flows, the following matrix describes the ingres
 .{product-title} on bare metal
 [%header,format=csv]
 |===
-include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/bm.csv[]
+include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/bm.csv[]
 |===

 [id="network-flow-matrix-sno_{context}"]
@@ -79,7 +79,7 @@ In addition to the base network flows, the following matrix describes the ingres
 .{sno-caps} with other platforms
 [%header,format=csv]
 |===
-include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/none-sno.csv[]
+include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/none-sno.csv[]
 |===

 [id="network-flow-matrix-aws_{context}"]
@@ -90,7 +90,7 @@ In addition to the base network flows, the following matrix describes the ingres
|
||||
.{product-title} on AWS
|
||||
[%header,format=csv]
|
||||
|===
|
||||
include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/aws.csv[]
|
||||
include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/aws.csv[]
|
||||
|===
|
||||
|
||||
[id="network-flow-matrix-aws-sno_{context}"]
|
||||
@@ -101,5 +101,5 @@ In addition to the base network flows, the following matrix describes the ingres
|
||||
.{sno-caps} on AWS
|
||||
[%header,format=csv]
|
||||
|===
|
||||
include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.20/docs/stable/unique/aws-sno.csv[]
|
||||
include::https://raw.githubusercontent.com/openshift-kni/commatrix/release-4.21/docs/stable/unique/aws-sno.csv[]
|
||||
|===
|
||||
@@ -21,7 +21,7 @@ $ oc get clusteroperator baremetal
|
||||
[source,terminal]
|
||||
----
|
||||
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
|
||||
baremetal 4.20.0 True False False 3d15h
|
||||
baremetal 4.21.0 True False False 3d15h
|
||||
----
|
||||
|
||||
. Save the `BareMetalHost` object of the affected node to a file for later use by running the following command:
|
||||
|
||||
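The command itself falls outside this hunk; a typical invocation, with placeholder node and file names, looks like the following:

[source,terminal]
----
$ oc get bmh <node_name> -n openshift-machine-api -o yaml > <node_name>-bmh.yaml
----
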
@@ -60,9 +60,9 @@ $ oc get nodes -o wide
[source,terminal]
----
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master.example.com Ready master 171m v1.34.2 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.20.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev
node1.example.com Ready worker 72m v1.34.2 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.20.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev
node2.example.com Ready worker 164m v1.34.2 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.20.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev
master.example.com Ready master 171m v1.34.2 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev
node1.example.com Ready worker 72m v1.34.2 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev
node2.example.com Ready worker 164m v1.34.2 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev
----

* The following command lists information about a single node:

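The command is not shown in this hunk; a minimal form, with a placeholder node name, is:

[source,terminal]
----
$ oc get nodes <node_name> -o wide
----
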
@@ -53,14 +53,14 @@ $ oc get csv -n openshift-dpu-operator
[source,terminal]
----
NAME                                DISPLAY        VERSION               REPLACES   PHASE
dpu-operator.v4.20.0-202503130333   DPU Operator   4.20.0-202503130333              Failed
dpu-operator.v4.21.0-202503130333   DPU Operator   4.21.0-202503130333              Failed
----

.. Delete the DPU Operator by running the following command:
+
[source,terminal]
----
$ oc delete csv dpu-operator.v4.20.0-202503130333 -n openshift-dpu-operator
$ oc delete csv dpu-operator.v4.21.0-202503130333 -n openshift-dpu-operator
----

. Delete the namespace that was created for the DPU Operator by running the following command:
@@ -78,4 +78,3 @@ $ oc delete namespace openshift-dpu-operator
----
$ oc get csv -n openshift-dpu-operator
----


@@ -26,12 +26,12 @@ apiVersion: mirror.openshift.io/v2alpha1
mirror:
  platform:
    channels:
    - name: stable-4.20 <1>
      minVersion: 4.20.2
      maxVersion: 4.20.2
    - name: stable-4.21 <1>
      minVersion: 4.21.2
      maxVersion: 4.21.2
    graph: true <2>
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.20 <3>
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.21 <3>
    packages: <4>
    - name: aws-load-balancer-operator
    - name: 3scale-operator

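Assuming this configuration is saved as `imageset-config.yaml` (a file name chosen here for illustration), a mirror-to-disk run with oc-mirror v2 might look like the following, where `<mirror_directory>` is a placeholder:

[source,terminal]
----
$ oc mirror -c imageset-config.yaml file://<mirror_directory> --v2
----
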
@@ -298,12 +298,12 @@ mirror:
    architectures:
    - "multi"
    channels:
    - name: stable-4.20
      minVersion: 4.20.0
      maxVersion: 4.20.1
    - name: stable-4.21
      minVersion: 4.21.0
      maxVersion: 4.21.1
      type: ocp
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.20
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.21
    packages:
    - name: multicluster-engine
----
@@ -343,7 +343,7 @@ The default value is `false`
----
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.20
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.21
    packages:
    - name: rhods-operator
      defaultChannel: fast

@@ -36,7 +36,7 @@ terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.20.0"
      version = ">= 4.21.0"
    }
    rhcs = {
      version = ">= 1.6.2"
@@ -102,7 +102,7 @@ module "rosa-classic" {
  create_account_roles  = true
  create_operator_roles = true
  # Optional: Configure a cluster administrator user \ <1>
  #
  #
  # Option 1: Default cluster-admin user
  # Create an administrator user (cluster-admin) and automatically
  # generate a password by uncommenting the following parameter:
@@ -114,7 +114,7 @@ module "rosa-classic" {
  # by uncommenting and editing the values of the following parameters:
  # admin_credentials_username = <username>
  # admin_credentials_password = <password>


  depends_on = [time_sleep.wait_60_seconds]
}
EOF

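After the heredoc writes the configuration to disk, the standard Terraform workflow applies; the plan file name below is illustrative:

[source,terminal]
----
$ terraform init
$ terraform plan -out rosa-classic.plan
$ terraform apply rosa-classic.plan
----
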
@@ -43,10 +43,10 @@ ifdef::sts[]
----
I: Fetching account roles
ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION
ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role 4.20
ManagedOpenShift-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role 4.20
ManagedOpenShift-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role 4.20
ManagedOpenShift-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role 4.20
ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role 4.21
ManagedOpenShift-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role 4.21
ManagedOpenShift-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role 4.21
ManagedOpenShift-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role 4.21
----
endif::sts[]
ifdef::hcp[]
@@ -55,9 +55,9 @@ ifdef::hcp[]
----
I: Fetching account roles
ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION AWS Managed
ManagedOpenShift-HCP-ROSA-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Installer-Role 4.20 Yes
ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Support-Role 4.20 Yes
ManagedOpenShift-HCP-ROSA-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.20 Yes
ManagedOpenShift-HCP-ROSA-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Installer-Role 4.21 Yes
ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Support-Role 4.21 Yes
ManagedOpenShift-HCP-ROSA-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.21 Yes
----
endif::hcp[]
+

@@ -35,7 +35,7 @@ terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.20.0"
      version = ">= 4.21.0"
    }
    rhcs = {
      version = ">= 1.6.3"
@@ -98,7 +98,7 @@ module "rosa-hcp" {
  create_account_roles  = true
  create_operator_roles = true
  # Optional: Configure a cluster administrator user \ <1>
  #
  #
  # Option 1: Default cluster-admin user
  # Create an administrator user (cluster-admin) and automatically
  # generate a password by uncommenting the following parameter:
@@ -110,7 +110,7 @@ module "rosa-hcp" {
  # by uncommenting and editing the values of the following parameters:
  # admin_credentials_username = <username>
  # admin_credentials_password = <password>


  depends_on = [time_sleep.wait_60_seconds]
}
EOF

@@ -40,7 +40,7 @@ Display Name: test_cluster
ID: <cluster_id> <1>
External ID: <external_id>
Control Plane: ROSA Service Hosted
OpenShift Version: 4.20.0
OpenShift Version: 4.21.0
Channel Group: stable
DNS: test_cluster.l3cn.p3.openshiftapps.com
AWS Account: <AWS_id>

@@ -56,7 +56,7 @@ Display Name: rosa-ext-test
ID: <cluster_id>
External ID: <cluster_ext_id>
Control Plane: ROSA Service Hosted
OpenShift Version: 4.20.0
OpenShift Version: 4.21.0
Channel Group: stable
DNS: <dns>
AWS Account: <AWS_id>

@@ -54,7 +54,7 @@ The account number present in the `sts_installer_trust_policy.json` and `sts_sup
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_trust_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_trust_policy.json[]
----
====

@@ -63,7 +63,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_permission_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_permission_policy.json[]
----
====

@@ -86,7 +86,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_controlplane_trust_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_controlplane_trust_policy.json[]
----
====

@@ -95,7 +95,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_controlplane_permission_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_controlplane_permission_policy.json[]
----
====

@@ -118,7 +118,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_worker_trust_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_worker_trust_policy.json[]
----
====

@@ -127,7 +127,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_instance_worker_permission_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_instance_worker_permission_policy.json[]
----
====

@@ -150,7 +150,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_support_trust_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_support_trust_policy.json[]
----
====

@@ -159,7 +159,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_support_permission_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_support_permission_policy.json[]
----
====

@@ -179,7 +179,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_ocm_trust_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_ocm_trust_policy.json[]
----
====

@@ -199,7 +199,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_trust_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_trust_policy.json[]
----
====

@@ -256,7 +256,7 @@ I: Attached policy 'arn:aws:iam::000000000000:policy/testrole-Worker-Role-Policy
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_ingress_operator_cloud_credentials_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_ingress_operator_cloud_credentials_policy.json[]
----
====

@@ -276,7 +276,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json[]
----
====

@@ -296,7 +296,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_machine_api_aws_cloud_credentials_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_machine_api_aws_cloud_credentials_policy.json[]
----
====

@@ -316,7 +316,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json[]
----
====

@@ -336,7 +336,7 @@ include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs
====
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/openshift_image_registry_installer_cloud_credentials_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/openshift_image_registry_installer_cloud_credentials_policy.json[]
----
====
endif::openshift-rosa-hcp[]

@@ -40,7 +40,7 @@ This example procedure is applicable for an installer role and policy with the m
The following example shows `sts_installer_core_permission_boundary_policy.json`:
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_core_permission_boundary_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_core_permission_boundary_policy.json[]
----

[IMPORTANT]
@@ -61,7 +61,7 @@ To use the permission boundaries, you will need to prepare the permission bounda
+
[source,terminal]
----
$ curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.20/sts_installer_core_permission_boundary_policy.json
$ curl -o ./rosa-installer-core.json https://raw.githubusercontent.com/openshift/managed-cluster-config/master/resources/sts/4.21/sts_installer_core_permission_boundary_policy.json
----

. Create the policy in AWS and gather its Amazon Resource Name (ARN) by entering the following command:
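The exact command falls outside this hunk; a typical invocation, with an illustrative policy name, is:

[source,terminal]
----
$ aws iam create-policy \
  --policy-name rosa-installer-core \
  --policy-document file://./rosa-installer-core.json \
  --query 'Policy.Arn' --output text
----
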
@@ -124,12 +124,12 @@ The following example shows `sts_installer_privatelink_permission_boundary_polic
+
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_privatelink_permission_boundary_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_privatelink_permission_boundary_policy.json[]
----
+
The following example shows `sts_installer_vpc_permission_boundary_policy.json`:
+
[source,json]
----
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.20/sts_installer_vpc_permission_boundary_policy.json[]
include::https://raw.githubusercontent.com/openshift/managed-cluster-config/refs/heads/master/resources/sts/4.21/sts_installer_vpc_permission_boundary_policy.json[]
----
@@ -258,7 +258,7 @@ I: Using arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role for th
? Disable Workload monitoring (optional): No
I: Creating cluster '<cluster_name>'
I: To create this cluster again in the future, you can run:
rosa create cluster --cluster-name <cluster_name> --role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role --operator-roles-prefix <cluster_name>-<random_string> --region us-east-1 --version 4.20.0 --additional-compute-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-infra-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-control-plane-security-group-ids sg-0e375ff0ec4a6cfa2 --replicas 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 <16>
rosa create cluster --cluster-name <cluster_name> --role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role --operator-roles-prefix <cluster_name>-<random_string> --region us-east-1 --version 4.21.0 --additional-compute-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-infra-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-control-plane-security-group-ids sg-0e375ff0ec4a6cfa2 --replicas 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 <16>
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster '<cluster_name>' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
@@ -267,7 +267,7 @@ I: Once the cluster is installed you will need to add an Identity Provider befor
<1> Optional. When creating your cluster, you can customize the subdomain for your cluster on `*.openshiftapps.com` using the `--domain-prefix` flag. The value for this flag must be unique within your organization, cannot be longer than 15 characters, and cannot be changed after cluster creation. If the flag is not supplied, an autogenerated value is created that depends on the length of the cluster name. If the cluster name is fewer than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
<2> When creating your cluster, you can create a local administrator user (`cluster-admin`) for your cluster. This automatically configures an `htpasswd` identity provider for the `cluster-admin` user.
<3> You can create a custom password for the `cluster-admin` user, or have the system generate a password. If you do not create a custom password, the generated password is displayed in the command-line output. If you specify a custom password, the password must be at least 14 characters (ASCII-standard) without any whitespace. When defined, the password is hashed and transported securely.
<4> When creating the cluster, the listed `OpenShift version` options include the major, minor, and patch versions, for example `4.20.0`.
<4> When creating the cluster, the listed `OpenShift version` options include the major, minor, and patch versions, for example `4.21.0`.
<5> Optional: Specify `optional` to configure all EC2 instances to use both v1 and v2 endpoints of EC2 Instance Metadata Service (IMDS). This is the default value. Specify `required` to configure all EC2 instances to use IMDSv2 only.
+
ifdef::openshift-rosa[]

@@ -7,7 +7,7 @@
[id="running-insights-operator-gather-openshift-cli_{context}"]
= Gathering data on demand with the {insights-operator} from the OpenShift CLI

You can run a custom {insights-operator} gather operation on-demand from the {product-title} command-line interface (CLI).
You can run a custom {insights-operator} gather operation on-demand from the {product-title} command-line interface (CLI).
An on-demand `DataGather` operation is useful for one-off data collections that require different configurations from the periodic data gathering (`InsightsDataGather`) specification.

Use the following procedure to create a `DataGather` custom resource definition (CRD), and then run the data gather operation on demand from the CLI.
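For example, assuming the custom resource is saved as `datagather.yaml` (an illustrative file name), you would create it and then check its progress with standard `oc` commands:

[source,terminal]
----
$ oc apply -f datagather.yaml
$ oc get datagathers.insights.openshift.io
----
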
@@ -48,7 +48,7 @@ spec:
apiVersion: insights.openshift.io/v1alpha2
kind: DataGather
metadata:
  name: <your_data_gather>
  name: <your_data_gather>
spec:
# Gatherers configuration
  gatherers:
@@ -82,7 +82,7 @@ spec:
+
[IMPORTANT]
====
Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims].
Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims].
====
+
* To enable data obfuscation, define the `dataPolicy` key and required values. For example, to obfuscate IP addresses and workload names, add the following configuration:
@@ -92,7 +92,7 @@ Ensure that the volume name specified matches the existing `PersistentVolumeClai
apiVersion: insights.openshift.io/v1alpha2
kind: DataGather
metadata:
  name: <your_data_gather>
  name: <your_data_gather>
spec:
  dataPolicy:
  - ObfuscateNetworking

@@ -52,7 +52,7 @@ spec:
apiVersion: insights.openshift.io/v1alpha2
kind: DataGather
metadata:
  name: <your_data_gather>
  name: <your_data_gather>
spec:
# Gatherers configuration
  gatherers:
@@ -86,7 +86,7 @@ spec:
+
[IMPORTANT]
====
Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims].
Ensure that the volume name specified matches the existing `PersistentVolumeClaim` value in the `openshift-insights` namespace. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/storage/understanding-persistent-storage#persistent-volume-claims_understanding-persistent-storage[Persistent volume claims].
====
* To enable data obfuscation, define the `dataPolicy` key and required values. For example, to obfuscate IP addresses and workload names, add the following configuration:
+
@@ -95,7 +95,7 @@ Ensure that the volume name specified matches the existing `PersistentVolumeClai
apiVersion: insights.openshift.io/v1alpha2
kind: DataGather
metadata:
  name: <your_data_gather>
  name: <your_data_gather>
spec:
  dataPolicy:
  - ObfuscateNetworking

@@ -43,7 +43,7 @@ endif::openshift-enterprise,openshift-origin[]
ifdef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
* Maximum number of PIDs per node.
+
The default value depends on link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.20/html-single/nodes/index#nodes-nodes-resources-configuring[node resources]. In {product-title}, this value is controlled by the link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[`--system-reserved`] parameter, which reserves PIDs on each node based on the total resources of the node.
The default value depends on link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/nodes/index#nodes-nodes-resources-configuring[node resources]. In {product-title}, this value is controlled by the link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[`--system-reserved`] parameter, which reserves PIDs on each node based on the total resources of the node.
endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]

When a pod exceeds the allowed maximum number of PIDs per pod, the pod might stop functioning correctly and might be evicted from the node. See link:https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals-and-thresholds[the Kubernetes documentation for eviction signals and thresholds] for more information.

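On the managed platforms named above, the per-pod PID limit is typically raised through a custom KubeletConfig; for example, with the ROSA CLI (the limit value shown is illustrative):

[source,terminal]
----
$ rosa create kubeletconfig --cluster=<cluster_name> --pod-pids-limit=8192
----
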
@@ -45,7 +45,7 @@ Both `http` and `event` trigger functions have the same template structure:
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.20</version>
    <version>4.21</version>
    <scope>test</scope>
  </dependency>
  <dependency>

@@ -7,7 +7,7 @@
[id="using_selinuxChangePolicy_testing-mountoption-rwo-rwx_{context}"]
= Testing the RWO and RWX and SELinux mount option feature

In {product-title} 4.20, you can evaluate the mount option feature for RWO and RWX volumes as a Technology Preview feature.
In {product-title} 4.21, you can evaluate the mount option feature for RWO and RWX volumes as a Technology Preview feature.

:FeatureName: RWO/RWX SELinux mount
include::snippets/technology-preview.adoc[]

@@ -115,5 +115,5 @@ $ oc get csv -n must-gather-operator
[source,terminal]
----
NAME                                  DISPLAY              VERSION   REPLACES   PHASE
support-log-gather-operator.v4.20.0   support log gather   4.20.0               Succeeded
support-log-gather-operator.v4.21.0   support log gather   4.21.0               Succeeded
----
@@ -58,13 +58,13 @@ ifdef::openshift-origin[]
Upstream update service: https://amd64.origin.releases.ci.openshift.org/graph
Channel: stable-scos-4

Updates to 4.20:
Updates to 4.21:
VERSION                 ISSUES
4.20.0-okd-scos.ec.14   no known issues relevant to this cluster
4.21.0-okd-scos.ec.14   no known issues relevant to this cluster

Updates to 4.19:
Updates to 4.20:
VERSION              ISSUES
4.19.0-okd-scos.17   no known issues relevant to this cluster
4.20.0-okd-scos.17   no known issues relevant to this cluster
----
endif::openshift-origin[]
+

@@ -39,7 +39,7 @@ $ mkdir -p ./out
+
[source,terminal]
----
$ podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.20 | base64 -d | tar xv -C out
$ podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.21 | base64 -d | tar xv -C out
----
+
You can view the reference configuration in the `out/telco-core-rds/configuration/reference-crs-kube-compare` directory by running the following command:

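The listing command is outside this hunk; a simple way to inspect the extracted reference files is:

[source,terminal]
----
$ ls -R out/telco-core-rds/configuration/reference-crs-kube-compare
----
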
@@ -100,9 +100,9 @@ $ oc get co
[source,terminal]
----
NAME             VERSION    AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication   4.20.0-0   True        False         False      6m18s
baremetal        4.20.0-0   True        False         False      2m42s
network          4.20.0-0   True        True          False      5m58s   Progressing: …
authentication   4.21.0-0   True        False         False      6m18s
baremetal        4.21.0-0   True        False         False      2m42s
network          4.21.0-0   True        True          False      5m58s   Progressing: …
…
----


@@ -169,11 +169,11 @@ data:
  status.failureReason: "" # <2>
  status.startTimestamp: "2023-07-31T13:14:38Z" # <3>
  status.completionTimestamp: "2023-07-31T13:19:41Z" # <4>
  status.result.cnvVersion: 4.20.2 # <5>
  status.result.cnvVersion: 4.21.2 # <5>
  status.result.defaultStorageClass: trident-nfs <6>
  status.result.goldenImagesNoDataSource: <data_import_cron_list> # <7>
  status.result.goldenImagesNotUpToDate: <data_import_cron_list> # <8>
  status.result.ocpVersion: 4.20.0 # <9>
  status.result.ocpVersion: 4.21.0 # <9>
  status.result.pvcBound: "true" # <10>
  status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> # <11>
  status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> # <12>

@@ -54,7 +54,7 @@ spec:
    spec:
      containers:
      - name: kubevirt-api-lifecycle-automation
        image: quay.io/openshift-virtualization/kubevirt-api-lifecycle-automation:v4.20 <1>
        image: quay.io/openshift-virtualization/kubevirt-api-lifecycle-automation:v4.21 <1>
        imagePullPolicy: Always
        env:
        - name: MACHINE_TYPE_GLOB <2>

@@ -14,9 +14,9 @@ The managed route uses the External Route Certificate feature to set the `tls.ex

* You have deployed the SPIRE Server, SPIRE Agent, SPIFFE CSI Driver, and the SPIRE OIDC Discovery Provider operands in the cluster.

* You have installed the {cert-manager-operator}. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#cert-manager-operator-install[Installing the cert-manager Operator for Red{nbsp}Hat OpenShift].
* You have installed the {cert-manager-operator}. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#cert-manager-operator-install[Installing the cert-manager Operator for Red{nbsp}Hat OpenShift].

* You have created a `ClusterIssuer` or `Issuer` configured with a publicly trusted CA service. For example, an Automated Certificate Management Environment (ACME) type `Issuer` with the "Let's Encrypt ACME" service. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#cert-manager-operator-issuer-acme[Configuring an ACME issuer].
* You have created a `ClusterIssuer` or `Issuer` configured with a publicly trusted CA service. For example, an Automated Certificate Management Environment (ACME) type `Issuer` with the "Let's Encrypt ACME" service. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#cert-manager-operator-issuer-acme[Configuring an ACME issuer].

.Procedure

@@ -81,4 +81,3 @@ $ curl https://$JWT_ISSUER_ENDPOINT/.well-known/openid-configuration
]
}%
----


@@ -11,7 +11,7 @@ Before Vault is used as an OIDC, you need to install Vault.

.Prerequisites

* Configure a route. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/ingress_and_load_balancing/configuring-routes#route-configuration[Route configuration].
* Configure a route. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/ingress_and_load_balancing/routes#nw-configuring-routes[Configuring routes].

* Helm is installed.

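With those prerequisites in place, a typical Vault installation uses the official HashiCorp Helm chart; the release and namespace names below are illustrative:

[source,terminal]
----
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install vault hashicorp/vault --namespace vault --create-namespace
----
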
@@ -114,7 +114,3 @@ $ curl -s $VAULT_ADDR/v1/sys/health | jq
"cluster_id": "5e6f7a8b-9c0d-1e2f-3a4b-5c6d7e8f9a0b"
}
----




@@ -17,7 +17,7 @@ You can use SPIRE federation with custom certificate management using cert-manag

* You have `cluster-admin` privileges on all participating clusters.

* You have installed the {cert-manager-operator}. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift[cert-manager Operator for Red Hat OpenShift].
* You have installed the {cert-manager-operator}. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift[cert-manager Operator for Red Hat OpenShift].

* Your federation endpoints must be publicly accessible for certificate validation.

@@ -556,6 +556,3 @@ spire-server-0 2/2 Running 0 10m
----

. Optional: Test cross-cluster workload authentication by deploying workloads with SPIFFE identities on different clusters and verifying they can authenticate to each other using the federated trust.



@@ -34,8 +34,8 @@ spec:
  - spoke4
  ibuSpec:
    seedImageRef:
      image: quay.io/seed/image:4.20.0-rc.1
      version: 4.20.0-rc.1
      image: quay.io/seed/image:4.21.0-rc.1
      version: 4.21.0-rc.1
    pullSecretRef:
      name: "<seed_pull_secret>"
  extraManifests:

@@ -34,7 +34,7 @@ spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.20"
  clusterImageSetNameRef: "openshift-4.21"
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
# ...

@@ -49,7 +49,7 @@ a|Configure this field to enable disk encryption with Trusted Platform Module (T

[NOTE]
====
Configuring disk encryption by using the `diskEncryption` field in the `SiteConfig` CR is a Technology Preview feature in {product-title} 4.20.
Configuring disk encryption by using the `diskEncryption` field in the `SiteConfig` CR is a Technology Preview feature in {product-title} 4.21.
====

|`spec.clusters.diskEncryption.type`

@@ -6,7 +6,7 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-deploying-application-s2i-deployments
toc::[]
[role="_abstract"]
The integrated Source-to-Image (S2I) builder is one method to deploy applications in {OCP-short}. S2I is a tool for building reproducible, Docker-formatted container images. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html/overview/getting-started-openshift-common-terms_openshift-editions[Glossary of common terms for {OCP}]. You must have a deployed {product-title} cluster before starting this process.
The integrated Source-to-Image (S2I) builder is one method to deploy applications in {OCP-short}. S2I is a tool for building reproducible, Docker-formatted container images. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/overview/getting-started-openshift-common-terms_openshift-editions[Glossary of common terms for {OCP}]. You must have a deployed {product-title} cluster before starting this process.

include::modules/learning-deploying-application-s2i-deployments-retrieving-login.adoc[leveloffset=+1]
include::modules/learning-deploying-application-s2i-deployments-create-new-project.adoc[leveloffset=+1]

@@ -39,20 +39,20 @@ Version 1.0.0 of the {external-secrets-operator} is based on the upstream extern

With this release, the Operator API, `externalsecrets.operator.openshift.io`, has been renamed to `externalsecretsconfigs.operator.openshift.io` to avoid confusion with the external-secrets provided API that has the same name but a different purpose. The external-secrets provided API has also been restructured, and new features have been added.

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-operator-api[External Secrets Operator for Red Hat OpenShift APIs].
For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-operator-api[External Secrets Operator for Red Hat OpenShift APIs].

*Support to collect metrics of {external-secrets-operator-short}*

With this release, the {external-secrets-operator} supports collecting metrics for both the Operator and operands. This is optional and must be enabled.

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-monitoring[Monitoring the External Secrets Operator for Red Hat OpenShift].
For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-monitoring[Monitoring the External Secrets Operator for Red Hat OpenShift].


*Support to configure proxy for {external-secrets-operator-short}*

With this release, the {external-secrets-operator} supports configuring a proxy for both the Operator and operand.

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-operator-proxy[About the egress proxy for the External Secrets Operator for Red Hat OpenShift].
For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-operator-proxy[About the egress proxy for the External Secrets Operator for Red Hat OpenShift].

*Root filesystem is read-only for {external-secrets-operator} containers*

@@ -62,7 +62,7 @@ With this release, to improve security, the {external-secrets-operator} and all

With this release, {external-secrets-operator} includes pre-defined `NetworkPolicy` resources designed for enhanced security by governing ingress and egress traffic for operand components. These policies cover essential internal traffic, such as ingress to the metrics and webhook servers, and egress to the OpenShift API server and DNS server. Note that deployment of the `NetworkPolicy` is enabled by default, and egress allow policies must be explicitly defined in the `ExternalSecretsConfig` custom resource for the `external-secrets` component to fetch secrets from external providers.

For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/security_and_compliance/index#external-secrets-operator-config-net-policy[Configuring network policy for the operand].
For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/security_and_compliance/index#external-secrets-operator-config-net-policy[Configuring network policy for the operand].


[id="external-secrets-operator-release-notes-0-1-0_{context}"]

@@ -243,7 +243,7 @@ Support for the managed OIDC Discovery Provider Route::

* The `managedRoute` field is a boolean and is set to `true` by default. If set to `false`, the Operator stops managing the route, and the existing route is not deleted automatically. If set back to `true`, the Operator resumes managing the route. If a route does not exist, the Operator creates a new one. If a route already exists, the Operator overrides the user configuration if a conflict exists.

* The `externalSecretRef` field references an externally managed Secret that has the TLS certificate for the `oidc-discovery-provider` Route host. When provided, this populates the route's `.Spec.TLS.ExternalCertificate` field. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.20/html-single/ingress_and_load_balancing/index#nw-ingress-route-secret-load-external-cert_secured-routes[Creating a route with externally managed certificate].
* The `externalSecretRef` field references an externally managed Secret that has the TLS certificate for the `oidc-discovery-provider` Route host. When provided, this populates the route's `.Spec.TLS.ExternalCertificate` field. For more information, see link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html-single/ingress_and_load_balancing/index#nw-ingress-route-secret-load-external-cert_secured-routes[Creating a route with externally managed certificate].

Enabling the custom Certificate Authority Time-To-Live for the SPIRE bundle::


@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
[id="updating-cluster-prepare"]
= Preparing to update to {product-title} 4.20
= Preparing to update to {product-title} 4.21
include::_attributes/common-attributes.adoc[]
:context: updating-cluster-prepare
