
OSDOCS-4855: Updating version numbers from 4.12 to 4.13

Andrew Taylor
2023-04-05 14:40:47 -04:00
parent af1ab0ae65
commit 40aa06df4a
85 changed files with 199 additions and 200 deletions

View File

@@ -37,5 +37,5 @@ include::modules/verifying-the-assumed-iam-role-in-your-pod.adoc[leveloffset=+2]
* For more information about installing and using the AWS Boto3 SDK for Python, see the link:https://boto3.amazonaws.com/v1/documentation/api/latest/index.html[AWS Boto3 documentation].
ifdef::openshift-rosa,openshift-dedicated[]
-* For general information about webhook admission plugins for OpenShift, see link:https://docs.openshift.com/container-platform/4.12/architecture/admission-plug-ins.html#admission-webhooks-about_admission-plug-ins[Webhook admission plugins] in the OpenShift Container Platform documentation.
+* For general information about webhook admission plugins for OpenShift, see link:https://docs.openshift.com/container-platform/4.13/architecture/admission-plug-ins.html#admission-webhooks-about_admission-plug-ins[Webhook admission plugins] in the OpenShift Container Platform documentation.
endif::openshift-rosa,openshift-dedicated[]

View File

@@ -13,7 +13,7 @@ include::modules/storage-persistent-storage-overview.adoc[leveloffset=+1]
[id="additional-resources_understanding-persistent-storage-microshift"]
[role="_additional-resources"]
.Additional resources
-* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/storage/understanding-persistent-storage#pv-access-modes_understanding-persistent-storage[Access modes for persistent storage]
+* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/storage/understanding-persistent-storage#pv-access-modes_understanding-persistent-storage[Access modes for persistent storage]
include::modules/storage-persistent-storage-lifecycle.adoc[leveloffset=+1]

View File

@@ -31,4 +31,4 @@ The *Developer* perspective provides workflows specific to developer use cases,
You can use the *Topology* view to display applications, components, and workloads of your project. If you have no workloads in the project, the *Topology* view will show some links to create or import them. You can also use the *Quick Search* to import components directly.
.Additional Resources
-See link:https://docs.openshift.com/container-platform/4.12/applications/odc-viewing-application-composition-using-topology-view.html[Viewing application composition using the Topology] view for more information on using the *Topology* view in *Developer* perspective.
+See link:https://docs.openshift.com/container-platform/4.13/applications/odc-viewing-application-composition-using-topology-view.html[Viewing application composition using the Topology] view for more information on using the *Topology* view in *Developer* perspective.

View File

@@ -101,7 +101,7 @@ $ curl -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" -H "Author
.Example output
[source,terminal]
----
-https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12
+https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.13
----
. Download the ISO:
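+
A minimal sketch of the download step, with `<image_url>` standing in for the URL returned by the previous command; the exact command may differ:
+
[source,terminal]
----
$ curl -L "<image_url>" -o rhcos-live-minimal.iso
----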

View File

@@ -14,9 +14,9 @@ The cluster also contains the definition for the `bootstrap` role. Because the b
== Control plane and node host compatibility
-The {product-title} version must match between control plane host and node host. For example, in a 4.12 cluster, all control plane hosts must be 4.12 and all nodes must be 4.12.
+The {product-title} version must match between control plane host and node host. For example, in a 4.13 cluster, all control plane hosts must be 4.13 and all nodes must be 4.13.
-Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from {product-title} 4.11 to 4.12, some nodes will upgrade to 4.12 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible.
+Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from {product-title} 4.12 to 4.13, some nodes will upgrade to 4.13 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible.
The `kubelet` service must not be newer than `kube-apiserver`, and can be up to two minor versions older depending on whether your {product-title} version is odd or even. The table below shows the appropriate version compatibility:
@@ -34,8 +34,8 @@ The `kubelet` service must not be newer than `kube-apiserver`, and can be up to
|===
[.small]
--
-1. For example, {product-title} 4.9, 4.11.
-2. For example, {product-title} 4.8, 4.10, 4.12.
+1. For example, {product-title} 4.11, 4.13.
+2. For example, {product-title} 4.10, 4.12.
--
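As an illustrative check, not part of this commit, you can compare the API server and kubelet versions with standard `oc` commands:

[source,terminal]
----
$ oc version <1>
$ oc get nodes <2>
----
<1> The server version reflects `kube-apiserver`.
<2> The `VERSION` column shows the kubelet version running on each node.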
[id="defining-workers_{context}"]

View File

@@ -47,7 +47,7 @@ For {op-system-base-full}, you can install the OpenShift CLI (`oc`) as an RPM if
+
[source,terminal]
----
-# subscription-manager repos --enable="rhocp-4.12-for-rhel-8-x86_64-rpms"
+# subscription-manager repos --enable="rhocp-4.13-for-rhel-8-x86_64-rpms"
----
+
[NOTE]

View File

@@ -25,7 +25,7 @@ Vector does not support FIPS Enabled Clusters.
.Prerequisites
-* {product-title}: 4.12
+* {product-title}: 4.13
* {logging-title-uc}: 5.4
* FIPS disabled

View File

@@ -20,7 +20,7 @@ You can use the {product-title} web console to install the Loki Operator.
.Prerequisites
-* {product-title}: 4.12
+* {product-title}: 4.13
* {logging-title-uc}: 5.4
To install the Loki Operator using the {product-title} web console:

View File

@@ -28,7 +28,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 40-worker-custom-journald
labels:

View File

@@ -30,7 +30,7 @@ Vector does not support FIPS Enabled Clusters.
.Prerequisites
-* {product-title}: 4.12
+* {product-title}: 4.13
* {logging-title-uc}: 5.4
* FIPS disabled

View File

@@ -371,7 +371,7 @@ annotations:
[source,terminal]
----
$ podman run --entrypoint performance-profile-creator -v \
-/must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.12 \
+/must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.13 \
--mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true \
--split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node \
--must-gather-dir-path /must-gather -power-consumption-mode=low-latency \ <1>

View File

@@ -46,7 +46,7 @@ Before removal, the Additional trust bundle section appears, redacting its value
Name: <cluster_name>
ID: <cluster_internal_id>
External ID: <cluster_external_id>
-OpenShift Version: 4.12.0
+OpenShift Version: 4.13.0
Channel Group: stable
DNS: <dns>
AWS Account: <aws_account_id>
@@ -76,7 +76,7 @@ After removing the proxy, the Additional trust bundle section is removed:
Name: <cluster_name>
ID: <cluster_internal_id>
External ID: <cluster_external_id>
-OpenShift Version: 4.12.0
+OpenShift Version: 4.13.0
Channel Group: stable
DNS: <dns>
AWS Account: <aws_account_id>

View File

@@ -19,7 +19,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 51-worker-rh-registry-trust
labels:

View File

@@ -24,7 +24,7 @@ When you configure a custom layered image, {product-title} no longer automatical
You should use the same base {op-system} image that is installed on the rest of your cluster. Use the `oc adm release info --image-for rhel-coreos-8` command to obtain the base image being used in your cluster.
====
+
-For example, the following Containerfile creates a custom layered image from an {product-title} 4.12 image and a Hotfix package:
+For example, the following Containerfile creates a custom layered image from an {product-title} 4.13 image and a Hotfix package:
+
.Example Containerfile for a custom layer image
[source,yaml]
@@ -127,7 +127,7 @@ Name: rendered-master-4e8be63aef68b843b546827b6ebe0913
Namespace:
Labels: <none>
Annotations: machineconfiguration.openshift.io/generated-by-controller-version: 8276d9c1f574481043d3661a1ace1f36cd8c3b62
-machineconfiguration.openshift.io/release-image-version: 4.12.0-ec.3
+machineconfiguration.openshift.io/release-image-version: 4.13.0-ec.3
API Version: machineconfiguration.openshift.io/v1
Kind: MachineConfig
...

View File

@@ -69,11 +69,11 @@ The modified `sshd_config` file overrides the default `sshd_config` file.
$ butane -p embedded.yaml -d files-dir/ > embedded.ign
----
-. Once the Ignition file is created, you can include the configuration in a new live {op-system} ISO, which is named `rhcos-sshd-4.12.0-x86_64-live.x86_64.iso`, with the `coreos-installer` utility:
+. Once the Ignition file is created, you can include the configuration in a new live {op-system} ISO, which is named `rhcos-sshd-4.13.0-x86_64-live.x86_64.iso`, with the `coreos-installer` utility:
+
[source,terminal]
----
-$ coreos-installer iso ignition embed -i embedded.ign rhcos-4.12.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.12.0-x86_64-live.x86_64.iso
+$ coreos-installer iso ignition embed -i embedded.ign rhcos-4.13.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.13.0-x86_64-live.x86_64.iso
----
.Verification
@@ -82,7 +82,7 @@ $ coreos-installer iso ignition embed -i embedded.ign rhcos-4.12.0-x86_64-live.x
+
[source,terminal]
----
-# coreos-installer iso ignition show rhcos-sshd-4.12.0-x86_64-live.x86_64.iso
+# coreos-installer iso ignition show rhcos-sshd-4.13.0-x86_64-live.x86_64.iso
----
+

View File

@@ -23,7 +23,7 @@ To override the current upgrade value for existing deployments, change the value
.Prerequisites
-* You are running OpenShift Jenkins on {product-title} 4.12.
+* You are running OpenShift Jenkins on {product-title} 4.13.
* You know the namespace where OpenShift Jenkins is deployed.
.Procedure

View File

@@ -39,7 +39,7 @@ For {op-system-base-full}, you can install the `{odo-title}` CLI as an RPM.
+
[source,terminal]
----
-# subscription-manager repos --enable="ocp-tools-4.12-for-rhel-8-x86_64-rpms"
+# subscription-manager repos --enable="ocp-tools-4.13-for-rhel-8-x86_64-rpms"
----
. Install the `{odo-title}` package:

View File

@@ -102,7 +102,7 @@ $ oc get csv -n openshift-operators
[source,terminal]
----
NAME DISPLAY VERSION REPLACES PHASE
-node-maintenance-operator.v4.12 Node Maintenance Operator 4.12 Succeeded
+node-maintenance-operator.v4.13 Node Maintenance Operator 4.13 Succeeded
----
. Verify that the Node Maintenance Operator is running:
+

View File

@@ -20,6 +20,6 @@ As a cluster administrator, you can enable the capabilities by setting `baseline
$ oc patch clusterversion version --type merge -p '{"spec":{"capabilities":{"baselineCapabilitySet":"vCurrent"}}}' <1>
----
+
-<1> For `baselineCapabilitySet` you can specify `vCurrent`, `v4.11`, `v4.12`, or `None`.
+<1> For `baselineCapabilitySet` you can specify `vCurrent`, `v4.12`, `v4.13`, or `None`.
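To confirm the resulting setting, you can read the same field back; a sketch using the standard `version` resource:

[source,terminal]
----
$ oc get clusterversion version -o jsonpath='{.spec.capabilities.baselineCapabilitySet}'
----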
include::snippets/capabilities-table.adoc[]

View File

@@ -119,14 +119,14 @@ Check that there are no cluster Operators with the `DEGRADED` condition set to `
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
-authentication 4.12.0 True False False 59m
-cloud-credential 4.12.0 True False False 85m
-cluster-autoscaler 4.12.0 True False False 73m
-config-operator 4.12.0 True False False 73m
-console 4.12.0 True False False 62m
-csi-snapshot-controller 4.12.0 True False False 66m
-dns 4.12.0 True False False 76m
-etcd 4.12.0 True False False 76m
+authentication 4.13.0 True False False 59m
+cloud-credential 4.13.0 True False False 85m
+cluster-autoscaler 4.13.0 True False False 73m
+config-operator 4.13.0 True False False 73m
+console 4.13.0 True False False 62m
+csi-snapshot-controller 4.13.0 True False False 66m
+dns 4.13.0 True False False 76m
+etcd 4.13.0 True False False 76m
...
----

View File

@@ -20,7 +20,7 @@ You must run a gather operation to create an Insights Operator archive.
+
[source,yaml]
----
-include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.12/docs/gather-job.yaml[]
+include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.13/docs/gather-job.yaml[]
----
. Copy your `insights-operator` image version:
+

View File

@@ -58,7 +58,7 @@ variable:
----
$ export RHCOS_VERSION=<version> <1>
----
-<1> The {op-system} VMDK version, like `4.12.0`.
+<1> The {op-system} VMDK version, like `4.13.0`.
. Export the Amazon S3 bucket name as an environment variable:
+

View File

@@ -68,37 +68,37 @@ $ watch -n5 oc get clusteroperators
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
-authentication 4.12.0 True False False 19m
-baremetal 4.12.0 True False False 37m
-cloud-credential 4.12.0 True False False 40m
-cluster-autoscaler 4.12.0 True False False 37m
-config-operator 4.12.0 True False False 38m
-console 4.12.0 True False False 26m
-csi-snapshot-controller 4.12.0 True False False 37m
-dns 4.12.0 True False False 37m
-etcd 4.12.0 True False False 36m
-image-registry 4.12.0 True False False 31m
-ingress 4.12.0 True False False 30m
-insights 4.12.0 True False False 31m
-kube-apiserver 4.12.0 True False False 26m
-kube-controller-manager 4.12.0 True False False 36m
-kube-scheduler 4.12.0 True False False 36m
-kube-storage-version-migrator 4.12.0 True False False 37m
-machine-api 4.12.0 True False False 29m
-machine-approver 4.12.0 True False False 37m
-machine-config 4.12.0 True False False 36m
-marketplace 4.12.0 True False False 37m
-monitoring 4.12.0 True False False 29m
-network 4.12.0 True False False 38m
-node-tuning 4.12.0 True False False 37m
-openshift-apiserver 4.12.0 True False False 32m
-openshift-controller-manager 4.12.0 True False False 30m
-openshift-samples 4.12.0 True False False 32m
-operator-lifecycle-manager 4.12.0 True False False 37m
-operator-lifecycle-manager-catalog 4.12.0 True False False 37m
-operator-lifecycle-manager-packageserver 4.12.0 True False False 32m
-service-ca 4.12.0 True False False 38m
-storage 4.12.0 True False False 37m
+authentication 4.13.0 True False False 19m
+baremetal 4.13.0 True False False 37m
+cloud-credential 4.13.0 True False False 40m
+cluster-autoscaler 4.13.0 True False False 37m
+config-operator 4.13.0 True False False 38m
+console 4.13.0 True False False 26m
+csi-snapshot-controller 4.13.0 True False False 37m
+dns 4.13.0 True False False 37m
+etcd 4.13.0 True False False 36m
+image-registry 4.13.0 True False False 31m
+ingress 4.13.0 True False False 30m
+insights 4.13.0 True False False 31m
+kube-apiserver 4.13.0 True False False 26m
+kube-controller-manager 4.13.0 True False False 36m
+kube-scheduler 4.13.0 True False False 36m
+kube-storage-version-migrator 4.13.0 True False False 37m
+machine-api 4.13.0 True False False 29m
+machine-approver 4.13.0 True False False 37m
+machine-config 4.13.0 True False False 36m
+marketplace 4.13.0 True False False 37m
+monitoring 4.13.0 True False False 29m
+network 4.13.0 True False False 38m
+node-tuning 4.13.0 True False False 37m
+openshift-apiserver 4.13.0 True False False 32m
+openshift-controller-manager 4.13.0 True False False 30m
+openshift-samples 4.13.0 True False False 32m
+operator-lifecycle-manager 4.13.0 True False False 37m
+operator-lifecycle-manager-catalog 4.13.0 True False False 37m
+operator-lifecycle-manager-packageserver 4.13.0 True False False 32m
+service-ca 4.13.0 True False False 38m
+storage 4.13.0 True False False 37m
----
+
Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials:
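+
A typical form of that command (a sketch, with `<installation_directory>` as a placeholder):
+
[source,terminal]
----
$ openshift-install --dir <installation_directory> wait-for install-complete
----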

View File

@@ -86,7 +86,7 @@ $ ls $HOME/clusterconfig/openshift/
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
labels:
machineconfiguration.openshift.io/role: worker

View File

@@ -88,7 +88,7 @@ $ ls $HOME/clusterconfig/openshift/
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
labels:
machineconfiguration.openshift.io/role: worker

View File

@@ -37,36 +37,36 @@ $ watch -n5 oc get clusteroperators
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
-authentication 4.12.0 True False False 19m
-baremetal 4.12.0 True False False 37m
-cloud-credential 4.12.0 True False False 40m
-cluster-autoscaler 4.12.0 True False False 37m
-config-operator 4.12.0 True False False 38m
-console 4.12.0 True False False 26m
-csi-snapshot-controller 4.12.0 True False False 37m
-dns 4.12.0 True False False 37m
-etcd 4.12.0 True False False 36m
-image-registry 4.12.0 True False False 31m
-ingress 4.12.0 True False False 30m
-insights 4.12.0 True False False 31m
-kube-apiserver 4.12.0 True False False 26m
-kube-controller-manager 4.12.0 True False False 36m
-kube-scheduler 4.12.0 True False False 36m
-kube-storage-version-migrator 4.12.0 True False False 37m
-machine-api 4.12.0 True False False 29m
-machine-approver 4.12.0 True False False 37m
-machine-config 4.12.0 True False False 36m
-marketplace 4.12.0 True False False 37m
-monitoring 4.12.0 True False False 29m
-network 4.12.0 True False False 38m
-node-tuning 4.12.0 True False False 37m
-openshift-apiserver 4.12.0 True False False 32m
-openshift-controller-manager 4.12.0 True False False 30m
-openshift-samples 4.12.0 True False False 32m
-operator-lifecycle-manager 4.12.0 True False False 37m
-operator-lifecycle-manager-catalog 4.12.0 True False False 37m
-operator-lifecycle-manager-packageserver 4.12.0 True False False 32m
-service-ca 4.12.0 True False False 38m
-storage 4.12.0 True False False 37m
+authentication 4.13.0 True False False 19m
+baremetal 4.13.0 True False False 37m
+cloud-credential 4.13.0 True False False 40m
+cluster-autoscaler 4.13.0 True False False 37m
+config-operator 4.13.0 True False False 38m
+console 4.13.0 True False False 26m
+csi-snapshot-controller 4.13.0 True False False 37m
+dns 4.13.0 True False False 37m
+etcd 4.13.0 True False False 36m
+image-registry 4.13.0 True False False 31m
+ingress 4.13.0 True False False 30m
+insights 4.13.0 True False False 31m
+kube-apiserver 4.13.0 True False False 26m
+kube-controller-manager 4.13.0 True False False 36m
+kube-scheduler 4.13.0 True False False 36m
+kube-storage-version-migrator 4.13.0 True False False 37m
+machine-api 4.13.0 True False False 29m
+machine-approver 4.13.0 True False False 37m
+machine-config 4.13.0 True False False 36m
+marketplace 4.13.0 True False False 37m
+monitoring 4.13.0 True False False 29m
+network 4.13.0 True False False 38m
+node-tuning 4.13.0 True False False 37m
+openshift-apiserver 4.13.0 True False False 32m
+openshift-controller-manager 4.13.0 True False False 30m
+openshift-samples 4.13.0 True False False 32m
+operator-lifecycle-manager 4.13.0 True False False 37m
+operator-lifecycle-manager-catalog 4.13.0 True False False 37m
+operator-lifecycle-manager-packageserver 4.13.0 True False False 32m
+service-ca 4.13.0 True False False 38m
+storage 4.13.0 True False False 37m
----
. Configure the Operators that are not available.

View File

@@ -70,7 +70,7 @@ You can install {product-title} version {product-version} on the following IBM h
[NOTE]
====
-Support for {op-system} functionality for IBM z13 all models, {linuxoneProductName} Emperor, and {linuxoneProductName} Rockhopper is deprecated. These hardware models remain fully supported in {product-title} 4.12. However, Red Hat recommends that you use later hardware models.
+Support for {op-system} functionality for IBM z13 all models, {linuxoneProductName} Emperor, and {linuxoneProductName} Rockhopper is deprecated. These hardware models remain fully supported in {product-title} 4.13. However, Red Hat recommends that you use later hardware models.
====
[id="minimum-ibm-z-system-requirements_{context}"]

View File

@@ -19,7 +19,7 @@ You can use Butane to produce a `MachineConfig` object so that you can configure
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 99-worker-custom
labels:

View File

@@ -38,7 +38,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 99-worker-chrony <1>
labels:

View File

@@ -388,7 +388,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 99-simple-kmod
labels:

View File

@@ -27,7 +27,7 @@ Butane is a command-line utility that {product-title} uses to provide convenient
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: raid1-storage
labels:
@@ -70,7 +70,7 @@ storage:
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: raid1-alt-storage
labels:

View File

@@ -62,7 +62,7 @@ For example, the `threshold` value of `2` in the following configuration can be
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: worker-storage
labels:
@@ -209,7 +209,7 @@ For example, to configure storage for compute nodes, create a `$HOME/clusterconf
.Butane config example for a boot device
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: worker-storage <1>
labels:

View File

@@ -132,7 +132,7 @@ $ openshift-install create manifests --dir <installation_directory>
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
labels:
machineconfiguration.openshift.io/role: worker

View File

@@ -35,7 +35,7 @@ See "Creating machine configs with Butane" for information about Butane.
.Butane config example
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 99-master-chrony-conf-override
labels:
@@ -93,7 +93,7 @@ $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yam
.Butane config example
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 99-worker-chrony-conf-override
labels:

View File

@@ -66,7 +66,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 99-master-chrony
labels:

View File

@@ -99,5 +99,5 @@ oc get clusterserviceversion -n openshift-nmstate \
[source, terminal]
----
Name Phase
-kubernetes-nmstate-operator.4.12.0-202210210157 Succeeded
+kubernetes-nmstate-operator.4.13.0-202210210157 Succeeded
----

View File

@@ -27,7 +27,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 40-worker-custom-journald
labels:

View File

@@ -17,9 +17,9 @@ endif::[]
The Cloud Credential Operator (CCO) `Upgradable` status for a cluster with manually maintained credentials is `False` by default.
-* For minor releases, for example, from 4.11 to 4.12, this status prevents you from upgrading until you have addressed any updated permissions and annotated the `CloudCredential` resource to indicate that the permissions are updated as needed for the next version. This annotation changes the `Upgradable` status to `True`.
+* For minor releases, for example, from 4.12 to 4.13, this status prevents you from upgrading until you have addressed any updated permissions and annotated the `CloudCredential` resource to indicate that the permissions are updated as needed for the next version. This annotation changes the `Upgradable` status to `True`.
-* For z-stream releases, for example, from 4.12.0 to 4.12.1, no permissions are added or changed, so the upgrade is not blocked.
+* For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the upgrade is not blocked.
Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. Additionally, you must review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components.
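A sketch of the annotation step, assuming the `cloudcredential.openshift.io/upgradeable-to` annotation key; verify the exact key for your release:

[source,terminal]
----
$ oc patch cloudcredential cluster --type=merge \
  --patch '{"metadata":{"annotations":{"cloudcredential.openshift.io/upgradeable-to":"4.13"}}}'
----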

View File

@@ -85,7 +85,7 @@ metadata:
name: clusterresourceoverride
namespace: clusterresourceoverride-operator
spec:
-channel: "4.12"
+channel: "4.13"
name: clusterresourceoverride
source: redhat-operators
sourceNamespace: openshift-marketplace

View File

@@ -213,14 +213,14 @@ $ oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o jso
+
[source,terminal]
----
-jq .spec.template.spec.providerSpec.value.machineType ocp_4.12_machineset-a2-highgpu-1g.json
+jq .spec.template.spec.providerSpec.value.machineType ocp_4.13_machineset-a2-highgpu-1g.json
"a2-highgpu-1g"
----
+
-The `<output_file.json>` file is saved as `ocp_4.12_machineset-a2-highgpu-1g.json`.
+The `<output_file.json>` file is saved as `ocp_4.13_machineset-a2-highgpu-1g.json`.
-. Update the following fields in `ocp_4.12_machineset-a2-highgpu-1g.json`:
+. Update the following fields in `ocp_4.13_machineset-a2-highgpu-1g.json`:
+
* Change `.metadata.name` to a name containing `gpu`.
@@ -244,7 +244,7 @@ to match the new `.metadata.name`.
+
[source,terminal]
----
-$ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.12_machineset-a2-highgpu-1g.json -
+$ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.13_machineset-a2-highgpu-1g.json -
----
+
.Example output
@@ -274,7 +274,7 @@ $ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o jso
+
[source,terminal]
----
-$ oc create -f ocp_4.12_machineset-a2-highgpu-1g.json
+$ oc create -f ocp_4.13_machineset-a2-highgpu-1g.json
----
+
.Example output

View File

@@ -53,7 +53,7 @@ spec:
...
----
-. Update your cluster to {product-title} 4.12.
+. Update your cluster to {product-title} 4.13.
. Set the allowed source ranges API for the `ingresscontroller` by running the following command:
+

View File

@@ -79,7 +79,7 @@ $ oc get ip -n openshift-ingress-node-firewall
[source,terminal]
----
NAME CSV APPROVAL APPROVED
-install-5cvnz ingress-node-firewall.4.12.0-202211122336 Automatic true
+install-5cvnz ingress-node-firewall.4.13.0-202211122336 Automatic true
----
. To verify the version of the Operator, enter the following command:
@@ -94,7 +94,7 @@ $ oc get csv -n openshift-ingress-node-firewall
[source,terminal]
----
NAME DISPLAY VERSION REPLACES PHASE
-ingress-node-firewall.4.12.0-202211122336 Ingress Node Firewall Operator 4.12.0-202211122336 ingress-node-firewall.4.12.0-202211102047 Succeeded
+ingress-node-firewall.4.13.0-202211122336 Ingress Node Firewall Operator 4.13.0-202211122336 ingress-node-firewall.4.13.0-202211102047 Succeeded
----
[id="install-operator-web-console_{context}"]

View File

@@ -61,4 +61,3 @@ $ oc get csv -n metallb-system
NAME DISPLAY VERSION REPLACES PHASE
metallb-operator.4.{product-version}.0-202207051316 MetalLB Operator 4.{product-version}.0-202207051316 Succeeded
----

View File

@@ -34,8 +34,8 @@ $ oc get csv
[source,terminal]
----
VERSION REPLACES PHASE
-4.12.0 metallb-operator.4.12-nnnnnnnnnnnn Installing
-4.12.0 Replacing
+4.13.0 metallb-operator.4.13-nnnnnnnnnnnn Installing
+4.13.0 Replacing
----
. Run `get csv` again to verify the output:
@@ -49,5 +49,5 @@ $ oc get csv
[source,terminal]
----
NAME DISPLAY VERSION REPLACES PHASE
-metallb-operator.4.12-nnnnnnnnnnnn MetalLB 4.12.0 metallb-operator.v4.12.0 Succeeded
+metallb-operator.4.13-nnnnnnnnnnnn MetalLB 4.13.0 metallb-operator.v4.13.0 Succeeded
----

View File

@@ -7,7 +7,7 @@
[id="nw-openstack-external-ccm_{context}"]
= The OpenStack Cloud Controller Manager
-In {product-title} 4.12, clusters that run on {rh-openstack-first} are switched from the legacy OpenStack cloud provider to the external OpenStack Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the link:https://kubernetes.io/docs/concepts/architecture/cloud-controller/[Cloud Controller Manager].
+Beginning with {product-title} 4.12, clusters that run on {rh-openstack-first} were switched from the legacy OpenStack cloud provider to the external OpenStack Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the link:https://kubernetes.io/docs/concepts/architecture/cloud-controller/[Cloud Controller Manager].
To preserve user-defined configurations for the legacy cloud provider, existing configurations are mapped to new ones as part of the migration process. It searches for a configuration called `cloud-provider-config` in the `openshift-config` namespace.
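For example, you can inspect that configuration directly, using the config map and namespace named above:

[source,terminal]
----
$ oc get configmap cloud-provider-config -n openshift-config -o yaml
----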

View File

@@ -99,5 +99,5 @@ $ oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.statu
[source,terminal]
----
Name Phase
-4.12.0-202301261535 Succeeded
+4.13.0-202301261535 Succeeded
----

View File

@@ -50,7 +50,7 @@ Before removal, the proxy IP displays in a proxy section:
Name: <cluster_name>
ID: <cluster_internal_id>
External ID: <cluster_external_id>
-OpenShift Version: 4.12.0
+OpenShift Version: 4.13.0
Channel Group: stable
DNS: <dns>
AWS Account: <aws_account_id>
@@ -80,7 +80,7 @@ After removing the proxy, the proxy section is removed:
Name: <cluster_name>
ID: <cluster_internal_id>
External ID: <cluster_external_id>
-OpenShift Version: 4.12.0
+OpenShift Version: 4.13.0
Channel Group: stable
DNS: <dns>
AWS Account: <aws_account_id>

View File

@@ -92,7 +92,7 @@ $ oc get csv -n openshift-sriov-network-operator \
[source,terminal]
----
Name Phase
-sriov-network-operator.4.12.0-202310121402 Succeeded
+sriov-network-operator.4.13.0-202310121402 Succeeded
----
[id="install-operator-web-console_{context}"]

View File

@@ -16,7 +16,7 @@ Restic is not supported in the OADP on ROSA with AWS STS environment. Ensure the
.Prerequisites
* A ROSA OpenShift Cluster with the required access and tokens.
-* link:https://docs.openshift.com/container-platform/4.12/backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.html#oadp-creating-default-secret_installing-oadp-aws[A default Secret], if your backup and snapshot locations use the same credentials, or if you do not require a snapshot location.
+* link:https://docs.openshift.com/container-platform/4.13/backup_and_restore/application_backup_and_restore/installing/installing-oadp-aws.html#oadp-creating-default-secret_installing-oadp-aws[A default Secret], if your backup and snapshot locations use the same credentials, or if you do not require a snapshot location.
.Procedure
@@ -122,6 +122,6 @@ You are now ready to backup and restore OpenShift applications, as described in
* link:https://docs.openshift.com/rosa/rosa_architecture/rosa-understanding.html[Understanding ROSA with STS]
* link:https://docs.openshift.com/rosa/rosa_getting_started/rosa-sts-getting-started-workflow.html[Getting started with ROSA STS]
* link:https://docs.openshift.com/rosa/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.html[Creating a ROSA cluster with STS]
-* link:https://docs.openshift.com/container-platform/4.12/backup_and_restore/application_backup_and_restore/installing/about-installing-oadp.html[About installing OADP]
-* link:https://docs.openshift.com/container-platform/4.12/storage/container_storage_interface/persistent-storage-csi.html[Configuring CSI volumes]
+* link:https://docs.openshift.com/container-platform/4.13/backup_and_restore/application_backup_and_restore/installing/about-installing-oadp.html[About installing OADP]
+* link:https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/persistent-storage-csi.html[Configuring CSI volumes]
* link:https://docs.openshift.com/rosa/rosa_architecture/rosa_policy_service_definition/rosa-service-definition.html#rosa-sdpolicy-storage_rosa-service-definition[ROSA storage options]

View File

@@ -58,7 +58,7 @@ Do not continue to the next step until `PROGRESSING` is listed as `False`, as sh
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
-authentication 4.12.0 True False False 145m
+authentication 4.13.0 True False False 145m
----
. Check that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes.
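+
One way to watch the rollout, a sketch that assumes the per-node revision fields on the `kubeapiserver` resource:
+
[source,terminal]
----
$ oc get kubeapiserver -o jsonpath='{range .items[0].status.nodeStatuses[*]}{.nodeName}{"\t"}{.currentRevision}{"\n"}{end}'
----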
@@ -74,7 +74,7 @@ Do not continue to the next step until `PROGRESSING` is listed as `False`, as sh
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
-kube-apiserver 4.12.0 True False False 145m
+kube-apiserver 4.13.0 True False False 145m
----
+
If `PROGRESSING` is showing `True`, wait a few minutes and try again.

View File

@@ -44,11 +44,11 @@ storageConfig: <2>
mirror:
platform:
channels:
-- name: stable-4.12 <4>
+- name: stable-4.13 <4>
type: ocp
graph: true <5>
operators:
-- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 <6>
+- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 <6>
packages:
- name: serverless-operator <7>
channels:

View File

@@ -30,9 +30,9 @@ mirror:
architectures:
- "s390x"
channels:
-- name: stable-4.12
+- name: stable-4.13
operators:
-- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
+- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
helm:
repositories:
- name: redhat-helm-charts
@@ -60,7 +60,7 @@ storageConfig:
path: /home/user/metadata
mirror:
operators:
-- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
+- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
packages:
- name: rhacs-operator
channels:

View File

@@ -112,7 +112,7 @@ repositories:
[source,yaml]
----
operators:
-- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
+- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
packages:
- name: elasticsearch-operator
minVersion: '2.4.0'
@@ -120,7 +120,7 @@ operators:
|`mirror.operators.catalog`
|The Operator catalog to include in the image set.
-|String. For example: `registry.redhat.io/redhat/redhat-operator-index:v4.12`.
+|String. For example: `registry.redhat.io/redhat/redhat-operator-index:v4.13`.
|`mirror.operators.full`
|When `true`, downloads the full catalog, Operator package, or Operator channel.
@@ -133,7 +133,7 @@ operators:
[source,yaml]
----
operators:
-- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
+- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
packages:
- name: elasticsearch-operator
minVersion: '5.2.3-31'
@@ -149,7 +149,7 @@ operators:
|`mirror.operators.packages.channels.name`
|The Operator channel name, unique within a package, to include in the image set.
-|String. For example: `fast` or `stable-v4.12`.
+|String. For example: `fast` or `stable-v4.13`.
|`mirror.operators.packages.channels.maxVersion`
|The highest version of the Operator mirror across all channels in which it exists.
@@ -206,7 +206,7 @@ architectures:
----
channels:
- name: stable-4.10
-- name: stable-4.12
+- name: stable-4.13
----
|`mirror.platform.channels.full`
@@ -215,7 +215,7 @@ channels:
|`mirror.platform.channels.name`
|The name of the release channel.
-|String. For example: `stable-4.12`
+|String. For example: `stable-4.13`
|`mirror.platform.channels.minVersion`
|The minimum version of the referenced platform to be mirrored.
@@ -223,7 +223,7 @@ channels:
|`mirror.platform.channels.maxVersion`
|The highest version of the referenced platform to be mirrored.
-|String. For example: `4.12.1`
+|String. For example: `4.13.1`
|`mirror.platform.channels.shortestPath`
|Toggles shortest path mirroring or full range mirroring.

View File

@@ -31,7 +31,7 @@ kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
mirror:
operators:
-- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
+- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
packages:
- name: aws-load-balancer-operator
----

View File

@@ -13,7 +13,7 @@ You can access the *Administrator* and *Developer* perspective from the web cons
To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by the permissions of the user. The *Administrator* perspective is selected for users with access to all projects, while the *Developer* perspective is selected for users with limited access to their own projects.
.Additional Resources
-See link:https://docs.openshift.com/container-platform/4.12/web_console/adding-user-preferences.html[Adding User Preferences] for more information on changing perspectives.
+See link:https://docs.openshift.com/container-platform/4.13/web_console/adding-user-preferences.html[Adding User Preferences] for more information on changing perspectives.
.Procedure

View File

@@ -12,20 +12,20 @@ endif::[]
[id="olm-catalogsource-image-template_{context}"]
= Image template for custom catalog sources
-Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example {product-title} 4.12.
+Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example {product-title} 4.13.
-During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example during an upgrade from {product-title} 4.11 to 4.12, the `spec.image` field in the `CatalogSource` object for the `redhat-operators` catalog is updated from:
+During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example during an upgrade from {product-title} 4.12 to 4.13, the `spec.image` field in the `CatalogSource` object for the `redhat-operators` catalog is updated from:
[source,terminal]
----
-registry.redhat.io/redhat/redhat-operator-index:v4.11
+registry.redhat.io/redhat/redhat-operator-index:v4.12
----
to:
[source,terminal]
----
-registry.redhat.io/redhat/redhat-operator-index:v4.12
+registry.redhat.io/redhat/redhat-operator-index:v4.13
----
However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image.
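For example, a custom catalog can opt in by setting the template annotation; a sketch with a hypothetical catalog name and image repository:

[source,terminal]
----
$ oc annotate catalogsource my-custom-catalog -n openshift-marketplace \
  olm.catalogImageTemplate="quay.io/example/catalog:v{kube_major_version}.{kube_minor_version}"
----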
@@ -77,7 +77,7 @@ If the `spec.image` field and the `olm.catalogImageTemplate` annotation are both
If the `spec.image` field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition.
====
-For an {product-title} 4.12 cluster, which uses Kubernetes 1.25, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference:
+For an {product-title} 4.13 cluster, which uses Kubernetes 1.26, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference:
[source,terminal]
----

View File

@@ -61,5 +61,5 @@ $ oc get co platform-operators-aggregated
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
-platform-operators-aggregated 4.12.0-0 True False False 70s
+platform-operators-aggregated 4.13.0-0 True False False 70s
----

View File

@@ -79,7 +79,7 @@ Red Hat does not provide direct guidance on sizing your {product-title} cluster.
[id="cluster-maximums-major-releases-example-scenario_{context}"]
== Example scenario
-As an example, 500 worker nodes (m5.2xl) were tested, and are supported, using {product-title} 4.12, the OVN-Kubernetes network plugin, and the following workload objects:
+As an example, 500 worker nodes (m5.2xl) were tested, and are supported, using {product-title} 4.13, the OVN-Kubernetes network plugin, and the following workload objects:
* 200 namespaces, in addition to the defaults
* 60 pods per node; 30 server and 30 client pods (30k total)

View File

@@ -238,9 +238,9 @@ $ oc wait localvolume -n openshift-local-storage assisted-service --for conditio
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
-name: "4.12"
+name: "4.13"
spec:
-releaseImage: quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64
+releaseImage: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64
----
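The image set is applied like any other manifest; for example, assuming it is saved as `clusterimageset.yaml`:

[source,terminal]
----
$ oc apply -f clusterimageset.yaml
----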
. Create a manifest to import the agent installed cluster (that hosts the multicluster engine and the Assisted Service) as the hub cluster.

View File

@@ -36,12 +36,12 @@ To mirror your {product-title} image repository to your mirror registry, you can
architectures:
- "amd64"
channels:
-- name: stable-4.12 <4>
+- name: stable-4.13 <4>
type: ocp
additionalImages:
- name: registry.redhat.io/ubi8/ubi:latest
operators:
-- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 <5>
+- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 <5>
packages: <6>
- name: multicluster-engine <7>
- name: local-storage-operator <8>

View File

@@ -44,7 +44,7 @@ $ oc adm release info quay.io/openshift-release-dev/ocp-release:{product-version
+
.Example output
-The output for the `ocp-release:4.12.0-x86_64` image is as follows:
+The output for the `ocp-release:4.13.0-x86_64` image is as follows:
+
[source,terminal]
----

View File

@@ -37,7 +37,7 @@ spec:
instance: "" # instance is empty by default
topologyupdater: false # False by default
operand:
-image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12
+image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.13
imagePullPolicy: Always
workerConfig:
configData: |

View File

@@ -116,7 +116,7 @@ $ oc get clusteroperator image-registry
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
-image-registry 4.12 True False False 6h50m
+image-registry 4.13 True False False 6h50m
----
+
. Ensure that your registry is set to managed to enable building and pushing of images.
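+
A sketch of that patch, assuming the registry Operator's `configs.imageregistry.operator.openshift.io/cluster` resource:
+
[source,terminal]
----
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"managementState":"Managed"}}'
----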

View File

@@ -313,7 +313,7 @@ $ oc get clusteroperator baremetal
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
-baremetal 4.12.0 True False False 3d15h
+baremetal 4.13.0 True False False 3d15h
----
. Remove the old `BareMetalHost` object by running the following command:
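+
A typical form of that command (a sketch, with a placeholder host name):
+
[source,terminal]
----
$ oc delete bmh <host_name> -n openshift-machine-api
----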

View File

@@ -21,7 +21,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
labels:
machineconfiguration.openshift.io/role: worker

View File

@@ -81,7 +81,7 @@ As of {product-title} 4.11, the Ansible playbooks are provided only for {op-syst
[source,terminal]
----
# subscription-manager repos --disable=rhocp-4.11-for-rhel-8-x86_64-rpms \
---enable=rhocp-4.12-for-rhel-8-x86_64-rpms
+--enable=rhocp-4.13-for-rhel-8-x86_64-rpms
----
. Update a {op-system-base} worker machine:

View File

@@ -75,7 +75,7 @@ Note that this might take a few minutes if you have a large number of available
# subscription-manager repos \
--enable="rhel-8-for-x86_64-baseos-rpms" \
--enable="rhel-8-for-x86_64-appstream-rpms" \
---enable="rhocp-4.12-for-rhel-8-x86_64-rpms" \
+--enable="rhocp-4.13-for-rhel-8-x86_64-rpms" \
--enable="fast-datapath-for-rhel-8-x86_64-rpms"
----

View File

@@ -63,7 +63,7 @@ If you use SSH key-based authentication, you must manage the key with an SSH age
# subscription-manager repos \
--enable="rhel-8-for-x86_64-baseos-rpms" \
--enable="rhel-8-for-x86_64-appstream-rpms" \
---enable="rhocp-4.12-for-rhel-8-x86_64-rpms"
+--enable="rhocp-4.13-for-rhel-8-x86_64-rpms"
----
. Install the required packages, including `openshift-ansible`:
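+
A sketch of the install step; the exact package set may vary by release:
+
[source,terminal]
----
# yum install openshift-ansible openshift-clients jq
----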

View File

@@ -180,7 +180,7 @@ When using `--private-link`, the `--subnet-ids` argument is required and only on
|The Amazon Resource Name (ARN) of the role used by Red Hat Site Reliabilty Engineers (SREs) to enable access to the cluster account to provide support.
|--version
-|The version (string) of {product-title} that will be used to install the cluster or cluster resources, including `account-role`. Example: `4.12`
+|The version (string) of {product-title} that will be used to install the cluster or cluster resources, including `account-role`. Example: `4.13`
|--worker-iam-role string
|The Amazon Resource Name (ARN) of the IAM role that will be attached to compute instances.

View File

@@ -36,10 +36,10 @@ $ rosa list account-roles
----
I: Fetching account roles
ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION
-ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam::8744:role/ManagedOpenShift-ControlPlane-Role 4.12
-ManagedOpenShift-Installer-Role Installer arn:aws:iam::8744:role/ManagedOpenShift-Installer-Role 4.12
-ManagedOpenShift-Support-Role Support arn:aws:iam::8744:role/ManagedOpenShift-Support-Role 4.12
-ManagedOpenShift-Worker-Role Worker arn:aws:iam::8744:role/ManagedOpenShift-Worker-Role 4.12
+ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam::8744:role/ManagedOpenShift-ControlPlane-Role 4.13
+ManagedOpenShift-Installer-Role Installer arn:aws:iam::8744:role/ManagedOpenShift-Installer-Role 4.13
+ManagedOpenShift-Support-Role Support arn:aws:iam::8744:role/ManagedOpenShift-Support-Role 4.13
+ManagedOpenShift-Worker-Role Worker arn:aws:iam::8744:role/ManagedOpenShift-Worker-Role 4.13
----
. If they do not exist in your AWS account, create the required account-wide STS roles and policies by running the following command:
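+
A sketch of that command, assuming the `rosa create account-roles` subcommand with automatic mode:
+
[source,terminal]
----
$ rosa create account-roles --mode auto
----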

View File

@@ -68,7 +68,7 @@ I: AWS credentials are valid!
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Verifying whether OpenShift command-line tool is available...
-I: Current OpenShift Client Version: 4.12.0
+I: Current OpenShift Client Version: 4.13.0
I: Creating account roles
? Role prefix: ManagedOpenShift <1>
? Permissions boundary ARN (optional): <2>

View File

@@ -158,7 +158,7 @@ I: AWS credentials are valid!
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Verifying whether OpenShift command-line tool is available...
-I: Current OpenShift Client Version: 4.12.0
+I: Current OpenShift Client Version: 4.13.0
I: Creating account roles
? Role prefix: ManagedOpenShift <1>
? Permissions boundary ARN (optional): <2>

View File

@@ -23,7 +23,7 @@ The following table describes the interactive cluster creation mode options:
|Create an OpenShift cluster that uses the AWS Security Token Service (STS) to allocate temporary, limited-privilege credentials for component-specific AWS Identity and Access Management (IAM) roles. The service enables cluster components to make AWS API calls using secure cloud resource management practices. The default is `Yes`.
|`OpenShift version`
-|Select the version of OpenShift to install, for example `4.12`. The default is the latest version.
+|Select the version of OpenShift to install, for example `4.13`. The default is the latest version.
|`Installer role ARN`
|If you have more than one set of account roles in your AWS account for your cluster version, a list of installer role ARNs are provided. Select the ARN for the installer role that you want to use with your cluster. The cluster uses the account-wide roles and policies that relate to the selected installer role.

View File

@@ -23,7 +23,7 @@ You can customize the following ZTP custom resources to specify more details abo
clusterDeploymentRef:
name: ostest
imageSetRef:
-name: openshift-4.12
+name: openshift-4.13
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
@@ -71,9 +71,9 @@ spec:
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
-name: openshift-4.12
+name: openshift-4.13
spec:
-releaseImage: registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-06-06-025509
+releaseImage: registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2022-06-06-025509
----
*infra-env.yaml*

View File

@@ -12,11 +12,11 @@ During a customized installation, you create an `install-config.yaml` file that
[source,yaml]
----
capabilities:
-baselineCapabilitySet: v4.12 <1>
+baselineCapabilitySet: v4.13 <1>
additionalEnabledCapabilities: <2>
- CSISnapshot
- Console
- Storage
----
-<1> Defines a baseline set of capabilities to install. Valid values are `None`, `v4.11`, `v4.12`, and `vCurrent`. If you select `None`, all optional capabilities will be disabled. The default value is `vCurrent`, which enables all optional capabilities.
+<1> Defines a baseline set of capabilities to install. Valid values are `None`, `v4.12`, `v4.13`, and `vCurrent`. If you select `None`, all optional capabilities will be disabled. The default value is `vCurrent`, which enables all optional capabilities.
<2> Defines a list of capabilities to explicitly enable. These will be enabled in addition to the capabilities specified in `baselineCapabilitySet`.

View File

@@ -45,7 +45,7 @@ Both `http` and `event` trigger functions have the same template structure:
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
-<version>4.12</version>
+<version>4.13</version>
<scope>test</scope>
</dependency>
<dependency>

View File

@@ -47,7 +47,7 @@ metadata:
name: sriov-network-operator-subscription
namespace: openshift-sriov-network-operator
spec:
-channel: "4.12"
+channel: "4.13"
name: sriov-network-operator
config:
nodeSelector:
@@ -69,7 +69,7 @@ $ oc get csv -n openshift-sriov-network-operator
[source,terminal]
----
NAME DISPLAY VERSION REPLACES PHASE
-sriov-network-operator.4.12.0-202211021237 SR-IOV Network Operator 4.12.0-202211021237 sriov-network-operator.4.12.0-202210290517 Succeeded
+sriov-network-operator.4.13.0-202211021237 SR-IOV Network Operator 4.13.0-202211021237 sriov-network-operator.4.13.0-202210290517 Succeeded
----
. To verify that the SR-IOV pods are deployed, run the following command:
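+
A typical form of the verification (a sketch):
+
[source,terminal]
----
$ oc get pods -n openshift-sriov-network-operator
----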

View File

@@ -28,7 +28,7 @@ Create a `MachineConfig` object for cluster-wide configuration:
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 99-worker-kdump <1>
labels:

View File

@@ -48,7 +48,7 @@ metadata:
annotations:
kubernetes.io/description: |
Sysctl allowlist for nodes.
-release.openshift.io/version: 4.12.0-0.nightly-2022-11-16-003434
+release.openshift.io/version: 4.13.0-0.nightly-2022-11-16-003434
creationTimestamp: "2022-11-17T14:09:27Z"
name: cni-sysctl-allowlist
namespace: openshift-multus

View File

@@ -35,7 +35,7 @@ See "Creating machine configs with Butane" for information about Butane.
[source,yaml]
----
variant: openshift
-version: 4.12.0
+version: 4.13.0
metadata:
name: 100-worker-vfiopci
labels:

View File

@@ -20,12 +20,12 @@ Enable the {VirtProductName} repository for your version of {op-system-base-full
+
[source,terminal]
----
-# subscription-manager repos --enable cnv-4.12-for-rhel-8-x86_64-rpms
+# subscription-manager repos --enable cnv-4.13-for-rhel-8-x86_64-rpms
----
** To enable the repository for {op-system-base} 7, run:
+
[source,terminal]
----
-# subscription-manager repos --enable rhel-7-server-cnv-4.12-rpms
+# subscription-manager repos --enable rhel-7-server-cnv-4.13-rpms
----

View File

@@ -140,7 +140,7 @@ spec:
restartPolicy: Never
containers:
- name: vm-latency-checkup
-image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.12.0
+image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.13.0
securityContext:
allowPrivilegeEscalation: false
capabilities:

View File

@@ -16,7 +16,7 @@ This procedure is specific to the Amazon Web Services Elastic File System (AWS E
{product-title} is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS).
-Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
+Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, {product-title} installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the `openshift-cluster-csi-drivers` namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.
@@ -87,5 +87,5 @@ include::modules/persistent-storage-csi-olm-operator-uninstall.adoc[leveloffset=
[role="_additional-resources"]
== Additional resources
-* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]
+* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]

View File

@@ -16,7 +16,7 @@ This procedure is specific to the Amazon Web Services Elastic File System (AWS E
{product-title} is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS).
-Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
+Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, {product-title} installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the `openshift-cluster-csi-drivers` namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.
@@ -51,7 +51,7 @@ include::modules/persistent-storage-csi-efs-sts.adoc[leveloffset=+1]
* xref:../../storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-olm-operator-install_rosa-persistent-storage-aws-efs-csi[Installing the AWS EFS CSI Driver Operator]
-* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/authentication_and_authorization/index#cco-ccoctl-configuring_cco-mode-sts[Configuring the Cloud Credential Operator utility]
+* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/authentication_and_authorization/index#cco-ccoctl-configuring_cco-mode-sts[Configuring the Cloud Credential Operator utility]
:StorageClass: AWS EFS
:Provisioner: efs.csi.aws.com
@@ -80,5 +80,5 @@ include::modules/persistent-storage-csi-olm-operator-uninstall.adoc[leveloffset=
[role="_additional-resources"]
== Additional resources
-* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]
+* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]

View File

@@ -34,7 +34,7 @@ Collecting data about your environment minimizes the time required to analyze an
.Procedure
. xref:../../support/gathering-cluster-data.adoc#support_gathering_data_gathering-cluster-data[Collect must-gather data for the cluster].
-. link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.12/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary.
+. link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary.
. xref:../../virt/support/virt-collecting-virt-data.adoc#virt-using-virt-must-gather_virt-collecting-virt-data[Collect must-gather data for {VirtProductName}].
. xref:../../monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-as-an-administrator_managing-metrics[Collect Prometheus metrics for the cluster].