Merge pull request #70417 from openshift-cherrypick-robot/cherry-pick-70177-to-enterprise-4.15
@@ -38,5 +38,5 @@ include::modules/verifying-the-assumed-iam-role-in-your-pod.adoc[leveloffset=+2]
 * For more information about installing and using the AWS Boto3 SDK for Python, see the link:https://boto3.amazonaws.com/v1/documentation/api/latest/index.html[AWS Boto3 documentation].

 ifdef::openshift-rosa,openshift-dedicated[]
-* For general information about webhook admission plugins for OpenShift, see link:https://docs.openshift.com/container-platform/4.14/architecture/admission-plug-ins.html#admission-webhooks-about_admission-plug-ins[Webhook admission plugins] in the OpenShift Container Platform documentation.
+* For general information about webhook admission plugins for OpenShift, see link:https://docs.openshift.com/container-platform/4.15/architecture/admission-plug-ins.html#admission-webhooks-about_admission-plug-ins[Webhook admission plugins] in the OpenShift Container Platform documentation.
 endif::openshift-rosa,openshift-dedicated[]
@@ -36,11 +36,11 @@ You can schedule backups by creating a `Schedule` CR instead of a `Backup` CR. S
 [id="known-issues-backing-up-applications"]
 == Known issues

-{ocp} 4.14 enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process.
+{ocp} {product-version} enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process.

 This issue has been resolved in the OADP 1.1.6 and OADP 1.2.2 releases. Therefore, it is recommended that users upgrade to these releases.

-For more information, see xref:../../../backup_and_restore/application_backup_and_restore/troubleshooting.adoc#oadp-restic-restore-failing-psa-policy_oadp-troubleshooting[Restic restore partially failing on OCP 4.14 due to changed PSA policy].
+For more information, see xref:../../../backup_and_restore/application_backup_and_restore/troubleshooting.adoc#oadp-restic-restore-failing-psa-policy_oadp-troubleshooting[Restic restore partially failing on OCP 4.15 due to changed PSA policy].

 [role="_additional-resources"]
 .Additional resources
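For context only (not part of this commit): the hunk header above mentions creating a `Schedule` CR instead of a `Backup` CR. A minimal sketch of a Velero `Schedule` CR, assuming OADP's default `openshift-adp` namespace and a hypothetical `my-app` application namespace:

[source,yaml]
----
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup           # hypothetical name
  namespace: openshift-adp     # assumed OADP installation namespace
spec:
  schedule: "0 7 * * *"        # cron expression: run every day at 07:00 UTC
  template:                    # accepts the same fields as a Backup CR spec
    includedNamespaces:
    - my-app                   # hypothetical application namespace
    ttl: 720h0m0s              # keep each backup for 30 days
----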
@@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[]
 toc::[]


-As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} 4.14 will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation. Enabling cgroup v1 in {product-title} disables all cgroup v2 controllers and hierarchies in your cluster.
+As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} 4.15 will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 or later will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation. Enabling cgroup v1 in {product-title} disables all cgroup v2 controllers and hierarchies in your cluster.

 include::snippets/cgroupv2-vs-cgroupv1.adoc[]
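For reference only (not part of this commit): the cgroup version discussed above is switched through the cluster-scoped `Node` configuration object. A minimal sketch, assuming the `cgroupMode` field documented for recent {product-title} releases:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster        # the cluster-scoped node configuration singleton
spec:
  cgroupMode: "v1"     # switch from the default cgroup v2 back to cgroup v1
----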
@@ -33,7 +33,7 @@ include::modules/cpmso-ts-mhc-etcd-degraded.adoc[leveloffset=+1]
 [id="cpmso-troubleshooting-shiftstack-upgrade_{context}"]
 == Upgrading clusters that run on {rh-openstack}

-For clusters that run on {rh-openstack-first} that you upgrade from {product-title} 4.13 to 4.14, you might have to perform post-upgrade tasks before you can use control plane machine sets.
+For clusters that run on {rh-openstack-first} that were created with {product-title} 4.13 or earlier, you might have to perform post-upgrade tasks before you can use control plane machine sets.

 // TODO: Rejigger
 // Post-upgrade config for ShiftStack with machine AZs explicitly defined and rootVolumes w/out AZs
@@ -24,6 +24,6 @@ Commands for multi-node deployments, projects, and developer tooling are not sup
 == Additional resources

 * xref:..//microshift_cli_ref/microshift-oc-cli-install.adoc#microshift-oc-cli-install[Installing the OpenShift CLI tool for MicroShift].
-* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html/cli_tools/openshift-cli-oc[Detailed description of the OpenShift CLI (oc)].
+* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html/cli_tools/openshift-cli-oc[Detailed description of the OpenShift CLI (oc)].
 * link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9[Red Hat Enterprise Linux (RHEL) documentation for specific use cases].
 * xref:../microshift_configuring/microshift-cluster-access-kubeconfig.adoc#microshift-kubeconfig[Cluster access with kubeconfig]
@@ -31,4 +31,4 @@ The *Developer* perspective provides workflows specific to developer use cases,
 You can use the *Topology* view to display applications, components, and workloads of your project. If you have no workloads in the project, the *Topology* view will show some links to create or import them. You can also use the *Quick Search* to import components directly.

 .Additional Resources
-See link:https://docs.openshift.com/container-platform/4.14/applications/odc-viewing-application-composition-using-topology-view.html[Viewing application composition using the Topology] view for more information on using the *Topology* view in *Developer* perspective.
+See link:https://docs.openshift.com/container-platform/4.15/applications/odc-viewing-application-composition-using-topology-view.html[Viewing application composition using the Topology] view for more information on using the *Topology* view in *Developer* perspective.
@@ -8,7 +8,7 @@
 For some clusters that run on {rh-openstack-first} that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true:

-* You upgraded the cluster from {product-title} 4.13 to 4.14.
+* The upgraded cluster was created with {product-title} 4.13 or earlier.

 * The cluster infrastructure is installer-provisioned.

@@ -8,7 +8,7 @@
 For some clusters that run on {rh-openstack-first} that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true:

-* You upgraded the cluster from {product-title} 4.13 to 4.14.
+* The upgraded cluster was created with {product-title} 4.13 or earlier.

 * The cluster infrastructure is installer-provisioned.
@@ -43,7 +43,7 @@ providerSpec:
       name: openstack-cloud-credentials
       namespace: openshift-machine-api
     flavor: m1.xlarge
-    image: rhcos-4.14
+    image: rhcos-4.15
     kind: OpenstackProviderSpec
     metadata:
      creationTimestamp: null
@@ -19,7 +19,7 @@ The HyperShift Operator manages the lifecycle of hosted clusters that are repres
 ----
 apiVersion: v1
 data:
-    supported-versions: '{"versions":["4.14"]}'
+    supported-versions: '{"versions":["4.15"]}'
 kind: ConfigMap
 metadata:
   labels:
@@ -20,7 +20,7 @@ You must run a gather operation to create an Insights Operator archive.
 +
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.14/docs/gather-job.yaml[]
+include::https://raw.githubusercontent.com/openshift/insights-operator/release-4.15/docs/gather-job.yaml[]
 ----
 . Copy your `insights-operator` image version:
 +
@@ -20,10 +20,10 @@ bootstrap machine that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azure/04_bootstrap.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azure/04_bootstrap.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azurestack/04_bootstrap.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azurestack/04_bootstrap.json[]
 endif::ash[]
 ----
 ====

@@ -20,10 +20,10 @@ control plane machines that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azure/05_masters.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azure/05_masters.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azurestack/05_masters.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azurestack/05_masters.json[]
 endif::ash[]
 ----
 ====

@@ -20,10 +20,10 @@ stored {op-system-first} image that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azure/02_storage.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azure/02_storage.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azurestack/02_storage.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azurestack/02_storage.json[]
 endif::ash[]
 ----
 ====

@@ -20,10 +20,10 @@ VNet that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azure/01_vnet.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azure/01_vnet.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azurestack/01_vnet.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azurestack/01_vnet.json[]
 endif::ash[]
 ----
 ====

@@ -20,10 +20,10 @@ worker machines that you need for your {product-title} cluster:
 [source,json]
 ----
 ifndef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azure/06_workers.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azure/06_workers.json[]
 endif::ash[]
 ifdef::ash[]
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/azurestack/06_workers.json[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/azurestack/06_workers.json[]
 endif::ash[]
 ----
 ====
@@ -13,6 +13,6 @@ You can use the following CloudFormation template to deploy the bootstrap machin
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/aws/cloudformation/04_cluster_bootstrap.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/aws/cloudformation/04_cluster_bootstrap.yaml[]
 ----
 ====

@@ -14,6 +14,6 @@ machines that you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/aws/cloudformation/05_cluster_master_nodes.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/aws/cloudformation/05_cluster_master_nodes.yaml[]
 ----
 ====

@@ -14,7 +14,7 @@ objects and load balancers that you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/aws/cloudformation/02_cluster_infra.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/aws/cloudformation/02_cluster_infra.yaml[]
 ----
 ====

@@ -14,6 +14,6 @@ that you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/aws/cloudformation/03_cluster_security.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/aws/cloudformation/03_cluster_security.yaml[]
 ----
 ====

@@ -14,6 +14,6 @@ you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/aws/cloudformation/01_vpc.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/aws/cloudformation/01_vpc.yaml[]
 ----
 ====

@@ -14,6 +14,6 @@ that you need for your {product-title} cluster.
 ====
 [source,yaml]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/aws/cloudformation/06_cluster_worker_node.yaml[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/aws/cloudformation/06_cluster_worker_node.yaml[]
 ----
 ====
@@ -14,6 +14,6 @@ machine that you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/04_bootstrap.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/04_bootstrap.py[]
 ----
 ====

@@ -14,6 +14,6 @@ plane machines that you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/05_control_plane.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/05_control_plane.py[]
 ----
 ====

@@ -12,6 +12,6 @@ You can use the following Deployment Manager template to deploy the external loa
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/02_lb_ext.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/02_lb_ext.py[]
 ----
 ====

@@ -12,6 +12,6 @@ You can use the following Deployment Manager template to deploy the firewall rue
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/03_firewall.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/03_firewall.py[]
 ----
 ====

@@ -12,6 +12,6 @@ You can use the following Deployment Manager template to deploy the IAM roles th
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/03_iam.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/03_iam.py[]
 ----
 ====

@@ -12,7 +12,7 @@ You can use the following Deployment Manager template to deploy the internal loa
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/02_lb_int.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/02_lb_int.py[]
 ----
 ====

@@ -12,6 +12,6 @@ You can use the following Deployment Manager template to deploy the private DNS
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/02_dns.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/02_dns.py[]
 ----
 ====

@@ -14,6 +14,6 @@ you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/01_vpc.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/01_vpc.py[]
 ----
 ====

@@ -14,6 +14,6 @@ that you need for your {product-title} cluster:
 ====
 [source,python]
 ----
-include::https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/gcp/06_worker.py[]
+include::https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/gcp/06_worker.py[]
 ----
 ====
@@ -12,7 +12,7 @@ your {product-title} nodes.
 .Procedure

 ifndef::openshift-origin[]
-. Obtain the {op-system} image from the link:https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/[{op-system} image mirror] page.
+. Obtain the {op-system} image from the link:https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.15/[{op-system} image mirror] page.
 +
 [IMPORTANT]
 ====
@@ -7,7 +7,7 @@
 In {product-title}, you can use the tags for grouping resources and for managing resource access and cost. You can define the tags on the Azure resources in the `install-config.yaml` file only during {product-title} cluster creation. You cannot modify the user-defined tags after cluster creation.

-Support for user-defined tags is available only for the resources created in the Azure Public Cloud. User-defined tags are not supported for the {product-title} clusters upgraded to {product-title} 4.14.
+Support for user-defined tags is available only for the resources created in the Azure Public Cloud. User-defined tags are not supported for the {product-title} clusters upgraded to {product-title} {product-version}.

 User-defined and {product-title}-specific tags are applied only to the resources created by the {product-title} installer and its core operators, such as the Machine API Provider Azure Operator, the Cluster Ingress Operator, and the Cluster Image Registry Operator.
@@ -85,10 +85,10 @@ $ openshift-install coreos print-stream-json | grep '\.iso[^.]'
 [source,terminal]
 ifndef::openshift-origin[]
 ----
-"location": "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso",
-"location": "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso",
-"location": "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso",
-"location": "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso",
+"location": "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso",
+"location": "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso",
+"location": "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso",
+"location": "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso",
 ----
 endif::openshift-origin[]
 ifdef::openshift-origin[]

@@ -101,18 +101,18 @@ $ openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initra
 [source,terminal]
 ifndef::openshift-origin[]
 ----
-"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64"
-"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img"
-"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img"
-"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le"
-"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img"
-"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img"
-"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x"
-"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img"
-"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img"
-"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64"
-"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img"
-"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img"
+"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64"
+"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img"
+"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img"
+"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le"
+"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img"
+"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img"
+"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x"
+"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img"
+"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img"
+"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64"
+"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img"
+"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img"
 ----
 endif::openshift-origin[]
 ifdef::openshift-origin[]
@@ -79,7 +79,7 @@ If you plan to add more compute machines to your cluster after you finish instal
 ====

 ifndef::openshift-origin[]
-. Obtain the {op-system} OVA image. Images are available from the link:https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/[{op-system} image mirror] page.
+. Obtain the {op-system} OVA image. Images are available from the link:https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.15/[{op-system} image mirror] page.
 +
 [IMPORTANT]
 ====
@@ -14,7 +14,7 @@ You can define labels and tags for each GCP resource only during {product-title}
 [IMPORTANT]
 ====
-User-defined labels and tags are not supported for {product-title} clusters upgraded to {product-title} 4.14 version.
+User-defined labels and tags are not supported for {product-title} clusters upgraded to {product-title} {product-version}.
 ====

 .User-defined labels
@@ -10,7 +10,7 @@
 * You installed the {loki-op}.
 * You installed the {oc-first}.
 * You deployed link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/[{rh-storage}].
-* You configured your {rh-storage} cluster https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/adding-file-and-object-storage-to-an-existing-external-ocs-cluster[for object storage].
+* You configured your {rh-storage} cluster link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.15/html/managing_and_allocating_storage_resources/adding-file-and-object-storage-to-an-existing-external-ocs-cluster[for object storage].

 .Procedure
@@ -32,7 +32,7 @@ The output of this command includes pull specs for the available updates similar
 Recommended updates:

   VERSION     IMAGE
-  4.14.0      quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
+  4.15.0      quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
   ...
 ----
@@ -36,7 +36,7 @@ EOF
 +
 [NOTE]
 ====
-The wildcard `*` in the commands uses the latest {microshift-short} RPMs. If you need a specific version, substitute the wildcard for the version you want. For example, insert `4.14.1` to download the {microshift-short} 4.14.1 RPMs.
+The wildcard `*` in the commands uses the latest {microshift-short} RPMs. If you need a specific version, substitute the wildcard for the version you want. For example, insert `4.15.0` to download the {microshift-short} 4.15.0 RPMs.
 ====

 . Add the blueprint to the Image Builder by running the following command:
@@ -28,7 +28,7 @@ You can use Image Builder to create `rpm-ostree` system images with embedded {mi
 ----
 $ sudo dnf install -y microshift-release-info-<release_version>
 ----
-Replace `<release_version>` with the numerical value of the release you are deploying, using the entire version number, such as `4.14.0`.
+Replace `<release_version>` with the numerical value of the release you are deploying, using the entire version number, such as `4.15.0`.

 .. List the contents of the `/usr/share/microshift/release` directory to verify the presence of the release information files by running the following command:
 +
@@ -54,14 +54,14 @@ If you installed the `microshift-release-info` RPM, you can proceed to step 4.
 ----
 $ sudo dnf download microshift-release-info-<release_version>
 ----
-Replace `<release_version>` with the numerical value of the release you are deploying, using the entire version number, such as `4.14.0`.
+Replace `<release_version>` with the numerical value of the release you are deploying, using the entire version number, such as `4.15.0`.
 +
 .Example rpm
 [source,terminal]
 ----
-microshift-release-info-4.14.0.*.el9.noarch.rpm <1>
+microshift-release-info-4.15.0.*.el9.noarch.rpm <1>
 ----
-<1> The `*` represents the date and commit ID. Your output should contain both, for example `-202311101230.p0.g7dc6a00.assembly.4.14.0`.
+<1> The `*` represents the date and commit ID. Your output should contain both, for example `-202311101230.p0.g7dc6a00.assembly.4.15.0`.

 .. Unpack the RPM package without installing it by running the following command:
 +
@@ -16,7 +16,7 @@ endif::[]

 ifndef::openshift-origin[]
 ifdef::post[]
-As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} 4.14 will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation.
+As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} 4.14 or later will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 or later will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation.
 endif::post[]
 endif::openshift-origin[]
 ifdef::openshift-origin[]
@@ -16,5 +16,5 @@ The Ingress Node Firewall Operator supports only stateless firewall rules.
 Network interface controllers (NICs) that do not support native XDP drivers will run at a lower performance.

-For {product-title} 4.14, you must run Ingress Node Firewall Operator on {op-system-base} 9.0 or later.
+For {product-title} 4.14 or later, you must run the Ingress Node Firewall Operator on {op-system-base} 9.0 or later.
 ====
@@ -348,7 +348,7 @@ If you set this field to `true`, you do not receive the performance benefits of

 |`ipForwarding`
 |`object`
-|You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the `ipForwarding` specification in the `Network` resource. Specify `Restricted` to only allow IP forwarding for Kubernetes related traffic. Specify `Global` to allow forwarding of all IP traffic. For new installations, the default is `Restricted`. For updates to {product-title} 4.14, the default is `Global`.
+|You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the `ipForwarding` specification in the `Network` resource. Specify `Restricted` to only allow IP forwarding for Kubernetes related traffic. Specify `Global` to allow forwarding of all IP traffic. For new installations, the default is `Restricted`. For updates to {product-title} 4.14 or later, the default is `Global`.

 |====
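As illustration only (not part of this commit): the `ipForwarding` setting described in the table row above lives under the OVN-Kubernetes gateway configuration of the cluster `Network` operator resource. A minimal sketch:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster                  # the cluster-wide network operator configuration
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        ipForwarding: Restricted # Kubernetes-related traffic only; use Global to forward all IP traffic
----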
@@ -6,7 +6,7 @@
 [id="nw-ptp-wpc-hardware-pins-reference_{context}"]
 = Intel Westport Channel E810 hardware configuration reference

-Use this information to understand how to use the link:https://github.com/openshift/linuxptp-daemon/blob/release-4.14/addons/intel/e810.go[Intel E810-XXVDA4T hardware plugin] to configure the E810 network interface as PTP grandmaster clock.
+Use this information to understand how to use the link:https://github.com/openshift/linuxptp-daemon/blob/release-4.15/addons/intel/e810.go[Intel E810-XXVDA4T hardware plugin] to configure the E810 network interface as PTP grandmaster clock.
 Hardware pin configuration determines how the network interface interacts with other components and devices in the system.
 The E810-XXVDA4T NIC has four connectors for external 1PPS signals: `SMA1`, `SMA2`, `U.FL1`, and `U.FL2`.
@@ -26,7 +26,7 @@ If you do not use Google workload identity federation cloud authentication, cont
 .Prerequisites

-* You have installed a cluster in manual mode with link:https://docs.openshift.com/container-platform/4.14/installing/installing_gcp/installing-gcp-customizations.html#installing-gcp-with-short-term-creds_installing-gcp-customizations[GCP Workload Identity configured].
+* You have installed a cluster in manual mode with link:https://docs.openshift.com/container-platform/4.15/installing/installing_gcp/installing-gcp-customizations.html#installing-gcp-with-short-term-creds_installing-gcp-customizations[GCP Workload Identity configured].
 * You have access to the Cloud Credential Operator utility (`ccoctl`) and to the associated workload identity pool.

 .Procedure
@@ -59,7 +59,7 @@ $ oc -n openshift-adp create secret generic cloud-credentials \
 +
 [NOTE]
 ====
-In {product-title} versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM)
+In {product-title} versions 4.15 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM)
 and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the above
 secret; you only need to supply the role ARN during link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/operators/user-tasks#olm-installing-from-operatorhub-using-web-console_olm-installing-operators-in-namespace[the installation of OLM-managed operators via the {product-title} web console].
 The above secret is created automatically via CCO.
@@ -120,7 +120,7 @@ operators:

 |`mirror.operators.catalog`
 |The Operator catalog to include in the image set.
-|String. For example: `registry.redhat.io/redhat/redhat-operator-index:v4.14`.
+|String. For example: `registry.redhat.io/redhat/redhat-operator-index:v4.15`.

 |`mirror.operators.full`
 |When `true`, downloads the full catalog, Operator package, or Operator channel.

@@ -149,7 +149,7 @@ operators:

 |`mirror.operators.packages.channels.name`
 |The Operator channel name, unique within a package, to include in the image set.
-|String. For example: `fast` or `stable-v4.14`.
+|String. For example: `fast` or `stable-v4.15`.

 |`mirror.operators.packages.channels.maxVersion`
 |The highest version of the Operator mirror across all channels in which it exists. See the following note for further information.

@@ -227,7 +227,7 @@ channels:

 |`mirror.platform.channels.name`
 |The name of the release channel.
-|String. For example: `stable-4.14`
+|String. For example: `stable-4.15`

 |`mirror.platform.channels.minVersion`
 |The minimum version of the referenced platform to be mirrored.

@@ -235,7 +235,7 @@ channels:

 |`mirror.platform.channels.maxVersion`
 |The highest version of the referenced platform to be mirrored.
-|String. For example: `4.14.1`
+|String. For example: `4.15.1`

 |`mirror.platform.channels.shortestPath`
 |Toggles shortest path mirroring or full range mirroring.
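To tie the table fields above together (illustrative only, not from this commit; the package name `rhacs-operator` and the metadata path are hypothetical), an `ImageSetConfiguration` exercising `mirror.platform.channels.*` and `mirror.operators.*` might look like:

[source,yaml]
----
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./metadata             # hypothetical local metadata path
mirror:
  platform:
    channels:
    - name: stable-4.15          # mirror.platform.channels.name
      minVersion: 4.15.0         # mirror.platform.channels.minVersion
      maxVersion: 4.15.1         # mirror.platform.channels.maxVersion
      shortestPath: true         # mirror.platform.channels.shortestPath
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15   # mirror.operators.catalog
    packages:
    - name: rhacs-operator       # hypothetical Operator package
      channels:
      - name: stable             # mirror.operators.packages.channels.name
----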
@@ -13,7 +13,7 @@ You can access the *Administrator* and *Developer* perspective from the web cons
 To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by the permissions of the user. The *Administrator* perspective is selected for users with access to all projects, while the *Developer* perspective is selected for users with limited access to their own projects.

 .Additional Resources
-See link:https://docs.openshift.com/container-platform/4.14/web_console/adding-user-preferences.html[Adding User Preferences] for more information on changing perspectives.
+See link:https://docs.openshift.com/container-platform/4.15/web_console/adding-user-preferences.html[Adding User Preferences] for more information on changing perspectives.

 .Procedure
@@ -14,7 +14,7 @@ endif::[]
 Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example {product-title} {product-version}.

-During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example during an upgrade from {product-title} 4.13 to 4.14, the `spec.image` field in the `CatalogSource` object for the `redhat-operators` catalog is updated from:
+During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from {product-title} 4.14 to 4.15, the `spec.image` field in the `CatalogSource` object for the `redhat-operators` catalog is updated from:

 [source,terminal]
 ----

@@ -25,7 +25,7 @@ to:

 [source,terminal]
 ----
-registry.redhat.io/redhat/redhat-operator-index:v4.14
+registry.redhat.io/redhat/redhat-operator-index:v4.15
 ----

 However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image.
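For reference (a sketch, not part of this commit; the name `my-catalog` is hypothetical), a custom `CatalogSource` that pins an index image tag looks roughly like:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog                 # hypothetical custom catalog name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.redhat.io/redhat/redhat-operator-index:v4.15  # the index image tag to keep updated across upgrades
  displayName: My Operator Catalog
----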
@@ -24,7 +24,7 @@ Starting in {product-title} 4.14, the CCO can semi-automate this task through an
 Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
 ====

-As an Operator author preparing an Operator for use alongside the updated CCO in {product-title} 4.14, you should instruct users and add code to handle the divergence from earlier CCO versions, in addition to handling STS token authentication (if your Operator is not already STS-enabled). The recommended method is to provide a `CredentialsRequest` object with correctly filled STS-related fields and let the CCO create the `Secret` for you.
+As an Operator author preparing an Operator for use alongside the updated CCO in {product-title} 4.14 or later, you should instruct users and add code to handle the divergence from earlier CCO versions, in addition to handling STS token authentication (if your Operator is not already STS-enabled). The recommended method is to provide a `CredentialsRequest` object with correctly filled STS-related fields and let the CCO create the `Secret` for you.

 [IMPORTANT]
 ====
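As illustration only (not from this commit, and hedged: the STS-related field names below are as we recall them from the 4.14 CCO flow, and all names and the role ARN are hypothetical), a `CredentialsRequest` with the STS fields filled in might look like:

[source,yaml]
----
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: my-operator-credrequest      # hypothetical name
  namespace: openshift-cloud-credential-operator
spec:
  serviceAccountNames:
  - my-operator                      # hypothetical service account
  secretRef:
    name: my-operator-credentials    # Secret the CCO creates for the Operator
    namespace: my-operator-namespace # hypothetical namespace
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    stsIAMRoleARN: arn:aws:iam::123456789012:role/my-operator-role  # STS role the Operator assumes
  cloudTokenPath: /var/run/secrets/openshift/serviceaccount/token   # projected service account token path
----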
@@ -222,7 +222,7 @@ Tags that are added by Red Hat are required for clusters to stay in compliance w
 ====

 |--version string
-|The version of ROSA that will be used to install the cluster or cluster resources. For `cluster` use an `X.Y.Z` format, for example, `4.14.0`. For `account-role` use an `X.Y` format, for example, `4.14`.
+|The version of ROSA that will be used to install the cluster or cluster resources. For `cluster` use an `X.Y.Z` format, for example, `4.15.0`. For `account-role` use an `X.Y` format, for example, `4.15`.

 |--worker-iam-role string
 |The ARN of the IAM role that will be attached to compute instances.
@@ -19,7 +19,7 @@ To use a custom hostname for a route, you must update your DNS provider by creat
 [NOTE]
 ====
-Starting with {product-title} 4.14, the Custom Domain Operator is deprecated. To manage Ingress in {product-title} 4.14, use the Ingress Operator. The functionality is unchanged for {product-title} 4.13 and earlier versions.
+Starting with {product-title} 4.14, the Custom Domain Operator is deprecated. To manage Ingress in {product-title} 4.14 or later, use the Ingress Operator. The functionality is unchanged for {product-title} 4.13 and earlier versions.
 ====

 [id="rosa-sdpolicy-validated-certificates_{context}"]
@@ -11,7 +11,7 @@ This section lists the `aws` CLI commands that the `rosa` command generates in t
 [id="rosa-sts-account-wide-role-and-policy-aws-cli-manual-mode_{context}"]
 == Using manual mode for account role creation

-The manual role creation mode generates the `aws` commands for you to review and run. The following command starts that process, where `<openshift_version>` refers to your version of {product-title} (ROSA), such as `4.14`.
+The manual role creation mode generates the `aws` commands for you to review and run. The following command starts that process, where `<openshift_version>` refers to your version of {product-title} (ROSA), such as `4.15`.

 [source,terminal]
 ----
@@ -7,7 +7,7 @@
 This section provides details about the account-wide IAM roles and policies that are required for ROSA deployments that use STS, including the Operator policies. It also includes the JSON files that define the policies.

-The account-wide roles and policies are specific to an OpenShift minor release version, for example OpenShift 4.14, and are backward compatible. You can minimize the required STS resources by reusing the account-wide roles and policies for multiple clusters of the same minor version, regardless of their patch version.
+The account-wide roles and policies are specific to an OpenShift minor release version, for example OpenShift 4.15, and are backward compatible. You can minimize the required STS resources by reusing the account-wide roles and policies for multiple clusters of the same minor version, regardless of their patch version.

 [id="rosa-sts-account-wide-roles-and-policies-creation-methods_{context}"]
 == Methods of account-wide role creation
@@ -240,8 +240,8 @@ EOF
 $ cat<<-EOF>variables.tf
 variable "rosa_openshift_version" {
   type        = string
-  default     = "4.14.2"
-  description = "Desired version of OpenShift for the cluster, for example '4.14.2'. If version is greater than the currently running version, an upgrade will be scheduled."
+  default     = "4.15.0"
+  description = "Desired version of OpenShift for the cluster, for example '4.15.0'. If version is greater than the currently running version, an upgrade will be scheduled."
 }

 variable "account_role_policies" {
@@ -394,10 +394,10 @@ $ rosa list account-roles
 ----
 I: Fetching account roles
 ROLE NAME                    ROLE TYPE      ROLE ARN                                            OPENSHIFT VERSION  AWS Managed
-ROSA-demo-ControlPlane-Role  Control plane  arn:aws:iam::<ID>:role/ROSA-demo-ControlPlane-Role  4.14               No
-ROSA-demo-Installer-Role     Installer      arn:aws:iam::<ID>:role/ROSA-demo-Installer-Role     4.14               No
-ROSA-demo-Support-Role       Support        arn:aws:iam::<ID>:role/ROSA-demo-Support-Role       4.14               No
-ROSA-demo-Worker-Role        Worker         arn:aws:iam::<ID>:role/ROSA-demo-Worker-Role        4.14               No
+ROSA-demo-ControlPlane-Role  Control plane  arn:aws:iam::<ID>:role/ROSA-demo-ControlPlane-Role  4.15               No
+ROSA-demo-Installer-Role     Installer      arn:aws:iam::<ID>:role/ROSA-demo-Installer-Role     4.15               No
+ROSA-demo-Support-Role       Support        arn:aws:iam::<ID>:role/ROSA-demo-Support-Role       4.15               No
+ROSA-demo-Worker-Role        Worker         arn:aws:iam::<ID>:role/ROSA-demo-Worker-Role        4.15               No
 ----

 . Verify that your Operator roles were created by running the following command:
@@ -223,7 +223,7 @@ Deploy cluster with Hosted Control Plane (optional): No
 ? Create cluster admin user: Yes <1>
 ? Username: user-admin <1>
 ? Password: [? for help] *************** <1>
-? OpenShift version: 4.14.0 <2>
+? OpenShift version: 4.15.0 <2>
 ? Configure the use of IMDSv2 for ec2 instances optional/required (optional): <3>
 I: Using arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role for the Installer role <4>
 I: Using arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role for the ControlPlane role
@@ -253,14 +253,14 @@ I: Using arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role for th
 ? Disable Workload monitoring (optional): No
 I: Creating cluster '<cluster_name>'
 I: To create this cluster again in the future, you can run:
-rosa create cluster --cluster-name <cluster_name> --role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role --operator-roles-prefix <cluster_name>-<random_string> --region us-east-1 --version 4.14.0 --additional-compute-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-infra-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-control-plane-security-group-ids sg-0e375ff0ec4a6cfa2 --replicas 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 <12>
+rosa create cluster --cluster-name <cluster_name> --role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role --operator-roles-prefix <cluster_name>-<random_string> --region us-east-1 --version 4.15.0 --additional-compute-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-infra-security-group-ids sg-0e375ff0ec4a6cfa2 --additional-control-plane-security-group-ids sg-0e375ff0ec4a6cfa2 --replicas 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 <12>
 I: To view a list of clusters and their status, run 'rosa list clusters'
 I: Cluster '<cluster_name>' has been created.
 I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
 ...
 ----
 <1> When creating your cluster, you can create a local administrator user for your cluster. Selecting `Yes` then prompts you to create a user name and password for the cluster admin. The user name must not contain `/`, `:`, or `%`. The password must be at least 14 characters (ASCII-standard) without whitespaces. This process automatically configures an htpasswd identity provider.
-<2> When creating the cluster, the listed `OpenShift version` options include the major, minor, and patch versions, for example `4.14.0`.
+<2> When creating the cluster, the listed `OpenShift version` options include the major, minor, and patch versions, for example `4.15.0`.
 <3> Optional: Specify 'optional' to configure all EC2 instances to use both v1 and v2 endpoints of EC2 Instance Metadata Service (IMDS). This is the default value. Specify 'required' to configure all EC2 instances to use IMDSv2 only.
 +
 [IMPORTANT]
@@ -55,7 +55,7 @@ endif::rosa-terraform[]
 |Cluster settings
 |
 ifdef::rosa-terraform[]
-* Default cluster version: `4.14.2`
+* Default cluster version: `4.15.0`
 * Cluster name: `rosa-<6-digit-alphanumeric-string>`
 endif::rosa-terraform[]
 ifndef::rosa-terraform[]
@@ -5,6 +5,6 @@
 [id="rosa-troubleshooting-general-deployment-failure_{context}"]
 = Connectivity issues on clusters with private Network Load Balancers

-{product-title} and {hcp-title} clusters created with version 4.14 deploy AWS Network Load Balancers (NLB) by default for the `default` ingress controller. In the case of a private NLB, the NLB's client IP address preservation might cause connections to be dropped where the source and destination are the same host. See the AWS's documentation about how to link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-troubleshooting.html#loopback-timeout[Troubleshoot your Network Load Balancer]. This IP address preservation has the implication that any customer workloads cohabitating on the same node with the router pods, may not be able send traffic to the private NLB fronting the ingress controller router.
+{product-title} and {hcp-title} clusters created with version {product-version} deploy AWS Network Load Balancers (NLBs) by default for the `default` ingress controller. In the case of a private NLB, the NLB's client IP address preservation might cause connections to be dropped where the source and destination are the same host. See the AWS documentation about how to link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-troubleshooting.html#loopback-timeout[Troubleshoot your Network Load Balancer]. Because of this IP address preservation, customer workloads that are colocated on the same node as the router pods might not be able to send traffic to the private NLB fronting the ingress controller.

 To mitigate this impact, customers should reschedule their workloads onto nodes separate from those where the router pods are scheduled. Alternatively, customers should rely on the internal pod and service networks for accessing other workloads co-located within the same cluster.
@@ -14,7 +14,7 @@ The default value is 4,096 in {product-title} 4.11 and later. This value is cont
 * Maximum number of PIDs per node.
 +
-The default value depends on link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/nodes/index#nodes-nodes-resources-configuring[node resources]. In {product-title}, this value is controlled by the link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[`--system-reserved`] parameter, which reserves PIDs on each node based on the total resources of the node.
+The default value depends on link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/nodes/index#nodes-nodes-resources-configuring[node resources]. In {product-title}, this value is controlled by the link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[`--system-reserved`] parameter, which reserves PIDs on each node based on the total resources of the node.

 When a pod exceeds the allowed maximum number of PIDs per pod, the pod might stop functioning correctly and might be evicted from the node. See link:https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals-and-thresholds[the Kubernetes documentation for eviction signals and thresholds] for more information.
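For illustration only (not part of this commit): the per-pod PID limit discussed above is typically tuned with a `KubeletConfig`. A minimal sketch, assuming the `worker` machine config pool and a hypothetical object name:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-pod-pids-limit       # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""  # target the worker pool
  kubeletConfig:
    podPidsLimit: 8192           # raise the per-pod PID limit above the 4,096 default
----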
@@ -10,7 +10,7 @@
 [NOTE]
 ====
-Starting with {product-title} 4.14, the Custom Domain Operator is deprecated. To manage Ingress in {product-title} 4.14, use the Ingress Operator. The functionality is unchanged for {product-title} 4.13 and earlier versions.
+Starting with {product-title} 4.14, the Custom Domain Operator is deprecated. To manage Ingress in {product-title} 4.14 or later, use the Ingress Operator. The functionality is unchanged for {product-title} 4.13 and earlier versions.
 ====

 To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map the OpenShift canonical router hostname to your custom domain. The OpenShift canonical router hostname is shown on the *Route Details* page after a Route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster's router.
@@ -45,7 +45,7 @@ Both `http` and `event` trigger functions have the same template structure:
     <dependency>
       <groupId>junit</groupId>
       <artifactId>junit</artifactId>
-      <version>4.14</version>
+      <version>4.15</version>
       <scope>test</scope>
     </dependency>
     <dependency>
@@ -18,7 +18,7 @@ The following features that are included in {product-title} {product-version} an
 |Support for running rootless Data Plane Development Kit (DPDK) workloads with kernel access by using the TAP CNI plugin
 a|DPDK applications that inject traffic into the kernel can run in non-privileged pods with the help of the TAP CNI plugin.

-* link:https://docs.openshift.com/container-platform/4.14/networking/hardware_networks/using-dpdk-and-rdma.html#nw-running-dpdk-rootless-tap_using-dpdk-and-rdma[Using the TAP CNI to run a rootless DPDK workload with kernel access]
+* link:https://docs.openshift.com/container-platform/4.15/networking/hardware_networks/using-dpdk-and-rdma.html#nw-running-dpdk-rootless-tap_using-dpdk-and-rdma[Using the TAP CNI to run a rootless DPDK workload with kernel access]

 //CNF-5977 Better pinning of the networking stack
 |Dynamic use of non-reserved CPUs for OVS

@@ -31,7 +31,7 @@ OVS cannot use isolated CPUs assigned to containers in `Guaranteed` QoS pods. Th
 |Enabling more control over the C-states for each pod
 a|The `PerformanceProfile` supports `perPodPowerManagement` which provides more control over the C-states for pods. Now, instead of disabling C-states completely, you can specify a maximum latency in microseconds for C-states. You configure this option in the `cpu-c-states.crio.io` annotation, which helps to optimize power savings for high-priority applications by enabling some of the shallower C-states instead of disabling them completely.

-* link:https://docs.openshift.com/container-platform/4.14/scalability_and_performance/cnf-low-latency-tuning.html#node-tuning-operator-pod-power-saving-config_cnf-master[Optional: Power saving configurations]
+* link:https://docs.openshift.com/container-platform/4.15/scalability_and_performance/cnf-low-latency-tuning.html#node-tuning-operator-pod-power-saving-config_cnf-master[Optional: Power saving configurations]

 //CNF-7741 Permit to disable NUMA Aware scheduling hints based on SR-IOV VFs
 |Exclude SR-IOV network topology for NUMA-aware scheduling

@@ -39,7 +39,7 @@ a|You can exclude advertising Non-Uniform Memory Access (NUMA) nodes for the SR-
 For example, in some scenarios, you want flexibility for how a pod is deployed. By not providing a NUMA node hint to the Topology Manager for the pod's SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. In previous {product-title} releases, the Topology Manager attempted to place all resources on the same NUMA node.

-* link:https://docs.openshift.com/container-platform/4.14/networking/hardware_networks/configuring-sriov-device.html#nw-sriov-exclude-topology-manager_configuring-sriov-device[Exclude the SR-IOV network topology for NUMA-aware scheduling]
+* link:https://docs.openshift.com/container-platform/4.15/networking/hardware_networks/configuring-sriov-device.html#nw-sriov-exclude-topology-manager_configuring-sriov-device[Exclude the SR-IOV network topology for NUMA-aware scheduling]

 //CNF-8035 MetalLB VRF Egress interface selection with VRFs (Tech Preview)
 |Egress service resource to manage egress traffic for pods behind a load balancer (Technology Preview)
@@ -51,6 +51,6 @@ You can use the `EgressService` CR to manage egress traffic in the following way
|
||||
|
||||
* Configure the egress traffic for pods behind a load balancer to a different network than the default node network.
|
||||
|
||||
* link:https://docs.openshift.com/container-platform/4.14/networking/ovn_kubernetes_network_provider/configuring-egress-traffic-for-vrf-loadbalancer-services.html#configuring-egress-traffic-loadbalancer-services[Configuring an egress service]
|
||||
* link:https://docs.openshift.com/container-platform/4.15/networking/ovn_kubernetes_network_provider/configuring-egress-traffic-for-vrf-loadbalancer-services.html#configuring-egress-traffic-loadbalancer-services[Configuring an egress service]
|
||||
|
||||
|====
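
The C-states row above references the `cpu-c-states.crio.io` annotation. The following is a minimal sketch of how such a pod might look; the `max_latency` value syntax and the runtime class name are assumptions to verify against your release, not values from this document:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: latency-tolerant-app
  annotations:
    # Assumed syntax: permit C-states with an exit latency of up to 10 microseconds.
    cpu-c-states.crio.io: "max_latency:10"
spec:
  # Runtime class created by the Node Tuning Operator for the performance profile (name assumed).
  runtimeClassName: performance-openshift-node-performance-profile
  containers:
  - name: app
    image: registry.example.com/app:latest # hypothetical image
    resources:
      # Guaranteed QoS: requests equal limits.
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
----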

@@ -11,7 +11,7 @@ New in this release::

Description::

The https://docs.openshift.com/container-platform/4.14/rest_api/node_apis/performanceprofile-performance-openshift-io-v2.html#spec-workloadhints[Performance Profile] can be used to configure a cluster in a high power, low power or mixed (https://docs.openshift.com/container-platform/4.14/scalability_and_performance/cnf-low-latency-tuning.html#node-tuning-operator-pod-power-saving-config_cnf-master[per-pod power management]) mode. The choice of power mode depends on the characteristics of the workloads running on the cluster particularly how sensitive they are to latency.
The link:https://docs.openshift.com/container-platform/4.15/rest_api/node_apis/performanceprofile-performance-openshift-io-v2.html#spec-workloadhints[Performance Profile] can be used to configure a cluster in a high power, low power, or mixed (link:https://docs.openshift.com/container-platform/4.15/scalability_and_performance/cnf-low-latency-tuning.html#node-tuning-operator-pod-power-saving-config_cnf-master[per-pod power management]) mode. The choice of power mode depends on the characteristics of the workloads running on the cluster, particularly how sensitive they are to latency.

Limits and requirements::
* Power configuration relies on appropriate BIOS configuration, for example, enabling C-states and P-states. Configuration varies between hardware vendors.

@@ -9,7 +9,7 @@
New in this release::

* NUMA-aware scheduling with the NUMA Resources Operator is now generally available in {product-title} {product-version}.
* With this release, you can exclude advertising the Non-Uniform Memory Access (NUMA) node for the SR-IOV network to the Topology Manager. By not advertising the NUMA node for the SR-IOV network, you can permit more flexible SR-IOV network deployments during NUMA-aware pod scheduling. To exclude advertising the NUMA node for the SR-IOV network resource to the Topology Manager, set the value `excludeTopology` to `true` in the `SriovNetworkNodePolicy` CR. For more information, see link:https://docs.openshift.com/container-platform/4.14/networking/hardware_networks/configuring-sriov-device.html#nw-sriov-exclude-topology-manager_configuring-sriov-device[Exclude the SR-IOV network topology for NUMA-aware scheduling].
* With this release, you can exclude advertising the Non-Uniform Memory Access (NUMA) node for the SR-IOV network to the Topology Manager. By not advertising the NUMA node for the SR-IOV network, you can permit more flexible SR-IOV network deployments during NUMA-aware pod scheduling. To exclude advertising the NUMA node for the SR-IOV network resource to the Topology Manager, set the value `excludeTopology` to `true` in the `SriovNetworkNodePolicy` CR. For more information, see link:https://docs.openshift.com/container-platform/4.15/networking/hardware_networks/configuring-sriov-device.html#nw-sriov-exclude-topology-manager_configuring-sriov-device[Exclude the SR-IOV network topology for NUMA-aware scheduling].
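
A minimal `SriovNetworkNodePolicy` sketch for the `excludeTopology` field described above; the policy name, node selector, and NIC names are illustrative placeholders:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-no-numa-hint # illustrative name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic
  nodeSelector:
    kubernetes.io/hostname: worker-0 # illustrative node
  numVfs: 8
  nicSelector:
    pfNames: ["ens5f1"] # illustrative interface
  deviceType: netdevice
  # Do not advertise the NUMA node of this SR-IOV resource to the Topology Manager.
  excludeTopology: true
----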

Description::

@@ -16,7 +16,7 @@ SR-IOV enables physical network interfaces (PFs) to be divided into multiple vir

Limits and requirements::

* The network interface controllers supported are listed in https://docs.openshift.com/container-platform/4.14/networking/hardware_networks/about-sriov.html#nw-sriov-supported-platforms_about-sriov[OCP supported SR-IOV devices]
* The network interface controllers supported are listed in link:https://docs.openshift.com/container-platform/4.15/networking/hardware_networks/about-sriov.html#nw-sriov-supported-platforms_about-sriov[OCP supported SR-IOV devices]
* SR-IOV and IOMMU enablement in BIOS: The SR-IOV Network Operator automatically enables IOMMU on the kernel command line.
* SR-IOV VFs do not receive link state updates from PF. If link down detection is needed, it must be done at the protocol level.

@@ -18,13 +18,13 @@ The following features that are included in {product-title} {product-version} an
|{ztp} independence from managed cluster version
a|You can now use {ztp} to manage clusters that are running different versions of {product-title} compared to the version that is running on the hub cluster. You can also have a mix of {product-title} versions in the deployed fleet of clusters.

* link:https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.html#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence]
* link:https://docs.openshift.com/container-platform/4.15/scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.html#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[Preparing the {ztp} site configuration repository for version independence]

//CNF-6925
|Using custom CRs alongside the reference CRs in {ztp}
a|You can now use custom CRs alongside the reference configuration CRs provided in the `ztp-site-generate` container.

* link:https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/ztp-advanced-policy-config.html#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline]
* link:https://docs.openshift.com/container-platform/4.15/scalability_and_performance/ztp_far_edge/ztp-advanced-policy-config.html#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline]

//CNF-7078
//|Intel Westport Channel e810 NIC as PTP Grandmaster clock
@@ -46,19 +46,19 @@ a|You can now use custom CRs alongside the reference configuration CRs provided
|Using custom node labels in the `SiteConfig` CR with {ztp}
a|You can now use the `nodeLabels` field in the `SiteConfig` CR to create custom roles for nodes in managed clusters.

* link:https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.html#ztp-sno-siteconfig-config-reference_ztp-deploying-far-edge-sites[{sno} SiteConfig CR installation reference]
* link:https://docs.openshift.com/container-platform/4.15/scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.html#ztp-sno-siteconfig-config-reference_ztp-deploying-far-edge-sites[{sno} SiteConfig CR installation reference]

//OCPBUGS-13050, CTONET-3072
|PTP events and metrics
a|The `PtpConfig` reference configuration CRs have been updated.

* link:https://docs.openshift.com/container-platform/4.14/networking/using-ptp.html#discover-ptp-devices_using-ptp[Discovering PTP capable network devices in your cluster]
* link:https://docs.openshift.com/container-platform/4.15/networking/using-ptp.html#discover-ptp-devices_using-ptp[Discovering PTP capable network devices in your cluster]

//CNF-7517
|Precaching user-specified images
a|You can now precache application workload images before upgrading your applications on {sno} clusters with {cgu-operator-full}.

* link:https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/ztp-precaching-tool.html#ztp-pre-staging-tool[Precaching images for {sno} deployments]
* link:https://docs.openshift.com/container-platform/4.15/scalability_and_performance/ztp_far_edge/ztp-precaching-tool.html#ztp-pre-staging-tool[Precaching images for {sno} deployments]

//CNF-6318
|Using OpenShift capabilities to further reduce the {sno} DU footprint
@@ -66,5 +66,5 @@ a|Use cluster capabilities to enable or disable optional components before you i
In {product-title} {product-version}, the following optional capabilities are available:
`baremetal`, `marketplace`, `openshift-samples`, `Console`, `Insights`, `Storage`, `CSISnapshot`, `NodeTuning`, `MachineAPI`. The reference configuration includes only those features required for RAN DU. A sketch of selecting capabilities at installation time follows this table.

* link:https://docs.openshift.com/container-platform/4.14/installing/cluster-capabilities.html#cluster-capabilities[Cluster capabilities]
* link:https://docs.openshift.com/container-platform/4.15/installing/cluster-capabilities.html#cluster-capabilities[Cluster capabilities]
|====
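
As referenced in the cluster capabilities row above, optional components are typically selected in `install-config.yaml` before installation. A sketch, assuming the `baselineCapabilitySet` and `additionalEnabledCapabilities` fields; the enabled list is illustrative and is not the RAN DU reference configuration:

[source,yaml]
----
# install-config.yaml excerpt
capabilities:
  # Start from no optional capabilities and enable components individually.
  baselineCapabilitySet: None
  additionalEnabledCapabilities:
  - NodeTuning
  - MachineAPI
  - Insights
----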

@@ -70,5 +70,5 @@ For more information, see link:https://docs.openshift.com/container-platform/lat
====
In {product-title} {product-version}, any `PerformanceProfile` CR configured on the cluster causes the Node Tuning Operator to automatically set all cluster nodes to use cgroup v1.
For more information about cgroups, see link:https://docs.openshift.com/container-platform/4.14/nodes/clusters/nodes-cluster-cgroups-2.html#nodes-clusters-cgroups-2_nodes-cluster-cgroups-2[Configuring Linux cgroup].
For more information about cgroups, see link:https://docs.openshift.com/container-platform/4.15/nodes/clusters/nodes-cluster-cgroups-2.html#nodes-clusters-cgroups-2_nodes-cluster-cgroups-2[Configuring Linux cgroup].
====
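
To verify which cgroup version a node is actually running, a generic check (not part of the original text) inspects the file system type of `/sys/fs/cgroup`; `cgroup2fs` indicates cgroup v2 and `tmpfs` indicates cgroup v1:

[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host stat -fc %T /sys/fs/cgroup
----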

@@ -6,7 +6,7 @@
[id="update-preparing-ack_{context}"]
= Providing the administrator acknowledgment

After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you can acknowledge that your cluster is ready to upgrade from {product-title} 4.13 to 4.14.
After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you can acknowledge that your cluster is ready to upgrade from {product-title} 4.14 to 4.15.
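
Before acknowledging, you can list APIs that are flagged for removal and are still being requested on the cluster. This sketch relies on the `APIRequestCount` API; treat the exact `jsonpath` expression as an assumption to adapt:

[source,terminal]
----
$ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'
----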

[WARNING]
====
@@ -19,9 +19,9 @@ Be aware that all responsibility falls on the administrator to ensure that all u

.Procedure

* Run the following command to acknowledge that you have completed the evaluation and your cluster is ready for the Kubernetes API removals in {product-title} 4.14:
* Run the following command to acknowledge that you have completed the evaluation and your cluster is ready for the Kubernetes API removals in {product-title} 4.15:
+
[source,terminal]
----
$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.13-kube-1.27-api-removals-in-4.14":"true"}}' --type=merge
$ oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.14-kube-1.28-api-removals-in-4.15":"true"}}' --type=merge
----
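+
To confirm that the acknowledgment was recorded, you can inspect the config map data; a sketch, assuming the key shown above:
+
[source,terminal]
----
$ oc -n openshift-config get configmap admin-acks -o jsonpath='{.data}'
----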
@@ -5,7 +5,7 @@
[id="update-preparing-list_{context}"]
= Removed Kubernetes APIs

{product-title} 4.14 uses Kubernetes 1.27, which removed the following deprecated APIs. You must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-27[Kubernetes documentation].
{product-title} 4.15 uses Kubernetes 1.28, which removed the following deprecated APIs. You must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-28[Kubernetes documentation].

.APIs removed from Kubernetes 1.28
[cols="2,2,2",options="header",]

@@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]

ifndef::openshift-origin[]
As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} 4.14 will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation.
As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} 4.14 or later will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 or later will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation.
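
Switching an existing cluster between cgroup versions is done through the `Node` cluster resource; the following is a minimal sketch, assuming the `cgroupMode` field described in the linked cgroup configuration documentation:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  # "v1" enables cgroup v1; omit the field or use "v2" for the default.
  cgroupMode: "v1"
----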
endif::openshift-origin[]
ifdef::openshift-origin[]
{product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster.

@@ -16,7 +16,7 @@ This procedure is specific to the link:https://github.com/openshift/aws-efs-csi-

{product-title} is capable of provisioning persistent volumes (PVs) using the link:https://github.com/openshift/aws-efs-csi-driver[AWS EFS CSI driver].

Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, {product-title} installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the `openshift-cluster-csi-drivers` namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.
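
With the Operator and driver installed, dynamic provisioning is typically configured through a storage class that references your EFS file system. A sketch, with a hypothetical file system ID:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap # provision an EFS access point per volume
  fileSystemId: fs-0123456789abcdef0 # hypothetical file system ID
  directoryPerms: "700"
----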

@@ -87,5 +87,5 @@ include::modules/persistent-storage-csi-olm-operator-uninstall.adoc[leveloffset=
[role="_additional-resources"]
== Additional resources

* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]

@@ -16,7 +16,7 @@ To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage

* *vSphere CSI Driver Operator*: The Operator provides a storage class, called `thin-csi`, that you can use to create persistent volumes claims (PVCs). The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see xref:../../storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#persistent-storage-csi-sc-manage[Managing the default storage class]).
* *vSphere CSI driver*: The driver enables you to create and mount vSphere PVs. In {product-title} 4.14, the driver version is 3.0.2. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Core OS release, including XFS and Ext4. For more information about supported file systems, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/assembly_overview-of-available-file-systems_managing-file-systems[Overview of available file systems].
* *vSphere CSI driver*: The driver enables you to create and mount vSphere PVs. In {product-title} 4.15, the driver version is 3.0.2. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Core OS release, including XFS and Ext4. For more information about supported file systems, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_file_systems/assembly_overview-of-available-file-systems_managing-file-systems[Overview of available file systems].
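
A persistent volume claim that requests storage from the `thin-csi` storage class can look like the following minimal sketch; the claim name and size are illustrative:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-claim # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: thin-csi
----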

//Please update driver version as needed with each major OCP release starting with 4.13.

@@ -16,7 +16,7 @@ This procedure is specific to the Amazon Web Services Elastic File System (AWS E

{product-title} is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File System (EFS).

Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, {product-title} installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the `openshift-cluster-csi-drivers` namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.

@@ -51,7 +51,7 @@ include::modules/persistent-storage-csi-efs-sts.adoc[leveloffset=+1]
* xref:../../storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-olm-operator-install_rosa-persistent-storage-aws-efs-csi[Installing the AWS EFS CSI Driver Operator]

* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/authentication_and_authorization/index#cco-ccoctl-configuring_cco-mode-sts[Configuring the Cloud Credential Operator utility]
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/authentication_and_authorization/index#cco-ccoctl-configuring_cco-mode-sts[Configuring the Cloud Credential Operator utility]

:StorageClass: AWS EFS
:Provisioner: efs.csi.aws.com
@@ -80,5 +80,5 @@ include::modules/persistent-storage-csi-olm-operator-uninstall.adoc[leveloffset=
[role="_additional-resources"]
== Additional resources

* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]

@@ -34,7 +34,7 @@ Collecting data about your environment minimizes the time required to analyze an
.Procedure

. xref:../../support/gathering-cluster-data.adoc#support_gathering_data_gathering-cluster-data[Collect must-gather data for the cluster].
. link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.14/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary.
. link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.15/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary.
. xref:../../virt/support/virt-collecting-virt-data.adoc#virt-using-virt-must-gather_virt-collecting-virt-data[Collect must-gather data for {VirtProductName}].
. xref:../../monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-as-an-administrator_managing-metrics[Collect Prometheus metrics for the cluster].
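
The first two steps use the `oc adm must-gather` tooling. A minimal sketch of the default cluster collection follows; component-specific collections, such as the one for {rh-storage-first}, require a dedicated image that is not shown here:

[source,terminal]
----
$ oc adm must-gather --dest-dir=/tmp/must-gather
----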