Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05
Merge pull request #72080 from cbippley/OSDOCS-9662
OSDOCS-9662 Clean up 4.15 RN file
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
Red Hat {product-title} provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. {product-title} supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Red Hat {product-title} provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management. {product-title} supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Built on {op-system-base-full} and Kubernetes, {product-title} provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. {product-title} enables organizations to meet security, privacy, compliance, and governance requirements.
@@ -19,7 +19,7 @@ Built on {op-system-base-full} and Kubernetes, {product-title} provides a more s
{product-title} {product-version} clusters are available at https://console.redhat.com/openshift. With the {cluster-manager-first} application for {product-title}, you can deploy {product-title} clusters to either on-premises or cloud environments.
// Double check OP system versions
{product-title} {product-version} is supported on {op-system-base-full} 8.8 and 8.9, as well as on {op-system-first} 4.15.
{product-title} {product-version} is supported on {op-system-base-full} 8.8 and 8.9, and on {op-system-first} 4.15.
You must use {op-system} machines for the control plane, and you can use either {op-system} or {op-system-base} for compute machines.
//Removed the note per https://issues.redhat.com/browse/GRPA-3517
@@ -55,12 +55,12 @@ This release adds improvements related to the following components and concepts.
[id="ocp-4-15-rhcos"]
=== {op-system-first}
[id="ocp-4-14-rhcos-rhel-9-2-packages"]
[id="ocp-4-15-rhcos-rhel-9-2-packages"]
==== {op-system} now uses {op-system-base} 9.2
{op-system} now uses {op-system-base-full} 9.2 packages in {product-title} {product-version}. These packages ensure that your {product-title} instance receives the latest fixes, features, enhancements, hardware support, and driver updates.
[id="ocp-4-14-rhcos-iscsi-support"]
[id="ocp-4-15-rhcos-iscsi-support"]
==== Support for iSCSI devices (Technology Preview)
{op-system} now supports the `iscsi_bft` driver, letting you boot directly from iSCSI devices that work with the iSCSI Boot Firmware Table (iBFT), as a Technology Preview. This lets you target iSCSI devices as the root disk for installation.
@@ -85,7 +85,7 @@ It is now possible to configure the CAPI `Machine` and CAPO `OpenStackMachine` r
[id="ocp-4-15-installation-and-update-ibm-cloud-user-managed-encryption"]
==== IBM Cloud and user-managed encryption
You can now specify your own {ibm-name} Key Protect for {ibm-cloud-name} root key as part of the installation process. This root key is used to encrypt the root (boot) volume of control plane and compute machines, as well as the persistent volumes (data volumes) that are provisioned after the cluster is deployed.
You can now specify your own {ibm-name} Key Protect for {ibm-cloud-name} root key as part of the installation process. This root key is used to encrypt the root (boot) volume of control plane and compute machines, and the persistent volumes (data volumes) that are provisioned after the cluster is deployed.
For more information, see xref:../installing/installing_ibm_cloud_public/user-managed-encryption-ibm-cloud.adoc#user-managed-encryption-ibm-cloud[User-managed encryption for IBM Cloud].
@@ -101,7 +101,7 @@ For more information, see xref:../installing/installing_ibm_cloud_public/install
You can quickly install an {product-title} cluster in Amazon Web Services (AWS) Wavelength Zones by setting the zone names in the edge compute pool of the `install-config.yaml` file, or install a cluster in an existing VPC with Wavelength Zone subnets.
You can also perform post-installation tasks to extend an existing {product-title} cluster on AWS to use AWS Wavelength Zones.
You can also perform postinstallation tasks to extend an existing {product-title} cluster on AWS to use AWS Wavelength Zones.
For more information, see xref:../installing/installing_aws/installing-aws-wavelength-zone.adoc#installing-aws-wavelength-zone[Installing a cluster on AWS with compute nodes on AWS Wavelength Zones] and xref:../post_installation_configuration/aws-compute-edge-zone-tasks.adoc#aws-compute-edge-zone-tasks[Extend existing clusters to use AWS Local Zones or Wavelength Zones].
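As an illustration, the edge compute pool in `install-config.yaml` might look like the following sketch. The zone name is a placeholder, and the exact fields should be confirmed against the linked installation procedure.

[source,yaml]
----
# Sketch only: an edge compute pool that places compute nodes in an AWS Wavelength Zone.
# Replace the zone name with a Wavelength Zone that is opted in for your AWS account.
compute:
- name: edge
  platform:
    aws:
      zones:
      - us-east-1-wl1-nyc-wlz-1
----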
@@ -134,7 +134,7 @@ In {product-title} 4.15, you can disable the Operator Lifecycle Manager (OLM) ca
[id="ocp-4-15-deploying-osp-with-root-volume-etcd-on-local-disk"]
==== Deploying {rh-openstack-first} with root volume and etcd on local disk (Technology Preview)
You can now move etcd from a root volume (Cinder) to a dedicated ephemeral local disk as a day 2 deployment. With this Technology Preview feature, you can resolve and prevent performance issues of your {rh-openstack} installation.
You can now move etcd from a root volume (Cinder) to a dedicated ephemeral local disk as a Day 2 deployment. With this Technology Preview feature, you can resolve and prevent performance issues of your {rh-openstack} installation.
For more information, see xref:../installing/installing_openstack/deploying-openstack-with-rootVolume-etcd-on-local-disk.adoc#deploying-openstack-with-rootvolume-etcd-on-local-disk[Deploying on OpenStack with rootVolume and etcd on local disk].
@@ -157,7 +157,6 @@ For more information, see xref:../installing/installing_with_agent_based_install
=== Postinstallation configuration
[id="ocp-4.15-postinstallation-configuration-multi-arch-compute-machines"]
==== {product-title} clusters with multi-architecture compute machines
On {product-title} {product-version} clusters with multi-architecture compute machines, you can now enable 64k page sizes in the {op-system-first} kernel on the 64-bit ARM compute machines in your cluster. For more information on setting this parameter, see xref:../post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-compute-managing.adoc#multi-architecture-enabling-64k-pages_multi-architecture-compute-managing[Enabling 64k pages on the {op-system-first} kernel].
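A minimal sketch of such a configuration, assuming it is applied through a `MachineConfig` object with the `64k-pages` kernel type, might look like the following. The object name and pool label are illustrative; follow the linked procedure for the exact role and pool to target.

[source,yaml]
----
# Sketch only: request the 64k-pages kernel on 64-bit ARM compute nodes.
# The role label and kernelType value shown here are assumptions for illustration.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-64k-pages
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelType: 64k-pages
----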
@@ -171,7 +170,7 @@ On {product-title} {product-version} clusters with multi-architecture compute ma
This release introduces the following updates to the *Administrator* perspective of the web console:
* Enable and disable the tailing to Pod log viewer to minimize load time.
* View recomended values for `VerticalPodAutoscaler` on the *Deployment* page.
* View recommended values for `VerticalPodAutoscaler` on the *Deployment* page.
[id="node-uptime-information"]
===== Node uptime information
@@ -220,9 +219,9 @@ With this update, builds for {product-title} 1.0 is supported in the web console
For more information, see link:https://docs.openshift.com/builds/1.0/about/overview-openshift-builds.html[builds for {product-title}].
[id="ocp-4-15-openshift-cli"]
=== OpenShift CLI (oc)
//[id="ocp-4-15-openshift-cli"]
//=== OpenShift CLI (oc)
//
[id="ocp-4-15-ibm-z"]
=== {ibm-z-title} and {ibm-linuxone-title}
With this release, {ibm-z-name} and {ibm-linuxone-name} are now compatible with {product-title} {product-version}. You can perform the installation with z/VM, LPAR, or {op-system-base-full} Kernel-based Virtual Machine (KVM). For installation instructions, see the following documentation:
@@ -285,7 +284,7 @@ This release introduces support for the following features on {ibm-power-name}:
[discrete]
=== {ibm-power-title}, {ibm-z-title}, and {ibm-linuxone-title} support matrix
Starting in {product-title} 4.14, Extended Update Support (EUS) is extended to the {ibm-power-name} and the {ibm-z-name} platform. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview].
Starting in {product-title} 4.14, Extended Update Support (EUS) is extended to the {ibm-power-name} and the {ibm-z-name} platform. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview].
.{product-title} features
[cols="3,1,1",options="header"]
@@ -598,7 +597,7 @@ For more information, see xref:../operators/operator_sdk/token_auth/osdk-cco-azu
[id="ocp-4-15-networking"]
=== Networking
[id="ocp-4-14-networking-ovn-kubernetes-ipsec-support-for-external-traffic"]
[id="ocp-4-15-networking-ovn-kubernetes-ipsec-support-for-external-traffic"]
==== OVN-Kubernetes network plugin support for IPsec encryption of external traffic general availability (GA)
{product-title} now supports encryption of external traffic, also known as _north-south traffic_. IPsec already supports encryption of network traffic between pods, known as _east-west traffic_. You can use both features together to provide full in-transit encryption for {product-title} clusters.
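As a hedged sketch, enabling both east-west and north-south encryption is typically a change to the cluster `Network` operator configuration; the `mode` value below is an assumption and should be verified against the IPsec configuration documentation referenced in this section.

[source,yaml]
----
# Sketch only: IPsec configuration on the cluster network operator object.
# The "Full" mode value is assumed here to mean pod-to-pod plus external traffic.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full
----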
@@ -615,7 +614,7 @@ For more information, see xref:../networking/ovn_kubernetes_network_provider/con
[id="ocp-4-15-ipv6-default-macvlan"]
==== IPv6 unsolicited neighbor advertisements now default on macvlan CNI plugin
Previously, if one pod (`Pod X`) was deleted, and a second pod (`Pod Y`) was created with a similar configuration, `Pod Y` might have had the same IPv6 address as `Pod X`, but it would have a different MAC address. In this scenario, the router was unaware of the MAC address address change, and it would continue sending traffic to the MAC address for `Pod X`.
Previously, if one pod (`Pod X`) was deleted, and a second pod (`Pod Y`) was created with a similar configuration, `Pod Y` might have had the same IPv6 address as `Pod X`, but it would have a different MAC address. In this scenario, the router was unaware of the MAC address change, and it would continue sending traffic to the MAC address for `Pod X`.
With this update, pods created using the macvlan CNI plugin, where the IP address management CNI plugin has assigned IPs, now send IPv6 unsolicited neighbor advertisements by default onto the network. This enhancement notifies the network fabric of the new pod's MAC address for a particular IP to refresh IPv6 neighbor caches.
@@ -704,7 +703,8 @@ For more information, see xref:../storage/persistent_storage/persistent_storage_
[id="ocp-4.15-storage-support-for-wiping-the-devices"]
==== Support for wiping the devices in {lvms}
This feature provides a new optional field `forceWipeDevicesAndDestroyAllData` in the `LVMCluster` custom resource (CR) to force wipe the selected devices. Prior to this release, wiping the devices required you to manually access the host. With this release, you can force wipe the disks without manual intervention. This simplifies the process of wiping the disks.
This feature provides a new optional field `forceWipeDevicesAndDestroyAllData` in the `LVMCluster` custom resource (CR) to force wipe the selected devices. Before this release, wiping the devices required you to manually access the host. With this release, you can force wipe the disks without manual intervention. This simplifies the process of wiping the disks.
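A minimal `LVMCluster` sketch showing where the field sits is given below; the API version, device class name, and device path are assumptions for illustration, and the warning that follows still applies.

[source,yaml]
----
# Sketch only: force wipe the selected device before it is added to the volume group.
# /dev/sdb and the device class name are placeholders.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
    - name: vg1
      default: true
      deviceSelector:
        paths:
        - /dev/sdb
        forceWipeDevicesAndDestroyAllData: true
----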
[WARNING]
====
If `forceWipeDevicesAndDestroyAllData` is set to `true`, {lvms} wipes all previous data on the devices. You must use this feature with caution.
@@ -715,7 +715,7 @@ For more information, see xref:../storage/persistent_storage/persistent_storage_
[id="ocp-4.15-storage-support-for-lvms-on-multi-node-clusters"]
==== Support for deploying {lvms} on multi-node clusters
This feature provides support for deploying {lvms} on multi-node clusters. Prior to this release, {lvms} only supported single-node configurations. With this release, {lvms} supports all of the {product-title} deployment topologies. This enables provisioning of local storage on multi-node clusters.
This feature provides support for deploying {lvms} on multi-node clusters. Previously, {lvms} only supported single-node configurations. With this release, {lvms} supports all of the {product-title} deployment topologies. This enables provisioning of local storage on multi-node clusters.
[WARNING]
====
{lvms} only supports node local storage on multi-node clusters. It does not support storage data replication mechanism across nodes. When using {lvms} on multi-node clusters, you must ensure storage data replication through active or passive replication mechanisms to avoid a single point of failure.
@@ -726,18 +726,18 @@ For more information, see xref:../storage/persistent_storage/persistent_storage_
[id="ocp-4.15-storage-support-for-integrating-raid-arrays-with-lvms"]
==== Integrating RAID arrays with {lvms}
This feature provides support for integrating RAID arrays that are created using the `mdadm` utility with {lvms}. The `LVMCluster` custom resource (CR) provides support for adding paths to the RAID arrays in the `deviceSelector.paths` field and the `deviceSelector.optionalPaths` field.
This feature provides support for integrating RAID arrays that are created using the `mdadm` utility with {lvms}. The `LVMCluster` custom resource (CR) provides support for adding paths to the RAID arrays in the `deviceSelector.paths` field and the `deviceSelector.optionalPaths` field.
For more information, see xref:../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#lvms-integrating-software-raid-arrays_logical-volume-manager-storage[Integrating software RAID arrays with LVM Storage].
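For example, a hedged `LVMCluster` excerpt that points both selector fields at `mdadm`-created arrays might look like the following; the array paths are placeholders, and the full procedure is in the linked documentation.

[source,yaml]
----
# Sketch only: reference software RAID arrays created with mdadm.
# Paths listed under optionalPaths are used only on nodes where they exist.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
    - name: vg1
      default: true
      deviceSelector:
        paths:
        - /dev/md/raid1-data
        optionalPaths:
        - /dev/md/raid1-scratch
----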
[id="ocp-4.14-storage-fips-conpliance-support-for-lvms"]
[id="ocp-4.15-storage-fips-conpliance-support-for-lvms"]
==== FIPS compliance support for {lvms}
With this release, {lvms} is designed for Federal Information Processing Standards (FIPS). When {lvms} is installed on {product-title} in FIPS mode, {lvms} uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-3 validation only on the x86_64 architecture.
[id="ocp-4-15-storage-retro-sc-assignment"]
==== Retroactive default StorageClass assignment is generally available
Prior to {product-title} 4.13, if there was no default storage class, persistent volumes claims (PVCs) that were created that requested the default storage class remained stranded in the pending state indefinitely, unless you manually delete and recreate them. Starting with {product-title} 4.14, as a Technology Preview feature, the default storage class is assigned to these PVCs retroactively so that they do not remain in the pending state. After a default storage class is created, or one of the existing storage classes is declared the default, these previously stranded PVCs are assigned to the default storage class. This feature is now generally available.
Before {product-title} 4.13, if there was no default storage class, persistent volumes claims (PVCs) that were created that requested the default storage class remained stranded in the pending state indefinitely, unless you manually delete and recreate them. Starting with {product-title} 4.14, as a Technology Preview feature, the default storage class is assigned to these PVCs retroactively so that they do not remain in the pending state. After a default storage class is created, or one of the existing storage classes is declared the default, these previously stranded PVCs are assigned to the default storage class. This feature is now generally available.
For more information, see xref:..//storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#absent-default-storage-class[Absent default storage class].
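As an illustration, declaring an existing storage class as the default uses the standard Kubernetes annotation shown in the following sketch; the class name and provisioner are placeholders.

[source,yaml]
----
# Sketch only: mark an existing storage class as the cluster default so that
# previously stranded PVCs are retroactively assigned to it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.example.vendor.com
----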
@@ -764,10 +764,10 @@ For more information, see xref:..//storage/container_storage_interface/persisten
The user-managed encryption feature allows you to provide keys during installation that encrypt {product-title} node root volumes, and enables all managed storage classes to use the specified encryption key to encrypt provisioned storage volumes. This feature was introduced in {product-title} 4.13 for Google Cloud Platform (GCP) persistent disk (PD) storage, Microsoft Azure Disk, and Amazon Web Services (AWS) Elastic Block storage (EBS), and is now supported on IBM Virtual Private Cloud (VPC) Block storage.
//TODO: For more information, see xref:..//storage/container_storage_interface/persistent-storage-csi-ibm-vpc-block.adoc#byok_persistent-storage-csi-ibm-vpc-block [User-managed encryption].
//
[id="ocp-4-15-storage-selinux-relabling-mount-options"]
==== SELinux relabeling using mount options (Technology Preview)
Previously, when SELinux was enabled, the persistent volume's (PV's) files were relabeled when attaching the PV to the pod, potentially causing timeouts when the PVs contained a lot of files, as well as overloading the storage backend.
Previously, when SELinux was enabled, the persistent volume's (PV's) files were relabeled when attaching the PV to the pod, potentially causing timeouts when the PVs contained many files, as well as overloading the storage backend.
In {product-title} 4.15, for Container Storage Interface (CSI) drivers that support this feature, the driver mounts the volume directly with the correct SELinux labels, eliminating the need to recursively relabel the volume, and pod startup can be significantly faster.
@@ -786,7 +786,7 @@ If the following conditions are true, the feature is enabled by default:
* The pod that uses the persistent volume has a full SELinux label specified in its `spec.securityContext` or `spec.containers[*].securityContext` by using the `restricted` SCC.
* Access mode set to `ReadWriteOncePod` for the volume.
* Access mode set to `ReadWriteOncePod` for the volume.
[id="ocp-4-15-oci"]
=== Oracle(R) Cloud Infrastructure
@@ -824,11 +824,14 @@ Support for version ranges::
You can specify a version range by using a comparison string in an Operator or extension's custom resource (CR). If you specify a version range in the CR, {olmv1} installs or updates to the latest version of the Operator that can be resolved within the version range. For more information, see xref:../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-updating-an-operator_olmv1-installing-operator[Updating an Operator] and xref:../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-version-range-support_olmv1-installing-operator[Support for version ranges]
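The exact resource schema is not shown in this excerpt, so the following is only a hedged sketch of where a comparison string could appear; the API group, kind, and field names are assumptions, and the linked topics are authoritative.

[source,yaml]
----
# Sketch only: a version range expressed as a comparison string.
# apiVersion, kind, and field names are assumed for illustration.
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: my-operator
spec:
  packageName: my-operator
  version: ">=1.2.0, <2.0.0"
----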
Performance improvements in the Catalog API::
The Catalog API now uses an HTTP service to serve catalog content on the cluster. Previously, custom resource definitions (CRDs) were used for this purpose. The transition to using an HTTP service to serve catalog content reduces the load on the Kubenertes API server. For more information, see xref:../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-finding-operators-to-install_olmv1-installing-operator[Finding Operators to install from a catalog].
The Catalog API now uses an HTTP service to serve catalog content on the cluster. Previously, custom resource definitions (CRDs) were used for this purpose. The change to using an HTTP service to serve catalog content reduces the load on the Kubernetes API server. For more information, see xref:../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-finding-operators-to-install_olmv1-installing-operator[Finding Operators to install from a catalog].
include::snippets/olmv1-cli-only.adoc[]
For more information, see xref:../operators/olm_v1/index.adoc#olmv1-about[About Operator Lifecycle Manager 1.0].
//
//[id="ocp-4-15-osdk"]
//=== Operator development
[id="ocp-4-15-deprecation"]
==== Deprecation schema for Operator catalogs
@@ -878,7 +881,7 @@ You can configure unprivileged pods with the `/dev/fuse` device to access faster
For more information, see xref:../nodes/containers/nodes-containers-dev-fuse.adoc#nodes-containers-dev-fuse[Accessing faster builds with /dev/fuse].
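A minimal sketch of an unprivileged pod that requests the device through the CRI-O devices annotation is shown below; the pod name and image are placeholders, and the linked topic describes the supported configuration.

[source,yaml]
----
# Sketch only: expose /dev/fuse to an unprivileged pod for faster builds.
apiVersion: v1
kind: Pod
metadata:
  name: fuse-build-example
  annotations:
    io.kubernetes.cri-o.Devices: "/dev/fuse"
spec:
  containers:
  - name: build
    image: quay.io/buildah/stable
    command: ["sleep", "infinity"]
----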
[id="ocp-4-15-nodes-log-linking"]
==== Log linking is enabled dby efault
==== Log linking is enabled by default
Beginning with {product-title} 4.15, log linking is enabled by default. Log linking gives you access to the container logs for your pods.
@@ -946,7 +949,7 @@ For monitoring deployments that have shown sub-optimal chunk compression for dat
[id="ocp-4-15-monitoring-improved-staleness-handling-for-the-kubelet-service-monitor"]
==== Improved staleness handling for the kubelet service monitor
Staleness handling for the kubelet service monitor has been improved in order to ensure that alerts and time aggregations are accurate.
Staleness handling for the kubelet service monitor has been improved to ensure that alerts and time aggregations are accurate.
This improved functionality is active by default and makes the dedicated service monitors feature obsolete.
As a result, the dedicated service monitors feature has been disabled and is now deprecated, and setting the `DedicatedServiceMonitors` resource to `enabled` has no effect.
@@ -960,7 +963,7 @@ If the Cluster Monitoring Operator (CMO) reports task failures, the following re
[id="ocp-4-15-network-observability-1-5"]
=== Network Observability Operator
The Network Observability Operator releases updates independently from the {product-title} minor version release stream. Updates are available through a single, rolling stream which is supported on all currently supported versions of {product-title} 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator is found in the xref:../network_observability/network-observability-operator-release-notes.adoc[Network Observability release notes].
The Network Observability Operator releases updates independently from the {product-title} minor version release stream. Updates are available through a single, Rolling Stream which is supported on all currently supported versions of {product-title} 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator is found in the xref:../network_observability/network-observability-operator-release-notes.adoc#network-observability-rn[Network Observability release notes].
[id="ocp-4-15-scalability-and-performance"]
=== Scalability and performance
@@ -995,14 +998,13 @@ For more information, see xref:../scalability_and_performance/cnf-performing-pla
[id="ocp-4-15-bare-metal-operator"]
==== Bare Metal Operator
For {product-title} {product-version}, when the Bare Metal Operator removes a host from the cluster it also powers off the host. This enhancement streamlines hardware maintenance and management.
//
//[id="ocp-4-15-hcp"]
//=== Hosted control planes
[id="ocp-4-15-hcp"]
=== Hosted control planes
[id="ocp-4-15-insights-operator"]
=== Insights Operator
//[id="ocp-4-15-insights-operator"]
//=== Insights Operator
//
[id="ocp-4-15-notable-technical-changes"]
== Notable technical changes
@@ -1013,7 +1015,7 @@ For {product-title} {product-version}, when the Bare Metal Operator removes a ho
//[discrete]
//[id="ocp-4-15-cluster-cloud-controller-manager-operator"]
//=== Cloud controller managers for additional cloud providers
//
[discrete]
[id="ocp-4-15-ports-use-tls"]
=== Cluster metrics ports secured
@@ -1072,9 +1074,9 @@ Previously, the IP address range `168.254.0.0/16` was the default IP address ran
[id="ocp-4-15-no-strict-limits-variable"]
=== Introduction of HAProxy no strict-limits variable
The transition to HAProxy 2.6 included enforcement for the `strict-limits` configuration, which resulted in fatal errors when `maxConnections` requirements could not be met. The `strict-limits` setting is not configurable by end users and remains under the control of the HAProxy template.
The transition to HAProxy 2.6 included enforcement for the `strict-limits` configuration, which resulted in unrecoverable errors when `maxConnections` requirements could not be met. The `strict-limits` setting is not configurable by end users and remains under the control of the HAProxy template.
This release introduces a configuration adjustment in response to the transition to the `maxConnections` issues. Now, the HAProxy configuration switches to using `no strict-limits`. As a result, HAProxy no longer fatally exits when the `maxConnection` configuration cannot be satisfied. Instead, it emits warnings and continues running. When `maxConnection` limitations cannot be met, warning like the following examples might be returned:
This release introduces a configuration adjustment in response to the migration to the `maxConnections` issues. Now, the HAProxy configuration switches to using `no strict-limits`. As a result, HAProxy no longer fatally exits when the `maxConnection` configuration cannot be satisfied. Instead, it emits warnings and continues running. When `maxConnection` limitations cannot be met, warnings such as the following examples might be returned:
* `[WARNING] (50) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 4000237, limit is 1048576.`
* `[ALERT] (50) : [/usr/sbin/haproxy.main()] FD limit (1048576) too low for maxconn=2000000/maxsock=4000237. Please raise 'ulimit-n' to 4000237 or more to avoid any trouble.`
@@ -1383,7 +1385,7 @@ The Bare Metal Event Relay Operator is deprecated. The ability to monitor bare-m
==== Dedicated service monitors for core platform monitoring
With this release, the dedicated service monitors feature for core platform monitoring is deprecated.
The ability to enable dedicated service monitors by configuring the `dedicatedServiceMonitors` setting in the `cluster-monitoring-config` config map object in the `openshift-monitoring` namespace will be removed in a future {product-title} release.
To replace this feature, Prometheus functionality has been improved in order to ensure that alerts and time aggregations are accurate.
To replace this feature, Prometheus functionality has been improved to ensure that alerts and time aggregations are accurate.
This improved functionality is active by default and makes the dedicated service monitors feature obsolete.
[id="ocp-4-15-deprecated-oc-registry-info"]
@@ -1391,7 +1393,7 @@ This improved functionality is active by default and makes the dedicated service
With this release, the experimental `oc registry info` command is deprecated.
To view information about the integrated {product-registry}, run `oc get imagestream -n openshift` and check the `IMAGE REPOSITORY` column.
To view information about the integrated {product-registry}, run `oc get imagestream -n openshift` and check the `IMAGE REPOSITORY` column.
[id="ocp-4-15-removed-features"]
=== Removed features
@@ -1417,9 +1419,9 @@ To view information about the integrated {product-registry}, run `oc get images
As of {product-title} {product-version}, support for installing clusters on {rh-openstack} with kuryr is removed.
[id="ocp-4-15-future-deprecation"]
=== Notice of future deprecation
//[id="ocp-4-15-future-deprecation"]
//=== Notice of future deprecation
//
[id="ocp-4-15-future-removals"]
=== Future Kubernetes API removals
@@ -1483,7 +1485,7 @@ With {product-title} {product-version}, the `inspector.ipxe` configuration has b
[id="ocp-4-15-builds-bug-fixes"]
==== Builds
* Previously, timestamps were not preserved when copying contents between containers. With this release, the `-p` flag is added to the `cp` command in order to allow timestamps to be preserved. (link:https://issues.redhat.com/browse/OCPBUGS-22497[*OCPBUGS-22497*])
* Previously, timestamps were not preserved when copying contents between containers. With this release, the `-p` flag is added to the `cp` command to allow timestamps to be preserved. (link:https://issues.redhat.com/browse/OCPBUGS-22497[*OCPBUGS-22497*])
[discrete]
[id="ocp-4-15-cloud-compute-bug-fixes"]
@@ -1563,7 +1565,6 @@ With {product-title} {product-version}, the `inspector.ipxe` configuration has b
* Previously, the `cluster-backup.sh` script cached the `etcdctl` binary on the local machine indefinitely, making updates impossible. With this update, the `cluster-backup.sh` script pulls the latest `etcdctl` binary each time it is run. (link:https://issues.redhat.com/browse/OCPBUGS-19052[*OCPBUGS-19052*])
[discrete]
[id="ocp-4-15-hosted-control-plane-bug-fixes"]
==== Hosted Control Plane
@@ -1679,11 +1680,10 @@ With this update, the `assisted-installer-controller` can remove the uninitializ
==== Kubernetes Controller Manager
* Previously, when the `maxSurge` field was set for a daemon set and the toleration was updated, pods failed to scale down, which resulted in a failed rollout due to a different set of nodes being used for scheduling. With this release, nodes are properly excluded if scheduling constraints are not met, and rollouts can complete successfully. (link:https://issues.redhat.com/browse/OCPBUGS-19452[*OCPBUGS-19452*])
[discrete]
[id="ocp-4-15-kube-scheduler-bug-fixes"]
==== Kubernetes Scheduler
//
//[discrete]
//[id="ocp-4-15-kube-scheduler-bug-fixes"]
//==== Kubernetes Scheduler
[discrete]
[id="ocp-4-15-machine-config-operator-bug-fixes"]
@@ -1765,7 +1765,7 @@ With this update, the Ingress Operator no longer adds the `clientca-configmap` f
* Previously, the Azure upstream DNS did not comply with non-EDNS DNS queries because it returned a payload larger than 512 bytes. Because CoreDNS 1.10.1 no longer uses EDNS for upstream queries and only uses EDNS when the original client query uses EDNS, the combination would result in an overflow `servfail` error when the upstream returned a payload larger than 512 bytes for non-EDNS queries using CoreDNS 1.10.1. Consequently, upgrading from {product-title} 4.12 to 4.13 led to some DNS queries failing that previously worked.
+
With this release, instead of returning an overflow `servfail` error, the CoreDNS now truncates the response, indicating that the client can try again in TCP. As a result, clusters with a non-compliant upstream now retry with TCP when experiencing overflow errors. This prevents any disruption of functionality between {product-title} 4.12 and 4.13. (link:https://issues.redhat.com/browse/OCPBUGS-27904[*OCPBUGS-27904*]), (link:https://issues.redhat.com/browse/OCPBUGS-28205[*OCPBUGS-28205*])
With this release, instead of returning an overflow `servfail` error, the CoreDNS now truncates the response, indicating that the client can try again in TCP. As a result, clusters with a noncompliant upstream now retry with TCP when experiencing overflow errors. This prevents any disruption of functionality between {product-title} 4.12 and 4.13. (link:https://issues.redhat.com/browse/OCPBUGS-27904[*OCPBUGS-27904*]), (link:https://issues.redhat.com/browse/OCPBUGS-28205[*OCPBUGS-28205*])
* Previously, there was a limitation in private Microsoft Azure clusters where secondary IP addresses designated as egress IP addresses lacked outbound connectivity. This meant that pods associated with these IP addresses were unable to access the internet. However, they could still reach external servers within the infrastructure network, which is the intended use case for egress IP addresses. This update enables egress IP addresses for Microsoft Azure clusters, allowing outbound connectivity to be achieved through outbound rules. (link:https://issues.redhat.com/browse/OCPBUGS-5491[*OCPBUGS-5491*])
@@ -1779,7 +1779,7 @@ With this release, instead of returning an overflow `servfail` error, the CoreDN
* Previously, the `spec.desiredState.ovn.bridge-mappings` API configuration deleted all the external IDs in Open vSwitch (OVS) local tables on each Kubernetes node. As a result, the OVN chassis configuration was deleted, breaking the default cluster network. With this fix, you can use the `ovn.bridge-mappings` configuration without affecting the OVS configuration. (link:https://issues.redhat.com/browse/OCPBUGS-18869[*OCPBUGS-18869*])
* Previously, if NMEA sentences were lost on their way to the E810 controller, the T-GM would not be able to synchronize the devices in the network synchronization chain. If these conditions were met, the PTP operator reported an error. With this release, a fix is implemented to report 'FREERUN' in the event of a loss of the NMEA string. (link:https://issues.redhat.com/browse/OCPBUGS-20514[*OCPBUGS-20514*])
* Previously, if NMEA sentences were lost on their way to the E810 controller, the T-GM would not be able to synchronize the devices in the network synchronization chain. If these conditions were met, the PTP operator reported an error. With this release, a fix is implemented to report 'FREERUN' in case of a loss of the NMEA string. (link:https://issues.redhat.com/browse/OCPBUGS-20514[*OCPBUGS-20514*])
* Previously, pods assigned an IP from the pool created by the Whereabouts CNI plugin persisted in the `ContainerCreating` state after a node force reboot. With this release, the Whereabouts CNI plugin issue associated with the IP allocation after a node force reboot is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-18893[*OCPBUGS-18893*])
@@ -1807,11 +1807,11 @@ With this update, the third `ovnkube-control-plane` node has been removed. As a
[id="ocp-4-15-node-tuning-operator-bug-fixes"]
==== Node Tuning Operator (NTO)
Previously, the Tuned profile reports `Degraded` condition after applying a PerformanceProfile. The generated Tuned profile was trying to set a `sysctl` value for the default Receive Packet Steering (RPS) mask when it already configured the same value using an `/etc/sysctl.d` file. Tuned warns about that and the Node Tuning Operator (NTO) treats that as a degradation with the following message `The TuneD daemon issued one or more error message(s) when applying the profile profile. TuneD stderr: net.core.rps_default_mask`. With this update, the duplication was solved by not setting the default RPS mask using Tuned. The `sysctl.d` file was left in place as it applies early during boot. (link:https://issues.redhat.com/browse/OCPBUGS-25092[*OCPBUGS-25092*])
* Previously, the Tuned profile reports `Degraded` condition after applying a PerformanceProfile. The generated Tuned profile was trying to set a `sysctl` value for the default Receive Packet Steering (RPS) mask when it already configured the same value using an `/etc/sysctl.d` file. Tuned warns about that and the Node Tuning Operator (NTO) treats that as a degradation with the following message `The TuneD daemon issued one or more error message(s) when applying the profile profile. TuneD stderr: net.core.rps_default_mask`. With this update, the duplication was solved by not setting the default RPS mask using Tuned. The `sysctl.d` file was left in place as it applies early during boot. (link:https://issues.redhat.com/browse/OCPBUGS-25092[*OCPBUGS-25092*])
* Previously, the Node Tuning Operator (NTO) did not set the `UserAgent` and used a default one. With this update, the NTO sets the `UserAgent` appropriately, which makes debugging the cluster easier. (link:https://issues.redhat.com/browse/OCPBUGS-19785[*OCPBUGS-19785*])
* Previously, the Node Tuning Operator (NTO) did not set the `UserAgent` and used a default one. With this update, the NTO sets the `UserAgent` appropriately, which makes debugging the cluster easier. (link:https://issues.redhat.com/browse/OCPBUGS-19785[*OCPBUGS-19785*])
* Previously, when the Node Tuning Operator (NTO) pod restarted while there were a large number of CSVs in the cluster, the NTO pod would fail and entered into `CrashBackLoop` state. With this update, pagination has been added to the list CSVs requests and this avoids the `api-server` timeout issue that resulted in the `CrashBackLoop` state. (link:https://issues.redhat.com/browse/OCPBUGS-14241[*OCPBUGS-14241*])
* Previously, when the Node Tuning Operator (NTO) pod restarted while there were a large number of CSVs in the cluster, the NTO pod would fail and entered into `CrashBackLoop` state. With this update, pagination has been added to the list CSVs requests and this avoids the `api-server` timeout issue that resulted in the `CrashBackLoop` state. (link:https://issues.redhat.com/browse/OCPBUGS-14241[*OCPBUGS-14241*])
[discrete]
[id="ocp-4-15-openshift-cli-bug-fixes"]
@@ -1880,47 +1880,46 @@ The SCC API can globally affect scheduling on an {product-title} cluster. When a
====
+
(link:https://issues.redhat.com/browse/OCPBUGS-20347[*OCPBUGS-20347*])
//
//[discrete]
//[id="ocp-4-15-openshift-api-server-bug-fixes"]
//==== OpenShift API server
[discrete]
[id="ocp-4-15-openshift-api-server-bug-fixes"]
==== OpenShift API server
[discrete]
[id="ocp-4-15-rhcos-bug-fixes"]
==== {op-system-first}
//[discrete]
//[id="ocp-4-15-rhcos-bug-fixes"]
//==== {op-system-first}
//
[discrete]
[id="ocp-4-15-scalability-and-performance-bug-fixes"]
==== Scalability and performance
* Previously, a race condition between `udev` events and the creation queues associated with physical devices led to some of the queues being configured with the wrong Receive Packet Steering (RPS) mask when they should be reset to zero. This resulted in the RPS mask being configured on the queues of the physical devices, meaning they were using RPS instead of Receive Side Scaling (RSS), which could impact the performance. With this fix, the event was changed to be triggered per queue creation instead of at device creation. This guarantees that no queue will be missing. The queues of all physical devices are now set up with the correct RPS mask which is empty. (link:https://issues.redhat.com/browse/OCPBUGS-18662[*OCPBUGS-18662*])
* Previously, a race condition between `udev` events and the creation queues associated with physical devices led to some of the queues being configured with the wrong Receive Packet Steering (RPS) mask when they should be reset to zero. This resulted in the RPS mask being configured on the queues of the physical devices, meaning they were using RPS instead of Receive Side Scaling (RSS), which could impact the performance. With this fix, the event was changed to be triggered per queue creation instead of at device creation. This guarantees that no queue will be missing. The queues of all physical devices are now set up with the correct RPS mask which is empty. (link:https://issues.redhat.com/browse/OCPBUGS-18662[*OCPBUGS-18662*])
* Previously, due to differences in setting up a container’s `cgroup` hierarchy, containers that use the `crun` OCI runtime along with a `PerformanceProfile` configuration encountered performance degradation. With this release, when handling a `PerformanceProfile` request, CRI-O accounts for the differences in `crun` and correctly configures the CPU quota to ensure performance. (link:https://issues.redhat.com/browse/OCPBUGS-20492[*OCPBUGS-20492*])
* Previously, due to differences in setting up a container’s `cgroup` hierarchy, containers that use the `crun` OCI runtime along with a `PerformanceProfile` configuration encountered performance degradation. With this release, when handling a `PerformanceProfile` request, CRI-O accounts for the differences in `crun` and correctly configures the CPU quota to ensure performance. (link:https://issues.redhat.com/browse/OCPBUGS-20492[*OCPBUGS-20492*])
[discrete]
[id="ocp-4-15-storage-bug-fixes"]
==== Storage
* Prior to this release, {lvms} did not support disabling over-provisioning, and the minimum value for the `thinPoolConfig.overprovisionRatio` field in the `LVMCluster` CR was 2. With this release, you can disable over-provisioning by setting the value of the `thinPoolConfig.overprovisionRatio` field to 1. (link:https://issues.redhat.com/browse/OCPBUGS-24396[*OCPBUGS-24396*])
* Previously, {lvms} did not support disabling over-provisioning, and the minimum value for the `thinPoolConfig.overprovisionRatio` field in the `LVMCluster` CR was 2. With this release, you can disable over-provisioning by setting the value of the `thinPoolConfig.overprovisionRatio` field to 1. (link:https://issues.redhat.com/browse/OCPBUGS-24396[*OCPBUGS-24396*])
* Prior to this release, if the `LVMCluster` CR was created with an invalid device path in the `deviceSelector.optionalPaths` field, the `LVMCluster` CR was in `Progressing` state. With this release, if the `deviceSelector.optionalPaths` field contains an invalid device path, {lvms} updates the `LVMCluster` CR state to `Failed`. (link:https://issues.redhat.com/browse/OCPBUGS-23995[*OCPBUGS-23995*])
* Previously, if the `LVMCluster` CR was created with an invalid device path in the `deviceSelector.optionalPaths` field, the `LVMCluster` CR was in `Progressing` state. With this release, if the `deviceSelector.optionalPaths` field contains an invalid device path, {lvms} updates the `LVMCluster` CR state to `Failed`. (link:https://issues.redhat.com/browse/OCPBUGS-23995[*OCPBUGS-23995*])
* Prior to this release, the {lvms} resource pods were preempted while the cluster was congested. With this release, upon updating {product-title}, {lvms} configures the `priorityClassName` parameter to ensure proper scheduling and preemption behavior while the cluster is congested. (link:https://issues.redhat.com/browse/OCPBUGS-23375[*OCPBUGS-23375*])
* Previously, the {lvms} resource pods were preempted while the cluster was congested. With this release, upon updating {product-title}, {lvms} configures the `priorityClassName` parameter to ensure proper scheduling and preemption behavior while the cluster is congested. (link:https://issues.redhat.com/browse/OCPBUGS-23375[*OCPBUGS-23375*])
* Prior to this release, upon creating the `LVMCluster` CR, {lvms} skipped the counting of volume groups. This resulted in the `LVMCluster` CR moving to `Progressing` state even when the volume groups were valid. With this release, upon creating the `LVMCluster` CR, {lvms} counts all the volume groups, and updates the `LVMCluster` CR state to `Ready` if the volume groups are valid. (link:https://issues.redhat.com/browse/OCPBUGS-23191[*OCPBUGS-23191*])
* Previously, upon creating the `LVMCluster` CR, {lvms} skipped the counting of volume groups. This resulted in the `LVMCluster` CR moving to `Progressing` state even when the volume groups were valid. With this release, upon creating the `LVMCluster` CR, {lvms} counts all the volume groups, and updates the `LVMCluster` CR state to `Ready` if the volume groups are valid. (link:https://issues.redhat.com/browse/OCPBUGS-23191[*OCPBUGS-23191*])
* Prior to this release, if the default device class was not present on all selected nodes, {lvms} failed to set up the `LVMCluster` CR. With this release, {lvms} detects all the default device classes even if the default device class is present only on one of the selected nodes. With this update, you can define the default device class only on one of the selected nodes. (link:https://issues.redhat.com/browse/OCPBUGS-23181[*OCPBUGS-23181*])
* Previously, if the default device class was not present on all selected nodes, {lvms} failed to set up the `LVMCluster` CR. With this release, {lvms} detects all the default device classes even if the default device class is present only on one of the selected nodes. With this update, you can define the default device class only on one of the selected nodes. (link:https://issues.redhat.com/browse/OCPBUGS-23181[*OCPBUGS-23181*])
* Prior to this release, upon deleting the worker node in the single-node OpenShift (SNO) and worker node topology, the `LVMCluster` CR still included the configuration of the deleted worker node. This resulted in the `LVMCluster` CR remaining in `Progressing` state. With this release, upon deleting the worker node in the SNO and worker node topology, {lvms} deletes the worker node configuration in the `LVMCluster` CR, and updates the `LVMCluster` CR state to `Ready`. (link:https://issues.redhat.com/browse/OCPBUGS-13558[*OCPBUGS-13558*])
* Previously, upon deleting the worker node in the single-node OpenShift (SNO) and worker node topology, the `LVMCluster` CR still included the configuration of the deleted worker node. This resulted in the `LVMCluster` CR remaining in `Progressing` state. With this release, upon deleting the worker node in the SNO and worker node topology, {lvms} deletes the worker node configuration in the `LVMCluster` CR, and updates the `LVMCluster` CR state to `Ready`. (link:https://issues.redhat.com/browse/OCPBUGS-13558[*OCPBUGS-13558*])
* Previously, CPU limits for the AWS EFS CSI driver container could cause performance degradation of volumes managed by the AWS EFS CSI Driver Operator. With this release, the CPU limits from the AWS EFS CSI driver container have been removed to help prevent potential performance degradation. (link:https://issues.redhat.com/browse/OCPBUGS-28645[*OCPBUGS-28645*])
* Previously, if you used the `performancePlus` parameter in the Azure Disk CSI driver and provisioned volumes 512 GiB or smaller, you would receive an error from the driver that you need a disk size of at least 512 GiB. With this release, if you use the `performancePlus` parameter and provision volumes 512 GiB or smaller, the Azure Disk CSI driver automatically resizes volumes to be 513 GiB. (link:https://issues.redhat.com/browse/OCPBUGS-17542[*OCPBUGS-17542*])
[discrete]
[id="ocp-4-15-windows-containers-bug-fixes"]
==== Windows containers
// Added after OCP GA
//
//[discrete]
//[id="ocp-4-15-windows-containers-bug-fixes"]
//==== Windows containers
[id="ocp-4-15-technology-preview-tables"]
== Technology Preview features status
@@ -2592,14 +2591,8 @@ If NMEA sentences are lost on their way to the e810 NIC, the T-GM cannot synchro
A proposed fix is to report a `FREERUN` event when the NMEA string is lost.
(link:https://issues.redhat.com/browse/OCPBUGS-19838[*OCPBUGS-19838*])
* If you have IPsec enabled on the cluster and IPsec encryption is configured between the cluster and an external node, stopping the IPsec connection on the external node causes a loss of connectivity to the external node. This connectivity loss occurs because on the {product-title} side of the connection, the IPsec tunnel shutdown is not recognized. (link:https://issues.redhat.com/browse/RHEL-24802[*RHEL-24802*])
* If you have IPsec enabled on the cluster, and your cluster is a hosted control planes for {product-title} cluster, the MTU adjustment to account for the IPsec tunnel for pod-to-pod traffic does not happen automatically. (link:https://issues.redhat.com/browse/OCPBUGS-28757[*OCPBUGS-28757*])
* If you have IPsec enabled on the cluster, you cannot modify existing IPsec tunnels to external hosts that you have created. Modifying an existing NMState Operator `NodeNetworkConfigurationPolicy` object to adjust an existing IPsec configuration to encrypt traffic to external hosts is not recognized by {product-title}. (link:https://issues.redhat.com/browse/RHEL-22720[*RHEL-22720*])
[id="ocp-telco-ran-4-15-known-issues"]
//[id="ocp-telco-ran-4-15-known-issues"]
//
[id="ocp-telco-core-4-15-known-issues"]
* Currently, defining a `sysctl` value for a setting with a slash in its name, such as for bond devices, in the `profile` field of a Tuned resource might not work. Values with a slash in the `sysctl` option name are not mapped correctly to the `/proc` filesystem. As a workaround, create a `MachineConfig` resource that places a configuration file with the required values in the `/etc/sysctl.d` node directory. (link:https://issues.redhat.com/browse/RHEL-3707[*RHEL-3707*])
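A hedged sketch of that workaround, assuming a worker-role `MachineConfig` and a placeholder bond VLAN interface, might look like the following; the sysctl key and value are illustrative only.

[source,yaml]
----
# Sketch only: place a sysctl configuration file on worker nodes instead of using
# the Tuned profile. The key below (for a bond0.100 VLAN interface) is a placeholder.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-bond-sysctl
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/sysctl.d/99-bond0-100.conf
        mode: 0644
        overwrite: true
        contents:
          source: "data:,net.ipv4.conf.bond0/100.arp_announce%20=%202%0A"
----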
@@ -2610,7 +2603,7 @@ This issue affects CPU load balancing features because these features depend on
* When a node reboot occurs, all pods are restarted in a random order. In this scenario, it is possible that the `tuned` pod starts after the workload pods. This means the workload pods start with partial tuning, which can affect performance or even cause the workload to fail. (link:https://issues.redhat.com/browse/OCPBUGS-26400[*OCPBUGS-26400*])
* The installation of {product-title} might fail when a performance profile is present in the extra manifests folder and targets the master or worker pools. This is caused by the internal install ordering that processes the performance profile before the default master and worker `MachineConfigPools` are created. It is possible to workaround this issue by including a copy of the stock master or worker `MachineConfigPools` in the extra manifests folder. (link:https://issues.redhat.com/browse/OCPBUGS-27948[*OCPBUGS-27948*]) (link:https://issues.redhat.com/browse/OCPBUGS-18640[*OCPBUGS-18640*])
* The installation of {product-title} might fail when a performance profile is present in the extra manifests folder and targets the primary or worker pools. This is caused by the internal install ordering that processes the performance profile before the default primary and worker `MachineConfigPools` are created. It is possible to workaround this issue by including a copy of the stock primary or worker `MachineConfigPools` in the extra manifests folder. (link:https://issues.redhat.com/browse/OCPBUGS-27948[*OCPBUGS-27948*]) (link:https://issues.redhat.com/browse/OCPBUGS-18640[*OCPBUGS-18640*])
[id="ocp-4-15-asynchronous-errata-updates"]
== Asynchronous errata updates