diff --git a/release_notes/ocp-4-20-release-notes.adoc b/release_notes/ocp-4-20-release-notes.adoc index 010f179ba3..b69a021492 100644 --- a/release_notes/ocp-4-20-release-notes.adoc +++ b/release_notes/ocp-4-20-release-notes.adoc @@ -601,6 +601,11 @@ The machine config nodes custom resource, which you can use to monitor the progr You can now view the status of updates to custom machine config pools in addition to the control plane and worker pools. The functionality for the feature has not changed. However, some of the information in the command output and in the status fields in the `MachineConfigNode` object has been updated. The `must-gather` for the Machine Config Operator now includes all `MachineConfigNodes` objects in the cluster. For more information, see xref:../machine_configuration/index.adoc#checking-mco-node-status_machine-config-overview[About checking machine config node status]. +[id="ocp-release-notes-auth-hostmount-anyuid-v2-scc_{context}"] +==== Enabling direct host file system access with the `hostmount-anyuid-v2` SCC + +This release includes a new security context constraint (SCC), named `hostmount-anyuid-v2`. This SCC provides the same features as the `hostmount-anyuid` SCC, but contains `seLinuxContext: RunAsAny`. This SCC was added because the `hostmount-anyuid` SCC was intended to allow trusted pods to access any paths on the host, but SELinux prevents containers from accessing most paths. The `hostmount-anyuid-v2` SCC allows host file system access as any UID, including UID 0, and is intended to be used instead of the `privileged` SCC. Grant this SCC with caution. + [id="ocp-release-notes-machine-management_{context}"] === Machine management @@ -1377,7 +1382,7 @@ With this release, kubelet no longer reports resources for terminated pods, whic [id="ocp-release-note-bare-metal-hardware-bug-fixes_{context}"] === Bare Metal Hardware Provisioning -* Before this update, when installing a dual-stack cluster on bare metal by using installer-provisioned infrastructure, the installation failed because the Virtual Media URL was IPv4 instead of IPv6. As IPv4 was unreachable, the bootstrap failed on the virtual machine (VM) and cluster nodes were not created. With this release, when you install a dual-stack cluster on bare metal for installer-provisioned infrastructure, the dual-stack cluster uses the Virtual Media URL IPv6 and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-60240[OCPBUGS-60240]) +* Before this update, when installing a dual-stack cluster on bare metal by using installer-provisioned infrastructure, the installation failed because the Virtual Media URL was IPv4 instead of IPv6. As IPv4 was unreachable, the bootstrap failed on the virtual machine (VM) and cluster nodes were not created. With this release, when you install a dual-stack cluster on bare metal for installer-provisioned infrastructure, the dual-stack cluster uses the Virtual Media URL IPv6 and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-60240[OCPBUGS-60240]) * Before this update, when installing a cluster with the bare metal as a service (BMaaS) API, an ambiguous validation error was reported. When you set an image URL without a checksum, BMaaS failed to validate the deployment image source information. With this release, when you do not provide a required checksum for an image, a clear message is reported.
(link:https://issues.redhat.com/browse/OCPBUGS-57472[OCPBUGS-57472]) @@ -1396,113 +1401,49 @@ With this release, kubelet no longer reports resources for terminated pods, whic [id="ocp-release-note-cloud-compute-bug-fixes_{context}"] === Cloud Compute -* Before this update, {aws-short} compute machine sets could include a null value for the `userDataSecret` parameter. -Using a null value sometimes caused machines to get stuck in the `Provisioning` state. With this release, the `userDataSecret` parameter requires a value. -(link:https://issues.redhat.com/browse/OCPBUGS-55135[OCPBUGS-55135]) +* Before this update, {aws-short} compute machine sets could include a null value for the `userDataSecret` parameter. Using a null value sometimes caused machines to get stuck in the `Provisioning` state. With this release, the `userDataSecret` parameter requires a value. (link:https://issues.redhat.com/browse/OCPBUGS-55135[OCPBUGS-55135]) -* Before this update, {product-title} clusters on {aws-short} that were created with version 4.13 or earlier could not update to version 4.19. -Clusters that were created with version 4.14 and later have an {aws-short} `cloud-conf` ConfigMap by default, and this ConfigMap is required starting in {product-title} 4.19. -With this release, the Cloud Controller Manager Operator creates a default `cloud-conf` ConfigMap when none is present on the cluster. -This change enables clusters that were created with version 4.13 or earlier to update to version 4.19. -(link:https://issues.redhat.com/browse/OCPBUGS-59251[OCPBUGS-59251]) +* Before this update, {product-title} clusters on {aws-short} that were created with version 4.13 or earlier could not update to version 4.19. Clusters that were created with version 4.14 and later have an {aws-short} `cloud-conf` ConfigMap by default, and this ConfigMap is required starting in {product-title} 4.19. With this release, the Cloud Controller Manager Operator creates a default `cloud-conf` ConfigMap when none is present on the cluster. This change enables clusters that were created with version 4.13 or earlier to update to version 4.19. (link:https://issues.redhat.com/browse/OCPBUGS-59251[OCPBUGS-59251]) -* Before this update, a `failed to find machine for node ...` appeared in the logs when the `InternalDNS` address for a machine was not set as expected. -As a consequence, the user might interpret this error as the machine not existing. -With this release, the log message reads `failed to find machine with InternalDNS matching ...`. -As a result, the user has a clearer indication of why the match is failing. -(link:https://issues.redhat.com/browse/OCPBUGS-19856[OCPBUGS-19856]) +* Before this update, a `failed to find machine for node ...` message was displayed in the logs when the `InternalDNS` address for a machine was not set as expected. As a consequence, the user might interpret this error as the machine not existing. With this release, the log message reads `failed to find machine with InternalDNS matching ...`. As a result, the user has a clearer indication of why the match is failing. (link:https://issues.redhat.com/browse/OCPBUGS-19856[OCPBUGS-19856]) -* Before this update, a bug fix altered the availability set configuration by changing the fault domain count to use the maximum available value instead of being fixed at 2. -This inadvertently caused scaling issues for compute machine sets that were created prior to the bug fix, because the controller attempted to modify immutable availability sets. 
-With this release, availability sets are no longer modified after creation, allowing affected compute machine sets to scale properly. -(link:https://issues.redhat.com/browse/OCPBUGS-56380[OCPBUGS-56380]) +* Before this update, a bug fix altered the availability set configuration by changing the fault domain count to use the maximum available value instead of being fixed at 2. This inadvertently caused scaling issues for compute machine sets that were created prior to the bug fix, because the controller attempted to modify immutable availability sets. With this release, availability sets are no longer modified after creation, allowing affected compute machine sets to scale properly. (link:https://issues.redhat.com/browse/OCPBUGS-56380[OCPBUGS-56380]) -* Before this update, compute machine sets migrating from the Cluster API to the Machine API got stuck in the `Migrating` state. -As a consequence, the compute machine set could not finish transitioning to use a different authoritative API or perform further reconciliation of the `MachineSet` object status. -With this release, the migration controllers watch for changes in Cluster API resources and react to authoritative API transitions. -As a result, compute machine sets successfully transition from the Cluster API to the Machine API. -(link:https://issues.redhat.com/browse/OCPBUGS-56487[OCPBUGS-56487]) +* Before this update, compute machine sets migrating from the Cluster API to the Machine API got stuck in the `Migrating` state. As a consequence, the compute machine set could not finish transitioning to use a different authoritative API or perform further reconciliation of the `MachineSet` object status. With this release, the migration controllers watch for changes in Cluster API resources and react to authoritative API transitions. +As a result, compute machine sets successfully transition from the Cluster API to the Machine API. (link:https://issues.redhat.com/browse/OCPBUGS-56487[OCPBUGS-56487]) -* Before this update, for the `maxUnhealthy` field in the `MachineHealthCheck` custom resource definition (CRD), it did not document the default value. -With this release, the CRD documents the default value. -(link:https://issues.redhat.com/browse/OCPBUGS-61314[OCPBUGS-61314]) +* Before this update, the `maxUnhealthy` field in the `MachineHealthCheck` custom resource definition (CRD) did not document the default value. With this release, the CRD documents the default value. (link:https://issues.redhat.com/browse/OCPBUGS-61314[OCPBUGS-61314]) -* Before this update, it was possible to specify the use of the `CapacityReservationsOnly` capacity reservation behavior and Spot Instances in the same machine template. -As a consequence, machines with these two incompatible settings were created. -With this release, validation of machine templates ensures that these two incompatible settings do not co-occur. -As a result, machines with these two incompatible settings cannot be created. +* Before this update, it was possible to specify the use of the `CapacityReservationsOnly` capacity reservation behavior and `SpotInstances` in the same machine template. As a consequence, machines with these two incompatible settings were created. With this release, validation of machine templates ensures that these two incompatible settings are not set at the same time. As a result, machines with these two incompatible settings cannot be created. 
(link:https://issues.redhat.com/browse/OCPBUGS-60943[OCPBUGS-60943]) -* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, deleting a nonauthoritative machine did not delete the corresponding authoritative machine. -As a consequence, orphaned machines that should have been cleaned up remained on the cluster and could cause a resource leak. -With this release, deleting a nonauthoritative machine triggers propagation of the deletion to the corresponding authoritative machine. -As a result, deletion requests on nonauthoritative machine correctly cascade, preventing orphaned authoritative machines and ensuring consistency in machine cleanup. -(link:https://issues.redhat.com/browse/OCPBUGS-55985[OCPBUGS-55985]) +* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, deleting a nonauthoritative machine did not delete the corresponding authoritative machine. As a consequence, orphaned machines that should have been cleaned up remained on the cluster and could cause a resource leak. With this release, deleting a nonauthoritative machine triggers propagation of the deletion to the corresponding authoritative machine. As a result, deletion requests on nonauthoritative machines correctly cascade, preventing orphaned authoritative machines and ensuring consistency in machine cleanup. (link:https://issues.redhat.com/browse/OCPBUGS-55985[OCPBUGS-55985]) -* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, the {cluster-capi-operator} could create an authoritative Cluster API compute machine set in the `Paused` state. -As a consequence, the newly created Cluster API compute machine set could not reconcile or scale machines even though it was using the authoritative API. -With this release, the Operator now ensures that Cluster API compute machine sets are created in an unpaused state when the Cluster API is authoritative. -As a result, newly created Cluster API compute machine sets are reconciled immediately and scaling and machine lifecycle operations proceed as intended when the Cluster API is authoritative. -(link:https://issues.redhat.com/browse/OCPBUGS-56604[OCPBUGS-56604]) +* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, the {cluster-capi-operator} could create an authoritative Cluster API compute machine set in the `Paused` state. As a consequence, the newly created Cluster API compute machine set could not reconcile or scale machines even though it was using the authoritative API. With this release, the Operator now ensures that Cluster API compute machine sets are created in an unpaused state when the Cluster API is authoritative. As a result, newly created Cluster API compute machine sets are reconciled immediately and scaling and machine lifecycle operations proceed as intended when the Cluster API is authoritative. (link:https://issues.redhat.com/browse/OCPBUGS-56604[OCPBUGS-56604]) -* Before this update, scaling large numbers of nodes was slow because scaling requires reconciling each machine several times and each machine was reconciled individually. -With this release, up to ten machines can be reconciled concurrently. -This change improves the processing speed for machines during scaling.
-(link:https://issues.redhat.com/browse/OCPBUGS-59376[OCPBUGS-59376]) +* Before this update, scaling large numbers of nodes was slow because scaling requires reconciling each machine several times and each machine was reconciled individually. With this release, up to ten machines can be reconciled concurrently. This change improves the processing speed for machines during scaling. (link:https://issues.redhat.com/browse/OCPBUGS-59376[OCPBUGS-59376]) -* Before this update, the {cluster-capi-operator} status controller used an unsorted list of related objects, leading to status updates when there were no functional changes. -As a consequence, users would see significant noise in the {cluster-capi-operator} object and in logs due to continuous and unnecessary status updates. -With this release, the status controller logic sorts the list of related objects before comparing them for changes. -As a result, a status update only occurs when there is a change to the Operator's state. -(link:https://issues.redhat.com/browse/OCPBUGS-56805[OCPBUGS-56805], link:https://issues.redhat.com/browse/OCPBUGS-58880[OCPBUGS-58880]) +* Before this update, the {cluster-capi-operator} status controller used an unsorted list of related objects, leading to status updates when there were no functional changes. As a consequence, users would see significant noise in the {cluster-capi-operator} object and in logs due to continuous and unnecessary status updates. With this release, the status controller logic sorts the list of related objects before comparing them for changes. +As a result, a status update only occurs when there is a change to the Operator's state. (link:https://issues.redhat.com/browse/OCPBUGS-56805[OCPBUGS-56805], link:https://issues.redhat.com/browse/OCPBUGS-58880[OCPBUGS-58880]) -* Before this update, the `config-sync-controller` component of the Cloud Controller Manager Operator did not display logs. -The issue is resolved in this release. -(link:https://issues.redhat.com/browse/OCPBUGS-56508[OCPBUGS-56508]) +* Before this update, the `config-sync-controller` component of the Cloud Controller Manager Operator did not display logs. The issue is resolved in this release. (link:https://issues.redhat.com/browse/OCPBUGS-56508[OCPBUGS-56508]) -* Before this update, the Control Plane Machine Set configuration used availability zones from compute machine sets. -This is not a valid configuration. -As a consequence, the Control Plane Machine Set could not be generated when the control plane machines were in a single zone while compute machine sets spanned multiple zones. -With this release, the Control Plane Machine Set derives an availability zone configuration from existing control plane machines. -As a result, the Control Plane Machine Set generates a valid zone configuration that accurately reflects the current control plane machines. -(link:https://issues.redhat.com/browse/OCPBUGS-52448[OCPBUGS-52448]) +* Before this update, the Control Plane Machine Set configuration used availability zones from compute machine sets. This is not a valid configuration. As a consequence, the Control Plane Machine Set could not be generated when the control plane machines were in a single zone while compute machine sets spanned multiple zones. With this release, the Control Plane Machine Set derives an availability zone configuration from existing control plane machines. +As a result, the Control Plane Machine Set generates a valid zone configuration that accurately reflects the current control plane machines. 
(link:https://issues.redhat.com/browse/OCPBUGS-52448[OCPBUGS-52448]) -* Before this update, the controller that annotates a Machine API compute machine set did not check whether the Machine API was authoritative before adding scale-from-zero annotations. -As a consequence, the controller repeatedly added these annotations and caused a loop of continuous changes to the `MachineSet` object. -With this release, the controller checks the value of the `authoritativeAPI` field before adding scale-from-zero annotations. -As a result, the controller avoids the looping behavior by only adding these annotations to a Machine API compute machine set when the Machine API is authoritative. -(link:https://issues.redhat.com/browse/OCPBUGS-57581[OCPBUGS-57581]) +* Before this update, the controller that annotates a Machine API compute machine set did not check whether the Machine API was authoritative before adding scale-from-zero annotations. As a consequence, the controller repeatedly added these annotations and caused a loop of continuous changes to the `MachineSet` object. With this release, the controller checks the value of the `authoritativeAPI` field before adding scale-from-zero annotations. +As a result, the controller avoids the looping behavior by only adding these annotations to a Machine API compute machine set when the Machine API is authoritative. (link:https://issues.redhat.com/browse/OCPBUGS-57581[OCPBUGS-57581]) -* Before this update, the Machine API Operator attempted to reconcile `Machine` resources on platforms other than {aws-short} where the `.status.authoritativeAPI` field was not populated. -As a consequence, compute machines remained in the `Provisioning` state indefinitely and never became operational. -With this release, the Machine API Operator now populates the empty `.status.authoritativeAPI` field with the corresponding value in the machine specification. -A guard is also added to the controllers to handle cases where this field might still be empty. -As a result, `Machine` and `MachineSet` resources are reconciled properly and compute machines no longer remain in the `Provisioning` state indefinitely. -(link:https://issues.redhat.com/browse/OCPBUGS-56849[OCPBUGS-56849]) +* Before this update, the Machine API Operator attempted to reconcile `Machine` resources on platforms other than {aws-short} where the `.status.authoritativeAPI` field was not populated. As a consequence, compute machines remained in the `Provisioning` state indefinitely and never became operational. With this release, the Machine API Operator now populates the empty `.status.authoritativeAPI` field with the corresponding value in the machine specification. A guard is also added to the controllers to handle cases where this field might still be empty. As a result, `Machine` and `MachineSet` resources are reconciled properly and compute machines no longer remain in the `Provisioning` state indefinitely. (link:https://issues.redhat.com/browse/OCPBUGS-56849[OCPBUGS-56849]) -* Before this update, the Machine API Provider Azure used an old version of the Azure SDK, which used an old API version that did not support referencing a Capacity Reservation group. -As a consequence, creating a Machine API machine that referenced a Capacity Reservation group in another subscription resulted in an Azure API error. -With this release, the Machine API Provider Azure uses a version of the Azure SDK that supports this configuration. 
-As a result, creating a Machine API machine that references a Capacity Reservation group in another subscription works as expected. -(link:https://issues.redhat.com/browse/OCPBUGS-55372[OCPBUGS-55372]) +* Before this update, the Machine API Provider Azure used an old version of the Azure SDK, which used an old API version that did not support referencing a Capacity Reservation group. As a consequence, creating a Machine API machine that referenced a Capacity Reservation group in another subscription resulted in an Azure API error. With this release, the Machine API Provider Azure uses a version of the Azure SDK that supports this configuration. +As a result, creating a Machine API machine that references a Capacity Reservation group in another subscription works as expected. (link:https://issues.redhat.com/browse/OCPBUGS-55372[OCPBUGS-55372]) -* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not correctly compare the machine specification when converting an authoritative Cluster API machine template to a Machine API machine set. -As a consequence, changes to the Cluster API machine template specification were not synchronized to the Machine API machine set. -With this release, changes to the comparison logic resolve the issue. -As a result, the Machine API machine set synchronizes correctly after the Cluster API machine set references the new Cluster API machine template. -(link:https://issues.redhat.com/browse/OCPBUGS-56010[OCPBUGS-56010]) +* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not correctly compare the machine specification when converting an authoritative Cluster API machine template to a Machine API machine set. As a consequence, changes to the Cluster API machine template specification were not synchronized to the Machine API machine set. With this release, changes to the comparison logic resolve the issue. As a result, the Machine API machine set synchronizes correctly after the Cluster API machine set references the new Cluster API machine template. (link:https://issues.redhat.com/browse/OCPBUGS-56010[OCPBUGS-56010]) -* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not delete the machine template when its corresponding Machine API machine set was deleted. -As a consequence, unneeded Cluster API machine templates persisted in the cluster and cluttered the `openshift-cluster-api` namespace. -With this release, the two-way synchronization controller correctly handles deletion synchronization for the machine template. -As a result, deleting a Machine API authoritative machine set deletes the corresponding Cluster API machine template. -(link:https://issues.redhat.com/browse/OCPBUGS-57195[OCPBUGS-57195]) +* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not delete the machine template when its corresponding Machine API machine set was deleted. As a consequence, unneeded Cluster API machine templates persisted in the cluster and cluttered the `openshift-cluster-api` namespace. With this release, the two-way synchronization controller correctly handles deletion synchronization for the machine template. 
As a result, deleting a Machine API authoritative machine set deletes the corresponding Cluster API machine template. (link:https://issues.redhat.com/browse/OCPBUGS-57195[OCPBUGS-57195]) -* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources prematurely reported a successful migration. -As a consequence, if any errors occurred when updating the status of related objects, the operation was not retried. -With this release, the controller ensures that all related object statuses are written before reporting a successful status. -As a result, the controller handles errors during migration better. -(link:https://issues.redhat.com/browse/OCPBUGS-57040[OCPBUGS-57040]) +* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources prematurely reported a successful migration. As a consequence, if any errors occurred when updating the status of related objects, the operation was not retried. With this release, the controller ensures that all related object statuses are written before reporting a successful status. As a result, the controller handles errors during migration better. (link:https://issues.redhat.com/browse/OCPBUGS-57040[OCPBUGS-57040]) [id="ocp-release-note-cloud-credential-operator-bug-fixes_{context}"] === Cloud Credential Operator @@ -1533,8 +1474,8 @@ As a result, the controller handles errors during migration better. [id="ocp-release-note-extensions-olmv1-bug-fixes_{context}"] === Extensions ({olmv1}) - -* Before this update, the preflight custom resource definition (CRD) safety check in {olmv1} blocked updates if it detected changes in the description fields of a CRD. With this update, the preflight CRD safety check does not block updates when there are changes to documentation fields. (link:https://issues.redhat.com/browse/OCPBUGS-55051[OCPBUGS-55051]) + +* Before this update, the preflight custom resource definition (CRD) safety check in {olmv1} blocked updates if it detected changes in the description fields of a CRD. With this update, the preflight CRD safety check does not block updates when there are changes to documentation fields. (link:https://issues.redhat.com/browse/OCPBUGS-55051[OCPBUGS-55051]) * Before this update, the catalogd and Operator Controller components did not display the correct version and commit information in the {oc-first}. With this update, the correct commit and version information is displayed. (link:https://issues.redhat.com/browse/OCPBUGS-23055[OCPBUGS-23055]) @@ -1560,9 +1501,17 @@ As a result, the controller handles errors during migration better. [id="ocp-release-note-machine-config-operator-bug-fixes_{context}"] === Machine Config Operator +* Before this update, an external actor could uncordon a node that the Machine Config Operator (MCO) was draining. As a consequence, the MCO and the scheduler would schedule and unschedule pods at the same time, prolonging the drain process. With this release, the MCO attempts to re-cordon the node if an external actor uncordons it during the drain process. As a result, the MCO and scheduler no longer schedule and remove pods at the same time.
(link:https://issues.redhat.com/browse/OCPBUGS-61516[OCPBUGS-61516]) +* Before this update, during an update from {product-title} 4.18.21 to {product-title} 4.19.6, the Machine Config Operator (MCO) failed due to multiple labels in the `capacity.cluster-autoscaler.kubernetes.io/labels` annotation in one or more machine sets. With this release, the MCO now accepts multiple labels in the `capacity.cluster-autoscaler.kubernetes.io/labels` annotation and no longer fails during the update to {product-title} 4.19.6. (link:https://issues.redhat.com/browse/OCPBUGS-60119[OCPBUGS-60119]) +* Before this update, the Machine Config Operator (MCO) certificate management failed during an Azure Red Hat OpenShift (ARO) upgrade to 4.19 due to missing infrastructure status fields. As a consequence, certificates were refreshed without required Subject Alternative Name (SAN) IPs, causing connectivity issues for upgraded ARO clusters. With this release, the MCO now adds and retains SAN IPs during certificate management in ARO, preventing immediate rotation on upgrade to 4.19. (link:https://issues.redhat.com/browse/OCPBUGS-59780[OCPBUGS-59780]) +* Before this update, when updating from a version of {product-title} prior to 4.15, the `MachineConfigNode` custom resource definitions (CRDs) feature was installed as Technology Preview (TP), causing the update to fail. This feature was fully introduced in {product-title} 4.16. With this release, the update no longer deploys the Technology Preview CRDs, ensuring a successful upgrade. (link:https://issues.redhat.com/browse/OCPBUGS-59723[OCPBUGS-59723]) + +* Before this update, the Machine Config Operator (MCO) was updating node boot images without checking whether the current boot image was from {gcp-first} or {aws-first} Marketplace. As a consequence, the MCO would override a marketplace boot image with a standard {product-title} image. With this release, for {aws-short} images, the MCO has a lookup table that contains all of the standard {product-title} installer Amazon Machine Images (AMIs), which it references before updating the boot image. For {gcp-first} images, the MCO checks the URL header before updating the boot image. As a result, the MCO no longer updates machine sets that have a marketplace boot image. (link:https://issues.redhat.com/browse/OCPBUGS-57426[OCPBUGS-57426]) + +* Before this update, {product-title} updates that shipped a change to CoreDNS templates would restart the `coredns` pod before the image pull for the updated base operating system (OS) image. As a consequence, a race occurred when the operating system update manager failed the image pull because of network errors, causing the update to stall. With this release, a retry update operation is added to the Machine Config Operator (MCO) to work around this race condition. (link:https://issues.redhat.com/browse/OCPBUGS-43406[OCPBUGS-43406]) [id="ocp-release-note-management-console-bug-fixes_{context}"] @@ -1645,13 +1594,13 @@ As a result, the controller handles errors during migration better. [id="ocp-release-note-olm-bug-fixes_{context}"] === {olmv0-first} - -* Before this update, bundle unpack jobs did not inherit control plane tolerances for the catalog Operator when they were created. As a result, bundle unpack jobs ran on worker nodes only. If no worker nodes were available due to taints, cluster administrators could not install or update Operators on the cluster.
With this release, {olmv0} adopts control plane tolerations for bundle unpack jobs and the jobs can run as part of the control plane. (link://https://issues.redhat.com/browse/OCPBUGS-58349[OCPBUGS-58349]) - -* Before this update, when an Operator supplied more than one API in an Operator group namespace, {olmv0} made unnecessary update calls to the cluster roles that were created for the Operator group. As a result, these unnecessary calls caused churn for ectd and the API server. With this update, {olmv0} does not make unnecessary update calls to the cluster role objects in Operator groups. (link:https://issues.redhat.com/browse/OCPBUGS-57222[OCPBUGS-57222]) + +* Before this update, bundle unpack jobs did not inherit control plane tolerations for the catalog Operator when they were created. As a result, bundle unpack jobs ran on worker nodes only. If no worker nodes were available due to taints, cluster administrators could not install or update Operators on the cluster. With this release, {olmv0} adopts control plane tolerations for bundle unpack jobs and the jobs can run as part of the control plane. (link:https://issues.redhat.com/browse/OCPBUGS-58349[OCPBUGS-58349]) + +* Before this update, when an Operator supplied more than one API in an Operator group namespace, {olmv0} made unnecessary update calls to the cluster roles that were created for the Operator group. As a result, these unnecessary calls caused churn for etcd and the API server. With this update, {olmv0} does not make unnecessary update calls to the cluster role objects in Operator groups. (link:https://issues.redhat.com/browse/OCPBUGS-57222[OCPBUGS-57222]) * Before this update, if the `olm-operator` pod crashed during cluster updates due to mislabeled resources, the notification message used the the `info` label. With this update, crash notification messages due to mislabeled resources use the `error` label instead. (link:https://issues.redhat.com/browse/OCPBUGS-53161[OCPBUGS-53161]) - + * Before this update, the catalog Operator scheduled catalog snapshots for every 5 minutes. On clusters with many namespaces and subscriptions, snapshots failed and cascaded across catalog sources. As a result, the spikes in CPU loads effectively blocked installing and updating Operators. With this update, catalog snapshots are scheduled for every 30 minutes to allow enough time for the snapshotes to resolve. (link:https://issues.redhat.com/browse/OCPBUGS-43966[OCPBUGS-43966]) [id="ocp-release-note-pao-bug-fixes_{context}"] @@ -2417,9 +2366,9 @@ In the following tables, features are marked with the following statuses: + There is no supported workaround for this issue. (link:https://issues.redhat.com/browse/OCPBUGS-57440[OCPBUGS-57440]) -* When installing a cluster on {azure-short}, if you set any of the `compute.platform.azure.identity.type`, `controlplane.platform.azure.identity.type`, or `platform.azure.defaultMachinePlatform.identity.type` field values to `None`, your cluster is unable to pull images from the Azure Container Registry. -You can avoid this issue by providing a user-assigned identity or by leaving the identity field blank. -In both cases, the installation program generates a user-assigned identity. +* When installing a cluster on {azure-short}, if you set any of the `compute.platform.azure.identity.type`, `controlplane.platform.azure.identity.type`, or `platform.azure.defaultMachinePlatform.identity.type` field values to `None`, your cluster is unable to pull images from the Azure Container Registry.
+You can avoid this issue by providing a user-assigned identity or by leaving the identity field blank. +In both cases, the installation program generates a user-assigned identity. (link:https://issues.redhat.com/browse/OCPBUGS-56008[OCPBUGS-56008]) [id="ocp-telco-core-release-known-issues_{context}"]