diff --git a/release_notes/ocp-4-15-release-notes.adoc b/release_notes/ocp-4-15-release-notes.adoc index 7342300a17..c3fb08b600 100644 --- a/release_notes/ocp-4-15-release-notes.adoc +++ b/release_notes/ocp-4-15-release-notes.adoc @@ -2779,6 +2779,60 @@ This section will continue to be updated over time to provide notes on enhanceme For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly. ==== +// 4.15.49 +[id="ocp-4-15-49_{context}"] +=== RHSA-2025:3790 - {product-title} 4.15.49 bug fix and security update + +Issued: 16 April 2025 + +{product-title} release 4.15.49, which includes security updates, is now available. The list of bug fixes that are included in this update is documented in the link:https://access.redhat.com/errata/RHSA-2025:3790[RHSA-2025:3790] advisory. The RPM packages that are included in this update are provided by the link:https://access.redhat.com/errata/RHBA-2025:3792[RHBA-2025:3792] advisory. + +Space precluded documenting all of the container images for this release in the advisory. + +You can view the container images in this release by running the following command: + +[source,terminal] +---- +$ oc adm release info 4.15.49 --pullspecs +---- + +[id="ocp-4-15-49-known-issues_{context}"] +==== Known issues + +* IPsec is not supported on {op-system-base-full} compute nodes because of a `libreswan` incompatibility issue between a host and an `ovn-ipsec` container that exist in each compute node. (link:https://issues.redhat.com/browse/OCPBUGS-36688[*OCPBUGS-36688*]) + +[id="ocp-4-15-49-bug-fixes_{context}"] +==== Bug fixes + +* Previously, an update to the {ibm-cloud-name} Cloud Internet Services (CIS) implementation impacted the upstream Terraform plugin. If you attempted to create an external-facing cluster on {ibm-cloud-name}, the following error occurred: ++ +[source,terminal] +---- +ERROR Error: Plugin did not respond +ERROR +ERROR with module.cis.ibm_cis_dns_record.kubernetes_api_internal[0], +ERROR on cis/main.tf line 27, in resource "ibm_cis_dns_record" "kubernetes_api_internal": +ERROR 27: resource "ibm_cis_dns_record" "kubernetes_api_internal" +---- ++ +With this release, you can use the installation program to create an external-facing cluster on {ibm-cloud-name} without the plugin issue. (link:https://issues.redhat.com/browse/OCPBUGS-54367[OCPBUGS-54367]) + +* Previously, when installing a cluster on {aws-first} in existing subnets that were located in edge zones, such as a Local Zone or a Wavelength Zone, the `kubernetes.io/cluster/:shared` tag was missing in the subnet resources of the edge zone. With this release, a fix ensures that all subnets that are used in the `install-config.yaml` configuration file have the required tag. (link:https://issues.redhat.com/browse/OCPBUGS-54353[OCPBUGS-54353]) + +* Previously, iSCSI and Fibre Channel devices attached by multipath did not resolve correctly when partitioned. This was caused by improper handling of multipath devices. With this release, the partitioned multipath storage is now correctly recognized. (link:https://issues.redhat.com/browse/OCPBUGS-53139[OCPBUGS-53139]) + +* Previously, the *Cluster Settings* page would not properly render during a cluster update if the Cluster Version Operator (CVO) did not receive a `Completed` update. With this release, the *Cluster Settings* page properly renders even if the CVO has not received a `Completed` update. 
(link:https://issues.redhat.com/browse/OCPBUGS-53138[OCPBUGS-53138]) + +* Previously, the *Observe* section on the web console did not show items contributed by plugins unless certain flags related to monitoring were set. However, these flags prevented other plugins, such as logging, distributed tracing, network observability, and so on, from adding items to the *Observe* section. With this release, the monitoring flags are removed so that other plugins can add items to the *Observe* section. (link:https://issues.redhat.com/browse/OCPBUGS-53055[OCPBUGS-53055]) + +* Previously, the ignition-server controller updated the `condition.Status` field with the same message in every reconcile loop, and these repeated updates overloaded the Kubernetes API server (KAS). With this release, the controller checks whether the message matches the existing message before updating the condition, so that the KAS is not overloaded. (link:https://issues.redhat.com/browse/OCPBUGS-50867[OCPBUGS-50867]) + +* Previously, a custom Security Context Constraint (SCC) could prevent pods that the Cluster Version Operator generated from receiving a cluster version upgrade. With this release, {product-title} now sets a default SCC on each pod, so that a custom SCC does not impact these pods. (link:https://issues.redhat.com/browse/OCPBUGS-50591[OCPBUGS-50591]) + +[id="ocp-4-15-49-updating_{context}"] +==== Updating +To update an {product-title} 4.15 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster by using the CLI]. + // 4.15.48 [id="ocp-4-15-48_{context}"] === RHSA-2025:3055 - {product-title} 4.15.48 bug fix and security update @@ -2799,13 +2853,13 @@ $ oc adm release info 4.15.48 --pullspecs [id="ocp-4-15-48-bug-fixes_{context}"] ==== Bug fixes -* Previously, the availability set fault domain count was hardcoded to `2`. This value works in most regions on {azure-first} because the fault domain counts are typically at least `2`, but failed in the `centraluseuap` and `eastusstg` regions. With this release, the availability set fault domain count in a region is set dynamically so that this issue no longer occurs. (link:https://issues.redhat.com/browse/OCPBUGS-53226[*OCPBUGS-53226*]) +* Previously, the availability set fault domain count was hardcoded to `2`. This value works in most regions on {azure-first} because the fault domain counts are typically at least `2`, but failed in the `centraluseuap` and `eastusstg` regions. With this release, the availability set fault domain count in a region is set dynamically so that this issue no longer occurs. (link:https://issues.redhat.com/browse/OCPBUGS-53226[OCPBUGS-53226]) -* Previously, the `trusted-ca-bundle-managed` ConfigMap component was a mandatory component. If you attempted to use a custom Public Key Infrastructure (PKI), the deployment would fail because the OpenShift API server expected the presence of the `trusted-ca-bundle-managed` ConfigMap component. With this release, this component is optional so that you can deploy clusters without the `trusted-ca-bundle-managed` config map component when you use a custom PKI. (link:https://issues.redhat.com/browse/OCPBUGS-52896[*OCPBUGS-52896*]) +* Previously, the `trusted-ca-bundle-managed` ConfigMap component was a mandatory component. 
If you attempted to use a custom Public Key Infrastructure (PKI), the deployment would fail because the OpenShift API server expected the presence of the `trusted-ca-bundle-managed` ConfigMap component. With this release, this component is optional so that you can deploy clusters without the `trusted-ca-bundle-managed` config map component when you use a custom PKI. (link:https://issues.redhat.com/browse/OCPBUGS-52896[OCPBUGS-52896]) -* Previously, the URL for the *Alert Rules* page on the web console was incorrect. With this release, the URL is fixed and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-52344[*OCPBUGS-52344*]) +* Previously, the URL for the *Alert Rules* page on the web console was incorrect. With this release, the URL is fixed and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-52344[OCPBUGS-52344]) -* Previously, Operator Lifecycle Manager (OLM) sometimes concurrently resolved the same namespace in a cluster. As a consequence, subscriptions reached a terminal state of `ConstraintsNotSatisfiable` because two concurrent processes interacted with a subscription, which caused a CSV file to become unassociated. With this release, OLM no longer concurrently resolves namespaces, so that OLM correctly processes a subscription without leaving a CSV file in an unassociated state. (link:https://issues.redhat.com/browse/OCPBUGS-48662[*OCPBUGS-48662*]) +* Previously, Operator Lifecycle Manager (OLM) sometimes concurrently resolved the same namespace in a cluster. As a consequence, subscriptions reached a terminal state of `ConstraintsNotSatisfiable` because two concurrent processes interacted with a subscription, which caused a CSV file to become unassociated. With this release, OLM no longer concurrently resolves namespaces, so that OLM correctly processes a subscription without leaving a CSV file in an unassociated state. (link:https://issues.redhat.com/browse/OCPBUGS-48662[OCPBUGS-48662]) [id="ocp-4-15-48-updating_{context}"] ==== Updating @@ -2831,13 +2885,13 @@ $ oc adm release info 4.15.47 --pullspecs [id="ocp-4-15-47-bug-fixes_{context}"] ==== Bug fixes -* Previously, an extra `name` prop was passed into the resource list page extensions used to list related operands on the *CSV details* page. This caused the operand list to be filtered by the `CSV` name, which often caused it to be an empty list. With this update, the operands are listed as expected. (link:https://issues.redhat.com/browse/OCPBUGS-51332[*OCPBUGS-51332*]) +* Previously, an extra `name` prop was passed into the resource list page extensions used to list related operands on the *CSV details* page. This caused the operand list to be filtered by the `CSV` name, which often caused it to be an empty list. With this update, the operands are listed as expected. (link:https://issues.redhat.com/browse/OCPBUGS-51332[OCPBUGS-51332]) -* Previously, incorrect addresses were passed to the Kubernetes EndpointSlice on a cluster. This issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red{nbsp}Hat Marketplace pods can successfully connect to the cluster API server. As a result, the installation of the MetalLB Operator and the handling of ingress traffic in IPv6 disconnected environments can occur. 
(link:https://issues.redhat.com/browse/OCPBUGS-51253[*OCPBUGS-51253*]) +* Previously, incorrect addresses were passed to the Kubernetes EndpointSlice on a cluster. This issue prevented the installation of the MetalLB Operator on an Agent-based cluster in an IPv6 disconnected environment. With this release, a fix modifies the address evaluation method. Red{nbsp}Hat Marketplace pods can successfully connect to the cluster API server. As a result, the installation of the MetalLB Operator and the handling of ingress traffic in IPv6 disconnected environments can occur. (link:https://issues.redhat.com/browse/OCPBUGS-51253[OCPBUGS-51253]) -* Previously,`konnectivity-https-proxy` did not have the additional trust bundles that were applied in the `configuration.proxy.trustCA` certificate. This caused hosted clusters to fail the provisioning process. With this release, the specified certificates are added to `Konnectivity` and propagate the proxy environment variables, allowing hosted clusters with secure proxies and custom certificates to successfully complete their provisioning. (link:https://issues.redhat.com/browse/OCPBUGS-52172[*OCPBUGS-52172*]) +* Previously, `konnectivity-https-proxy` did not have the additional trust bundles that were applied in the `configuration.proxy.trustCA` certificate. This caused hosted clusters to fail the provisioning process. With this release, the specified certificates are added to `Konnectivity` and propagate the proxy environment variables, allowing hosted clusters with secure proxies and custom certificates to successfully complete their provisioning. (link:https://issues.redhat.com/browse/OCPBUGS-52172[OCPBUGS-52172]) -* Previously, in the Red{nbsp}Hat {product-title} web console *Notifications* section, silenced alerts were visible in the notification drawer because the alerts did not include external labels. With this release, the alerts include external labels so that silenced alerts are not visible on the notification drawer. (link:https://issues.redhat.com/browse/OCPBUGS-49849[*OCPBUGS-49849*]) +* Previously, in the Red{nbsp}Hat {product-title} web console *Notifications* section, silenced alerts were visible in the notification drawer because the alerts did not include external labels. With this release, the alerts include external labels so that silenced alerts are not visible on the notification drawer. (link:https://issues.redhat.com/browse/OCPBUGS-49849[OCPBUGS-49849]) [id="ocp-4-15-47-updating_{context}"] ==== Updating @@ -2863,11 +2917,11 @@ $ oc adm release info 4.15.46 --pullspecs [id="ocp-4-15-46-bug-fixes_{context}"] ==== Bug fixes -* Previously, if you tried to rerun a resolver-based `PipelineRun` from the {product-title} console, the `Invalid PipelineRun configuration, unable to start Pipeline` UI message was displayed. With this release, you can rerun a resolver-based `PipelineRun` with no problem. (link:https://issues.redhat.com/browse/OCPBUGS-48593[*OCPBUGS-48593*]) +* Previously, if you tried to rerun a resolver-based `PipelineRun` from the {product-title} console, the `Invalid PipelineRun configuration, unable to start Pipeline` UI message was displayed. With this release, you can rerun a resolver-based `PipelineRun` with no problem. (link:https://issues.redhat.com/browse/OCPBUGS-48593[OCPBUGS-48593]) -* Previously, a bug caused requests to update the `deploymentconfigs/scale` sub resource to fail when a matching admission webhook was configured. With this release, you can update to continue without an error. (link:https://issues.redhat.com/browse/OCPBUGS-47766[*OCPBUGS-47766*]) +* Previously, a bug caused requests to update the `deploymentconfigs/scale` subresource to fail when a matching admission webhook was configured. With this release, such update requests complete without an error. (link:https://issues.redhat.com/browse/OCPBUGS-47766[OCPBUGS-47766]) -* Previously, the installation program did not validate the maximum transmission unit (MTU) for custom networks on Red{nbsp}Hat OpenStack platforms, which led to an installation failure when the MTU was too small. For IPv6, the minimum MTU is 1280 and 100 for OVN-Kubernetes. With this release, the installation program validates the MTU of Red{nbsp}Hat OpenStack custom networks. (link:https://issues.redhat.com/browse/OCPBUGS-41815[*OCPBUGS-41815*]) +* Previously, the installation program did not validate the maximum transmission unit (MTU) for custom networks on Red{nbsp}Hat OpenStack platforms, which led to an installation failure when the MTU was too small. For IPv6, the minimum MTU is 1280, plus 100 bytes of overhead for OVN-Kubernetes. With this release, the installation program validates the MTU of Red{nbsp}Hat OpenStack custom networks. (link:https://issues.redhat.com/browse/OCPBUGS-41815[OCPBUGS-41815]) [id="ocp-4-15-46-updating_{context}"] ==== Updating @@ -2893,11 +2947,11 @@ $ oc adm release info 4.15.45 --pullspecs [id="ocp-4-15-45-bug-fixes_{context}"] ==== Bug fixes -* Previously, crun failed to stop a container if you opened a terminal session and then disconnected from it. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-48751[*OCPBUGS-48751*]) +* Previously, crun failed to stop a container if you opened a terminal session and then disconnected from it. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-48751[OCPBUGS-48751]) -* Previously, every time a subcription was reconciled, the OLM catalog Operator requested a full view of the catalog metadata from the catalog source pod of the subscription. These requests caused performance issues for the catalog pods. With this release, the OLM catalog Operator now uses a local cache that is refreshed periodically and reused by all subscription reconciliations, so that the performance issue for the catalog pods no longer persists. (link:https://issues.redhat.com/browse/OCPBUGS-48697[*OCPBUGS-48697*]) +* Previously, every time a subscription was reconciled, the OLM catalog Operator requested a full view of the catalog metadata from the catalog source pod of the subscription. These requests caused performance issues for the catalog pods. With this release, the OLM catalog Operator now uses a local cache that is refreshed periodically and reused by all subscription reconciliations, so that the performance issue for the catalog pods no longer persists. (link:https://issues.redhat.com/browse/OCPBUGS-48697[OCPBUGS-48697]) -* Previously, when you used the *Form View* to edit `Deployment` or `DeploymentConfig` API objects on the {product-title} web console, duplicate `ImagePullSecrets` parameters existed in the YAML configuration for either object. With this release, a fix ensures that duplicate `ImagePullSecrets` parameters do not get automatically added for either object. 
(link:https://issues.redhat.com/browse/OCPBUGS-48592[*OCPBUGS-48592*]) +* Previously, when you used the *Form View* to edit `Deployment` or `DeploymentConfig` API objects on the {product-title} web console, duplicate `ImagePullSecrets` parameters existed in the YAML configuration for either object. With this release, a fix ensures that duplicate `ImagePullSecrets` parameters do not get automatically added for either object. (link:https://issues.redhat.com/browse/OCPBUGS-48592[OCPBUGS-48592]) [id="ocp-4-15-45-updating_{context}"] ==== Updating @@ -2923,9 +2977,9 @@ $ oc adm release info 4.15.44 --pullspecs [id="ocp-4-15-44-bug-fixes_{context}"] ==== Bug fixes -* Previously, East to West pod traffic over the Geneve overlay could stop working between one or multiple nodes, which prevented pods from reaching pods on other nodes. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-47799[*OCPBUGS-47799*]) +* Previously, East to West pod traffic over the Geneve overlay could stop working between one or multiple nodes, which prevented pods from reaching pods on other nodes. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-47799[OCPBUGS-47799]) -* Previously, when installing a cluster on {ibm-cloud-name} into an existing VPC, the installation program retrieved an unsupported VPC region. Attempting to install into a supported VPC region that follows the unsupported VPC region alphabetically caused the installation program to crash. With this release, the installation program is updated to ignore any VPC regions that are not fully available during resource lookups. (link:https://issues.redhat.com/browse/OCPBUGS-44259[*OCPBUGS-44259*]) +* Previously, when installing a cluster on {ibm-cloud-name} into an existing VPC, the installation program retrieved an unsupported VPC region. Attempting to install into a supported VPC region that follows the unsupported VPC region alphabetically caused the installation program to crash. With this release, the installation program is updated to ignore any VPC regions that are not fully available during resource lookups. (link:https://issues.redhat.com/browse/OCPBUGS-44259[OCPBUGS-44259]) [id="ocp-4-15-44-updating_{context}"] ==== Updating @@ -2951,15 +3005,15 @@ $ oc adm release info 4.15.43 --pullspecs [id="ocp-4-15-43-bug-fixes_{context}"] ==== Bug fixes -* Previously, a machine controller failed to save the {vmw-full} task ID of an instance template clone operation. This caused the machine to go into the `Provisioning` state and to power off. With this release, the {vmw-full} machine controller can detect and recover from this state. (link:https://issues.redhat.com/browse/OCPBUGS-48105[*OCPBUGS-48105*]) +* Previously, a machine controller failed to save the {vmw-full} task ID of an instance template clone operation. This caused the machine to go into the `Provisioning` state and to power off. With this release, the {vmw-full} machine controller can detect and recover from this state. (link:https://issues.redhat.com/browse/OCPBUGS-48105[OCPBUGS-48105]) -* Previously, installation of an {aws-short} cluster failed in certain environments on existing subnets when the `MachineSet` object's parameter `publicIp` was explicitly set to `false`. With this release, a fix ensures that a configuration value set for `publicIp` no longer causes issues when the installation program provisions machines for your {aws-short} cluster in certain environments. 
(link:https://issues.redhat.com/browse/OCPBUGS-47680[*OCPBUGS-47680*]) +* Previously, installation of an {aws-short} cluster failed in certain environments on existing subnets when the `MachineSet` object's parameter `publicIp` was explicitly set to `false`. With this release, a fix ensures that a configuration value set for `publicIp` no longer causes issues when the installation program provisions machines for your {aws-short} cluster in certain environments. (link:https://issues.redhat.com/browse/OCPBUGS-47680[OCPBUGS-47680]) -* Previously, the IDs used to determine the number of rows in a Dashboard table were not unique and some rows would be combined if their IDs were the same. With this release, the ID uses more information to prevent duplicate IDs and the table can display each expected row. (link:https://issues.redhat.com/browse/OCPBUGS-47646[*OCPBUGS-47646*]) +* Previously, the IDs used to determine the number of rows in a Dashboard table were not unique and some rows would be combined if their IDs were the same. With this release, the ID uses more information to prevent duplicate IDs and the table can display each expected row. (link:https://issues.redhat.com/browse/OCPBUGS-47646[OCPBUGS-47646]) -* Previously, the algorithm for calculating the priority of machine removal equated Machines over a specific age to Machines annotated as preferred for removal. With this release, the priority of unmarked Machines sorted by age is reduced to avoid conflict with those explicitly marked, and the algorithm has been updated to ensure age order is guaranteed for Machines up to ten years old. (link:https://issues.redhat.com/browse/OCPBUGS-46080[*OCPBUGS-46080*]) +* Previously, the algorithm for calculating the priority of machine removal equated Machines over a specific age to Machines annotated as preferred for removal. With this release, the priority of unmarked Machines sorted by age is reduced to avoid conflict with those explicitly marked, and the algorithm has been updated to ensure age order is guaranteed for Machines up to ten years old. (link:https://issues.redhat.com/browse/OCPBUGS-46080[OCPBUGS-46080]) -* Previously, in managed services, audit logs are sent to a local webhook service. Control plane deployments sent traffic through `konnectivity` and attempted to send the audit webhook traffic through the `konnectivity` proxies - `openshift-apiserver` and `oauth-openshift`. With this release, the audit-webhook is in the list of no_proxy hosts for the affected pods, and the audit log traffic that is sent to the audit-webhook is successfully sent. (link:https://issues.redhat.com/browse/OCPBUGS-46075[*OCPBUGS-46075*]) +* Previously, in managed services, audit logs are sent to a local webhook service. Control plane deployments sent traffic through `konnectivity` and attempted to send the audit webhook traffic through the `konnectivity` proxies - `openshift-apiserver` and `oauth-openshift`. With this release, the audit-webhook is in the list of no_proxy hosts for the affected pods, and the audit log traffic that is sent to the audit-webhook is successfully sent. (link:https://issues.redhat.com/browse/OCPBUGS-46075[OCPBUGS-46075]) [id="ocp-4-15-43-updating_{context}"] ==== Updating @@ -2985,15 +3039,15 @@ $ oc adm release info 4.15.42 --pullspecs [id="ocp-4-15-42-bug-fixes_{context}"] ==== Bug fixes -* Previously, when the webhook token authenticator was enabled and had the authorization type set to `None`, the {product-title} web console would consistently crash. 
With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-46482[*OCPBUGS-46482*]) +* Previously, when the webhook token authenticator was enabled and had the authorization type set to `None`, the {product-title} web console would consistently crash. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-46482[OCPBUGS-46482]) -* Previously, when you attempted to use the Operator Lifecycle Manager (OLM) to upgrade an Operator, the upgrade was blocked and an `error validating existing CRs against new CRD's schema` message was generated. An issue existed with OLM, whereby OLM erroneously identified incompatibility issues validating existing custom resources (CRs) against the new Operator version's custom resource definitions (CRDs). With this release, the validation is corrected so that Operator upgrades are no longer blocked. (link:https://issues.redhat.com/browse/OCPBUGS-46479[*OCPBUGS-46479*]) +* Previously, when you attempted to use the Operator Lifecycle Manager (OLM) to upgrade an Operator, the upgrade was blocked and an `error validating existing CRs against new CRD's schema` message was generated. An issue existed with OLM, whereby OLM erroneously identified incompatibility issues validating existing custom resources (CRs) against the new Operator version's custom resource definitions (CRDs). With this release, the validation is corrected so that Operator upgrades are no longer blocked. (link:https://issues.redhat.com/browse/OCPBUGS-46479[OCPBUGS-46479]) -* Previously, the images for custom OS layering were not present when the OS was on Red Hat Enterprise Linux CoreOS (RHCOS) 4.15, preventing some customers from upgrading from RHCOS 4.15 to RHCOS 4.16. This release adds Azure Container Registry (ACR) and Google Container Registry (GCR) image credential provider RPMs to RHCOS 4.15. (link:https://issues.redhat.com/browse/OCPBUGS-46063[*OCPBUGS-46063*]) +* Previously, the images for custom OS layering were not present when the OS was on Red Hat Enterprise Linux CoreOS (RHCOS) 4.15, preventing some customers from upgrading from RHCOS 4.15 to RHCOS 4.16. This release adds Azure Container Registry (ACR) and Google Container Registry (GCR) image credential provider RPMs to RHCOS 4.15. (link:https://issues.redhat.com/browse/OCPBUGS-46063[OCPBUGS-46063]) -* Previously, you could not configure your Amazon Web Services DHCP option set with a custom domain name containing a period (`.`) as the final character, as trailing periods were not allowed in a Kubernetes object name. With this release, trailing periods are allowed in a domain name in a DHCP option set. (link:https://issues.redhat.com/browse/OCPBUGS-46034[*OCPBUGS-46034*]) +* Previously, you could not configure your Amazon Web Services DHCP option set with a custom domain name containing a period (`.`) as the final character, as trailing periods were not allowed in a Kubernetes object name. With this release, trailing periods are allowed in a domain name in a DHCP option set. (link:https://issues.redhat.com/browse/OCPBUGS-46034[OCPBUGS-46034]) -* Previously, when `openshift-sdn` pods were deployed during the {product-title} upgrading process, the Open vSwitch (OVS) storage table was cleared. This issue occurred on {product-title} {product-version}.19 and later versions. Ports for existing pods had to be re-created and this disrupted numerous services. 
With this release, a fix ensures that the OVS tables do not get cleared and pods do not get disconnected during a cluster upgrade operation. (link:https://issues.redhat.com/browse/OCPBUGS-45955[*OCPBUGS-45955*]) +* Previously, when `openshift-sdn` pods were deployed during the {product-title} upgrading process, the Open vSwitch (OVS) storage table was cleared. This issue occurred on {product-title} {product-version}.19 and later versions. Ports for existing pods had to be re-created and this disrupted numerous services. With this release, a fix ensures that the OVS tables do not get cleared and pods do not get disconnected during a cluster upgrade operation. (link:https://issues.redhat.com/browse/OCPBUGS-45955[OCPBUGS-45955]) * Previously, you could not remove a `finally` pipeline task from the *edit Pipeline* form if you created a pipeline with only one `finally` task. With this release, you can remove the `finally` task from the *edit Pipeline* form and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-45950[*OCPBUGS-45950*])