
Adds another batch of bug text to RNs

This commit is contained in:
Olivia Payne
2022-07-29 10:53:37 -04:00
parent e5ae530771
commit 9e90ea3319


@@ -1264,6 +1264,17 @@ See link:https://access.redhat.com/articles/6955985[Navigating Kubernetes API de
* Previously, if the `rotational` field was set for `RootDeviceHints`, provisioning the host could fail. With this update, the `rotational` field in `RootDeviceHints` is properly copied and checked. As a result, provisioning succeeds when the `rotational` field is used (see the example after this list). (link:https://bugzilla.redhat.com/show_bug.cgi?id=2053721[*BZ#2053721*])
* Previously, Ironic was unable to use virtual media to provision Nokia OE 20 servers because the BMC required the `TransferProtocolType` attribute to be explicitly set in the request, even though this attribute is optional, and required the use of a dedicated Redfish settings resource to override boot orders, whereas most BMCs use the `system` resource. Consequently, virtual media-based provisioning would fail on Nokia OE 20. The fix for this issue has two parts:
. When the virtual media attachment request fails with an error indicating that the `TransferProtocolType` attribute is missing, the request is retried with this attribute explicitly specified.
. The system is checked for the presence of the Redfish settings resource and, if it is present, that resource is used for the boot sequence override.
As a result, virtual media-based provisioning succeeds on Nokia OE 20 machines. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2059567[*BZ#2059567*])
* Previously, the Ironic API inspector image failed to clean disks that were part of passive multipath setups when using {product-title} bare-metal IPI deployments. This update fixes the failures when active or passive storage arrays are in use. As a result, it is now possible to use {product-title} bare-metal IPI with active or passive multipath setups. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2089309[*BZ#2089309*])
* Previously, Ironic failed to match `wwn` serial numbers to multipath devices. Consequently, `wwn` serial numbers for device mapper devices could not be used in the `rootDeviceHints` parameter in the `install-config.yaml` configuration file. With this update, Ironic now recognizes `wwn` serial numbers as unique identifiers for multipath devices. As a result, it is now possible to use `wwn` serial numbers for device mapper devices in the `install-config.yaml` file. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2098392[*BZ#2098392*])
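
For illustration only, the following is a minimal, hypothetical sketch of how the `rotational` and `wwn` root device hints described in these notes might appear for a bare-metal IPI host in `install-config.yaml`. The host name, MAC address, and `wwn` value are placeholders, other required host fields are omitted, and in practice you typically set only the hints that you need:

[source,yaml]
----
platform:
  baremetal:
    hosts:
    - name: worker-0                      # placeholder host name
      role: worker
      bootMACAddress: 52:54:00:00:00:01   # placeholder MAC address
      # bmc credentials and other required host fields omitted for brevity
      rootDeviceHints:
        rotational: false                 # match only non-rotational (SSD) disks
        wwn: "0x600508b1001c7e8c"         # placeholder WWN, for example of a device mapper (multipath) device
----
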
[discrete]
[id="ocp-4-11-builds-bug-fixes"]
==== Builds
@@ -1379,10 +1390,16 @@ sourceStrategy:
[id="ocp-4-11-kube-api-server-bug-fixes"]
==== Kubernetes API server
* Previously, long-running requests used for streaming were taken into account in the `KubeAPIErrorBudgetBurn` calculation. Consequently, the `KubeAPIErrorBudgetBurn` alert could be triggered, causing false positives. This update excludes long-running requests from the `KubeAPIErrorBudgetBurn` calculation. As a result, false positives are reduced for `KubeAPIErrorBudgetBurn`. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1982704[*BZ#1982704*])
[discrete]
[id="ocp-4-11-kube-scheduler-bug-fixes"]
==== Kubernetes Scheduler
* With {product-title} {product-version}, the hosted control plane namespace is excluded from eviction when the descheduler is installed on a cluster that has hosted control planes enabled. As a result, pods are no longer evicted from the hosted control plane namespace when the descheduler is installed. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2000653[*BZ#2000653*])
* Previously, resources incorrectly specified the API version in the owner reference of the `kubedescheduler` custom resource (CR). Consequently, the owner reference was invalid, and the affected resources were not deleted when the `kubedescheduler` CR was deleted. This update specifies the correct API version in all owner references. As a result, all resources with an owner reference to the `kubedescheduler` CR are deleted after the CR is deleted. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1957012[*BZ#1957012*])
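
As a hedged illustration of the owner-reference fix, the following sketch shows the shape of a correctly formed owner reference on a resource owned by the `kubedescheduler` CR. The resource name and UID are placeholders, and the group and version shown are assumptions based on the Kube Descheduler Operator's `operator.openshift.io/v1` API:

[source,yaml]
----
# Metadata of a hypothetical resource owned by the kubedescheduler CR.
# With a valid apiVersion in the owner reference, garbage collection can
# remove this resource after the CR itself is deleted.
metadata:
  name: descheduler-owned-resource            # placeholder resource name
  ownerReferences:
  - apiVersion: operator.openshift.io/v1      # must be the CR's actual group/version
    kind: KubeDescheduler
    name: cluster
    uid: 1f2d3c4b-0000-0000-0000-000000000000 # placeholder UID
    controller: true
----
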
[discrete]
[id="ocp-4-11-machine-config-operator-bug-fixes"]
==== Machine Config Operator
@@ -1401,51 +1418,34 @@ sourceStrategy:
[id="ocp-4-11-monitoring-bug-fixes"]
==== Monitoring
* Before this update, dashboards in the {product-title} web console that contained queries using a container label for `container_fs*` metrics returned no data points because the container labels had been dropped due to high cardinality. This update resolves the issue, and these dashboards now display data as expected. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2037513[*BZ#2037513*])
* Before this update, the `prometheus-operator` component allowed any time value for `ScrapeTimeout` in the config map. If you set `ScrapeTimeout` to a value greater than the `ScrapeInterval` value, Prometheus would stop loading the config map settings and fail to apply all subsequent configuration changes.
With this update, if the specified `ScrapeTimeout` value is greater than the `ScrapeInterval` value, the system logs the settings as invalid but continues loading the other config map settings. A sketch of a valid interval and timeout pairing appears after this list.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2037762[*BZ#2037762*])
* Before this update, in the *CPU Utilisation* panel on the *Kubernetes / Compute Resources / Cluster* dashboard in the {product-title} web console, the formula used to calculate the CPU utilization of a node could incorrectly display invalid negative values. With this update, the formula has been updated, and the *CPU Utilisation* panel now shows correct values. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2040635[*BZ#2040635*])
* Before this update, data from the `prometheus-adapter` component could not be accessed during the automatic update that occurs every 15 days because the update process removed old pods before the new pods became available. With this release, the automatic update process now only removes old pods after the new pods are able to serve requests so that data from the old pods continues to be available during the update process.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2048333[*BZ#2048333*])
* Before this update, the following metrics were incorrectly missing from `kube-state-metrics`: `kube_pod_container_status_terminated_reason`, `kube_pod_init_container_status_terminated_reason`, and `kube_pod_status_scheduled_time`. With this release, `kube-state-metrics` correctly exposes these metrics so that they are available. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2050120[*BZ#2050120*])
* Before this update, if invalid write relabel config map settings existed for the `prometheus-operator` component, the configuration would still load all subsequent settings.
With this release, the component checks for valid write relabel settings when loading the configuration. If invalid settings exist, an error is logged, and the configuration loading process stops. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2051470[*BZ#2051470*])
* Before this update, the `init-config-reloader` container for the Prometheus pods requested `100m` of CPU and `50Mi` of memory, even though in practice the container needed fewer resources.
With this update, the container requests `1m` of CPU and `10Mi` of memory. These settings are consistent with the settings of the `config-reloader` container. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2057025[*BZ#2057025*])
* Before this update, when an administrator enabled user workload monitoring, the `user-workload-monitoring-config` config map was not automatically created. Because non-administrator users with the `user-workload-monitoring-config-edit` role did not have permission to create the config map manually, they required an administrator to create it. With this update, the `user-workload-monitoring-config` config map is now automatically created when an administrator enables user workload monitoring and is available to edit by users with the appropriate role. A minimal sketch of enabling user workload monitoring appears after this list. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2065577[*BZ#2065577*])
* Before this update, after you deleted a deployment, the Cluster Monitoring Operator (CMO) did not wait for the deletion to be completed, which caused reconciliation errors. With this update, the CMO now waits until deployments are deleted before recreating them, which resolves this issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2069068[*BZ#2069068*])
* Before this update, for user workload monitoring, if you configured external labels for metrics in Prometheus, the CMO did not correctly propagate these labels to Thanos Ruler. Therefore, for user-defined projects, if you queried external metrics not provided by the user workload monitoring instance of Prometheus, you would sometimes not see external labels for these metrics even though you had configured Prometheus to add them. With this update, the CMO now properly propagates the external labels that you configured in Prometheus to Thanos Ruler, and you can see the labels when you query external metrics. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2073112[*BZ#2073112*])
* Before this update, the `tunbr` interface incorrectly triggered the `NodeNetworkInterfaceFlapping` alert. With this update, the `tunbr` interface is now included in the list of interfaces that the alert ignores and no longer causes the alert to trigger incorrectly. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2090838[*BZ#2090838*])
* Previously, the Prometheus Operator allowed invalid re-label configurations. With this update, the Prometheus Operator validates re-label configurations. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2051407[*BZ#2051407*])
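
The following is a minimal, hypothetical `ServiceMonitor` sketch that illustrates the scrape timeout constraint described in the `ScrapeTimeout` note above: the `scrapeTimeout` value must not exceed the `interval` value, otherwise the setting is logged as invalid. The name, namespace, selector, and port are placeholders:

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app              # placeholder name
  namespace: example-namespace   # placeholder namespace
spec:
  selector:
    matchLabels:
      app: example-app           # placeholder label selector
  endpoints:
  - port: web                    # placeholder port name
    interval: 30s
    scrapeTimeout: 10s           # must not exceed the interval, or the setting is logged as invalid
----
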
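Related to the `user-workload-monitoring-config` note above, the following minimal sketch shows how an administrator enables user workload monitoring; after this change is applied, the `user-workload-monitoring-config` config map is created automatically in the `openshift-user-workload-monitoring` namespace and can be edited by users with the `user-workload-monitoring-config-edit` role:

[source,yaml]
----
# Minimal sketch: an administrator enables user workload monitoring by
# setting enableUserWorkload to true in the cluster-monitoring-config
# config map in the openshift-monitoring namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
----
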
[discrete]
[id="ocp-4-11-networking-bug-fixes"]
@@ -1550,19 +1550,21 @@ In this update, `systemd` service only sets the default RPS mask for virtual int
* Before this update, the package server was not aware of pod topology when defining its leader election duration, renewal deadline, and retry periods. As a result, the package server strained topologies with limited resources, such as single-node environments. This update introduces a `leaderElection` package that sets reasonable lease duration, renewal deadlines, and retry periods. This fix reduces strain on clusters with limited resources. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2048563[*BZ#2048563*])
* Previously, a bad catalog source in the `openshift-marketplace` namespace blocked all subscriptions. With this update, if there is a bad catalog source in the `openshift-marketplace` namespace, users can subscribe to an Operator from a quality catalog source in their own namespace with the original annotation (see the `Subscription` sketch after this list). As a result, if there is a bad catalog source in the local namespace, the user cannot subscribe to any Operator in that namespace. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2076323[*BZ#2076323*])
* Previously, info-level logs were generated during `operator-marketplace` project polling, which caused log spam. This update uses a command-line flag to reduce these log lines to the debug level and gives the user more control over log levels. As a result, log spam is reduced. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2057558[*BZ#2057558*])
* Previously, each component managed by the Cluster Version Operator (CVO) consisted of YAML files defined in the `/manifest` directory in the root of a project's repository. When removing a YAML file from the `/manifest` directory, you needed to add the `release.openshift.io/delete: "true"` annotation, otherwise the CVO would not delete the resources from the cluster. This update reintroduces any resources that were removed from the `/manifest` directory and adds the `release.openshift.io/delete: "true"` annotation so that the CVO cleans up the resources (see the annotated manifest sketch after this list). As a result, resources that are no longer required for the OLM component are removed from the cluster. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1975543[*BZ#1975543*])
* Previously, the `CheckRegistryServer` function used by gRPC catalog sources did not confirm the existence of the service account associated with the catalog source. This caused the existence of an unhealthy catalog source with no service account. With this update, the gRPC `CheckRegistryServer` function checks if the service account exists and recreates the service if it is not found. As a result, the OLM recreates service accounts owned by gRPC catalog sources if they do not exist. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2074612[*BZ#2074612*])
* Previously, in an error message that occurred when users ran `opm index prune` against a file-based catalog image, imprecise language made it unclear that this command does not support that catalog format. This update clarifies the error message so users understand that the command `opm index prune` only supports SQLite-based images. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2039135[*BZ#2039135*])
* Previously, broken thread safety around the Operator API caused Operator resources to not be deleted properly. With this update, Operator resources are correctly deleted. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2015023[*BZ#2015023*])
* Previously, pod failures were artificially extending the validity period of certificates causing them to incorrectly rotate. With this update, the certificate validity period is correctly determined and the certificates are correctly rotated. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2020484[*BZ#2020484*])
* In {product-title} {product-version}, the default cluster-wide pod security admission policy is set to `baseline` for all namespaces and the default warning level is set to `restricted`. Before this update, Operator Lifecycle Manager displayed pod security admission warnings in the `operator-marketplace` namespace. This fix reduces the warning level in that namespace to `baseline`, which resolves the issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2088541[*BZ#2088541*])
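
As a hedged illustration of the catalog source note above (BZ#2076323), the following sketch shows a `Subscription` that points at a catalog source in its own namespace rather than at `openshift-marketplace`. The namespace, catalog source, channel, and package names are placeholders:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator          # placeholder Subscription name
  namespace: example-namespace    # placeholder local namespace
spec:
  name: example-operator          # placeholder package name
  channel: stable                 # placeholder channel
  source: example-catalog         # catalog source in the local namespace, not openshift-marketplace
  sourceNamespace: example-namespace
----
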
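For the CVO manifest cleanup described above (BZ#1975543), the following is a hypothetical sketch of a retired manifest that is kept in the manifest directory with the delete annotation so that the CVO removes the resource from the cluster. The resource kind and name are placeholders:

[source,yaml]
----
# Hypothetical retired manifest: keeping the file in place with the delete
# annotation tells the Cluster Version Operator to remove the resource
# from the cluster instead of leaving it behind.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-retired-role               # placeholder resource name
  annotations:
    release.openshift.io/delete: "true"
----
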
[discrete]
[id="ocp-4-11-openshift-operator-sdk-bug-fixes"]
@@ -1576,9 +1578,9 @@ In this update, `systemd` service only sets the default RPS mask for virtual int
[id="ocp-4-11-openshift-api-server-bug-fixes"]
==== OpenShift API server
* Because multiple Authentication Operator controllers were synchronizing at the same time, the Authentication Operator was taking too long to react to changes to its configuration. This feature adds jitter to the regular synchronization periods so that the Authentication Operator controllers do not compete for resources. As a result, it now takes less time for the Authentication Operator to react to configuration changes. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1958198[*BZ#1958198*])
* With {product-title} 4.11, authentication attempts from external identity providers are now logged to the audit logs. As a result, you can view successful, failed, and errored login attempts from external identity providers in the audit logs. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2086465[*BZ#2086465*])
[discrete]
[id="ocp-4-11-openshift-update-service-bug-fixes"]
@@ -1588,11 +1590,11 @@ In this update, `systemd` service only sets the default RPS mask for virtual int
[id="ocp-4-11-rhcos-bug-fixes"]
==== {op-system-first}
* Before this update, if a machine was booted through PXE and the `BOOTIF` argument was on the kernel command line, the machine would boot with DHCP enabled on only a single interface. With this update, the machine boots with DHCP enabled on all interfaces even if the `BOOTIF` argument is provided. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2032717[*BZ#2032717*])
* Previously, nodes that were provisioned from VMware OVA images did not delete the Ignition config after initial provisioning. Consequently, this created security issues when secrets were stored within the Ignition config. With this update, the Ignition config is now deleted from the VMware hypervisor after initial provisioning on new nodes and when upgrading from a previous {product-title} release on existing nodes. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2082274[*BZ#2082274*])
* Previously, any arguments provided to the `toolbox` command were ignored when the command was first invoked. This fix updates the toolbox script to initiate the `podman container create` command followed by the `podman start` and `podman exec` commands. It also modifies the script to handle multiple arguments and whitespaces as an array. As a result, the arguments passed to the `toolbox` command are executed every time as expected. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2039589[*BZ#2039589*])
[discrete]
[id="ocp-4-11-performance-bug-fixes"]
@@ -1610,9 +1612,9 @@ In this update, `systemd` service only sets the default RPS mask for virtual int
* Previously, {product-title} 4.8 added an API for customizing platform routes. This API includes status and spec fields in the cluster ingress configuration for reporting the current host names of customizable routes and the user's desired host names for these routes, respectively. The API also defined constraints for these values, which were restrictive and excluded some valid potential host names. Consequently, the restrictive validation prevented users from specifying custom host names that should have been permitted and from installing clusters with domains that should have been permitted. With this update, the constraints on host names have been relaxed to allow all host names that are valid for routes, and {product-title} allows users to use cluster domains with TLDs that contain decimal digits. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2039256[*BZ#2039256*])
* Previously, the Ingress Operator did not check whether the `spec.domain` parameter configured for an Ingress Controller matched the cluster `spec.baseDomain` parameter. This caused the Operator to create DNS records and set `DNSManaged` conditions to `false`. With this fix, the Ingress Operator now checks whether the `spec.domain` parameter matches the cluster `spec.baseDomain` parameter. As a result, for custom Ingress Controllers, the Ingress Operator does not create DNS records and sets `DNSManaged` conditions to `false`. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2041616[*BZ#2041616*])
* Previously, in {product-title} 4.10, the HAProxy must-gather function could take up to an hour to run. This could happen when routers in the terminating state delayed the `oc cp` command, and the delay lasted until the pod was terminated. With the new release, a 10-minute limit on the `oc cp` command prevents longer delays. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2104701[*BZ#2104701*])
* Previously, the Ingress Operator did not clear the route status when an Ingress Controller was deleted, so the route status still referenced the Ingress Controller after its deletion. This fix clears the route status when an Ingress Controller is deleted, so the route status no longer references a deleted Ingress Controller. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1944851[*BZ#1944851*])