:_mod-docs-content-type: REFERENCE
[id="rn-ocp-release-notes-known-issues_{context}"]
= Known issues

This section includes several known issues for {product-title} {product-version}.

* Due to a known issue, the {product-title} 4.21 versions of the Cluster Resource Override Operator and the DPU Operator are not included in the initial 4.21 release. They will be available in an upcoming 4.21 maintenance release. (link:https://issues.redhat.com/browse/OCPBUGS-74224[OCPBUGS-74224])
* If you mirrored the {product-title} release images to the registry of a disconnected environment by using the `oc adm release mirror` command, the release image Sigstore signature is not mirrored with the image.
+
This has become an issue in {product-title} {product-version}, because the `openshift` cluster image policy is now deployed to the cluster by default. This policy causes CRI-O to automatically verify the Sigstore signature when pulling images into a cluster. (link:https://issues.redhat.com/browse/OCPBUGS-70297[OCPBUGS-70297])
+
Because the Sigstore signature is absent, after you update to {product-title} {product-version} in a disconnected environment, future Cluster Version Operator pods might fail to run. You can avoid this problem by installing the oc-mirror plugin v2 and using the `oc mirror` command to mirror the {product-title} release image. The oc-mirror plugin v2 mirrors both the release image and its Sigstore signature to the mirror registry in your disconnected environment.
+
If you cannot use the oc-mirror plugin v2, you can use the `oc image mirror` command to mirror the Sigstore signature into your mirror registry by using a command similar to the following:
+
--
[source,terminal]
----
$ oc image mirror "quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig" "${LOCAL_REGISTRY}/${LOCAL_RELEASE_IMAGES_REPOSITORY}:${RELEASE_DIGEST}.sig"
----

where:

`RELEASE_DIGEST`:: Specifies the release image digest with the `:` character replaced by a `-` character. For example, `sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee` becomes `sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee`.
--
+
For more information about the oc-mirror plugin v2, see _Mirroring images for a disconnected installation by using the oc-mirror plugin v2_.
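+
For example, a minimal sketch of this fallback might look like the following. The digest is the example value shown above, and the registry host and repository path are placeholder values that you must replace with the details of your own mirror registry:
+
[source,terminal]
----
$ RELEASE_DIGEST="$(echo "sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee" | tr ':' '-')"  # replace ':' with '-' in the release image digest
$ LOCAL_REGISTRY="mirror.registry.example.com:5000"             # placeholder: your mirror registry host
$ LOCAL_RELEASE_IMAGES_REPOSITORY="ocp/release-images"          # placeholder: your release images repository
$ oc image mirror \
    "quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig" \
    "${LOCAL_REGISTRY}/${LOCAL_RELEASE_IMAGES_REPOSITORY}:${RELEASE_DIGEST}.sig"
----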
* Starting with {product-title} 4.21, there is a decrease in the default maximum open files soft limit for containers. As a consequence, end users might experience application failures. To work around this problem, increase the ulimit configuration of the container runtime (CRI-O) by using a method of your choice, such as the `ulimit` command. (link:https://issues.redhat.com/browse/OCPBUGS-62095[OCPBUGS-62095])
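+
As an illustration only, and not a documented procedure, you could check the soft limit that an affected container currently sees, and raise the CRI-O default through the `default_ulimits` option in a `crio.conf.d` drop-in file, typically delivered through a `MachineConfig` object. The pod name, file name, and limit values are assumptions:
+
[source,terminal]
----
$ oc exec <pod_name> -- sh -c 'ulimit -Sn'     # current soft limit for open files inside the container

$ cat /etc/crio/crio.conf.d/99-nofile.conf     # example drop-in that raises the CRI-O default
[crio.runtime]
default_ulimits = ["nofile=65535:65535"]
----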
* Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator. As a consequence, the TuneD profile might become degraded after the node restarts, leading to performance degradation. As a workaround, restart the TuneD pod to restore the profile state. (link:https://issues.redhat.com/browse/OCPBUGS-41934[OCPBUGS-41934])
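+
For example, assuming the default Node Tuning Operator namespace, you might locate the TuneD pod that runs on the affected node and delete it so that the DaemonSet re-creates it; the node and pod names are placeholders:
+
[source,terminal]
----
$ oc get pods -n openshift-cluster-node-tuning-operator -o wide | grep <node_name>   # find the TuneD pod on the affected node
$ oc delete pod <tuned_pod_name> -n openshift-cluster-node-tuning-operator           # the pod is re-created and the profile is reapplied
----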
* Currently, pods that use a `guaranteed` QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur on nodes that are configured with a static CPU Manager policy and the `full-pcpus-only` option, when most or all CPUs on the node are already allocated to such workloads. As a workaround, manually delete and re-create the affected pods. (link:https://issues.redhat.com/browse/OCPBUGS-43280[OCPBUGS-43280])
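+
For example, deleting a pod that a controller such as a Deployment manages is enough for it to be re-created, while a pod that was created directly must be reapplied from its manifest; the names are placeholders:
+
[source,terminal]
----
$ oc delete pod <pod_name> -n <namespace>    # a controller-managed pod is re-created automatically
$ oc apply -f <pod_manifest>.yaml            # re-create a pod that is not managed by a controller
----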
* On systems that use specific AMD EPYC processors, some low-level system interrupts, for example `AMD-Vi`, might include CPUs in their CPU mask that overlap with CPU-pinned workloads. This behavior is due to the hardware design. These specific error-reporting interrupts are generally inactive, and there is currently no known performance impact. (link:https://issues.redhat.com/browse/OCPBUGS-57787[OCPBUGS-57787])
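+
If you want to confirm which CPUs such an interrupt can target on a node, one way is to inspect the standard Linux interrupt interfaces from a debug pod, for example as follows; the node name and interrupt number are placeholders:
+
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host sh -c 'grep AMD-Vi /proc/interrupts'          # list AMD-Vi interrupt numbers
$ oc debug node/<node_name> -- chroot /host cat /proc/irq/<irq_number>/smp_affinity_list  # CPUs included in that interrupt CPU mask
----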
* While Day 2 firmware updates and BIOS attribute reconfiguration for bare-metal hosts are generally available with this release, the Bare Metal Operator (BMO) does not provide a native mechanism to cancel a firmware update request after it is initiated. If a firmware update or setting change for `HostFirmwareComponents` or `HostFirmwareSettings` resources fails, returns an error, or becomes stuck indefinitely, you can try to recover by completing the following steps:
+
--
. Remove the changes to the `HostFirmwareComponents` and `HostFirmwareSettings` resources.
. Set the node to `online: false` to trigger a reboot.
. If the issue persists, delete the Ironic pod.
--
+
A native abort capability for servicing operations might be added in a future release.
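+
As an illustration of the second and third steps only, and not a documented recovery procedure, the commands might look like the following, assuming the default `openshift-machine-api` namespace and placeholder resource names:
+
[source,terminal]
----
$ oc patch bmh <host_name> -n openshift-machine-api --type merge -p '{"spec":{"online":false}}'   # set the host offline to trigger a reboot
$ oc get pods -n openshift-machine-api | grep metal3                                              # locate the pod that runs the Ironic containers
$ oc delete pod <ironic_pod_name> -n openshift-machine-api                                        # delete the Ironic pod so that it is re-created
----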
* There is a known issue with the ability to configure the maximum throughput of gp3 storage volumes in an {aws-short} cluster.
This feature does not work with control plane machine sets.
There is no workaround for this issue, but a fix is planned for a later release. (link:https://issues.redhat.com/browse/OCPBUGS-74478[OCPBUGS-74478])
* When installing a private cluster on {gcp-first} behind a proxy with user-provisioned DNS, you might encounter installation errors indicating that the bootstrap process failed to complete or that cluster initialization failed.
In both cases, the installation can still succeed and result in a healthy cluster.
As a workaround, install the private cluster from a bastion host that is within the same virtual private cloud (VPC) as the cluster to be deployed. (link:https://issues.redhat.com/browse/OCPBUGS-54901[OCPBUGS-54901])
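+
If the installer reports one of these errors, you can confirm whether the cluster nevertheless completed successfully by running standard checks against it. In the following example, `<installation_directory>` is a placeholder for your installation assets directory:
+
[source,terminal]
----
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc get clusterversion
$ oc get clusteroperators
----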