
Merge pull request #92517 from openshift-cherrypick-robot/cherry-pick-91970-to-enterprise-4.19

[enterprise-4.19] OADP-3921 remove cpu limits
Authored by Agil Antony on 2025-04-23 16:02:33 +05:30; committed by GitHub.


@@ -9,23 +9,15 @@ Testing shows that increasing `NodeAgent` CPU can significantly improve backup a
[IMPORTANT]
====
It is not recommended to use Kopia without limits in production environments on nodes running production workloads due to Kopia's aggressive consumption of resources. However, running Kopia with limits that are too low results in CPU limiting and slow backup and restore operations. Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace.
You can tune your {product-title} environment based on your performance analysis and preference. Do not use CPU limits in the workloads when you use Kopia for file system backups.
If you do not use CPU limits on the pods, the pods can use excess CPU when it is available. If you specify CPU limits, the pods might be throttled if they exceed their limits. Therefore, the use of CPU limits on the pods is considered an anti-pattern.
Ensure that you are accurately specifying CPU requests so that pods can take advantage of excess CPU. Resource allocation is guaranteed based on CPU requests rather than CPU limits.
Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications.
====
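For example, you can set CPU and memory requests, without limits, for the `NodeAgent` pods in the `DataProtectionApplication` (DPA) custom resource. The following snippet is a minimal sketch based on the tested values above; the `podConfig.resourceAllocations` stanza and the overall DPA layout can vary between OADP versions, so verify the fields against the CRD installed in your cluster before applying it.
[source,yaml]
----
# Sketch of a DataProtectionApplication CR that sets CPU and memory requests,
# but no limits, for the NodeAgent pods that run Kopia file system backups.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        resourceAllocations:
          requests:
            cpu: "20"      # value used in the testing described above
            memory: 32Gi   # value used in the testing described above
----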
In some environments, you might need to adjust Ceph MDS pod resources to avoid pod restarts, which occur when default settings cause resource saturation.
You can set these limits in Ceph MDS pods by following the procedure in https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf#changing_the_cpu_and_memory_resources_on_the_rook_ceph_pods[Changing the CPU and memory resources on the rook-ceph pods].
Add the following lines to the storage cluster custom resource (CR) to set the limits:
[source,yaml]
----
resources:
  mds:
    limits:
      cpu: "3"
      memory: 128Gi
    requests:
      cpu: "3"
      memory: 8Gi
----
For more information about how to set pod resource limits in Ceph MDS pods, see link:https://docs.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf#changing_the_cpu_and_memory_resources_on_the_rook_ceph_pods[Changing the CPU and memory resources on the rook-ceph pods].
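If you prefer to apply these settings from the command line, the following `oc patch` command is one possible approach. The CR name `ocs-storagecluster` and the `openshift-storage` namespace are typical defaults and are assumptions here; confirm the actual names in your cluster, for example with `oc get storagecluster -A`, before running the command.
[source,terminal]
----
$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
  --patch '{"spec":{"resources":{"mds":{"limits":{"cpu":"3","memory":"128Gi"},"requests":{"cpu":"3","memory":"8Gi"}}}}}'
----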