From 6f3b57601d5573fdd6558d735415b6bd8e9b7401 Mon Sep 17 00:00:00 2001
From: Carmi Wisemon
Date: Thu, 2 May 2024 10:23:11 +0300
Subject: [PATCH] OADP 3921 - Remove CPU limits

---
 .../oadp-backup-restore-for-large-usage.adoc | 26 +++++++------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/modules/oadp-backup-restore-for-large-usage.adoc b/modules/oadp-backup-restore-for-large-usage.adoc
index f070e7a339..93c83f7d32 100644
--- a/modules/oadp-backup-restore-for-large-usage.adoc
+++ b/modules/oadp-backup-restore-for-large-usage.adoc
@@ -9,23 +9,15 @@ Testing shows that increasing `NodeAgent` CPU can significantly improve backup a
 
 [IMPORTANT]
 ====
-It is not recommended to use Kopia without limits in production environments on nodes running production workloads due to Kopia’s aggressive consumption of resources. However, running Kopia with limits that are too low results in CPU limiting and slow backups and restore situations. Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace.
+You can tune your {product-title} environment based on your performance analysis and preference. Do not use CPU limits in the workloads when you use Kopia for file system backups.
+
+If you do not set CPU limits on the pods, the pods can use excess CPU when it is available. If you specify CPU limits, the pods might be throttled when they exceed their limits. Therefore, using CPU limits on the pods is considered an anti-pattern.
+
+Ensure that you specify accurate CPU requests so that the pods can take advantage of excess CPU. Resource allocation is guaranteed based on CPU requests rather than CPU limits.
+
+Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU throttling or memory saturation with these resource specifications.
 ====
 
-Testing detected no CPU limiting or memory saturation with these resource specifications.
+In some environments, you might need to adjust Ceph MDS pod resources to avoid pod restarts that occur when the default settings cause resource saturation.
 
-You can set these limits in Ceph MDS pods by following the procedure in https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf#changing_the_cpu_and_memory_resources_on_the_rook_ceph_pods[Changing the CPU and memory resources on the rook-ceph pods].
-
-You need to add the following lines to the storage cluster Custom Resource (CR) to set the limits:
-
-[source,yaml]
-----
- resources:
-   mds:
-     limits:
-       cpu: "3"
-       memory: 128Gi
-     requests:
-       cpu: "3"
-       memory: 8Gi
-----
\ No newline at end of file
+For more information about how to adjust the CPU and memory resources of Ceph MDS pods, see link:https://docs.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf#changing_the_cpu_and_memory_resources_on_the_rook_ceph_pods[Changing the CPU and memory resources on the rook-ceph pods].
\ No newline at end of file
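
As a reference for the request-only guidance above, the pattern can be expressed in the `DataProtectionApplication` (DPA) custom resource. The following is a minimal sketch, assuming OADP 1.3 or later, where the node agent is configured under `spec.configuration.nodeAgent`; the resource name and the request values (taken from the 20-core, 32 Gi test configuration described in the patch) are illustrative only, and the field names should be verified against the installed OADP version:

[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample               # hypothetical name, for illustration only
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia        # Kopia file system backups
      podConfig:
        resourceAllocations:
          requests:              # requests only; omitting CPU limits lets the pods use excess CPU
            cpu: "20"            # illustrative value, based on the tested 20-core configuration
            memory: 32Gi         # illustrative value, based on the tested 32 Gi configuration
----

Because no `limits` stanza is set, the node agent pods can consume idle CPU on the node while still receiving a guaranteed allocation based on their requests.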