// Module included in the following assemblies:
//
// * scalability_and_performance/telco_ran_du_ref_design_specs/telco-ran-du-rds.adoc
:_mod-docs-content-type: REFERENCE
[id="telco-ran-node-tuning-operator_{context}"]
= CPU partitioning and performance tuning
New in this release::
* No reference design updates in this release
Description::
The RAN DU use case requires the cluster to be tuned for low-latency performance.
The RAN DU use model applies this tuning through `PerformanceProfile` CRs, which the Node Tuning Operator reconciles.
For more information about node tuning with the `PerformanceProfile` CR, see "Tuning nodes for low latency with the performance profile".
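+
The following minimal sketch shows the shape of the CR that the Node Tuning Operator reconciles. The name, CPU ranges, and node selector are illustrative assumptions only, not reference requirements:
+
[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile # example name only
spec:
  cpu:
    reserved: "0-3"  # placeholder values; see the limits and requirements that follow
    isolated: "4-63"
  nodeSelector:
    node-role.kubernetes.io/master: "" # example selector for a single-node cluster
----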
Limits and requirements::
The Node Tuning Operator uses the `PerformanceProfile` CR to configure the cluster.
You must configure the following settings in the telco RAN DU profile `PerformanceProfile` CR:
+
--
* Set a reserved `cpuset` of 4 or more CPUs, equating to 4 hyper-threads (2 cores), for either of the following CPUs:
** Intel 3rd Generation Xeon (Ice Lake) 2.20 GHz or newer CPUs, with host firmware tuned for maximum performance
** AMD EPYC Zen 4 CPUs (Genoa, Bergamo)
+
[NOTE]
====
AMD EPYC Zen 4 CPUs (Genoa, Bergamo) are fully supported.
Power consumption evaluations are ongoing.
Evaluate features such as per-pod power management to determine any potential impact on performance.
====
* Set the reserved `cpuset` to include both hyper-thread siblings for each included core.
Unreserved cores are available as allocatable CPU for scheduling workloads.
* Ensure that hyper-thread siblings are not split across reserved and isolated cores.
* Ensure that the reserved and isolated CPU sets together include all the threads of all cores in the CPU.
* Include Core 0 for each NUMA node in the reserved CPU set.
* Set the huge page size to 1G.
* Pin to reserved cores only those {product-title} pods that are configured by default as part of the management workload partition.
* When recommended by the hardware vendor, set the maximum CPU frequency for reserved and isolated CPUs by using the `hardwareTuning` section, as shown in the sketch after this list.
--
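+
A hedged sketch of the relevant `spec` fields follows. The CPU ranges assume a single-NUMA, 32-core (64-thread) host where thread siblings are numbered N and N+32; the hugepage count and frequency values are placeholders for workload requirements and vendor guidance:
+
[source,yaml]
----
spec:
  cpu:
    reserved: "0-3,32-35"  # includes core 0; both hyper-thread siblings of every reserved core
    isolated: "4-31,36-63" # all remaining threads; no sibling pair split across the two sets
  hugepages:
    defaultHugepagesSize: 1G # 1G huge page size
    pages:
    - size: 1G
      count: 32 # workload dependent
  hardwareTuning:
    reservedCpuFreq: 2500000 # illustrative values in kHz; use vendor-recommended maximums
    isolatedCpuFreq: 2200000
----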
Engineering considerations::
* Meeting the full performance metrics requires use of the RT kernel.
If required, you can use the non-RT kernel with a corresponding impact on performance.
* The number of hugepages you configure depends on application workload requirements.
Variation in this parameter is expected and allowed.
* Variation is expected in the configuration of reserved and isolated CPU sets based on selected hardware and additional components in use on the system.
The variation must still meet the specified limits.
* Hardware without IRQ affinity support affects isolated CPUs.
To ensure that pods with guaranteed whole-CPU QoS have full use of the allocated CPUs, all hardware in the server must support IRQ affinity.
* If you enable workload partitioning by setting `cpuPartitioningMode` to `AllNodes` during deployment, you must use the `PerformanceProfile` CR to allocate enough CPUs to support the operating system, interrupts, and {product-title} pods.
* The reference performance profile includes additional kernel argument settings for `vfio_pci`, as shown in the sketch after this list.
These arguments are included to support devices such as FEC accelerators. You can omit them if they are not required for your workload.
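+
For illustration, a minimal sketch of how these considerations surface in the `PerformanceProfile` CR. The `vfio_pci` arguments shown are assumed examples of this kind of setting; confirm the exact arguments against the reference profile for your release:
+
[source,yaml]
----
spec:
  realTimeKernel:
    enabled: true # the RT kernel is required to meet the full performance metrics
  additionalKernelArgs:
  - "vfio_pci.enable_sriov=1"    # example vfio_pci arguments for devices such as
  - "vfio_pci.disable_idle_d3=1" # FEC accelerators; omit if not required
----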