// Module included in the following assemblies:
//
// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc

:_mod-docs-content-type: REFERENCE

[id="telco-core-application-workloads_{context}"]
= Application workloads

Application workloads running on telco core clusters can include a mix of high-performance cloud-native network functions (CNFs) and traditional best-effort or burstable pod workloads.

Guaranteed QoS scheduling is available to pods that require exclusive or dedicated use of CPUs due to performance or security requirements.
Typically, pods that run high-performance or latency-sensitive CNFs by using user plane networking (for example, DPDK) require exclusive use of dedicated whole CPUs, achieved through node tuning and guaranteed QoS scheduling.
When creating pod configurations that require exclusive CPUs, be aware of the potential implications of hyper-threaded systems.
Pods should request CPUs in multiples of 2 when the entire physical core (both hyper-threads) must be allocated to the pod.
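
The following is a minimal sketch of a pod that receives the guaranteed QoS class; the pod name, image, and annotation values are illustrative and assume a node that is already tuned for low latency.
Setting CPU and memory requests equal to the limits gives the pod guaranteed QoS, and requesting 2 CPUs allows a full physical core (both hyper-threads) to be allocated.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-cnf-example  # hypothetical name
  annotations:
    # CRI-O annotations that fully isolate the pinned CPUs; these take
    # effect only on nodes configured with a performance profile
    cpu-load-balancing.crio.io: "disable"
    cpu-quota.crio.io: "disable"
    irq-load-balancing.crio.io: "disable"
spec:
  containers:
  - name: dpdk-app
    image: registry.example.com/dpdk-app:latest  # illustrative image
    resources:
      requests:
        cpu: "2"        # multiple of 2 so a whole core is allocated
        memory: 1Gi
      limits:
        cpu: "2"        # requests equal limits: guaranteed QoS
        memory: 1Gi
----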

Pods running network functions that do not require high throughput or low latency networking should be scheduled as best-effort or burstable QoS pods and do not require dedicated or isolated CPU cores.
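
By contrast, the following minimal sketch shows a burstable QoS pod; the name, image, and resource values are illustrative.
Requests that are lower than the limits give the pod burstable QoS, and omitting the `resources` stanza entirely results in best-effort QoS.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: burstable-nf-example  # hypothetical name
spec:
  containers:
  - name: nf-app
    image: registry.example.com/nf-app:latest  # illustrative image
    resources:
      requests:
        cpu: 500m       # requests lower than limits: burstable QoS
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
----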

Engineering considerations::
Plan telco core workloads and cluster resources by using the following information:

* As of {product-title} 4.19, `cgroup v1` is no longer supported and has been removed.
All workloads must now be compatible with `cgroup v2`.
For more information, see link:https://www.redhat.com/en/blog/rhel-9-changes-context-red-hat-openshift-workloads[Red Hat Enterprise Linux 9 changes in the context of Red Hat OpenShift workloads].
* CNF applications should conform to the latest version of https://redhat-best-practices-for-k8s.github.io/guide/[Red Hat Best Practices for Kubernetes].
* Use a mix of best-effort and burstable QoS pods as required by your applications.
** Use guaranteed QoS pods with proper configuration of reserved or isolated CPUs in the `PerformanceProfile` CR that configures the node, as shown in the `PerformanceProfile` sketch after this list.
** Guaranteed QoS pods must include annotations for fully isolating CPUs, for example, the CRI-O annotations shown in the guaranteed QoS pod sketch above.
** Best-effort and burstable pods are not guaranteed exclusive CPU use.
Workloads can be preempted by other workloads, operating system daemons, or kernel tasks.
* Use exec probes sparingly and only when no other suitable option is available.
** Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example, `httpGet` or `tcpSocket`, as shown in the probe sketch after this list.
** When you need to use exec probes, limit their frequency and quantity. Keep the total number of exec probes below 10, and do not set the probe period to less than 10 seconds.
** You can use startup probes, because they do not use significant resources at steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes. Exec probes cause much higher CPU usage on management cores compared to other probe types because they require process forking.
* Use pre-stop hooks to allow the application workload to perform required actions before pod disruption, such as during an upgrade or node maintenance. The hooks enable a pod to save state to persistent storage, offload traffic from a service, or signal other pods, as shown in the pre-stop hook sketch after this list.
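
The following `PerformanceProfile` sketch illustrates the reserved and isolated CPU split that is referenced in the list above; the profile name, CPU ranges, and node selector are assumptions that must be adapted to your node topology.

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: telco-core-profile        # hypothetical name
spec:
  cpu:
    reserved: "0-3"               # housekeeping CPUs for OS daemons and management workloads
    isolated: "4-31"              # CPUs available for exclusive use by guaranteed QoS pods
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # illustrative selector
----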
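The following sketch applies the probe guidance from the list above by using `httpGet` and `tcpSocket` probes in place of exec probes; the pod name, port, path, and probe periods are illustrative.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: probe-example             # hypothetical name
spec:
  containers:
  - name: nf-app
    image: registry.example.com/nf-app:latest  # illustrative image
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:                    # no process fork, unlike an exec probe
        path: /healthz
        port: 8080
      periodSeconds: 10           # keep the probe period at 10 seconds or more
    livenessProbe:
      tcpSocket:
        port: 8080
      periodSeconds: 10
----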
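The following pre-stop hook sketch gives the workload a window to drain traffic and save state before the container receives `SIGTERM`; the endpoint and grace period are illustrative.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: prestop-example           # hypothetical name
spec:
  terminationGracePeriodSeconds: 120   # must allow enough time for the hook to finish
  containers:
  - name: nf-app
    image: registry.example.com/nf-app:latest  # illustrative image
    lifecycle:
      preStop:
        httpGet:                  # asks the application to drain connections and save state
          path: /shutdown
          port: 8080
----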