// Snippets included in the following assemblies and modules:
//
// * scalability_and_performance/cnf-numa-aware-scheduling.adoc
// * virt/managing_vms/advanced_vm_management/virt-NUMA-topology.adoc
:_mod-docs-content-type: SNIPPET
Non-uniform memory access (NUMA) architecture is a multiprocessor architecture model where CPUs do not access all memory in all locations at the same speed. Instead, CPUs gain faster access to memory that is in close proximity, or _local_, to them, and slower access to memory that is farther away.

A CPU with multiple memory controllers can use any available memory across CPU complexes, regardless of where the memory is located. However, this increased flexibility comes at the expense of performance.

_NUMA resource topology_ refers to the physical locations of CPUs, memory, and PCI devices relative to each other in a _NUMA zone_. In a NUMA architecture, a NUMA zone is a group of CPUs together with the memory that is local to them. Colocated resources are said to be in the same NUMA zone, and CPUs in a zone access that zone's local memory faster than CPUs in other zones do.

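For example, you can inspect the NUMA topology of a host by running the `numactl --hardware` command, if the `numactl` package is installed on the host. The following output is a hypothetical sketch for a two-zone machine; the number of zones, the CPU numbering, the memory sizes, and the distance values all depend on the hardware:

[source,terminal]
----
$ numactl --hardware
----

.Example output (hypothetical two-zone machine)
[source,terminal]
----
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 64280 MB
node 0 free: 58120 MB
node 1 cpus: 8 9 10 11 12 13 14 15
node 1 size: 64504 MB
node 1 free: 59040 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
----

In this sketch, each NUMA zone, or node, lists its own CPUs and local memory, and the distance matrix shows that accessing memory in the remote zone is more costly than accessing local memory.
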
A workload that uses memory outside its CPU's NUMA zone runs slower than a workload that is processed entirely within a single NUMA zone. For I/O-constrained workloads, a network interface in a distant NUMA zone slows down how quickly information can reach the application.

Applications can achieve better performance by containing data and processing within the same NUMA zone. For high-performance workloads and applications, such as telecommunications workloads, the cluster must process pod workloads in a single NUMA zone so that the workload can operate to specification.
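
For example, aligning a pod to a single NUMA zone with a `single-numa-node` Topology Manager policy generally requires the pod to have the Guaranteed quality of service (QoS) class, which means that its CPU and memory requests equal its limits and that the CPU request is a whole number. The following pod specification is a minimal, hypothetical sketch; the pod name, image, and resource sizes are placeholders, not values from this documentation:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: numa-aligned-app                       # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/telco-app:1.0  # placeholder image
    resources:
      requests:
        cpu: "4"        # whole CPUs, so they can be pinned exclusively
        memory: 8Gi
      limits:
        cpu: "4"        # requests equal limits, giving the Guaranteed QoS class
        memory: 8Gi
----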