// Module included in the following assemblies:
//
// * virt/vm_networking/virt-using-dpdk-with-sriov.adoc

:_mod-docs-content-type: PROCEDURE
[id="virt-configuring-cluster-dpdk_{context}"]
= Configuring a cluster for DPDK workloads

[role="_abstract"]
You can configure an {product-title} cluster to run Data Plane Development Kit (DPDK) workloads for improved network performance.

.Prerequisites

* You have access to the cluster as a user with `cluster-admin` permissions.
* You have installed the OpenShift CLI (`oc`).
* You have installed the SR-IOV Network Operator.
* You have installed the Node Tuning Operator.

.Procedure

// Cannot label nodes in ROSA/OSD, but can edit machine pools

. Map your compute node topology to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS).
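+
One way to inspect the topology is to open a debug shell on a node and filter the `lscpu` output; this is an optional sketch, and the NUMA node count and CPU numbering vary by hardware:
+
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host lscpu | grep -i numa
----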

. If your {product-title} cluster uses separate control plane and compute nodes for high availability:

.. Label a subset of the compute nodes with a custom role; for example, `worker-dpdk`:
+
ifndef::openshift-rosa[]
[source,terminal]
----
$ oc label node <node_name> node-role.kubernetes.io/worker-dpdk=""
----
endif::openshift-rosa[]
+
ifdef::openshift-rosa[]
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> <machinepool_ID> node-role.kubernetes.io/worker-dpdk=""
----
endif::openshift-rosa[]

.. Create a new `MachineConfigPool` manifest that contains the `worker-dpdk` label in the `spec.machineConfigSelector` object.
+
Example `MachineConfigPool` manifest:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-dpdk
  labels:
    machineconfiguration.openshift.io/role: worker-dpdk
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
          - worker
          - worker-dpdk
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-dpdk: ""
----

. Create a `PerformanceProfile` manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for housekeeping.
+
Example `PerformanceProfile` manifest:
+
[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: profile-1
spec:
  cpu:
    isolated: 4-39,44-79
    reserved: 0-3,40-43
  globallyDisableIrqLoadBalancing: true
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - count: 8
      node: 0
      size: 1G
  net:
    userLevelNetworking: true
  nodeSelector:
    node-role.kubernetes.io/worker-dpdk: ""
  numa:
    topologyPolicy: single-numa-node
----
+
[NOTE]
====
The compute nodes automatically restart after you apply the `MachineConfigPool` and `PerformanceProfile` manifests.
====
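+
The restart can take several minutes per node. As an optional check, you can watch the machine config pool until it reports that all machines are updated:
+
[source,terminal]
----
$ oc get mcp worker-dpdk --watch
----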

. Retrieve the name of the generated `RuntimeClass` resource from the `status.runtimeClass` field of the `PerformanceProfile` object:
+
[source,terminal]
----
$ oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}'
----

. Set the previously obtained `RuntimeClass` name as the default container runtime class for the `virt-launcher` pods by editing the `HyperConverged` custom resource (CR):
+
[source,terminal,subs="attributes+"]
----
$ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \
  --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass-name>"}]'
----
+
[NOTE]
====
Editing the `HyperConverged` CR changes a global setting that affects all VMs that are created after the change is applied.
====
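+
To confirm that the patch was applied, you can optionally read the field back; a hedged sketch, assuming the same namespace attribute as the patch command:
+
[source,terminal,subs="attributes+"]
----
$ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o=jsonpath='{.spec.defaultRuntimeClass}{"\n"}'
----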

. If your DPDK-enabled compute nodes use simultaneous multithreading (SMT), enable the `AlignCPUs` feature gate by editing the `HyperConverged` CR:
+
[source,terminal,subs="attributes+"]
----
$ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \
  --type='json' -p='[{"op": "replace", "path": "/spec/featureGates/alignCPUs", "value": true}]'
----
+
[NOTE]
====
Enabling `AlignCPUs` allows {VirtProductName} to request up to two additional dedicated CPUs to bring the total CPU count to an even parity when using emulator thread isolation.
====

. Create an `SriovNetworkNodePolicy` object with the `spec.deviceType` field set to `vfio-pci`.
+
Example `SriovNetworkNodePolicy` manifest:
+
[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-1
  namespace: openshift-sriov-network-operator
spec:
  resourceName: intel_nics_dpdk
  deviceType: vfio-pci
  mtu: 9000
  numVfs: 4
  priority: 99
  nicSelector:
    vendor: "8086"
    deviceID: "1572"
    pfNames:
    - eno3
    rootDevices:
    - "0000:19:00.2"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
----
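+
After the SR-IOV Network Operator applies the policy, you can optionally verify that the nodes finished configuring the virtual functions by checking the sync status of the node states; a sketch, assuming the default Operator namespace:
+
[source,terminal]
----
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator -o=jsonpath='{.items[*].status.syncStatus}{"\n"}'
----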