// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-attach-nvidia-gpus-np-api_{context}"]
= Attaching NVIDIA GPU devices by using the NodePool resource

You can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools by configuring the `nodepool.spec.platform.kubevirt.hostDevices` field in the `NodePool` resource.

:FeatureName: Attaching NVIDIA GPU devices to node pools
include::snippets/technology-preview.adoc[]

.Procedure
* Attach one or more GPU devices to node pools:
** To attach a single GPU device, configure the `NodePool` resource by using the following example configuration:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <hosted_cluster_name> <1>
  namespace: <hosted_cluster_namespace> <2>
spec:
  arch: amd64
  clusterName: <hosted_cluster_name>
  management:
    autoRepair: false
    upgradeType: Replace
  nodeDrainTimeout: 0s
  nodeVolumeDetachTimeout: 0s
  platform:
    kubevirt:
      attachDefaultNetwork: true
      compute:
        cores: <cpu> <3>
        memory: <memory> <4>
      hostDevices: <5>
      - count: <count> <6>
        deviceName: <gpu_device_name> <7>
      networkInterfaceMultiqueue: Enable
      rootVolume:
        persistent:
          size: 32Gi
        type: Persistent
    type: KubeVirt
  replicas: <worker_node_count> <8>
----
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify the name of the hosted cluster namespace, for example, `clusters`.
<3> Specify a value for CPU, for example, `2`.
<4> Specify a value for memory, for example, `16Gi`.
<5> The `hostDevices` field defines a list of different types of GPU devices that you can attach to node pools.
<6> Specify the number of GPU devices that you want to attach to each virtual machine (VM) in the node pool. For example, if you attach 2 GPU devices to 3 node pool replicas, each of the 3 VMs in the node pool has 2 GPU devices attached. The default count is `1`.
<7> Specify the GPU device name, for example, `nvidia-a100`.
<8> Specify the worker count, for example, `3`.
** To attach multiple GPU devices, configure the `NodePool` resource by using the following example configuration:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  arch: amd64
  clusterName: <hosted_cluster_name>
  management:
    autoRepair: false
    upgradeType: Replace
  nodeDrainTimeout: 0s
  nodeVolumeDetachTimeout: 0s
  platform:
    kubevirt:
      attachDefaultNetwork: true
      compute:
        cores: <cpu>
        memory: <memory>
      hostDevices:
      - count: <count>
        deviceName: <gpu_device_name>
      - count: <count>
        deviceName: <gpu_device_name>
      - count: <count>
        deviceName: <gpu_device_name>
      - count: <count>
        deviceName: <gpu_device_name>
      networkInterfaceMultiqueue: Enable
      rootVolume:
        persistent:
          size: 32Gi
        type: Persistent
    type: KubeVirt
  replicas: <worker_node_count>
----
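
After you define the `NodePool` resource, apply it to the management cluster and watch the node pool scale out. The following commands are a minimal sketch: the manifest file name `nodepool-gpu.yaml` is a hypothetical placeholder, and the example assumes that your `NodePool` resource is in the `clusters` namespace.

[source,terminal]
----
$ oc apply -f nodepool-gpu.yaml <1>
----
<1> `nodepool-gpu.yaml` is a placeholder for the file that contains your `NodePool` configuration.

[source,terminal]
----
$ oc get nodepool <hosted_cluster_name> -n clusters
----

The `oc get nodepool` output reports the desired and current node counts, so you can confirm that the replicas you requested have been created.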