// Module included in the following assemblies:
//
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-kvm_{context}"]
= GPUs and Red Hat KVM
You can use {product-title} on an NVIDIA-certified kernel-based virtual machine (KVM) server.
As with bare-metal deployments, a cluster requires either a single server or three or more servers. Clusters with exactly two servers are not supported.
However, unlike bare-metal deployments, you can mix different types of GPUs in the same server, because you can assign those GPUs to different VMs that act as Kubernetes nodes. The only limitation is that each Kubernetes node must expose a single, consistent set of GPU types.
You can choose one of the following methods to access the containerized GPUs:
* GPU passthrough for accessing and using GPU hardware within a virtual machine (VM)
* Virtual GPU (vGPU) time-slicing for workloads that do not need the full GPU
To enable the vGPU capability, you must install a special driver at the host level. This driver is delivered as an RPM package. The host driver is not required for GPU passthrough allocation.
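
With either method, the GPU is advertised to Kubernetes as the `nvidia.com/gpu` extended resource by the NVIDIA device plugin, so workloads request it the same way. The following is an illustrative sketch only; the pod name and container image are assumptions, not values taken from this module:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-sample
    # Illustrative image; use any CUDA-enabled image available to your cluster.
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubi9
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1 # one GPU, whether passthrough or vGPU-backed
----

Whether the node backs this resource with a passthrough GPU or a vGPU time slice is transparent to the workload.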