// Module included in the following assemblies:
//
// * hardware_accelerators/about-hardware-accelerators.adoc

:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-bare-metal_{context}"]
= GPUs and bare metal

You can deploy {product-title} on an NVIDIA-certified bare metal server, but with the following limitations:

* Control plane nodes can be CPU nodes.
* Worker nodes that run AI/ML workloads must be GPU nodes.
+
In addition, a worker node can host one or more GPUs, but they must all be of the same type. For example, a node can have two NVIDIA A100 GPUs, but a node with one A100 GPU and one T4 GPU is not supported. The NVIDIA Device Plugin for Kubernetes does not support mixing different GPU models on the same node. For how a pod requests these GPUs, see the sketch after this list.

* When using OpenShift, note that either one server or at least three servers are required. Clusters with two servers are not supported. The single-server deployment is called single-node OpenShift (SNO), and using this configuration results in an OpenShift environment without high availability.
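
For illustration, the following is a minimal sketch of a pod that consumes GPUs on a worker node by requesting the `nvidia.com/gpu` extended resource that the NVIDIA Device Plugin advertises. The pod name and image tag are placeholders, and the sketch assumes that the device plugin (typically installed by the NVIDIA GPU Operator) is already running on the cluster:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test            # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubi9   # placeholder CUDA image
    command: ["nvidia-smi"]       # prints the GPUs allocated to the container
    resources:
      limits:
        nvidia.com/gpu: 2         # both GPUs come from one node, so they are the same model
----
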
You can choose one of the following methods to access the containerized GPUs:

* GPU passthrough
* Multi-Instance GPU (MIG)
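
For contrast: with GPU passthrough, a pod requests whole GPUs through `nvidia.com/gpu`, as in the previous sketch. With MIG, a single physical GPU is partitioned into isolated instances that can be advertised as their own extended resources. The following hedged sketch shows a pod requesting one MIG instance, assuming a cluster where MIG is enabled with the `mixed` strategy; the `nvidia.com/mig-1g.5gb` resource name and profile are illustrative and depend on the GPU model and MIG configuration:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: mig-smoke-test            # hypothetical pod name
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubi9   # placeholder CUDA image
    command: ["nvidia-smi", "-L"] # lists the MIG instance visible to the container
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1  # one MIG slice; available names depend on configuration
----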