
TELCODOCS-1956: Move NVIDIA GPU architecture content to Hardware Accelerators section

Author: StephenJamesSmith
Authored: 2024-11-26 11:53:30 -05:00
Committed by: openshift-cherrypick-robot
Parent: 2d7cd1cc6d
Commit: 404cbc4922
20 changed files with 143 additions and 20 deletions

View File

@@ -88,9 +88,6 @@ Topics:
- Name: Control plane architecture
File: control-plane
Distros: openshift-enterprise,openshift-origin,openshift-online
- Name: NVIDIA GPU architecture overview
File: nvidia-gpu-architecture-overview
Distros: openshift-enterprise
- Name: Understanding OpenShift development
File: understanding-development
Distros: openshift-enterprise
@@ -3301,6 +3298,8 @@ Distros: openshift-origin,openshift-enterprise
Topics:
- Name: About hardware accelerators
File: about-hardware-accelerators
- Name: NVIDIA GPU architecture
File: nvidia-gpu-architecture
---
Name: Backup and restore
Dir: backup_and_restore

View File

@@ -16,7 +16,7 @@ Red{nbsp}Hat {product-title} provides support for cards and peripheral hardware
* Data processing units (DPUs)
image::OCP_HW_Accelerators_4.png[Supported hardware accelerator cards and peripherals]
image::OCP_HW_Accelerators_5.png[Supported hardware accelerator cards and peripherals]
Specialized hardware accelerators provide a rich set of benefits for AI/ML development:
@@ -44,3 +44,4 @@ NVIDIA GPU Operator on Red Hat OpenShift Container Platform]
* link:https://www.amd.com/en/products/accelerators/instinct.html[AMD Instinct Accelerators]
* link:https://www.intel.com/content/www/us/en/products/details/processors/ai-accelerators/gaudi-overview.html[Intel Gaudi AI Accelerators]

View File

@@ -0,0 +1,106 @@
:_mod-docs-content-type: ASSEMBLY
[id="nvidia-gpu-architecture"]
= NVIDIA GPU architecture
include::_attributes/common-attributes.adoc[]
:context: nvidia-gpu-architecture
toc::[]
NVIDIA supports the use of graphics processing unit (GPU) resources on {product-title}. {product-title} is a security-focused and hardened Kubernetes platform developed and supported by Red Hat for deploying and managing Kubernetes clusters at scale. {product-title} includes enhancements to Kubernetes so that users can easily configure and use NVIDIA GPU resources to accelerate workloads.
The NVIDIA GPU Operator uses the Operator framework within {product-title} to manage the full lifecycle of NVIDIA software components required to run GPU-accelerated workloads.
These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node tagging using GPU feature discovery (GFD), DCGM-based monitoring, and others.
[NOTE]
====
The NVIDIA GPU Operator is only supported by NVIDIA. For more information about obtaining support from NVIDIA, see link:https://access.redhat.com/solutions/5174941[Obtaining Support from NVIDIA].
====
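For example, a workload consumes a GPU exposed by this stack by requesting the `nvidia.com/gpu` extended resource that the device plugin advertises. The following sketch is illustrative only; the pod name and image tag are placeholders:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test                  # illustrative name
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-smoke-test
    image: nvcr.io/nvidia/cuda:<tag>     # replace with a CUDA base image tag available to your cluster
    command: ["nvidia-smi"]              # made available in the container by the driver and container toolkit that the Operator deploys
    resources:
      limits:
        nvidia.com/gpu: 1                # extended resource advertised by the NVIDIA device plugin
----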
//TELCODOCS-1956 - Moved to Hardware accelerators section
include::modules/nvidia-gpu-prerequisites.adoc[leveloffset=+1]
// New enablement modules
ifndef::openshift-dedicated,openshift-rosa[]
include::modules/nvidia-gpu-enablement.adoc[leveloffset=+1]
include::modules/nvidia-gpu-bare-metal.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/ai-enterprise/deployment-guide-openshift-on-bare-metal/0.1.0/on-bare-metal.html[Red Hat OpenShift on Bare Metal Stack]
include::modules/nvidia-gpu-virtualization.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/openshift/openshift-virtualization.html[NVIDIA GPU Operator with OpenShift Virtualization]
include::modules/nvidia-gpu-vsphere.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/openshift/nvaie-with-ocp.html#openshift-container-platform-on-vmware-vsphere-with-nvidia-vgpus[OpenShift Container Platform on VMware vSphere with NVIDIA vGPUs]
include::modules/nvidia-gpu-kvm.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://computingforgeeks.com/how-to-deploy-openshift-container-platform-on-kvm/[How To Deploy OpenShift Container Platform 4.13 on KVM]
include::modules/nvidia-gpu-csps.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/ai-enterprise/deployment-guide-cloud/0.1.0/aws-redhat-openshift.html[Red Hat OpenShift in the Cloud]
endif::openshift-dedicated,openshift-rosa[]
// Include this module at a higher leveloffset for OSD/ROSA.
ifdef::openshift-dedicated,openshift-rosa[]
include::modules/nvidia-gpu-csps.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/ai-enterprise/deployment-guide-cloud/0.1.0/aws-redhat-openshift.html[Red Hat OpenShift in the Cloud]
endif::openshift-dedicated,openshift-rosa[]
ifndef::openshift-dedicated,openshift-rosa[]
include::modules/nvidia-gpu-red-hat-device-edge.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://cloud.redhat.com/blog/how-to-accelerate-workloads-with-nvidia-gpus-on-red-hat-device-edge[How to accelerate workloads with NVIDIA GPUs on Red Hat Device Edge]
endif::openshift-dedicated,openshift-rosa[]
// TELCODOCS-1092 GPU sharing methods
include::modules/nvidia-gpu-sharing-methods.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://developer.nvidia.com/blog/improving-gpu-utilization-in-kubernetes/[Improving GPU Utilization]
include::modules/nvidia-gpu-cuda-streams.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#asynchronous-concurrent-execution[Asynchronous Concurrent Execution]
include::modules/nvidia-gpu-time-slicing.adoc[leveloffset=+2]
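As an illustration of how time slicing is expressed, the device plugin consumes a sharing configuration similar to the following ConfigMap sketch. The ConfigMap name, key, and replica count are placeholders, and the namespace is assumed to be the one where the GPU Operator is installed; see the linked NVIDIA documentation for the supported procedure.

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: device-plugin-config             # illustrative name, referenced from the ClusterPolicy device plugin configuration
  namespace: nvidia-gpu-operator         # assumes the default GPU Operator namespace
data:
  time-sliced: |-                        # illustrative key
    version: v1
    sharing:
      timeSlicing:
        resources:
        - name: nvidia.com/gpu
          replicas: 4                    # each physical GPU is advertised as four schedulable replicas
----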
include::modules/nvidia-gpu-cuda-mps.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/deploy/mps/index.html[CUDA MPS]
include::modules/nvidia-gpu-mig-gpu.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/datacenter/tesla/mig-user-guide/[NVIDIA Multi-Instance GPU User Guide]
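As an illustration of how MIG partitioning is typically requested, the MIG manager that the GPU Operator deploys watches the `nvidia.com/mig.config` node label. The node name and profile in the following sketch are placeholders; valid profile names depend on the GPU model, as described in the linked user guide.

[source,yaml]
----
apiVersion: v1
kind: Node
metadata:
  name: gpu-worker-0                     # illustrative node name
  labels:
    nvidia.com/mig.config: all-1g.5gb    # illustrative profile; valid profiles depend on the GPU model
----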
include::modules/nvidia-gpu-virtualization-with-gpu.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://www.nvidia.com/en-us/data-center/virtual-solutions/[Virtual GPUs]
include::modules/nvidia-gpu-features.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://docs.nvidia.com/ngc/ngc-deploy-on-premises/nvidia-certified-systems/index.html[NVIDIA-Certified Systems]
* link:https://docs.nvidia.com/ai-enterprise/index.html#deployment-guides[NVIDIA AI Enterprise]
* link:https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/overview.html#[NVIDIA Container Toolkit]
* link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/enable-gpu-monitoring-dashboard.html[Enabling the GPU Monitoring Dashboard]
* link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/mig-ocp.html[MIG Support in OpenShift Container Platform]
* link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/time-slicing-gpus-in-openshift.html[Time-slicing NVIDIA GPUs in OpenShift]
* link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/mirror-gpu-ocp-disconnected.html[Deploy GPU Operators in a disconnected or airgapped environment]
// Topic not available in OSD/ROSA
ifndef::openshift-dedicated,openshift-rosa[]
* xref:../hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator]
endif::openshift-dedicated,openshift-rosa[]

Binary file not shown (new image added, 23 KiB).

View File

@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-architecture_{context}"]
= NVIDIA GPU architecture
NVIDIA supports the use of graphics processing unit (GPU) resources on {product-title}. {product-title} is a security-focused and hardened Kubernetes platform developed and supported by Red Hat for deploying and managing Kubernetes clusters at scale. {product-title} includes enhancements to Kubernetes so that users can easily configure and use NVIDIA GPU resources to accelerate workloads.
The NVIDIA GPU Operator uses the Operator framework within {product-title} to manage the full lifecycle of NVIDIA software components required to run GPU-accelerated workloads.
These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node tagging using GPU feature discovery (GFD), DCGM-based monitoring, and others.
[NOTE]
====
The NVIDIA GPU Operator is only supported by NVIDIA. For more information about obtaining support from NVIDIA, see link:https://access.redhat.com/solutions/5174941[Obtaining Support from NVIDIA].
====
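For example, the node tagging performed by GFD surfaces GPU properties as node labels under the `nvidia.com/` prefix. The following sketch shows an illustrative subset; the exact label set and values vary by GPU model and component versions.

[source,yaml]
----
# Illustrative subset of node labels applied by GPU Feature Discovery (GFD)
metadata:
  labels:
    nvidia.com/gpu.count: "2"
    nvidia.com/gpu.product: "NVIDIA-A100-PCIE-40GB"
    nvidia.com/gpu.memory: "40960"
----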

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-bare-metal_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-csps_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-cuda-mps_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-cuda-streams_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-enablement_{context}"]

View File

@@ -1,7 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-features_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-kvm_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-mig-gpu_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-red-hat-device-edge_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-sharing-methods_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-time-slicing_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-virtualization-with-gpu_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-virtualization_{context}"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc
// * hardware_accelerators/about-hardware-accelerators.adoc
:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-vsphere_{context}"]