
Merge pull request #100573 from openshift-cherrypick-robot/cherry-pick-100332-to-enterprise-4.20

[enterprise-4.20] OSDOCS 16489 Add GPU usage with CMA docs -- NEEDED for 4.20!
Michael Burke
2025-10-15 15:20:56 -04:00
committed by GitHub
2 changed files with 41 additions and 0 deletions


@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * nodes/cma/nodes-cma-autoscaling-custom-trigger.adoc

:_mod-docs-content-type: CONCEPT
[id="nodes-cma-autoscaling-custom-trigger-prom-gpu_{context}"]
= Configuring GPU-based autoscaling with Prometheus and DCGM metrics

You can use the Custom Metrics Autoscaler with NVIDIA Data Center GPU Manager (DCGM) metrics to scale workloads based on GPU utilization. This is particularly useful for AI and machine learning workloads that require GPU resources.

.Example scaled object with a Prometheus target for GPU-based autoscaling
[source,yaml,options="nowrap"]
----
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: gpu-scaledobject
  namespace: my-namespace
spec:
  scaleTargetRef:
    kind: Deployment
    name: gpu-deployment
  minReplicaCount: 1 <1>
  maxReplicaCount: 5 <2>
  triggers:
  - type: prometheus
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      namespace: my-namespace
      metricName: gpu_utilization
      threshold: '90' <3>
      query: SUM(DCGM_FI_DEV_GPU_UTIL{instance=~".+", gpu=~".+"}) <4>
      authModes: bearer
    authenticationRef:
      name: keda-trigger-auth-prometheus
----
<1> Specifies the minimum number of replicas to maintain. For GPU workloads, this should not be set to `0` to ensure that metrics continue to be collected.
<2> Specifies the maximum number of replicas allowed during scale-up operations.
<3> Specifies the GPU utilization percentage threshold that triggers scaling. When the GPU utilization reported by the query exceeds 90%, the autoscaler scales up the deployment.
<4> Specifies a Prometheus query using NVIDIA DCGM metrics to monitor GPU utilization across all GPU devices. The `DCGM_FI_DEV_GPU_UTIL` metric provides GPU utilization percentages.
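The `authenticationRef` field points to a `TriggerAuthentication` object that supplies the bearer token the Prometheus scaler uses to query Thanos Querier. This module does not define that object; the following is a minimal sketch of what it might look like, assuming a secret named `thanos-token` that holds a service account token and CA certificate. The secret name and keys are illustrative assumptions, not values taken from this commit.

.Example trigger authentication referenced by the scaled object (illustrative)
[source,yaml,options="nowrap"]
----
# Hypothetical sketch: the secret name "thanos-token" and its keys are assumptions.
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-prometheus
  namespace: my-namespace
spec:
  secretTargetRef:
  - parameter: bearerToken   # token used to authenticate against Thanos Querier
    name: thanos-token
    key: token
  - parameter: ca            # CA certificate used to verify the TLS connection
    name: thanos-token
    key: ca.crt
----

After both objects exist in the same namespace, you can create them with `oc apply -f <file>` and confirm that the scaler is active with `oc get scaledobject gpu-scaledobject -n my-namespace`.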


@@ -22,6 +22,7 @@ You can configure a certificate authority xref:../../nodes/cma/nodes-cma-autosca
// assemblies.
include::modules/nodes-cma-autoscaling-custom-trigger-prom.adoc[leveloffset=+1]
include::modules/nodes-cma-autoscaling-custom-trigger-prom-gpu.adoc[leveloffset=+2]
include::modules/nodes-cma-autoscaling-custom-prometheus-config.adoc[leveloffset=+2]
[role="_additional-resources"]