// Module included in the following assemblies:
//
// * architecture/nvidia-gpu-architecture-overview.adoc

:_mod-docs-content-type: CONCEPT
[id="nvidia-gpu-cuda-streams_{context}"]
= CUDA streams

Compute Unified Device Architecture (CUDA) is a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs.

A stream is a sequence of operations that executes in issue order on the GPU. CUDA commands are typically executed sequentially in a default stream, where a task does not start until the preceding task has completed.

Asynchronous processing of operations across different streams allows for parallel execution of tasks. A task issued in one stream might run before, during, or after a task issued in another stream. This allows the GPU to run multiple tasks simultaneously in no prescribed order, leading to improved performance.
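
The multi-stream behavior described above can be illustrated with a minimal CUDA sketch. This example is not taken from the NVIDIA or OpenShift documentation: the kernel name, array sizes, and launch configuration are illustrative. It queues a copy and a kernel in each of two streams; work within a stream runs in issue order, while work in different streams may overlap.

[source,cuda]
----
// Minimal sketch: overlapping copies and kernels in two CUDA streams.
#include <cuda_runtime.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *h0, *h1, *d0, *d1;
    cudaMallocHost(&h0, n * sizeof(float)); // pinned host memory enables
    cudaMallocHost(&h1, n * sizeof(float)); // truly asynchronous copies
    cudaMalloc(&d0, n * sizeof(float));
    cudaMalloc(&d1, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Each stream executes its copy, then its kernel, in issue order.
    // The two streams may run concurrently with each other.
    cudaMemcpyAsync(d0, h0, n * sizeof(float), cudaMemcpyHostToDevice, s0);
    scale<<<(n + 255) / 256, 256, 0, s0>>>(d0, n);
    cudaMemcpyAsync(d1, h1, n * sizeof(float), cudaMemcpyHostToDevice, s1);
    scale<<<(n + 255) / 256, 256, 0, s1>>>(d1, n);

    // Block the host until all work in each stream has finished.
    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(d0);
    cudaFree(d1);
    cudaFreeHost(h0);
    cudaFreeHost(h1);
    return 0;
}
----

If the same four operations were issued without a stream argument, they would all serialize in the default stream; splitting them across `s0` and `s1` is what allows the copy in one stream to overlap the kernel in the other.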