
docs: linting on ramalama-cuda

Applied some fixes based on the Markdown linter in VSCode

See: https://github.com/DavidAnson/vscode-markdownlint

Signed-off-by: Micah Abbott <miabbott@redhat.com>
Micah Abbott
2025-03-28 17:34:48 -04:00
parent d80f49c294
commit e1865100dd


@@ -5,6 +5,7 @@
This guide walks through the steps required to set up RamaLama with CUDA support.
## Install the NVIDIA Container Toolkit
Follow the installation instructions provided in the [NVIDIA Container Toolkit installation guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
### Installation using dnf/yum (For RPM based distros like Fedora)
@@ -14,7 +15,8 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
```bash
sudo dnf install -y nvidia-container-toolkit
```
> **Note:** The Nvidia Container Toolkit is required on the host for running CUDA in containers.
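To confirm the toolkit is installed and on the PATH, a quick version check works; a minimal sketch (the `nvidia-ctk` CLI ships with the package installed above):

```bash
# Confirm the NVIDIA Container Toolkit CLI is available
nvidia-ctk --version
```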
### Installation using APT (For Debian based distros like Ubuntu)
@@ -23,7 +25,7 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
@@ -34,12 +36,14 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
```bash
sudo apt-get update
```
* Install the NVIDIA Container Toolkit packages
```bash
sudo apt-get install -y nvidia-container-toolkit
```
> **Note:** The Nvidia Container Toolkit is required for WSL to have CUDA resources while running a container.
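Before configuring CDI, it can help to confirm that the host (or WSL) driver itself is healthy. A minimal host-side sketch, assuming the NVIDIA driver is already installed outside of any container:

```bash
# Host-side sanity check: the driver should already report the GPU
nvidia-smi
```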
## Setting Up CUDA Support
@@ -51,7 +55,7 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```
# Check the names of the generated devices
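One way to list the device names declared in the generated specification is the `cdi list` subcommand of the same tool; a short sketch:

```bash
# List the device names exposed by the generated CDI specification
nvidia-ctk cdi list
```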
Open and edit the NVIDIA container runtime configuration:
@@ -64,12 +68,13 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
> **Note:** Generate a new CDI specification after any configuration change, most notably when the driver is upgraded!
## Testing the Setup
**Based on this Documentation:** [Running a Sample Workload](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html)
---
# **Test the Installation**
* **Test the Installation**
Run the following command to verify setup:
```bash
@@ -77,10 +82,11 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
```
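Going by the linked NVIDIA sample-workload guide, a typical CDI-based check with Podman looks roughly like the sketch below (the image name and security flag are illustrative):

```bash
# Run nvidia-smi in a throwaway container via the CDI device generated earlier
podman run --rm --security-opt=label=disable \
    --device=nvidia.com/gpu=all \
    ubuntu nvidia-smi
```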
# **Expected Output**
Verify everything is configured correctly, with output similar to this:
```text
Thu Dec 5 19:58:40 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.72 Driver Version: 566.14 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+----------------------+
@@ -102,13 +108,14 @@ Follow the installation instructions provided in the [NVIDIA Container Toolkit i
| 0 N/A N/A 35 G /Xwayland N/A |
+-----------------------------------------------------------------------------------------+
```
## Troubleshooting
### CUDA Updates
On some CUDA software updates, RamaLama stops working and complains about missing shared NVIDIA libraries, for example:
```
```bash
ramalama run granite
Error: crun: cannot stat `/lib64/libEGL_nvidia.so.565.77`: No such file or directory: OCI runtime attempted to invoke a command that was not found
```
@@ -120,7 +127,9 @@ Because the CUDA version is updated, the CDI specification file needs to be recr
```
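Regenerating the CDI specification, i.e. repeating the `nvidia-ctk cdi generate` step from the setup section above, picks up the updated driver libraries; a minimal sketch:

```bash
# Recreate the CDI specification so it references the updated driver libraries
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```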
## SEE ALSO
**[ramalama(1)](ramalama.1.md)**, **[podman(1)](https://github.com/containers/podman/blob/main/docs/podman.1.md)**
## HISTORY
Jan 2025, Originally compiled by Dan Walsh <dwalsh@redhat.com>