mirror of https://github.com/containers/ramalama.git synced 2026-02-05 15:47:26 +01:00
ramalama/test/unit/data/test_compose/basic.yaml
abhibongale 630b75ec55 Add Docker Compose generator
This commit introduces the `--generate=compose` option to the
`ramalama serve` command, enabling users
to generate a `docker-compose.yaml` file for a given model.

sourcery-ai suggested changes:
1. Add test_genfile_empty_content to verify that genfile handles
empty input gracefully and avoids unexpected behavior.

2. Add a condition to handle the 'oci://' prefix for RAG sources,
for consistency.

3. Update the parsing logic to support complex port formats, such as those
with IP addresses or protocols, to ensure compatibility
with all valid Docker Compose specifications.

4. The current check excludes images for other GPU backends such as ROCm
or custom builds; update it to support a wider range of GPU-enabled images.

fixes #184

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Signed-off-by: abhibongale <abhishekbongale@outlook.com>
2025-08-18 19:57:34 +01:00

20 lines
554 B
YAML

# Save this output to a 'docker-compose.yaml' file and run 'docker compose up'.
#
# Created with ramalama-0.1.0-test
services:
  tinyllama:
    container_name: ramalama-tinyllama
    image: test-image/ramalama:latest
    volumes:
      - "/models/tinyllama.gguf:/mnt/models/tinyllama.gguf:ro"
    ports:
      - "8080:8080"
    environment:
      - ACCEL_ENV=true
    devices:
      - "/dev/dri:/dev/dri"
      - "/dev/kfd:/dev/kfd"
      - "/dev/accel:/dev/accel"
    command: llama-server --model /mnt/models/tinyllama.gguf
    restart: unless-stopped
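A fixture like the one above maps naturally onto a plain Python dict
that a generator can then serialize (e.g. with PyYAML's `safe_dump`).
The following is a minimal sketch of assembling that structure; the
function name, parameters, and defaults are assumptions mirroring the
fixture, not ramalama's actual generator:

```python
# Hypothetical sketch: build a Compose-shaped dict for a model server.
# Mount paths and service naming mirror the test fixture above.
def build_compose(name, model_path, image, port=8080, devices=()):
    # Mount the model read-only under /mnt/models inside the container.
    mounted = f"/mnt/models/{model_path.rsplit('/', 1)[-1]}"
    service = {
        "container_name": f"ramalama-{name}",
        "image": image,
        "volumes": [f"{model_path}:{mounted}:ro"],
        "ports": [f"{port}:{port}"],
        "command": f"llama-server --model {mounted}",
        "restart": "unless-stopped",
    }
    if devices:
        # Pass GPU device nodes straight through to the container.
        service["devices"] = [f"{d}:{d}" for d in devices]
    return {"services": {name: service}}
```

Serializing this dict with a YAML emitter and prepending the comment
header would reproduce a file in the shape of the fixture.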