
RamaLama

This script demonstrates RamaLama with a sample workflow: it pulls a model, serves it, and lets you test inference through a browser or with curl.

Requirements

  • RamaLama installed and available in your PATH
  • Podman installed and configured

Usage

Run the script:

./ramalama.sh

Override the browser (optional):

BROWSER=google-chrome ./ramalama.sh
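Internally, the BROWSER override likely boils down to something like the following sketch. The helper name pick_browser and the fallback to xdg-open are assumptions for illustration, not taken from ramalama.sh itself:

```shell
#!/bin/sh
# Hedged sketch of the BROWSER override: use $BROWSER if set and
# non-empty, otherwise fall back to xdg-open. pick_browser is a
# hypothetical helper name.

pick_browser() {
    echo "${BROWSER:-xdg-open}"
}

open_endpoint() {
    # Open the service URL with the chosen browser command.
    "$(pick_browser)" "$1"
}
```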

Features

  • Pulls and runs the smollm:135m and granite models with RamaLama
  • Opens the service endpoint in your browser automatically
  • Waits for the service to be ready before testing inference
  • Performs a sample inference with curl against the granite3.1-dense model
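The wait-then-test flow above might be sketched as follows. The probe loop, port, endpoint path, and request payload are assumptions based on common OpenAI-compatible model servers, not necessarily what ramalama.sh itself uses:

```shell
#!/bin/sh
# Sketch of "wait for the service, then test inference".
# URL, port, and JSON payload are illustrative assumptions.

wait_ready() {
    # Probe $1 up to ${2:-30} times, one second apart;
    # succeed as soon as the endpoint answers.
    url=$1; tries=${2:-30}; i=0
    while [ "$i" -lt "$tries" ]; do
        curl -fsS -o /dev/null "$url" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

test_inference() {
    # Hypothetical sample inference request against the served model.
    curl -fsS "$1/v1/chat/completions" \
        -H "Content-Type: application/json" \
        -d '{"messages":[{"role":"user","content":"Hello"}]}'
}
```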

Advanced usage

You can also call specific functions from the script directly, for example:

./ramalama.sh pull
./ramalama.sh run
./ramalama.sh test

If a function supports them, extra arguments can be passed after the function name and are forwarded to it.
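A common way to implement this kind of per-function dispatch is a main function that forwards "$@", with the full workflow as the no-argument default. The sketch below uses stub bodies purely for illustration; the real implementations live in ramalama.sh:

```shell
#!/bin/sh
# Sketch of the function-dispatch pattern: ./ramalama.sh pull,
# ./ramalama.sh run, etc. Bodies here are illustrative stubs
# (a real script would define test and the rest similarly).

pull() {
    echo "pulling ${1:-smollm:135m}"
}

run() {
    echo "serving ${1:-smollm:135m}"
}

main() {
    if [ $# -eq 0 ]; then
        # No arguments: run the full workflow.
        pull && run
    else
        # Dispatch to the named function, forwarding extra arguments.
        "$@"
    fi
}

main "$@"
```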