uv was being installed in two different ways (the github action and
`make install-requirement`). Use `uv run` explicitly instead of activating
the virtualenv in the uv github action (its docs advise against this option).
Signed-off-by: Oliver Walsh <owalsh@redhat.com>
They are not part of POSIX and are Linux-specific. This code crashes on
Darwin because of missing symbols.
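A minimal sketch of the guard pattern such a fix typically uses (the function name is illustrative, not the actual code):

```python
import sys

def linux_only_feature_supported() -> bool:
    # Hypothetical guard: Linux-only calls (not part of POSIX) are skipped
    # on other platforms such as Darwin instead of failing on missing symbols.
    return sys.platform == "linux"
```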
Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
ollama uses go-template for chat template files, while llama.cpp uses jinja2.
The go-templates are converted to jinja2, but since the first chat template
file is always chosen, the converted file is not used.
Pick the last file instead of the first to resolve this.
Fixes #1855
Signed-off-by: Oliver Walsh <owalsh@redhat.com>
Build and pull the complete list of files for split GGUF models.
Also handle the case where the model is stored in a subdirectory of
the repo.
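A minimal sketch of enumerating the shards, assuming the usual llama.cpp split-GGUF naming convention (`<base>-00001-of-0000N.gguf`); the function and parameter names are illustrative:

```python
def split_gguf_files(base: str, count: int, subdir: str = "") -> list[str]:
    # Hypothetical sketch: build the complete list of shard filenames for a
    # split GGUF model, prefixed with the repo subdirectory when the model
    # is not stored at the top level.
    prefix = f"{subdir}/" if subdir else ""
    return [
        f"{prefix}{base}-{i:05d}-of-{count:05d}.gguf"
        for i in range(1, count + 1)
    ]
```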
Signed-off-by: Oliver Walsh <owalsh@redhat.com>
Konflux is moving from a shared "appstudio-pipeline" service account
to a separate service account for each component, to increase security
and improve efficiency.
Signed-off-by: Mike Bonnet <mikeb@redhat.com>
Before:
Error:
Optional: ModelScope models require the modelscope module.
This module can be installed via PyPI tools like uv, pip, pip3, pipx, or via
distribution package managers like dnf or apt. Example:
uv pip install modelscope
After:
Error: This operation requires modelscope which is not available.
This tool can be installed via PyPI tools like uv, pip, pip3 or pipx. Example:
pip install modelscope
In particular:
- follows the same error message structure as the `huggingface` module
- separates the error from the explanation and guidance with a new line
- explains that this particular operation requires the CLI tool -
  currently the error is displayed on login, logout, or push, implying
  that some other operations might not require this tool
- provides `pip` as the first example of how to solve the issue (that's
  also the tool mentioned in the README)
- there aren't any distribution deb or rpm packages with modelscope,
  so that guidance is removed
- `modelscope` is a command-line tool, not a module as it was
  referred to before
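The message structure described above can be sketched as a small helper (hypothetical names, not the actual implementation): a one-line error naming the missing tool, a blank line, then the installation guidance.

```python
import shutil

def require_cli(tool: str, guidance: str) -> None:
    # Hypothetical helper: raise only when the CLI tool is missing, with the
    # error on the first line and the guidance separated by a blank line.
    if shutil.which(tool) is None:
        raise RuntimeError(
            f"This operation requires {tool} which is not available.\n\n"
            + guidance
        )
```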
Fixes: #1766
Signed-off-by: Pavol Babincak <pbabinca@redhat.com>
Before:
Error:
Optional: Huggingface models require the huggingface-cli module.
This module can be installed via PyPI tools like uv, pip, pip3, pipx, or via
distribution package managers like dnf or apt. Example:
uv pip install huggingface_hub
After:
Error: This operation requires huggingface-cli which is not available.
This tool can be installed via PyPI tools like uv, pip, pip3 or pipx. Example:
pip install -U "huggingface_hub[cli]"
Or via distribution package managers like dnf or apt. Example:
sudo dnf install python3-huggingface-hub
In particular:
- separates the error from the explanation and guidance with a new line
- explains that this particular operation requires the CLI tool -
  currently the error is displayed on login, logout, or push, implying
  that some other operations might not require this tool
- provides `pip` as the first example of how to solve the issue (that's
  also the tool mentioned in the README)
- uses the same package extra as the official documentation to install
  the CLI tool: https://huggingface.co/docs/huggingface_hub/en/guides/cli
- provides an example dnf command to install the package (dnf is also
  mentioned in the README file)
- `huggingface-cli` is a command-line tool, not a module as it was
  referred to before
Fixes: #1766
Signed-off-by: Pavol Babincak <pbabinca@redhat.com>
Previously, the location of the "llama-stack" image was hard-coded.
Allow it to be configured via an env var, in the same way that the
ramalama and GPU-specific images are.
Update the integration tests to use the newly-built llama-stack image.
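A minimal sketch of the lookup pattern; the env-var name and default image are assumptions, not the project's actual values:

```python
import os

# Hypothetical default; the real image location lives in the project config.
DEFAULT_LLAMA_STACK_IMAGE = "quay.io/ramalama/llama-stack"

def llama_stack_image() -> str:
    # Hypothetical sketch: resolve the image from an env var, falling back
    # to the built-in default, mirroring how the ramalama and GPU-specific
    # images are configured.
    return os.getenv("LLAMA_STACK_IMAGE", DEFAULT_LLAMA_STACK_IMAGE)
```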
Signed-off-by: Mike Bonnet <mikeb@redhat.com>
Podman health checks rely on systemd timers to periodically run the check. When
not running under systemd, the health check config is silently ignored and the
check never runs, leaving the status permanently in the "starting" state. This
causes wait_for_healthy() to time out.
Reimplement health checks in ramalama by polling the "/models" API endpoint and
parsing the result. This avoids the dependency on systemd, and works when running
in non-podman environments.
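A minimal sketch of the polling approach described above; the URL, timings, and function name are illustrative rather than the actual implementation:

```python
import json
import time
import urllib.error
import urllib.request

def wait_for_healthy(base_url: str, timeout: float = 30.0,
                     interval: float = 1.0) -> bool:
    # Hypothetical sketch: poll the "/models" endpoint until it returns
    # parseable JSON or the deadline passes. No systemd timers involved,
    # so this also works outside podman.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(base_url + "/models",
                                        timeout=interval) as resp:
                json.load(resp)
                return True
        except (urllib.error.URLError, json.JSONDecodeError, OSError):
            time.sleep(interval)
    return False
```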
Signed-off-by: Mike Bonnet <mikeb@redhat.com>