mirror of https://github.com/containers/ramalama.git synced 2026-02-05 06:46:39 +01:00

21 Commits

Ian Eaves
db8bb5d9df adds support for hosted chat providers
Signed-off-by: Ian Eaves <ian.k.eaves@gmail.com>
2026-01-24 16:00:59 -06:00
Mike Bonnet
95b13e209e s390x: switch to a smaller bigendian model for testing
Should improve performance of integration tests.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-09-11 06:33:54 -07:00
Mike Bonnet
cb67ddc4dd use bigendian models when testing bigendian arches
Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-09-10 23:06:43 -07:00
Oliver Walsh
912bf92d6f Use Hugging Face models for tinyllama and smollm:135m
The chat templates from the ollama models do not work with llama.cpp after
converting (successfully) to jinja2.
Work around this by creating shortnames.conf aliases to equivalent models on
Hugging Face.

Signed-off-by: Oliver Walsh <owalsh@redhat.com>
2025-09-10 14:38:26 +01:00
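
For reference, the workaround above comes down to adding entries to the [shortnames] table that repoint the ollama-style names at equivalent Hugging Face GGUFs. A minimal sketch of such an alias follows; the quoted key = value form matches shortnames.conf, but the model name and repository path shown are illustrative assumptions, not the actual entries from this commit:

    [shortnames]
      # hypothetical alias: repoint a short name at a Hugging Face gguf
      "smollm:135m" = "huggingface://HuggingFaceTB/SmolLM2-135M-Instruct-GGUF/smollm2-135m-instruct-q8_0.gguf"
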
Oliver Walsh
4e96af0fb0 Revert back to ollama granite-code models
The issues with the ollama granite-code models are now resolved with
https://github.com/containers/ramalama/pull/1856
https://github.com/containers/ramalama/pull/1858

Revert the changes from da7eb54046.

Fixes: #1374

Signed-off-by: Oliver Walsh <owalsh@redhat.com>
2025-08-28 17:42:42 +01:00
Daniel J Walsh
de5ed5dd6a Sort shortnames alphabetically
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2025-08-12 05:02:09 -04:00
Daniel J Walsh
e4a833feb0 Add gpt-oss models to shortnames.conf
Fixes: https://github.com/containers/ramalama/issues/1805

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2025-08-11 16:06:07 -04:00
Eric Curtin
fa2f485175 Mistral should point to lmstudio gguf
I don't know who MaziyarPanahi is, but I do know who lmstudio is

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-07-15 15:04:54 +01:00
Eric Curtin
089589cdfe Add gemma aliases
The ollama variants are incompatible

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-06-27 15:28:00 +01:00
Eric Curtin
9bc76c2757 This is not a multi-model model
Although the other gemma ones are. Point the user towards a single
gguf.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-06-10 18:43:06 +01:00
Eric Curtin
e494a6d924 Add smolvlm vision models
For multimodal usage

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-05-18 12:02:01 +01:00
Eric Curtin
2c48af0175 Add shortnames for mistral-small3.1 model
Another Ollama model that's only compatible with Ollama's fork
of llama.cpp

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-05-01 17:48:46 +01:00
Eric Curtin
35cbabdb74 Add gemma3 shortnames
Otherwise we will pull the incompatible ggufs from the Ollama
registry.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-04-25 18:06:06 +01:00
Daniel J Walsh
974a37fe00 Add shortname for deepseek
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2025-01-27 10:02:26 -05:00
Eric Curtin
da7eb54046 granite-code models in Ollama are malformed
To the extent that they work in Ollama but not in vanilla llama.cpp.

This is sort of a workaround, pulling the Hugging Face versions instead.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-01-13 17:27:50 +00:00
Eric Curtin
a9ecebc190 smollm:135m for testing purposes
smollm is a model by Hugging Face; it's good for testing and CPU
inference.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-01-09 16:04:15 +00:00
Eric Curtin
65f02b8051 Update shortnames.conf to alias new granite models
This updates the shortnames for the "granite" models to point to newer
versions.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2024-12-18 17:42:01 +00:00
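
Updating an alias like this is just a matter of repointing the existing key at the newer model. A brief sketch under the same shortnames.conf conventions; both the old and new targets shown here are hypothetical, not the values from this commit:

    [shortnames]
      # previously: "granite" = "ollama://granite-code"   (hypothetical old target)
      "granite" = "ollama://granite3-dense"                # hypothetical new target
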
Daniel J Walsh
e4112a60e8 Add granite-8b to shortnames.conf
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2024-11-12 10:08:31 -07:00
swarajpande5
8cef39b8b4 Fix shortname paths
Signed-off-by: swarajpande5 <swarajpande5@gmail.com>
2024-10-25 20:17:44 +05:30
Michael Clifford
2f01df54ce add AI Lab models to shortnames
Signed-off-by: Michael Clifford <mcliffor@redhat.com>
2024-10-21 12:02:14 -04:00
Daniel J Walsh
71eeacdd93 Cleanup and install shortnames files
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2024-09-19 13:44:50 -04:00