mellea.backends.ollama
class mellea.backends.ollama.OllamaModelBackend(model_id: str | ModelIdentifier = model_ids.IBM_GRANITE_3_3_8B, formatter: Formatter | None = None, base_url: str | None = None, model_options: dict | None = None)
model_id
: str | ModelIdentifier
: Ollama model ID. If a ModelIdentifier is given, an ollama_name must be provided by that ModelIdentifier.
base_url
: str
: Endpoint that serves the model API; defaults to env(OLLAMA_HOST) or http://localhost:11434.
model_options
: dict
: Ollama model options.
formatter
: Formatter
: Formatter for creating model input.
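A minimal construction sketch based on the signature above; the option values shown ("temperature", "num_predict") are standard Ollama options, not defaults documented here:

    from mellea.backends.ollama import OllamaModelBackend

    # Construct a backend against a local Ollama server. When base_url is
    # omitted it falls back to env(OLLAMA_HOST) or http://localhost:11434.
    backend = OllamaModelBackend(
        model_id="llama2",                   # a plain-string Ollama model ID
        base_url="http://localhost:11434",
        model_options={"temperature": 0.2},  # passed through as Ollama options
    )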
mellea.backends.ollama.OllamaModelBackend._get_ollama_model_id()
mellea.backends.ollama.OllamaModelBackend._check_ollama_server()
mellea.backends.ollama.OllamaModelBackend.is_model_available(model_name)
model_name
: The name of the model to check for (e.g., "llama2").
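A hedged usage sketch, reusing the backend constructed above; callers might instead trigger a pull when the model is absent (see _pull_ollama_model below):

    # Guard generation on a local copy of the model being present.
    if not backend.is_model_available("llama2"):
        raise RuntimeError("model 'llama2' is not served by this Ollama instance")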
mellea.backends.ollama.OllamaModelBackend._pull_ollama_model()
mellea.backends.ollama.OllamaModelBackend._simplify_and_merge(model_options: dict[str, Any] | None)
model_options
: The model_options for this call.
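The name _simplify_and_merge suggests per-call options are merged over the backend's defaults; a minimal sketch of that assumed dict-merge semantics (the real helper may also simplify or normalize option names):

    # Assumed semantics: per-call options override the backend's defaults.
    backend_defaults = {"temperature": 0.2, "num_predict": 256}
    call_options = {"temperature": 0.7}

    merged = {**backend_defaults, **(call_options or {})}
    assert merged == {"temperature": 0.7, "num_predict": 256}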
mellea.backends.ollama.OllamaModelBackend._make_backend_specific_and_remove(model_options: dict[str, Any])
model_options
: The model_options for this call.
mellea.backends.ollama.OllamaModelBackend.generate_from_context(action: Component | CBlock, ctx: Context, format: type[BaseModelSubclass] | None = None, model_options: dict | None = None, generate_logs: list[GenerateLog] | None = None, tool_calls: bool = False)
See generate_from_chat_context.
mellea.backends.ollama.OllamaModelBackend.generate_from_chat_context(action: Component | CBlock, ctx: Context, format: type[BaseModelSubclass] | None = None, model_options: dict | None = None, generate_logs: list[GenerateLog] | None = None, tool_calls: bool = False)
Generates a completion from the provided Context using this backend's Formatter.
This implementation treats the Context as a chat history and uses the ollama.Client.chat() interface to generate a completion. This will not always work, because sometimes we want to use non-chat models.
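The docstring names ollama.Client.chat() as the underlying interface; a stand-alone sketch of that call using the ollama-python client directly (not mellea's wrapper):

    import ollama

    client = ollama.Client(host="http://localhost:11434")
    response = client.chat(
        model="llama2",
        messages=[{"role": "user", "content": "Say hello."}],  # the chat history
        options={"temperature": 0.2},
    )
    print(response["message"]["content"])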
mellea.backends.ollama.OllamaModelBackend._generate_from_raw(actions: list[Component | CBlock], format: type[BaseModelSubclass] | None = None, model_options: dict | None = None, generate_logs: list[GenerateLog] | None = None)
mellea.backends.ollama.OllamaModelBackend._extract_model_tool_requests(tools: dict[str, Callable], chat_response: ollama.ChatResponse)
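The helper's inputs suggest matching the response's tool calls against the registered tools; a hedged sketch using the ollama.ChatResponse shape from recent ollama-python clients (the real method's return type is not documented here, so a plain dict is used):

    from typing import Any, Callable

    import ollama

    def extract_tool_requests(
        tools: dict[str, Callable], chat_response: ollama.ChatResponse
    ) -> dict[str, dict[str, Any]]:
        """Collect arguments for each tool call that names a registered tool."""
        requests: dict[str, dict[str, Any]] = {}
        for call in chat_response.message.tool_calls or []:
            if call.function.name in tools:  # ignore calls to unknown tools
                requests[call.function.name] = dict(call.function.arguments)
        return requests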