Use pre- and post-conditions to validate that your LLM outputs meet specific requirements.
Notice the `strategy` argument in the `m.instruct` call: it attaches a sampling strategy to the instruction. This strategy (`RejectionSamplingStrategy()`) checks whether all requirements are met. If any requirement fails, the sampling strategy samples a new email from the LLM. This process repeats until the `loop_budget` of retries is consumed or all requirements are met.
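The retry loop described above can be sketched in plain Python (a conceptual sketch only, not Mellea's internals; the names `generate` and `rejection_sample` are invented for illustration):

```python
from typing import Callable, List


def rejection_sample(
    generate: Callable[[], str],
    requirements: List[Callable[[str], bool]],
    loop_budget: int,
) -> tuple[str, bool]:
    """Retry generation until every requirement passes or the budget runs out."""
    first_sample = None
    for _ in range(loop_budget):
        sample = generate()
        if first_sample is None:
            first_sample = sample
        if all(req(sample) for req in requirements):
            return sample, True  # success: all requirements met
    # Budget exhausted: fall back to the first sample
    return first_sample, False


# Toy generator that improves on each call
outputs = iter(["too short", "a polite, complete email body"])
result, ok = rejection_sample(
    generate=lambda: next(outputs),
    requirements=[lambda s: len(s) > 10],
    loop_budget=3,
)
```

The fallback branch mirrors the failure handling discussed below: when no sample satisfies every requirement, some result must still be chosen.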
Even with retries, sampling might not generate results that fulfill all requirements (`email_candidate.success == False`). Mellea forces you to think about what it means for an LLM call to fail; in this case, we handle the situation by simply returning the first sample as the final result.
With the `return_sampling_results=True` parameter, the `instruct()` function returns a `SamplingResult` object (not a `ModelOutputThunk`), which carries the full history of sampling and validation results for each sample.

Because the `validation_fn` parameter requires running validation on the full session context (see Context Management), Mellea provides a wrapper for simpler validation functions (`simple_validate(fn: Callable[[str], bool])`) that take the output string and return a boolean, as seen in this case.
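A sketch of what such a wrapper might look like (hypothetical; Mellea's actual implementation may differ, and the dict-based "context" here is a stand-in for Mellea's session context): it lifts a plain string predicate into a context-aware validator.

```python
from typing import Callable


def simple_validate(fn: Callable[[str], bool]) -> Callable[[dict], bool]:
    """Adapt a string-only predicate into a context-aware validator.

    The 'context' is modeled here as a plain dict holding the last
    model output under the key 'output'.
    """
    def validator(context: dict) -> bool:
        return fn(context["output"])
    return validator


# Usage: the author of the predicate never sees the full context
has_signoff = simple_validate(lambda s: "Best regards" in s)
```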
The third requirement is a `check()`. Checks are only used for validation, not for generation. Checks aim to avoid the “do not think about B” effect that often primes models (and humans) to do the opposite and “think” about B.
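One way to picture the difference (a conceptual sketch; the `check_only` flag and tuple layout are invented for illustration): check-style requirements are withheld from the prompt but still applied at validation time.

```python
# Each requirement: (description, validator, check_only)
requirements = [
    ("Be polite", lambda s: "please" in s.lower(), False),
    ("Do not mention purple elephants",
     lambda s: "purple elephant" not in s.lower(), True),
]

# Only non-check requirements are shown to the model in the prompt...
prompt_lines = [desc for desc, _, check_only in requirements if not check_only]


# ...but every requirement, checks included, is used for validation.
def validate(output: str) -> bool:
    return all(fn(output) for _, fn, _ in requirements)
```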
Use the `m tune` subcommand to train your own LoRAs for requirement checking (and for other types of Mellea components as well).

The `instruct()` method is a convenience function that creates, and then generates from, an `Instruction` Component; `req()` similarly wraps the `Requirement` Component, and so on. The Quickstart will take us one level deeper into understanding what happens under the hood when you call `m.instruct()`.
Model options can be passed to the inference engine via the `model_options` parameter. Mellea supports many different types of inference engines (ollama, openai-compatible vllm, huggingface, etc.). These inference engines, which we call `Backend`s, provide different and sometimes inconsistent dict keysets for specifying model options. For the most common options among model providers, Mellea provides engine-agnostic options, which can be discovered by typing `ModelOption.<TAB>` in your favorite IDE; for example, temperature can be specified as `{ModelOption.TEMPERATURE: 0}`, and this will “just work” across all inference engines.
You can add any key-value pair supported by the backend to the `model_options` dictionary, and those options will be passed along to the inference engine *even if* a Mellea-specific `ModelOption.<KEY>` is defined for that option. This means you can safely copy model option parameters over from existing codebases as-is:
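A sketch of this pass-through behavior (the key names and mapping table below are hypothetical, not Mellea's actual implementation): engine-agnostic keys are translated to the backend's native names, while any unrecognized keys are forwarded untouched.

```python
# Hypothetical engine-agnostic keys mapped to one backend's native names
AGNOSTIC_TO_OLLAMA = {"@temperature": "temperature", "@max_tokens": "num_predict"}


def to_backend_options(model_options: dict) -> dict:
    """Translate known engine-agnostic keys; pass everything else through as-is."""
    backend_opts = {}
    for key, value in model_options.items():
        backend_opts[AGNOSTIC_TO_OLLAMA.get(key, key)] = value
    return backend_opts


# 'seed' is not an engine-agnostic key, but it still reaches the backend
opts = to_backend_options({"@temperature": 0, "seed": 42})
```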
You can also specify model options on individual `m.*` calls. Options specified here update the previously specified model options for that call only. If you specify an already existing key (either the `ModelOption.OPTION` version or the native name for that option in the given API), the value will be the one associated with the new key. If you specify the same key in different ways (i.e., `ModelOption.TEMPERATURE` and `temperature`), the `ModelOption.OPTION` key takes precedence.

You can also temporarily change `model_options` for a series of calls by pushing a new set of `model_options` and then reverting those changes with a pop.
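Both behaviors can be sketched in plain Python (hypothetical helper names, not Mellea's API): a merge where the engine-agnostic key wins when both spellings are given, and a stack of option sets for temporary overrides.

```python
AGNOSTIC = {"@temperature": "temperature"}  # hypothetical key table


def merge(defaults: dict, overrides: dict) -> dict:
    """Merge per-call overrides over defaults; agnostic spellings win."""
    merged = {**defaults, **overrides}
    # If both the engine-agnostic and native spellings appear,
    # keep the engine-agnostic (ModelOption-style) key.
    for agnostic_key, native_key in AGNOSTIC.items():
        if agnostic_key in merged and native_key in merged:
            del merged[native_key]
    return merged


# Temporary overrides as a stack: push for a series of calls, pop to revert.
stack = [{"@temperature": 0.7}]
stack.append(merge(stack[-1], {"@temperature": 0, "temperature": 1.0}))
active = stack[-1]  # the agnostic key took precedence
stack.pop()         # revert to the previous options
```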