ModelLaboratory
- class langchain.model_laboratory.ModelLaboratory(chains: Sequence[Chain], names: List[str] | None = None)[source]
A utility to experiment with and compare the performance of different models.
Initialize the ModelLaboratory with chains to experiment with.
- Parameters:
chains (Sequence[Chain]) – A sequence of chains to experiment with. Each chain must have exactly one input and one output variable.
names (Optional[List[str]]) – Optional list of names corresponding to each chain. If provided, its length must match the number of chains.
- Raises:
ValueError – If any chain is not an instance of Chain.
ValueError – If a chain does not have exactly one input variable.
ValueError – If a chain does not have exactly one output variable.
ValueError – If the length of names does not match the number of chains.
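Example (a minimal sketch, not part of the API reference: the FakeListLLM stand-ins, prompt text, and chain names are illustrative assumptions; any chains with exactly one input and one output variable work):

```python
from langchain.chains import LLMChain
from langchain.model_laboratory import ModelLaboratory
from langchain_community.llms import FakeListLLM
from langchain_core.prompts import PromptTemplate

# One shared single-variable prompt, so every chain has exactly one
# input variable ("question") and one output variable.
prompt = PromptTemplate.from_template("Answer in one sentence: {question}")

# FakeListLLM is a stand-in so the sketch runs without API keys;
# in practice each chain would wrap a real model.
chains = [
    LLMChain(llm=FakeListLLM(responses=["Answer from model A."]), prompt=prompt),
    LLMChain(llm=FakeListLLM(responses=["Answer from model B."]), prompt=prompt),
]

lab = ModelLaboratory(chains, names=["model-a", "model-b"])
```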
Methods

__init__(chains[, names]) – Initialize the ModelLaboratory with chains to experiment with.
compare(text) – Compare model outputs on an input text.
from_llms(llms[, prompt]) – Initialize the ModelLaboratory with LLMs and an optional prompt.
- __init__(chains: Sequence[Chain], names: List[str] | None = None)[source]
Initialize the ModelLaboratory with chains to experiment with.
- Parameters:
chains (Sequence[Chain]) – A sequence of chains to experiment with. Each chain must have exactly one input and one output variable.
names (Optional[List[str]]) – Optional list of names corresponding to each chain. If provided, its length must match the number of chains.
- Raises:
ValueError – If any chain is not an instance of Chain.
ValueError – If a chain does not have exactly one input variable.
ValueError – If a chain does not have exactly one output variable.
ValueError – If the length of names does not match the number of chains.
- compare(text: str) → None [source]
Compare model outputs on an input text.
If a prompt was provided when starting the laboratory, this text will be fed into that prompt. If no prompt was provided, the input text is used as the entire prompt.
- Parameters:
text (str) – Input text to run all models on.
- Return type:
None
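Usage sketch (assuming a lab built as in the example above; the input text is illustrative):

```python
# compare() feeds the same text to every chain and prints each model's
# output; it returns None. If the laboratory was built without a prompt,
# the text itself is used as the entire prompt.
lab.compare("What is a good name for a company that makes colorful socks?")
```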
- classmethod from_llms(llms: List[BaseLLM], prompt: PromptTemplate | None = None) → ModelLaboratory [source]
Initialize the ModelLaboratory with LLMs and an optional prompt.
- Parameters:
llms (List[BaseLLM]) – A list of LLMs to experiment with.
prompt (Optional[PromptTemplate]) – An optional prompt to use with the LLMs. If provided, the prompt must contain exactly one input variable.
- Returns:
An instance of ModelLaboratory initialized with LLMs.
- Return type:
ModelLaboratory
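A minimal sketch of from_llms (the FakeListLLM placeholders and the prompt text are assumptions; in practice the list would contain real BaseLLM instances):

```python
from langchain.model_laboratory import ModelLaboratory
from langchain_community.llms import FakeListLLM
from langchain_core.prompts import PromptTemplate

# Placeholder models; swap in real BaseLLM instances in practice.
llms = [
    FakeListLLM(responses=["Socktastic!"]),
    FakeListLLM(responses=["Rainbow Threads."]),
]

# Optional prompt with exactly one input variable, as required.
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)

lab = ModelLaboratory.from_llms(llms, prompt=prompt)
lab.compare("colorful socks")
```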
Examples using ModelLaboratory