LLMChainFilter#
- class langchain.retrievers.document_compressors.chain_filter.LLMChainFilter[source]#
Bases:
BaseDocumentCompressor
Filter that drops documents that aren't relevant to the query.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- param get_input: Callable[[str, Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
- param llm_chain: Runnable [Required]#
LLM wrapper to use for filtering documents. The chain prompt is expected to have a BooleanOutputParser.
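Below is a minimal sketch of constructing the filter directly by composing llm_chain yourself. The model choice (ChatOpenAI) and the prompt wording are assumptions for illustration; the prompt variables match what default_get_input supplies (question and context), and the chain ends in a BooleanOutputParser as the class expects.
```python
# Sketch: hand-building an LLMChainFilter. Model and prompt text are illustrative assumptions.
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.output_parsers.boolean import BooleanOutputParser
from langchain.retrievers.document_compressors.chain_filter import LLMChainFilter

prompt = PromptTemplate.from_template(
    "Given the question below, answer YES if the context is relevant "
    "to the question and NO if it is not.\n\n"
    "Question: {question}\nContext: {context}\nRelevant (YES / NO):"
)

# The chain's final step must yield a boolean, hence BooleanOutputParser.
llm_chain = prompt | ChatOpenAI(temperature=0) | BooleanOutputParser()

filter_ = LLMChainFilter(llm_chain=llm_chain)
```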
- async acompress_documents(documents: Sequence[Document], query: str, callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document] [source]#
Filter down documents based on their relevance to the query.
- Parameters:
documents (Sequence[Document]) – The documents to filter.
query (str) – The query to judge relevance against.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run during the compression process.
- Return type:
Sequence[Document]
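A hedged async usage sketch, reusing the filter_ instance from the snippet above; the documents and query are placeholder inputs.
```python
import asyncio

from langchain_core.documents import Document

docs = [
    Document(page_content="LangChain provides document compressors."),
    Document(page_content="Bananas are rich in potassium."),
]

async def main() -> None:
    # Only documents the LLM judges relevant to the query are returned.
    kept = await filter_.acompress_documents(docs, query="What is a document compressor?")
    for doc in kept:
        print(doc.page_content)

asyncio.run(main())
```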
- compress_documents(documents: Sequence[Document], query: str, callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document] [source]#
Filter down documents based on their relevance to the query.
- Parameters:
documents (Sequence[Document]) – The documents to filter.
query (str) – The query to judge relevance against.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run during the compression process.
- Return type:
Sequence[Document]
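The synchronous counterpart, again a sketch assuming the filter_ and docs from the earlier snippets.
```python
kept = filter_.compress_documents(docs, query="What is a document compressor?")
print(len(kept))  # Expect only the relevant document(s) to remain.
```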
- classmethod from_llm(llm: BaseLanguageModel, prompt: BasePromptTemplate | None = None, **kwargs: Any) → LLMChainFilter [source]#
Create an LLMChainFilter from a language model.
- Parameters:
llm (BaseLanguageModel) – The language model to use for filtering.
prompt (BasePromptTemplate | None) – The prompt to use for the filter.
kwargs (Any) – Additional arguments to pass to the constructor.
- Returns:
A LLMChainFilter that uses the given language model.
- Return type:
LLMChainFilter
Examples using LLMChainFilter
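A sketch of the common pattern from the contextual-compression guides: build the filter with from_llm and wrap an existing retriever in a ContextualCompressionRetriever. The base retriever here (a FAISS vector store with OpenAIEmbeddings) is an illustrative assumption; any retriever works.
```python
# Assumed setup: FAISS + OpenAIEmbeddings is just one illustrative choice of base retriever.
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainFilter

vectorstore = FAISS.from_texts(
    ["LangChain supports contextual compression.", "Bananas are yellow."],
    embedding=OpenAIEmbeddings(),
)
base_retriever = vectorstore.as_retriever()

# from_llm builds the default relevance prompt and boolean parser internally.
llm_filter = LLMChainFilter.from_llm(ChatOpenAI(temperature=0))

retriever = ContextualCompressionRetriever(
    base_compressor=llm_filter,
    base_retriever=base_retriever,
)

print(retriever.invoke("What does LangChain support?"))
```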