LLMChainFilter#

class langchain.retrievers.document_compressors.chain_filter.LLMChainFilter[source]#

Bases: BaseDocumentCompressor

Filter that drops documents that aren’t relevant to the query.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

param get_input: Callable[[str, Document], dict] = <function default_get_input>#

Callable for constructing the chain input from the query and a Document.

param llm_chain: Runnable [Required]#

LLM wrapper to use for filtering documents. The chain is expected to return a boolean keep/drop decision for each document, e.g. by ending in a BooleanOutputParser.
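
Both fields can also be set explicitly when constructing the filter by hand. A minimal sketch, assuming langchain-openai is installed; the prompt wording and the "question"/"context" keys are illustrative, and the chain ends in a BooleanOutputParser so each document yields a True/False keep decision:

from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.output_parsers.boolean import BooleanOutputParser
from langchain.retrievers.document_compressors import LLMChainFilter

# Hypothetical relevance prompt; its input variables must match the keys
# produced by get_input below.
prompt = PromptTemplate.from_template(
    "Question: {question}\n"
    "Passage: {context}\n"
    "Is the passage relevant to the question? Answer YES or NO:"
)

def get_input(query: str, doc: Document) -> dict:
    # Map the query and the document into the prompt's input variables.
    return {"question": query, "context": doc.page_content}

# The BooleanOutputParser turns the model's YES/NO answer into True/False.
llm_chain = prompt | ChatOpenAI(temperature=0) | BooleanOutputParser()
doc_filter = LLMChainFilter(llm_chain=llm_chain, get_input=get_input)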

classmethod from_llm(
llm: BaseLanguageModel,
prompt: BasePromptTemplate | None = None,
**kwargs: Any,
) β†’ LLMChainFilter[source]#

Create an LLMChainFilter from a language model.

Parameters:
  • llm (BaseLanguageModel) – The language model to use for filtering.

  • prompt (BasePromptTemplate | None) – The prompt to use for the filter.

  • kwargs (Any) – Additional arguments to pass to the constructor.

Returns:

An LLMChainFilter that uses the given language model.

Return type:

LLMChainFilter
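
A minimal sketch of the usual construction path, assuming langchain-openai is installed (any LangChain language model can be substituted; omitting prompt falls back to the default relevance prompt):

from langchain_openai import OpenAI
from langchain.retrievers.document_compressors import LLMChainFilter

# Build the filter from a language model; the default prompt is used here.
llm = OpenAI(temperature=0)
doc_filter = LLMChainFilter.from_llm(llm)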

async acompress_documents(
documents: Sequence[Document],
query: str,
callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None,
) β†’ Sequence[Document][source]#

Filter down documents based on their relevance to the query.

Parameters:
  • documents (Sequence[Document]) – A sequence of documents to filter.

  • query (str) – The query to filter the documents against.

  • callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run during the compression process.

Returns:

A sequence containing only the documents judged relevant to the query.

Return type:

Sequence[Document]
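
A minimal async sketch, assuming langchain-openai is installed; the documents and query are made up for illustration:

import asyncio

from langchain_core.documents import Document
from langchain_openai import OpenAI
from langchain.retrievers.document_compressors import LLMChainFilter

doc_filter = LLMChainFilter.from_llm(OpenAI(temperature=0))
docs = [
    Document(page_content="The Eiffel Tower is in Paris."),
    Document(page_content="Bananas are rich in potassium."),
]

async def main() -> None:
    # Only documents the model judges relevant to the query are returned.
    kept = await doc_filter.acompress_documents(docs, query="Where is the Eiffel Tower?")
    print([doc.page_content for doc in kept])

asyncio.run(main())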

compress_documents(
documents: Sequence[Document],
query: str,
callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None,
) β†’ Sequence[Document][source]#

Filter down documents based on their relevance to the query.

Parameters:
  • documents (Sequence[Document]) – A sequence of documents to filter.

  • query (str) – The query to filter the documents against.

  • callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run during the compression process.

Returns:

A sequence containing only the documents judged relevant to the query.

Return type:

Sequence[Document]
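
The synchronous counterpart, under the same assumptions (langchain-openai installed, illustrative documents and query):

from langchain_core.documents import Document
from langchain_openai import OpenAI
from langchain.retrievers.document_compressors import LLMChainFilter

doc_filter = LLMChainFilter.from_llm(OpenAI(temperature=0))
docs = [
    Document(page_content="The Eiffel Tower is in Paris."),
    Document(page_content="Bananas are rich in potassium."),
]
# Only documents the model judges relevant to the query are returned.
kept = doc_filter.compress_documents(docs, query="Where is the Eiffel Tower?")
print([doc.page_content for doc in kept])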

Examples using LLMChainFilter
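
A common end-to-end pattern is to pair the filter with ContextualCompressionRetriever so that documents returned by a base retriever are filtered before use. The sketch below assumes langchain-openai, langchain-community, and faiss-cpu are installed; the corpus is made up for illustration:

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainFilter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAI, OpenAIEmbeddings

# Hypothetical corpus and base retriever for illustration.
retriever = FAISS.from_texts(
    ["The Eiffel Tower is in Paris.", "Bananas are rich in potassium."],
    OpenAIEmbeddings(),
).as_retriever()

# Retrieved documents are passed through the LLM filter before being returned.
compression_retriever = ContextualCompressionRetriever(
    base_compressor=LLMChainFilter.from_llm(OpenAI(temperature=0)),
    base_retriever=retriever,
)
relevant_docs = compression_retriever.invoke("Where is the Eiffel Tower?")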