LLMListwiseRerank#
- class langchain.retrievers.document_compressors.listwise_rerank.LLMListwiseRerank[source]#
Bases:
BaseDocumentCompressor
Document compressor that uses Zero-Shot Listwise Document Reranking.
Adapted from: https://arxiv.org/pdf/2305.02156.pdf
LLMListwiseRerank uses a language model to rerank a list of documents based on their relevance to a query.
NOTE: requires that the underlying model implement with_structured_output.
- Example usage:
```python
from langchain.retrievers.document_compressors.listwise_rerank import (
    LLMListwiseRerank,
)
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI

documents = [
    Document("Sally is my friend from school"),
    Document("Steve is my friend from home"),
    Document("I didn't always like yogurt"),
    Document("I wonder why it's called football"),
    Document("Where's waldo"),
]

reranker = LLMListwiseRerank.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo"), top_n=3
)
compressed_docs = reranker.compress_documents(documents, "Who is steve")

assert len(compressed_docs) == 3
assert "Steve" in compressed_docs[0].page_content
```
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- param reranker: Runnable[Dict, List[Document]] [Required]#
LLM-based reranker to use for filtering documents. Expected to take in a dict with "documents: Sequence[Document]" and "query: str" keys and output a List[Document].
- param top_n: int = 3#
Number of documents to return.
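To illustrate the contract the reranker field expects — a callable taking a dict with "documents" and "query" keys and returning a reordered List[Document] — here is a minimal plain-Python sketch. It uses a hypothetical stand-in Document class and a naive term-overlap score instead of langchain_core classes and an LLM, so it only demonstrates the input/output shape, not the actual listwise-reranking behavior:

```python
from dataclasses import dataclass
from typing import Dict, List, Sequence


@dataclass
class Document:
    # Minimal stand-in for langchain_core.documents.Document
    page_content: str


def toy_reranker(inputs: Dict) -> List[Document]:
    """Same shape as the expected reranker: {"documents", "query"} in,
    reordered List[Document] out. Scoring here is a naive placeholder."""
    docs: Sequence[Document] = inputs["documents"]
    query_terms = set(inputs["query"].lower().split())

    def score(doc: Document) -> int:
        # Count how many query terms appear in the document
        return len(query_terms & set(doc.page_content.lower().split()))

    return sorted(docs, key=score, reverse=True)


docs = [Document("I like yogurt"), Document("Steve is my friend")]
top = toy_reranker({"documents": docs, "query": "who is steve"})
print(top[0].page_content)  # most relevant document first
```

In the real class, this slot is filled by an LLM chain built via from_llm, which prompts the model to order the documents by relevance.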
- async acompress_documents(documents: Sequence[Document], query: str, callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document]#
Async compress retrieved documents given the query context.
- Parameters:
documents (Sequence[Document]) – The retrieved documents.
query (str) – The query context.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to run during compression.
- Returns:
The compressed documents.
- Return type:
Sequence[Document]
- compress_documents(documents: Sequence[Document], query: str, callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None) → Sequence[Document][source]#
Filter down documents based on their relevance to the query.
- Parameters:
documents (Sequence[Document]) – The retrieved documents.
query (str) – The query context.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Optional callbacks to run during compression.
- Returns:
The compressed documents.
- Return type:
Sequence[Document]
- classmethod from_llm(llm: BaseLanguageModel, *, prompt: BasePromptTemplate | None = None, **kwargs: Any) → LLMListwiseRerank[source]#
Create an LLMListwiseRerank document compressor from a language model.
- Parameters:
llm (BaseLanguageModel) – The language model to use for filtering. Must implement BaseLanguageModel.with_structured_output().
prompt (BasePromptTemplate | None) – The prompt to use for the filter.
kwargs (Any) – Additional arguments to pass to the constructor.
- Returns:
A LLMListwiseRerank document compressor that uses the given language model.
- Return type:
LLMListwiseRerank
Examples using LLMListwiseRerank#