SemanticChunker#
- class langchain_experimental.text_splitter.SemanticChunker(embeddings: Embeddings, buffer_size: int = 1, add_start_index: bool = False, breakpoint_threshold_type: Literal['percentile', 'standard_deviation', 'interquartile', 'gradient'] = 'percentile', breakpoint_threshold_amount: float | None = None, number_of_chunks: int | None = None, sentence_split_regex: str = '(?<=[.?!])\\s+')[source]#
Split the text based on semantic similarity.
Taken from Greg Kamradt's wonderful notebook: FullStackRetrieval-com/RetrievalTutorials
All credits to him.
At a high level, this splits the text into sentences, groups each sentence with its neighbors (windows of 3 sentences by default), and then merges adjacent groups that are similar in the embedding space.
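The idea behind the default 'percentile' strategy can be sketched in plain Python. The sketch below is illustrative only: `cosine_distance`, `percentile`, and `chunk_by_percentile` are toy helpers written for this example, not part of the library, and the grouping-with-neighbors step is omitted for brevity.

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def percentile(values, pct):
    """Linearly interpolated percentile of a list of numbers."""
    s = sorted(values)
    k = (len(s) - 1) * pct / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    if lo == hi:
        return s[int(k)]
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def chunk_by_percentile(sentences, embed, threshold_pct=95.0):
    """Split at sentence gaps whose embedding distance exceeds the
    given percentile of all gap distances ('percentile' strategy)."""
    vectors = [embed(s) for s in sentences]
    distances = [cosine_distance(vectors[i], vectors[i + 1])
                 for i in range(len(vectors) - 1)]
    threshold = percentile(distances, threshold_pct)
    chunks, current = [], [sentences[0]]
    for i, d in enumerate(distances):
        if d > threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sentences[i + 1])
    chunks.append(" ".join(current))
    return chunks
```

With a toy embedding function, sentences about the same topic stay together while a topic shift produces a breakpoint.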
Methods

__init__(embeddings[, buffer_size, ...])
atransform_documents(documents, **kwargs)
    Asynchronously transform a list of documents.
create_documents(texts[, metadatas])
    Create documents from a list of texts.
split_documents(documents)
    Split documents.
split_text(text)
transform_documents(documents, **kwargs)
    Transform sequence of documents by splitting them.
- Parameters:
embeddings (Embeddings) – embedding model used to embed the sentence groups.
buffer_size (int) – number of sentences to combine on each side of a sentence when embedding.
add_start_index (bool) – if True, include each chunk's start index in its metadata.
breakpoint_threshold_type (Literal['percentile', 'standard_deviation', 'interquartile', 'gradient']) – strategy used to pick breakpoints from the distribution of embedding distances.
breakpoint_threshold_amount (float | None) – threshold value for the chosen strategy; a per-strategy default is used when None.
number_of_chunks (int | None) – if set, the threshold is derived so that roughly this many chunks are produced.
sentence_split_regex (str) – regular expression used to split the text into sentences.
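The default `sentence_split_regex` uses a lookbehind so that terminal punctuation stays attached to its sentence; the snippet below simply exercises that pattern with the standard library.

```python
import re

# Default pattern: split on whitespace that follows ., ? or !
SENTENCE_SPLIT_REGEX = r"(?<=[.?!])\s+"

text = "One sentence. Another one! A third? Yes."
sentences = re.split(SENTENCE_SPLIT_REGEX, text)
print(sentences)  # ['One sentence.', 'Another one!', 'A third?', 'Yes.']
```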
- __init__(embeddings: Embeddings, buffer_size: int = 1, add_start_index: bool = False, breakpoint_threshold_type: Literal['percentile', 'standard_deviation', 'interquartile', 'gradient'] = 'percentile', breakpoint_threshold_amount: float | None = None, number_of_chunks: int | None = None, sentence_split_regex: str = '(?<=[.?!])\\s+')[source]#
- Parameters:
embeddings (Embeddings)
buffer_size (int)
add_start_index (bool)
breakpoint_threshold_type (Literal['percentile', 'standard_deviation', 'interquartile', 'gradient'])
breakpoint_threshold_amount (float | None)
number_of_chunks (int | None)
sentence_split_regex (str)
- async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]#
Asynchronously transform a list of documents.
- create_documents(texts: List[str], metadatas: List[dict] | None = None) → List[Document][source]#
Create documents from a list of texts.
- Parameters:
texts (List[str]) – list of texts to split into documents.
metadatas (List[dict] | None) – optional metadata dicts, one per text.
- Return type:
List[Document]