SemanticChunker
class langchain_experimental.text_splitter.SemanticChunker(
    embeddings: Embeddings,
    buffer_size: int = 1,
    add_start_index: bool = False,
    breakpoint_threshold_type: Literal['percentile', 'standard_deviation', 'interquartile', 'gradient'] = 'percentile',
    breakpoint_threshold_amount: float | None = None,
    number_of_chunks: int | None = None,
    sentence_split_regex: str = '(?<=[.?!])\\s+',
    min_chunk_size: int | None = None,
)
Split the text based on semantic similarity.

Taken from Greg Kamradt's wonderful notebook: FullStackRetrieval-com/RetrievalTutorials. All credits to him.

At a high level, this splits the text into sentences, groups each sentence with its neighbors into windows of three sentences, and then merges adjacent groups that are similar in the embedding space.
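For illustration, a minimal usage sketch (assumptions: the langchain-experimental and langchain-openai packages are installed and OPENAI_API_KEY is set; any Embeddings implementation can replace OpenAIEmbeddings):

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings

chunker = SemanticChunker(OpenAIEmbeddings())

text = (
    "Dogs are loyal companions. They enjoy long walks. "
    "Quantum computers use qubits. Superposition enables parallelism."
)

# split_text returns a list of strings; with a real embedding model the
# break should land roughly where the topic shifts from dogs to qubits.
for chunk in chunker.split_text(text):
    print(repr(chunk))
```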
Methods

- __init__(embeddings[, buffer_size, ...]): Initialize the chunker.
- atransform_documents(documents, **kwargs): Asynchronously transform a list of documents.
- create_documents(texts[, metadatas]): Create documents from a list of texts.
- split_documents(documents): Split documents.
- split_text(text): Split text into semantically coherent chunks.
- transform_documents(documents, **kwargs): Transform sequence of documents by splitting them.
- Parameters:
embeddings (Embeddings): Embedding model used to embed each sentence group.
buffer_size (int): Number of sentences to include on each side of a sentence when forming groups (default 1, i.e. windows of three).
add_start_index (bool): If True, record each chunk's starting character offset in its metadata as start_index.
breakpoint_threshold_type (Literal['percentile', 'standard_deviation', 'interquartile', 'gradient']): Statistic used to decide which embedding-distance spikes count as breakpoints.
breakpoint_threshold_amount (float | None): Threshold value for the chosen statistic; if None, a per-type default is used (95 for 'percentile', 3 for 'standard_deviation', 1.5 for 'interquartile', 95 for 'gradient').
number_of_chunks (int | None): If set, a threshold is derived so the split yields approximately this many chunks.
sentence_split_regex (str): Regex used to split the text into sentences.
min_chunk_size (int | None): Minimum chunk length in characters; candidate breakpoints that would produce a shorter chunk are skipped.
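As a hedged sketch of how the breakpoint parameters interact (the threshold values below are illustrative, not recommendations; FakeEmbeddings returns random vectors and is used only so the snippet runs without credentials):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_experimental.text_splitter import SemanticChunker

emb = FakeEmbeddings(size=256)

# Break where the inter-group distance exceeds the 90th percentile.
by_percentile = SemanticChunker(
    emb, breakpoint_threshold_type="percentile", breakpoint_threshold_amount=90.0
)

# Break where the distance is more than 3 standard deviations above the mean.
by_stddev = SemanticChunker(
    emb, breakpoint_threshold_type="standard_deviation", breakpoint_threshold_amount=3.0
)

# Let the splitter derive a threshold that yields roughly 10 chunks.
by_count = SemanticChunker(emb, number_of_chunks=10)
```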
__init__(
    embeddings: Embeddings,
    buffer_size: int = 1,
    add_start_index: bool = False,
    breakpoint_threshold_type: Literal['percentile', 'standard_deviation', 'interquartile', 'gradient'] = 'percentile',
    breakpoint_threshold_amount: float | None = None,
    number_of_chunks: int | None = None,
    sentence_split_regex: str = '(?<=[.?!])\\s+',
    min_chunk_size: int | None = None,
)
- Parameters: see the class parameters above.
async atransform_documents(
    documents: Sequence[Document],
    **kwargs: Any,
)
Asynchronously transform a list of documents.
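A minimal async sketch, again with FakeEmbeddings as a stand-in for a real embedding model:

```python
import asyncio

from langchain_community.embeddings import FakeEmbeddings
from langchain_core.documents import Document
from langchain_experimental.text_splitter import SemanticChunker

async def main() -> None:
    chunker = SemanticChunker(FakeEmbeddings(size=256))
    docs = [Document(page_content="First topic here. Still the first topic. A new topic starts.")]
    # Returns a new sequence of Documents, one per semantic chunk.
    for doc in await chunker.atransform_documents(docs):
        print(doc.page_content)

asyncio.run(main())
```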
create_documents(
    texts: List[str],
    metadatas: List[dict] | None = None,
)
Create documents from a list of texts.
- Parameters:
texts (List[str]): The texts to split.
metadatas (List[dict] | None): Optional metadata dicts, one per input text, copied onto each chunk produced from that text.
- Return type:
List[Document]
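A minimal sketch of create_documents with per-text metadata (FakeEmbeddings as a stand-in, add_start_index enabled to show the start_index metadata field):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_experimental.text_splitter import SemanticChunker

chunker = SemanticChunker(FakeEmbeddings(size=256), add_start_index=True)
docs = chunker.create_documents(
    texts=["Alpha sentence. Beta sentence. Gamma sentence."],
    metadatas=[{"source": "example.txt"}],  # one dict per input text
)
for doc in docs:
    # Each chunk carries its text's metadata plus its start_index offset.
    print(doc.metadata, doc.page_content)
```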