LLMManagerMixin#

class langchain_core.callbacks.base.LLMManagerMixin[source]#

Mixin for LLM callbacks. Concrete handlers (for example BaseCallbackHandler, which inherits this mixin) override these hooks to react to LLM lifecycle events.

Methods

on_llm_end(response, *, run_id[, parent_run_id])

Run when LLM ends running.

on_llm_error(error, *, run_id[, parent_run_id])

Run when LLM errors.

on_llm_new_token(token, *[, chunk, ...])

Run on new LLM token.
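In practice these hooks are implemented by subclassing BaseCallbackHandler (which inherits this mixin) rather than by using LLMManagerMixin directly. A minimal sketch, assuming a recent langchain_core; the handler name is illustrative:

```python
from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class PrintingHandler(BaseCallbackHandler):
    """Illustrative handler that prints when an LLM run finishes."""

    def on_llm_end(
        self,
        response: LLMResult,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        **kwargs: Any,
    ) -> Any:
        print(f"LLM run {run_id} finished")


# Handlers are passed at invocation time via the `callbacks` config key, e.g.:
#   some_model.invoke("Hello!", config={"callbacks": [PrintingHandler()]})
```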

on_llm_end(
    response: LLMResult,
    *,
    run_id: UUID,
    parent_run_id: UUID | None = None,
    **kwargs: Any,
) → Any[source]#

Run when LLM ends running.

Parameters:
  • response (LLMResult) – The generated response.

  • run_id (UUID) – The ID of the current run.

  • parent_run_id (UUID | None) – The ID of the parent run, if any.

  • kwargs (Any) – Additional keyword arguments.

Return type:

Any
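A sketch of an on_llm_end override that inspects the generated text. LLMResult.generations is a list of lists of Generation objects, one inner list per input prompt; the handler name is an assumption:

```python
from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class EndLoggingHandler(BaseCallbackHandler):
    def on_llm_end(
        self,
        response: LLMResult,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        **kwargs: Any,
    ) -> Any:
        # One inner list of Generation objects per input prompt.
        for prompt_generations in response.generations:
            for generation in prompt_generations:
                print(f"[{run_id}] text: {generation.text!r}")
```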

on_llm_error(
    error: BaseException,
    *,
    run_id: UUID,
    parent_run_id: UUID | None = None,
    **kwargs: Any,
) → Any[source]#

Run when LLM errors.

Parameters:
  • error (BaseException) – The error that occurred.

  • run_id (UUID) – The ID of the current run.

  • parent_run_id (UUID | None) – The ID of the parent run, if any.

  • kwargs (Any) – Additional keyword arguments.

Return type:

Any
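A sketch of an on_llm_error override; the logging setup and handler name are assumptions, but the hook signature matches the one above:

```python
import logging
from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler

logger = logging.getLogger(__name__)


class ErrorLoggingHandler(BaseCallbackHandler):
    def on_llm_error(
        self,
        error: BaseException,
        *,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        **kwargs: Any,
    ) -> Any:
        # This hook is for observability; the exception still
        # propagates to the caller after the callback runs.
        logger.error("LLM run %s failed: %r", run_id, error)
```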

on_llm_new_token(
    token: str,
    *,
    chunk: GenerationChunk | ChatGenerationChunk | None = None,
    run_id: UUID,
    parent_run_id: UUID | None = None,
    **kwargs: Any,
) → Any[source]#

Run on new LLM token. Only available when streaming is enabled.

Parameters:
  • token (str) – The new token.

  • chunk (GenerationChunk | ChatGenerationChunk | None) – The new generated chunk, containing content and other information.

  • run_id (UUID) – The ID of the current run.

  • parent_run_id (UUID | None) – The ID of the parent run, if any.

  • kwargs (Any) – Additional keyword arguments.

Return type:

Any
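A sketch of a streaming token printer; the handler name is an assumption. Since chunk may be None depending on the model integration, the plain token string is the safe field to rely on:

```python
from typing import Any
from uuid import UUID

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import ChatGenerationChunk, GenerationChunk


class StreamPrintingHandler(BaseCallbackHandler):
    def on_llm_new_token(
        self,
        token: str,
        *,
        chunk: GenerationChunk | ChatGenerationChunk | None = None,
        run_id: UUID,
        parent_run_id: UUID | None = None,
        **kwargs: Any,
    ) -> Any:
        # Fires once per token, and only when the model is streaming
        # (e.g. streaming=True on the model, or the .stream() API).
        print(token, end="", flush=True)
```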