ChatVertexAI#
- class langchain_google_vertexai.chat_models.ChatVertexAI[source]#
Bases:
_VertexAICommon
,BaseChatModel
Google Cloud Vertex AI chat model integration.
- Setup:
- You must either:
Have credentials configured for your environment (gcloud, workload identity, etc.)
Store the path to a service account JSON file as the
GOOGLE_APPLICATION_CREDENTIALS
environment variable
This codebase uses the
google.auth
library which first looks for the application credentials variable mentioned above, and then looks for system-level auth. More information:
google.auth
API reference
- Key init args - completion params:
- model: str
Name of ChatVertexAI model to use, e.g. 'gemini-2.0-flash-001', 'gemini-2.5-pro', etc.
- temperature: Optional[float]
Sampling temperature.
- seed: Optional[int]
Random seed to use for sampling.
- max_tokens: Optional[int]
Max number of tokens to generate.
- stop: Optional[List[str]]
Default stop sequences.
- safety_settings: Optional[Dict[vertexai.generative_models.HarmCategory, vertexai.generative_models.HarmBlockThreshold]]
The default safety settings to use for all generations.
- Key init args - client params:
- max_retries: int
Max number of retries.
- wait_exponential_kwargs: Optional[dict[str, float]]
Optional dictionary with parameters for wait_exponential:
- multiplier: Initial wait time multiplier (default: 1.0)
- min: Minimum wait time in seconds (default: 4.0)
- max: Maximum wait time in seconds (default: 10.0)
- exp_base: Exponent base to use (default: 2.0)
- credentials: Optional[google.auth.credentials.Credentials]
The default custom credentials to use when making API calls. If not provided, credentials will be ascertained from the environment.
- project: Optional[str]
The default GCP project to use when making Vertex API calls.
- location: str = 'us-central1'
The default location to use when making API calls.
- request_parallelism: int = 5
The amount of parallelism allowed for requests issued to VertexAI models. (default: 5)
- base_url: Optional[str]
Base URL for API requests.
See full list of supported init args and their descriptions in the params section.
- Instantiate:
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(
    model="gemini-1.5-flash-001",
    temperature=0,
    max_tokens=None,
    max_retries=6,
    stop=None,
    # other params...
)
- Invoke:
messages = [
    ("system", "You are a helpful translator. Translate the user sentence to French."),
    ("human", "I love programming."),
]
llm.invoke(messages)
AIMessage(content="J'adore programmer. ", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}, id='run-925ce305-2268-44c4-875f-dde9128520ad-0')
- Stream:
for chunk in llm.stream(messages): print(chunk)
AIMessageChunk(content='J', response_metadata={'is_blocked': False, 'safety_ratings': [], 'citation_metadata': None}, id='run-9df01d73-84d9-42db-9d6b-b1466a019e89') AIMessageChunk(content="'adore programmer. ", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}], 'citation_metadata': None}, id='run-9df01d73-84d9-42db-9d6b-b1466a019e89') AIMessageChunk(content='', response_metadata={'is_blocked': False, 'safety_ratings': [], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}, id='run-9df01d73-84d9-42db-9d6b-b1466a019e89')
stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
full
AIMessageChunk(content="J'adore programmer. ", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}, id='run-b7f7492c-4cb5-42d0-8fc3-dce9b293b0fb')
- Async:
await llm.ainvoke(messages)

# stream:
# async for chunk in llm.astream(messages)

# batch:
# await llm.abatch([messages])
AIMessage(content="J'adore programmer. ", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}, id='run-925ce305-2268-44c4-875f-dde9128520ad-0')
- Context Caching:
Context caching allows you to store and reuse content (e.g., PDFs, images) for faster processing. The
cached_content
parameter accepts a cache name created via the Google Generative AI API with Vertex AI. Below is an example that caches content from GCS and queries it.
from google import genai
from google.genai.types import Content, CreateCachedContentConfig, HttpOptions, Part
from langchain_google_vertexai import ChatVertexAI
from langchain_core.messages import HumanMessage

client = genai.Client(http_options=HttpOptions(api_version="v1beta1"))

contents = [
    Content(
        role="user",
        parts=[
            Part.from_uri(
                file_uri="gs://your-bucket/file1",
                mime_type="application/pdf",
            ),
            Part.from_uri(
                file_uri="gs://your-bucket/file2",
                mime_type="image/jpeg",
            ),
        ],
    )
]

cache = client.caches.create(
    model="gemini-1.5-flash-001",
    config=CreateCachedContentConfig(
        contents=contents,
        system_instruction="You are an expert content analyzer.",
        display_name="content-cache",
        ttl="300s",
    ),
)

llm = ChatVertexAI(
    model_name="gemini-1.5-flash-001",
    cached_content=cache.name,
)
message = HumanMessage(content="Provide a summary of the key information across the content.")
llm.invoke([message])
- Tool calling:
from pydantic import BaseModel, Field

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': '2a2401fa-40db-470d-83ce-4e52de910d9e'}, {'name': 'GetWeather', 'args': {'location': 'New York City, NY'}, 'id': '96761deb-ab7f-4ef9-b4b4-6d44562fc46e'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': '9147d532-abee-43a2-adb5-12f164300484'}, {'name': 'GetPopulation', 'args': {'location': 'New York City, NY'}, 'id': 'c43374ea-bde5-49ca-8487-5b83ebeea1e6'}]
See
ChatVertexAI.bind_tools()
method for more.- Built-in search:
from google.cloud.aiplatform_v1beta1.types import Tool as VertexTool
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model="gemini-2.0-flash-exp")
resp = llm.invoke(
    "When is the next total solar eclipse in US?",
    tools=[VertexTool(google_search={})],
)
- Built-in code execution:
from google.cloud.aiplatform_v1beta1.types import Tool as VertexTool
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model="gemini-2.0-flash-exp")
resp = llm.invoke(
    "What is 3^3?",
    tools=[VertexTool(code_execution={})],
)
- Structured output:
from typing import Optional

from pydantic import BaseModel, Field

class Joke(BaseModel):
    '''Joke to tell user.'''

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(default=None, description="How funny the joke is, from 1 to 10")

structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
Joke(setup='What do you call a cat that loves to bowl?', punchline='An alley cat!', rating=None)
See
ChatVertexAI.with_structured_output()
for more.- Image input:
import base64

import httpx
from langchain_core.messages import HumanMessage

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the weather in this image"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
        },
    ],
)
ai_msg = llm.invoke([message])
ai_msg.content
'The weather in this image appears to be sunny and pleasant. The sky is a bright blue with scattered white clouds, suggesting a clear and mild day. The lush green grass indicates recent rainfall or sufficient moisture. The absence of strong shadows suggests that the sun is high in the sky, possibly late afternoon. Overall, the image conveys a sense of tranquility and warmth, characteristic of a beautiful summer day.'
You can also point to GCS files, which is faster and more efficient because the raw bytes do not need to be transferred back and forth.
llm.invoke( [ HumanMessage( [ "What's in the image?", { "type": "media", "file_uri": "gs://cloud-samples-data/generative-ai/image/scones.jpg", "mime_type": "image/jpeg", }, ] ) ] ).content
'The image is of five blueberry scones arranged on a piece of baking paper. Here is a list of what is in the picture:* **Five blueberry scones:** They are scattered across the parchment paper, dusted with powdered sugar. * **Two cups of coffee:** Two white cups with saucers. One appears full, the other partially drunk. * **A bowl of blueberries:** A brown bowl is filled with fresh blueberries, placed near the scones.* **A spoon:** A silver spoon with the words "Let's Jam" rests on the paper.* **Pink peonies:** Several pink peonies lie beside the scones, adding a touch of color.* **Baking paper:** The scones, cups, bowl, and spoon are arranged on a piece of white baking paper, splattered with purple. The paper is crinkled and sits on a dark surface. The image has a rustic and delicious feel, suggesting a cozy and enjoyable breakfast or brunch setting.' # codespell:ignore brunch
- PDF input:
import base64

from langchain_core.messages import HumanMessage

pdf_bytes = open("/path/to/your/test.pdf", 'rb').read()
pdf_base64 = base64.b64encode(pdf_bytes).decode('utf-8')
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe the document in a sentence"},
        {
            "type": "file",
            "source_type": "base64",
            "mime_type": "application/pdf",
            "data": pdf_base64,
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
'This research paper describes a system developed for SemEval-2025 Task 9, which aims to automate the detection of food hazards from recall reports, addressing the class imbalance problem by leveraging LLM-based data augmentation techniques and transformer-based models to improve performance.'
You can also point to GCS files.
llm.invoke( [ HumanMessage( [ "describe the document in a sentence", { "type": "media", "file_uri": "gs://cloud-samples-data/generative-ai/pdf/1706.03762v7.pdf", "mime_type": "application/pdf", }, ] ) ] ).content
'The article introduces Transformer, a new model architecture for sequence transduction based solely on attention mechanisms, outperforming previous models in machine translation tasks and demonstrating good generalization to English constituency parsing.'
- Video input:
import base64

from langchain_core.messages import HumanMessage

video_bytes = open("/path/to/your/video.mp4", 'rb').read()
video_base64 = base64.b64encode(video_bytes).decode('utf-8')
message = HumanMessage(
    content=[
        {"type": "text", "text": "describe what's in this video in a sentence"},
        {
            "type": "file",
            "source_type": "base64",
            "mime_type": "video/mp4",
            "data": video_base64,
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
'Tom and Jerry, along with a turkey, engage in a chaotic Thanksgiving-themed adventure involving a corn-on-the-cob chase, maze antics, and a disastrous attempt to prepare a turkey dinner.'
You can also pass YouTube URLs directly:
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "summarize the video in 3 sentences."},
        {
            "type": "media",
            "file_uri": "https://www.youtube.com/watch?v=9hE5-98ZeCg",
            "mime_type": "video/mp4",
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
'The video is a demo of multimodal live streaming in Gemini 2.0. The narrator is sharing his screen in AI Studio and asks if the AI can see it. The AI then reads text that is highlighted on the screen, defines the word "multimodal," and summarizes everything that was seen and heard.'
You can also point to GCS files.
llm = ChatVertexAI(model="gemini-1.0-pro-vision")

llm.invoke(
    [
        HumanMessage(
            [
                "What's in the video?",
                {
                    "type": "media",
                    "file_uri": "gs://cloud-samples-data/video/animals.mp4",
                    "mime_type": "video/mp4",
                },
            ]
        )
    ]
).content
'The video is about a new feature in Google Photos called "Zoomable Selfies". The feature allows users to take selfies with animals at the zoo. The video shows several examples of people taking selfies with animals, including a tiger, an elephant, and a sea otter. The video also shows how the feature works. Users simply need to open the Google Photos app and select the "Zoomable Selfies" option. Then, they need to choose an animal from the list of available animals. The app will then guide the user through the process of taking the selfie.'
- Audio input:
import base64

from langchain_core.messages import HumanMessage

audio_bytes = open("/path/to/your/audio.mp3", 'rb').read()
audio_base64 = base64.b64encode(audio_bytes).decode('utf-8')
message = HumanMessage(
    content=[
        {"type": "text", "text": "summarize this audio in a sentence"},
        {
            "type": "file",
            "source_type": "base64",
            "mime_type": "audio/mp3",
            "data": audio_base64,
        },
    ]
)
ai_msg = llm.invoke([message])
ai_msg.content
"In this episode of the Made by Google podcast, Stephen Johnson and Simon Tokumine discuss NotebookLM, a tool designed to help users understand complex material in various modalities, with a focus on its unexpected uses, the development of audio overviews, and the implementation of new features like mind maps and source discovery."
You can also point to GCS files.
from langchain_core.messages import HumanMessage

llm = ChatVertexAI(model="gemini-1.5-flash-001")

llm.invoke(
    [
        HumanMessage(
            [
                "What's this audio about?",
                {
                    "type": "media",
                    "file_uri": "gs://cloud-samples-data/generative-ai/audio/pixel.mp3",
                    "mime_type": "audio/mpeg",
                },
            ]
        )
    ]
).content
"This audio is an interview with two product managers from Google who work on Pixel feature drops. They discuss how feature drops are important for showcasing how Google devices are constantly improving and getting better. They also discuss some of the highlights of the January feature drop and the new features coming in the March drop for Pixel phones and Pixel watches. The interview concludes with discussion of how user feedback is extremely important to them in deciding which features to include in the feature drops. "
- Token usage:
ai_msg = llm.invoke(messages)
ai_msg.usage_metadata
{'input_tokens': 17, 'output_tokens': 7, 'total_tokens': 24}
- Logprobs:
llm = ChatVertexAI(model="gemini-1.5-flash-001", logprobs=True)
ai_msg = llm.invoke(messages)
ai_msg.response_metadata["logprobs_result"]
[ {'token': 'J', 'logprob': -1.549651415189146e-06, 'top_logprobs': []}, {'token': "'", 'logprob': -1.549651415189146e-06, 'top_logprobs': []}, {'token': 'adore', 'logprob': 0.0, 'top_logprobs': []}, {'token': ' programmer', 'logprob': -1.1922384146600962e-07, 'top_logprobs': []}, {'token': '.', 'logprob': -4.827636439586058e-05, 'top_logprobs': []}, {'token': ' ', 'logprob': -0.018011733889579773, 'top_logprobs': []}, {'token': '\n', 'logprob': -0.0008687592926435173, 'top_logprobs': []} ]
- Response metadata
ai_msg = llm.invoke(messages)
ai_msg.response_metadata
{'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}], 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}
- Safety settings
from langchain_google_vertexai import HarmBlockThreshold, HarmCategory

llm = ChatVertexAI(
    model="gemini-1.5-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)
llm.invoke(messages).response_metadata
{'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'probability_score': 0.1, 'blocked': False, 'severity': 'HARM_SEVERITY_NEGLIGIBLE', 'severity_score': 0.1}], 'usage_metadata': {'prompt_token_count': 17, 'candidates_token_count': 7, 'total_token_count': 24}}
Note
ChatVertexAI implements the standard
Runnable Interface
. The
Runnable Interface
has additional methods that are available on runnables, such as with_config, with_types, with_retry, assign, bind, get_graph
, and more.- param additional_headers: Dict[str, str] | None = None#
A key-value dictionary representing additional headers for the model call
- param api_endpoint: str | None = None (alias 'base_url')#
Desired API endpoint, e.g., us-central1-aiplatform.googleapis.com
- param api_transport: str | None = None#
The desired API transport method; can be either 'grpc' or 'rest'. Uses the default parameter in vertexai.init if defined.
- param audio_timestamp: bool | None = None#
Enable timestamp understanding of audio-only files
- param cache: BaseCache | bool | None = None#
Whether to cache the response.
If true, will use the global cache.
If false, will not use a cache.
If None, will use the global cache if it's set, otherwise no cache.
If instance of
BaseCache
, will use the provided cache.
Caching is not currently supported for streaming methods of models.
- param cached_content: str | None = None#
Optional. Use the model in cache mode. Only supported in Gemini 1.5 and later models. Must be a string containing the cache name (a sequence of numbers).
- param callback_manager: BaseCallbackManager | None = None#
Deprecated since version 0.1.7: Use
callbacks()
instead. It will be removed in pydantic==1.0. Callback manager to add to the run trace.
- param callbacks: Callbacks = None#
Callbacks to add to the run trace.
- param client_cert_source: Callable[[], Tuple[bytes, bytes]] | None = None#
A callback which returns client certificate bytes and private key bytes, both in PEM format.
- param credentials: Any = None#
The default custom credentials (google.auth.credentials.Credentials) to use when making API calls. If not provided, credentials will be ascertained from the environment.
- param custom_get_token_ids: Callable[[str], list[int]] | None = None#
Optional encoder to use for counting tokens.
- param disable_streaming: bool | Literal['tool_calling'] = False#
Whether to disable streaming for this model.
If streaming is bypassed, then stream() / astream() / astream_events() will defer to invoke() / ainvoke().
If True, will always bypass streaming case.
If 'tool_calling', will bypass streaming case only when the model is called with a tools keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (invoke()) only when the tools argument is provided. This offers the best of both worlds.
If False (default), will always use streaming case if available.
The main reason for this flag is that code might be written using stream() and a user may want to swap out a given model for another model whose implementation does not properly support streaming.
- param endpoint_version: Literal['v1', 'v1beta1'] = 'v1beta1'#
Whether to use v1 or v1beta1 endpoint.
v1 is more performant, but v1beta1 might have some new features.
- param examples: List[BaseMessage] | None = None#
- param frequency_penalty: float | None = None#
Positive values penalize tokens that repeatedly appear in the generated text, decreasing the likelihood of repeated content.
- param full_model_name: str | None = None#
The full name of the modelβs endpoint.
- param include_thoughts: bool | None = None#
Indicates whether to include thoughts in the response.
- param labels: Dict[str, str] | None = None#
Optional labels to tag LLM calls with metadata, to help with traceability and billing.
- param location: str = 'us-central1'#
The default location to use when making API calls.
- param logprobs: bool | int = False#
Whether to return logprobs as part of AIMessage.response_metadata.
If False, don't return logprobs. If True, return logprobs for top candidate. If int, return logprobs for top logprobs candidates.
Note
As of 2024-10-28 this is only supported for gemini-1.5-flash models.
- param max_output_tokens: int | None = None (alias 'max_tokens')#
Token limit determines the maximum amount of text output from one prompt.
- param max_retries: int = 6#
The maximum number of retries to make when generating.
- param metadata: dict[str, Any] | None = None#
Metadata to add to the run trace.
- param model_kwargs: dict[str, Any] [Optional]#
Holds any unexpected initialization parameters.
- param model_name: str [Required] (alias 'model')#
Underlying model name.
- param n: int = 1#
How many completions to generate for each prompt.
- param perform_literal_eval_on_string_raw_content: bool = True#
Whether to perform literal eval on string raw content.
- param presence_penalty: float | None = None#
Positive values penalize tokens that already appear in the generated text, increasing the likelihood of more diverse content.
- param project: str | None = None#
The default GCP project to use when making Vertex API calls.
- param rate_limiter: BaseRateLimiter | None = None#
An optional rate limiter to use for limiting the number of requests.
- param request_parallelism: int = 5#
The amount of parallelism allowed for requests issued to VertexAI models.
- param response_mime_type: str | None = None#
Optional. Output response MIME type of the generated candidate text. Only supported in Gemini 1.5 and later models. Supported MIME types:
- 'text/plain': (default) Text output.
- 'application/json': JSON response in the candidates.
- 'text/x.enum': Enum in plain text.
The model also needs to be prompted to output the appropriate response type, otherwise the behavior is undefined. This is a preview feature.
- param response_modalities: List[Modality] | None = None#
A list of modalities of the response
- param response_schema: Dict[str, Any] | None = None#
Optional. Enforce a schema on the output. The format of the dictionary should follow the OpenAPI schema.
- param safety_settings: 'SafetySettingsType' | None = None#
The default safety settings to use for all generations.
For example:
from langchain_google_vertexai import HarmBlockThreshold, HarmCategory
safety_settings = {
    HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}
- param seed: int | None = None#
Random seed for the generation.
- param stop: List[str] | None = None (alias 'stop_sequences')#
Optional list of stop words to use when generating.
- param streaming: bool = False#
Whether to stream the results or not.
- param tags: list[str] | None = None#
Tags to add to the run trace.
- param temperature: float | None = None#
Sampling temperature; controls the degree of randomness in token selection.
- param thinking_budget: int | None = None#
Indicates the thinking budget in tokens.
- param top_k: int | None = None#
How the model selects tokens for output: the next token is selected from among the top_k most probable tokens.
- param top_p: float | None = None#
Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.
- param tuned_model_name: str | None = None#
The name of a tuned model.
- param verbose: bool [Optional]#
Whether to print out response text.
- param wait_exponential_kwargs: dict[str, float] | None = None#
Optional dictionary with parameters for wait_exponential:
- multiplier: Initial wait time multiplier (default: 1.0)
- min: Minimum wait time in seconds (default: 4.0)
- max: Maximum wait time in seconds (default: 10.0)
- exp_base: Exponent base to use (default: 2.0)
- __call__(
- messages: list[BaseMessage],
- stop: list[str] | None = None,
- callbacks: list[BaseCallbackHandler] | BaseCallbackManager | None = None,
- **kwargs: Any,
Deprecated since version 0.1.7: Use
invoke()
instead. It will not be removed until langchain-core==1.0. Call the model.
- Parameters:
messages (list[BaseMessage]) β List of messages.
stop (list[str] | None) β Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) β Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs (Any) β Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
- Returns:
The model output message.
- Return type:
- async abatch(
- inputs: list[Input],
- config: RunnableConfig | list[RunnableConfig] | None = None,
- *,
- return_exceptions: bool = False,
- **kwargs: Any | None,
Default implementation runs
ainvoke
in parallel using asyncio.gather.
The default implementation of
batch
works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying
Runnable
uses an API which supports a batch mode.- Parameters:
inputs (list[Input]) β A list of inputs to the
Runnable
.config (RunnableConfig | list[RunnableConfig] | None) β A config to use when invoking the
Runnable
. The config supports standard keys like'tags'
,'metadata'
for tracing purposes,'max_concurrency'
for controlling how much work to do in parallel, and other keys. Please refer to theRunnableConfig
for more details. Defaults to None.return_exceptions (bool) β Whether to return exceptions instead of raising them. Defaults to False.
kwargs (Any | None) β Additional keyword arguments to pass to the
Runnable
.
- Returns:
A list of outputs from the
Runnable
.- Return type:
list[Output]
- async abatch_as_completed(
- inputs: Sequence[Input],
- config: RunnableConfig | Sequence[RunnableConfig] | None = None,
- *,
- return_exceptions: bool = False,
- **kwargs: Any | None,
Run
ainvoke
in parallel on a list of inputs. Yields results as they complete.
- Parameters:
inputs (Sequence[Input]) β A list of inputs to the
Runnable
.config (RunnableConfig | Sequence[RunnableConfig] | None) β A config to use when invoking the
Runnable
. The config supports standard keys like'tags'
,'metadata'
for tracing purposes,'max_concurrency'
for controlling how much work to do in parallel, and other keys. Please refer to theRunnableConfig
for more details. Defaults to None.return_exceptions (bool) β Whether to return exceptions instead of raising them. Defaults to False.
kwargs (Any | None) β Additional keyword arguments to pass to the
Runnable
.
- Yields:
A tuple of the index of the input and the output from the
Runnable
.- Return type:
AsyncIterator[tuple[int, Output | Exception]]
- async ainvoke(
- input: LanguageModelInput,
- config: RunnableConfig | None = None,
- *,
- stop: list[str] | None = None,
- **kwargs: Any,
Default implementation of
ainvoke
, calls invoke from a thread. The default implementation allows usage of async code even if the Runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously.
- Parameters:
input (LanguageModelInput)
config (Optional[RunnableConfig])
stop (Optional[list[str]])
kwargs (Any)
- Return type:
- async astream(
- input: LanguageModelInput,
- config: RunnableConfig | None = None,
- *,
- stop: list[str] | None = None,
- **kwargs: Any,
Default implementation of
astream
, which calls ainvoke.
Subclasses should override this method if they support streaming output.
- Parameters:
input (LanguageModelInput) β The input to the
Runnable
.config (Optional[RunnableConfig]) β The config to use for the
Runnable
. Defaults to None.kwargs (Any) β Additional keyword arguments to pass to the
Runnable
.stop (Optional[list[str]])
- Yields:
The output of the
Runnable
.- Return type:
AsyncIterator[BaseMessageChunk]
- async astream_events(
- input: Any,
- config: RunnableConfig | None = None,
- *,
- version: Literal['v1', 'v2'] = 'v2',
- include_names: Sequence[str] | None = None,
- include_types: Sequence[str] | None = None,
- include_tags: Sequence[str] | None = None,
- exclude_names: Sequence[str] | None = None,
- exclude_types: Sequence[str] | None = None,
- exclude_tags: Sequence[str] | None = None,
- **kwargs: Any,
Generate a stream of events.
Use to create an iterator over
StreamEvents
that provide real-time information about the progress of theRunnable
, includingStreamEvents
from intermediate results.A
StreamEvent
is a dictionary with the following schema:
- event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
- name: str - The name of the Runnable that generated the event.
- run_id: str - Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
- parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
- tags: Optional[list[str]] - The tags of the Runnable that generated the event.
- metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
- data: dict[str, Any]
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
The columns of the table are event, name, chunk, input, and output; each row below lists the populated fields for one event type:
- on_chat_model_start: name=[model name], input={"messages": [[SystemMessage, HumanMessage]]}
- on_chat_model_stream: name=[model name], chunk=AIMessageChunk(content="hello")
- on_chat_model_end: name=[model name], input={"messages": [[SystemMessage, HumanMessage]]}, output=AIMessageChunk(content="hello world")
- on_llm_start: name=[model name], input={'input': 'hello'}
- on_llm_stream: name=[model name], chunk='Hello'
- on_llm_end: name=[model name], output='Hello human!'
- on_chain_start: name=format_docs
- on_chain_stream: name=format_docs, chunk='hello world!, goodbye world!'
- on_chain_end: name=format_docs, input=[Document(...)], output='hello world!, goodbye world!'
- on_tool_start: name=some_tool, input={"x": 1, "y": "2"}
- on_tool_end: name=some_tool, output={"x": 1, "y": "2"}
- on_retriever_start: name=[retriever name], input={"query": "hello"}
- on_retriever_end: name=[retriever name], input={"query": "hello"}, output=[Document(...), ..]
- on_prompt_start: name=[template_name], input={"question": "hello"}
- on_prompt_end: name=[template_name], input={"question": "hello"}, output=ChatPromptValue(messages: [SystemMessage, ...])
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
- name (str): A user defined name for the event.
- data (Any): The data associated with the event. This can be anything, though we suggest making it JSON serializable.
Here are declarations associated with the standard events shown above:
format_docs
:
def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
some_tool
:
@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
prompt
:
template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id, and parent_ids
# has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
Example: Dispatch Custom Event
from langchain_core.callbacks.manager import ( adispatch_custom_event, ) from langchain_core.runnables import RunnableLambda, RunnableConfig import asyncio async def slow_thing(some_input: str, config: RunnableConfig) -> str: """Do something that takes a long time.""" await asyncio.sleep(1) # Placeholder for some slow operation await adispatch_custom_event( "progress_event", {"message": "Finished step 1 of 3"}, config=config # Must be included for python < 3.10 ) await asyncio.sleep(1) # Placeholder for some slow operation await adispatch_custom_event( "progress_event", {"message": "Finished step 2 of 3"}, config=config # Must be included for python < 3.10 ) await asyncio.sleep(1) # Placeholder for some slow operation return "Done" slow_thing = RunnableLambda(slow_thing) async for event in slow_thing.astream_events("some_input", version="v2"): print(event)
- Parameters:
input (Any) β The input to the
Runnable
.config (Optional[RunnableConfig]) β The config to use for the
Runnable
.version (Literal['v1', 'v2']) β The version of the schema to use either
'v2'
or'v1'
. Users should use'v2'
.'v1'
is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. custom events will only be surfaced in'v2'
.include_names (Optional[Sequence[str]]) β Only include events from
Runnables
with matching names.include_types (Optional[Sequence[str]]) β Only include events from
Runnables
with matching types.include_tags (Optional[Sequence[str]]) β Only include events from
Runnables
with matching tags.exclude_names (Optional[Sequence[str]]) β Exclude events from
Runnables
with matching names.exclude_types (Optional[Sequence[str]]) β Exclude events from
Runnables
with matching types.exclude_tags (Optional[Sequence[str]]) β Exclude events from
Runnables
with matching tags.kwargs (Any) β Additional keyword arguments to pass to the
Runnable
. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.
- Yields:
An async stream of
StreamEvents
.- Raises:
NotImplementedError β If the version is not
'v1'
or'v2'
.- Return type:
AsyncIterator[StreamEvent]
- batch(
- inputs: list[Input],
- config: RunnableConfig | list[RunnableConfig] | None = None,
- *,
- return_exceptions: bool = False,
- **kwargs: Any | None,
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently; e.g., if the underlying
Runnable
uses an API which supports a batch mode.- Parameters:
inputs (list[Input])
config (RunnableConfig | list[RunnableConfig] | None)
return_exceptions (bool)
kwargs (Any | None)
- Return type:
list[Output]
- batch_as_completed(
- inputs: Sequence[Input],
- config: RunnableConfig | Sequence[RunnableConfig] | None = None,
- *,
- return_exceptions: bool = False,
- **kwargs: Any | None,
Run
invoke
in parallel on a list of inputs. Yields results as they complete.
- Parameters:
inputs (Sequence[Input])
config (RunnableConfig | Sequence[RunnableConfig] | None)
return_exceptions (bool)
kwargs (Any | None)
- Return type:
Iterator[tuple[int, Output | Exception]]
- bind(
- **kwargs: Any,
Bind arguments to a
Runnable
, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable
or included in the user input.- Parameters:
kwargs (Any) β The arguments to bind to the
Runnable
.- Returns:
A new
Runnable
with the arguments bound.- Return type:
Runnable[Input, Output]
Example:
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
- bind_tools(
- tools: Sequence[Tool | Tool | _ToolDictLike | BaseTool | Type[BaseModel] | FunctionDescription | Callable | FunctionDeclaration | Dict[str, Any]],
- tool_config: _ToolConfigDict | None = None,
- *,
- tool_choice: dict | List[str] | str | Literal['auto', 'none', 'any'] | Literal[True] | bool | None = None,
- **kwargs: Any,
Bind tool-like objects to this chat model.
Assumes model is compatible with Vertex tool-calling API.
- Parameters:
tools (Sequence[Tool | Tool | _ToolDictLike | BaseTool | Type[BaseModel] | FunctionDescription | Callable | FunctionDeclaration | Dict[str, Any]]) β A list of tool definitions to bind to this chat model. Can be a pydantic model, callable, or BaseTool. Pydantic models, callables, and BaseTools will be automatically converted to their schema dictionary representation. Tools with Union types in their arguments are now supported and converted to anyOf schemas.
**kwargs (Any) β Any additional parameters to pass to the
Runnable
constructor.tool_config (_ToolConfigDict | None)
tool_choice (dict | List[str] | str | Literal['auto', 'none', 'any'] | ~typing.Literal[True] | bool | None)
**kwargs
- Return type:
Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], BaseMessage]
- configurable_alternatives(
- which: ConfigurableField,
- *,
- default_key: str = 'default',
- prefix_keys: bool = False,
- **kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]],
Configure alternatives for
Runnables
that can be set at runtime.- Parameters:
which (ConfigurableField) β The
ConfigurableField
instance that will be used to select the alternative.default_key (str) β The default key to use if no alternative is selected. Defaults to
'default'
.prefix_keys (bool) β Whether to prefix the keys with the
ConfigurableField
id. Defaults to False.**kwargs (Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) β A dictionary of keys to
Runnable
instances or callables that returnRunnable
instances.
- Returns:
A new
Runnable
with the alternatives configured.- Return type:
from langchain_anthropic import ChatAnthropic from langchain_core.runnables.utils import ConfigurableField from langchain_openai import ChatOpenAI model = ChatAnthropic( model_name="claude-3-7-sonnet-20250219" ).configurable_alternatives( ConfigurableField(id="llm"), default_key="anthropic", openai=ChatOpenAI() ) # uses the default model ChatAnthropic print(model.invoke("which organization created you?").content) # uses ChatOpenAI print( model.with_config( configurable={"llm": "openai"} ).invoke("which organization created you?").content )
- configurable_fields( ) RunnableSerializable #
Configure particular
Runnable
fields at runtime.- Parameters:
**kwargs (ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) β A dictionary of
ConfigurableField
instances to configure.- Returns:
A new
Runnable
with the fields configured.- Return type:
from langchain_core.runnables import ConfigurableField from langchain_openai import ChatOpenAI model = ChatOpenAI(max_tokens=20).configurable_fields( max_tokens=ConfigurableField( id="output_token_number", name="Max tokens in the output", description="The maximum number of tokens in the output", ) ) # max_tokens = 20 print( "max_tokens_20: ", model.invoke("tell me something about chess").content ) # max_tokens = 200 print("max_tokens_200: ", model.with_config( configurable={"output_token_number": 200} ).invoke("tell me something about chess").content )
- get_num_tokens(text: str) int [source]#
Get the number of tokens present in the text.
- Parameters:
text (str)
- Return type:
int
- get_num_tokens_from_messages(
- messages: list[BaseMessage],
- tools: Sequence | None = None,
Get the number of tokens in the messages.
Useful for checking if an input fits in a model's context window.
Note
The base implementation of
get_num_tokens_from_messages
ignores tool schemas.- Parameters:
messages (list[BaseMessage]) β The message inputs to tokenize.
tools (Sequence | None) β If provided, sequence of dict,
BaseModel
, function, orBaseTools
to be converted to tool schemas.
- Returns:
The sum of the number of tokens across the messages.
- Return type:
int
- get_token_ids(text: str) list[int] #
Return the ordered ids of the tokens in a text.
- Parameters:
text (str) β The string input to tokenize.
- Returns:
A list of ids corresponding to the tokens in the text, in order they occur in the text.
- Return type:
list[int]
- invoke(
- input: LanguageModelInput,
- config: RunnableConfig | None = None,
- *,
- stop: list[str] | None = None,
- **kwargs: Any,
Transform a single input into an output.
- Parameters:
input (LanguageModelInput) β The input to the
Runnable
.config (Optional[RunnableConfig]) β A config to use when invoking the
Runnable
. The config supports standard keys like'tags'
,'metadata'
for tracing purposes,'max_concurrency'
for controlling how much work to do in parallel, and other keys. Please refer to theRunnableConfig
for more details. Defaults to None.stop (Optional[list[str]])
kwargs (Any)
- Returns:
The output of the
Runnable
.- Return type:
- stream(
- input: LanguageModelInput,
- config: RunnableConfig | None = None,
- *,
- stop: list[str] | None = None,
- **kwargs: Any,
Default implementation of
stream
, which calls invoke.
Subclasses should override this method if they support streaming output.
- Parameters:
input (LanguageModelInput) β The input to the
Runnable
.config (Optional[RunnableConfig]) β The config to use for the
Runnable
. Defaults to None.kwargs (Any) β Additional keyword arguments to pass to the
Runnable
.stop (Optional[list[str]])
- Yields:
The output of the
Runnable
.- Return type:
Iterator[BaseMessageChunk]
- with_alisteners(
- *,
- on_start: AsyncListener | None = None,
- on_end: AsyncListener | None = None,
- on_error: AsyncListener | None = None,
Bind async lifecycle listeners to a
Runnable
, returning a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
- Parameters:
on_start (Optional[AsyncListener]) β Called asynchronously before the
Runnable
starts running, with theRun
object. Defaults to None.on_end (Optional[AsyncListener]) β Called asynchronously after the
Runnable
finishes running, with theRun
object. Defaults to None.on_error (Optional[AsyncListener]) β Called asynchronously if the
Runnable
throws an error, with theRun
object. Defaults to None.
- Returns:
A new
Runnable
with the listeners bound.- Return type:
Runnable[Input, Output]
Example:
from langchain_core.runnables import RunnableLambda, Runnable from datetime import datetime, timezone import time import asyncio def format_t(timestamp: float) -> str: return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat() async def test_runnable(time_to_sleep : int): print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}") await asyncio.sleep(time_to_sleep) print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}") async def fn_start(run_obj : Runnable): print(f"on start callback starts at {format_t(time.time())}") await asyncio.sleep(3) print(f"on start callback ends at {format_t(time.time())}") async def fn_end(run_obj : Runnable): print(f"on end callback starts at {format_t(time.time())}") await asyncio.sleep(2) print(f"on end callback ends at {format_t(time.time())}") runnable = RunnableLambda(test_runnable).with_alisteners( on_start=fn_start, on_end=fn_end ) async def concurrent_runs(): await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3)) asyncio.run(concurrent_runs()) Result: on start callback starts at 2025-03-01T07:05:22.875378+00:00 on start callback starts at 2025-03-01T07:05:22.875495+00:00 on start callback ends at 2025-03-01T07:05:25.878862+00:00 on start callback ends at 2025-03-01T07:05:25.878947+00:00 Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00 Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00 Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00 on end callback starts at 2025-03-01T07:05:27.882360+00:00 Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00 on end callback starts at 2025-03-01T07:05:28.882428+00:00 on end callback ends at 2025-03-01T07:05:29.883893+00:00 on end callback ends at 2025-03-01T07:05:30.884831+00:00
- with_config(
- config: RunnableConfig | None = None,
- **kwargs: Any,
Bind config to a
Runnable
, returning a new Runnable
.- Parameters:
config (RunnableConfig | None) β The config to bind to the
Runnable
.kwargs (Any) β Additional keyword arguments to pass to the
Runnable
.
- Returns:
A new
Runnable
with the config bound.- Return type:
Runnable[Input, Output]
- with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: tuple[type[BaseException], ...] = (<class 'Exception'>,), exception_key: Optional[str] = None) RunnableWithFallbacksT[Input, Output] #
Add fallbacks to a
Runnable
, returning a new Runnable.
The new Runnable will try the original Runnable
, and then each fallback in order, upon failures.- Parameters:
fallbacks (Sequence[Runnable[Input, Output]]) β A sequence of runnables to try if the original
Runnable
fails.exceptions_to_handle (tuple[type[BaseException], ...]) β A tuple of exception types to handle. Defaults to
(Exception,)
.exception_key (Optional[str]) β If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base
Runnable
and its fallbacks must accept a dictionary as input. Defaults to None.
- Returns:
A new
Runnable
that will try the originalRunnable
, and then each fallback in order, upon failures.- Return type:
RunnableWithFallbacksT[Input, Output]
Example
from typing import Iterator from langchain_core.runnables import RunnableGenerator def _generate_immediate_error(input: Iterator) -> Iterator[str]: raise ValueError() yield "" def _generate(input: Iterator) -> Iterator[str]: yield from "foo bar" runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks( [RunnableGenerator(_generate)] ) print(''.join(runnable.stream({}))) #foo bar
- Parameters:
fallbacks (Sequence[Runnable[Input, Output]]) β A sequence of runnables to try if the original
Runnable
fails.exceptions_to_handle (tuple[type[BaseException], ...]) β A tuple of exception types to handle.
exception_key (Optional[str]) β If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base
Runnable
and its fallbacks must accept a dictionary as input.
- Returns:
A new
Runnable
that will try the originalRunnable
, and then each fallback in order, upon failures.- Return type:
RunnableWithFallbacksT[Input, Output]
- with_listeners(
- *,
- on_start: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
- on_end: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
- on_error: Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None = None,
Bind lifecycle listeners to a
Runnable
, returning a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
- Parameters:
on_start (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) β Called before the
Runnable
starts running, with theRun
object. Defaults to None.on_end (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) β Called after the
Runnable
finishes running, with theRun
object. Defaults to None.on_error (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) β Called if the
Runnable
throws an error, with theRun
object. Defaults to None.
- Returns:
A new
Runnable
with the listeners bound.- Return type:
Runnable[Input, Output]
Example:
from langchain_core.runnables import RunnableLambda from langchain_core.tracers.schemas import Run import time def test_runnable(time_to_sleep : int): time.sleep(time_to_sleep) def fn_start(run_obj: Run): print("start_time:", run_obj.start_time) def fn_end(run_obj: Run): print("end_time:", run_obj.end_time) chain = RunnableLambda(test_runnable).with_listeners( on_start=fn_start, on_end=fn_end ) chain.invoke(2)
- with_retry(*, retry_if_exception_type: tuple[type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, exponential_jitter_params: Optional[ExponentialJitterParams] = None, stop_after_attempt: int = 3) Runnable[Input, Output] #
Create a new Runnable that retries the original Runnable on exceptions.
- Parameters:
retry_if_exception_type (tuple[type[BaseException], ...]) β A tuple of exception types to retry on. Defaults to (Exception,).
wait_exponential_jitter (bool) β Whether to add jitter to the wait time between retries. Defaults to True.
stop_after_attempt (int) β The maximum number of attempts to make before giving up. Defaults to 3.
exponential_jitter_params (Optional[ExponentialJitterParams]) β Parameters for
tenacity.wait_exponential_jitter
. Namely:initial
,max
,exp_base
, andjitter
(all float values).
- Returns:
A new Runnable that retries the original Runnable on exceptions.
- Return type:
Runnable[Input, Output]
Example:
from langchain_core.runnables import RunnableLambda count = 0 def _lambda(x: int) -> None: global count count = count + 1 if x == 1: raise ValueError("x is 1") else: pass runnable = RunnableLambda(_lambda) try: runnable.with_retry( stop_after_attempt=2, retry_if_exception_type=(ValueError,), ).invoke(1) except ValueError: pass assert (count == 2)
- with_structured_output(
- schema: Dict | Type[BaseModel] | Type,
- *,
- include_raw: bool = False,
- method: Literal['json_mode'] | None = None,
- **kwargs: Any,
Model wrapper that returns outputs formatted to match the given schema.
Changed in version 1.1.0: Return type corrected in version 1.1.0. Previously if a dict schema was provided then the output had the form
[{"args": {}, "name": "schema_name"}]
where the output was a list with a single dict and the 'args' of the one dict corresponded to the schema. As of 1.1.0
this has been fixed so that the schema (the value corresponding to the old 'args' key) is returned directly.- Parameters:
schema (Dict | Type[BaseModel] | Type) β The output schema as a dict or a Pydantic class. If a Pydantic class then the model output will be an object of that class. If a dict then the model output will be a dict. With a Pydantic class the returned attributes will be validated, whereas with a dict they will not be. If
method
is'function_calling'
andschema
is a dict, then the dict must match the OpenAI function-calling spec.include_raw (bool) β If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys
'raw'
,'parsed'
, and'parsing_error'
.method (Literal['json_mode'] | None) β If set to
'json_schema'
it will use controlled generation to generate the response rather than function calling. Does not work with schemas with references or Pydantic models with self-references.
- Returns:
A Runnable that takes any ChatModel input. If
'include_raw'
is True then a dict with keys raw: BaseMessage, parsed: Optional[_DictOrPydantic], parsing_error: Optional[BaseException]. If
is False then just_DictOrPydantic
is returned, where_DictOrPydantic
depends on the schema. If schema is a Pydantic class then_DictOrPydantic
is the Pydantic class. If schema is a dict then_DictOrPydantic
is a dict.- Return type:
Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], Dict | BaseModel]
- Example: Pydantic schema, exclude raw:
from pydantic import BaseModel
from langchain_google_vertexai import ChatVertexAI

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str

llm = ChatVertexAI(model_name="gemini-2.0-flash-001", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> AnswerWithJustification(
#     answer='They weigh the same.', justification='A pound is a pound.'
# )
- Example: Pydantic schema, include raw:
from pydantic import BaseModel
from langchain_google_vertexai import ChatVertexAI

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str

llm = ChatVertexAI(model_name="gemini-2.0-flash-001", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification, include_raw=True)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
- Example: Dict schema, exclude raw:
from pydantic import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_google_vertexai import ChatVertexAI

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str

dict_schema = convert_to_openai_function(AnswerWithJustification)
llm = ChatVertexAI(model_name="gemini-2.0-flash-001", temperature=0)
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
- Example: Pydantic schema, streaming:
from pydantic import BaseModel, Field from langchain_google_vertexai import ChatVertexAI class Explanation(BaseModel): '''A topic explanation with examples.''' description: str = Field(description="A brief description of the topic.") examples: str = Field(description="Two examples related to the topic.") llm = ChatVertexAI(model_name="gemini-2.0-flash", temperature=0) structured_llm = llm.with_structured_output(Explanation, method="json_mode") for chunk in structured_llm.stream("Tell me about transformer models"): print(chunk) print('-------------------------') # -> description='Transformer models are a type of neural network architecture that have revolutionized the field of natural language processing (NLP) and are also increasingly used in computer vision and other domains. They rely on the self-attention mechanism to weigh the importance of different parts of the input data, allowing them to effectively capture long-range dependencies. Unlike recurrent neural networks (RNNs), transformers can process the entire input sequence in parallel, leading to significantly faster training times. Key components of transformer models include: the self-attention mechanism (calculates attention weights between different parts of the input), multi-head attention (performs self-attention multiple times with different learned parameters), positional encoding (adds information about the position of tokens in the input sequence), feedforward networks (applies a non-linear transformation to each position), and encoder-decoder structure (used for sequence-to-sequence tasks).' examples='1. BERT (Bidirectional Encoder Representations from Transformers): A pre-trained transformer' # ------------------------- # description='Transformer models are a type of neural network architecture that have revolutionized the field of natural language processing (NLP) and are also increasingly used in computer vision and other domains. They rely on the self-attention mechanism to weigh the importance of different parts of the input data, allowing them to effectively capture long-range dependencies. Unlike recurrent neural networks (RNNs), transformers can process the entire input sequence in parallel, leading to significantly faster training times. Key components of transformer models include: the self-attention mechanism (calculates attention weights between different parts of the input), multi-head attention (performs self-attention multiple times with different learned parameters), positional encoding (adds information about the position of tokens in the input sequence), feedforward networks (applies a non-linear transformation to each position), and encoder-decoder structure (used for sequence-to-sequence tasks).' examples='1. BERT (Bidirectional Encoder Representations from Transformers): A pre-trained transformer model that can be fine-tuned for various NLP tasks like text classification, question answering, and named entity recognition. 2. GPT (Generative Pre-trained Transformer): A language model that uses transformers to generate coherent and contextually relevant text. GPT models are used in chatbots, content creation, and code generation.' # -------------------------
- with_types(
- *,
- input_type: type[Input] | None = None,
- output_type: type[Output] | None = None,
Bind input and output types to a
Runnable
, returning a new Runnable
.- Parameters:
input_type (type[Input] | None) β The input type to bind to the
Runnable
. Defaults to None.output_type (type[Output] | None) β The output type to bind to the
Runnable
. Defaults to None.
- Returns:
A new Runnable with the types bound.
- Return type:
Runnable[Input, Output]
- property async_prediction_client: PredictionServiceAsyncClient | PredictionServiceAsyncClient#
Returns PredictionServiceAsyncClient.
- property max_tokens: int | None#
- property prediction_client: PredictionServiceClient | PredictionServiceClient#
Returns PredictionServiceClient.
- task_executor: ClassVar[Executor | None] = FieldInfo(annotation=NoneType, required=False, default=None, exclude=True)#