
mirascope.core.google.call_response_chunk

This module contains the GoogleCallResponseChunk class.

Usage Documentation: Streams

GoogleCallResponseChunk

Bases: BaseCallResponseChunk[GenerateContentResponse, FinishReason]

A convenience wrapper around the Google API streamed response chunks.

When calling the Google API using a function decorated with google_call and stream set to True, the stream will contain GoogleCallResponseChunk instances.

Example:

from mirascope.core import prompt_template
from mirascope.core.google import google_call


@google_call("gemini-1.5-flash", stream=True)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


stream = recommend_book("fantasy")  # stream is a `GoogleStream`
for chunk, _ in stream:
    print(chunk.content, end="", flush=True)

content property

content: str

Returns the chunk content for the 0th choice.
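Because streamed content arrives incrementally, the full response text is the concatenation of each chunk's content. A minimal sketch of that accumulation pattern; the chunk objects below are stand-ins exposing only the content attribute documented here, not real GoogleCallResponseChunk instances:

```python
from types import SimpleNamespace


def join_content(chunks):
    """Concatenate streamed text deltas into the full response text."""
    return "".join(chunk.content for chunk in chunks)


# Stand-in chunks simulating text deltas from a stream.
chunks = [SimpleNamespace(content=p) for p in ("The ", "Name ", "of ", "the Wind")]
print(join_content(chunks))  # The Name of the Wind
```

In real usage you would iterate the stream from the example above and collect `chunk.content` the same way.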

finish_reasons property

finish_reasons: list[FinishReason]

Returns the finish reasons of the response.
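Intermediate chunks of a stream typically carry no finish reasons, so callers usually gather them while iterating and inspect the result afterward. A minimal sketch of that pattern, again using stand-in objects that expose only the finish_reasons attribute documented here:

```python
from types import SimpleNamespace


def collect_finish_reasons(chunks):
    """Gather every finish reason seen while consuming a stream."""
    reasons = []
    for chunk in chunks:
        if chunk.finish_reasons:
            reasons.extend(chunk.finish_reasons)
    return reasons


# Stand-ins: only the final chunk reports a finish reason.
chunks = [
    SimpleNamespace(finish_reasons=[]),
    SimpleNamespace(finish_reasons=[]),
    SimpleNamespace(finish_reasons=["STOP"]),
]
print(collect_finish_reasons(chunks))  # ['STOP']
```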

model property

model: str | None

Returns the model name.

google.generativeai does not return the model name, so we return None.

id property

id: str | None

Returns the id of the response.

google.generativeai does not return an id, so we return None.

usage property

usage: GenerateContentResponseUsageMetadata | None

Returns the usage of the chat completion.

google.generativeai does not have Usage, so we return None.

input_tokens property

input_tokens: int | None

Returns the number of input tokens.

cached_tokens property

cached_tokens: int | None

Returns the number of cached tokens.

output_tokens property

output_tokens: int | None

Returns the number of output tokens.
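Since each of the token-count properties may be None, totaling usage across a stream needs to tolerate missing values. A minimal sketch using stand-in objects that expose only the input_tokens and output_tokens attributes documented here:

```python
from types import SimpleNamespace


def total_tokens(chunks):
    """Sum input and output token counts across chunks, treating None as 0."""
    input_total = sum(c.input_tokens or 0 for c in chunks)
    output_total = sum(c.output_tokens or 0 for c in chunks)
    return input_total, output_total


# Stand-ins: one chunk is missing an input token count.
chunks = [
    SimpleNamespace(input_tokens=12, output_tokens=3),
    SimpleNamespace(input_tokens=None, output_tokens=5),
]
print(total_tokens(chunks))  # (12, 8)
```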

cost_metadata property

cost_metadata: CostMetadata

Returns the cost metadata.