mirascope.core.gemini.call_response_chunk¶
This module contains the GeminiCallResponseChunk class.
GeminiCallResponseChunk¶
Bases: BaseCallResponseChunk[GenerateContentResponse, FinishReason]
A convenience wrapper around the Gemini API streamed response chunks.
When calling the Gemini API using a function decorated with gemini_call and stream set to True, the stream will contain GeminiCallResponseChunk instances.
Example:
from mirascope.core.gemini import gemini_call

@gemini_call("gemini-1.5-flash", stream=True)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"

stream = recommend_book("fantasy")  # returns a `GeminiStream`
for chunk, _ in stream:
    print(chunk.content, end="", flush=True)
finish_reasons property¶
finish_reasons: list[FinishReason]
Returns the finish reasons of the response.
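For instance, a minimal sketch reusing the recommend_book function from the example above; note that the timing is an assumption on our part, as finish reasons are typically only populated on the final chunk of the stream:

from mirascope.core.gemini import gemini_call

@gemini_call("gemini-1.5-flash", stream=True)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"

stream = recommend_book("fantasy")
for chunk, _ in stream:
    print(chunk.content, end="", flush=True)
    if chunk.finish_reasons:  # usually only populated on the final chunk
        print(f"\nfinish reasons: {chunk.finish_reasons}")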
model property¶
Returns the model name.
google.generativeai does not return the model name, so we return None.
id property¶
id: str | None
Returns the id of the response.
google.generativeai does not return an id, so we return None.
usage property¶
Returns the usage of the chat completion.
google.generativeai does not provide usage information, so we return None.
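As a quick illustration of these provider-specific gaps, a minimal sketch (same recommend_book setup as above) showing that model, id, and usage all come back as None for Gemini stream chunks:

from mirascope.core.gemini import gemini_call

@gemini_call("gemini-1.5-flash", stream=True)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"

for chunk, _ in recommend_book("fantasy"):
    # google.generativeai does not surface these fields on stream
    # chunks, so all three print as None.
    print(chunk.model, chunk.id, chunk.usage)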