mirascope.core.groq.stream¶
The `GroqStream` class for convenience around streaming LLM calls.
Usage Documentation
GroqStream¶
GroqStream(
    *,
    stream: (
        Generator[
            tuple[_BaseCallResponseChunkT, _BaseToolT | None],
            None,
            None,
        ]
        | AsyncGenerator[
            tuple[_BaseCallResponseChunkT, _BaseToolT | None],
            None,
        ]
    ),
    metadata: Metadata,
    tool_types: list[type[_BaseToolT]] | None,
    call_response_type: type[_BaseCallResponseT],
    model: str,
    prompt_template: str | None,
    fn_args: dict[str, Any],
    dynamic_config: _BaseDynamicConfigT,
    messages: list[_MessageParamT],
    call_params: _BaseCallParamsT,
    call_kwargs: BaseCallKwargs[_ToolSchemaT]
)
Bases: BaseStream[GroqCallResponse, GroqCallResponseChunk, ChatCompletionUserMessageParam, ChatCompletionAssistantMessageParam, ChatCompletionToolMessageParam, ChatCompletionMessageParam, GroqTool, ChatCompletionToolParam, AsyncGroqDynamicConfig | GroqDynamicConfig, GroqCallParams, FinishReason]
A class for convenience around streaming Groq LLM calls.
Example:

from mirascope.core.groq import groq_call


@groq_call("llama-3.1-8b-instant", stream=True)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


stream = recommend_book("fantasy")  # returns `GroqStream` instance
for chunk, _ in stream:
    print(chunk.content, end="", flush=True)
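Because the stream parameter accepts either a synchronous or an asynchronous generator, the same decorator also supports async streaming. A minimal sketch, assuming an async variant of the example above (only the `async`/`await` keywords change; the decorator and iteration pattern are the same):

import asyncio

from mirascope.core.groq import groq_call


@groq_call("llama-3.1-8b-instant", stream=True)
async def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


async def main():
    # Awaiting the decorated async function resolves to a `GroqStream`
    stream = await recommend_book("fantasy")
    async for chunk, _ in stream:
        print(chunk.content, end="", flush=True)


asyncio.run(main())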
Source code in mirascope/core/base/stream.py
construct_call_response¶
construct_call_response() -> GroqCallResponse
Constructs the call response from a consumed GroqStream.
Raises:

Type | Description
---|---
ValueError | if the stream has not yet been consumed.
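A short sketch of the intended call pattern, reusing the streaming example above: iterate the stream to completion first, then construct a `GroqCallResponse` from the accumulated chunks. Calling the method before the stream has been consumed raises the `ValueError` documented above.

from mirascope.core.groq import groq_call


@groq_call("llama-3.1-8b-instant", stream=True)
def recommend_book(genre: str) -> str:
    return f"Recommend a {genre} book"


stream = recommend_book("fantasy")
for chunk, _ in stream:
    print(chunk.content, end="", flush=True)

# Only valid once the stream is fully consumed; raises ValueError otherwise.
response = stream.construct_call_response()
print(response.content)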