- Specifically, to use it with the useChat hook, you need to set streamProtocol to "data" and convert the agent stream to the data stream protocol (https://sdk.vercel.ai/docs/ai-sdk-ui/stream-protocol#data-stream-protocol).
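- For reference, the data stream protocol is just newline-separated parts, each prefixed with a type code: 0 for text, b/c for a streaming tool call and its argument deltas, 9/a for a completed tool call and its result, and d for the finish message. A plain text response looks roughly like this on the wire (illustrative values, not real output):

    0:"Hello"
    0:" there, I'm the coolest model ever."
    d:{"finishReason":"stop","usage":{"promptTokens":12,"completionTokens":9}}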
- Anyways, enough yap. Here's how you do it.
- Let's get the agent stream and formatting right first.
- Here's your agent stream:
    from pydantic_ai import Agent

    agent = Agent(
        model="openai:gpt-4.1-nano",  # Nano used for wallet savings
        system_prompt="You're literally the coolest model ever, be super obnoxious about it.",
    )

    async def chat_stream(prompt):
        async def stream_messages():
            async with agent.iter(prompt) as agent_run:
                async for node in agent_run:
                    # to_data_stream_protocol (defined below) converts each node to the Vercel format
                    async for chunk in to_data_stream_protocol(node, agent_run):
                        yield chunk
        return stream_messages()  # <--- returning this here for the FastAPI route example later
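- Quick way to sanity-check the stream before wiring up FastAPI (a minimal sketch; assumes the code above plus the to_data_stream_protocol function below live in one file, and that your OPENAI_API_KEY is set):

    import asyncio

    async def main():
        stream = await chat_stream("Who's the coolest model around?")
        async for chunk in stream:
            print(chunk, end="")

    asyncio.run(main())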
- Now let's define the to_data_stream_protocol function:
    import json

    from pydantic_ai.messages import (
        FunctionToolCallEvent,
        FunctionToolResultEvent,
        PartDeltaEvent,
        PartStartEvent,
        TextPart,
        TextPartDelta,
        ToolCallPart,
        ToolCallPartDelta,
    )

    async def to_data_stream_protocol(node, run):
        if Agent.is_user_prompt_node(node):
            ...
        elif Agent.is_model_request_node(node):
            # Stream text and tool-call parts as the model emits them
            async with node.stream(run.ctx) as request_stream:
                async for event in request_stream:
                    if isinstance(event, PartStartEvent):
                        if isinstance(event.part, TextPart):
                            # 0: text part
                            yield "0:" + json.dumps(event.part.content) + "\n"
                        elif isinstance(event.part, ToolCallPart):
                            # b: start of a streaming tool call
                            yield "b:" + json.dumps({
                                "toolCallId": event.part.tool_call_id,
                                "toolName": event.part.tool_name,
                            }) + "\n"
                    if isinstance(event, PartDeltaEvent):
                        if isinstance(event.delta, TextPartDelta):
                            yield "0:" + json.dumps(event.delta.content_delta) + "\n"
                        elif isinstance(event.delta, ToolCallPartDelta):
                            # c: tool call argument delta (note: these live on
                            # event.delta, not event.part)
                            yield "c:" + json.dumps({
                                "toolCallId": event.delta.tool_call_id,
                                "argsTextDelta": event.delta.args_delta,
                            }) + "\n"
        elif Agent.is_call_tools_node(node):
            async with node.stream(run.ctx) as handle_stream:
                async for event in handle_stream:
                    if isinstance(event, FunctionToolCallEvent):
                        # 9: complete tool call; args must be a JSON object, but
                        # pydantic-ai may hand us either a dict or a JSON string
                        args = event.part.args
                        yield "9:" + json.dumps({
                            "toolCallId": event.part.tool_call_id,
                            "toolName": event.part.tool_name,
                            "args": args if isinstance(args, dict) else json.loads(args),
                        }) + "\n"
                    elif isinstance(event, FunctionToolResultEvent):
                        # a: tool result; serialize whatever the tool returned
                        content = event.result.content
                        if hasattr(content, "to_dict"):
                            result = content.to_dict()
                        elif hasattr(content, "model_dump"):
                            result = content.model_dump()
                        else:
                            # plain values (str, dict, ...) pass through as-is
                            result = getattr(content, "__dict__", content)
                        yield "a:" + json.dumps({
                            "toolCallId": event.result.tool_call_id,
                            "result": result,
                        }) + "\n"
        elif Agent.is_end_node(node):
            assert run.result.data == node.data.data
            # d: finish message with the finish reason and token usage
            yield "d:" + json.dumps({
                # TODO: Add reason determining logic
                "finishReason": "tool-call" if run.result._output_tool_name else "stop",
                "usage": {
                    "promptTokens": run.result.usage().request_tokens,
                    "completionTokens": run.result.usage().response_tokens,
                },
            }) + "\n"
- Cool. Now for FastAPI, you can wrap generators like this in a StreamingResponse. Here's a route from my application https://app.zettel.study (it's in early access beta btw if you wanna check it out :P)
    import logging

    from fastapi import APIRouter, Body, HTTPException
    from fastapi.responses import StreamingResponse
    from pydantic import BaseModel

    from .whatever_file import chat_stream

    logger = logging.getLogger(__name__)
    router = APIRouter()

    class ChatMessageRequest(BaseModel):
        message: str

    @router.post("/dummy-route", response_class=StreamingResponse)
    async def chat_stream_topic_assessment_roleplay(
        current_user: CurrentUserDependencyForProtection,  # <--- You can ignore this (my app's auth dependency)
        request: ChatMessageRequest = Body(...),
    ):
        """
        Stream chat interaction with agent.
        Supports text messages.
        """
        try:
            stream = await chat_stream(request.message)
            response = StreamingResponse(stream)
            # This header tells the AI SDK client it's receiving the data stream protocol
            response.headers["x-vercel-ai-data-stream"] = "v1"
            return response
        except Exception as e:
            logger.exception("Error in chat_stream endpoint")
            raise HTTPException(status_code=500, detail=str(e))
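- You can smoke-test the route without a frontend. Here's a rough sketch with httpx (hypothetical values; assumes the router is mounted under /api/v1 like in the useChat config below, and that the auth dependency lets you through):

    import httpx

    with httpx.stream(
        "POST",
        "http://localhost:8000/api/v1/dummy-route",
        json={"message": "Hello!"},
        timeout=None,
    ) as response:
        # Each line is one data stream protocol part, e.g. 0:"..." or d:{...}
        for line in response.iter_lines():
            print(line)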
- That's the backend setup.
- The useChat hook should look something like this:
    import { useChat } from "@ai-sdk/react";

    const {
      messages, input, handleInputChange, handleSubmit, status,
      error, stop, setMessages, reload, append, setInput,
    } = useChat({
      api: `http://localhost:8000/api/v1/dummy-route`,
      id: randomId,
      initialMessages: [],
      streamProtocol: "data",
      credentials: "include",
      // The backend route only takes the latest message, so strip the history down
      experimental_prepareRequestBody: ({ messages }) => ({
        message: messages[messages.length - 1].content,
      }),
      experimental_throttle: 100,
      onError: () => {
        sonnerToast.error("An error occurred, please try again!");
      },
    });
- And that's the full stack.
- If you need to see how to handle rendering tool call responses and all that stuff, check this out: https://github.com/vercel/ai-chatbot/blob/main/components/message.tsx
- The whole repository is a great resource for using the useChat hook with tool responses.
- This is my first Reddit post ever btw. Hope this isn't too lengthy. Good luck and hope this helps.