PydanticAI + Vercel AI SDK UI

Specifically, to use it with the useChat hook, you need to set streamProtocol to "data" and convert the agent stream to the data stream protocol (https://sdk.vercel.ai/docs/ai-sdk-ui/stream-protocol#data-stream-protocol).
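The protocol is just newline-delimited TYPE:JSON frames. A couple of illustrative examples (values made up, format per the Vercel docs linked above):

0:"Hello, world!"
d:{"finishReason":"stop","usage":{"promptTokens":10,"completionTokens":4}}

"0:" carries a chunk of assistant text and "d:" is the finish message; you'll see the tool-call frame types below.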
Anyway, enough yap, here's how you do it.
Let's get the agent stream and formatting right first.
Here's your agent stream:
from pydantic_ai import Agent

agent = Agent(
    model="openai:gpt-4.1-nano",  # Nano used for wallet savings
    system_prompt="You're literally the coolest model ever, be super obnoxious about it.",
)

async def chat_stream(prompt):
    async def stream_messages():
        async with agent.iter(prompt) as agent_run:
            async for node in agent_run:
                # to_data_stream_protocol (defined below) converts each node to the Vercel format
                async for chunk in to_data_stream_protocol(node, agent_run):
                    yield chunk

    return stream_messages()  # <-- returning this here for the FastAPI route example later
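Before wiring up FastAPI, you can sanity-check the stream with a quick script (a minimal sketch; it assumes chat_stream and to_data_stream_protocol from this post are in scope):

import asyncio

async def main():
    stream = await chat_stream("Introduce yourself")
    async for chunk in stream:
        print(chunk, end="")  # each chunk is already a newline-terminated frame

asyncio.run(main())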
Now let's define the to_data_stream_protocol function:
import json

from pydantic_ai import Agent
from pydantic_ai.messages import (
    PartStartEvent,
    PartDeltaEvent,
    FunctionToolCallEvent,
    FunctionToolResultEvent,
)

async def to_data_stream_protocol(node, run):
    if Agent.is_user_prompt_node(node):
        ...  # nothing to emit for the user prompt itself
    elif Agent.is_model_request_node(node):
        async with node.stream(run.ctx) as request_stream:
            async for event in request_stream:
                if isinstance(event, PartStartEvent):
                    if event.part.part_kind == "text":
                        # "0:" is a text part; json.dumps handles quote/newline escaping
                        yield "0:{text}\n".format(text=json.dumps(event.part.content))
                    elif event.part.part_kind == "tool-call":
                        # "b:" announces the start of a streaming tool call
                        yield "b:{payload}\n".format(payload=json.dumps({
                            "toolCallId": event.part.tool_call_id,
                            "toolName": event.part.tool_name,
                        }))
                elif isinstance(event, PartDeltaEvent):
                    if event.delta.part_delta_kind == "text":
                        yield "0:{text}\n".format(text=json.dumps(event.delta.content_delta))
                    elif event.delta.part_delta_kind == "tool-call":
                        # "c:" streams the tool-call argument text; note these fields
                        # live on event.delta, not event.part
                        yield "c:{payload}\n".format(payload=json.dumps({
                            "toolCallId": event.delta.tool_call_id,
                            "argsTextDelta": event.delta.args_delta,
                        }))
    elif Agent.is_call_tools_node(node):
        async with node.stream(run.ctx) as handle_stream:
            async for event in handle_stream:
                if isinstance(event, FunctionToolCallEvent):
                    # "9:" is the complete tool call; args must land as a JSON value,
                    # and pydantic-ai may hand us either a dict or a JSON string
                    args = event.part.args
                    yield "9:{payload}\n".format(payload=json.dumps({
                        "toolCallId": event.part.tool_call_id,
                        "toolName": event.part.tool_name,
                        "args": json.loads(args) if isinstance(args, str) else args,
                    }))
                elif isinstance(event, FunctionToolResultEvent):
                    # "a:" is the tool result; fall back through whatever
                    # serialization the content object supports
                    content = event.result.content
                    if hasattr(content, "to_dict"):
                        result = content.to_dict()
                    elif hasattr(content, "model_dump"):
                        result = content.model_dump()
                    else:
                        result = getattr(content, "__dict__", content)  # plain values pass through
                    yield "a:{payload}\n".format(payload=json.dumps({
                        "toolCallId": event.result.tool_call_id,
                        "result": result,
                    }))
    elif Agent.is_end_node(node):
        assert run.result.data == node.data.data
        # "d:" is the finish message; only emit it once the run has actually ended
        yield "d:{payload}\n".format(payload=json.dumps({
            "finishReason": "tool-call" if run.result._output_tool_name else "stop",  # TODO: Add reason determining logic
            "usage": {
                "promptTokens": run.result.usage().request_tokens,
                "completionTokens": run.result.usage().response_tokens,
            },
        }))
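Put together, a run that makes one tool call and then answers yields frames roughly like this (illustrative values only, matching the prefixes the converter emits):

b:{"toolCallId": "call_1", "toolName": "search"}
c:{"toolCallId": "call_1", "argsTextDelta": "{\"query\": \"zettel\"}"}
9:{"toolCallId": "call_1", "toolName": "search", "args": {"query": "zettel"}}
a:{"toolCallId": "call_1", "result": "3 matches"}
0:"Found it."
d:{"finishReason":"stop","usage":{"promptTokens":42,"completionTokens":7}}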
Cool. Now for FastAPI, you can wrap generators like this in a StreamingResponse. Here's a route from my application https://app.zettel.study (it's in early access beta btw if you wanna check it out :P):
import logging

from fastapi import APIRouter, Body, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

from .whatever_file import chat_stream

logger = logging.getLogger(__name__)
router = APIRouter()

class ChatMessageRequest(BaseModel):
    message: str

@router.post("/dummy-route", response_class=StreamingResponse)
async def chat_stream_topic_assessment_roleplay(
    current_user: CurrentUserDependencyForProtection,  # <-- You can ignore this
    request: ChatMessageRequest = Body(...),
):
    """
    Stream chat interaction with agent.

    Supports text messages.
    """
    try:
        stream = await chat_stream(request.message)

        response = StreamingResponse(stream)

        # This header tells the Vercel AI SDK the response uses the data stream protocol
        response.headers["x-vercel-ai-data-stream"] = "v1"

        return response
    except Exception as e:
        logger.exception("Error in chat_stream endpoint")
        raise HTTPException(status_code=500, detail=str(e))
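To smoke-test the route end to end without a frontend, a little httpx client like this works (a sketch; it assumes the router is mounted under /api/v1, which is where the useChat hook below points):

import asyncio

import httpx

async def main():
    async with httpx.AsyncClient() as client:
        async with client.stream(
            "POST",
            "http://localhost:8000/api/v1/dummy-route",
            json={"message": "hello"},
        ) as response:
            async for line in response.aiter_lines():
                print(line)  # one data stream frame per line

asyncio.run(main())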
That's the backend setup.
On the frontend, the useChat hook should look something like this:
const {
  messages, input, handleInputChange, handleSubmit, status,
  error, stop, setMessages, reload, append, setInput,
} = useChat({
  api: `http://localhost:8000/api/v1/dummy-route`,
  id: randomId,
  initialMessages: [],
  streamProtocol: "data",
  credentials: "include",
  experimental_prepareRequestBody: ({ messages }) => ({
    message: messages[messages.length - 1].content,
  }),
  experimental_throttle: 100,
  onError: () => {
    sonnerToast.error('An error occurred, please try again!');
  },
});
And that's the full stack.
If you need to see how to handle rendering tool call responses and all that stuff, check this out: https://github.com/vercel/ai-chatbot/blob/main/components/message.tsx
The whole repository is a great resource for using the useChat hook with tool responses.
This is my first reddit post ever btw. Hope this isn't too lengthy. Good luck and hope this helps.