STREAMING_&_EVENTS
CORE_CONCEPTS
Build responsive UIs by streaming tokens and listening to agent lifecycle events.
WHY_STREAM
LLMs are slow. A complete response can take seconds or even minutes to arrive. Streaming lets you display tokens as they are generated, reducing perceived latency and improving the user experience.
STANDARD
Request → Wait (3s) → Response
STREAMING
Request → Token (0.1s) → Token (0.2s) → ...
STREAM_API
Instead of agent.run(), use agent.stream(). This returns an async generator that yields events.
stream-example.ts
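A minimal sketch of the pattern, assuming an Agent class imported from an "akios" package whose stream() method yields the { type, data } events listed in the reference table below (the import path and constructor options are illustrative):

```ts
// stream-example.ts
// Sketch only: the "akios" import path and Agent options are assumptions.
import { Agent } from "akios";

const agent = new Agent({ model: "gpt-4o" }); // illustrative configuration

for await (const event of agent.stream("Summarize the latest sales report")) {
  switch (event.type) {
    case "token":
      // Render text chunks as soon as they arrive.
      process.stdout.write(event.data);
      break;
    case "tool_start":
      console.log(`\n[tool] ${event.data.tool}`, event.data.input);
      break;
    case "tool_end":
      console.log(`[tool done] ${event.data.output}`);
      break;
    case "step":
      // A full thought/action cycle finished; useful for progress UIs.
      break;
  }
}
```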
FRONTEND_INTEGRATION
Using the Vercel AI SDK on the frontend with AKIOS on the backend is a powerful combination.
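On the client, a minimal sketch using the AI SDK's useCompletion hook, which POSTs { prompt } to the endpoint and accumulates the streamed text. The import path assumes AI SDK v4+ (it was 'ai/react' in v3), and streamProtocol: 'text' matches the raw-text stream returned by the route shown below:

```tsx
"use client";
// Sketch only: hook options vary across AI SDK versions; adjust for yours.
import { useCompletion } from "@ai-sdk/react";

export default function Chat() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: "/api/chat",
    streamProtocol: "text", // the route streams raw text chunks
  });

  return (
    <form onSubmit={handleSubmit}>
      <p>{completion}</p>
      <input
        value={input}
        onChange={handleInputChange}
        placeholder="Ask the agent..."
      />
    </form>
  );
}
```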
EDGE_RUNTIME
Streaming works best on the Edge. Ensure your API route uses export const runtime = 'edge'.
app/api/chat/route.ts
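A sketch of the route handler, bridging the agent's token events into a web-standard ReadableStream that the browser (or the hook above) can consume. The akios import and Agent configuration are the same assumptions as in stream-example.ts:

```ts
// app/api/chat/route.ts
// Sketch only: the "akios" import path and Agent options are assumptions.
import { Agent } from "akios";

export const runtime = "edge"; // opt in to the Edge runtime

const agent = new Agent({ model: "gpt-4o" }); // illustrative configuration

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const encoder = new TextEncoder();

  // Forward only token events as raw text chunks; tool/step events could
  // be encoded as structured frames if the UI needs them.
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        for await (const event of agent.stream(prompt)) {
          if (event.type === "token") {
            controller.enqueue(encoder.encode(event.data));
          }
        }
        controller.close();
      } catch (err) {
        controller.error(err);
      }
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```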
EVENT_TYPES_REFERENCE
| Event Type | Data Payload | Description |
|---|---|---|
| token | string | A text chunk from the LLM. |
| tool_start | { tool: string, input: any } | Agent decided to call a tool. |
| tool_end | { output: string } | Tool execution completed. |
| step | StepObject | A full thought/action cycle finished. |
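As a reference sketch, the table above corresponds to a discriminated union like the following. These typings are illustrative, not AKIOS's actual exports, and StepObject's shape isn't specified in this section, so it is kept opaque:

```ts
// Illustrative typings derived from the table above.
type StepObject = Record<string, unknown>; // shape unspecified here

type AgentEvent =
  | { type: "token"; data: string }                            // text chunk from the LLM
  | { type: "tool_start"; data: { tool: string; input: any } } // agent decided to call a tool
  | { type: "tool_end"; data: { output: string } }             // tool execution completed
  | { type: "step"; data: StepObject };                        // a full thought/action cycle
```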