---
title: "AG-UI Protocol: Building Agent-Powered UIs for Your App in 2026"
author: "Nate Laquis"
author_role: "Founder & CEO"
date: "2026-02-25"
category: "Technology"
tags:
  - AG-UI protocol guide
  - agent user interface
  - agentic UI development
  - AI agent frontend
  - human-in-the-loop AI
excerpt: "AG-UI completes the MCP/A2A/AG-UI trifecta that every CTO building agent-powered products needs. Here is how it works and how to implement it in your applications."
reading_time: "14 min read"
canonical_url: "https://kanopylabs.com/blog/ag-ui-protocol-agent-user-interfaces"
---

# AG-UI Protocol: Building Agent-Powered UIs for Your App in 2026

## What Is AG-UI and Why Does It Matter?

AG-UI (Agent-User Interaction Protocol) is the missing piece in the agentic AI stack. [MCP](/blog/model-context-protocol-mcp-guide) defines how agents discover and use tools. A2A defines how agents communicate with each other. AG-UI defines how agents communicate with human users through rich, interactive interfaces.

Before AG-UI, every team building agent-powered products invented their own UI protocol. Some streamed plain text. Others sent JSON blobs that the frontend parsed into components. Others hacked together WebSocket messages with custom event types. The result was fragmented, non-portable, and painful to maintain.

AG-UI standardizes this interaction layer. An agent emits a stream of typed events (text chunks, tool calls, state updates, lifecycle signals), and the frontend renders them using a standard component library. Swap the agent backend from Claude to GPT to an open-source model, and the frontend works unchanged. Swap the frontend from React to Vue to a mobile app, and the agent works unchanged.

AWS Bedrock, Microsoft Agent Framework, and Google ADK adopted AG-UI in early 2026. CopilotKit, the leading open-source agentic UI framework, built its entire 2.0 release around it. If you are building products with AI agents, AG-UI is the standard you should follow.

![Developer implementing AG-UI protocol for an agent-powered user interface with streaming event architecture](https://images.unsplash.com/photo-1555949963-ff9fe0c870eb?w=800&q=80)

## The AG-UI Event System

AG-UI uses a streaming event protocol over HTTP (Server-Sent Events) or WebSockets. The agent emits a sequence of typed events that the frontend consumes and renders in real time.

### Core Event Types

- **RUN_STARTED:** Agent begins processing a request. Frontend shows a loading state.

- **TEXT_MESSAGE_START / TEXT_MESSAGE_CONTENT / TEXT_MESSAGE_END:** Streaming text output, token by token. Frontend renders progressively like ChatGPT.

- **TOOL_CALL_START / TOOL_CALL_ARGS / TOOL_CALL_END:** Agent invokes a tool. Frontend renders the tool call with its arguments, optionally showing the tool's UI component.

- **STATE_SNAPSHOT / STATE_DELTA:** Agent shares its internal state. Frontend can render state as a live dashboard (progress bars, data tables, charts).

- **STEP_STARTED / STEP_FINISHED:** Agent begins/completes a step in a multi-step plan. Frontend shows a progress timeline.

- **RUN_FINISHED / RUN_ERROR:** Agent completes or fails. Frontend updates the final state.

### Event Envelope

Every event includes: a unique event ID, a timestamp, the event type, and a JSON payload. Events arrive in order over a single stream. The frontend maintains a local state machine that transitions based on event types, building the complete conversation and agent state progressively.
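The envelope described above can be sketched in TypeScript. The field names used here (`id`, `timestamp`, `type`, `payload`) are illustrative, not the normative wire format; consult the AG-UI specification for the exact field names:

```typescript
// Sketch of an AG-UI event envelope. Field names are assumptions
// based on the description above, not the official wire format.
type AgUiEventType =
  | "RUN_STARTED" | "RUN_FINISHED" | "RUN_ERROR"
  | "TEXT_MESSAGE_START" | "TEXT_MESSAGE_CONTENT" | "TEXT_MESSAGE_END"
  | "TOOL_CALL_START" | "TOOL_CALL_ARGS" | "TOOL_CALL_END"
  | "STATE_SNAPSHOT" | "STATE_DELTA"
  | "STEP_STARTED" | "STEP_FINISHED";

interface AgUiEvent {
  id: string;                       // unique event ID
  timestamp: number;                // Unix epoch milliseconds
  type: AgUiEventType;
  payload: Record<string, unknown>; // event-specific JSON body
}

// Parse one SSE `data:` line into a typed event, or null if malformed.
function parseEvent(data: string): AgUiEvent | null {
  try {
    const e = JSON.parse(data);
    if (typeof e.id === "string" && typeof e.type === "string") {
      return e as AgUiEvent;
    }
    return null;
  } catch {
    return null;
  }
}
```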

### Why Streaming Matters

Agents that use tools, search databases, and reason through multi-step plans can take 10 to 60 seconds to complete a request. Without streaming, users stare at a spinner. With AG-UI streaming, users see the agent's thinking process: "Searching your CRM... found 12 matching accounts... analyzing usage patterns... generating recommendations." This transparency builds trust and substantially reduces perceived latency.

## Tool Call Rendering

When an agent calls a tool (query a database, fetch an API, run a calculation), the frontend needs to render what is happening. AG-UI standardizes this with the TOOL_CALL event sequence.

### Basic Tool Rendering

The simplest approach: show the tool name and arguments as the agent calls them. "Searching CRM for accounts in healthcare sector with revenue > $1M." When the tool returns, show a summary of the result. This gives users transparency without requiring custom UI components for every tool.

### Rich Tool Components

For common tools, build dedicated UI components. A "search" tool renders results as a card grid. A "chart" tool renders an interactive chart. A "form" tool renders an input form for the user to fill in. Map tool names to React components using a registry pattern. When the agent emits a TOOL_CALL_START with tool_name: "search_accounts", the frontend looks up the SearchResults component and renders it with the tool arguments and results.
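A minimal version of that registry pattern, with hypothetical tool names (`search_accounts`, `render_chart`) and plain render functions standing in for React components to keep the sketch framework-agnostic:

```typescript
// Tool-renderer registry sketch. Tool names and render output are
// illustrative; in a React app the values would be components.
type ToolRenderer = (args: Record<string, unknown>) => string;

const toolRegistry: Record<string, ToolRenderer> = {
  search_accounts: (args) => `SearchResults(query=${String(args.query)})`,
  render_chart: (args) => `Chart(type=${String(args.type)})`,
};

// Fall back to a generic card for tools with no dedicated component.
function renderToolCall(toolName: string, args: Record<string, unknown>): string {
  const renderer = toolRegistry[toolName];
  return renderer ? renderer(args) : `GenericToolCard(${toolName})`;
}
```

The fallback matters in practice: agents gain new tools over time, and the UI should degrade gracefully rather than break when it sees an unregistered tool name.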

### Generative UI

The most advanced pattern: the agent generates the UI dynamically. Instead of mapping tool names to pre-built components, the agent emits a UI specification (JSON schema describing a table, chart, or form) and the frontend renders it dynamically. This approach is more flexible but requires careful sandboxing to prevent arbitrary code execution.

### Human-in-the-Loop Confirmation

For high-stakes actions (sending an email, updating a database, making a purchase), the agent emits a TOOL_CALL event with a "requires_confirmation" flag. The frontend renders a confirmation dialog showing what the agent wants to do. The user approves or rejects. The decision is sent back to the agent via a response event. This pattern is critical for production agent deployments where mistakes have real consequences.

## State Management and Shared Context

AG-UI's state events (STATE_SNAPSHOT and STATE_DELTA) enable something powerful: shared state between the agent and the frontend. The agent updates state, and the frontend reactively renders it.

### Use Cases for Shared State

- A research agent populates a structured findings object as it works. The frontend renders a live dashboard of discoveries, sources, and confidence scores.

- A data analysis agent builds a report iteratively. The frontend shows the report taking shape in real time, with charts appearing as data is processed.

- A planning agent builds an itinerary step by step. The frontend shows a visual timeline that grows as the agent adds activities.

### Implementation Pattern

Define a state schema (TypeScript interface) shared between agent and frontend. The agent emits STATE_SNAPSHOT with the complete state object at the start. As processing continues, it emits STATE_DELTA events with JSON Patch operations (add, replace, remove) that update specific fields. The frontend applies deltas to its local copy of state and re-renders affected components.
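A stripped-down delta applier covering the three operations mentioned above. A production app would use a full RFC 6902 library such as `fast-json-patch`; this sketch skips escaping rules and array-insertion edge cases:

```typescript
// Minimal JSON Patch (RFC 6902 subset) applier: add, replace, remove.
interface PatchOp {
  op: "add" | "replace" | "remove";
  path: string; // e.g. "/report/sections/0"
  value?: unknown;
}

function applyDelta(state: any, ops: PatchOp[]): any {
  // Deep-copy via JSON round-trip so the previous state stays intact.
  const next = JSON.parse(JSON.stringify(state));
  for (const { op, path, value } of ops) {
    const keys = path.split("/").slice(1);
    const last = keys.pop()!;
    let target = next;
    for (const k of keys) target = target[k];
    if (op === "remove") {
      Array.isArray(target) ? target.splice(Number(last), 1) : delete target[last];
    } else {
      target[last] = value; // add and replace behave alike in this sketch
    }
  }
  return next;
}
```

Because `applyDelta` returns a new object instead of mutating in place, it drops straight into React state updates, where referential equality drives re-rendering.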

![Global network architecture diagram showing AG-UI protocol event streaming between AI agents and user interfaces](https://images.unsplash.com/photo-1451187580459-43490279c0fa?w=800&q=80)

### Optimistic State and Rollback

Sometimes agents revise their work. A research agent might find new information that contradicts earlier findings. The STATE_DELTA mechanism handles this naturally: the agent sends a "replace" operation that overwrites the outdated value. The frontend updates reactively. For complex state changes, emit a full STATE_SNAPSHOT to reset the frontend to a known-good state rather than a long sequence of deltas.

## Implementing AG-UI in React

Here is a practical guide to implementing AG-UI in a React application using the CopilotKit framework or a custom implementation.

### Using CopilotKit (Fastest Path)

CopilotKit provides pre-built React components that consume AG-UI event streams. Install @copilotkit/react-core and @copilotkit/react-ui. The CopilotChat component handles the event stream, renders text messages, shows tool calls, and manages conversation state. Customize with your own tool components by registering them with the useCopilotAction hook.

### Custom Implementation

If you need full control, build your own AG-UI consumer. Connect to the event stream using EventSource (for SSE) or a WebSocket client. Maintain a state machine with useReducer that processes events. Render text events into a message list with streaming animation. Render tool calls by mapping tool_name to registered components. Apply state deltas using the immer library for immutable state updates.
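The event-driven state machine can be sketched as a pure reducer, shown here with a reduced event set and an illustrative payload field (`delta`); in React this function would be passed to `useReducer`:

```typescript
// Pure reducer sketch for the frontend state machine.
interface UiState {
  running: boolean;
  messages: string[]; // completed assistant messages
  draft: string;      // message currently streaming in
}

type AgEvent =
  | { type: "RUN_STARTED" }
  | { type: "TEXT_MESSAGE_START" }
  | { type: "TEXT_MESSAGE_CONTENT"; delta: string }
  | { type: "TEXT_MESSAGE_END" }
  | { type: "RUN_FINISHED" };

const initialState: UiState = { running: false, messages: [], draft: "" };

function reducer(state: UiState, event: AgEvent): UiState {
  switch (event.type) {
    case "RUN_STARTED":
      return { ...state, running: true };
    case "TEXT_MESSAGE_START":
      return { ...state, draft: "" };
    case "TEXT_MESSAGE_CONTENT":
      return { ...state, draft: state.draft + event.delta };
    case "TEXT_MESSAGE_END":
      return { ...state, messages: [...state.messages, state.draft], draft: "" };
    case "RUN_FINISHED":
      return { ...state, running: false };
  }
}
```

Because the reducer is pure, replaying a recorded event stream through it reproduces the exact UI state, which is also what makes the replay-based testing described later possible.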

### Key Components to Build

- **AgentMessage:** Renders streaming text with Markdown support.

- **ToolCallCard:** Shows tool name, arguments, status (running/completed/failed), and results.

- **StepTimeline:** Visual progress indicator for multi-step agent workflows.

- **ConfirmationDialog:** Human-in-the-loop approval for tool calls that require confirmation.

- **StateRenderer:** Dynamic component that renders agent state as tables, charts, or custom layouts.

### Performance Optimization

AG-UI event streams can emit hundreds of events per second during text streaming. Use React.memo on child components to prevent unnecessary re-renders. Batch state updates using requestAnimationFrame for smooth text streaming. Virtualize long conversation histories to prevent DOM bloat.

## AG-UI with MCP and A2A: The Complete Stack

AG-UI does not work in isolation. It is the frontend layer of a three-protocol stack that enables fully agentic applications.

### The Three-Protocol Architecture

- **MCP (Model Context Protocol):** Agents discover and invoke tools. Your agent connects to MCP servers that expose database queries, API calls, file operations, and custom business logic as tools.

- **A2A (Agent-to-Agent Protocol):** Agents delegate tasks to other agents. Your orchestrator agent sends a research task to a research agent, a writing task to a writing agent, and a review task to a review agent.

- **AG-UI:** The orchestrator agent streams its progress, tool calls, and results to the user's browser via AG-UI events.

### Example: AI Sales Assistant

The user asks: "Prepare a proposal for Acme Corp." The orchestrator agent:

1. Uses MCP to query the CRM for Acme's account details.
2. Delegates market research to a research agent via A2A.
3. Delegates proposal writing to a writing agent via A2A.
4. Streams progress to the user via AG-UI, showing "Researching Acme Corp... Analyzing their industry... Drafting proposal sections..."

The user sees the proposal taking shape in real time and can intervene at any point.

### Backend Architecture

Your backend runs the orchestrator agent (Claude, GPT-4, or an open-source model). The agent has MCP client connections to your tool servers. The agent has A2A client connections to specialized sub-agents. The backend exposes an SSE or WebSocket endpoint that emits AG-UI events. The frontend connects to this endpoint and renders the interaction. Use LangGraph, CrewAI, or the Anthropic Agent SDK to orchestrate the multi-agent workflow.
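On the wire, one option is to emit each AG-UI event as a Server-Sent Events frame. This illustrative helper shows the frame shape; the SSE `id:` field doubles as the replay cursor for reconnection:

```typescript
// Format one AG-UI event as an SSE frame. Field layout is
// illustrative; SSE itself requires the blank line as a frame
// terminator and `data:` to carry the payload.
function toSseFrame(id: string, type: string, payload: unknown): string {
  return `id: ${id}\nevent: ${type}\ndata: ${JSON.stringify(payload)}\n\n`;
}
```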

## Production Considerations and Best Practices

Shipping AG-UI in production requires handling edge cases that demos skip over.

### Error Handling

- Tool calls can fail (API timeout, rate limit, invalid input). Render errors gracefully with retry options.

- Agent responses can be irrelevant or hallucinated. Implement feedback buttons (thumbs up/down) to collect quality signals.

- Stream disconnections happen. Implement automatic reconnection with event replay from the last received event ID.
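A small helper for the replay step, assuming events carry the envelope's unique ID. With SSE, the browser sends the last ID back automatically via the `Last-Event-ID` request header; the client still needs to drop any events the server replays past that cursor:

```typescript
// Drop replayed events up to and including the last ID the client
// processed before the disconnect.
function dropReplayed<T extends { id: string }>(
  events: T[],
  lastSeenId: string | null,
): T[] {
  if (lastSeenId === null) return events; // fresh connection: keep all
  const idx = events.findIndex((e) => e.id === lastSeenId);
  return idx === -1 ? events : events.slice(idx + 1);
}
```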

### Rate Limiting and Cost Control

Agents making tool calls and LLM requests can burn through API budgets fast. Implement per-user rate limits (e.g., 10 agent runs per hour on the free tier). Track token usage per request and display estimated cost in admin dashboards. Set hard budget caps per workspace per month with automated alerts at 80% usage.
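A minimal fixed-window limiter matching the example quota above, with an injectable clock for testing. This is a sketch, not a production limiter; real deployments usually back this with Redis so limits survive restarts and apply across servers:

```typescript
// Per-user fixed-window rate limiter sketch.
function createRateLimiter(
  limit: number,
  windowMs: number,
  now: () => number = Date.now,
): (userId: string) => boolean {
  const windows = new Map<string, { start: number; count: number }>();
  return (userId) => {
    const t = now();
    const w = windows.get(userId);
    if (!w || t - w.start >= windowMs) {
      windows.set(userId, { start: t, count: 1 });
      return true; // allowed: first run in a fresh window
    }
    if (w.count < limit) {
      w.count += 1;
      return true; // allowed: still under quota
    }
    return false;  // rejected: over quota for this window
  };
}
```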

### Security

Validate all tool call arguments on the server before execution. Never trust tool call parameters from the agent without validation. Implement authentication on the SSE/WebSocket endpoint. Use separate API keys for different tool access levels. Log all agent actions for audit trails. Read more about [AI agents for business](/blog/ai-agents-for-business) for additional security patterns.

![Code on monitor showing AG-UI protocol implementation with event streaming and tool rendering logic](https://images.unsplash.com/photo-1461749280684-dccba630e2f6?w=800&q=80)

### Testing

Create mock event streams for frontend testing without running real agents. Record production event streams for replay testing. Test with slow and interrupted connections. Load test with concurrent agent sessions to validate WebSocket server capacity.

### Getting Started

Start with CopilotKit for the fastest path to a working AG-UI implementation. Build a simple chatbot that uses one MCP tool. Then expand to multi-step workflows with tool rendering and state management. The AG-UI specification is open and evolving, so follow the CopilotKit GitHub repo for the latest updates and best practices.

Ready to build agent-powered UIs for your product? [Book a free strategy call](/get-started) to plan your AG-UI implementation and agentic architecture.

---

*Originally published on [Kanopy Labs](https://kanopylabs.com/blog/ag-ui-protocol-agent-user-interfaces)*
