
Generative UI: The Technology Landscape for AI-Generated Interfaces

POCKLA Team · 7 min read

Three major approaches are emerging for generative UI: declarative protocols like Google's A2UI, agent-UI standards like AG-UI, and code generation tools like v0.dev. Here's how they work and what they mean for AI products.


Three approaches are taking shape in generative UI, each tackling a different part of the problem. Declarative UI protocols like Google's A2UI have the AI generate JSON that renders as native components. Agent-UI standards like AG-UI define how agents talk to frontends. Code generation tools like v0.dev and Claude Artifacts write actual component code. What these share is a move away from chat interfaces where AI produces text that humans interpret, toward AI producing interface structures that render as visual, interactive experiences.

The difference matters. Chat interfaces force users through back-and-forth conversations to do simple things. Generative UI lets agents produce forms, charts, and controls that let users specify what they want more directly. That's a different kind of interaction.

Google A2UI: declarative JSON for UI

A2UI (Agent-to-User Interface) was announced by Google in December 2025 as an open protocol for declarative UI generation. The idea is simple: agents generate JSON describing their UI intent, and client apps render that JSON using their native component libraries. The same agent output can render differently across React, Flutter, SwiftUI, or any other framework.

The security model is worth mentioning. A2UI is declarative data, not executable code. Clients keep a catalog of pre-approved components, and agents can only request components from that catalog. No arbitrary code execution from LLM output.

The protocol is designed to be LLM-friendly, representing UI as a flat list with ID references that models can generate incrementally. This supports progressive rendering as the model streams its response, so users can see and interact with partially-generated interfaces. Instead of a multi-turn conversation asking "What date? What time? How many people?" for a restaurant booking, an A2UI-enabled agent generates a form with date picker, time selector, and party size controls in one response.
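To make the flat-list structure concrete, here is a minimal sketch of what an A2UI-style payload and the client-side catalog check might look like. The field names (`id`, `type`, `children`) and component names are illustrative assumptions, not the actual A2UI schema:

```typescript
// Illustrative sketch: a flat list of components referencing each other
// by id, which a model can emit incrementally. Field names are
// assumptions, not the A2UI spec.
type UINode = {
  id: string;
  type: string;                  // must exist in the client's catalog
  props?: Record<string, unknown>;
  children?: string[];           // id references keep the list flat
};

// Client-side catalog of pre-approved components (the security boundary).
const catalog = new Set(["Form", "DatePicker", "TimePicker", "Stepper"]);

// Hypothetical agent output for the restaurant-booking example.
const payload: UINode[] = [
  { id: "root", type: "Form", children: ["date", "time", "party"] },
  { id: "date", type: "DatePicker", props: { label: "Date" } },
  { id: "time", type: "TimePicker", props: { label: "Time" } },
  { id: "party", type: "Stepper", props: { label: "Party size", min: 1 } },
];

// Reject any node whose type is outside the catalog, and any dangling
// child reference. No code from the model is ever executed.
function validate(nodes: UINode[]): boolean {
  const ids = new Set(nodes.map((n) => n.id));
  return nodes.every(
    (n) => catalog.has(n.type) && (n.children ?? []).every((c) => ids.has(c))
  );
}

const ok = validate(payload);
```

Because each node is self-contained and referenced by id, a renderer can draw nodes as they arrive rather than waiting for a complete tree.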

A2UI is currently at version 0.8 in public preview under Apache 2.0 licensing, with renderer libraries for Flutter, Web Components, and Angular. React and SwiftUI are planned.

Vercel json-render: constrained components

Vercel's json-render takes a similar declarative approach but with tighter constraints. Developers define an explicit catalog of allowed components (cards, charts, forms) and AI can only generate JSON that maps to this vocabulary. The result is predictable: JSON output matches the schema every time, streams and renders progressively, and stays within safe boundaries.
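The catalog-mapping idea can be sketched as a registry from JSON `type` strings to render functions, where anything outside the registry is rejected. The component names and `Spec` shape below are illustrative, not the `@json-render/react` API:

```typescript
// Hedged sketch of a constrained renderer: model output may only
// reference components registered in an explicit catalog.
type Spec = { type: string; props: Record<string, unknown> };

// The developer-defined vocabulary (cards, charts, forms, ...).
const registry: Record<string, (props: Record<string, unknown>) => string> = {
  card: (p) => `<div class="card">${p.title}</div>`,
  chart: (p) => `<canvas data-kind="${p.kind}"></canvas>`,
};

function render(spec: Spec): string {
  const component = registry[spec.type];
  if (!component) throw new Error(`Unknown component: ${spec.type}`);
  return component(spec.props);
}

const html = render({ type: "card", props: { title: "Revenue" } });
```

The constraint is the point: because the schema is closed, the output is predictable and safe to render without review.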

The @json-render/react package provides the building blocks: a DataProvider for supplying context, an ActionProvider for handling interactions, a Renderer for mapping JSON to components, and useUIStream for streaming AI responses directly to UI elements.

The Vercel AI SDK extends this with a React Server Components approach where LLMs stream UI components directly. Tools are defined with React render functions, so when a model calls a tool like showWeather, the tool executes, fetches data, and returns a rendered React component in one flow. The August 2025 AI Elements release is an open source library of customizable components including message threads, input boxes, reasoning panels, and response actions built on shadcn/ui.
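The "tool call returns rendered UI" pattern can be sketched roughly as follows. This is not the AI SDK's actual API; the names (`showWeather`, `renderWeatherCard`) are hypothetical, and a real handler would be asynchronous:

```typescript
// Hedged sketch: when the model calls a tool, the handler fetches data
// and returns a rendered fragment in one flow. All names are illustrative.
type ToolCall = { name: string; args: Record<string, unknown> };

// Stand-in for a real weather API call (a production version is async).
function fetchWeather(city: string): { tempC: number } {
  return { tempC: 21 };
}

function renderWeatherCard(city: string, tempC: number): string {
  return `<section><h3>${city}</h3><p>${tempC}&deg;C</p></section>`;
}

// Tools are defined with render functions attached to their handlers.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  showWeather: (args) => {
    const city = String(args.city);
    const { tempC } = fetchWeather(city);
    return renderWeatherCard(city, tempC);
  },
};

function handle(call: ToolCall): string {
  const tool = tools[call.name];
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool(call.args);
}

const card = handle({ name: "showWeather", args: { city: "Oslo" } });
```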

AG-UI: the agent-frontend protocol

AG-UI (Agent-User Interaction Protocol) from CopilotKit approaches the problem differently. Rather than specifying how to render UI, AG-UI standardizes the communication layer between agentic backends and agentic frontends. It's an event-based protocol carrying messages, tool calls, state patches, lifecycle signals, and human-in-the-loop approvals.

The distinction from A2UI matters: A2UI is a generative UI specification focused on delivering widgets, while AG-UI is an interaction protocol focused on the connection between agent and frontend. They complement each other. AG-UI can transport A2UI payloads as part of its event stream.
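An AG-UI-style event stream can be sketched as a tagged union of event types that the frontend folds into its state. The event names and shapes here are illustrative assumptions, not the actual AG-UI protocol vocabulary:

```typescript
// Hedged sketch of an agent-to-frontend event stream: messages, tool
// calls, state patches, and lifecycle signals. Names are illustrative.
type AgentEvent =
  | { kind: "message"; role: "assistant"; text: string }
  | { kind: "tool_call"; tool: string; args: Record<string, unknown> }
  | { kind: "state_patch"; path: string; value: unknown }
  | { kind: "lifecycle"; phase: "started" | "finished" };

type FrontendState = { messages: string[]; state: Record<string, unknown> };

// The frontend folds each event into its state as it arrives.
function apply(s: FrontendState, e: AgentEvent): FrontendState {
  switch (e.kind) {
    case "message":
      return { ...s, messages: [...s.messages, e.text] };
    case "state_patch":
      return { ...s, state: { ...s.state, [e.path]: e.value } };
    default:
      return s; // tool calls and lifecycle handled elsewhere
  }
}

const events: AgentEvent[] = [
  { kind: "lifecycle", phase: "started" },
  { kind: "message", role: "assistant", text: "Booking your table." },
  { kind: "state_patch", path: "booking.partySize", value: 4 },
  { kind: "lifecycle", phase: "finished" },
];

const result = events.reduce(apply, { messages: [], state: {} });
```

Note that nothing in the stream says how to draw anything; a payload like A2UI JSON could ride inside one of these events.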

AG-UI has picked up adoption. Microsoft and GitHub joined the steering committee at Build 2025, Oracle adopted it for their Agent Spec, and it works with LangGraph, Mastra, and Pydantic AI. That cross-industry support suggests AG-UI may become the standard transport layer for agent-frontend communication.

MCP Apps: sandboxed iframes for agents

MCP Apps is a November 2025 extension to Anthropic's Model Context Protocol that enables servers to present visual information via sandboxed iframes. The specification introduces UI templates as resources with a ui:// URI scheme, where HTML renders in sandboxed iframes with bidirectional communication between iframe and host.

The security model uses multiple protection layers: iframe sandboxing, predeclared templates only, auditable message passing, and user consent mechanisms. Both Anthropic and OpenAI collaborated on this specification, which suggests convergence toward common standards for agent-generated visual interfaces.
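The predeclared-template constraint can be sketched as a host-side lookup: the host only renders `ui://` resources it already knows about, inside a sandboxed iframe. The URIs and template contents below are hypothetical, not the MCP Apps API:

```typescript
// Hedged sketch of the MCP Apps idea: UI templates are predeclared
// resources under a ui:// scheme; nothing else gets rendered.
const templates = new Map<string, string>([
  ["ui://weather/card", "<article>Weather card template</article>"],
]);

function resolveTemplate(uri: string): string {
  if (!uri.startsWith("ui://")) throw new Error("Not a UI resource");
  const html = templates.get(uri);
  if (html === undefined) throw new Error("Template not predeclared");
  return html; // the host renders this inside a sandboxed iframe
}

const fragment = resolveTemplate("ui://weather/card");
```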

v0 and Artifacts: code generation

Where declarative approaches like A2UI constrain AI to predefined components, code generation tools let AI write actual code. v0.dev from Vercel converts natural language prompts into production-ready React code with Tailwind CSS and shadcn/ui. The tradeoff is clear: code generation offers maximum flexibility but requires human review before deployment, while declarative approaches guarantee safety but are limited to the component catalog.

Claude Artifacts takes a similar approach, generating runnable code (React, HTML, SVG, Mermaid diagrams) that renders in a dedicated panel alongside the conversation. The June 2025 updates added AI capabilities embedded in artifacts, MCP integration for connecting to external services like Asana and Slack, and stateful persistence across sessions.

Both tools work well for rapid prototyping and common UI patterns but operate differently from declarative protocols. They produce code for humans to review and deploy, not runtime-interpreted schemas for immediate rendering.

The stack so far

The landscape is forming into distinct layers. At the top: the user interface layer with native widgets in React, Flutter, or SwiftUI and component libraries like shadcn/ui and Material. Below that, the rendering layer handles translation via A2UI renderers for declarative JSON, json-render for constrained catalogs, and sandboxed execution for code artifacts.

The transport and protocol layer includes AG-UI for agent-frontend event streams, MCP for context and tools, and A2A for agent-to-agent communication. At the foundation, the generation layer encompasses the LLMs themselves (Gemini, Claude, GPT) along with their tool definitions and prompts.

This separation lets innovation happen at each layer independently. A new LLM can generate better A2UI JSON without changing the renderer. A new frontend framework can implement an A2UI renderer without changing how agents generate output.

Open questions

Several technical questions need resolution. How should interfaces handle progressive rendering without disorienting users as components appear and shift? What's the right component vocabulary for different domains? Should a financial app have different primitives than a creative tool? How should generated UI handle LLM hallucinations or invalid JSON?
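The invalid-output question at least has a pragmatic answer today: parse the model's JSON defensively and degrade to plain text when it fails. A minimal sketch, with an assumed single-component spec shape:

```typescript
// Hedged sketch: validate generated JSON against an allowed component
// set, falling back to text for malformed or hallucinated output.
type Rendered = { kind: "ui"; spec: unknown } | { kind: "text"; text: string };

function safeRender(raw: string, allowed: Set<string>): Rendered {
  try {
    const spec = JSON.parse(raw);
    if (typeof spec.type !== "string" || !allowed.has(spec.type)) {
      // Hallucinated or unknown component type: show the raw text instead.
      return { kind: "text", text: raw };
    }
    return { kind: "ui", spec };
  } catch {
    // Malformed JSON: same graceful fallback.
    return { kind: "text", text: raw };
  }
}
```

Falling back to text preserves the chat baseline: the user always sees something, and only validated output is promoted to interactive UI.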

The cognitive questions may matter more. Traditional interface construction involves negotiation and iteration that builds shared understanding. When AI generates interfaces instantly, does that speed come at the cost of user comprehension? Generated UI addresses the functional transience of chat (persistent visual artifacts replace ephemeral text) but introduces new challenges around making sure users actually understand what they're interacting with.

The design implications point toward hybrid approaches: use generative UI for convergent phases once intent is clear, preserve conversation for divergent exploration and clarification, build in grounding checkpoints before finalizing, and enable user modification to capture some construction benefit.

Where this is going

Generative UI may be the first real shift in interface paradigms in decades. Users no longer manually specify interface elements; AI systems generate task-specific UIs from natural language. The technology is moving quickly: Google's A2UI reached public preview within months of announcement, AG-UI has backing from Microsoft and Oracle, and tools like v0.dev and Claude Artifacts are already in production use.

The convergence on standards is worth noting. Declarative protocols provide safety through constraint, code generation provides flexibility through review, and transport protocols like AG-UI allow these approaches to coexist. As these standards mature, the question shifts from whether AI can generate interfaces to how those interfaces should fit into human workflows.

For teams building AI products: generative UI is ready to evaluate now. The protocols are open, the tools are available, and the architectural patterns are stabilizing.

