The Canvas extension lets users request edits to specific parts of artifacts (code, documents, or other structured content) that your agent has generated. Instead of describing the desired change in free-form text alone, a user can select a portion of an artifact in the UI and request an edit, giving your agent precise information about what to modify.

How Canvas Works

When a user selects a portion of an artifact and requests an edit:
  1. The selection’s start and end indices (character positions) are captured.
  2. The user provides a description of what they want to change.
  3. The artifact ID identifies which artifact to edit.
  4. Your agent receives this structured information to process the edit request.
This allows your agent to:
  • Know exactly which part of the artifact the user wants to modify.
  • Access the original artifact content from history.
  • Make targeted edits based on the user’s description.
  • Generate a new artifact with the changes (using the same artifact_id to replace the previous version in the UI).
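At its core, a targeted edit is just a splice of the selected character range into the original text. A minimal illustration in plain Python (apply_edit is a hypothetical helper, not part of the SDK):

```python
# Illustrative only: applying a targeted edit with the selection
# indices a canvas edit request carries (start/end are character
# positions into the original artifact text).
def apply_edit(original: str, start: int, end: int, replacement: str) -> str:
    """Replace the selected span [start:end) with the edited text."""
    if not (0 <= start <= end <= len(original)):
        raise ValueError("selection indices out of bounds")
    return original[:start] + replacement + original[end:]

code = "print('helo')"
updated = apply_edit(code, 7, 11, "hello")  # the selection covered 'helo'
```

In practice the replacement text comes from your LLM, as shown in the example below, but the splice itself stays this simple.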

Example: Canvas with LLM

Here’s how to use canvas with an LLM, adapting your system prompt based on whether you’re generating new content or editing existing content:
# Copyright 2025 © BeeAI a Series of LF Projects, LLC
# SPDX-License-Identifier: Apache-2.0

from typing import Annotated

from a2a.types import Message, TextPart

from agentstack_sdk.a2a.extensions import LLMServiceExtensionServer, LLMServiceExtensionSpec
from agentstack_sdk.a2a.extensions.ui.canvas import (
    CanvasExtensionServer,
    CanvasExtensionSpec,
)
from agentstack_sdk.a2a.types import AgentArtifact
from agentstack_sdk.server import Server
from agentstack_sdk.server.context import RunContext

server = Server()

BASE_PROMPT = """\
You are a helpful coding assistant.
Generate code enclosed in triple-backtick blocks tagged ```python.
The first line should be a comment with the code's purpose.
"""

EDIT_PROMPT = (
    BASE_PROMPT
    + """
You are editing existing code. The user selected this portion:
```python
{selected_code}
```

They want: {description}

Respond with the FULL updated code. Only change the selected portion.
"""
)


def get_system_prompt(canvas_edit):
    if not canvas_edit:
        return BASE_PROMPT

    # Check if parts list is not empty and first part is TextPart
    if not canvas_edit.artifact.parts or not isinstance(canvas_edit.artifact.parts[0].root, TextPart):
        return BASE_PROMPT

    original_code = canvas_edit.artifact.parts[0].root.text

    # Validate indices are within bounds
    if not (0 <= canvas_edit.start_index <= canvas_edit.end_index <= len(original_code)):
        return BASE_PROMPT

    selected = original_code[canvas_edit.start_index : canvas_edit.end_index]

    return EDIT_PROMPT.format(selected_code=selected, description=canvas_edit.description)


async def call_llm(llm: LLMServiceExtensionServer, system_prompt: str, message: Message):
    """Call your LLM with the adapted prompt (implementation depends on your LLM framework)."""
    # As a placeholder, we return a mock response.
    example = "```python\n# Hard-coded example (no LLM used). Above is the prompt to use. This is the fake response.\nprint('Hello from LLM!')\n```"
    artifact = AgentArtifact(
        name="Response",
        parts=[
            TextPart(text=system_prompt),  # This is just for demonstration. Replace with actual LLM call.
            TextPart(text=example),  # This is just for demonstration. Replace with actual LLM call.
        ],
    )
    return artifact


@server.agent()
async def code_agent(
    message: Message,
    context: RunContext,
    llm: Annotated[LLMServiceExtensionServer, LLMServiceExtensionSpec.single_demand()],
    canvas: Annotated[CanvasExtensionServer, CanvasExtensionSpec()],
):
    await context.store(message)
    canvas_edit = await canvas.parse_canvas_edit_request(message=message)

    # Adapt system prompt based on whether this is an edit or new generation
    system_prompt = get_system_prompt(canvas_edit)

    artifact = await call_llm(llm, system_prompt, message)
    yield artifact

    await context.store(artifact)


if __name__ == "__main__":
    server.run()
  1. Import the canvas extension: import CanvasExtensionServer and CanvasExtensionSpec from agentstack_sdk.a2a.extensions.ui.canvas.
  2. Inject the extension: add a canvas parameter to your agent function using the Annotated type hint with CanvasExtensionSpec().
  3. Parse edit requests: call await canvas.parse_canvas_edit_request(message=message) to check whether the incoming message contains a canvas edit request. This returns None if no edit request is present, or a CanvasEditRequest object with:
    • start_index: the starting character position of the selected text
    • end_index: the ending character position of the selected text
    • description: the user's description of what they want to change
    • artifact: the full original artifact object from history
  4. Access the original content: extract the text from artifact.parts[0].root.text (for text artifacts) into a content variable and use the start/end indices to get the selected portion: selected_text = content[start_index:end_index].
  5. Return a new artifact: create a new artifact with your changes.

How to work with Canvas

  • Artifacts in history: the extension automatically retrieves the original artifact from history using the artifact_id. If it is not found, a ValueError is raised.
  • Text parts filtering: the extension filters out fallback text messages (sent for agents that don't support canvas), so you only work with structured edit request data.
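If you would rather have a missing artifact degrade gracefully than fail the run, one option is to catch that ValueError and fall back to plain generation. A sketch (safe_parse is a hypothetical wrapper around the parse call shown in the example above):

```python
# Sketch: treat a canvas edit whose original artifact is missing from
# history as an ordinary generation request instead of failing.
async def safe_parse(canvas, message):
    try:
        return await canvas.parse_canvas_edit_request(message=message)
    except ValueError:
        return None  # artifact not found in history
```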

Best Practices

  • Adapt your system prompt: provide different instructions to your LLM depending on whether you are generating new content or editing existing content.
  • Validate indices: ensure the start and end indices are within bounds before slicing the artifact text.
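The example above rejects out-of-bounds indices by falling back to BASE_PROMPT; an alternative is to clamp them into range so slicing always succeeds. An illustrative helper (not part of the SDK):

```python
# Clamp selection indices so slicing never raises or selects outside
# the artifact text, even for malformed requests.
def clamp_selection(content: str, start: int, end: int) -> tuple[int, int]:
    start = max(0, min(start, len(content)))
    end = max(start, min(end, len(content)))
    return start, end
```

Which strategy fits depends on your UX: rejecting keeps the user's intent explicit, clamping is more forgiving of off-by-one selections.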

Examples

For more examples, see: