Code to Canvas
Understand the actual Neurovn code-to-canvas path: your workflow runs locally, Neurovn derives nodes and edges, the backend persists a remote canvas, and you open that canvas in the editor.
Purpose
Use this guide when you want the plain-English mental model before choosing CLI, decorators, or a framework-specific pattern. It explains what is local, what is remote, and when manual import/export is a separate workflow rather than the main ingestion path.
Developer Flow
Instrument or prepare input
Either add `@trace.agent` / `@trace.tool` decorators to Python code or prepare a workflow JSON file for the CLI.
Run locally
Execute your app or `neurovn trace ...` on your machine. Neurovn observes the workflow structure locally first.
Build graph payload
Neurovn turns that run into `nodes` and `edges`, then asks the backend for an estimate.
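The `nodes` and `edges` payload can be pictured as a plain graph structure. This is an illustrative sketch only: the field names (`id`, `type`, `label`, `source`, `target`) are assumptions for clarity, not the SDK's actual schema.

```python
# Hypothetical shape of a derived graph payload (field names are assumptions,
# not the real Neurovn schema).
workflow_graph = {
    "workflow_name": "Research Workflow",
    "nodes": [
        {"id": "planner", "type": "agent", "label": "Planner"},
        {"id": "web_search", "type": "tool", "label": "Web Search"},
    ],
    "edges": [
        # One directed edge: the Planner agent calls the Web Search tool.
        {"source": "planner", "target": "web_search"},
    ],
}

# The estimate request carries a graph like this so the backend can size the canvas.
print(len(workflow_graph["nodes"]), len(workflow_graph["edges"]))  # → 2 1
```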
Persist remote canvas
The backend stores the graph in Neurovn's SDK trace tables and creates or updates a canvas record remotely.
Open the editor
For CLI runs and explicit `trace.session(...)` runs, copy the printed `Open: .../editor/{canvas_id}` URL. Implicit auto-sessions still persist remotely without terminal output.
Open the printed editor link while signed in
When a signed-in user opens an SDK-trace editor link, Neurovn now materializes that traced canvas into the normal My Canvases tables and redirects to the regular canvas ID.
Use manual import/export only when needed
The frontend `.neurovn.json` export/import flow is useful for snapshots, backups, or handoff between people. It is not the primary code-to-canvas ingestion path.
Install & Run
Install
Setup

```bash
python -m venv .venv
source .venv/bin/activate
pip install --upgrade neurovn
```

Most teams start here, then choose either the CLI guide or the decorator guide based on whether they already have workflow JSON or existing Python runtime code.
Run
Execute

```bash
# CLI path
NEUROVN_API_URL=https://agentic-flow.onrender.com neurovn trace ./workflow.json --workflow-name "Research Workflow" --canvas-name "Research Workflow" --source cli

# Decorator path
NEUROVN_API_URL=https://agentic-flow.onrender.com python your_workflow.py
```

Both paths create remote Neurovn canvases. The difference is how the graph is captured: explicit JSON for CLI, observed runtime structure for decorators.
Architecture Flow
Local execution
Your code or workflow JSON lives on your machine. Neurovn first reads or observes it locally.
Backend ingestion
The SDK/CLI posts estimation and trace-session payloads to the Neurovn backend rather than asking the user to upload a file later.
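The two POSTs above can be sketched with plain HTTP. The endpoint paths come from this guide's reference table; the payload fields and response shape are assumptions, and real SDK calls would also handle auth and errors.

```python
import json
from urllib import request

API_URL = "https://agentic-flow.onrender.com"

def post_json(path: str, payload: dict) -> dict:
    # Minimal helper: POST a JSON body and decode the JSON response.
    req = request.Request(
        API_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical two-step ingestion matching the flow described above
# (payload and response fields are assumptions):
# estimate = post_json("/api/estimate", {"nodes": [...], "edges": [...]})
# session = post_json("/api/traces/sessions", {"graph": {...}, "estimate": estimate})
# canvas_id = session["canvas_id"]
```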
Remote canvas
The backend persists a canvas record that the editor later hydrates by `canvas_id`.
Implementation Snippets
Mental model
```text
local code or workflow.json
        ↓
Neurovn derives nodes + edges
        ↓
POST /api/estimate
        ↓
POST /api/traces/sessions
        ↓
backend creates remote canvas
        ↓
open /editor/{canvas_id}
```

CLI path in one command
```bash
NEUROVN_API_URL=https://agentic-flow.onrender.com neurovn trace ./workflow.json --workflow-name "My Workflow" --canvas-name "My Workflow" --source cli
```

Decorator path in one script
```python
from neurovn import trace

@trace.agent(name="Planner", model="gpt-4o", provider="OpenAI")
def plan(query: str) -> dict:
    return {"query": query}

@trace.tool(name="Web Search", tool_id="mcp_web_search", tool_category="mcp_server")
def search(query: str) -> str:
    return "results"

with trace.session("Planner Workflow", source="decorator", canvas_name="Planner Workflow"):
    step = plan("latest model pricing updates")
    docs = search(step["query"])
    print(docs)
```

Manual `.neurovn.json` is a separate path
```text
Use UI export/import when you want:
- a point-in-time canvas snapshot
- a file to share with another person
- an exact reload of canvas layout and metadata

Do not treat it as the required final step after CLI or decorators.
```

Reference
| Item | Details |
|---|---|
| CLI ingest | Local workflow JSON -> `POST /api/estimate` -> `POST /api/traces/sessions` -> remote canvas |
| Decorator ingest | Wrapped runtime execution -> in-memory trace session -> estimate -> `POST /api/traces/sessions` -> remote canvas |
| Session result access | After a successful explicit session, `trace.last_result` contains the latest `canvas_id`, `trace_session_id`, and estimation payload. |
| Canvas storage | Trace-generated canvases and sessions are persisted remotely in `sdk_canvases` and `sdk_trace_sessions`. |
| Manual file flow | UI export/import uses `.neurovn.json` for snapshots and exact reloads of canvas state. |
| Framework reality today | LangGraph and CrewAI guides are primarily decorator patterns or JSON ingestion today, not full automatic parsing of source code. |
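The `trace.last_result` row above implies you can rebuild the editor link yourself after an explicit session. The sketch below models that with a stand-in dataclass; the field names come from the table, but the object's actual shape and the `editor_url` helper are assumptions, not SDK API.

```python
from dataclasses import dataclass

@dataclass
class TraceResult:
    # Stand-in for the SDK's session result; fields mirror the reference table.
    canvas_id: str
    trace_session_id: str
    estimate: dict

def editor_url(result: TraceResult, base: str = "https://agentic-flow.onrender.com") -> str:
    # Rebuild the "Open: .../editor/{canvas_id}" link the CLI prints.
    return f"{base}/editor/{result.canvas_id}"

last_result = TraceResult("canvas-123", "sess-456", {"node_count": 2})
print(editor_url(last_result))  # → https://agentic-flow.onrender.com/editor/canvas-123
```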
Troubleshooting
If you expected a browser window to open automatically, that is not the current implementation. CLI and explicit sessions print a URL, but neither one opens the browser for you.
If you use an explicit `trace.session(...)`, the SDK now prints the summary and editor link for that run. Only implicit auto-sessions remain quiet.
If you expected a local export artifact after a CLI or decorator run, that is not the primary path. Those flows persist remotely.
SDK traces are still persisted in separate backend tables first, but opening the editor link while signed in now imports them into the regular My Canvases experience automatically.
If you want portable snapshots or handoff files, use the frontend `.neurovn.json` export/import flow separately.
If you are integrating LangGraph or CrewAI today, treat the framework guides as capture patterns, not as proof of a full source parser.