Decorator Integration
Install `neurovn` from PyPI and instrument Python sync/async functions with `trace.agent` and `trace.tool` decorators to emit runtime traces without changing business logic.
Purpose
Use decorator integration when workflows are already implemented in Python and you want runtime-observed traces with minimal code changes.
Developer Flow
Install from PyPI
Create a virtualenv and run `pip install --upgrade neurovn`.
Point at your backend
Set `NEUROVN_API_URL=https://agentic-flow.onrender.com` so emitted sessions go to the hosted backend.
Annotate functions
Add `@trace.agent` to model steps and `@trace.tool` to external call handlers.
Optionally scope session
Wrap execution in `with trace.session(...)` to group multiple top-level calls.
Execute app
Run your existing app/test flow; decorators capture context, return shape, and duration.
Review persisted canvas
On explicit session completion, Neurovn posts the trace, creates a remote canvas, and prints a summary with `canvas_id`, `trace_session_id`, and editor URL. Implicit auto-sessions still persist quietly.
Install & Run
Install
Setup:

```bash
python -m venv .venv
source .venv/bin/activate
pip install --upgrade neurovn
```

The published install path is `pip install --upgrade neurovn`. Editable installs remain available for contributors inside this repository, but they are no longer the primary onboarding flow.
Run
Execute:

```bash
NEUROVN_API_URL=https://agentic-flow.onrender.com python your_workflow.py
```

If your app already manages environment variables, set `NEUROVN_API_URL` once in that environment and run your script normally.
Architecture Flow
Capture
Decorators capture arguments, return payload summaries, and duration per wrapped function.
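Conceptually, the capture step resembles a timing decorator. The sketch below is illustrative only (plain stdlib, not the SDK's internals): it records an argument summary, a return-shape summary, and the call duration, as the description above assumes.

```python
import functools
import time

captured = []  # illustrative sink; the real SDK buffers entries into a session graph

def capture(name):
    """Illustrative stand-in for what a trace decorator records per wrapped call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            captured.append({
                "name": name,
                "args": [type(a).__name__ for a in args],    # argument summary
                "return_shape": type(result).__name__,       # return payload summary
                "duration_s": time.perf_counter() - start,   # wall-clock duration
            })
            return result
        return inner
    return wrap

@capture("Web Search")
def web_search(query: str) -> str:
    return "results"

web_search("latest AI infra updates")
print(captured[0]["name"], captured[0]["return_shape"])
```

Because the decorator only summarizes types rather than copying raw values, a pattern like this keeps payload contents out of the trace by default.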
Graph build
Nested calls are represented as parent/child nodes and connected edges in a session graph.
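The parent/child linkage can be sketched with `contextvars`: each call records a node, and any call made while another traced call is active becomes its child. This is a minimal illustration of the idea, not the SDK's actual graph builder.

```python
import contextvars
import itertools

_current = contextvars.ContextVar("current_node", default=None)
_ids = itertools.count(1)
nodes, edges = [], []

def traced(name):
    """Illustrative: record each call as a node and link it to its caller."""
    def wrap(fn):
        def inner(*args, **kwargs):
            node_id = next(_ids)
            parent = _current.get()
            nodes.append({"id": node_id, "name": name})
            if parent is not None:
                edges.append((parent, node_id))  # parent/child edge in the session graph
            token = _current.set(node_id)
            try:
                return fn(*args, **kwargs)
            finally:
                _current.reset(token)  # restore the caller as the active node
        return inner
    return wrap

@traced("search")
def search():
    return "results"

@traced("plan")
def plan():
    return search()  # nested call -> child node of "plan"

plan()
print(edges)  # [(1, 2)]: "plan" is the parent of "search"
```

Using a `ContextVar` rather than a plain global keeps the parent pointer correct across async tasks as well, which matters since the decorators support async functions.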
Emit
Session emits estimation + trace-session payloads to backend endpoints at completion, persists a remote canvas in Neurovn, and stores the response in `trace.last_result`.
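The completion step can be pictured as assembling the collected nodes and edges into one payload and posting it. The payload shape and transport below are assumptions for illustration; only the endpoint path `/api/traces/sessions` and the `last_result` behavior come from this document.

```python
import json

def end_session(name, nodes, edges, post):
    """Illustrative session completion: assemble a payload and post it.
    The real SDK's payload schema may differ from this sketch."""
    payload = {
        "session_name": name,
        "nodes": nodes,
        "edges": [{"source": s, "target": t} for s, t in edges],
    }
    response = post("/api/traces/sessions", json.dumps(payload))
    return response  # the SDK stores the equivalent response in trace.last_result

# Stubbed transport so the sketch runs without a live backend.
def fake_post(path, body):
    assert json.loads(body)["session_name"]  # payload is valid JSON
    return {"canvas_id": "canvas-123", "trace_session_id": "ts-456", "path": path}

last_result = end_session("My Workflow", [{"id": 1, "name": "plan"}], [(1, 2)], fake_post)
print(last_result["canvas_id"])  # canvas-123
```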
Implementation Snippets
Install from PyPI
```bash
python -m venv .venv
source .venv/bin/activate
pip install --upgrade neurovn
```

Point the SDK at your backend

```bash
export NEUROVN_API_URL=https://agentic-flow.onrender.com
```

Decorator usage

```python
from neurovn import trace

@trace.agent(name="Research Agent", model="gpt-4o")
async def research(query: str) -> str:
    return "summary"

@trace.tool(name="Web Search")
async def web_search(query: str) -> str:
    return "results"
```

Decorator with explicit tool metadata

```python
from neurovn import trace

@trace.tool(name="Knowledge Base Search", tool_id="mcp_web_search", tool_category="mcp_server")
def kb_search(query: str) -> str:
    return "..."
```

Explicit session

```python
from neurovn import trace

with trace.session("My Workflow", source="decorator", canvas_name="My Workflow"):
    run_pipeline()
```

Run your app with the backend URL

```bash
NEUROVN_API_URL=https://agentic-flow.onrender.com python your_workflow.py
```

Read the generated canvas ID

```python
from neurovn import trace

with trace.session("My Workflow", source="decorator", canvas_name="My Workflow"):
    run_pipeline()

if trace.last_result:
    print(trace.last_result["canvas_id"])
```

End-to-end decorator flow

```python
from neurovn import trace

@trace.agent(name="Planner", model="gpt-4o", provider="OpenAI")
def plan(query: str) -> dict:
    return {"query": query, "action": "search"}

@trace.tool(name="Web Search", tool_id="mcp_web_search", tool_category="mcp_server")
def search(query: str) -> str:
    return "results"

@trace.agent(name="Synthesizer", model="Claude-3.5-Sonnet", provider="Anthropic")
def synthesize(context: str) -> str:
    return "final answer"

with trace.session("Planner workflow", source="decorator"):
    step = plan("latest AI infra updates")
    docs = search(step["query"])
    output = synthesize(docs)
    print(output)
```

Reference
| Item | Details |
|---|---|
| PyPI package | `neurovn` (verified in a fresh virtualenv with `neurovn==0.1.1`) |
| `trace.agent` | Marks an LLM step (`name`, `model`, optional `provider`) and emits `agentNode` metadata. |
| `trace.tool` | Marks external capability calls and emits `toolNode` metadata (`tool_id`, `tool_category` optional). |
| `trace.session` | Context manager for grouping multiple calls into one workflow session payload. |
| `NEUROVN_API_URL` | Backend base URL used by the SDK client when you do not pass a custom base URL directly. |
| `trace.last_result` | Holds the latest successful trace-session response so framework integrations can read `canvas_id` programmatically after a run. |
| Persistence model | Decorator sessions end by posting to `/api/traces/sessions`, which creates or updates a remote canvas in Neurovn. |
Troubleshooting
Set `NEUROVN_API_URL` explicitly during local development so traces do not default to the wrong backend.
Use explicit sessions when you want grouped traces across multiple top-level calls, terminal output, and an editor link. Only implicit auto-sessions remain quiet.
If you are signed in and open the printed editor link, Neurovn now imports that traced canvas into My Canvases automatically.
Keep wrapped functions deterministic where possible to compare estimate-vs-actual trends.
For sensitive inputs, avoid logging raw payload fields inside your function bodies.
If traces appear fragmented, ensure nested calls execute within the same process/context manager.
If provider appears incorrect, set explicit `provider` instead of relying on model-name inference.
If trace persistence fails locally, verify the backend is running and its Supabase service-role key is real rather than a placeholder.
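One way to honor the sensitive-inputs advice above is to redact known-sensitive fields before anything is logged or passed onward inside a wrapped function. This is a generic pattern, not a neurovn feature; the field names below are assumptions.

```python
SENSITIVE_KEYS = {"password", "api_key", "ssn"}  # assumed examples; adjust to your data

def redact(payload: dict) -> dict:
    """Return a copy with sensitive values masked so logs and traces never see them."""
    return {
        k: "***REDACTED***" if k.lower() in SENSITIVE_KEYS else v
        for k, v in payload.items()
    }

safe = redact({"query": "billing help", "api_key": "sk-live-abc123"})
print(safe)  # {'query': 'billing help', 'api_key': '***REDACTED***'}
```

Applying redaction at the boundary of each wrapped function keeps the rest of the business logic unchanged, which matches the decorator approach this page describes.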