Stable
neurovn v0.1.1

CLI Integration

Install `neurovn` from PyPI and emit trace sessions directly from workflow JSON files using the Neurovn CLI, then hydrate canvases through the backend trace APIs.

Purpose

Use CLI integration when you already have workflow structure represented as JSON and want a deterministic ingest path into Neurovn without changing runtime code or cloning the monorepo.

Published on PyPI as `neurovn` for a one-command external install.
Best for migration from existing graph definitions or generated workflow files.
Deterministic and scriptable in CI or local developer scripts.
Produces trace-linked canvases and estimate summaries in one command.

Developer Flow

1. Install from PyPI

Create a virtualenv and run `pip install --upgrade neurovn`.

2. Point at your backend

Pass `--backend-url` explicitly or export `NEUROVN_API_URL=https://agentic-flow.onrender.com` for the hosted backend.

3. Prepare payload

Build or export a workflow JSON containing `nodes`, `edges`, and optional `recursion_limit`.

4. Run CLI

Execute `neurovn trace ...` with workflow name, canvas name, and backend URL.

5. Inspect output

Capture `canvas_id`, `trace_session_id`, cost, and latency from CLI JSON output.

6. Open canvas

Copy the printed `Open: .../editor/{canvas_id}` line into your browser to review flow, model mix, and bottlenecks. The current CLI does not launch the browser automatically.
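The developer flow above can be scripted end to end. A minimal sketch in Python, assuming the CLI prints a single JSON object to stdout on success (the fields named in step 5); the helper names here are illustrative, not part of the package:

```python
import json
import subprocess

def run_trace(workflow_path: str, workflow_name: str, backend_url: str) -> dict:
    """Invoke the neurovn CLI and parse its JSON output.

    Assumes the CLI writes one JSON object to stdout on success.
    """
    result = subprocess.run(
        [
            "neurovn", "trace", workflow_path,
            "--workflow-name", workflow_name,
            "--canvas-name", workflow_name,
            "--source", "cli",
            "--backend-url", backend_url,
        ],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def summarize(output: dict) -> str:
    """Condense the step-5 fields into a single log line."""
    return (
        f"canvas={output['canvas_id']} "
        f"trace={output['trace_session_id']} "
        f"cost=${output['estimate_total_cost']} "
        f"latency={output['estimate_total_latency']}s"
    )
```

This keeps the CLI as the single ingest path while letting CI capture `canvas_id` and `trace_session_id` for later lookup.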

Install & Run

Install

Setup
python -m venv .venv
source .venv/bin/activate
pip install --upgrade neurovn

External users should install from PyPI. The editable `cd neurovn-sdk && pip install -e .` path is for contributors working inside this monorepo only.

Run

Execute
NEUROVN_API_URL=https://agentic-flow.onrender.com neurovn trace ./examples/workflow.json --workflow-name "My Workflow" --canvas-name "My Workflow" --source cli

The `neurovn` console command is the primary UX. `python -m neurovn trace ...` is equivalent if you prefer module execution. After success, the CLI prints an editor URL instead of opening the browser itself.

Architecture Flow

Input

CLI reads workflow payload (`nodes`, `edges`, optional `recursion_limit`) from disk.

Estimate

CLI calls backend `POST /api/estimate` and captures deterministic estimator outputs.

Persist

CLI calls `POST /api/traces/sessions` to create a remote trace-linked canvas entry for /editor hydration.
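The estimate-then-persist sequence can be reproduced with plain stdlib HTTP calls. The two endpoints and their order come from this page; the shape of the session body (`workflow`/`estimate` keys) is an assumption for illustration:

```python
import json
import urllib.request

def build_session_payload(workflow: dict, estimate: dict) -> dict:
    """Illustrative session body; the real field names are an assumption."""
    return {"workflow": workflow, "estimate": estimate}

def post_json(url: str, payload: dict) -> dict:
    """POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def trace_workflow(base_url: str, workflow: dict) -> dict:
    """Mirror the CLI's sequence: estimate first, then persist a trace session."""
    estimate = post_json(f"{base_url}/api/estimate", workflow)
    return post_json(
        f"{base_url}/api/traces/sessions",
        build_session_payload(workflow, estimate),
    )
```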

Implementation Snippets

Install from PyPI

bash
python -m venv .venv
source .venv/bin/activate
pip install --upgrade neurovn

Set backend URL once

bash
export NEUROVN_API_URL=https://agentic-flow.onrender.com

Run the CLI

bash
neurovn trace ./examples/workflow.json --workflow-name "My Workflow" --canvas-name "My Workflow" --source cli

Minimal workflow JSON

json
{
  "name": "Research Workflow",
  "nodes": [
    {"id": "start", "type": "startNode", "label": "Start"},
    {"id": "agent", "type": "agentNode", "label": "Agent", "model_provider": "OpenAI", "model_name": "GPT-4o", "context": "Answer briefly."},
    {"id": "finish", "type": "finishNode", "label": "Finish"}
  ],
  "edges": [
    {"source": "start", "target": "agent"},
    {"source": "agent", "target": "finish"}
  ],
  "recursion_limit": 25
}
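Since every edge must reference valid node IDs (see Troubleshooting), a local pre-flight check can catch malformed payloads before the CLI call. A minimal sketch; the function name and the exact problem messages are illustrative:

```python
def validate_workflow(workflow: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks sane."""
    problems = []
    nodes = workflow.get("nodes", [])
    node_ids = {n.get("id") for n in nodes}
    if not nodes:
        problems.append("workflow has no nodes")
    # Every edge endpoint must be a known node id.
    for edge in workflow.get("edges", []):
        for end in ("source", "target"):
            if edge.get(end) not in node_ids:
                problems.append(f"edge references unknown node id: {edge.get(end)!r}")
    # recursion_limit is optional, but must be a positive integer when present.
    limit = workflow.get("recursion_limit")
    if limit is not None and (not isinstance(limit, int) or limit < 1):
        problems.append("recursion_limit must be a positive integer")
    return problems
```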

Alternative module entrypoint

bash
python -m neurovn trace ./examples/workflow.json --workflow-name "My Workflow" --canvas-name "My Workflow" --source cli

Developer loop (shell)

bash
neurovn trace ./workflows/research.json --workflow-name "Research v1" --canvas-name "Research v1" --source cli
neurovn trace ./workflows/research_v2.json --workflow-name "Research v2" --canvas-name "Research v2" --source cli
# Compare canvases in Neurovn to evaluate model/cost deltas
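The cost/latency comparison in that loop can also be done on the CLI's JSON output directly, without opening the canvases. A small sketch over the estimator fields the CLI prints; the function name is illustrative:

```python
def estimate_delta(run_a: dict, run_b: dict) -> dict:
    """Diff the estimator fields of two CLI output objects (v2 minus v1)."""
    return {
        "cost_delta": round(
            run_b["estimate_total_cost"] - run_a["estimate_total_cost"], 6
        ),
        "latency_delta": round(
            run_b["estimate_total_latency"] - run_a["estimate_total_latency"], 3
        ),
    }
```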

Expected output

json
{
  "workflow_name": "My Workflow",
  "canvas_id": "c9f8f1a2-7a14-4a7f-8c53-2ab5fb1f9d22",
  "trace_session_id": "7c2f6d5b-6d8a-4d8a-96aa-4f8f6d8cb9e1",
  "estimate_total_cost": 0.00342,
  "estimate_total_latency": 1.87
}

Reference

PyPI package: `neurovn` (verified in a fresh virtualenv with `neurovn==0.1.1`)
Console command: `neurovn trace <workflow_file> [--workflow-name] [--canvas-name] [--source] [--backend-url]`
Module command: `python -m neurovn trace <workflow_file> ...` (equivalent fallback)
Source values: `sdk | decorator | cli | manual`
Backend endpoints: `POST /api/estimate` then `POST /api/traces/sessions`
Persistence model: Successful CLI runs create a remote trace-linked canvas in Neurovn rather than a new local import artifact.

Troubleshooting

Validate backend is reachable at `NEUROVN_API_URL` (for hosted usage, `https://agentic-flow.onrender.com`).

For local backend runs, trace persistence requires a real Supabase service-role key. Placeholder keys will allow `/api/estimate` but fail `/api/traces/sessions`.

Ensure workflow JSON includes valid node IDs referenced by every edge.

Use canonical provider/model names when possible; backend alias normalization handles minor variants.

The printed `Open:` URL uses `NEUROVN_APP_URL` when provided. Otherwise the hosted backend defaults to the Neurovn app origin, and local backends default to the same local origin.

Prefer `neurovn trace ...` in docs, shell scripts, and onboarding. Keep `python -m neurovn trace ...` as the fallback.

If `/docs/integrations/*` routes 404 after deploy, ensure the latest commit includes static-params generation for the dynamic slug route.

If costs are zero, verify model/provider pair exists in backend pricing registry or maps through canonical aliases.

Related Integrations

Backend contracts: `/api/estimate`, `/api/traces/sessions`, `/api/canvases`