Node Palette

Neurovn has seven core node types. Each has a specific role, color, and impact on your workflow's cost and latency estimates.

Node Types

Start

Entry point of every workflow. Each graph must have exactly one Start node.

Cost: None — the Start node is a control-flow marker with no model assignment.

Latency: None — zero processing time.

Agent

An LLM call. Configurable with a model provider, model, context (system prompt), task type, and expected output size.

Cost: Primary cost driver. Cost = (input tokens / 1M) × input rate + (output tokens / 1M) × output rate.

Latency: Primary latency driver. Latency = output tokens / model's tokens-per-second.
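The two Agent formulas above can be checked with a short sketch. The rates and token counts below are illustrative placeholders, not real provider pricing or Neurovn registry values:

```python
def agent_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Cost = (input tokens / 1M) * input rate + (output tokens / 1M) * output rate."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

def agent_latency(output_tokens, tokens_per_second):
    """Latency = output tokens / model's tokens-per-second."""
    return output_tokens / tokens_per_second

# Hypothetical model: $3 / 1M input tokens, $15 / 1M output tokens, 60 tok/s.
cost = agent_cost(2_000, 500, input_rate=3.0, output_rate=15.0)   # ≈ $0.0135
latency = agent_latency(500, tokens_per_second=60)                # ≈ 8.33 s
```

Note that latency depends only on output tokens here; input tokens drive cost but are assumed to be processed much faster than generation.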

Tool

An external tool invocation — web search, database query, API call, etc. No model assignment.

Cost: Per-tool token overhead taken from registry metadata (schema/input tokens plus average response/output tokens). Fallback when metadata is missing: +200 schema tokens and +800 response tokens per invocation.

Latency: Per-tool latency from registry metadata. Fallback: 200 ms per invocation.
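A minimal sketch of the fallback behavior described above. The metadata field names (schema_tokens, response_tokens, latency_ms) are assumptions for illustration; only the fallback numbers come from the docs:

```python
# Fallback values quoted above, used when registry metadata is missing.
FALLBACK_SCHEMA_TOKENS = 200
FALLBACK_RESPONSE_TOKENS = 800
FALLBACK_LATENCY_MS = 200

def tool_overhead(meta=None):
    """Return (extra_tokens, latency_ms) for one tool invocation.

    meta is an optional dict of registry metadata; any missing field
    falls back to the documented default.
    """
    meta = meta or {}
    tokens = (meta.get("schema_tokens", FALLBACK_SCHEMA_TOKENS)
              + meta.get("response_tokens", FALLBACK_RESPONSE_TOKENS))
    latency_ms = meta.get("latency_ms", FALLBACK_LATENCY_MS)
    return tokens, latency_ms

tool_overhead()                      # all fallbacks: (1000, 200)
tool_overhead({"latency_ms": 50})    # registry latency overrides the fallback
```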

Condition

A routing / branching point. Routes execution down different paths based on logic (true/false, categories, etc.).

Cost: Minimal — the condition itself has no model cost. Branch probabilities affect expected cost of downstream paths.

Latency: Negligible — routing logic is near-instant. Downstream branch latency depends on which path is taken.

Finish

Terminal node. Marks the end of a workflow path. Each graph must have at least one Finish node.

Cost: None — control-flow marker only.

Latency: None — zero processing time.

Blank Box

A visual container for grouping related nodes. Customizable border style, background color, and label position for organizing complex workflows.

Cost: None — purely visual grouping element.

Latency: None — no processing overhead.

Text Label

A free-form text label for adding notes, comments, or documentation directly on the canvas. Supports multiple font sizes and styles.

Cost: None — documentation element only.

Latency: None — no processing overhead.

Workflow Patterns

The way you connect nodes determines graph topology and how cost and latency aggregate.

Sequential

Nodes execute one after another in a single chain. Total cost = sum of all node costs. Total latency = sum of all node latencies.

Start → Agent A → Agent B → Finish
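The sequential aggregation rule can be sketched directly. The (cost, latency) pairs below are illustrative numbers, not real estimates:

```python
def sequential_totals(nodes):
    """nodes: list of (cost, latency) pairs. Sequential chains sum both."""
    total_cost = sum(cost for cost, _ in nodes)
    total_latency = sum(latency for _, latency in nodes)
    return total_cost, total_latency

# Start and Finish contribute (0, 0); two Agents with made-up numbers:
sequential_totals([(0, 0), (0.01, 5.0), (0.02, 8.0), (0, 0)])
# cost ≈ 0.03, latency = 13.0
```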

Parallel (Fan-out / Fan-in)

A node fans out to multiple branches that execute concurrently, then merge at a join point. Total latency = max of parallel branch latencies.

Start → (Agent A | Agent B | Agent C) → Finish
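In the parallel case, costs still sum (every branch runs) but latency is the slowest branch. A sketch with illustrative numbers:

```python
def parallel_totals(branches):
    """branches: list of (cost, latency) pairs for concurrent branches.
    Costs sum; total latency is the max over the branches."""
    total_cost = sum(cost for cost, _ in branches)
    total_latency = max(latency for _, latency in branches)
    return total_cost, total_latency

parallel_totals([(0.01, 5.0), (0.02, 8.0), (0.005, 3.0)])
# cost ≈ 0.035; latency = max(5.0, 8.0, 3.0) = 8.0
```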

Branched (Conditional)

A Condition node routes execution down one of multiple paths. Expected cost uses branch probabilities to weight each path's contribution.

Start → Cond → (true)  → Agent A
Start → Cond → (false) → Agent B
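The probability-weighted expected cost can be sketched as follows; the 70/30 split and path costs are illustrative assumptions:

```python
def expected_branch_cost(base_cost, branches):
    """base_cost: cost of nodes before the Condition.
    branches: list of (probability, path_cost) pairs; probabilities
    should sum to 1. Each path contributes probability * cost."""
    return base_cost + sum(p * cost for p, cost in branches)

# 70% chance of the true path ($0.02), 30% chance of the false path ($0.05):
expected_branch_cost(0.0, [(0.7, 0.02), (0.3, 0.05)])
# ≈ 0.7 * 0.02 + 0.3 * 0.05 = 0.029
```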

Loop / Retry

An edge loops back to a previous node for iterative refinement or retry logic. Cost scales with the expected number of iterations; estimation caps the iteration count at max_loop_steps so the calculation stays bounded.

Start → Agent A → Tool → (retry?) → back to Agent A
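A sketch of the bounded loop calculation. Only max_loop_steps is named in the docs; the function shape and numbers are illustrative:

```python
def loop_cost(per_iteration_cost, expected_iterations, max_loop_steps):
    """Loop cost scales with expected iterations, capped at max_loop_steps
    so the estimate stays bounded even for open-ended retry logic."""
    iterations = min(expected_iterations, max_loop_steps)
    return per_iteration_cost * iterations

# Expected 5 retries, but estimation is capped at 3 iterations:
loop_cost(0.01, expected_iterations=5, max_loop_steps=3)  # ≈ 0.03
```

The same cap applies to latency: multiply the per-iteration latency by the capped iteration count.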
