Node Palette
Neurovn has seven core node types. Each has a specific role, color, and impact on your workflow's cost and latency estimates.
Node Types
| Node | Purpose | Cost Impact | Latency Impact |
|---|---|---|---|
| Start | Entry point of every workflow. Each graph must have exactly one Start node. | None — the Start node is a control-flow marker with no model assignment. | None — zero processing time. |
| Agent | An LLM call. Configurable with a model provider, model, context (system prompt), task type, and expected output size. | Primary cost driver. Cost = (input tokens / 1M) × input rate + (output tokens / 1M) × output rate. | Primary latency driver. Latency = output tokens / model's tokens-per-second. |
| Tool | An external tool invocation — web search, database query, API call, etc. No model assignment. | Per-tool overhead from registry metadata (schema/input + average response/output tokens). Fallback: +200 schema and +800 response tokens. | Per-tool latency from registry metadata. Fallback: 200 ms per invocation. |
| Condition | A routing / branching point. Routes execution down different paths based on logic (true/false, categories, etc.). | Minimal — the condition itself has no model cost. Branch probabilities affect the expected cost of downstream paths. | Negligible — routing logic is near-instant. Downstream branch latency depends on which path is taken. |
| Finish | Terminal node. Marks the end of a workflow path. Each graph must have at least one Finish node. | None — control-flow marker only. | None — zero processing time. |
| Blank Box | A visual container for grouping related nodes. Customizable border style, background color, and label position for organizing complex workflows. | None — purely visual grouping element. | None — no processing overhead. |
| Text Label | A free-form text label for adding notes, comments, or documentation directly on the canvas. Supports multiple font sizes and styles. | None — documentation element only. | None — no processing overhead. |
Workflow Patterns
The way you connect nodes determines graph topology and how cost and latency aggregate.
Sequential
Nodes execute one after another in a single chain. Total cost = sum of all node costs. Total latency = sum of all node latencies.
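The sequential rule reduces to two sums. A quick sketch with made-up per-node values (cost in dollars, latency in seconds):

```python
# Sequential chain: cost and latency both sum across nodes.
# Each tuple is (cost_usd, latency_s); values are illustrative.
chain = [(0.0135, 5.0), (0.0, 0.2), (0.0270, 8.0)]  # Agent -> Tool -> Agent

total_cost = sum(cost for cost, _ in chain)        # 0.0405
total_latency = sum(lat for _, lat in chain)       # 13.2
```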
Parallel (Fan-out / Fan-in)
A node fans out to multiple branches that execute concurrently, then merge at a join point. Total latency = max of parallel branch latencies.
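Since parallel branches all execute, their costs still add up while only the slowest branch determines latency. A sketch with illustrative branch values (branch names are hypothetical):

```python
# Fan-out / fan-in: every branch runs, so cost sums over branches,
# but they run concurrently, so latency is the max branch latency.
branches = {
    "search":    (0.004, 1.5),  # (cost_usd, latency_s), illustrative
    "summarize": (0.020, 6.0),
    "classify":  (0.001, 0.8),
}

total_cost = sum(cost for cost, _ in branches.values())     # 0.025
total_latency = max(lat for _, lat in branches.values())    # 6.0
```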
Branched (Conditional)
A Condition node routes execution down one of multiple paths. Expected cost uses branch probabilities to weight each path's contribution.
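Probability-weighting works out to a simple expected-value sum. The probabilities and path costs below are illustrative:

```python
# Expected cost over conditional branches: each path's cost is
# weighted by its branch probability.
paths = [
    (0.7, 0.010),  # (probability, path_cost_usd): common "true" path
    (0.3, 0.050),  # rarer "false" path with a larger Agent call
]

expected_cost = sum(p * cost for p, cost in paths)  # 0.7*0.010 + 0.3*0.050 = 0.022
```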
Loop / Retry
An edge loops back to a previous node for iterative refinement or retry logic. Cost scales with the expected number of iterations; estimation uses max_loop_steps to keep the calculation bounded.
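One plausible reading of the bounded-loop rule, sketched below: multiply the per-iteration cost by the expected iteration count, capped at max_loop_steps. The function name and values are illustrative, not Neurovn's API.

```python
def loop_cost(cost_per_iteration, expected_iterations, max_loop_steps):
    """Bounded loop estimate: cap expected iterations at max_loop_steps."""
    iterations = min(expected_iterations, max_loop_steps)
    return cost_per_iteration * iterations

# Illustrative: $0.0135 per pass, ~2.5 expected passes, hard cap of 5.
estimate = loop_cost(0.0135, expected_iterations=2.5, max_loop_steps=5)  # 0.03375
```

The same capping applies to latency: per-iteration latency times the bounded iteration count.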