Core Concepts
The foundational vocabulary of Neurovn. Understanding these six concepts is all you need to design and estimate any agentic workflow.
Anatomy of a Workflow
Every workflow follows the same structure: a Start node, one or more processing nodes (Agents, Tools, Conditions), connected by edges, and ending at a Finish node.
Arrows between nodes are edges — they define execution order.
Key Definitions
Agent Graph
The top-level container for a workflow. A directed graph where nodes represent compute steps and edges define execution order.
Graphs can be DAGs (directed acyclic graphs) for simple sequential/parallel flows, or cyclic when loops or retries are involved. The graph analyzer classifies the type automatically.
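The DAG-vs-cyclic distinction can be made with a standard cycle check. This is a minimal sketch of that idea, not Neurovn's analyzer; the adjacency-dict shape and function name are assumptions.

```python
# Classify a workflow graph as "dag" or "cyclic" using DFS coloring.
# Edge shape (node -> list of target nodes) is illustrative only.

def classify_graph(edges: dict[str, list[str]]) -> str:
    """Return 'dag' if the graph has no cycles, else 'cyclic'."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {node: WHITE for node in edges}

    def has_cycle(node: str) -> bool:
        color[node] = GRAY
        for target in edges.get(node, []):
            state = color.setdefault(target, WHITE)
            if state == GRAY:  # back edge to the current path -> cycle
                return True
            if state == WHITE and has_cycle(target):
                return True
        color[node] = BLACK
        return False

    roots = list(edges)
    return "cyclic" if any(
        color[n] == WHITE and has_cycle(n) for n in roots
    ) else "dag"
```

A simple Start → Agent → Finish chain classifies as a DAG, while an Agent ↔ Retry loop classifies as cyclic.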
Node
A single unit of work in the workflow — an LLM call, a tool invocation, a routing decision, or a control-flow marker (Start/Finish).
Each node has a type, optional model/provider configuration, and contributes independently to the total cost and latency estimate.
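Because each node contributes independently, a workflow estimate reduces to per-node sums. The sketch below assumes a flat per-node record and a fully sequential flow (parallel branches would not simply add latency); the field names are illustrative, not Neurovn's schema.

```python
from dataclasses import dataclass

@dataclass
class NodeEstimate:
    node_id: str
    cost_usd: float = 0.0     # this node's cost contribution
    latency_ms: float = 0.0   # this node's latency contribution

def workflow_totals(nodes: list[NodeEstimate]) -> tuple[float, float]:
    """Each node contributes independently, so totals are plain sums
    (latency summing assumes a sequential flow)."""
    return (sum(n.cost_usd for n in nodes),
            sum(n.latency_ms for n in nodes))
```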
Edge
A directed connection between two nodes. Edges define execution order — data or control flows from the source node to the target node.
Regular edges form the happy path. Error edges (prefixed with s-error) route to fallback or retry logic and are excluded from critical-path analysis.
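Excluding error edges from critical-path analysis can be sketched as a longest-path walk over regular edges only. The s-error prefix comes from the docs; the node names, latencies, and function shape are assumptions, and the sketch assumes the happy path is acyclic.

```python
def critical_path_ms(latency: dict[str, float],
                     edges: list[tuple[str, str, str]],
                     start: str = "start") -> float:
    """Longest cumulative latency from `start`, skipping error edges.

    Each edge is (edge_id, source, target); edges whose id starts
    with 's-error' route to fallback/retry logic and are excluded.
    """
    out: dict[str, list[str]] = {}
    for edge_id, src, dst in edges:
        if not edge_id.startswith("s-error"):
            out.setdefault(src, []).append(dst)

    def longest(node: str) -> float:
        downstream = (longest(t) for t in out.get(node, []))
        return latency.get(node, 0.0) + max(downstream, default=0.0)

    return longest(start)
```

With a 1200 ms agent node and a 500 ms fallback reached only via an s-error edge, the critical path is 1200 ms: the fallback branch never enters the calculation.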
Model / Provider
The LLM and its vendor assigned to an Agent node. Neurovn ships with pricing data for 38+ models across 7 providers.
Pricing comes from provider rate cards (input tokens per million, output tokens per million). Changing the model on a node instantly updates the cost estimate.
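The per-million-token rate-card formula is straightforward. The rates in this sketch are placeholders, not any provider's real prices; only the formula reflects the docs.

```python
def node_cost_usd(input_tokens: int, output_tokens: int,
                  in_per_million: float, out_per_million: float) -> float:
    """Cost = input tokens at the input rate + output tokens at the output rate."""
    return (input_tokens / 1_000_000 * in_per_million
            + output_tokens / 1_000_000 * out_per_million)

# e.g. 1,500 input + 400 output tokens at $3/M input, $15/M output:
cost = node_cost_usd(1500, 400, 3.0, 15.0)
print(round(cost, 6))  # 0.0105
```

Swapping the node's model swaps only the two rate arguments, which is why a model change updates the cost estimate instantly.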
Tool
An external capability invoked during the workflow — web search, database lookup, API call, etc. Represented by a Tool node.
Tool nodes draw per-tool metadata (schema tokens, response tokens, latency) from the backend registry. When metadata is missing, fallback defaults apply: +200 schema tokens, +800 response tokens, and 200 ms of latency.
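The fallback behavior can be sketched as a per-field default merge. The default values (+200 schema tokens, +800 response tokens, 200 ms) come from the docs; the registry shape and function name are assumptions.

```python
# Defaults applied when the backend registry has no metadata for a tool.
DEFAULTS = {"schema_tokens": 200, "response_tokens": 800, "latency_ms": 200}

def tool_metadata(registry: dict, tool_name: str) -> dict:
    """Use registry metadata where present, falling back field by field."""
    meta = registry.get(tool_name, {})
    return {key: meta.get(key, default) for key, default in DEFAULTS.items()}

# Unknown tool -> all three fallbacks apply:
print(tool_metadata({}, "web_search"))
# {'schema_tokens': 200, 'response_tokens': 800, 'latency_ms': 200}
```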
Run / Scenario
A single execution of the workflow graph. A Scenario is a saved snapshot of a workflow + its estimate that you can compare against other configurations.
Use scenarios to A/B test different model choices. The Comparison Drawer shows side-by-side cost, latency, and token differences.
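A scenario comparison reduces to per-metric deltas between two saved snapshots. The Scenario fields below are assumptions modeled on the three metrics the Comparison Drawer shows; the numbers are made up.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cost_usd: float
    latency_ms: float
    tokens: int

def compare(baseline: Scenario, candidate: Scenario) -> dict[str, float]:
    """Side-by-side deltas: positive means the candidate costs more."""
    return {
        "cost_usd_delta": round(candidate.cost_usd - baseline.cost_usd, 6),
        "latency_ms_delta": candidate.latency_ms - baseline.latency_ms,
        "tokens_delta": candidate.tokens - baseline.tokens,
    }

base = Scenario("large-model", 0.0105, 1800, 1900)
alt = Scenario("small-model", 0.0011, 900, 1900)
print(compare(base, alt))
```

A negative cost and latency delta with zero token delta is the typical A/B result when only the model changes: same traffic, cheaper and faster.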