OpenAI Integration

Wrap OpenAI-backed agent and tool functions in decorators and feed the resulting traces into Neurovn for model-mix, cost, and latency analysis.

Install & Run

Install

```shell
pip install openai
pip install neurovn
```

Run

```shell
NEUROVN_API_URL=https://agentic-flow.onrender.com python openai_pipeline.py
```

Architecture Flow

LLM step

Agent function calls OpenAI model with your existing prompt flow.

Tool step

Tool functions are captured as explicit workflow nodes with execution metadata.
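To make "explicit workflow node with execution metadata" concrete, here is a minimal stand-in decorator that records a node per call. The decorator name and captured fields are illustrative assumptions, not Neurovn's actual API:

```python
import time
from functools import wraps

# Stand-in node store; neurovn maintains something like this internally.
captured_nodes = []

def tool(name):
    """Minimal stand-in for a tool-tracing decorator (name/fields assumed)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            # Record one workflow node with execution metadata per call.
            captured_nodes.append({
                "node": name,
                "function": fn.__name__,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
            return result
        return wrapper
    return decorator

@tool(name="Lookup Tool")
def lookup(city: str) -> str:
    return f"Weather in {city}: sunny"

lookup("Oslo")
```

Each call to `lookup` now produces a node entry, which is what lets the canvas render tools as distinct steps rather than hiding them inside the agent call.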

Neurovn

Trace payloads are posted to `/api/traces/sessions` and linked to a canvas.
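A sketch of what such a trace payload might look like before it is posted. The field names here are assumptions for illustration, not the documented contract of `/api/traces/sessions`:

```python
import json

# Hypothetical trace-session payload; field names are assumptions,
# not the documented /api/traces/sessions contract.
payload = {
    "canvas_name": "openai-pipeline",
    "source": "decorator",
    "nodes": [
        {
            "name": "Answer Agent",
            "type": "agent",
            "model": "gpt-4o",
            "provider": "OpenAI",
            "latency_ms": 812,
        }
    ],
}

# Serialized body that would be POSTed to /api/traces/sessions.
body = json.dumps(payload)
```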

Implementation Snippets

Example

```python
from neurovn import trace
from openai import OpenAI

client = OpenAI()

@trace.agent(name="Answer Agent", model="gpt-4o", provider="OpenAI")
def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content or ""
```

Troubleshooting

Wrap the top-level pipeline invocation in `with trace.session(..., source="decorator", canvas_name="...")` when you want one named canvas per run rather than separate implicit sessions.

Use explicit `provider="OpenAI"` when mixing providers in one workflow.

Prefer stable model IDs from the backend pricing registry for deterministic cost estimates.

Capture separate agent functions for planner/executor stages to improve bottleneck analysis.
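Separate planner/executor nodes make per-stage latency directly comparable. A sketch of the aggregation this enables, with an assumed node shape:

```python
from collections import defaultdict

# Assumed trace-node shape; real nodes carry more metadata.
nodes = [
    {"name": "Planner Agent", "latency_ms": 120.0},
    {"name": "Executor Agent", "latency_ms": 950.0},
    {"name": "Executor Agent", "latency_ms": 880.0},
]

# Sum latency per agent stage to locate the bottleneck.
totals = defaultdict(float)
for node in nodes:
    totals[node["name"]] += node["latency_ms"]

bottleneck = max(totals, key=totals.get)  # "Executor Agent"
```

If planner and executor ran inside a single decorated function, their latencies would collapse into one node and this attribution would be impossible.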

Related Integrations

Backend contracts: `/api/estimate`, `/api/traces/sessions`, `/api/canvases`