Author multi-model agent workflows with first-class syntax for prompts, tools, conditions, and parallel execution. Replace escaped DOT strings with real code.
Graphviz DOT is great for graph visualization. But authoring AI pipelines with multi-line prompts, typed nodes, and conditional edges? It falls apart.
digraph Pipeline {
  Planner [kind="agent" llm_model="claude-opus-4-6" prompt="You are a senior architect.\n Analyze the request and\nproduce a plan."];
  Coder [kind="agent" llm_model="claude-sonnet-4-6" prompt="Implement the plan.\n Use best practices."];
  Review [kind="agent" auto_status="true" prompt="Review the code.\n Set STATUS: success or fail."];
  Planner -> Coder;
  Coder -> Review;
  Review -> Done [condition="context.outcome==success"];
  Review -> Coder [condition="context.outcome==fail" restart="true"];
}
workflow CodeReview
  goal: "Plan, implement, and review"
  start: Planner
  exit: Done

  agent Planner
    model: claude-opus-4-6
    prompt:
      You are a senior architect.
      Analyze the request and produce
      an implementation plan.

  agent Coder
    model: claude-sonnet-4-6
    prompt:
      Implement the plan.
      Use best practices.

  agent Review
    auto_status: true
    prompt:
      Review the code.
      Set STATUS: success or fail.

  agent Done
    prompt: Ship it.

  edges
    Planner -> Coder
    Coder -> Review
    Review -> Done when ctx.outcome = success
    Review -> Coder when ctx.outcome = fail restart: true
First-class syntax for the things that matter. Not string attributes on a graph node.
Indented blocks with zero escaping. Write real prompts, preserve blank lines, embed variables like ${ctx.input}.
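For instance, a prompt block with a preserved blank line and an embedded variable might look like this (a sketch based on the syntax in the example above; the agent name and prompt text are illustrative):

```
agent Summarize
  model: claude-sonnet-4-6
  prompt:
    You are a concise technical writer.

    Summarize the following input in three bullets,
    preserving any code identifiers verbatim:

    ${ctx.input}
```

The blank lines and the `${ctx.input}` interpolation are written as-is, with no `\n` escapes.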
agent, tool, human, parallel, fan_in, subgraph. Each with typed, validated config fields.
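A sketch of what non-agent node types could look like. The `tool` node's `timeout` field appears in the lint output later on this page; the `command` field and the `human` node's fields are assumptions for illustration:

```
tool RunTests
  command: "pytest -q"
  timeout: 120

human Approve
  prompt: Approve the release? Reply yes or no.
```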
Structural validation and semantic lint. Dead edges, unreachable nodes, missing prompts, invalid models. Things DOT silently ignores.
Native parallel fan-out and fan_in join with per-branch model overrides. Multi-provider consensus in a few lines.
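A hypothetical sketch of a two-model consensus step. The `parallel`/`fan_in` node kinds come from the list above; the `branch` keyword and exact block layout are assumptions:

```
parallel Consensus
  branch OpusAnswer
    model: claude-opus-4-6
    prompt: Answer the question. ${ctx.input}
  branch SonnetAnswer
    model: claude-sonnet-4-6
    prompt: Answer the question. ${ctx.input}

fan_in Merge
  prompt: Merge the branch answers into one consensus reply.
```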
dippin test injects context, simulates every conditional branch, and checks assertions. CI-ready JSON output.
Built-in Language Server Protocol server. Hover docs, go-to-definition, completions, and live diagnostics in any editor.
Syntax highlighting for .dip files. Keywords, node IDs, strings, conditions, and edge operators all colored distinctly.
dippin cost estimates per-run cost by model and provider. dippin optimize suggests cheaper alternatives.
dippin migrate converts existing DOT pipelines to Dippin. validate-migration verifies structural parity.
dippin doctor grades your workflow A–F with actionable suggestions. Coverage analysis, reachability, termination checks.
Validate, lint, and test without deploying. Every command works offline against the workflow source.
$ dippin validate pipeline.dip
validation passed

$ dippin lint pipeline.dip
⚠ DIP110 empty prompt on agent "Summarize" [line 45]
⚠ DIP111 tool node without timeout "RunTests" [line 72]
$ dippin test pipeline.dip
═══ Test Results ════════════════════
PASS  happy path — all reviews pass
PASS  build fails — restarts loop
PASS  review rejects — exits clean
─── Summary ───────────────────────
3 tests: 3 passed, 0 failed
$ dippin doctor pipeline.dip
═══ Health Report ═══════════════════
Grade: A (score: 95/100)
Lint: 0 errors, 2 warnings
Coverage: 12/12 nodes reachable
Cost: $0.42/run (estimated)
─── Suggestions ────────────────────
• Add timeout to tool "RunTests"
Hands-on guides for every part of Dippin, from first install to CI integration.
DOT's escaped strings are unreadable. Dippin's indentation-based blocks let you write prompts without escaping.
Route pipelines based on LLM output with the when keyword. Build branching workflows step by step.
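As a taste of what the guide covers, conditional edges reuse the `when` syntax from the example near the top of this page (the node names here are hypothetical):

```
edges
  Triage -> Escalate when ctx.outcome = fail
  Triage -> AutoReply when ctx.outcome = success
```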
Estimate per-run pipeline costs before spending real money on LLM calls.