Conditional Edges: Routing Pipelines with when

In the last post we showed how Dippin handles prompts. Now let's make pipelines that think. Linear workflows are fine for simple tasks, but real AI pipelines need to branch: retry on failure, escalate when something looks wrong, skip steps that don't apply. Dippin's when keyword is how you express all of that.

1 The Linear Baseline

Let's start with a document review pipeline. It drafts, reviews, and publishes -- in that order, every time, with no branching. Here's the whole thing:

workflow ReviewPipeline
  goal: "Draft, review, and publish a document"
  start: Start
  exit: Exit

  defaults
    provider: anthropic
    model: claude-sonnet-4-6

  agent Start
    label: Start

  agent Exit
    label: Exit

  agent Draft
    label: "Write Draft"
    prompt:
      Write a clear, concise technical document based on the
      provided requirements. Focus on accuracy and readability.

  agent Review
    label: "Review Draft"
    auto_status: true
    prompt:
      Review the draft for technical accuracy, clarity, and
      completeness. Return success if the draft meets standards,
      or fail with specific feedback.

  agent Publish
    label: Publish
    prompt:
      Format the approved draft for publication.

  edges
    Start -> Draft
    Draft -> Review
    Review -> Publish
    Publish -> Exit

This works, but every draft gets published regardless of the review. The Review node dutifully critiques the draft, but we ignore its verdict and publish anyway. We need branching.

2 Basic Conditions

Add when clauses to the edges out of Review. The node definitions stay exactly the same -- only the edges block changes:

  edges
    Start -> Draft
    Draft -> Review
    Review -> Publish  when ctx.outcome = success
    Review -> Draft    when ctx.outcome = fail
    Publish -> Exit

Lint it:

$ dippin lint pipeline.dip
PASS  pipeline.dip  (0 errors, 0 warnings)

Two things are happening here. First, auto_status: true on the Review node tells Dippin that the LLM will set ctx.outcome based on the content of its response -- it parses the output for success or fail and writes it into the pipeline context automatically. Second, the edges now form a retry loop: a failed review sends execution back to Draft, which writes a new version, which goes back to Review, until the review passes and the pipeline proceeds to Publish.

3 Operators

success/fail is the most common pattern, but conditions support a full set of string operators:

Operator       Example
= / ==         when ctx.outcome = success
!=             when ctx.outcome != success
contains       when ctx.feedback contains "security"
not contains   when ctx.feedback not contains "security"
startswith     when ctx.category startswith "urgent"
endswith       when ctx.filename endswith ".go"
in             when ctx.priority in high,critical

All operators work on string values. The left side is always a context variable; the right side is a literal. Comparisons are case-sensitive. The in operator matches against a comma-separated list of values with no spaces around the commas.
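Here is how a few of these look on edges out of a review node. This is an illustrative sketch, not part of the pipeline above -- the target nodes (Escalate, Compile, FastTrack) and the context variables are hypothetical:

    Review -> Escalate   when ctx.feedback contains "security"
    Review -> Compile    when ctx.filename endswith ".go"
    Review -> FastTrack  when ctx.priority in high,critical
    Review -> Draft

The plain edge back to Draft catches anything the conditions miss.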

4 Compound Conditions

Sometimes a single condition isn't enough. Suppose failed reviews that mention security issues need to go to a dedicated escalation node, while ordinary failures just loop back to drafting:

    Review -> Escalate  when ctx.outcome = fail and ctx.feedback contains "security"
    Review -> Draft     when ctx.outcome = fail and ctx.feedback not contains "security"

The and and or keywords compose conditions in the expected way. Use parentheses to control precedence when mixing them -- without parens, and binds tighter than or:

    when (ctx.outcome = fail and ctx.severity = high) or ctx.override = true

This routes to the target if the outcome is a high-severity failure, or if an override flag was explicitly set -- either side of the or is enough to fire the edge.

5 Exhaustive Detection

Dippin tracks whether a node's outgoing edges cover all possible outcomes. If they don't, dippin lint will tell you. This is one of the most useful things the linter does -- a missing branch is a silent runtime failure waiting to happen.

There are three patterns to know. The first is an unconditional fallback -- one edge has a condition, the other doesn't. The unconditional edge fires when no condition matches:

    Review -> Publish  when ctx.outcome = success
    Review -> Exit
$ dippin lint pipeline.dip
PASS  pipeline.dip  (0 errors, 0 warnings)

The second pattern is a single conditional edge with no fallback. If ctx.outcome is never success, execution has nowhere to go:

    Review -> Publish  when ctx.outcome = success
$ dippin lint pipeline.dip
WARN  pipeline.dip
  DIP101: node "Review" has conditional edges but no unconditional fallback

The third pattern is a complementary pair. When both branches of a success/fail split are present, Dippin recognizes them as exhaustive and the warning is suppressed -- this is exactly the pattern from Section 2:

    Review -> Publish  when ctx.outcome = success
    Review -> Draft    when ctx.outcome = fail
$ dippin lint pipeline.dip
PASS  pipeline.dip  (0 errors, 0 warnings)

Dippin recognizes two kinds of exhaustive pairs: success/fail and contains X/not contains X for the same variable and value. Any other combination still needs an unconditional fallback to satisfy the linter.
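For example, the security split from Section 4, reduced to its simple non-compound form, is a complementary contains pair, so it lints clean without a fallback:

    Review -> Escalate  when ctx.feedback contains "security"
    Review -> Draft     when ctx.feedback not contains "security"
$ dippin lint pipeline.dip
PASS  pipeline.dip  (0 errors, 0 warnings)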

6 The ctx. Prefix

Context variables must be namespace-prefixed. If you write a condition without one, the linter catches it:

    Review -> Publish  when outcome = success
$ dippin lint pipeline.dip
WARN  pipeline.dip
  DIP120: condition references "outcome" without namespace prefix — did you mean "ctx.outcome"?

The fix is one prefix away: change outcome to ctx.outcome. Dippin supports three namespaces in conditions: ctx for runtime context set by node execution, graph for pipeline-level metadata, and params for values passed in at invocation time. Most conditions you write day-to-day will use ctx.
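As a quick sketch of the other two prefixes -- the field names auto_publish and graph.name here are hypothetical, chosen only to show the namespaces in position:

    Review -> Publish  when params.auto_publish = true
    Review -> Draft    when graph.name startswith "staging"
    Review -> Exit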

What's Next

You've got pipelines that branch. But what do those branches cost? Each path through a conditional pipeline has a different token count and a different price tag. The next post shows how to estimate it before spending real money on LLM calls.