
Pipelines

Pipelines let you chain multiple endpoints into a workflow. Each step runs one endpoint, and you can pass output from earlier steps into later steps. Pipelines support linear sequences, conditional branching, and a visual drag-and-drop editor.

Go to Dashboard → Pipelines and click New Pipeline. Give it a slug and a name. You can build the pipeline in two ways:

  • Visual editor — drag nodes onto a canvas and connect them with edges
  • AI generation — describe your workflow in plain English (see the AI Pipeline Builder guide)

Pipelines are callable via the same URL pattern as endpoints:

POST https://api.wrapd.sh/v1/yourname/my-pipeline
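To make the call shape concrete, here is a minimal sketch in Python's standard library. The bearer-token header and empty JSON body are assumptions, not documented behavior — adjust them to your account's actual auth scheme:

```python
import json
import urllib.request

# Hypothetical invocation: auth header and body shape are assumptions.
req = urllib.request.Request(
    "https://api.wrapd.sh/v1/yourname/my-pipeline",
    data=json.dumps({}).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_TOKEN",  # placeholder token
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```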

The visual editor uses a node-based canvas. You drag endpoint steps and condition nodes onto the canvas, then connect them with edges.

| Node | Shape | Purpose |
| --- | --- | --- |
| Step | Rectangle | Runs an endpoint. Has success and failure output handles. |
| Condition | Diamond | Evaluates an expression. Has true and false output handles. |

  1. Click Add Step to place a step node, or Add Condition for a condition node
  2. Click a node to open its config panel on the right
  3. For step nodes: select the endpoint, configure parameters, set a capture name
  4. For condition nodes: write an expression (see below)
  5. Drag edges from output handles to input handles to define the flow
  6. The entry node (first node to execute) is auto-detected — it is the node with no incoming edges
Handles work as follows:

  • Drag from a success handle (bottom of a step node) to the next step that should run on success
  • Drag from a failure handle to define what happens when a step fails
  • Drag from the true handle of a condition node to the branch that runs when the expression is true
  • Drag from the false handle to the branch that runs when the expression is false

Node positions are saved, so the layout persists when you reopen the editor.
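The entry-node auto-detection described above amounts to finding the one node with no incoming edges. An illustrative sketch (not the editor's actual code):

```python
def find_entry(nodes, edges):
    """Return the ID of the node with no incoming edges.

    nodes: list of node IDs; edges: list of (source, target) pairs.
    Sketch only -- the real editor also validates that the graph
    is a DAG.
    """
    targets = {target for _, target in edges}
    candidates = [n for n in nodes if n not in targets]
    if len(candidates) != 1:
        raise ValueError("pipeline must have exactly one entry node")
    return candidates[0]

# Example: a -> b -> c; only 'a' has no incoming edge, so it is the entry.
print(find_entry(["a", "b", "c"], [("a", "b"), ("b", "c")]))  # a
```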

Condition nodes evaluate an expression against data from previous steps. They do not call any endpoint — they branch the pipeline locally.

Expressions reference captured output from previous steps using the format $.<step-slug>.<capture-name>. The general form is:

<value> <operator> <operand>

| Operator | Description | Example |
| --- | --- | --- |
| == | Equal | $.get-status.code == 200 |
| != | Not equal | $.check.result != error |
| > | Greater than | $.disk.usage > 90 |
| < | Less than | $.count.total < 10 |
| >= | Greater than or equal | $.backup.size >= 1024 |
| <= | Less than or equal | $.latency.ms <= 500 |
| contains | String contains substring | $.logs.output contains ERROR |
| startsWith | String starts with prefix | $.version.tag startsWith v2 |
| endsWith | String ends with suffix | $.file.name endsWith .gz |
| matches | Regex match | $.output.line matches ^OK |
| isEmpty | Value is empty (no operand) | $.backup.output isEmpty |
| isNotEmpty | Value is not empty (no operand) | $.result.data isNotEmpty |

Numeric comparisons (>, <, >=, <=) parse both sides as numbers. String comparisons are case-sensitive.
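To make these semantics concrete, here is an illustrative evaluator for the operators above in Python. This is a sketch, not the actual engine; it assumes the $.step.capture references have already been resolved to plain values:

```python
import re

def evaluate(value, operator, operand=None):
    """Evaluate a condition as described in the operator table.

    Numeric operators parse both sides as numbers; string
    operators are case-sensitive. Illustrative sketch only.
    """
    if operator in (">", "<", ">=", "<="):
        left, right = float(value), float(operand)
        return {">": left > right, "<": left < right,
                ">=": left >= right, "<=": left <= right}[operator]
    if operator == "==":
        return str(value) == str(operand)
    if operator == "!=":
        return str(value) != str(operand)
    if operator == "contains":
        return operand in value
    if operator == "startsWith":
        return value.startswith(operand)
    if operator == "endsWith":
        return value.endswith(operand)
    if operator == "matches":
        return re.search(operand, value) is not None
    if operator == "isEmpty":
        return not value
    if operator == "isNotEmpty":
        return bool(value)
    raise ValueError(f"unknown operator: {operator}")

print(evaluate("92", ">", "90"))               # True ($.disk.usage > 90)
print(evaluate("v2.4.1", "startsWith", "v2"))  # True
```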

Each step node runs an endpoint and has these options:

| Field | Description |
| --- | --- |
| Endpoint | The endpoint to run |
| On failure | What to do if the step fails (see below) |
| Capture name | A label for the step’s output, used to pass data to later steps |
| Capture pattern | Optional regex to extract a specific value from stdout |
| Environment | Variables to inject; can reference previous step outputs |
The On failure field has three modes:

| Mode | Behavior |
| --- | --- |
| Stop | Stop the pipeline immediately (default) |
| Continue | Log the failure and move to the next step |
| Skip on failure | Skip this step if any previous step has failed |

Steps can pass data forward using references in the environment variables field. The format is $.<step-slug>.<capture-name>:

Example: A two-step pipeline where step 1 gets a version number and step 2 deploys it.

  1. Step 1 — endpoint get-version, capture name: version
  2. Step 2 — endpoint deploy, env: VERSION = $.get-version.version

When step 2 runs, VERSION is set to whatever step 1 printed to stdout.
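The reference substitution can be pictured as a simple lookup: each $.<step-slug>.<capture-name> token in an env value is replaced with the matching captured output. An illustrative sketch (the real resolution happens server-side):

```python
import re

def resolve_refs(env, captures):
    """Replace $.<step-slug>.<capture-name> tokens with captured values.

    captures maps (step_slug, capture_name) pairs -> captured output.
    Unresolvable references are left as-is (an assumption, not
    documented behavior).
    """
    pattern = re.compile(r"\$\.([\w-]+)\.([\w-]+)")

    def sub(match):
        return captures.get((match.group(1), match.group(2)), match.group(0))

    return {key: pattern.sub(sub, value) for key, value in env.items()}

captures = {("get-version", "version"): "2.4.1"}
env = {"VERSION": "$.get-version.version"}
print(resolve_refs(env, captures))  # {'VERSION': '2.4.1'}
```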

By default, the full stdout of a step is captured. Use a capture pattern (regex) to extract a specific value. The first capture group is used:

version: (.+)

If the command outputs version: 2.4.1, only 2.4.1 is captured.
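The same extraction can be reproduced with any regex engine that supports capture groups; for example, in Python:

```python
import re

# The first capture group of the pattern becomes the captured value.
stdout = "version: 2.4.1"
match = re.search(r"version: (.+)", stdout)
# Falling back to full stdout on no match is an assumption here,
# mirroring the documented default of capturing full stdout.
captured = match.group(1) if match else stdout
print(captured)  # 2.4.1
```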

Pipelines are stored as a directed acyclic graph (DAG) with three fields:

  • nodes — array of step and condition nodes, each with an ID, type, position, and config
  • edges — array of connections between nodes, each with a source, target, and handle type (success/failure/true/false)
  • entry — the ID of the first node to execute
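Put together, a stored pipeline might look like the following. This is an illustrative shape only: the top-level nodes, edges, and entry fields are documented above, but the nested field names are assumptions:

```json
{
  "nodes": [
    {"id": "n1", "type": "step", "position": {"x": 0, "y": 0},
     "config": {"endpoint": "get-status", "capture": "code"}},
    {"id": "n2", "type": "condition", "position": {"x": 0, "y": 120},
     "config": {"expression": "$.get-status.code == 200"}},
    {"id": "n3", "type": "step", "position": {"x": -80, "y": 240},
     "config": {"endpoint": "deploy"}}
  ],
  "edges": [
    {"source": "n1", "target": "n2", "handle": "success"},
    {"source": "n2", "target": "n3", "handle": "true"}
  ],
  "entry": "n1"
}
```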

Legacy pipelines that used flat step arrays are automatically converted to the graph format when opened in the visual editor.

View run history in Dashboard → Pipelines. Click any pipeline to see past runs with overall status, duration, and per-step results. Condition node evaluations are included in the run log, showing which branch was taken.

  • Maximum 20 nodes per pipeline
  • Each step node counts as one execution toward your monthly quota
  • Condition nodes do not count toward execution quota
  • Pipeline slugs must not conflict with endpoint slugs
  • Cycles are not allowed — the graph must be a DAG