# Pipelines
Pipelines let you chain multiple endpoints into a workflow. Each step runs one endpoint, and you can pass output from earlier steps into later steps. Pipelines support linear sequences, conditional branching, and a visual drag-and-drop editor.
## Creating a pipeline

Go to Dashboard → Pipelines and click New Pipeline. Give it a slug and a name. You can build the pipeline in two ways:
- Visual editor — drag nodes onto a canvas and connect them with edges
- AI generation — describe your workflow in plain English (see the AI Pipeline Builder guide)
Pipelines are callable via the same URL pattern as endpoints:
```
POST https://api.wrapd.sh/v1/yourname/my-pipeline
```
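For scripting, the invocation URL can be assembled from the workspace name and pipeline slug. A minimal Python sketch (the `pipeline_url` helper is illustrative, not part of any SDK):

```python
# Build the invocation URL for a pipeline. The URL pattern comes from the
# docs above; the workspace name and pipeline slug are placeholders.
BASE = "https://api.wrapd.sh/v1"

def pipeline_url(workspace: str, slug: str) -> str:
    return f"{BASE}/{workspace}/{slug}"

url = pipeline_url("yourname", "my-pipeline")
# A POST to this URL triggers the pipeline, e.g. with urllib:
#   import urllib.request
#   urllib.request.urlopen(urllib.request.Request(url, method="POST"))
print(url)  # https://api.wrapd.sh/v1/yourname/my-pipeline
```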
## Visual editor

The visual editor uses a node-based canvas. You drag endpoint steps and condition nodes onto the canvas, then connect them with edges.
### Node types

| Node | Shape | Purpose |
|---|---|---|
| Step | Rectangle | Runs an endpoint. Has success and failure output handles. |
| Condition | Diamond | Evaluates an expression. Has true and false output handles. |
### Building a pipeline

- Click Add Step to place a step node, or Add Condition for a condition node
- Click a node to open its config panel on the right
- For step nodes: select the endpoint, configure parameters, set a capture name
- For condition nodes: write an expression (see below)
- Drag edges from output handles to input handles to define the flow
- The entry node (first node to execute) is auto-detected — it is the node with no incoming edges
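The auto-detection rule can be sketched in a few lines, assuming the node and edge shapes described under Graph format below (the IDs here are illustrative):

```python
# Entry-node auto-detection: the entry is the single node with no
# incoming edges. `nodes` and `edges` mirror the stored graph format.
def detect_entry(nodes, edges):
    targets = {e["target"] for e in edges}
    candidates = [n["id"] for n in nodes if n["id"] not in targets]
    if len(candidates) != 1:
        raise ValueError(f"expected exactly one entry node, found {candidates}")
    return candidates[0]

nodes = [{"id": "get-version"}, {"id": "deploy"}]
edges = [{"source": "get-version", "target": "deploy", "handle": "success"}]
print(detect_entry(nodes, edges))  # get-version
```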
### Connecting nodes

- Drag from a success handle (bottom of a step node) to the next step that should run on success
- Drag from a failure handle to define what happens when a step fails
- Drag from the true handle of a condition node to the branch that runs when the expression is true
- Drag from the false handle to the branch that runs when the expression is false
Node positions are saved, so the layout persists when you reopen the editor.
## Condition nodes

Condition nodes evaluate an expression against data from previous steps. They do not call any endpoint — they branch the pipeline locally.
### Expression syntax

Expressions reference captured output from previous steps using the format `$.<step-slug>.<capture-name>`. The general form is:
```
<value> <operator> <operand>
```
### Operators

| Operator | Description | Example |
|---|---|---|
| `==` | Equal | `$.get-status.code == 200` |
| `!=` | Not equal | `$.check.result != error` |
| `>` | Greater than | `$.disk.usage > 90` |
| `<` | Less than | `$.count.total < 10` |
| `>=` | Greater than or equal | `$.backup.size >= 1024` |
| `<=` | Less than or equal | `$.latency.ms <= 500` |
| `contains` | String contains substring | `$.logs.output contains ERROR` |
| `startsWith` | String starts with prefix | `$.version.tag startsWith v2` |
| `endsWith` | String ends with suffix | `$.file.name endsWith .gz` |
| `matches` | Regex match | `$.output.line matches ^OK` |
| `isEmpty` | Value is empty (no operand) | `$.backup.output isEmpty` |
| `isNotEmpty` | Value is not empty (no operand) | `$.result.data isNotEmpty` |
Numeric comparisons (`>`, `<`, `>=`, `<=`) parse both sides as numbers. String comparisons are case-sensitive.
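As a rough illustration of these semantics, here is a minimal evaluator sketch for the documented grammar; the service's actual parser and error handling may differ:

```python
import re

# Minimal sketch of evaluating a condition expression against captured
# step output. `captures` maps step slug -> capture name -> string value.
# This mirrors the documented grammar; the real evaluator may differ.
REF = re.compile(r"^\$\.([\w-]+)\.([\w-]+)$")

def resolve(ref, captures):
    m = REF.match(ref)
    if not m:
        raise ValueError(f"bad reference: {ref}")
    return captures[m.group(1)][m.group(2)]

def evaluate(expr, captures):
    parts = expr.split(None, 2)
    value = resolve(parts[0], captures)
    op = parts[1]
    operand = parts[2] if len(parts) > 2 else None
    if op in (">", "<", ">=", "<="):  # numeric: parse both sides as numbers
        a, b = float(value), float(operand)
        return {">": a > b, "<": a < b, ">=": a >= b, "<=": a <= b}[op]
    return {
        "==": lambda: value == operand,
        "!=": lambda: value != operand,
        "contains": lambda: operand in value,  # case-sensitive
        "startsWith": lambda: value.startswith(operand),
        "endsWith": lambda: value.endswith(operand),
        "matches": lambda: re.search(operand, value) is not None,
        "isEmpty": lambda: value == "",
        "isNotEmpty": lambda: value != "",
    }[op]()

captures = {"get-status": {"code": "200"}, "logs": {"output": "boot OK"}}
```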
## Step options

Each step node runs an endpoint and has these options:
| Field | Description |
|---|---|
| Endpoint | The endpoint to run |
| On failure | What to do if the step fails (see below) |
| Capture name | A label for the step’s output, used to pass data to later steps |
| Capture pattern | Optional regex to extract a specific value from stdout |
| Environment | Variables to inject, can reference previous step outputs |
### Failure modes

| Mode | Behavior |
|---|---|
| Stop | Stop the pipeline immediately (default) |
| Continue | Log the failure and move to the next step |
| Skip on failure | Skip this step if any previous step has failed |
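The three modes can be illustrated with a small simulation of a linear run (real pipelines are graphs; `run` and its step tuples are purely illustrative):

```python
# Simulate the documented failure modes over a linear sequence of steps.
# `steps` is a list of (name, mode, action); action returns True on success.
def run(steps):
    log, any_failed = [], False
    for name, mode, action in steps:
        if mode == "skip_on_failure" and any_failed:
            log.append((name, "skipped"))
            continue
        ok = action()
        log.append((name, "ok" if ok else "failed"))
        if not ok:
            any_failed = True
            if mode == "stop":  # default: halt the pipeline immediately
                break
            # mode == "continue": log the failure and move on
    return log

log = run([
    ("build", "continue", lambda: False),          # fails, pipeline continues
    ("test", "stop", lambda: True),
    ("deploy", "skip_on_failure", lambda: True),   # skipped: earlier failure
])
```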
## Passing data between steps

Steps can pass data forward using references in the environment variables field. The format is `$.<step-slug>.<capture-name>`:
Example: a two-step pipeline where step 1 gets a version number and step 2 deploys it.

- Step 1 — endpoint `get-version`, capture name: `version`
- Step 2 — endpoint `deploy`, env: `VERSION = $.get-version.version`
When step 2 runs, `VERSION` is set to whatever step 1 printed to stdout.
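Reference substitution of this kind can be sketched as a regex replace over the env values (`resolve_env` is illustrative, not a real API):

```python
import re

# Sketch of resolving $.<step-slug>.<capture-name> references inside
# environment variable values. `captures` maps step slug -> capture
# name -> captured string; names are illustrative.
REF = re.compile(r"\$\.([\w-]+)\.([\w-]+)")

def resolve_env(env, captures):
    def lookup(m):
        return captures[m.group(1)][m.group(2)]
    return {key: REF.sub(lookup, value) for key, value in env.items()}

captures = {"get-version": {"version": "2.4.1"}}
env = resolve_env({"VERSION": "$.get-version.version"}, captures)
print(env["VERSION"])  # 2.4.1
```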
## Capture patterns

By default, the full stdout of a step is captured. Use a capture pattern (regex) to extract a specific value. The first capture group is used:

```
version: (.+)
```

If the command outputs `version: 2.4.1`, only `2.4.1` is captured.
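This matches standard regex group capture, e.g. with Python's `re` module (the no-match fallback here is an assumption, not documented behavior):

```python
import re

# The capture-pattern behavior described above: with no pattern, the full
# stdout is captured; with a pattern, the first capture group is used.
def capture(stdout, pattern=None):
    if pattern is None:
        return stdout
    m = re.search(pattern, stdout)
    return m.group(1) if m else ""  # empty on no match: an assumption

print(capture("version: 2.4.1", r"version: (.+)"))  # 2.4.1
```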
## Graph format

Pipelines are stored as a directed acyclic graph (DAG) with three fields:
- `nodes` — array of step and condition nodes, each with an ID, type, position, and config
- `edges` — array of connections between nodes, each with a source, target, and handle type (success/failure/true/false)
- `entry` — the ID of the first node to execute
Legacy pipelines that used flat step arrays are automatically converted to the graph format when opened in the visual editor.
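The conversion can be pictured as linking consecutive steps with success edges. A hedged sketch, assuming the field names listed above (IDs and layout positions are illustrative):

```python
# Convert a legacy flat step array into the nodes/edges/entry graph:
# consecutive steps become nodes joined by success edges, and the first
# step becomes the entry node.
def steps_to_graph(steps):
    nodes = [
        {"id": s["slug"], "type": "step",
         "position": {"x": 0, "y": i * 120},
         "config": {"endpoint": s["slug"]}}
        for i, s in enumerate(steps)
    ]
    edges = [
        {"source": steps[i]["slug"], "target": steps[i + 1]["slug"],
         "handle": "success"}
        for i in range(len(steps) - 1)
    ]
    return {"nodes": nodes, "edges": edges, "entry": steps[0]["slug"]}

graph = steps_to_graph([{"slug": "get-version"}, {"slug": "deploy"}])
```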
## Run history

View run history in Dashboard → Pipelines. Click any pipeline to see past runs with overall status, duration, and per-step results. Condition node evaluations appear in the run log, showing which branch was taken.
## Limits

- Maximum 20 nodes per pipeline
- Each step node counts as one execution toward your monthly quota
- Condition nodes do not count toward execution quota
- Pipeline slugs must not conflict with endpoint slugs
- Cycles are not allowed — the graph must be a DAG
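The no-cycle rule is the standard DAG property, which can be checked with Kahn's algorithm; a sketch using the documented edge shape:

```python
from collections import defaultdict, deque

# DAG validation via Kahn's algorithm: repeatedly remove nodes with no
# remaining incoming edges. If the topological sort cannot consume every
# node, the graph contains a cycle.
def is_dag(node_ids, edges):
    indegree = {n: 0 for n in node_ids}
    out = defaultdict(list)
    for e in edges:
        out[e["source"]].append(e["target"])
        indegree[e["target"]] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    seen = 0
    while queue:
        n = queue.popleft()
        seen += 1
        for t in out[n]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    return seen == len(node_ids)
```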