
File Artifacts

File artifacts let you collect files produced by your commands after execution. Define glob patterns in your endpoint config, and Wrapd automatically collects matching files, uploads them to S3, and returns download URLs in the SSE response.

  1. You define artifact patterns on your endpoint (e.g., ["report.pdf", "*.csv"])
  2. Your command runs and produces files in its working directory
  3. On successful exit (code 0 only), the agent collects files matching your patterns
  4. Files are uploaded to S3 and presigned download URLs are delivered as artifact SSE events
  5. URLs expire based on your tier’s retention period

Artifacts are not collected on failure (non-zero exit code) to avoid capturing partial or corrupt files.

Files are uploaded to S3 (Cloudflare R2) and you receive download URLs in the SSE stream:

endpoints:
  - name: generate-report
    command: python3 /scripts/report.py
    artifacts:
      - "report.pdf"
      - "*.csv"

For self-hosted agents where files are already accessible on the local filesystem, you can skip the upload. The agent reports file paths in the command output instead:

endpoints:
  - name: build-binary
    command: make release
    artifacts:
      - "build/*.tar.gz"
    artifacts_local: true

In local mode, each artifact appears as an output line:

[artifact] /home/user/project/build/app-v1.2.tar.gz (4821503 bytes, sha256:a1b2c3...)

When artifacts are collected in upload mode, you receive additional SSE events after the command output:

data: {"line":"Generating report..."}
data: {"line":"Done."}
data: {"type":"artifact","filename":"report.pdf","url":"https://...","size_bytes":102400,"sha256":"a1b2c3..."}
data: {"exit_code":0}
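A consumer can separate these event shapes by their fields. A minimal sketch, assuming the field names shown in the example stream above (not a formal schema):

```python
import json

def split_sse_payloads(data_lines):
    """Split decoded SSE `data:` payloads into output lines,
    artifact events, and the exit code."""
    output, artifacts, exit_code = [], [], None
    for raw in data_lines:
        event = json.loads(raw)
        if event.get("type") == "artifact":
            artifacts.append(event)   # filename, url, size_bytes, sha256
        elif "line" in event:
            output.append(event["line"])
        elif "exit_code" in event:
            exit_code = event["exit_code"]
    return output, artifacts, exit_code
```

Artifact events arrive after the command output but before the exit-code event, so collecting them in order preserves that sequence.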

For sensitive endpoints, the url field is omitted from SSE events. You can retrieve download URLs via the authenticated API instead.

Patterns follow standard glob syntax:

Pattern           Matches
report.pdf        Exact filename
*.csv             All CSV files in the working directory
output/*.png      PNG files in the output/ subdirectory
build/*.tar.gz    Tarballs in the build/ subdirectory
  • No absolute paths — patterns must be relative to the working directory
  • No .. traversal — patterns cannot contain ..
  • No bare wildcards — *, **, and **/* are rejected; use a specific extension like *.png
  • Deny list — .env, .ssh/*, *.pem, *.key, and dotfiles are always rejected regardless of pattern
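The rules above can be sketched as a validation function. This is illustrative only, assuming the documented rules; the agent's actual validation may differ in detail:

```python
import fnmatch
import posixpath

# Deny list from the rules above (the dot-prefixed entries are also
# caught by the dotfile check below).
DENY_PATTERNS = [".env", ".ssh/*", "*.pem", "*.key"]
BARE_WILDCARDS = {"*", "**", "**/*"}

def validate_pattern(pattern: str) -> bool:
    """Return True if an artifact pattern passes the documented rules."""
    if posixpath.isabs(pattern):
        return False                      # no absolute paths
    if ".." in pattern.split("/"):
        return False                      # no .. traversal
    if pattern in BARE_WILDCARDS:
        return False                      # no bare wildcards
    if any(part.startswith(".") for part in pattern.split("/")):
        return False                      # dotfiles always rejected
    if any(fnmatch.fnmatch(pattern, deny) for deny in DENY_PATTERNS):
        return False                      # deny list
    return True
```

For example, `output/*.png` passes, while `/etc/passwd`, `../secrets.txt`, and `keys/server.pem` are all rejected.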

Cloud runner endpoints write artifacts to the /output tmpfs directory. After the container exits, the worker scans /output for files matching the artifact patterns and uploads them to S3.

endpoints:
  - name: render-video
    command: ffmpeg -i input.mp4 /output/result.mp4
    agent_name: hosted
    artifacts:
      - "result.mp4"

Note: artifacts_local is not supported on cloud runners since containers are ephemeral.

List all your artifacts:

curl -H "Authorization: Bearer <jwt>" \
  https://api.wrapd.sh/artifacts

List the artifacts for a specific execution:

curl -H "Authorization: Bearer <jwt>" \
  https://api.wrapd.sh/executions/<execution_id>/artifacts

Download an artifact:

curl -L -H "Authorization: Bearer <jwt>" \
  https://api.wrapd.sh/artifacts/<id>/download

The download endpoint returns a redirect to a presigned S3 URL.

Delete an artifact:

curl -X DELETE -H "Authorization: Bearer <jwt>" \
  https://api.wrapd.sh/artifacts/<id>
Limit                     Free        Pro       Team
Patterns per endpoint     3           10        20
Max file size             10 MB       100 MB    500 MB
Max total per execution   25 MB       500 MB    2 GB
Files per execution       5           20        50
Monthly storage           100 MB      5 GB      50 GB
Retention                 24 hours    7 days    30 days
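If you want to check a command's expected output against your tier before shipping an endpoint, the table above can be expressed as a pre-flight check. A sketch with values copied from the table (the function and structure are hypothetical, not a Wrapd API):

```python
# Per-tier limits, taken from the table above.
TIER_LIMITS = {
    "free": {"patterns": 3,  "max_file_mb": 10,  "max_total_mb": 25,   "files": 5},
    "pro":  {"patterns": 10, "max_file_mb": 100, "max_total_mb": 500,  "files": 20},
    "team": {"patterns": 20, "max_file_mb": 500, "max_total_mb": 2048, "files": 50},
}

def within_limits(tier, file_sizes_mb, n_patterns):
    """Check pattern count, file count, per-file size, and total size."""
    limits = TIER_LIMITS[tier]
    return (
        n_patterns <= limits["patterns"]
        and len(file_sizes_mb) <= limits["files"]
        and all(size <= limits["max_file_mb"] for size in file_sizes_mb)
        and sum(file_sizes_mb) <= limits["max_total_mb"]
    )
```

For example, two 5 MB files against two patterns fit the Free tier, but a single 15 MB file does not.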
  • Path traversal prevention: All file paths are canonicalized and verified to be within the working directory. Symlinks pointing outside the working directory are rejected.
  • Deny list: Sensitive file patterns (.env, *.pem, *.key, .ssh/*, dotfiles) are always blocked.
  • Sensitive endpoints: Artifacts from endpoints marked sensitive: true are stored but download URLs are not included in SSE events. Access them via the authenticated API.
  • S3 key isolation: Each user’s artifacts are stored under a user-specific S3 key prefix. The internal API validates that the key prefix matches the authenticated user.
  • Automatic cleanup: Expired artifacts are deleted from both S3 and the database by an hourly cleanup job.
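The path traversal check described above amounts to canonicalizing both paths and comparing prefixes. A minimal sketch of that idea, assuming a POSIX filesystem (illustrative, not the agent's actual code):

```python
import os

def is_within_workdir(path: str, workdir: str) -> bool:
    """Resolve symlinks on both paths, then verify the file's canonical
    location is inside the canonical working directory."""
    real = os.path.realpath(path)
    root = os.path.realpath(workdir)
    return os.path.commonpath([real, root]) == root
```

Because `realpath` resolves symlinks before the comparison, a symlink inside the working directory that points to `/etc/passwd` fails this check even though its literal path looks safe.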