Privacy & Telemetry

Gump collects anonymous usage metrics to improve the product and publish workflow benchmarks. This page explains exactly what is collected, what is never collected, and how to opt out.

What Gump collects

When telemetry is enabled, Gump sends anonymous, aggregated metrics after each run:
  • Workflow name and step count
  • Which agents were used
  • Pass/fail status per step
  • Duration per step
  • Estimated cost per step
  • Token counts (input, output, cache)
  • Number of retries and guard triggers
  • TTFD (time to first diff)
  • Features used (foreach, parallel, hitl, composition)
  • Operating system and architecture
  • Programming language of the repo (detected from project files)
  • Repository size bucket (small / medium / large — not the exact size)
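
To make the list above concrete, here is a hypothetical sketch of what one aggregated telemetry record could look like. The field names and structure are illustrative only, not Gump's actual wire format:

```python
import json

# Illustrative record mirroring the fields listed above.
# Names like "ttfd_s" and "repo_size_bucket" are assumptions, not Gump's schema.
record = {
    "workflow": {"name": "fix-tests", "steps": 2},
    "agents": ["claude", "codex"],
    "steps": [
        {"status": "pass", "duration_s": 41.2, "cost_usd": 0.08,
         "tokens": {"input": 12000, "output": 900, "cache": 4000},
         "retries": 0, "guard_triggers": 0},
        {"status": "fail", "duration_s": 9.7, "cost_usd": 0.02,
         "tokens": {"input": 3000, "output": 150, "cache": 0},
         "retries": 1, "guard_triggers": 1},
    ],
    "ttfd_s": 12.5,                    # time to first diff
    "features": ["foreach", "parallel"],
    "os": "linux",
    "arch": "arm64",
    "repo_language": "python",
    "repo_size_bucket": "medium",      # small / medium / large, never the exact size
}

print(json.dumps(record, indent=2))
```

Note that nothing in the record is semantic: there are counts, durations, and statuses, but no code, prompts, or file paths.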

What Gump never collects

  • Source code
  • Prompts
  • File paths
  • Spec content
  • Item names or descriptions
  • Agent output (diffs, plans, artifacts, reviews)
  • Environment variables
  • Git history or commit messages

Your code and your intellectual property never leave your machine through Gump’s telemetry. The agents you use (Claude, Codex, etc.) have their own privacy policies — Gump’s telemetry is separate and contains no code.

How it works

On first run, Gump generates an anonymous UUID and stores it in ~/.gump/anonymous_id. This ID is not linked to your identity, email, or GitHub account. The first run displays a clear message explaining telemetry and creates the ID, but sends no data, giving you time to opt out before anything is transmitted. Subsequent runs send metrics asynchronously (best-effort, with a 5-second timeout; a failed send never affects your run).
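
The first-run ID creation and the best-effort send can be sketched roughly as follows. This is not Gump's implementation; the endpoint, payload, and helper names are hypothetical, but the behavior mirrors what is described above:

```python
import json
import urllib.request
import uuid
from pathlib import Path

ID_FILE = Path.home() / ".gump" / "anonymous_id"

def anonymous_id() -> str:
    """Create the anonymous UUID on first use; reuse it on every later run."""
    if not ID_FILE.exists():
        ID_FILE.parent.mkdir(parents=True, exist_ok=True)
        ID_FILE.write_text(str(uuid.uuid4()))
    return ID_FILE.read_text().strip()

def send_metrics(payload: dict, endpoint: str) -> bool:
    """Best-effort POST: swallow every failure instead of raising,
    so a dead network or endpoint never affects the run."""
    body = json.dumps({"id": anonymous_id(), **payload}).encode()
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(req, timeout=5)  # 5-second cap, as documented
        return True
    except Exception:
        return False  # failed send is dropped silently
```

The key property is the except-everything guard: telemetry is fire-and-forget, and a False return is the worst that can happen to the caller.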

Opt out

Disable telemetry at any time:
gump config set analytics false
Or set it in your config file:
# ~/.gump/config.toml or gump.toml
[analytics]
enabled = false
When disabled, Gump sends nothing. The anonymous ID file remains but is not used.

Why telemetry exists

Gump’s telemetry feeds cross-workflow benchmarks: which workflow performs best for which type of task, which agents have the highest first-try success rate, where cost can be optimized. These benchmarks are published for the community — they help everyone write better workflows. Aggregation happens server-side, and no individual run is ever identifiable. The data is structural (step count, cost, status), not semantic (what the code does).

Zero LLM inside

Gump itself makes zero LLM calls. No summarization, no auto-analysis, no “explain this error” features. All intelligence comes from the agents you configure in your workflow. This means Gump adds no hidden cost, no unpredictable latency, and no dependency on a specific provider for its own operation. The telemetry system follows the same principle — it sends structured metrics, not LLM-processed summaries.