Privacy Model

Spark is designed around a core principle: your code stays local. The CLI never sends source code, file contents, or environment state to the network. This page documents exactly what data crosses the wire, how authentication works, and how credentials are stored.

Data flow

When you use Spark, only two categories of data leave your machine:

| Data sent | Example | When |
|---|---|---|
| Error messages and query text | "how to handle CORS errors in Express" | spark query |
| Solution text you author | "Use cors() middleware with origin config..." | spark share |

The following never leave your machine:

  • Source code or file contents
  • File paths or directory structures
  • Environment variables or secrets
  • Git history or diffs
  • System configuration
⚠️ Spark does not silently collect or transmit any data. The spark share command is an explicit, intentional action. Nothing is shared unless you run it.

Explicit sharing model

Sharing is always a deliberate act:

# You choose what to share — title, content, and tags
spark share ses_abc123 \
  --title "CORS fix for Express" \
  --content "Use cors() middleware with origin config..." \
  --tag framework:express:4

There is no background sync, no automatic telemetry, and no passive data collection. If you never run spark share, nothing from your local environment ever reaches the network.

Authentication

Spark uses OAuth 2.0 with PKCE (Proof Key for Code Exchange) for interactive authentication:

  • No long-lived client secrets. PKCE eliminates the need to embed a client secret in the CLI, making the OAuth flow safe for public clients such as command-line applications.
  • Token-based sessions. After login, Spark stores an access token and a refresh token locally.
  • Automatic refresh. Tokens are refreshed automatically with a 5-minute buffer before expiration. You should rarely need to re-authenticate.
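As a rough illustration of the PKCE portion of the flow, the sketch below generates a code verifier and its S256 challenge as defined in RFC 7636. The function name is illustrative, not Spark's actual internals:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code verifier and its S256 code challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character base64url verifier, within the
    # 43-128 character range the spec requires.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge is sent with the authorization request; the verifier is
# revealed only when exchanging the authorization code for tokens, so an
# intercepted code is useless without it.
```

Because the verifier never leaves the machine until the token exchange, no client secret ever needs to ship with the CLI.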

For non-interactive environments (CI/CD pipelines, automated scripts), use an API key instead:

export SPARK_API_KEY=sk_your_key_here
spark query "deployment error" --tag domain:deployment

Transport security

All communication between the CLI and the Spark API uses HTTPS only. There is no HTTP fallback and no option to disable TLS.
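A client that enforces this invariant can refuse any non-HTTPS endpoint before a request is ever made. The check below is a hedged sketch of that idea, not Spark's actual code:

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Reject any API base URL that is not HTTPS; there is no fallback."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"refusing non-HTTPS endpoint: {url}")
    return url

require_https("https://api.example.com/v1")   # accepted
# require_https("http://api.example.com/v1") would raise ValueError
```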

Local storage security

Spark stores credentials and configuration files with restrictive permissions:

| Resource | Permissions | Description |
|---|---|---|
| Configuration files | 0o600 | Read/write for owner only |
| Configuration directories | 0o700 | Read/write/execute for owner only |

These permissions ensure that other users on the same machine cannot read your tokens or configuration.
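Applying the permission bits at file creation (rather than chmod-ing afterwards) avoids a window in which the file is readable by other users. A minimal sketch, with a hypothetical path:

```python
import os

def write_private_file(path: str, data: str) -> None:
    """Create a file readable/writable only by its owner (0o600)."""
    # Directory created with 0o700 (subject to the process umask).
    os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
    # O_CREAT with mode 0o600 applies the permissions atomically at creation,
    # so the file is never briefly world-readable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)

# Hypothetical credentials path for illustration only.
write_private_file("/tmp/spark-demo/credentials.json", "{}")
```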

Token lifecycle

  1. Login — spark login initiates the OAuth PKCE flow and stores tokens locally.
  2. Automatic refresh — Before each API call, the CLI checks token expiration. If the token expires within 5 minutes, it is refreshed automatically.
  3. Logout — spark logout deletes all locally stored tokens and credentials.
  4. API key override — The SPARK_API_KEY environment variable or --api-key flag takes precedence over stored OAuth tokens for a single invocation.

No telemetry

Spark does not collect:

  • Usage analytics or feature telemetry
  • Crash reports containing source code
  • Performance metrics tied to your codebase
  • Any data outside of explicit spark query and spark share invocations

Comparison with typical developer tool telemetry

Many developer tools collect anonymous usage data by default, with opt-out mechanisms that vary in discoverability. Spark takes the opposite approach:

| | Typical developer tool | Spark CLI |
|---|---|---|
| Default data collection | Opt-out telemetry | No telemetry |
| Source code access | May index or process locally | Never accessed or transmitted |
| Sharing model | Automatic or semi-automatic | Explicit spark share only |
| Environment data | OS, editor, extensions | None |
| Crash reports | Often includes stack traces | No crash reporting |

For teams with strict compliance requirements, the Teams & Enterprise tier includes additional controls for data governance, audit logging, and private network isolation.