Privacy Model
Spark is designed around a core principle: your code stays local. The CLI never sends source code, file contents, or environment state to the network. This page documents exactly what data crosses the wire, how authentication works, and how credentials are stored.
Data flow
When you use Spark, only two categories of data leave your machine:
| Data sent | Example | When |
|---|---|---|
| Error messages and query text | "how to handle CORS errors in Express" | `spark query` |
| Solution text you author | "Use cors() middleware with origin config..." | `spark share` |
The following never leave your machine:
- Source code or file contents
- File paths or directory structures
- Environment variables or secrets
- Git history or diffs
- System configuration
Spark does not silently collect or transmit any data. The `spark share` command is an explicit, intentional action. Nothing is shared unless you run it.
Explicit sharing model
Sharing is always a deliberate act:
```bash
# You choose what to share — title, content, and tags
spark share ses_abc123 \
  --title "CORS fix for Express" \
  --content "Use cors() middleware with origin config..." \
  --tag framework:express:4
```

There is no background sync, no automatic telemetry, and no passive data collection. If you never run `spark share`, nothing from your local environment ever reaches the network.
Authentication
Spark uses OAuth 2.0 with PKCE (Proof Key for Code Exchange) for interactive authentication:
- No long-lived secrets in the browser. PKCE eliminates the need to store client secrets, making the OAuth flow safe for CLI applications.
- Token-based sessions. After login, Spark stores an access token and a refresh token locally.
- Automatic refresh. Tokens are refreshed automatically with a 5-minute buffer before expiration. You should rarely need to re-authenticate.
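The verifier/challenge pair at the heart of PKCE can be sketched in a few lines of Python. This is an illustration of the S256 method from RFC 7636, not Spark's actual implementation; the function name is ours:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes, base64url-encoded with padding stripped -> 43-char verifier
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The challenge is the base64url-encoded SHA-256 digest of the verifier
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The client sends only the challenge when opening the browser flow and reveals the verifier when exchanging the authorization code, so an intercepted code is useless without the locally held verifier.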
For non-interactive environments (CI/CD pipelines, automated scripts), use an API key instead:
```bash
export SPARK_API_KEY=sk_your_key_here
spark query "deployment error" --tag domain:deployment
```

Transport security
All communication between the CLI and the Spark API uses HTTPS only. There is no HTTP fallback and no option to disable TLS.
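An HTTPS-only policy with no fallback amounts to rejecting any endpoint whose scheme is not `https` before a request is ever made. A minimal sketch of such a guard (illustrative only; not Spark's code):

```python
from urllib.parse import urlsplit

def require_https(url: str) -> str:
    """Reject any endpoint URL that is not HTTPS; there is no HTTP fallback."""
    if urlsplit(url).scheme != "https":
        raise ValueError(f"insecure scheme rejected: {url}")
    return url
```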
Local storage security
Spark stores credentials and configuration files with restrictive permissions:
| Resource | Permissions | Description |
|---|---|---|
| Configuration files | 0o600 | Read/write for owner only |
| Configuration directories | 0o700 | Read/write/execute for owner only |
These permissions ensure that other users on the same machine cannot read your tokens or configuration.
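The 0o600/0o700 scheme above is the standard POSIX pattern for private credential files. A sketch of how a CLI might apply it when writing a config file (hypothetical helper, not Spark's source):

```python
import os
from pathlib import Path

def write_private(path: Path, data: str) -> None:
    """Write a config file readable only by the owner (dir 0o700, file 0o600)."""
    # Create the containing directory with owner-only access
    path.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
    # Pass the mode to os.open so the file is created restricted,
    # rather than chmod-ing after the fact
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)
```

Setting the mode at creation time avoids a window in which the file briefly exists with default permissions.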
Token lifecycle
- Login — `spark login` initiates the OAuth PKCE flow and stores tokens locally.
- Automatic refresh — Before each API call, the CLI checks token expiration. If the token expires within 5 minutes, it is refreshed automatically.
- Logout — `spark logout` deletes all locally stored tokens and credentials.
- API key override — The `SPARK_API_KEY` environment variable or `--api-key` flag takes precedence over stored OAuth tokens for a single invocation.
No telemetry
Spark does not collect:
- Usage analytics or feature telemetry
- Crash reports containing source code
- Performance metrics tied to your codebase
- Any data outside of explicit `spark query` and `spark share` invocations
Comparison with typical developer tool telemetry
Many developer tools collect anonymous usage data by default, with opt-out mechanisms that vary in discoverability. Spark takes the opposite approach:
| | Typical developer tool | Spark CLI |
|---|---|---|
| Default data collection | Opt-out telemetry | No telemetry |
| Source code access | May index or process locally | Never accessed or transmitted |
| Sharing model | Automatic or semi-automatic | Explicit `spark share` only |
| Environment data | OS, editor, extensions | None |
| Crash reports | Often includes stack traces | No crash reporting |
For teams with strict compliance requirements, the Teams & Enterprise tier includes additional controls for data governance, audit logging, and private network isolation.