
API Keys & Management

How to create, manage, and configure credentials for Spark CLI across your team.

Creating API keys

API keys are created in the Spark dashboard at spark.memco.ai/dashboard.

  1. Navigate to Settings > API Keys
  2. Click Create Key
  3. Choose a scope: Read-only, Read-write, or Admin
  4. Add a descriptive name (e.g., "CI/CD pipeline", "staging environment", "developer-jane")
  5. Copy the key immediately — it won't be shown again

API keys use the prefix sk_ followed by a random string:

sk_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
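A script that receives a key (for example from a secrets manager) can sanity-check it against this format before use. A minimal sketch — the 32-character alphanumeric length is an assumption inferred from the example key above, not a documented guarantee:

```shell
# Sanity-check a key against the sk_ format shown above.
# The {32} length is assumed from the example, not a documented contract.
key="sk_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6"
if printf '%s' "$key" | grep -Eq '^sk_[A-Za-z0-9]{32}$'; then
  echo "looks like a Spark API key"
fi
```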

Key rotation

Rotate keys regularly and whenever a team member leaves:

  1. Create a new key in the dashboard
  2. Update all environments using the old key
  3. Verify the new key works in each environment
  4. Revoke the old key in the dashboard
⚠️ Revoke keys immediately when a team member leaves or when a key may have been exposed. You can revoke keys from the dashboard without affecting other active keys.
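Step 3 (verifying the new key before revoking the old one) can be done without touching any settings file, because an inline environment assignment applies only to that single command. A sketch — sk_new_key_placeholder is a stand-in for the key you created in the dashboard, and the `sh -c` echo stands in for a real `spark query` call:

```shell
# Stand-in value; in practice, paste the new key from the dashboard.
NEW_KEY="sk_new_key_placeholder"

# The inline assignment scopes the new key to this one command only;
# your existing settings files are untouched until you revoke the old key.
# In a real check, replace the echo with: spark query "smoke test" --json
SPARK_API_KEY="$NEW_KEY" sh -c 'echo "using $SPARK_API_KEY"'
```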

Environment variable setup for CI/CD

Set the SPARK_API_KEY environment variable in your CI/CD platform. The CLI automatically uses this variable when present — no login step required.

Add SPARK_API_KEY as a repository secret, then reference it in your workflow:

# .github/workflows/build.yml
steps:
  - name: Query Spark for known issues
    env:
      SPARK_API_KEY: ${{ secrets.SPARK_API_KEY }}
    run: spark query "build failure ${{ job.status }}" --json

API keys set via environment variables are never written to disk by Spark.
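Because the CLI silently picks up SPARK_API_KEY, a missing secret can surface as a confusing auth failure later in the job. A hypothetical fail-fast guard step (the stand-in assignment exists only so the sketch is self-contained; in CI the platform injects the real secret):

```shell
# Stand-in for the secret your CI platform injects.
SPARK_API_KEY="sk_example"

# Fail fast if the secret was never configured, rather than letting a
# later spark command fail with a less obvious auth error.
if [ -n "${SPARK_API_KEY:-}" ]; then
  echo "key present"
else
  echo "SPARK_API_KEY is not set" >&2
  exit 1
fi
```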

Per-project credentials

Use spark login --local to store credentials scoped to a specific project:

cd your-project
spark login --local

This creates a .spark/settings.json file in your project root with project-level credentials.

🚫 Add .spark/ to your .gitignore to prevent accidentally committing credentials to version control.

echo ".spark/" >> .gitignore

Global vs. project-level configuration

Scope     File location             Created by
Global    ~/.spark/settings.json    spark login
Project   ./.spark/settings.json    spark login --local or spark init

Global settings apply to all projects. Project-level settings override global settings for that specific project.

Configuration resolution order

When Spark needs a setting, it checks these sources in order. The first value found wins:

1. CLI flag        →  spark query --api-key sk_... "query"
2. Environment var →  SPARK_API_KEY=sk_...
3. Local settings  →  ./.spark/settings.json
4. Global settings →  ~/.spark/settings.json

This means:

  • A CLI flag always takes precedence
  • An environment variable overrides any settings file
  • Project-level settings override global settings
  • Global settings are the fallback default
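The "first value found wins" rule is ordinary first-non-empty selection. A minimal sketch in plain shell — resolve_key and its placeholder arguments are illustrative, not part of the Spark CLI:

```shell
# Illustrative only: print the first non-empty source, mirroring the
# precedence flag > env var > local settings > global settings.
resolve_key() {
  for v in "$1" "$2" "$3" "$4"; do
    if [ -n "$v" ]; then
      echo "$v"
      return 0
    fi
  done
  return 1
}

# No flag was passed, so the environment variable wins here.
resolve_key "" "sk_from_env" "sk_from_local" "sk_from_global"
```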

Settings file format

Both global and project-level settings files use the same JSON format:

{
  "apiKey": "sk_...",
  "workspace": "your-team",
  "network": "team",
  "apiBaseUrl": "https://api.memco.ai"
}
Field        Description
apiKey       Your API key (if using key-based auth)
workspace    Your team workspace identifier
network      "public" or "team" — which knowledge network to query
apiBaseUrl   API endpoint (only change for self-hosted deployments)
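Scripts sometimes need to read a field back out of a settings file. A sketch that writes a project-level file with the placeholder values from the format above, then extracts one field with sed (a plain-text approach; a JSON-aware tool would be more robust against formatting changes):

```shell
# Write a project-level settings file using the placeholder values above.
mkdir -p .spark
cat > .spark/settings.json <<'EOF'
{
  "apiKey": "sk_example",
  "workspace": "your-team",
  "network": "team",
  "apiBaseUrl": "https://api.memco.ai"
}
EOF

# Pull out the workspace field. This assumes the one-key-per-line layout
# shown above; a JSON parser would handle arbitrary formatting.
sed -n 's/.*"workspace": "\([^"]*\)".*/\1/p' .spark/settings.json
```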

Best practices

  • Use OAuth for developer machines, API keys for CI/CD. OAuth tokens auto-refresh and don't require manual rotation.
  • Scope CI/CD keys to read-only unless the pipeline needs to share solutions.
  • Use separate keys per environment (staging, production, CI) so you can revoke one without affecting others.
  • Rotate keys quarterly as a baseline, and immediately after any team member departure.
  • Never commit keys to version control. Use environment variables or secrets management tools.