# API Keys & Management
How to create, manage, and configure credentials for Spark CLI across your team.
## Creating API keys
API keys are created in the Spark dashboard at spark.memco.ai/dashboard.
1. Navigate to Settings > API Keys
2. Click Create Key
3. Choose a scope: Read-only, Read-write, or Admin
4. Add a descriptive name (e.g., "CI/CD pipeline", "staging environment", "developer-jane")
5. Copy the key immediately; it won't be shown again
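After copying a key, it can help to sanity-check that it was pasted intact before storing it in a secrets manager. A minimal sketch, assuming only the `sk_` prefix format described below (the helper is hypothetical, not part of the Spark CLI):

```python
import re

# Hypothetical helper: sanity-checks that a copied string looks like a Spark
# API key. Assumes the documented "sk_" prefix followed by a random
# alphanumeric string; the exact key length is not specified here, so only a
# non-empty suffix is required.
KEY_PATTERN = re.compile(r"^sk_[A-Za-z0-9]+$")

def looks_like_spark_key(value: str) -> bool:
    """Return True if value matches the documented sk_ key format."""
    return KEY_PATTERN.fullmatch(value) is not None
```

A check like this catches truncated pastes or the wrong secret being copied, without ever sending the key anywhere.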
API keys follow the format `sk_` followed by a random string:

```
sk_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6
```

## Key rotation
Rotate keys regularly and whenever a team member leaves:
1. Create a new key in the dashboard
2. Update all environments using the old key
3. Verify the new key works in each environment
4. Revoke the old key in the dashboard
Revoke keys immediately when a team member leaves or when a key may have been exposed. You can revoke keys from the dashboard without affecting other active keys.
## Environment variable setup for CI/CD
Set the `SPARK_API_KEY` environment variable in your CI/CD platform. The CLI automatically uses this variable when present; no login step required.
Add `SPARK_API_KEY` as a repository secret, then reference it in your workflow:
```yaml
# .github/workflows/build.yml
steps:
  - name: Query Spark for known issues
    env:
      SPARK_API_KEY: ${{ secrets.SPARK_API_KEY }}
    run: spark query "build failure ${{ job.status }}" --json
```

API keys set via environment variables are never written to disk by Spark.
## Per-project credentials
Use `spark login --local` to store credentials scoped to a specific project:

```bash
cd your-project
spark login --local
```

This creates a `.spark/settings.json` file in your project root with project-level credentials.
Add `.spark/` to your `.gitignore` to prevent accidentally committing credentials to version control:

```bash
echo ".spark/" >> .gitignore
```

## Global vs. project-level configuration
| Scope | File location | Created by |
|---|---|---|
| Global | `~/.spark/settings.json` | `spark login` |
| Project | `./.spark/settings.json` | `spark login --local` or `spark init` |
Global settings apply to all projects. Project-level settings override global settings for that specific project.
## Configuration resolution order
When Spark needs a setting, it checks these sources in order. The first value found wins:
1. CLI flag → `spark query --api-key sk_... "query"`
2. Environment var → `SPARK_API_KEY=sk_...`
3. Local settings → `./.spark/settings.json`
4. Global settings → `~/.spark/settings.json`

This means:
- A CLI flag always takes precedence
- An environment variable overrides any settings file
- Project-level settings override global settings
- Global settings are the fallback default
## Settings file format
Both global and project-level settings files use the same JSON format:
```json
{
  "apiKey": "sk_...",
  "workspace": "your-team",
  "network": "team",
  "apiBaseUrl": "https://api.memco.ai"
}
```

| Field | Description |
|---|---|
| `apiKey` | Your API key (if using key-based auth) |
| `workspace` | Your team workspace identifier |
| `network` | `"public"` or `"team"`: which knowledge network to query |
| `apiBaseUrl` | API endpoint (only change for self-hosted deployments) |
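A settings file can be sanity-checked against the documented fields before use. A minimal sketch (the validator is hypothetical, not part of the CLI, and only enforces the shapes described above):

```python
import json

# Allowed values for "network" per the table above.
VALID_NETWORKS = {"public", "team"}

def validate_settings(raw: str) -> dict:
    """Parse a settings JSON string and check the documented fields."""
    settings = json.loads(raw)
    network = settings.get("network")
    if network is not None and network not in VALID_NETWORKS:
        raise ValueError(f"network must be one of {sorted(VALID_NETWORKS)}")
    api_key = settings.get("apiKey")
    if api_key is not None and not api_key.startswith("sk_"):
        raise ValueError("apiKey should start with sk_")
    return settings
```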
## Best practices
- Use OAuth for developer machines, API keys for CI/CD. OAuth tokens auto-refresh and don't require manual rotation.
- Scope CI/CD keys to read-only unless the pipeline needs to share solutions.
- Use separate keys per environment (staging, production, CI) so you can revoke one without affecting others.
- Rotate keys quarterly as a baseline, and immediately after any team member departure.
- Never commit keys to version control. Use environment variables or secrets management tools.