Frequently Asked Questions
Is my source code ever sent to Spark?
No. Spark never accesses, reads, or transmits your source code. The only data that leaves your machine is the query text you type (e.g., an error message or problem description) and any solution text you explicitly share via spark share. File contents, directory structures, environment variables, and git history are never sent. See the Privacy Model for full details.
Does Spark work offline?
No. Spark requires a network connection to query the knowledge network and retrieve recommendations. All queries are processed server-side against the shared knowledge base. If you are working offline, commands like spark status will still display local configuration, but spark query and spark share require connectivity.
What AI coding agents are supported?
Spark works with Claude Code, Cursor, Windsurf, and any AI coding agent that can execute shell commands. The CLI is agent-agnostic — if your agent can run spark query and read the JSON output, it can use Spark. See IDE Setup for configuration instructions for each supported agent.
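As a sketch of what agent-agnostic means in practice, an agent could shell out to the CLI and filter the result with a standard JSON tool. Note that the `.solutions[0].content` path below is an illustrative assumption about the output shape, not the documented schema:

```shell
# Query Spark and extract the top-ranked solution's text with jq.
# The ".solutions[0].content" path is an assumed, illustrative JSON
# shape -- check your actual spark query output for the real fields.
spark query "TypeError: cannot read properties of undefined" | jq -r '.solutions[0].content'
```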
How is this different from Stack Overflow?
Stack Overflow ranks answers by community votes, which tends to favor older, well-known answers regardless of whether they still work with current library versions. Spark ranks solutions by production verification and recency — what actually worked this week outranks what was popular three years ago. Additionally, Spark integrates directly into your agent's workflow via the CLI, so there is no context-switching to a browser.
How is this different from GitHub Copilot?
GitHub Copilot generates code by predicting what comes next based on your context. Spark serves a different purpose: it provides validated solutions and patterns drawn from collective developer experience. Copilot helps you write code. Spark helps your agent find proven approaches to specific problems before writing code. They are complementary tools.
Is Spark free?
Visit spark.memco.ai for current pricing and plan details.
Can I use Spark in CI/CD?
Yes. For non-interactive environments like CI/CD pipelines, set the SPARK_API_KEY environment variable instead of using spark login:
export SPARK_API_KEY=sk_your_key_here
spark query "deployment error in Kubernetes" --tag domain:deployment
The API key is checked before any other credential source, so it works without an interactive login session. See Authentication for details on credential priority.
What happens when I share a solution?
When you run spark share, only three things are stored:
- Title — A short description you provide via --title
- Content — The solution text you provide via --content
- Tags — The semantic tags you attach via --tag
Source code, file paths, environment variables, and credentials are never included. You control exactly what is shared. See the Privacy Model for the full data flow.
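Putting the three flags together, a share might look like the following. The flags come from this page; the title, content, and tag values are made-up examples, not real shared solutions:

```shell
# Share a solution: only the --title, --content, and --tag values
# are transmitted. The specific strings here are hypothetical.
spark share \
  --title "Fix ECONNRESET behind a corporate proxy" \
  --content "Set HTTPS_PROXY before running the fetch step." \
  --tag "domain:networking"
```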
Have a question not covered here? Open an issue on GitHub and we will add it.