# Best Practices
These tips will help you get better results from Spark queries, write more useful solutions, and integrate Spark smoothly into your daily workflow.
## Writing effective queries
### Include the actual error text
The recommendation engine matches against real error messages. Including the exact error text dramatically improves result quality.
```bash
# Good — includes the real error
spark query "TypeError: Cannot read properties of undefined (reading 'map')" \
  --tag framework:react:18 \
  --pretty

# Bad — too vague to match anything specific
spark query "it doesn't work" --pretty
```

Copy-paste the first line of the error message directly into your query. This is the single most effective thing you can do for better matches.
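You can even script this: capture the failing command's stderr and take its first line. A minimal sketch (the failing command here is simulated, and the query is echoed rather than run, since it assumes `spark` is on your PATH):

```bash
# Simulate a failing command that writes an error to stderr.
sh -c 'echo "TypeError: Cannot read properties of undefined" >&2; exit 1' 2> err.log || true

# The first line of stderr is usually the most matchable part of the error.
first_line=$(head -n 1 err.log)

# Print the command you would run (drop the echo to actually run it).
echo spark query "$first_line" --tag framework:react:18 --pretty
```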
### Always add semantic tags
Tags narrow your results to solutions from developers using the same stack. Without tags, you get broader results that may not apply to your specific language, framework, or version.
At minimum, include:
- Language and version: `--tag language:node:20`
- Framework: `--tag framework:express:4`
- Task type: `--tag task_type:bug_fix`
```bash
spark query "CORS error when calling API from React app" \
  --tag language:typescript \
  --tag framework:react:18 \
  --tag task_type:bug_fix \
  --tag error_type:CORS \
  --pretty
```

### Use `--pretty` for interactive use
The `--pretty` flag formats output for human readability with alignment, colors, and section separators. Always use it when running Spark manually in a terminal:
```bash
spark query "webpack chunking strategy for large app" \
  --tag library:webpack:5 \
  --tag task_type:performance \
  --pretty
```

When running Spark programmatically or piping output to another tool, omit `--pretty` to get the raw output format.
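For scripted use, one pattern is to capture the raw output to a file for downstream tooling. A sketch (the call is guarded with `command -v` so the script degrades gracefully where Spark isn't installed; nothing is assumed about the raw format beyond it being plain text):

```bash
query="webpack chunking strategy for large app"

# Run the query without --pretty so the output has no colors or alignment.
if command -v spark >/dev/null 2>&1; then
  spark query "$query" --tag library:webpack:5 > results.txt
else
  echo "spark not installed; skipping query" > results.txt
fi

# Downstream tools can now parse results.txt as plain text.
cat results.txt
```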
## After solving a problem
### Share your solution — it takes 30 seconds
If you just spent 20 minutes debugging a problem, take 30 seconds to share the solution. It's the highest-leverage thing you can do for the community:
```bash
spark share <session-id> \
  --title "Fix for the problem you just solved" \
  --content "What the problem was, why it happened, and how to fix it" \
  --tag language:python:3.12 \
  --tag task_type:bug_fix
```

Every solution you share helps the next developer skip the debugging you just did.
### Rate recommendations with `spark feedback`
After applying a recommendation, tell Spark whether it helped:
```bash
# The recommendation solved your problem
spark feedback <session-id> --helpful

# The recommendation was inaccurate or unhelpful
spark feedback <session-id> --not-helpful
```

Feedback directly affects confidence scores. Helpful ratings push good solutions higher in future results. Unhelpful ratings flag solutions that may be outdated or incorrect. Both are valuable.
Even if you didn't use a recommendation, rating it as unhelpful is useful data. It tells the engine that the match wasn't relevant for your query, which improves future matching.
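Feedback can even be wired into your verification step: after applying a fix, run your tests and rate the session based on the result. A sketch, where the session id and the test command are placeholders and the final call is echoed rather than run (it assumes `spark` is on your PATH):

```bash
# Hypothetical session id printed by the earlier `spark query` call.
SESSION_ID="sess-1234"

# Stand-in for your real verification step (e.g. your project's test runner).
run_tests() { true; }

# Pick the feedback flag based on whether the fix actually worked.
if run_tests; then
  flag="--helpful"
else
  flag="--not-helpful"
fi

# Drop the echo to actually submit the feedback.
echo spark feedback "$SESSION_ID" "$flag"
```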
## When results aren't great
If your first query doesn't return useful results, try these strategies:
### Try different tags
Swap out or add tags to shift the results. If `--tag library:webpack:5` didn't help, try `--tag task_type:configuration` or `--tag domain:web`:
```bash
# First attempt
spark query "webpack build fails silently" --tag library:webpack:5 --pretty

# Refined attempt with different tags
spark query "webpack build fails silently" \
  --tag library:webpack:5 \
  --tag task_type:configuration \
  --tag language:typescript \
  --pretty
```

### Include more context in the query
Add details that might match solution descriptions in the network:
```bash
# Sparse query
spark query "prisma error" --pretty

# Richer query with context
spark query "Prisma migrate dev fails with shadow database permission error on PostgreSQL" \
  --tag library:prisma:5 \
  --tag domain:database \
  --tag task_type:configuration \
  --pretty
```

### Check that your query includes the specific error
Generic descriptions like "it crashed" or "doesn't work" rarely match anything useful. If you have an error message, a status code, or an exception name, include it.
### Broaden your version tags
If you're using a very new version, try dropping the version from your tag to get results from adjacent versions:
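A rough mental model of why this helps (an assumption for illustration, not Spark's documented matching algorithm): a shorter tag acts as a prefix that covers any more specific tag extending it past a segment boundary.

```bash
# Assumed model: tag A covers tag B if B equals A, or extends A past a
# ":" or "." boundary — so library:prisma:5 covers library:prisma:5.3
# but not library:prisma:50.
tag_matches() {
  case "$2" in
    "$1" | "$1".* | "$1":* ) return 0 ;;
    * ) return 1 ;;
  esac
}

tag_matches "library:prisma:5" "library:prisma:5.3" && echo "5 covers 5.3"
tag_matches "library:prisma:5" "library:prisma:50" || echo "5 does not cover 50"
```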
```bash
# Too specific — maybe nobody has shared a solution for 5.3 yet
--tag library:prisma:5.3

# Broader — matches solutions for any Prisma 5.x
--tag library:prisma:5
```

## IDE integration
Spark works best when it's part of your natural workflow. If you use an AI coding agent in your IDE (Claude Code, Cursor, Windsurf, etc.), Spark can integrate directly so your agent queries the network automatically when it encounters errors.
Check the Getting Started guide for setup instructions specific to your editor.
## Summary
| Practice | Why it matters |
|---|---|
| Include the real error text | Matches against what other developers searched for |
| Add semantic tags | Narrows results to your exact stack and version |
| Use `--pretty` in terminal | Human-readable formatting for interactive use |
| Share solutions after solving | Seconds of your time save other developers hours of debugging |
| Rate with `spark feedback` | Improves confidence scores for future queries |
| Refine queries when results miss | Different tags and more context unlock better matches |