
Team Workflows

Concrete workflows showing how teams use Spark in practice. Each includes the actual commands your developers will run.

Workflow 1: Error resolution

Scenario: A developer hits an unfamiliar error in legacy code — a cryptic database migration failure that nobody on the current team has seen before.

Without Spark: The developer spends 2-3 hours reading through migration history, searching Stack Overflow, and experimenting with fixes.

With Spark:

# Developer hits the error and queries Spark
spark query "ActiveRecord::StatementInvalid PG::UndefinedColumn alter_table" --pretty

Spark returns a recommendation from a teammate who solved this exact issue two weeks ago:

Recommendation #1 (relevance: 0.94)
Author: jamie@yourteam.com
Title: PostgreSQL migration failure on renamed columns

Solution: The migration references a column that was renamed in a
previous migration but the schema cache is stale. Run
`bin/rails db:schema:cache:clear` before retrying. If the column
was renamed, update the migration to use the new column name.

Tags: postgresql, activerecord, migrations, schema-cache

The developer applies the fix in minutes. Optionally, they provide feedback:

# Confirm the recommendation worked
spark feedback rec_abc123 --rating up --comment "Exact fix, schema cache was stale"

Workflow 2: Senior dev knowledge sharing

Scenario: A senior developer spends an afternoon debugging a tricky Kubernetes deployment issue — pods were failing health checks because of a misconfigured readiness probe timeout interacting with a slow database connection pool warmup.

After solving it:

spark share \
  --title "K8s readiness probe timeout vs connection pool warmup" \
  --solution "When using HikariCP with default pool warmup, the readiness probe \
must account for connection pool initialization time. Set \
initialDelay to 30s and timeout to 10s. The default 5s \
initialDelay causes pods to restart before the pool is ready, \
creating a crash loop. Also set minIdle to match the expected \
baseline load to reduce warmup time." \
  --tags "kubernetes,health-checks,hikaricp,connection-pool,deployment"
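For reference, the probe timings described in that share map onto a Kubernetes pod spec roughly like this. This is an illustrative fragment, not taken from the original incident: the container name, image, and health endpoint are placeholders, while `initialDelaySeconds` and `timeoutSeconds` are the standard Kubernetes probe fields the solution refers to.

```yaml
# Sketch of a container spec with the recommended probe timings.
containers:
  - name: api            # placeholder container name
    image: yourapp/api   # placeholder image
    readinessProbe:
      httpGet:
        path: /healthz   # assumed health endpoint
        port: 8080       # assumed port
      initialDelaySeconds: 30   # wait out HikariCP pool warmup
      timeoutSeconds: 10        # per-probe timeout from the share
```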

Now every developer on the team — and their AI agents — can find this solution. The next time anyone encounters pods crash-looping during deployment, a simple query surfaces the fix:

spark query "kubernetes pods restarting CrashLoopBackOff readiness probe" --pretty

The value of this workflow is asymmetric: the senior developer spends 30 seconds sharing, but saves potentially hours of debugging for every teammate who encounters the same class of problem.

Workflow 3: Onboarding new developers

Scenario: A new hire joins the team and starts working on the main application. They're unfamiliar with the team's conventions, infrastructure quirks, and common pitfalls.

Their AI agent queries Spark as it works:

# New dev's agent encounters a test failure pattern
spark query "factory_bot trait not found for user model" --pretty

# Agent finds team convention
# Recommendation: "Our User factory uses :with_permissions trait defined in
# spec/factories/shared_traits.rb, not in the user factory file.
# Import shared traits in rails_helper.rb."

# New dev is setting up their local environment
spark query "docker compose redis connection refused on macOS" --pretty

# Agent finds team-specific fix
# Recommendation: "On Apple Silicon Macs, use redis:7-alpine image instead
# of redis:latest. Add platform: linux/arm64 to the redis service in
# docker-compose.override.yml."

# New dev hits a deployment question
spark query "how to deploy feature branch to staging environment" --pretty

# Agent finds team's deployment workflow
# Recommendation: "Push to a branch named staging/<feature-name>. The
# CI pipeline auto-deploys branches with the staging/ prefix to the
# staging cluster. Access at <feature-name>.staging.yourapp.com."

Instead of interrupting teammates or searching through Confluence pages, the new developer's agent finds team-specific answers immediately. The knowledge base acts as a living, queryable onboarding document.
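Some teams bootstrap this onboarding knowledge deliberately rather than waiting for it to accumulate. The sketch below is one way to do that; the helper name and the tab-separated file format are assumptions, not Spark features — only the `spark share` flags come from the examples above.

```shell
# Hypothetical bulk-seeding helper (not part of the Spark CLI): read
# runbook entries from a tab-separated file (title<TAB>solution<TAB>tags)
# and share each one with the team.
seed_spark_from_tsv() {
  while IFS="$(printf '\t')" read -r title solution tags; do
    spark share --title "$title" --solution "$solution" --tags "$tags"
  done < "$1"
}

# Usage: seed_spark_from_tsv team_conventions.tsv
```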

Workflow 4: Post-incident knowledge sharing

Scenario: After a production incident — a memory leak caused by an unbounded cache in a background job worker — the team runs a postmortem and documents the root cause.

The on-call engineer shares the incident findings:

spark share \
  --title "Memory leak in Sidekiq workers from unbounded Rails.cache" \
  --solution "Background jobs using Rails.cache.fetch without an expires_in \
option caused unbounded memory growth in Sidekiq workers. The default \
memory store has no eviction policy. Fix: always set expires_in on \
cache entries in background jobs, or switch to a bounded cache store \
like LRU. Monitoring: alert on Sidekiq worker RSS exceeding 512MB. \
Detection: watch for linear memory growth in worker metrics over hours." \
  --tags "sidekiq,memory-leak,rails-cache,production-incident,monitoring"

Now the pattern is permanently documented in the knowledge network. Future queries about similar symptoms surface the fix immediately:

# Six months later, a different developer notices memory growth
spark query "sidekiq worker memory growing unbounded RSS increasing" --pretty
 
# Returns the incident finding immediately — no need to search through
# postmortem documents or ask who was on-call six months ago

Post-incident shares are some of the highest-value contributions to a team's knowledge base. They capture hard-won operational knowledge that is otherwise lost in postmortem documents that nobody reads twice.

Pattern: integrating Spark into your daily workflow

The highest-value habit is simple: query before you debug, share after you solve.

# Before starting work on an error
spark query "the error message or problem description" --pretty
 
# After solving the problem
spark share --title "Short description" \
  --solution "What you did and why it worked" \
  --tags "relevant,tags,here"

This two-command pattern, adopted across a team, is what produces the compound knowledge returns described in the research.
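To make "query before you debug" automatic, the query can be wrapped in a small shell helper. This is a sketch, not part of the Spark CLI: the function name and its behavior are illustrative, and it assumes `spark` is on your PATH.

```shell
# Hypothetical helper (not a Spark feature): run any command, and if it
# fails, query Spark with the first line of its stderr.
try_with_spark() {
  errfile=$(mktemp)
  "$@" 2>"$errfile"          # run the command, capturing stderr
  status=$?
  if [ "$status" -ne 0 ]; then
    first_error=$(head -n 1 "$errfile")
    echo "Command failed; querying Spark for: $first_error"
    spark query "$first_error" --pretty
  fi
  rm -f "$errfile"
  return "$status"
}

# Usage: try_with_spark bin/rails db:migrate
```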