2659 words | No Slides

Video 21.5: Parallel Research Patterns

Course: Claude Code - Parallel Agent Development | Section: 21. Subagents for Parallel Work | Length: 5 minutes | Presenter: Daniel Treasure


Opening Hook

This is where subagents become a force multiplier. By running multiple agents in parallel, you can research problems faster, generate documentation at scale, and investigate complex codebases simultaneously. In this final video of the section, we'll explore the most powerful parallel research patterns: multi-agent investigations, documentation generation pipelines, chained workflows, and isolating high-volume operations. These are the patterns that turn a 2-hour analysis into a 15-minute parallel search.


Key Talking Points

What to say:

Pattern 1: Multiple Independent Investigations (Parallel Discovery)

- Problem: You need to understand 5 different authentication approaches
- Traditional approach: Investigate them sequentially (slow)
- Parallel approach: Spawn 5 independent Explore agents, one per approach
  - Each agent searches independently, with no dependencies
  - Results come back; you synthesize and compare
- Best for: Research, exploration, comparative analysis
- Example tasks:
  - "Research OAuth 2.0, OpenID Connect, SAML, JWT, and mTLS in this codebase" → 5 parallel Explore agents
  - "Find all instances of database access in the auth layer, API layer, and storage layer" → 3 parallel Explore agents
  - "Analyze the caching strategy in these 4 different services" → 4 parallel agents
- Time savings: 1 hour sequential → 15 minutes parallel
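To make the fan-out concrete, here is a minimal Python sketch of the pattern's shape. `run_explore_agent` is a hypothetical stand-in for dispatching one Explore agent; in Claude Code you spawn these interactively, not from a script, so this only models the structure: independent workers, then one synthesis step.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one Explore agent's investigation.
def run_explore_agent(topic: str) -> str:
    return f"summary of {topic} usage in this codebase"

topics = ["OAuth 2.0", "OpenID Connect", "SAML", "JWT", "mTLS"]

# Fan out: one independent investigation per topic, no shared state.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = dict(zip(topics, pool.map(run_explore_agent, topics)))

# Synthesis step: all five summaries arrive together for comparison.
for topic, summary in results.items():
    print(f"{topic}: {summary}")
```

The key property is that no investigation waits on another, which is exactly what makes the 1-hour-to-15-minutes claim possible.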

Pattern 2: Documentation Generation at Scale (Delegated Documentation)

- Problem: You have 50 functions that need to be documented
- Traditional approach: Document them manually, or ask the AI to do all 50 in one go (slow, expensive)
- Parallel approach:
  1. Main agent lists all 50 functions
  2. Spawns 50 General-Purpose subagents in the background, each documenting 1 function
  3. Results compile automatically
- Best for: Large-scale documentation, API docs, code libraries
- Example workflow:
  - Agent 1: Lists all exported functions
  - Agents 2-51: Each generates docs for 1 function (with code example, parameters, return value)
  - Results aggregated into final documentation
- Time savings: 2 hours sequential → 10 minutes parallel + compilation
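The fan-out-then-aggregate shape can be sketched in a few lines of Python. `document_function` here is a hypothetical placeholder for one documentation subagent (a real General-Purpose agent would write the section body); the sketch shows only the list → fan-out → compile structure.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub for one documentation subagent.
def document_function(name: str) -> str:
    return f"### {name}()\n\nParameters, return value, and a usage example."

functions = ["authenticateUser", "validateToken", "refreshToken", "logoutUser"]

# Fan out: one agent per function, then a single aggregation step.
with ThreadPoolExecutor() as pool:
    sections = list(pool.map(document_function, functions))

# Compile into one markdown document, ordered by function name.
api_docs = "\n\n".join(sorted(sections))
```

Because each section is generated independently, there is no sequential bottleneck; only the final compile step touches all the results at once.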

Pattern 3: Chained Subagents (Sequential Multi-Step Workflows)

- Problem: You need to refactor code, then write tests, then update docs
- Traditional approach: Do all 3 by hand, one after another
- Chained approach: The output of one agent feeds the input of the next
  - Agent A: Refactors the code, saves the result
  - Agent B: Takes Agent A's refactored code, writes tests
  - Agent C: Takes Agent B's test coverage report, updates docs
- Different from parallel: These are sequential, but pipelined
- Best for: Multi-step transformations where order matters
- Good for: Code generation → testing → documentation → deployment
- Benefit: While Agent A refactors, you're not waiting. Agent B starts as soon as A is done.
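A rough Python sketch of the chain's data flow, with hypothetical `refactor`, `write_tests`, and `update_docs` functions standing in for Agents A, B, and C. The point is only the shape: each stage consumes the previous stage's output, so the chain is sequential but pipelined with no manual handoff.

```python
# Hypothetical stand-ins for the three chained agents.
def refactor(source: str) -> str:           # Agent A
    return source.replace("callbacks", "async/await")

def write_tests(refactored: str) -> str:    # Agent B
    return f"test suite covering: {refactored}"

def update_docs(test_report: str) -> str:   # Agent C
    return f"README updated using: {test_report}"

# Agent B starts the moment Agent A's output exists; no idle gap between steps.
result = update_docs(write_tests(refactor("auth module using callbacks")))
```

Contrast this with Pattern 1: here order matters, so the win is eliminating handoff delays rather than running everything at once.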

Pattern 4: Isolating High-Volume Operations (Noise Isolation)

- Problem: Running tests generates 1000 lines of output. One test's output is useful; 50 tests' output floods your context.
- Parallel approach: Delegate high-volume operations to background subagents
  - The agent runs tests, linting, or file analysis in isolation
  - Only the summary comes back to you
  - The main agent's context stays clean
- Best for: Test output, log analysis, file system scanning, large Git diffs
- Example tasks:
  - "Bash agent: Run 'npm test' and summarize only failures and coverage"
  - "Explore agent: Scan all 500 files in the project, report only those with TODO comments"
  - "Bash agent: Run linting and summarize issues by file"
- Benefit: You get the insight without the noise
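The signal-vs-noise idea can be sketched in a few lines of Python. The raw log below is fabricated for illustration; what matters is that only the distilled summary, never the ~900-line log, would come back to the main context.

```python
# Fabricated high-volume test log for illustration.
raw_output = ["PASS test_login", "FAIL test_refresh: timeout",
              "PASS test_logout"] * 300  # ~900 noisy lines

# The subagent's job: reduce the log to its unique failures.
failures = sorted({line for line in raw_output if line.startswith("FAIL")})
summary = f"{len(raw_output)} lines of output -> {len(failures)} unique failure(s): {failures}"
```

One line of summary replaces hundreds of lines of noise, which is exactly the contract to put in the subagent prompt.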

Common Thread: Asking for the Right Level of Summary

- Key to all patterns: Tell agents what summary level you need
- "Find all database queries and report locations + patterns" (useful summary)
- vs. "Run find . -name '*.js' and show me everything" (useless noise)
- Good subagent requests have a clear output format in mind

How to Combine Patterns

- Documentation at scale + background execution = 50 docs generated while you work on features
- Multiple investigations + foreground execution = Rapid comparative analysis with your input
- Chained subagents + background execution = Multi-step transformation pipeline that doesn't block you

What to show on screen:

  1. Multiple investigations diagram: Show 5 Explore agents branching in parallel
     - Each one searching a different auth approach
     - Results flowing back to the main agent
     - A merge point where you compare results

  2. Documentation generation workflow: Show the pipeline
     - List all functions (main agent)
     - Spawn 1 agent per function (visual: 5-10 agents in parallel)
     - Each generating docs
     - An aggregation step bringing results together

  3. Chained agents diagram: Show sequential dependencies
     - Agent A → refactor
     - Agent B → test (waits for A)
     - Agent C → docs (waits for B)
     - A timeline showing relative speed vs. sequential

  4. High-volume operation isolation: Show the contrast
     - Full test output (1000 lines) being generated in the background
     - The summary (10 lines) appearing in the foreground
     - Contrast the noise vs. the signal

  5. Live demo of one pattern: Show actual execution
     - Pick one pattern and execute it live
     - Show agents starting, progress, and results appearing
     - Show the speed improvement vs. what sequential would look like

Demo Plan

Setup (30 seconds):

- Open Claude Code with a real codebase (or a sample project with multiple files)
- Have a list of tasks or functions ready to parallelize
- Show the file structure and what you're analyzing

Pattern 1 Demo: Multiple Independent Investigations (1.5 minutes)

1. Say: "I want to understand how errors are handled in different parts of this system."
2. Identify 4 areas: API layer, database layer, auth layer, and background jobs
3. Spawn 4 Explore agents in the background (Ctrl+B), one per area:
   - "Search for all error handling in the API layer. Show catch blocks, error middleware, and error types."
   - "Search for all error handling in the database layer. Show error checks and retry logic."
   - "Search for all error handling in the auth layer. Show validation failures and permission errors."
   - "Search for all error handling in background jobs. Show timeout and failure handling."
4. Show all 4 starting concurrently
5. Continue working while they run (say: "While these 4 searches run in parallel...")
6. When results start appearing, show synthesizing them (patterns emerge faster than with sequential searching)
7. Call out the time savings: "That would take 10 minutes if I searched sequentially. All done in parallel."

Pattern 2 Demo: Documentation Generation (1.5 minutes)

1. Say: "Now imagine I have 10 functions I need to document."
2. List the functions on screen (or load them from a file)
3. Explain: "Instead of documenting all 10 myself, I'll spawn 10 agents, each documenting one."
4. Show the main agent creating a simple list of functions to document:
   - authenticateUser()
   - validateToken()
   - refreshToken()
   - logoutUser()
   - encryptPassword()
   - [etc.]
5. Spawn background agents (Ctrl+B) for each:
   - Agent 1: Document authenticateUser() with parameters, return value, and example
   - Agent 2: Document validateToken() with parameters, return value, and example
   - [etc.]
6. Show multiple agents starting
7. As they complete, show the docs aggregating (e.g., a markdown file building up)
8. Conclusion: "10 functions documented in parallel. No sequential bottleneck."

Pattern 3 Demo: Chained Subagents (1 minute)

1. Say: "Chained agents are for sequential work where you still benefit from pipelining."
2. Show a concrete example: "Refactor this module, then add tests, then update the README"
3. Explain the chain:
   - Agent A refactors the code and saves it
   - Agent B runs against Agent A's output and writes tests
   - Agent C runs against Agent B's test file and updates the README with test examples
4. Show starting Agent A
5. As soon as Agent A completes (and you're notified), Agent B starts
6. Agent C starts after Agent B
7. Mention: "This is sequential, but it feels faster because agents are ready immediately when each step completes. No manual handoff delays."

Pattern 4 Demo: Isolating High-Volume Operations (1 minute)

1. Say: "Finally, high-volume operations. Test output is a perfect example."
2. Show running a test command in the foreground (lots of output flooding in)
3. Say: "See all that noise? That's hard to parse."
4. Now show running the same command in the background with a request for a summary:
   - "Bash agent: Run npm test and summarize. Report only failures, success count, and coverage change."
5. Show the background agent running
6. When done, show a clean summary appearing (10 lines instead of 1000)
7. Mention: "Same information, way less noise. And it didn't block my work."

Synthesis: Combining Patterns (30 seconds)

1. Say: "You can combine these patterns."
2. Example: "Investigate 5 different caching approaches (Pattern 1) while documentation generates (Pattern 2) and tests run in the background (Pattern 4). All happening simultaneously."
3. Show a visual of multiple patterns stacked (or just describe it convincingly)
4. Conclusion: "This is where subagents become a productivity superpower."


Code Examples & Commands

Pattern 1: Multiple Independent Investigations

# Spawn multiple Explore agents in parallel to investigate different aspects
[Ctrl+B] "Explore agent: Find all HTTP requests in the codebase. Report URLs, methods, and error handling."
[Ctrl+B] "Explore agent: Find all database queries in the codebase. Report query types and N+1 risks."
[Ctrl+B] "Explore agent: Find all authentication checks in the codebase. Report where auth is validated."
[Ctrl+B] "Explore agent: Find all file system access in the codebase. Report patterns and risk areas."

# All 4 run in parallel. The results synthesize into the full picture.

Pattern 2: Documentation Generation at Scale

# Main agent step 1: List functions needing documentation
[Foreground] "List all exported functions in src/utils/ with signatures"

# After getting the list, spawn agents for each function
[Ctrl+B] "Document parseDate() with parameters, return type, error cases, and a working example"
[Ctrl+B] "Document formatDate() with parameters, return type, error cases, and a working example"
[Ctrl+B] "Document validateEmail() with parameters, return type, error cases, and a working example"
[Ctrl+B] "Document sanitizeInput() with parameters, return type, error cases, and a working example"

# As results come in, aggregate into final API documentation
[Foreground] "Compile the documentation I received into a markdown file. Order by function name."

Pattern 3: Chained Subagents (Sequential with Parallelization Benefits)

# Step 1: Refactor
[Foreground] "Refactor the authentication module to use async/await. Save the refactored code."

# Step 2: Test (starts when Step 1 completes)
[Foreground] "Write unit tests for the refactored auth module. Cover happy path and error cases."

# Step 3: Documentation (starts when Step 2 completes)
[Foreground] "Update the README.md to reflect the new authentication flow. Include before/after examples."

# In practice, agents can start as soon as previous output is available.
# Even though sequential, you're not blocked between steps.

Pattern 4: Isolating High-Volume Operations

# Test output isolation
[Ctrl+B] "Run npm test. Summarize: total tests, passed count, failed count, failures listed, coverage change."

# Linting output isolation
[Ctrl+B] "Run 'npx eslint .' and summarize: total files, files with errors (list each), files with warnings, top 3 most common issues."

# File scanning isolation
[Ctrl+B] "Scan the codebase for all TODO and FIXME comments. Group by file and severity. Count by type."

# Large diff isolation
[Ctrl+B] "Run git diff main. Summarize: files changed, lines added/removed, risk areas (security, database, auth)."

# Key: Always ask for a clear summary. Don't ask for raw output.

Combining Multiple Patterns

# Investigate different approaches in parallel (Pattern 1)
[Ctrl+B] "Research REST API design patterns"
[Ctrl+B] "Research GraphQL design patterns"
[Ctrl+B] "Research gRPC design patterns"

# Generate documentation for 3 functions in parallel (Pattern 2)
[Ctrl+B] "Document getUserById() function"
[Ctrl+B] "Document createUser() function"
[Ctrl+B] "Document updateUser() function"

# Run tests and linting in the background (Pattern 4)
[Ctrl+B] "Run npm test and summarize failures"
[Ctrl+B] "Run npx eslint and report files with errors"

# All 7 agents run in parallel. You stay in foreground and keep working.

Gotchas & Tips

Gotcha: "I spawned 50 agents and Claude is now slow"

- Don't spawn more agents than your system can handle concurrently
- Start with 3-5 parallel agents and see how it feels
- If the system gets slow, reduce parallelism or run in smaller batches

Gotcha: "Results from parallel agents are hard to compare"

- Give each agent a specific output format
- Example: "Report as: [Function] | [Status] | [Key Finding] | [Recommendation]"
- Structured output is much easier to synthesize than free-form results

Gotcha: "One agent's failure blocked my whole analysis"

- Background agents fail independently; one failure doesn't crash the others
- This is actually a feature: you still get 4/5 results even if one agent hits an issue
- Check the results carefully and re-run any that failed

Tip: Scale gradually

- Start with 2-3 parallel agents until you're comfortable
- Move to 5-10 once the pattern feels natural
- You're aiming for "fast enough", not "maximum possible"

Tip: Ask for structured output

- Don't ask agents to "Find all X". Ask them to "Find all X and report as: [location] [context] [risk]"
- Structured summaries are 10x more useful for synthesis
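A quick illustration of why the structured format pays off. The line below is a hypothetical example of agent output in the pipe-delimited format suggested above; unlike free-form prose, it splits cleanly into fields you can compare across agents.

```python
# Hypothetical agent output in the requested
# "[Function] | [Status] | [Key Finding] | [Recommendation]" format.
line = "validateToken | WARN | no expiry check | add exp claim validation"

# Structured output parses mechanically; free-form prose would not.
function, status, finding, recommendation = (f.strip() for f in line.split("|"))
```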

Tip: Use Explore agents for parallel investigation

- Explore is fast, cheap, and perfect for parallel codebase searching
- 10 parallel Explore agents are far cheaper than 1 General-Purpose agent doing it all

Tip: Reserve the foreground for synthesis

- Run investigations in the background
- Do the thinking and synthesis in the foreground
- This may be the opposite of your intuition, but it's more efficient

Tip: Document your patterns

- If you find a pattern that works (e.g., "investigate 4 auth approaches"), document it
- Next time you need similar work, you have a template

Tip: Monitor aggregate token usage

- Multiple parallel agents = multiple sessions = token costs multiply
- Watch your usage. Parallel is fast but not free.
- A good trade-off: 10x speed for 2-3x cost


Lead-out

You've now completed the section on subagents for parallel work. You understand the landscape: subagents vs. teams, the four built-in agents, custom agent creation, foreground vs. background execution, and the most powerful parallel research patterns. In the next section, we'll explore git worktrees and how to layer them on top of subagent patterns for even more powerful multi-stream development. But first, take what you've learned here and try it on a real project. Spawn your first parallel investigation. You'll feel the difference immediately.



Prep Reading

  • Blog: "Scaling Code Analysis with Parallel Agents" — practical examples
  • Docs: "Orchestration Patterns" — understand agent sequencing and dependencies
  • Examples: Look for open-source projects using agent patterns at scale
  • Think: What's a large task in your workflow that could be parallelized?

Notes for Daniel

This is the capstone video for the section. It should feel like you're unlocking the real power of subagents. The message is: "With these patterns, you can do in minutes what would take hours sequentially."

Live demonstration is critical. Viewers need to see multiple agents starting, running in parallel, and results coming back. It's a "wow moment" when 5 agents finish faster than 1 would.

The documentation generation pattern is especially powerful for viewers because it's concrete and immediately applicable. Show it working on a real project if possible.

Talk through the token cost honestly. More agents = more tokens. But the time savings often justify it. Let viewers make that trade-off decision.

Tone: You're showing advanced patterns, but they're not complex—just different ways of thinking about work. "Instead of doing this sequentially, do it in parallel. It's that simple, and the results speak for themselves."

Mention the next section (worktrees) as a natural follow-up. "This is subagents. Next, we layer in git worktrees, and things get really interesting."

Closing thought: "Subagents are about doing more in less time. Parallel work is the future of development. You've now learned the patterns that make it happen."