Video 23.2: Hooks for Agent Teams
Course: Claude Code - Parallel Agent Development (Course 4)
Section 23: Orchestration and Best Practices
Video Length: 4–5 minutes
Presenter: Daniel Treasure
Opening Hook
Parallel agents are powerful, but without guardrails they become chaos. One agent marks a task finished that doesn't actually work. Another is about to go idle while their test suite is still failing. You need hooks: quality gates that fire at critical moments and enforce your standards. Today, we're building a multi-agent workflow with real teeth.
Key Talking Points
What to say:
What Are Hooks?
- Hooks are scripts that fire at specific moments in the agent lifecycle: when an agent is about to go idle, or when a task is marked complete.
- They run on your machine (not the agent's), so you control the execution.
- Each hook receives structured JSON input about what's happening and can accept or reject the action.
TeammateIdle Hook
- Fires when an agent is about to go idle (no more tasks to claim).
- Input: JSON with teammate name, team name, current working directory, and session ID.
- Exit code 0 = allow idle. Exit code 2 = block with feedback (the error message on stderr is sent to the agent).
- Use case: "Don't let Agent A go idle until their linter passes. If it fails, put them back to work."
- You can write the hook in bash, Python, or any language.
TaskCompleted Hook
- Fires when an agent marks a task as complete.
- Input: JSON with task ID, task subject, task description, teammate name, and team name.
- Exit code 0 = allow completion. Exit code 2 = block with feedback.
- Use case: "Before marking this task done, run the test suite. If tests fail, reject and send the agent back to fix it."
- The agent sees the feedback and knows why they can't finish yet.
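For orientation, the input delivered on stdin might look like the following. This is an illustrative sketch assembled from the fields used in the demo scripts later in this video, not an authoritative schema; check the current Claude Code docs for exact field names:

```json
{
  "task_id": "auth_module",
  "task_subject": "Implement auth module",
  "task_description": "Add login and session handling",
  "teammate_name": "Agent_1",
  "team_name": "backend-team",
  "cwd": "/home/user/project"
}
```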
Hook Configuration
- Hooks live in your Claude Code settings.json under the "hooks" key.
- Paths can be relative to your project root or absolute.
- Multiple hooks can run (teammateIdle and taskCompleted can both fire).
Writing Hook Scripts
- Keep them simple and fast (they block the agent, so slow hooks are bad UX).
- Exit codes matter: 0 = success/allow, 2 = block with error, anything else = error.
- Send feedback to stderr (the agent sees it).
- Read the input JSON from stdin.
What to show on screen:
- Claude Code settings.json with hooks configured
  - Show the "hooks" section.
  - Highlight the two hook types and their paths.
  - Explain that hook paths are relative to the project root.
- A hook script example (bash or Python)
  - Show how to parse the JSON input.
  - Show simple logic: check a condition, exit with 0 or 2.
  - Explain the feedback message that the agent sees.
- Hook firing in action
  - Show an agent running a task, completing it, and the hook blocking the completion.
  - Show the feedback message the agent receives.
  - Show the agent going back to work to fix the issue.
- Hook logs/output
  - Show what the hook printed.
  - Explain how you can use logs to monitor hook effectiveness.
Demo Plan
Scenario: You have a Python project with multiple agents. You want to enforce two gates:
1. TeammateIdle hook: before an agent goes idle, their code must pass linting (flake8).
2. TaskCompleted hook: before a task is marked done, pytest must pass.
Timing: ~4 minutes
Step 1: Show the Settings Configuration (30 seconds)
- Open `.claude/settings.json` (or create it if it doesn't exist).
- Add the hooks configuration:

```json
{
  "hooks": {
    "teammateIdle": ".claude/hooks/check-lint.sh",
    "taskCompleted": ".claude/hooks/check-tests.sh"
  }
}
```

- Explain: "These paths point to scripts that will enforce quality."
Step 2: Create the TeammateIdle Hook (45 seconds)
- Create `.claude/hooks/check-lint.sh`:

```bash
#!/bin/bash
set -e

# Read input JSON from stdin
input=$(cat)
teammate=$(echo "$input" | jq -r '.teammate_name')
cwd=$(echo "$input" | jq -r '.cwd')

cd "$cwd"

# Run flake8 on the project
if flake8 . --max-line-length=100; then
  # Linting passed, allow idle
  exit 0
else
  # Linting failed, block idle
  echo "LINT FAILURE: $teammate, your code has linting issues. Fix them before going idle." >&2
  exit 2
fi
```

- Explain: "This runs flake8. If it passes, the agent can go idle (exit 0). If it fails, we block them and they see the error message (exit 2)."
Step 3: Create the TaskCompleted Hook (45 seconds)
- Create `.claude/hooks/check-tests.sh`:

```bash
#!/bin/bash
set -e

# Read input JSON from stdin
input=$(cat)
task_id=$(echo "$input" | jq -r '.task_id')
task_subject=$(echo "$input" | jq -r '.task_subject')
teammate=$(echo "$input" | jq -r '.teammate_name')
cwd=$(echo "$input" | jq -r '.cwd')

cd "$cwd"

# Run pytest
if pytest -v --tb=short; then
  # Tests passed, allow completion
  echo "✓ All tests passed. Task $task_id approved for completion." >&2
  exit 0
else
  # Tests failed, block completion
  echo "TEST FAILURE: Task '$task_subject' has failing tests. Fix them, then mark complete again." >&2
  exit 2
fi
```

- Explain: "This runs pytest. Passing tests = task completes (exit 0). Failing tests = task is blocked (exit 2) and the agent sees the error."
Step 4: Make Hooks Executable (15 seconds)
- Run in terminal:

```bash
chmod +x .claude/hooks/check-lint.sh
chmod +x .claude/hooks/check-tests.sh
```

- Explain: "Scripts need execute permission."
Step 5: Trigger the Hooks in Action (90 seconds)
- Start an agent task (or show a recorded session).
- Agent completes a task; the TaskCompleted hook fires:
  - Show the hook running pytest.
  - Show pytest failing (intentionally introduce a failing test for the demo).
  - Show the hook output: "TEST FAILURE: ... Fix them, then mark complete again."
  - Agent sees the feedback and goes back to fix the test.
- Once the agent fixes the test, the TaskCompleted hook fires again:
  - Show pytest passing.
  - Show the hook output: "✓ All tests passed."
  - Task is approved; the agent moves on or goes idle.
- When the agent is about to go idle, the TeammateIdle hook fires:
  - Show the hook running flake8.
  - Show flake8 output (either passing or failing).
  - If passing, the agent goes idle.
  - If failing, the agent sees the feedback and fixes the linting issues.
Step 6: Show Hook Logs (30 seconds)
- Open a log file or show stderr output.
- Show a record of all hook executions:
```
[14:23:45] TaskCompleted hook fired for Agent_1 on task auth_module
[14:23:46] Running pytest... FAILED
[14:23:47] Feedback sent: TEST FAILURE...
[14:24:12] Agent_1 fixed the test
[14:24:13] TaskCompleted hook fired again
[14:24:14] Running pytest... PASSED
[14:24:15] Task approved for completion
[14:25:00] TeammateIdle hook fired for Agent_1
[14:25:01] Running flake8... PASSED
[14:25:02] Agent_1 approved to go idle
```

- Explain: "You can monitor hooks to see where quality issues surface."
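Claude Code won't necessarily produce a timeline like this for you; one simple way to get it is to have each hook append to a shared log file. A minimal sketch (the log path and messages here are assumptions; adapt them to your project):

```shell
#!/bin/bash
# Sketch: a tiny logging helper each hook can source or inline.
# Appends a timestamped line to a shared log so you can reconstruct
# a timeline of hook activity later. Log path is an assumption.
LOG_FILE="${HOOK_LOG:-/tmp/claude-hooks.log}"

log() {
  echo "[$(date '+%H:%M:%S')] $1" >> "$LOG_FILE"
}

# Example calls a hook might make at its decision points:
log "TaskCompleted hook fired for Agent_1 on task auth_module"
log "Running pytest... PASSED"
```

Because the log lives outside any one hook, both check-lint.sh and check-tests.sh can append to it and you get a single merged audit trail.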
Code Examples & Commands
Example 1: Complete TeammateIdle Hook (Bash)
```bash
#!/bin/bash
# .claude/hooks/check-lint.sh
# Blocks agents from going idle if code doesn't pass linting.
set -e

# Read JSON input from stdin
input=$(cat)
teammate=$(echo "$input" | jq -r '.teammate_name')
cwd=$(echo "$input" | jq -r '.cwd')
session_id=$(echo "$input" | jq -r '.session_id')

cd "$cwd"
echo "TeammateIdle Hook: Checking code quality for $teammate..." >&2

# Run flake8 on all Python files
if flake8 . --max-line-length=100 --count --show-source; then
  echo "✓ Linting passed. $teammate approved to go idle." >&2
  exit 0
else
  echo "✗ LINT FAILURE: $teammate's code has style issues. Fix them before going idle." >&2
  echo "  Run: flake8 --max-line-length=100 ." >&2
  exit 2
fi
```
Example 2: Complete TaskCompleted Hook (Bash)
```bash
#!/bin/bash
# .claude/hooks/check-tests.sh
# Blocks task completion if tests don't pass.
set -e

# Read JSON input from stdin
input=$(cat)
task_id=$(echo "$input" | jq -r '.task_id')
task_subject=$(echo "$input" | jq -r '.task_subject')
teammate=$(echo "$input" | jq -r '.teammate_name')
cwd=$(echo "$input" | jq -r '.cwd')

cd "$cwd"
echo "TaskCompleted Hook: Verifying task $task_id for $teammate..." >&2

# Run pytest
if pytest -v --tb=short 2>&1; then
  echo "✓ All tests passed. Task $task_id approved for completion." >&2
  exit 0
else
  echo "✗ TEST FAILURE: Task '$task_subject' has failing tests." >&2
  echo "  Run: pytest -v --tb=short" >&2
  exit 2
fi
```
Example 3: Advanced TaskCompleted Hook (Python)
```python
#!/usr/bin/env python3
# .claude/hooks/check-tests.py
# Advanced hook: runs tests and checks coverage.
import json
import os
import subprocess
import sys


def main():
    # Read JSON input from stdin
    input_data = json.loads(sys.stdin.read())
    task_id = input_data.get("task_id")
    task_subject = input_data.get("task_subject")
    teammate = input_data.get("teammate_name")
    cwd = input_data.get("cwd")

    os.chdir(cwd)
    print(f"TaskCompleted Hook: Validating task {task_id} ({task_subject})...", file=sys.stderr)

    # Run pytest with coverage
    try:
        subprocess.run(
            ["pytest", "-v", "--cov=.", "--cov-report=term"],
            check=True,
        )
        print(f"✓ All tests passed. Task {task_id} approved.", file=sys.stderr)
        return 0
    except subprocess.CalledProcessError:
        print(f"✗ TEST FAILURE: Task '{task_subject}' has failing tests.", file=sys.stderr)
        print("  Fix the tests and mark complete again.", file=sys.stderr)
        return 2


if __name__ == "__main__":
    sys.exit(main())
```
Example 4: Settings.json Configuration
```json
{
  "model": "claude-opus-4-6",
  "max_tokens": 8000,
  "hooks": {
    "teammateIdle": ".claude/hooks/check-lint.sh",
    "taskCompleted": ".claude/hooks/check-tests.sh"
  },
  "team": {
    "name": "backend-team",
    "agents": [
      { "name": "Agent_1", "role": "Backend Developer" },
      { "name": "Agent_2", "role": "Backend Developer" }
    ]
  }
}
```
Example 5: Combined Lint + Test Hook (Bash)
```bash
#!/bin/bash
# .claude/hooks/check-quality.sh
# Comprehensive quality check: linting + tests + coverage.
set -e

input=$(cat)
teammate=$(echo "$input" | jq -r '.teammate_name')
cwd=$(echo "$input" | jq -r '.cwd')
cd "$cwd"

echo "Quality Check for $teammate..." >&2

# Step 1: Linting
echo "  [1/3] Running flake8..." >&2
if ! flake8 . --max-line-length=100; then
  echo "✗ LINTING FAILED" >&2
  exit 2
fi
echo "  ✓ Linting passed" >&2

# Step 2: Tests
echo "  [2/3] Running pytest..." >&2
if ! pytest -v --tb=short; then
  echo "✗ TESTS FAILED" >&2
  exit 2
fi
echo "  ✓ Tests passed" >&2

# Step 3: Coverage (informational; `|| true` keeps a missing grep match
# from tripping `set -e` and exiting with the wrong code)
echo "  [3/3] Checking coverage..." >&2
coverage_result=$(pytest --cov=. --cov-report=term-missing 2>&1 | grep -E "TOTAL|coverage" || true)
echo "$coverage_result" >&2

echo "✓ All quality checks passed." >&2
exit 0
```
Gotchas & Tips
Gotcha 1: Slow Hooks
- If your hook runs a full test suite (5 minutes), the agent is blocked waiting. Bad UX.
- Tip: Keep hooks fast. Run only critical checks. Let the agent run full tests themselves.
Gotcha 2: Missing jq
- The example hooks use jq to parse JSON. If your system doesn't have jq, they'll fail.
- Tip: Install jq or rewrite hooks in Python/Node, where JSON parsing is built-in.
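A middle ground, if you want to stay in bash: delegate just the JSON parsing to Python's stdlib. A sketch (the sample input is hard-coded here so the snippet is self-contained; a real hook would read it with `input=$(cat)`):

```shell
#!/bin/bash
# Sketch: parse hook JSON without jq by shelling out to Python's stdlib.
# In a real hook, replace the hard-coded sample with: input=$(cat)
input='{"teammate_name": "Agent_1", "cwd": "/tmp"}'

# json_get KEY -> prints the value of KEY from $input
json_get() {
  printf '%s' "$input" | python3 -c 'import json, sys; print(json.load(sys.stdin)[sys.argv[1]])' "$1"
}

teammate=$(json_get teammate_name)
cwd=$(json_get cwd)
echo "teammate=$teammate cwd=$cwd"
```

This keeps the hook's control flow in bash while avoiding the jq dependency entirely.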
Gotcha 3: Wrong Exit Codes
- Exit code 1 (error) is different from exit code 2 (rejection). Use 2 to block the action with feedback.
- Tip: Stick to 0 (allow) and 2 (block), and document any other exit codes clearly.
Gotcha 4: Hooks Block Everything
- If your TaskCompleted hook runs pytest and pytest hangs, the agent is stuck forever.
- Tip: Add timeouts to hook commands. Example: timeout 60 pytest -v.
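One wrinkle worth showing: GNU coreutils `timeout` exits with code 124 when the limit is hit, which a hook should translate into a block (exit 2) with feedback rather than a generic error. A sketch, using `sleep` as a stand-in for a real command like pytest so it runs anywhere:

```shell
#!/bin/bash
# Sketch: map a timeout (exit 124 from GNU coreutils `timeout`) to a
# block-with-feedback decision. `sleep 5` stands in for `pytest -v`.
status=0
timeout 1 sleep 5 || status=$?

if [ "$status" -eq 124 ]; then
  echo "Hook command timed out; blocking with feedback." >&2
  result="timed-out"   # a real hook would `exit 2` here
elif [ "$status" -ne 0 ]; then
  result="failed"      # a real hook would also `exit 2` here
else
  result="ok"
fi
echo "result=$result"
```

The `|| status=$?` pattern also keeps the snippet safe under `set -e`, which most of the hooks in this video enable.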
Gotcha 5: No Error Visibility
- If a hook fails silently, agents don't know why they're blocked.
- Tip: Always send clear, actionable feedback to stderr. "Run pytest -v --tb=short to see what failed."
Lead-out
Now your agents are held to quality standards. But what about the cost? Four parallel agents using independent context windows can run up your token bill fast. Next video, we're tackling cost management—how to keep multi-agent work affordable and where parallelization actually saves money.
Reference URLs
- Claude Code Hooks Documentation: https://claude.ai/docs/hooks (or current docs URL)
- Flake8 Linter: https://flake8.pycqa.org/
- Pytest Testing Framework: https://docs.pytest.org/
- jq JSON Query Tool: https://stedolan.github.io/jq/
- Bash Exit Codes: https://tldp.org/LDP/abs/html/exit-status.html
Prep Reading
- Bash scripting fundamentals: Exit codes, subcommands, error handling.
- JSON in bash: Using jq or native JSON parsing.
- Testing best practices: What makes a good test suite (fast, reliable, comprehensive).
- CI/CD pipelines: Hooks are essentially local CI/CD gates.
Notes for Daniel
This video should feel like you're building trust into your agent team. The hooks section is technical, so explain the bash/Python carefully. Show the input JSON format clearly so viewers understand what data they have access to.
The demo is critical. Show a hook actually blocking an agent and the agent responding to feedback. That "feedback loop" is what makes hooks powerful—it's not just a gate, it's a teacher.
Emphasize that hooks are YOUR control. You set the standards (linting, tests, coverage, whatever). Agents respect them because they're clear and automated.
If your bash is rusty, show Python versions too. Some viewers might prefer Python for hook scripts.