
18.1 Best Practices for Effective Use

Course: Claude Code - Enterprise Development

Section: Best Practices

Video Length: 4-5 minutes

Presenter: Daniel Treasure


Opening Hook

"You've learned everything: CLI, SDK, MCPs, agents, deployments. Now the meta-question: how do you use all this effectively? We'll synthesize core principles, proven patterns, and anti-patterns. This is where strategy meets execution."


Key Talking Points

What to say:

  • "Effective AI automation isn't about using every feature—it's about using the right tool for the right job."
  • "Core principles: focus, structure, automation, verification, iteration."
  • "Patterns that work: plan-review-execute, read-only-then-write, human approval for high-impact."
  • "Anti-patterns: automating too much, insufficient validation, ignoring error handling."

What to show on screen:

  • Decision trees (when to use CLI vs. SDK vs. MCP)
  • Architecture diagrams (simple vs. complex workflows)
  • Real workflows with success and failure cases
  • Best practice checklist

Demo Plan

[00:00 - 01:00] Core Principles

1. Focused Agents: Narrow role → better decisions. Bad: "do everything." Good: "review security only."
2. Structured Workflows: Plan → Review → Execute. Don't skip the review step.
3. Automate Repetitive: Only automate tasks that are truly repetitive. Keep manual oversight for novel decisions.
4. Verify Always: Read-only before write. Test before production. Approval gates for high-impact changes.

[01:00 - 02:00] Proven Patterns

1. Plan-Review-Execute: Agent creates plan → human reviews → agent executes → human verifies result.
2. Read-Only-Then-Write: Separate tools/phases: agent reads system state, proposes changes, then applies them.
3. Human-in-the-Loop: Permission callbacks, approval workflows, audit trails.
4. Multi-Agent Pipelines: Specialized agents for different tasks; a parent agent coordinates.
5. CI/CD Integration: Agents in deployment pipelines, with gates and approvals.

[02:00 - 03:30] Prompting & Configuration

1. Explicit I/O: Clear input format → expected output format. "Given X, return Y as JSON."
2. Deterministic Language: Avoid vagueness. "Capitalize names" beats "improve presentation."
3. Minimal Context: Give the agent only the information it needs. Avoid token bloat.
4. Specific Tools: Restrict the agent's tools to what it needs. Fewer tools → clearer decisions.
5. Error Cases: Handle them explicitly: "If the API is unreachable, return null. If the file doesn't exist, create a template."

[03:30 - 04:30] Advanced Patterns & Anti-Patterns

1. Advanced: Multi-agent orchestration, cascading approvals, streaming for real-time feedback, caching results.
2. Anti-Patterns: Automating judgment calls, trusting a single agent without verification, ignoring error handling, over-permissioning, no audit trail.

[04:30 - 05:00] Putting It Together

1. Show a realistic workflow combining everything.
2. Example: a code review pipeline (CLI for interactive use, SDK for automation, MCPs for data, permissions for safety).
3. Recap: every video from 16.1 to 17.10 in one example.


Code Examples & Commands

Plan-Review-Execute Pattern:

import asyncio
from claude_code import Agent

async def plan_review_execute():
    planner = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="Create detailed plans. Be specific and realistic."
    )

    executor = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="Execute plans carefully. Verify each step.",
        permission_mode="delegate"  # Requires approval
    )

    # Step 1: Plan
    plan_result = await planner.run(
        "Create a plan to migrate our database from MySQL 5.7 to 8.0. Include: timeline, data backups, validation, rollback."
    )
    print(f"Plan:\n{plan_result.output}\n")

    # Step 2: Review (human reads plan)
    user_approval = input("Does this plan look good? (yes/no): ").strip().lower()
    if user_approval != "yes":
        print("Plan rejected. Stopping.")
        return

    # Step 3: Execute
    execute_result = await executor.run(
        f"Execute this plan:\n{plan_result.output}\n\nReport progress and any issues."
    )
    print(f"Execution Result:\n{execute_result.output}")

    # Step 4: Verify (human checks result)
    print("\nVerify: Check database is running, data is intact, performance is acceptable.")

asyncio.run(plan_review_execute())

Read-Only-Then-Write Pattern:

import asyncio
from claude_code import Agent

async def read_only_then_write():
    # Phase 1: Read-only agent analyzes
    analyzer = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="Analyze code and suggest improvements. Do NOT modify code yet.",
        tools=["Read", "Grep"]  # Read-only tools
    )

    analysis = await analyzer.run(
        "Analyze /src/auth.py and suggest refactoring opportunities"
    )
    print(f"Analysis:\n{analysis.output}\n")

    # Phase 2: Human reviews suggestions
    approve = input("Apply these suggestions? (yes/no): ").strip().lower()
    if approve != "yes":
        return

    # Phase 3: Write-capable agent applies changes
    refactorer = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="Apply the suggested refactoring to code. Preserve functionality.",
        tools=["Read", "Edit", "Bash"],  # Can write
        permission_mode="plan"  # Show plan before executing
    )

    refactoring = await refactorer.run(
        f"Refactor /src/auth.py based on these suggestions:\n{analysis.output}"
    )
    print(f"Refactoring Result:\n{refactoring.output}")

asyncio.run(read_only_then_write())

Prompting Best Practice Example:

import asyncio
from claude_code import Agent

# BAD: Vague
bad_prompt = "Make the code better"

# GOOD: Explicit, structured
good_prompt = """
Analyze the provided Python function for security vulnerabilities.

Input: Python code as a string
Process:
1. Identify potential vulnerabilities (SQL injection, XSS, etc.)
2. For each vulnerability, explain the risk and impact
3. Suggest a specific fix with code example

Output format:
{
  "vulnerabilities": [
    {
      "type": "string",
      "description": "string",
      "risk": "high|medium|low",
      "fix": "code snippet"
    }
  ],
  "overall_risk": "high|medium|low"
}

Code to analyze:
def get_user(user_id):
    return db.query(f"SELECT * FROM users WHERE id = {user_id}")
"""

agent = Agent(model="claude-sonnet-4-5-20250929")
result = asyncio.run(agent.run(good_prompt))  # run() is async, as in the other examples
# Result is structured JSON, easy to parse and act on

Minimal Context Pattern:

from claude_code import Agent

# BAD: Provide entire codebase
bad_agent = Agent(
    model="claude-sonnet-4-5-20250929",
    system_prompt=open("/path/to/entire_codebase.txt").read()  # Huge!
)

# GOOD: Provide only relevant context
relevant_code = """
def authenticate(username, password):
    user = db.query(f"SELECT * FROM users WHERE username = '{username}'")
    if user and verify_password(password, user.password_hash):
        return create_session(user.id)
    return None
"""

good_agent = Agent(
    model="claude-sonnet-4-5-20250929",
    system_prompt=f"You are a security reviewer. Review this code:\n{relevant_code}"
)
# Same result, fewer tokens, faster
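Caching Results Pattern (from the Advanced Patterns segment — same cost-saving logic as minimal context): skip a model call entirely when you've already answered an identical prompt. A minimal generic sketch; `fake_agent` is a hypothetical stand-in for wrapping `agent.run`:

```python
import asyncio
import hashlib

_cache = {}

async def cached_run(agent_fn, prompt):
    # Key the cache by a hash of the prompt; identical prompts
    # are served from the cache instead of triggering a new call.
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = await agent_fn(prompt)
    return _cache[key]

calls = {"n": 0}

async def fake_agent(prompt):
    # Hypothetical stand-in for an expensive agent.run call
    calls["n"] += 1
    return f"analysis of: {prompt}"

async def main():
    first = await cached_run(fake_agent, "Review /src/auth.py")
    second = await cached_run(fake_agent, "Review /src/auth.py")
    return first, second

first, second = asyncio.run(main())
print(calls["n"])  # prints 1: the second request never reached the agent
```

Only cache prompts whose answers don't depend on changing system state; an analysis of a file that was just edited should not be served from cache.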

Error Handling Pattern:

import asyncio
from claude_code import Agent

async def robust_agent():
    agent = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="""
        You are a data processor.

        Error handling rules:
        - If input file not found, create template
        - If API is unreachable, log error and return null
        - If validation fails, return detailed error with suggestions
        - Never leave transactions incomplete
        """
    )

    try:
        result = await agent.run(
            "Process /data/input.csv and generate report"
        )

        if result.status == "error":
            # Handle specific errors
            if "not found" in result.output.lower():
                print("Input file missing—created template.")
            elif "timeout" in result.output.lower():
                print("Service timeout—retrying...")
                # Retry logic
            else:
                print(f"Unknown error: {result.output}")
        else:
            print(f"Success: {result.output}")

    except Exception as e:
        # Unexpected error
        print(f"Unexpected error: {e}")
        # Alert, log, notify

asyncio.run(robust_agent())
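The "# Retry logic" placeholder above can be filled in with a standard exponential-backoff helper. This is a generic sketch, not an SDK feature; `flaky` is a hypothetical stand-in for an agent call that times out twice before succeeding:

```python
import asyncio
import random

async def run_with_retries(fn, *, attempts=3, base_delay=1.0):
    # Retry an async call on timeout, doubling the delay each time
    # with a little jitter to avoid synchronized retries.
    for attempt in range(1, attempts + 1):
        try:
            return await fn()
        except TimeoutError:
            if attempt == attempts:
                raise  # out of attempts: surface the error
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random() * 0.1)
            await asyncio.sleep(delay)

calls = {"n": 0}

async def flaky():
    # Hypothetical stand-in for agent.run: fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("service timeout")
    return "report generated"

result = asyncio.run(run_with_retries(flaky, attempts=3, base_delay=0.01))
print(result)  # report generated
```

Cap the attempt count: an agent retrying a genuinely broken dependency forever is itself an anti-pattern.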

Complete Workflow Example (tying everything together):

import asyncio
from claude_code import Agent, PermissionMode

async def complete_code_review_workflow():
    """
    Realistic workflow: CLI + SDK + MCP + permissions + structure
    """

    # 1. Read-only analysis (analyzer agent)
    analyzer = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="You are a code reviewer. Analyze PRs for security, performance, style.",
        tools=["Read", "Grep"]
    )

    analysis = await analyzer.run(
        "Review PR #456 changes in /src/payment/. Check for SQL injection, error handling, logging."
    )
    print("=== Code Review Analysis ===")
    print(analysis.output)

    # 2. Human review of analysis
    approve_analysis = input("\nContinue with suggested fixes? (yes/no): ").strip().lower()
    if approve_analysis != "yes":
        print("Review stopped by user.")
        return

    # 3. Implementation with permissions
    implementer = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="Implement code fixes suggested in the review. Preserve all functionality.",
        tools=["Read", "Edit", "Bash"],
        permission_mode=PermissionMode.DELEGATE,
        permission_callback=lambda action, resource, details: (
            action == "read" or  # Always allow read
            (action == "edit" and resource.startswith("/src/"))  # Only edit src/
        )
    )

    implementation = await implementer.run(
        f"Based on this review:\n{analysis.output}\n\nImplement the suggested fixes."
    )
    print("\n=== Implementation ===")
    print(implementation.output)

    # 4. Verification
    verifier = Agent(
        model="claude-sonnet-4-5-20250929",
        system_prompt="Verify that fixes were applied correctly and tests pass.",
        tools=["Read", "Bash", "Grep"]
    )

    verification = await verifier.run(
        "Run tests and verify all suggested fixes were applied in /src/payment/."
    )
    print("\n=== Verification ===")
    print(verification.output)

    # 5. Human approval to merge
    approve_merge = input("\nReady to merge? (yes/no): ").strip().lower()
    if approve_merge == "yes":
        print("✅ Approved for merge. (You would run: git merge PR #456)")
    else:
        print("❌ Merge rejected. Requesting additional changes.")

asyncio.run(complete_code_review_workflow())
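Multi-Agent Pipeline Pattern (from the Proven Patterns segment): specialized agents handle different tasks while a parent coordinates, threading each stage's output into the next stage's context. A minimal coordinator sketch with stub stages; each stub is a hypothetical stand-in for a specialized `Agent.run` call:

```python
import asyncio

async def pipeline(task, stages):
    # Parent coordinator: run the task through specialized stages
    # in order, accumulating each stage's output into the context.
    context = task
    results = []
    for name, stage in stages:
        output = await stage(context)
        results.append((name, output))
        context = f"{context}\n[{name}]: {output}"
    return results

async def security_review(ctx):
    # Stub standing in for a security-focused review agent
    return "no SQL injection found"

async def style_review(ctx):
    # Stub standing in for a style-focused review agent
    return "naming is consistent"

results = asyncio.run(pipeline("Review PR #456", [
    ("security", security_review),
    ("style", style_review),
]))
for name, output in results:
    print(f"{name}: {output}")
```

Because each stage is just an async callable, swapping a stub for a real agent (or adding an approval gate between stages) doesn't change the coordinator.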

Gotchas & Tips

Gotcha 1: Feature Creep

  • Tempting to automate everything. Resist.
  • Automate what's truly repetitive; keep judgment calls manual.

Gotcha 2: Permission Escalation

  • Oversight is critical. Too much automation with minimal approval = risk.
  • Plan-review-execute isn't slow; it's safe.

Gotcha 3: Prompt Complexity

  • Longer, more complex prompts ≠ better results.
  • Clear, specific prompts beat long, rambling ones.

Tip 1: Start Small

  • Automate simple, low-risk tasks first.
  • Prove value, then expand.

Tip 2: Measure & Iterate

  • Track success rate, latency, cost, and user satisfaction.
  • Adjust prompts, tools, and workflow based on data.

Tip 3: Test in Staging

  • Always test agent workflows in staging first.
  • Edge cases reveal themselves quickly.

Tip 4: Build Audit Trails

  • Log all agent actions: input, output, decisions, approvals.
  • Critical for debugging, compliance, and trust.

Tip 5: Have Fallbacks

  • If the agent fails, what happens? Manual override? Notification?
  • Design fallbacks explicitly.
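The audit trail from Tip 4 can start as simply as one JSON line per agent action. A hypothetical minimal sketch (the file path and field names are illustrative; production systems would add durable storage and tamper protection):

```python
import json
import time

class AuditLog:
    # Append-only JSONL log: one record per agent action,
    # capturing input, output, and who approved it.
    def __init__(self, path):
        self.path = path

    def record(self, action, prompt, output, approved_by=None):
        entry = {
            "ts": time.time(),
            "action": action,
            "prompt": prompt,
            "output": output,
            "approved_by": approved_by,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

log = AuditLog("/tmp/agent_audit.jsonl")
log.record("code_review", "Review PR #456", "2 issues found", approved_by="daniel")
```

JSONL keeps each record independently parseable, which matters when you're grepping a trail during an incident.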


Lead-out

"You've completed the Claude Code Enterprise Development course. From CLI basics (Module 1) to advanced MCPs and agent orchestration (Module 3). Now it's time to apply this to your organization. Start with a focused use case, build it, measure success, and iterate. Thank you for joining us—let's see what you build with Claude."


Reference URLs

  • Previous sections (16.1 - 17.10): All reference materials apply
  • Software Engineering Best Practices: https://en.wikipedia.org/wiki/Software_engineering
  • Prompt Engineering Guides: https://docs.anthropic.com/
  • Enterprise AI Patterns: [Would reference industry resources]

Prep Reading

  • Review all previous sections (conceptually, not code)
  • Identify your use case: what task would AI automation improve?
  • Design: how would you approach it? Plan-review-execute?

Notes for Daniel

  • Tone: This is the culmination—inspirational but practical. "You now have the tools; here's how to use them wisely."
  • Holistic: This video should reference earlier videos (16.1, 17.5, 17.10, etc.). Tie them together.
  • Meta-discussion: Talk about the decision-making process: when to use SDK vs. CLI, when to add MCPs, when to add permissions.
  • Real success: Share (sanitized) examples of where automation worked and where it failed. Learning from both.
  • Call-to-action: Encourage viewers to start a project, share results, iterate.
  • Course recap: Optional: show brief montage of all 16 videos, reminding viewers of breadth of content.