
2.5 Writing & Running Tests

Course: Claude Code - Essentials
Section: Core Workflows
Video Length: 2-5 minutes
Presenter: Daniel Treasure


Opening Hook

Tests are your safety net. They catch bugs before users do, and they give you confidence to refactor fearlessly. Claude Code can write tests for you. In this video, we'll show how Claude generates tests, how to run them, and how tests help you ship code with confidence.


Key Talking Points

  1. Claude Generates Tests from Code

     - Ask: "Write unit tests for this function"
     - Claude analyzes the code and writes tests covering the main cases
     - Tests are created in the project's test framework (Jest, pytest, etc.)
     - Show on screen: Ask Claude to write tests; see the test file appear

  2. Different Test Types

     - Unit tests: Test individual functions in isolation
     - Integration tests: Test how components work together
     - Edge case tests: Cover boundary conditions and error scenarios
     - Claude can write any of these based on your request
     - Show: Ask for "edge case tests" and see different test cases

  3. Running Tests and Reading Results

     - Tests run with your project's test framework: npm test, pytest, etc.
     - Pass/fail results are clear and easy to interpret
     - Claude helps you understand failures
     - Show: Run tests; show passing and (optionally) a failing test

  4. Improving Test Coverage

     - Ask Claude: "What paths aren't tested?" or "Add tests for error scenarios"
     - Claude identifies gaps and fills them
     - Comprehensive tests prevent regressions
     - Show: Run a coverage report (if available); highlight what's untested

  5. Tests Give You Confidence

     - Tests act as regression detection: they catch when you break something
     - When refactoring (as in 2.2), run tests first to confirm you didn't break anything
     - Tests document expected behavior
     - Show: Make a change that would break a test; show the test catching it

Demo Plan

Demo Project: Continue with the same project from 2.1-2.4

Setup:
  - Project has Jest or pytest already installed (or Claude can add it)
  - At least one function/module worth testing (use something from the earlier videos)
  - Terminal ready to run tests

On-Camera Sequence (2-3 minutes):

  1. Ask Claude to Write Tests

     Prompt: Write unit tests for the getUserName function

     - Claude creates a test file (e.g., getUserName.test.js or test_getUserName.py)
     - Show the test file with multiple test cases
     - Explain: these test different inputs and scenarios
     - ~40 seconds

  2. Run the Tests

     npm test    # or pytest

     - Show tests running
     - All tests pass (or one fails, demonstrating failure output)
     - ~30 seconds

  3. Add Edge Case Tests

     Prompt: Add tests for error scenarios and null inputs

     - Claude adds more test cases covering boundary conditions
     - Show the expanded test file
     - ~30 seconds

  4. Run Tests Again

     npm test

     - Show all tests passing
     - Brief coverage report (if available)
     - ~20 seconds

  5. (Optional) Break and Fix

     - Make a small deliberate change to the code that breaks a test
     - Run tests again and show the failure message
     - Ask Claude to fix it
     - Run tests one more time to confirm all pass
     - ~30 seconds (only if time permits)

Code Examples & Commands

Asking Claude to write tests:

Write unit tests for the calculateTotal function
Add tests for edge cases and error scenarios
Generate integration tests for the API endpoints
Write tests for error handling
Create a test file for the authentication module

Running tests:

npm test                    # JavaScript/Node
pytest                      # Python
python -m pytest            # Python (explicit)
./gradlew test              # Java/Gradle

Reading test output:
  - PASS/FAIL for each test
  - Number of tests passed/failed
  - Coverage percentage (if enabled)
  - Detailed error messages for failures

Example test structure (JavaScript with Jest):

describe('calculateTotal', () => {
  test('sums positive numbers', () => {
    expect(calculateTotal([1, 2, 3])).toBe(6);
  });

  test('handles empty array', () => {
    expect(calculateTotal([])).toBe(0);
  });

  test('throws error on null input', () => {
    expect(() => calculateTotal(null)).toThrow();
  });
});
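For reference, the tests above assume a calculateTotal roughly like the following (a minimal sketch under assumed behavior; any implementation honoring the same contract would pass):

```javascript
// Minimal sketch of the function the Jest tests exercise (an assumption;
// the demo project's real calculateTotal may differ).
function calculateTotal(items) {
  if (!Array.isArray(items)) {
    throw new TypeError('calculateTotal expects an array of numbers');
  }
  // An empty array reduces to the initial value 0, which is what the
  // "handles empty array" test expects.
  return items.reduce((sum, n) => sum + n, 0);
}

module.exports = { calculateTotal };
```

The null check is what makes the "throws error on null input" test pass; without it, reduce would throw a less descriptive error.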

Gotchas & Tips

  1. Tests Require Setup – Make sure your project has a test framework installed (Jest, pytest, Mocha, etc.). If not, Claude can help add one.

  2. Test Names Matter – Claude writes descriptive test names like "should return 0 for empty array." These help you understand what's being tested.

  3. Red-Green-Refactor Cycle – Best practice: write tests first (they fail), then write code to make them pass, then refactor. Claude can assist with all three steps.

  4. Coverage Doesn't Mean Completeness – 100% code coverage doesn't mean the code is bug-free. Well-written tests focus on likely failure modes.

  5. Run Tests Before Committing – In real workflows, tests run automatically before merge (CI/CD). In these videos, you run them manually.

  6. Mocking and Dependencies – For advanced testing (mocking external APIs, databases), Claude can help set up. For this intro, focus on simple unit tests.


Lead-out

Tests verify that your code works. But how do other developers know what your code does? That's documentation. In the next video, we'll show how Claude helps you write clear, comprehensive documentation that keeps your project understandable.


Reference Material

  • Common Workflows - Writing Tests: https://code.claude.com/docs/en/common-workflows
  • Quickstart - Run Tests: https://code.claude.com/docs/en/quickstart
  • Best Practices Guide: https://code.claude.com/docs/en/best-practices

Relevant Articles & Posts

  • Testing Patterns: Joe Njenga's "17 Best Claude Code Workflows" (Medium)
  • Production Testing: Lukasz Fryc's "15 Tips from Running 6 Projects" (DEV Community)
  • Official Resource: Common Workflows - Claude Code Docs (https://code.claude.com/docs/en/common-workflows)

Additional Notes

  • Use a real, meaningful function for testing (not add(a,b) unless it's your actual demo project)
  • Show passing tests first; then (optionally) a failure to demonstrate how Claude helps fix it
  • Emphasize: Claude doesn't replace thinking about test cases—it accelerates writing them
  • Note: comprehensive testing is part of "well-configured projects" that ship features 5-10x faster (from recent articles prep)
  • Mention that in production, tests often run automatically in CI/CD pipelines (covered in advanced sections if applicable)