13.3 Agent Skills: Predefined Capabilities
Course: Claude Code - Enterprise Development
Section: Extending Claude Code
Video Length: 3-4 minutes
Presenter: Daniel Treasure
Opening Hook
"You've got specialized agents and automated guardrails. Now, what if you could give agents—and Claude—predefined capabilities they can invoke without expanding their prompt? That's Agent Skills. Think of them as macros for structured work. Let me show you what they do and how to build them."
Key Talking Points
1. What Are Agent Skills?
What to say: "Agent Skills are predefined capabilities that agents invoke automatically when appropriate. They're not tools—they don't show up in the tool list. They're higher-level actions: 'validate this JSON schema,' 'query this API,' 'run this security check.' The model decides when to invoke them based on the skill's description. Once invoked, the skill runs to completion without the model reasoning about each step."
What to show on screen:
- Terminal: an agent processes a request and automatically invokes a skill (e.g., "ValidateSchema")
- Skill output: structured JSON result passed back to the agent
- Highlight: no explicit tool call, pure delegation
2. When to Use Skills vs. Tools
What to say: "Tools are low-level—Read a file, run Bash. Skills are high-level. Use a skill when you want the agent to invoke a complete procedure: 'validate the config against schema,' 'fetch and cache user data,' 'generate a test suite.' Tools are building blocks. Skills are assembled workflows that agents shouldn't need to think about step-by-step."
What to show on screen:
- Side-by-side: tool use (Bash, Read, Write) vs. skill invocation
- Example: instead of the agent manually running curl + jq to query an API, invoke the "FetchUserData" skill and get a structured result back
- Emphasize: skills hide complexity, tools expose it
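To make the tools-vs-skills contrast concrete, here is a hypothetical Python sketch. The function names and the canned response are invented for illustration; the point is that the skill bundles the fetch-parse-return steps into one named capability the agent calls as a unit.

```python
import json

# Stands in for the raw body a curl call would return.
RAW_RESPONSE = '{"id": 7, "name": "Ada"}'

def fetch_user_tool_style() -> dict:
    """Tool-level: the agent reasons through each step itself."""
    raw = RAW_RESPONSE        # step 1: "curl" the API
    data = json.loads(raw)    # step 2: "jq" parse the payload
    return data               # step 3: hand the result back

def fetch_user_data() -> dict:
    """Skill-level: one complete procedure, invoked in a single step."""
    return json.loads(RAW_RESPONSE)

# Same result either way; the skill just hides the intermediate steps.
assert fetch_user_tool_style() == fetch_user_data() == {"id": 7, "name": "Ada"}
```

The behavior is identical; what changes is how much step-by-step reasoning the agent has to do to get there.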
3. SKILL.md Structure and Frontmatter
What to say: "Skills live in .claude/skills/. Each skill is a SKILL.md file with YAML frontmatter at the top and the skill's instructions below. The frontmatter, especially the description, tells the model when to invoke the skill; the body below the closing --- is what the skill actually does."
What to show on screen:
- Open a SKILL.md file
- Walk frontmatter: name, description (critical—this tells the model when to use it), argument-hint (helps the model know what inputs you expect), allowed-tools, user-invocable (boolean), model (inherit/sonnet/haiku), context (fork to Explore/Plan), agent (optional override)
- Show content below ---: the skill logic (can be text, code, or instructions)
4. Dynamic Context and String Substitution
What to say: "Skills support powerful variable substitution. $ARGUMENTS gives you the full argument string. $ARGUMENTS[N] or $N gives you the Nth argument. You can embed shell commands with backticks—git log --oneline | wc -l executes before Claude sees it, and the result replaces the backticks. Perfect for injecting real-time data into the skill logic."
What to show on screen:
- Show SKILL.md example with $ARGUMENTS, $ARGUMENTS[0], $ARGUMENTS[1]
- Show example with backtick command: `grep -c TODO src/` → outputs count of TODOs dynamically
- Show CLAUDE_SESSION_ID variable—useful for logging
- Emphasize: these expand before the agent sees the skill, making skills adaptive
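Conceptually, the expansion works like the sketch below. This is an illustrative model of the substitution rules just described, not Claude Code's actual implementation; the function name and exact behavior are assumptions.

```python
import re
import shlex
import subprocess

def expand_skill(template: str, arguments: str) -> str:
    """Sketch: expand $ARGUMENTS[N], $ARGUMENTS, and backtick commands
    in a skill template before the model sees it."""
    args = shlex.split(arguments)
    # $ARGUMENTS[N] -> the Nth argument (empty string if out of range)
    text = re.sub(
        r"\$ARGUMENTS\[(\d+)\]",
        lambda m: args[int(m.group(1))] if int(m.group(1)) < len(args) else "",
        template,
    )
    # $ARGUMENTS -> the full, unsplit argument string
    text = text.replace("$ARGUMENTS", arguments)
    # `command` -> the command's stdout, captured once at expansion time
    text = re.sub(
        r"`([^`]+)`",
        lambda m: subprocess.run(
            m.group(1), shell=True, capture_output=True, text=True
        ).stdout.strip(),
        text,
    )
    return text

print(expand_skill("Repo has `echo 3` TODOs; first arg: $ARGUMENTS[0]",
                   "config.json schema.json"))
# → Repo has 3 TODOs; first arg: config.json
```

Note the ordering: indexed arguments are substituted before the bare $ARGUMENTS string, and embedded commands run exactly once, so their output is a snapshot.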
5. Skill Invocation: Model-Driven and User-Driven
What to say: "By default, skills are model-driven—Claude decides to invoke them based on the description and context. If you set user-invocable: true, humans can also invoke them directly via a command. Some skills are private (only for agents), some are public (you can call them anytime)."
What to show on screen:
- Show non-invocable skill: used only when agent decides
- Show user-invocable skill: agent can call it, and you can too
- Demo: /skills <skill-name> <arguments> command
- Show output returned to user or agent
Demo Plan
- (0:30) Open the .claude/skills directory; show built-in and custom skills
- (1:00) Open a simple skill's SKILL.md: ValidateJSON. Walk the frontmatter and content
- (1:30) Create a new skill interactively: /skills → Create → "FetchRecentCommits" → allowed tools: Bash, Read
- (2:00) Show SKILL.md with a dynamic backtick command: `git log --oneline -n 5`
- (2:30) Trigger an agent task that automatically invokes the skill; show the result
- (3:00) Manually invoke the skill with arguments: /skills FetchRecentCommits --max 10
- (3:30) Discuss use cases: API wrappers, structured checks, artifact generation
Code Examples & Commands
SKILL.md minimal example
---
name: ValidateJSON
description: "Validates JSON content against schema. Takes JSON file path and schema path as arguments."
argument-hint: "<json-file-path> <schema-file-path>"
allowed-tools: [Bash, Read]
user-invocable: true
model: haiku
---
You are a JSON validator. Given a JSON file and schema:
1. Read both files
2. Use jsonschema to validate
3. Report errors or success
Be concise. Return structured output: {valid: true/false, errors: [list if invalid]}.
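For reference, the check this skill describes looks roughly like the stdlib-only sketch below. A real run would use the jsonschema package as the skill's instructions say; this mini-validator is an assumption-laden stand-in that handles only "required" keys and string-type checks, just to show the shape of the structured output.

```python
import json

def validate(document: str, schema: str) -> dict:
    """Toy validator: checks "required" keys and string-typed properties,
    returning the {valid, errors} structure the skill asks for."""
    doc, sch = json.loads(document), json.loads(schema)
    errors = []
    for key in sch.get("required", []):
        if key not in doc:
            errors.append(f"missing required key: {key}")
    for key, spec in sch.get("properties", {}).items():
        if key in doc and spec.get("type") == "string" and not isinstance(doc[key], str):
            errors.append(f"{key}: expected string")
    return {"valid": not errors, "errors": errors}

result = validate(
    '{"name": 1}',
    '{"required": ["name", "id"], "properties": {"name": {"type": "string"}}}',
)
# result → {"valid": False, "errors": ["missing required key: id", "name: expected string"]}
```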
SKILL.md with dynamic context
---
name: FetchRecentCommits
description: "Returns recent commits in the current repository."
argument-hint: "--max N"
allowed-tools: [Bash]
user-invocable: true
context: fork
agent: Explore
---
Latest 5 commits in this repo:
`git log --oneline -n 5`
Summarize their purpose and impact. Focus on what changed, not the details.
SKILL.md with ultrathink
---
name: ArchitectReview
description: "Reviews code architecture and suggests improvements. Uses extended thinking."
argument-hint: "<directory-path>"
allowed-tools: [Bash, Read, Grep]
user-invocable: false
---
ultrathink
Given the codebase in the directory $ARGUMENTS[0]:
1. Identify architectural patterns
2. Find potential bottlenecks or design issues
3. Suggest refactoring with priority levels
4. Estimate impact of each suggestion
Focus on scalability and maintainability. Be specific with examples.
SKILL.md with multiple arguments
---
name: GenerateTestSuite
description: "Generates unit tests for a given source file."
argument-hint: "<source-file> [--coverage-target N]"
allowed-tools: [Write, Read, Bash]
user-invocable: true
model: sonnet
---
Generate a comprehensive test suite for $ARGUMENTS[0].
Target coverage: the value passed with --coverage-target, defaulting to 85% if none is given.
Create tests that cover:
- Happy paths
- Edge cases
- Error conditions
- Integration points
Output: pytest-compatible Python file.
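The kind of file this skill should emit might look like the sketch below. The slugify function and its cases are invented stand-ins for the real source under test; the tests use plain asserts, which pytest collects and runs.

```python
def slugify(text: str) -> str:
    """Stand-in for the code under test."""
    return "-".join(text.lower().split())

# Happy path: typical input produces the expected slug.
def test_happy_path():
    assert slugify("Hello World") == "hello-world"

# Edge case: empty input yields an empty slug.
def test_edge_case_empty():
    assert slugify("") == ""

# Error condition: non-string input raises.
def test_error_condition():
    try:
        slugify(None)
        assert False, "expected an AttributeError for None input"
    except AttributeError:
        pass
```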
Invoke a skill from CLI
/skills <skill-name> <arguments>
Example:
/skills FetchRecentCommits --max 10
/skills ValidateJSON config.json schema.json
Gotchas & Tips
Description is how the model knows when to invoke the skill. Vague descriptions ("helper skill") are ignored. Specific descriptions ("generates unit tests for Python files, returns pytest-compatible code") cause the model to invoke it correctly and often.
$ARGUMENTS vs. $ARGUMENTS[N] matters. $ARGUMENTS gives the whole string. $ARGUMENTS[0] is the first argument. $ARGUMENTS[1] is the second. If your skill expects two arguments and you only receive one, the skill should handle it gracefully or error clearly.
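The graceful handling this tip recommends might look like the following sketch; the function name and return shape are invented for illustration.

```python
import shlex

def parse_two_args(arguments: str) -> dict:
    """Split the raw $ARGUMENTS string and fail clearly if an
    expected argument is missing, rather than crashing mid-skill."""
    args = shlex.split(arguments)
    if len(args) < 2:
        return {"ok": False, "error": f"expected 2 arguments, got {len(args)}"}
    return {"ok": True, "json_path": args[0], "schema_path": args[1]}
```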
Backtick commands run once, at the moment the skill's content is expanded, not continuously during the task. If you embed `git log`, it captures the commits at the instant the skill is loaded, and that snapshot is what the model sees from then on. Good for point-in-time data; bad if the repo changes mid-task. Prefer having the agent run Bash directly if you need live queries.
Extended thinking ("ultrathink") works in skills. If your skill is complex (architecture review, security audit), add the "ultrathink" keyword to enable Claude's extended thinking. It's slower but higher quality.
Skills can't invoke other skills directly. Skills are single-purpose. If you need skill composition, break it down and let the model coordinate.
user-invocable: true makes the skill public. If it queries private data or credentials, set it to false. Let agents use it; prevent human misuse.
Lead-out
"Skills are how you encapsulate complexity. Instead of agents reasoning through a multi-step validation or API query, they invoke a skill and get a clean result. Next video, we're scaling this up: Plugins bundle multiple skills, agents, hooks, and tools into reusable packages. That's how you share functionality across teams and projects."
Reference URLs
- Claude Code Agent Skills documentation
- SKILL.md schema and variables reference
- Dynamic context and substitution rules
- Extended thinking (ultrathink) in skills
Prep Reading
- Skill use cases and patterns
- Argument handling best practices
- Performance considerations for skills
- Testing skills locally
Notes for Daniel: Pre-create 2-3 working SKILL.md files. ValidateJSON is good for a simple example. FetchRecentCommits with backticks is great for showing dynamic context. If you have time, create one with ultrathink enabled—it's a nice peek at Claude's reasoning capabilities. Emphasize the "user-invocable" flag because people often want to use skills manually after agents have proven them useful. Show the /skills command output clearly so viewers see how to call skills themselves.