Claude Code Custom Skills: Automating Repeatable Dev Tasks

by Alien Brain Trust AI Learning

TL;DR: Claude Code lets you define custom slash commands called skills — markdown files that give Claude specific instructions for a repeatable task. We’ve built a library of them at ABT Labs that handles blog writing, security testing, Linear issue updates, and more. Here’s how we structure them and when they’re worth building.

Every engineering team has tasks that are almost identical every time but still require judgment. Write a blog post in our voice. Test this prompt against jailbreak attacks. Update the Linear issue with a status comment. These aren’t things you want to fully automate with a script — you want a smart assistant that knows the context — but you also don’t want to re-explain the context every session.

Claude Code skills solve this. We’ve been building them for months and the ROI is real.

What a Skill Actually Is

A skill is a markdown file at .claude/skills/<skill-name>/SKILL.md in your repo. When you type /<skill-name> in a Claude Code session, Claude reads that file and follows its instructions.

The key insight: a skill isn’t code, it’s context. It’s the institutional knowledge about how to do a specific task — the voice guidelines, the file paths, the quality checklist, the edge cases — captured once and available in every session.

.claude/
└── skills/
    ├── blog-post/
    │   └── SKILL.md        ← /blog-post command
    ├── sp-test/
    │   └── SKILL.md        ← /sp-test command
    └── update-linear-issue/
        └── SKILL.md        ← /update-linear-issue command

Any SKILL.md in that structure becomes a slash command named after its parent folder. No config, no registration — Claude Code picks them up automatically.
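For concreteness, here's what a minimal skill file might look like. This is a hypothetical /changelog-entry skill, not one of our actual files — the command name, paths, and steps are illustrative:

```markdown
# /changelog-entry

When the user invokes /changelog-entry with a description of a change,
add a new entry to the project changelog.

## Steps

1. Read CHANGELOG.md and find the "Unreleased" section.
2. Summarize the user's description in one line, imperative mood.
3. Append the line under the matching heading (Added, Fixed, Changed).
4. Show the diff to the user before saving.

## Checklist

- Entry is one line, under 100 characters
- Entry starts with a verb
- Entry doesn't duplicate an existing one
```

That's the whole thing: plain markdown, imperative instructions, a checklist at the end.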

The Skills We’ve Built at ABT Labs

/blog-post — The one we use most. It knows the blog’s directory structure, the frontmatter format, our brand voice guidelines, SEO requirements, and the publishing workflow. When I type /blog-post we shipped a new content dedup system today, it reads the existing posts to find the next available date, writes the draft following our voice, saves it to the right location, and starts the dev server for preview. What used to take 30-45 minutes of context-setting takes about 3 minutes.

/sp-test — Tests a prompt against a battery of jailbreak attack patterns. The skill defines the attack taxonomy, the scoring rubric, and the output format. It’s part of our Secure Prompt Builder course tooling and runs consistently across every session because the evaluation criteria live in the skill file, not in someone’s head.

/sp-scan — Scans code for exposed API keys and secrets. Not a replacement for automated secret scanning in CI, but useful for spot-checking before a commit when you want Claude to read the context around a potential leak, not just flag a pattern.

/security-review — Reviews the current branch’s changes for security issues before a PR. Pulls in context about our stack (AWS SSM for secrets, OIDC for CI credentials, no hardcoded keys) so it’s not giving generic advice.

/update-linear-issue — Posts a status comment to a Linear issue. Useful when Claude finishes a task and I want it to close the loop on the ticket without me switching contexts.
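As a sketch of the smallest useful skill, here's roughly what an update-linear-issue SKILL.md could contain. This is reconstructed from the description above, not our actual file, and the step wording is hypothetical:

```markdown
# /update-linear-issue

When the user invokes /update-linear-issue after finishing a task,
post a status comment to the relevant Linear issue.

## Steps

1. Identify the issue ID from the user's message or the branch name.
2. Summarize what was just done in two or three sentences.
3. Post the summary as a comment on the issue.
4. Confirm to the user which issue was updated.
```

Even a skill this small earns its keep, because it encodes the one judgment call — what a good status comment looks like — that you'd otherwise re-explain every session.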

How We Structure a Good Skill

After building about a dozen of these, here's the pattern that works:

1. State the trigger clearly. When does this skill activate? What problem does it solve in one sentence? This helps Claude understand the intent even when the user’s invocation is vague.

2. Give it the source of truth. Instead of embedding all the voice guidelines in the skill file, point to the authoritative files. Our /blog-post skill points to agents/_shared/VOICE.md and agents/content-writer/SEO.md. When those files update, all skills that reference them automatically get the new rules.

3. Write the steps as instructions, not documentation. Skills work best when they’re imperative: “Read the directory. Find the most recent date. Add one day.” Not: “The skill works by reading the directory to determine dates.”

4. Include a quality checklist. Every skill we’ve built ends with a checklist Claude runs before outputting. For the blog skill it’s SEO checks. For sp-test it’s coverage checks. This catches the cases where Claude drifts from the spec mid-task.

5. Show one or two examples. Not because Claude needs them to understand, but because examples make edge cases explicit. “If the user types a URL, fetch it and use it as source context.”
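Putting the five points together, a skill file following this structure looks roughly like the skeleton below. The section names and comments are illustrative, not a required format; the only real path in it is the agents/_shared/VOICE.md reference mentioned above:

```markdown
# /<skill-name>

<!-- 1. Trigger: when this activates, in one sentence -->
When the user invokes /<skill-name> to <task>, follow these steps.

<!-- 2. Source of truth: point to authoritative files, don't inline them -->
Read agents/_shared/VOICE.md before writing anything.

<!-- 3. Steps: imperative instructions, not documentation -->
1. Read the target directory.
2. ...

<!-- 4. Quality checklist: run before outputting -->
## Before you finish
- [ ] Output matches the required format
- [ ] No step above was skipped

<!-- 5. Examples: make edge cases explicit -->
## Example
If the user provides a URL, fetch it and use it as source context.
```

The comments are there for the humans maintaining the file; Claude follows the instructions either way.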

When NOT to Build a Skill

Skills add overhead — someone has to maintain the SKILL.md as requirements change. Not every task is worth it.

Build a skill when:

  • You’re doing the same task more than once a week
  • The task requires consistent output format or voice
  • The task has a quality checklist that matters
  • You’re training others to do the task through Claude

Don’t build a skill when:

  • It’s a one-time task
  • The task is simple enough to describe in a single sentence
  • The “skill” would just be “do what I say” with no real constraints

The test: if you deleted the skill file, would Claude produce noticeably worse results on this task? If yes, the skill is earning its keep.

Key Takeaways

Claude Code skills are the closest thing to a team playbook that actually gets used. The reason most team playbooks collect dust is that no one reads them at the moment they’re needed. A skill file gets read automatically, every time, at exactly the right moment.

We’ve built about a dozen at ABT Labs. The ones we use daily have saved hours per week. The ones we built speculatively sit unused. Start with the task you do most often and build from there.

Related: How We Use Claude Code as a Product Manager

Tags: #claude-code #automation #workflows #ai-tools #building-in-public
