Claude Code Slash Commands That Save Real Hours
Claude Code Slash Commands That Save Real Hours
TL;DR: Claude Code’s slash command system lets you define repeatable workflows once and execute them in seconds. The commands that save the most time aren’t the obvious ones — they’re the ones that replace the four-step mental context switches you do twenty times a day.
I’ve been heads-down in Claude Code for months now. What started as curiosity about AI-assisted development turned into a full workflow overhaul. The custom slash commands were not the first thing I reached for — they were the thing I built after I noticed how much time I was spending re-explaining context for tasks I do constantly.
This post is about three specific commands I use daily, what they replaced, and the exact mechanics of how they work. No theory. No “here’s what you could build.” Here’s what I built and what it actually costs me versus what I was doing before.
What Claude Code Slash Commands Actually Are
If you haven’t used them, slash commands in Claude Code are stored prompt templates that you invoke with a / prefix. They live as Markdown files in .claude/commands/ at the project level (scoped to the repo) or in ~/.claude/commands/ for commands you want available everywhere.
When you invoke /your-command, Claude reads the contents of that file and executes against your current context — the open files, the repo state, whatever you’ve told it to focus on.
That’s the mechanism. The power is in what you put in those files and when you reach for them.
The productivity gains from slash commands aren’t about speed-of-typing. They’re about eliminating the tax of re-establishing context. Every time I sit down to do a PR review, a security check, or a changelog entry, I’m starting from scratch unless I’ve codified what “good” looks like. The slash command is where I store that definition.
The Three Slash Commands I Use Every Day
1. /security-review — The One That Earns Its Keep
This is the command I reach for before every push. It’s also the one that takes the most upfront investment to build correctly — and the one that pays back the most.
What it replaced: A mental checklist I kept in my head. Secrets in code, overly permissive IAM policies, hardcoded endpoints, missing input validation, logging that captures too much. In 25 years of security work I’ve internalized a lot of these patterns, but I’m also human. I forget. I’m moving fast. I ship a file I meant to sanitize.
What the command does: It tells Claude to scan the staged or recently modified files for a specific list of security concerns I care about — organized by severity. It doesn’t replace a real SAST tool. What it does is catch the embarrassing stuff before the pipeline does, and do it in 15 seconds instead of the 3–5 minutes it takes me to manually walk the diff.
Here’s the core of the command file:
Review the files I'm about to commit for the following security issues.
Organize findings by severity: CRITICAL, HIGH, MEDIUM, LOW.
Check for:
- Hardcoded credentials, tokens, API keys, or secrets
- Overly broad IAM permissions or wildcard resource ARNs
- Logging statements that may capture PII, tokens, or sensitive payloads
- Missing input sanitization on any value entering a data store or external call
- Unvalidated redirects or open SSRF vectors
- Dependencies being pulled without pinned versions
For each finding, show the file, line number, and a one-line explanation.
If nothing is found, say so explicitly. Do not fabricate findings.
That last line matters. I’ve had AI tools hallucinate security issues. The explicit instruction to not fabricate findings reduces that significantly.
Time saved: Rough estimate — 10 to 15 minutes per commit cycle that involves security-sensitive files. Across a week of active building, that’s real.
2. /pr-description — The One I Hated Writing
Pull request descriptions are one of those tasks where the overhead is disproportionate to the complexity. You know what changed. Writing it down in a way that’s useful to reviewers (or to future-you reading git blame at 11pm) takes longer than it should.
What it replaced: Staring at the diff and writing a description from scratch. Or — more honestly — writing a lazy one-liner that helps nobody.
What the command does: It reads the diff, the affected files, and any related Linear or GitHub issue context I’ve dropped into the conversation, then drafts a PR description in a consistent format.
The format I use:
Generate a pull request description using this structure:
## What Changed
[2-3 sentences: what was modified and why]
## Files Affected
[bulleted list of key files and what changed in each]
## Security Considerations
[any security implications — even if the answer is "none identified"]
## Testing Notes
[what was tested, how, and what wasn't covered]
Use the staged diff and any issue context in our conversation.
Be specific. Do not use filler phrases like "various improvements."
The security considerations section is non-negotiable for me. Even if the PR is a CSS tweak, I want the habit of asking the question. It’s also useful when a compliance auditor asks why a particular change was made and what risk it carried.
Time saved: 5–8 minutes per PR. I open PRs multiple times a day. This one is a quiet compounding win.
3. /weekly-summary — The One That Keeps the Build Log Honest
I build in public. That means a weekly post about what actually happened — not a polished highlight reel, but a real account of what shipped, what broke, and what I learned. Writing that from memory is a mess. Writing it from Claude Code’s session history and the commit log is fast.
What it replaced: Reconstructing the week from memory on Friday afternoon. Half the context is gone. The honest details — the thing that broke on Wednesday, the decision I reversed — get softened or forgotten.
What the command does: It prompts Claude to pull the week’s commits, any tracked issues, and session notes I’ve tagged with a specific keyword, then draft a structured summary in the ABT building-in-public format.
Draft a weekly build summary using this week's commit history,
any Linear issues closed or moved to In Progress, and any notes
I've tagged with [WEEKLY] in our conversation.
Format:
## What Shipped
## What Broke (and how it got fixed)
## What I Learned
## What's Next
Voice: first person, direct, no hype. Honest about failures.
This goes on the blog. It should read like a field report, not a newsletter.
The [WEEKLY] tag is a convention I use during the week. When something notable happens — a decision, a failure, a lesson — I’ll say it out loud in the Claude session and drop [WEEKLY] at the end. By Friday, the command has material to work with.
Time saved: 30–45 minutes of reconstruction work, every week. This is the single highest-value command in my workflow.
How to Build a Slash Command That Actually Works
The commands that fail are the ones that are too generic. “Review my code” is not a slash command — it’s a prompt. A slash command is a defined process with:
- A specific scope — what files, what context, what moment in the workflow
- A defined output format — exactly what you want back and how it should be structured
- Explicit constraints — what not to do (don’t fabricate, don’t pad, don’t summarize what I already know)
- A voice or tone instruction if the output goes anywhere public
Start with the task you do most often that involves re-explaining the same context. That’s your first command.
A Note on Security for the Commands Themselves
Slash command files are stored in your repo. That means they’re in version control, potentially visible to collaborators, and subject to your repo’s access controls.
Two things to watch:
Don’t put sensitive context in the command file itself. If your security review command references specific internal system names, network topology, or proprietary tool names that shouldn’t be public — those details should come from the session context, not be hardcoded in the command file.
Be aware of prompt injection risk in automated pipelines. If you’re running slash commands in a CI context where external input (issue titles, PR descriptions from forks) feeds into the command — that’s an injection surface. Keep automated command execution scoped to trusted inputs.
These aren’t reasons to avoid slash commands. They’re reasons to build them the same way you’d build any other automation: with an eye on what goes where.
Key Takeaways
- Claude Code slash commands live in
.claude/commands/as Markdown files — simple to create, zero infrastructure required - The highest-value commands replace tasks where you’re re-establishing context repeatedly, not just typing
/security-reviewcatches the pre-commit mistakes that make it past human eyes/pr-descriptionenforces a consistent format that includes security considerations by default/weekly-summarymakes building in public sustainable without losing the honest details- Command files are in version control — treat them accordingly, don’t embed sensitive context directly
- Start with your most frequent context-switch task and work backward to the command structure
The slash command system isn’t magic. It’s a way to operationalize the judgment you’ve already developed and apply it consistently without burning cycles re-explaining it every time. That’s the only kind of automation worth building.
Comments