AI Prompts for Every Development Workflow Step (With Copy/Paste Examples)

Alex Garrett-Smith
Effective prompting is harder than it seems. You’re no longer expressing logical steps directly in code; instead, you’re turning intent into written instructions, which takes practice to get right.
In this post, I’m handing over some easy-to-copy-and-paste example prompts that I use in every step of my AI development workflow. I’ve condensed this down to the most important parts so you get the most value.
Planning
This is the most important stage of any AI workflow, because you’re defining exactly what needs to be built up-front. If you invest time in this step, implementation is more likely to follow easily.
Project Planning Prompt
If you’re unsure where to start, or you only have a vague idea of what you want to build, start inside an AI chat interface like Claude and use this prompt to kick the process off.
I need help creating a product specification document for a software project. Please ask me questions to understand what I want to build, then create a comprehensive spec following this structure:
# Product Specification: [Product Name]
## Overview
Brief description of what the product does and its core purpose.
## Problem Statement
What problem does this solve? What pain points does it address?
## Goals
3-5 bullet points of what this product aims to achieve.
## User Roles
Define each type of user and what they can do:
- Role name
- List of capabilities/permissions
## Technical Considerations
Key technical requirements or constraints:
- Access/authentication requirements
- Privacy/security considerations
- Scalability needs
- Any other technical constraints
## Technical Stack
- Backend framework/language
- Frontend framework/technology
- Database (if applicable)
- Any other key technologies
The spec should define WHAT needs to be built, not HOW to build it (that comes in feature implementation documents later).
Start by asking me questions about what I want to build. Focus on:
1. What problem am I solving?
2. Who will use this?
3. What are the core features/capabilities needed?
4. Are there any specific technical requirements or constraints?
5. What's the rough tech stack preference?
Once you understand my needs, create the specification document.
Once you’ve answered the questions, your agent will generally produce a clear spec file that you can continue to iterate on. The goal here isn’t to define every feature and step just yet, but to produce a working document that you can refine and later use to generate feature tasks (which we’ll cover soon).
Creating a Specification From Existing Data
If you already have documents, notes, graphics, meeting transcripts, etc., the process becomes a lot easier. We can take all of this data and use a similar prompt to generate a specification document.
I need help creating a product specification document for a software project. I have some background materials that describe what I want to build.
Please review the attached files/documents/images/transcripts and then create a comprehensive spec following this structure:
# Product Specification: [Product Name]
## Overview
Brief description of what the product does and its core purpose.
## Problem Statement
What problem does this solve? What pain points does it address?
## Goals
3-5 bullet points of what this product aims to achieve.
## User Roles
Define each type of user and what they can do:
- Role name
- List of capabilities/permissions
## Technical Considerations
Key technical requirements or constraints:
- Access/authentication requirements
- Privacy/security considerations
- Scalability needs
- Any other technical constraints
## Technical Stack
- Backend framework/language
- Frontend framework/technology
- Database (if applicable)
- Any other key technologies
Instructions:
1. First, review all the attached materials and summarize what you understand about the project
2. Ask clarifying questions about anything that's unclear or missing
3. Once I confirm your understanding, create the specification document
The spec should define WHAT needs to be built, not HOW to build it (that comes in feature implementation documents later).
Notice we still instruct it to ask clarifying questions in case anything isn’t clear, or there’s any conflicting information in the data we’ve fed in.
Feature Planning Prompt
Whether you’ve generated a specification with AI or have one to hand already, we now want to extract and define the features (or single feature) in more detail.
Here’s what I’d prompt to kick off this process:
Based on the product specification we've created, please generate a list of features in priority order for implementation.
For each feature, provide:
### [Number]. [Feature Name]
- Brief description (1-2 sentences) of what this feature does
- Key capabilities included in this feature
- Dependencies (if any features must be built before this one)
The list should:
- Be ordered logically for implementation (dependencies first)
- Match the core features from the spec
- Be concise and actionable
- Indicate which features build on each other
Format the output as a simple numbered list that can be used as a roadmap for breaking features down into implementation steps later.
Example format:
### 1. User Authentication
- Set up login system for team members to access the admin area
- Includes: login page, logout, session management
- Dependencies: None
### 2. Feature Name
- Description
- Includes: key points
- Dependencies: Feature 1
At this point, you’re relying on your specification being clear enough. If this prompt doesn’t give you what you expect, go back and refine the specification first.
Once you’ve extracted a list of features, you can continue to refine by adding and removing features, breaking features up into smaller ones, and adjusting assumptions or technical details.
Feature Step Implementation Prompt
This is the golden prompt that allows me to move incredibly fast once I’ve generated and tweaked each feature step to perfection. If you get this step right, you can implement a feature much faster and with fewer issues.
The goal here is to take a feature, and break it into small, testable, readable and reviewable steps. This is exactly what successful AI implementation of features looks like.
Based on the feature we want to implement, please generate a step-by-step implementation guide.
The document should be named: feature-[feature-name]-steps.md
Structure:
# Feature Implementation Steps: [Feature Name]
## Overview
Brief description of what this feature does and its purpose within the overall product.
---
For each implementation step:
## Step [N]: [Clear, Descriptive Step Name]
### What to Build
2-3 sentences explaining what needs to be implemented in this specific step.
### Claude Code Prompt
[Detailed, ready-to-use prompt for Claude Code]
---
Requirements:
- Break the feature into small, focused steps (each completable in one sitting)
- Each step should have a single, clear purpose
- Steps should build on each other logically
- Include all necessary details in the Claude Code prompt (no placeholders)
- Reference the technical stack from the spec
- Each prompt should be copy-paste ready for Claude Code
Before generating, ask me which feature from the spec should be broken down, or proceed if I confirm.
This prompt gives you a feature broken down into steps with Claude Code prompts for each step. All you need to do is copy and paste each prompt, verify each step (yes, review what it has written, even briefly) and move on to the next.
Of course, you don’t need to follow this exact structure or implementation method. I prefer to move a little slower with feature implementation (while still saving huge amounts of time) so I can check each step as I go.
Follow-up Prompts
It’s unlikely you’ll be able to one-shot specifications, features, or anything, really. The follow-up prompts I use during the planning phase are generally simple:
“We’re not using x, we’re using y.”
“We won’t build our own x, there’s a package [package name] for that.”
“Can you remove x, y and z?”
“Can we swap the order of x and y?”
You get the idea. The planning phase should be more like a conversation than a series of prompts.
Testing
Whether you’re an advocate for TDD or not, writing tests first is a great way to validate the behaviour of the code your AI agent produces against each feature implementation step.
We don’t need to write these tests ourselves, though. Using our feature specifications, we can prompt to generate these, making sure they’re clear, and that they cover everything.
Test Generation Prompt
Remember the feature steps you prompted for in the last section? You can use those to generate tests quickly, then implement the feature, then see if the tests pass. And you can do this very quickly.
The test generation prompt here isn’t the magic; what we did in the planning step was.
Generate tests for this feature:
[feature]
Honestly, that’s it.
If your feature is well defined, you should get a pretty solid set of tests created. Review the tests manually, then paste in the prompt for the feature implementation. Tests passed? You’re done.
Implementing Features
How you prompt to implement features determines the output you get. Like the planning phase, taking the time to be clear and specific up front is more likely to save you time in the future, as your agent is more likely to implement exactly what you need first time.
If you’ve already followed the planning and testing step of this post (which I highly recommend you do for every product and feature), then you’ll already have copyable prompts for your AI agent. However, feature implementation doesn’t end there!
Intent Checking
Having AI check its own work might seem counterintuitive, but it’s a useful process. This prompt (which I add as a slash command) allows me to take an implementation step from our planning phase and verify that it was implemented correctly.
If there was any scope creep or your agent changed something else as a side effect, this is where you’ll get feedback.
Verify the staged changes against the current task.
Steps:
1. Read `.ai/task.md`.
   - If the file is empty or missing, tell the user to run /intent first.
2. Run this command and capture the output:
   git diff --cached --no-color
3. Compare the REQUEST from `.ai/task.md` with the DIFF.

Important:
- Focus only on changes that appear to implement the task.
- Ignore unrelated or incidental changes such as:
  - lock files
  - formatting-only changes
  - config or dependency updates
  - IDE/editor files
- Do not fail the task because of unrelated files.
- Only judge whether the requested feature is implemented correctly.
Output exactly:
Verdict: PASS | PARTIAL | FAIL
Met requirements
- bullet points with evidence from the diff
Missing requirements
- bullet points
Unintended changes / scope creep
- bullet points
Risks & edge cases
- only if within scope of the intent
- max 5 bullets
Rules:
- Only use evidence from the diff.
- Be strict.
- Only mark PASS if everything in the request is clearly implemented.
I have two slash commands, /intent and /intent-verify (the prompt above). Here’s what /intent looks like:
Set or update the current task.
Steps:
1. Ask the user to paste acceptance criteria every time. No questions.
2. Write the result into `.ai/task.md`.
3. Confirm the task is saved.
4. Implement it
So this part of our workflow becomes:
1. Run /intent to specify the intent from our feature planning phase
2. Run /intent-verify to have AI review itself
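The mechanics these two commands rely on are simple enough to sketch in plain shell. The `.ai/task.md` path comes from the prompts above; the file names in this sketch are hypothetical stand-ins:

```shell
# Sketch of the state /intent and /intent-verify share (navbar.html is a
# hypothetical example file; .ai/task.md is the path from the prompts above).
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q

# /intent: save the pasted acceptance criteria as the current task.
mkdir -p .ai
printf 'Add a logout link to the navbar\n' > .ai/task.md

# An implementation lands, staged but not yet committed.
printf '<a href="/logout">Log out</a>\n' > navbar.html
git add navbar.html

# /intent-verify: the staged diff is the only evidence the agent may judge.
git diff --cached --no-color
```

Because verification only looks at `git diff --cached`, anything you haven’t staged is invisible to it, which is worth remembering before you run the command.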
This isn’t a replacement for code review. It’s really just a quick feedback loop to make sure your agent is staying on track. For a final code review, I’d recommend a service like CodeRabbit or a human review.
The great thing about this flow is you’re able to switch and ask other models or even other agents to review the implementation.
Commit Message Generation Prompt
Here’s a simple prompt that I use to generate a commit message. I add this as a /commit slash command.
---
allowed-tools: Bash(git add:*), Bash(git status:*), Bash(git commit:*)
description: Create a git commit
---
## Context
- Current git status: !`git status`
- Current git diff (staged and unstaged changes): !`git diff HEAD`
- Current branch: !`git branch --show-current`
- Recent commits: !`git log --oneline -10`
## Your task
1. Analyze the diff content to understand the nature and purpose of the changes
2. Generate 3 commit message candidates based on the changes
- Each candidate should be concise, clear, and capture the essence of the changes
- Do not use formats like feat:, fix:, docs:, refactor:, etc.
3. Select the most appropriate commit message from the 3 candidates and explain the reasoning for your choice
4. Stage changes if necessary using git add
5. Execute git commit using the selected commit message
## Constraints
- DO NOT add Claude co-authorship footer to commits
I know what you’re probably thinking… all this to generate a commit message?
It might seem over the top, but I’ve tweaked this prompt over time and now find it generates really good commit messages. Notice how we’re asking our agent to come up with three possible candidates before asking it to choose.
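If you’re wondering where this prompt lives: in Claude Code, project-level slash commands are markdown files inside `.claude/commands/`, and the filename becomes the command name. A minimal sketch of registering `/commit` (the body here is abbreviated; paste in the full prompt above):

```shell
# Register a /commit slash command for Claude Code. Project commands are
# discovered from .claude/commands/<name>.md (prompt body abbreviated here).
mkdir -p .claude/commands
cat > .claude/commands/commit.md <<'EOF'
---
allowed-tools: Bash(git add:*), Bash(git status:*), Bash(git commit:*)
description: Create a git commit
---
Generate 3 commit message candidates, pick the best one, then stage and commit.
EOF
```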
CI and Deploying
CI and deploying can cover a lot, but let’s keep it simple here. What do we need to know when we push up to source control? What about when a new feature is released?
Here are just a couple of things I do aside from running tests.
Documentation Generation Prompt
If you’d like to document your codebase and use a service like GitBook to display it, here’s a prompt to do it automatically. I create this as a sub-agent in Claude Code.
---
name: docs-writer
description: Generate and update GitBook-friendly documentation in /docs from diffs or full repo scan.
tools:
- Bash
- Read
- Write
- Edit
---
You are a documentation subagent.
Goal:
Maintain docs in the /docs directory.
Rules:
- Only create or edit files inside /docs.
- Never modify application source code.
- Prefer staged diff. If staged diff is empty, use unstaged diff.
Behavior:
1) If /docs does not exist OR has no feature docs (/docs/features/*.md):
   - Perform an initial documentation pass.
   - Scan the repository and create:
     - docs/README.md
     - docs/architecture.md
     - docs/SUMMARY.md
     - docs/features/<feature>.md pages for major features you can identify
2) If a diff exists:
   - Update ONLY docs related to the diff.
   - Do not rewrite unrelated docs.
3) If no diff exists:
   - Refresh only high-level docs: docs/README.md and docs/architecture.md.
Docs structure:
- docs/README.md (entry page)
- docs/architecture.md (system overview)
- docs/features/<feature>.md (feature pages)
- docs/SUMMARY.md (navigation)
Feature page template:
# <Feature Name>
## What it does
Short, user-focused description.
## How it works
High-level flow of the feature.
## Configuration
Any required environment variables / config.
## Background work
Queues, workers, schedulers.
Navigation rules:
- Ensure docs/SUMMARY.md exists.
- Add new pages to the summary.
- Never remove entries unless the feature was removed from the codebase.
Process (do this every time):
1) Determine doc mode:
   - initial (no docs)
   - update-from-diff (diff exists)
   - refresh-high-level (no diff)
2) Gather inputs:
   - Run:
     - git diff --staged
     - if empty, git diff
   - If initial mode, list key directories and files to understand the app.
3) Write/update docs files accordingly.
4) Print a short report listing the files changed.
Commands you may run:
- git diff --staged
- git diff
- git status --porcelain
- ls, find, rg (ripgrep) to locate relevant code and config
- rm
Tweak this depending on your own priorities. This will also need to evolve as your codebase gets larger and/or more complex. As it is here though, I find this prompt pretty good at documenting the critical points of each feature without going into too much detail.
The prompt to use this sub-agent (which again, I add as a slash command) is:
Use the docs-writer subagent to generate/update documentation.
Hand off to subagent:
- Run docs-writer.
- Do not do doc work yourself.
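Subagents are registered much like slash commands, just in a different directory: Claude Code reads project subagents from `.claude/agents/`. A sketch of saving the definition above (body abbreviated; use the full prompt from earlier):

```shell
# Save the docs-writer subagent definition where Claude Code can find it:
# project subagents live in .claude/agents/<name>.md (body abbreviated here).
mkdir -p .claude/agents
cat > .claude/agents/docs-writer.md <<'EOF'
---
name: docs-writer
description: Generate and update GitBook-friendly documentation in /docs.
tools:
- Bash
- Read
- Write
- Edit
---
You are a documentation subagent. Maintain docs in the /docs directory only.
EOF
```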
Release Notes Prompt
Here’s a fun prompt I created to generate public release notes inside GitHub releases. Once you have this hooked up to a GitHub action (I’ll show you soon), you get a nice summary of changes to share with your team or customers.
Here’s the prompt:
You are a product writer generating user-facing release notes.
Return ONLY markdown.
Format:
## What's New in ${TAG}
### ✨ New Features
- ...
### 🔧 Improvements
- ...
### 🐛 Bug Fixes
- ...
Rules:
- Write for end users, not developers. No jargon.
- Translate technical changes into user-visible outcomes.
- Skip internal changes unless clearly user-visible.
- Omit empty sections.
- Do NOT invent or infer features that are not supported by the data below.
- Use the file change status list to understand what was added vs removed vs modified. In particular, if a file is marked D, it was deleted and must not be described as newly added.
- CRITICAL: Never invert meaning. If a change line indicates removal/deletion/disable/revert, you must NOT describe it as added/new/improved.
- CRITICAL: Only mention features that exist AFTER this release. If a feature was removed in this release (or earlier), do not present it as available or newly added. If it is unclear, omit it.
- If you cannot confidently map a change line to a user-visible outcome, omit it rather than guessing.
- If there are no user-facing changes, output exactly:
## What's New in ${TAG}
This release contains internal improvements and stability fixes.
Changes included in this release:
${CHANGES}
File changes included in this release (name-status):
${FILES}
The string interpolated values here can come from wherever you fetch them from. In my case, here’s the full GitHub Action YAML file if you want to steal it:
name: Release Notes

on:
  release:
    types: [published]

permissions:
  contents: write

jobs:
  release-notes:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Build changes list
        shell: bash
        run: |
          set -euo pipefail
          TAG="${{ github.event.release.tag_name }}"
          PREV="$(git describe --tags --abbrev=0 "${TAG}^" 2>/dev/null || true)"
          if [ -n "$PREV" ]; then
            git log "${PREV}...${TAG}" --pretty=format:'- %s' --no-merges > changes.txt
            git diff --name-status "${PREV}...${TAG}" > files.txt
          else
            git log "${TAG}" --pretty=format:'- %s' --no-merges > changes.txt
            git show --name-status --pretty="" "${TAG}" > files.txt
          fi

      - name: Generate release notes (Claude API)
        shell: bash
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          set -euo pipefail
          TAG="${{ github.event.release.tag_name }}"
          CHANGES="$(cat changes.txt)"
          FILES="$(cat files.txt)"

          PROMPT="You are a product writer generating user-facing release notes.
          Return ONLY markdown.
          Format:
          ## What's New in ${TAG}
          ### ✨ New Features
          - ...
          ### 🔧 Improvements
          - ...
          ### 🐛 Bug Fixes
          - ...
          Rules:
          - Write for end users, not developers. No jargon.
          - Translate technical changes into user-visible outcomes.
          - Skip internal changes unless clearly user-visible.
          - Omit empty sections.
          - Do NOT invent or infer features that are not supported by the data below.
          - Use the file change status list to understand what was added vs removed vs modified. In particular, if a file is marked D, it was deleted and must not be described as newly added.
          - CRITICAL: Never invert meaning. If a change line indicates removal/deletion/disable/revert, you must NOT describe it as added/new/improved.
          - CRITICAL: Only mention features that exist AFTER this release. If a feature was removed in this release (or earlier), do not present it as available or newly added. If it is unclear, omit it.
          - If you cannot confidently map a change line to a user-visible outcome, omit it rather than guessing.
          - If there are no user-facing changes, output exactly:
          ## What's New in ${TAG}
          This release contains internal improvements and stability fixes.
          Changes included in this release:
          ${CHANGES}
          File changes included in this release (name-status):
          ${FILES}
          "

          # JSON-escape the entire prompt (handles quotes + ALL control chars safely)
          PROMPT_JSON=$(printf '%s' "$PROMPT" | jq -Rs .)

          cat > request.json <<EOF
          {
            "model": "claude-sonnet-4-5",
            "max_tokens": 1000,
            "messages": [
              {
                "role": "user",
                "content": [
                  {
                    "type": "text",
                    "text": ${PROMPT_JSON}
                  }
                ]
              }
            ]
          }
          EOF

          RESP=$(curl -sS https://api.anthropic.com/v1/messages \
            -H "content-type: application/json" \
            -H "x-api-key: ${ANTHROPIC_API_KEY}" \
            -H "anthropic-version: 2023-06-01" \
            -d @request.json)

          # If API returned an error, show it clearly
          if echo "$RESP" | jq -e '.error' >/dev/null 2>&1; then
            echo "Claude API error:"
            echo "$RESP" | jq -r '.error.message // .'
            exit 1
          fi

          # Normal path
          echo "$RESP" | jq -r '.content[] | select(.type=="text") | .text' > release-notes.md

      - name: Update GitHub release body
        shell: bash
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -euo pipefail
          gh release edit "${{ github.event.release.tag_name }}" --notes-file release-notes.md
Now, whenever you create a release, it’ll generate release notes of what’s changed since the last one.
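One detail in that workflow worth understanding on its own is the `jq -Rs` escaping step: `-R` reads raw text rather than JSON, `-s` slurps all input into a single string, and the `.` filter prints it back as one JSON-escaped string literal, so quotes and newlines in the prompt can’t corrupt `request.json`:

```shell
# jq -Rs turns arbitrary text into one safely escaped JSON string literal.
TEXT='Release notes with "quotes"
and a second line'
printf '%s' "$TEXT" | jq -Rs .
# Prints: "Release notes with \"quotes\"\nand a second line"
```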
Refactoring
If you spot something ripe for refactoring, it’s best not to jump in straight away with a “Refactor this so it’s simpler” prompt. Let’s step back, maybe generate some more test coverage and then let our agent attempt a refactor.
First, Existing Feature Tests Generation
Before a refactor, it’s a great idea to have tests written. If you’re approaching a feature that doesn’t have tests, here’s my tried and tested prompt for analysing the feature, suggesting tests, and then implementing them.
Analyze the [feature] feature. Map out how it works, what depends on it, and what depends on those things. Then write comprehensive tests using [testing framework] that cover the happy path, edge cases, error handling, and any related functionality that could break if this feature changes. Use descriptive test names and mock external dependencies. Confirm the tests you're going to implement before you do.
I’m assuming you already have a test structure in the project you’re running this in; if not, you may need to feed this prompt your testing framework details (or add them to your AGENTS.md).
Once you run this prompt and have some passing tests, it’s time to refactor.
Refactoring Prompt
With tests in place, you’re in a good position. Here’s the prompt I reach for to refactor a feature, and most importantly, verify everything is still working:
Analyze the [feature] feature. Find every caller, dependency, and integration point. Then refactor it to [improve readability/extract into a service class/remove duplication/whatever]. Make changes incrementally, running existing tests after each step.
You’ll notice this isn’t an overly detailed prompt. When I’m using Claude Code specifically, it doesn’t need much more. The most important part is the [improve readability/extract into a service class/remove duplication/whatever] placeholder: that’s where you want to be specific about your goals for the refactor.
Keep the refactor as small as possible. When you replace [feature] in the prompt, that could refer to something as small as a function or class.
Why Did You Have to Refactor?
If AI is writing our code, why did we have to refactor in the first place? This is a great question to ask, because it allows us to review the assumptions our agent is making about our preferences and codebase.
If your agent implements something in a way you don’t agree with, add an instruction to your AGENTS.md file for the future. That way, it’s unlikely to make the same mistake again.
And, we have a prompt for this!
[refactor prompt]
Can you update CLAUDE.md so this doesn't happen again?
Just swap out CLAUDE.md if you’re using another agent.
Once you’ve run this after a successful refactor, check your agent file to see what’s been added, and make sure the assumption is correct and that it hasn’t included too much detail.
The “Any questions?” Modifier
Whether I’m in planning mode or implementation mode, the “Any questions?” modifier is gold.
With almost any prompt, adding this question at the end forces your agent to “think” a little harder before it takes action, and follow up with any clarifying questions. I find this modifier particularly helpful during the planning phase, and it saves a huge amount of time by clarifying details before it goes ahead and modifies my project specification or feature details.
My Top AGENTS.md Prompts
Aside from the usual code standards and preferences I’d add to my AGENTS.md file, I also include these two prompts in every project. They’re a great starting point.
Development Philosophy Prompt
Adding a development philosophy to your agent file is a great way to set the tone of anything that comes next. It’s not a silver bullet, but I notice a slight shift in the way my agent interacts with me and writes code when this is present.
# Development philosophy
- Prefer simple solutions over clever ones.
- Write code that is clear and self-explanatory.
- Build with the long term in mind.
These are my three core ones, but feel free to add your own.
Don’t Always Agree with Me
Another interesting prompt addition to your agent file is asking your agent not to just agree with you.
Don’t just agree with me or accept my conclusions. Push back if you think I'm wrong.
Let’s face it, we are wrong sometimes. And sometimes your agent will just follow your direction. Adding this safeguard sets the expectation that you’d like more of a back-and-forth relationship.
Prompting Is Hard
In conclusion, prompting is hard. Like everything, with practice, you’ll get better at it. Take my prompt suggestions, create your own, experiment with them, tweak them, and store them away for your next project or feature.
Happy prompting!