DevOps · 9 min

How to automate code review with Claude Code — a practical setup

Set up Claude Code to review every pull request automatically: catch security issues, enforce style, spot performance problems. Includes a working GitHub Actions workflow.

Manual code review is one of the highest-value activities in software development and one of the most inconsistently applied. Reviewers miss things when tired, standards vary between reviewers, and security patterns require specialist knowledge most teams don't have on call. Automated Claude Code review doesn't replace human review — it raises the floor so humans can focus on the things that matter.

What this setup does
--------------------

- Runs on every pull request (or push to main)
- Posts a structured review comment on the PR
- Flags security issues, TypeScript type gaps, missing error handling, performance concerns, and style deviations from your CLAUDE.md
- Blocks merge on high-severity security issues (optional)
- Runs in ~45 seconds per PR

What you need
-------------

- A GitHub repository
- An Anthropic API key (set as a GitHub Actions secret: `ANTHROPIC_API_KEY`)
- Claude Code installed
- A CLAUDE.md file in your repo root (with your standards)

Step 1 — Write your CLAUDE.md
-----------------------------

The review is only as good as your standards. If you haven't written a CLAUDE.md yet, create one now. Keep it under 200 lines — the entire file is read on every review.

```markdown
# Code Review Standards

## TypeScript
- No `any` types. Use `unknown` for truly unknown types, then narrow.
- All public functions must have explicit return types.
- Prefer discriminated unions over optional chains for state.

## Security
- Never log auth tokens, passwords, or user PII.
- All user input to SQL queries must use parameterized queries.
- API routes must validate the session before reading or writing data.

## Error handling
- Async functions must have try/catch or .catch() — no unhandled rejections.
- Error messages shown to users must not expose stack traces or internals.

## Performance
- No synchronous file I/O in request handlers.
- DB queries in loops should be refactored to single queries.
```
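To make one of the standards above concrete: here is a minimal sketch of the "discriminated unions over optional chains" rule. The `FetchState` type and field names are hypothetical, invented for illustration:

```typescript
// Instead of a bag of optional fields whose combinations go unchecked:
type LooseState = { loading?: boolean; data?: string; error?: Error };

// ...model each state explicitly, discriminated by a literal `status` field:
type FetchState =
  | { status: "loading" }
  | { status: "success"; data: string }
  | { status: "error"; error: Error };

function render(state: FetchState): string {
  // The compiler narrows `state` in each branch, so `state.data` is only
  // reachable when it is guaranteed to exist.
  switch (state.status) {
    case "loading":
      return "Spinner";
    case "success":
      return state.data;
    case "error":
      return state.error.message;
  }
}
```

A rule phrased this concretely gives the reviewer an unambiguous pattern to match against, which is exactly what an automated reviewer needs.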
Step 2 — Create the review script
---------------------------------

Create a file at scripts/review-pr.sh and make it executable (`chmod +x scripts/review-pr.sh`):

```bash
#!/bin/bash
# Usage: ./scripts/review-pr.sh "base-branch" "head-branch"
set -euo pipefail

BASE=$1
HEAD=$2

DIFF=$(git diff "$BASE"..."$HEAD" -- '*.ts' '*.tsx' '*.js' '*.jsx')

if [ -z "$DIFF" ]; then
  echo "No JavaScript/TypeScript changes to review."
  exit 0
fi

claude -p --dangerously-skip-permissions "
You are a senior engineer doing a code review.
Read the CLAUDE.md in this repo to understand our standards.
Then review this diff:

$DIFF

Format your review as:

## Summary
One paragraph on what this PR does.

## Issues (numbered)
For each issue: severity (critical/major/minor), location (file:line),
description, and suggested fix.

## Approved with comments / Changes requested
Final verdict on one line.

Focus on: security vulnerabilities, TypeScript type safety, missing error
handling, and deviations from CLAUDE.md standards. Ignore minor formatting.
"
```

The `-p` flag runs Claude Code in non-interactive print mode: it writes the review to stdout and exits, instead of opening an interactive session — which is what CI needs.

Step 3 — Create the GitHub Actions workflow
-------------------------------------------

Create .github/workflows/claude-review.yml:

```yaml
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code

      - name: Run review
        id: review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          REVIEW=$(./scripts/review-pr.sh origin/${{ github.base_ref }} HEAD)
          echo "review<<EOF" >> $GITHUB_OUTPUT
          echo "$REVIEW" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Post review comment
        uses: actions/github-script@v7
        env:
          REVIEW: ${{ steps.review.outputs.review }}
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '## 🤖 Claude Code Review\n\n' + process.env.REVIEW
            })
```

Note that the comment body reads the review from an environment variable rather than interpolating `${{ steps.review.outputs.review }}` directly into the JavaScript. Direct interpolation breaks the string literal (and opens a script-injection hole) whenever the review contains quotes or backticks.

Step 4 — Block on critical issues (optional)
--------------------------------------------

If
you want to block merges on critical security issues, add this step after the review:

```yaml
      - name: Check for critical issues
        run: |
          if echo "${{ steps.review.outputs.review }}" | grep -iE "severity.*critical|critical.*severity"; then
            echo "Critical security issues found. Blocking merge."
            exit 1
          fi
```

(Plain `grep` treats `|` literally; `-E` enables the alternation.) This is a simple string match. For stricter enforcement, ask Claude Code to output a structured JSON block alongside the prose review that your CI can parse programmatically.

Step 5 — Tune the review prompt
-------------------------------

After a week of reviews, you'll have a sense of what the agent catches well and what it misses. Tune the prompt in review-pr.sh. Common additions:

> also check: are there any new dependencies added in package.json? if so, flag them for manual security review.

> check if any database queries are missing indexes on the queried columns (based on the table schema in supabase/migrations/).

> if the PR adds new API routes, verify they all have authentication checks consistent with the patterns in middleware.ts.

The more project-specific your review instructions, the higher the signal-to-noise ratio.

What human reviewers should still do
------------------------------------

Automated review handles style consistency, type safety, common security patterns, missing error handling, and obvious performance issues. Human reviewers should focus on product correctness (does this do the right thing?), architecture decisions (is this the right approach?), business-logic edge cases that require domain knowledge, and anything the automated review explicitly flags as needing human eyes.

The pattern: the automated review posts first, human reviewers read its summary, then do their own pass. The automated review shortens the human pass because it has already handled the checklist items.

Find more DevOps skills, including CI automation, at claudeskil.com/category/devops.
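If you do go the structured-JSON route from Step 4, the parsing side is small. Here is a sketch in TypeScript that could run inside the existing actions/github-script step; the fenced-JSON convention and the `issues` schema are assumptions you would spell out in the review prompt — they are not a built-in Claude Code output format:

```typescript
interface Issue {
  severity: "critical" | "major" | "minor";
  file: string;
  description: string;
}

// Extract a fenced JSON block (opened with a "json"-tagged triple-backtick
// fence) from the review text and count issues at "critical" severity.
// Returns 0 when no such block is found.
function countCriticals(review: string): number {
  const fence = "`".repeat(3); // built dynamically to avoid a literal fence here
  const re = new RegExp(fence + "json\\n([\\s\\S]*?)\\n" + fence);
  const match = review.match(re);
  if (!match) return 0;
  const verdict = JSON.parse(match[1]) as { issues: Issue[] };
  return verdict.issues.filter((i) => i.severity === "critical").length;
}
```

Inside github-script you could then call `core.setFailed(...)` when `countCriticals(review) > 0`, which fails the job the same way `exit 1` does in the grep version, but without false positives from the word "critical" appearing in prose.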