Module 1: Weekly Performance Review Agent
Build a repeatable system for consistent weekly performance reviews
What This Agent Does
Every week, the agent will:
- Read fresh performance data
- Compare it against the previous period
- Detect meaningful changes
- Explain why those changes likely happened
- Output clear actions and risks
This replaces ad-hoc reviews and gut-feel decisions.
Prerequisites (Do Not Skip)
You must already have:
- Claude Code installed and working
- A project folder open in VS Code or Cursor
- This folder structure: `/inputs`, `/analysis`, `/experiments`, `/outputs`, `/workflows`
Step 1: Define the Review Standard (Once)
Create: `workflows/weekly_review_standard.md`
Paste and customize:
Review Objective
Detect performance changes that materially affect growth or profitability.
Time Window
- Current period: Last 7 days
- Comparison period: Previous 7 days
Core Metrics (do not add more than 7)
- Spend
- Impressions
- CTR
- CPC
- CVR
- CPA
- Revenue / ROAS
Platforms Covered
- Meta Ads
- Google Ads
- Landing page analytics
What Counts as Meaningful Change
- ±15% or more week-over-week
- Sustained across at least 3 days
What to Ignore
- Single-day spikes
- Low-volume campaigns
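The two thresholds above can be expressed as a single check. A minimal sketch in Python; the function name and day-level inputs are illustrative, not part of any file in this module:

```python
# Sketch: decide whether a week-over-week change is "meaningful"
# per the standard above (±15% or more, sustained across >= 3 days).
# is_meaningful_change is a hypothetical helper, not part of the workflow files.

def is_meaningful_change(daily_current, daily_previous, threshold=0.15, min_days=3):
    """daily_current / daily_previous: one value per day (7 days each)."""
    sustained = 0
    for cur, prev in zip(daily_current, daily_previous):
        if prev == 0:
            continue  # no baseline for that day: low-volume noise, ignore
        change = (cur - prev) / prev
        if abs(change) >= threshold:
            sustained += 1
    return sustained >= min_days

# A single-day spike does not qualify:
print(is_meaningful_change([100, 100, 100, 180, 100, 100, 100], [100] * 7))  # False
# A drop sustained across four days does:
print(is_meaningful_change([80, 82, 79, 85, 100, 100, 100], [100] * 7))  # True
```

This is exactly the filter that separates signal from the single-day spikes the standard tells the agent to ignore.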
This file is the constitution of the agent.
Step 2: Prepare Weekly Inputs (Repeat Weekly)
Every week, place the following in `/inputs`:
- performance_week_current.csv
- performance_week_previous.csv
Exports should be raw. Do NOT clean or summarize manually.
Step 3: Create the Analysis Workflow
Create: `workflows/weekly_review_workflow.md`
Paste this structure:
Inputs Required
- performance_week_current.csv
- performance_week_previous.csv
- weekly_review_standard.md
Analysis Steps
- Validate data completeness
- Compare metrics week-over-week
- Flag meaningful deviations
- Map deviations to funnel stages
- Identify plausible causes (no speculation)
Reasoning Rules
- Prefer simplest explanation first
- Do not blame creative unless CTR changes
- Do not blame traffic unless CPC changes
Output Format
- Executive summary (5 bullets max)
- Wins
- Losses
- Risks
- Recommended actions (ranked)
This file defines how thinking happens.
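The compare-and-flag steps above can be sketched in Python. A minimal illustration, assuming each metric column in the raw exports can simply be summed; the column names and helper functions are hypothetical, not part of the workflow file:

```python
import csv

THRESHOLD = 0.15  # "meaningful change" cutoff from weekly_review_standard.md

def load_totals(path, metrics):
    """Sum each metric column across the rows of a raw export.
    Column names are assumptions; match them to your real headers."""
    totals = {m: 0.0 for m in metrics}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for m in metrics:
                totals[m] += float(row.get(m) or 0)
    return totals

def flag_deviations(current, previous, threshold=THRESHOLD):
    """Return metrics whose week-over-week change meets the threshold."""
    flags = {}
    for metric, prev in previous.items():
        if prev == 0:
            continue  # no baseline: treat as low-volume noise, per the standard
        change = (current[metric] - prev) / prev
        if abs(change) >= threshold:
            flags[metric] = round(change, 3)
    return flags

# In a real run you would load both Step 2 files, e.g.:
#   cur = load_totals("inputs/performance_week_current.csv", ["spend", "revenue"])
# Illustrated here with inline totals:
print(flag_deviations({"spend": 1220.0, "conversions": 41.0},
                      {"spend": 1000.0, "conversions": 50.0}))
# → {'spend': 0.22, 'conversions': -0.18}
```

Claude performs this reasoning from the workflow file itself; the sketch only makes the arithmetic behind "flag meaningful deviations" concrete.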
Step 4: Run the Agent (Weekly)
In the IDE terminal, run:
```bash
claude "Execute workflows/weekly_review_workflow.md using inputs for this week."
```
Claude will:
- Follow your rules
- Ignore irrelevant noise
- Produce a structured review
Step 5: Save the Output
Save Claude's output as:
`outputs/weekly_review_YYYY_MM_DD.md`
Never overwrite old reviews.
History is part of intelligence.
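The naming-and-never-overwrite rule can be enforced in a few lines. A sketch, assuming you have Claude's output as a string; `save_review` is a hypothetical helper, not part of the workflow:

```python
from datetime import date
from pathlib import Path

def save_review(text, outdir="outputs"):
    """Write the review as outputs/weekly_review_YYYY_MM_DD.md,
    refusing to overwrite an existing file (history is part of intelligence)."""
    path = Path(outdir) / f"weekly_review_{date.today():%Y_%m_%d}.md"
    if path.exists():
        raise FileExistsError(f"{path} exists; past reviews are never overwritten")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)
    return path
```

Running it twice on the same day raises instead of silently replacing the earlier review.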
Step 6: Human Override (Mandatory)
Read the output and:
- Remove actions you disagree with
- Add context Claude cannot know
- Mark decisions you accept
Claude proposes. You decide.
Step 7: Iterate the Agent (Monthly)
Once per month:
- Review last 4 outputs
- Ask: What was consistently useful? What was consistently wrong?
Update:
- weekly_review_standard.md
- weekly_review_workflow.md
Version them instead of overwriting.
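Versioning instead of overwriting can be as simple as copying to the next free suffix before editing. A sketch; the helper name and `_v2` convention are assumptions, not part of the module:

```python
import shutil
from pathlib import Path

def version_file(path):
    """Snapshot e.g. workflows/weekly_review_standard.md as
    weekly_review_standard_v2.md (then _v3, ...) before editing the original."""
    p = Path(path)
    n = 2
    while (snapshot := p.with_name(f"{p.stem}_v{n}{p.suffix}")).exists():
        n += 1
    shutil.copy2(p, snapshot)
    return snapshot
```

Each monthly iteration then leaves an auditable trail of how the agent's rules evolved.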
Common Failure Modes
Agent feels generic
Cause: Standards too vague
Fix: Tighten thresholds and rules
Agent suggests obvious things
Cause: Inputs lack funnel context
Fix: Include a funnel failure map
Agent misses real problems
Cause: Metrics list too shallow
Fix: Add one diagnostic metric
Why This Works
- Same logic every week
- No emotional bias
- No recency bias
- Decisions are auditable
This is how teams scale judgment.