Module 1: Weekly Performance Review Agent

Build a repeatable system for weekly performance reviews

What This Agent Does

Every week, the agent will:

  • Read fresh performance data
  • Compare it against the previous period
  • Detect meaningful changes
  • Explain why those changes likely happened
  • Output clear actions and risks

This replaces ad-hoc reviews and gut-feel decisions.

Prerequisites (Do Not Skip)

You must already have:

  • Claude Code installed and working
  • A project folder open in VS Code or Cursor
  • This folder structure (one command to create it below): /inputs, /analysis, /experiments, /outputs, /workflows
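If any of these folders are missing, one command from the project root creates them all:

```bash
# Create the folder structure the agent expects (run once from the project root)
mkdir -p inputs analysis experiments outputs workflows
```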

Step 1: Define the Review Standard (Once)

Create: `workflows/weekly_review_standard.md`

Paste and customize:

Review Objective

Detect performance changes that materially affect growth or profitability.

Time Window

  • Current period: Last 7 days
  • Comparison period: Previous 7 days

Core Metrics (do not add more than 7)

  • Spend
  • Impressions
  • CTR
  • CPC
  • CVR
  • CPA
  • Revenue / ROAS

Platforms Covered

  • Meta Ads
  • Google Ads
  • Landing page analytics

What Counts as Meaningful Change

  • ±15% or more week-over-week
  • Sustained across at least 3 days

What to Ignore

  • Single-day spikes
  • Low-volume campaigns

This file is the constitution of the agent.
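To make the threshold concrete, here is a minimal shell sketch of the ±15% week-over-week rule (the `is_meaningful` helper is hypothetical; the sustained-across-3-days check needs daily rows and is left out):

```bash
# Hypothetical helper: apply the standard's +/-15% week-over-week threshold.
# Usage: is_meaningful <previous_value> <current_value>
is_meaningful() {
  awk -v prev="$1" -v curr="$2" 'BEGIN {
    if (prev == 0) { print "undefined (previous value is zero)"; exit }
    delta = (curr - prev) / prev * 100
    verdict = (delta >= 15 || delta <= -15) ? "meaningful" : "ignore"
    printf "%+.1f%% -> %s\n", delta, verdict
  }'
}

is_meaningful 1200 1500   # +25.0% -> meaningful
is_meaningful 1200 1250   # +4.2% -> ignore
```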

Step 2: Prepare Weekly Inputs (Repeat Weekly)

Every week, place the following in `/inputs`:

  • performance_week_current.csv
  • performance_week_previous.csv

Exports should be raw. Do NOT clean or summarize manually.
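A quick pre-flight check catches the most common weekly mistake, a missing or empty export (file names as defined above):

```bash
# Sanity check: both exports exist and contain data rows beyond the header
for f in inputs/performance_week_current.csv inputs/performance_week_previous.csv; do
  [ -s "$f" ] || { echo "Missing or empty: $f" >&2; exit 1; }
  echo "$f: $(($(wc -l < "$f") - 1)) data rows"
done
```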

Step 3: Create the Analysis Workflow

Create: `workflows/weekly_review_workflow.md`

Paste this structure:

Inputs Required

  • performance_week_current.csv
  • performance_week_previous.csv
  • weekly_review_standard.md

Analysis Steps

  1. Validate data completeness
  2. Compare metrics week-over-week
  3. Flag meaningful deviations
  4. Map deviations to funnel stages
  5. Identify plausible causes (no speculation)

Reasoning Rules

  • Prefer simplest explanation first
  • Do not blame creative unless CTR changes
  • Do not blame traffic unless CPC changes

Output Format

  • Executive summary (5 bullets max)
  • Wins
  • Losses
  • Risks
  • Recommended actions (ranked)

This file defines how thinking happens.
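The reasoning rules amount to a small decision table: a funnel stage is only blamed when its own metric moved. A hypothetical sketch (the `diagnose` helper is illustrative and reuses the ±15% cutoff from Step 1):

```bash
# Hypothetical sketch of the reasoning rules as a decision table.
# Arguments are week-over-week % changes for CTR, CPC, and CVR.
diagnose() {
  awk -v ctr="$1" -v cpc="$2" -v cvr="$3" 'BEGIN {
    flagged = 0
    if (ctr <= -15) { print "CTR down -> investigate creative first"; flagged = 1 }
    if (cpc >=  15) { print "CPC up   -> investigate traffic/auction costs"; flagged = 1 }
    if (cvr <= -15) { print "CVR down -> investigate landing page or offer"; flagged = 1 }
    if (!flagged)   print "No single stage flagged -> check volume and campaign mix"
  }'
}

diagnose -20 3 -2   # CTR down -> investigate creative first
```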

Step 4: Run the Agent (Weekly)

In the IDE terminal, run:

```bash
claude "Execute workflows/weekly_review_workflow.md using inputs for this week."
```

Claude will:

  • Follow your rules
  • Ignore irrelevant noise
  • Produce a structured review

Step 5: Save the Output

Save Claude's output as:

```text
outputs/weekly_review_YYYY_MM_DD.md
```

Never overwrite old reviews.

History is part of intelligence.
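Steps 4 and 5 combine naturally into a small wrapper that enforces the dated file name and the never-overwrite rule. A minimal sketch, assuming Claude Code's non-interactive print flag (`claude -p`); check `claude --help` for the flags in your installed version:

```bash
#!/usr/bin/env bash
# weekly_review.sh - run the weekly review and save a dated, write-once output
set -euo pipefail

out="outputs/weekly_review_$(date +%Y_%m_%d).md"
if [ -e "$out" ]; then            # never overwrite old reviews
  echo "Refusing to overwrite $out" >&2
  exit 1
fi

claude -p "Execute workflows/weekly_review_workflow.md using inputs for this week." > "$out"
echo "Review saved to $out"
```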

Step 6: Human Override (Mandatory)

Read the output and:

  • Remove actions you disagree with
  • Add context Claude cannot know
  • Mark decisions you accept

Claude proposes. You decide.

Step 7: Iterate the Agent (Monthly)

Once per month:

  1. Review last 4 outputs
  2. Ask: What was consistently useful? What was consistently wrong?

Update:

  • weekly_review_standard.md
  • weekly_review_workflow.md

Version them instead of overwriting.
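One lightweight convention: archive a dated copy before each monthly edit (a plain-file sketch; a git commit per revision works just as well):

```bash
# Archive dated copies of both files before editing them
mkdir -p workflows/archive
ts=$(date +%Y_%m)
cp workflows/weekly_review_standard.md "workflows/archive/weekly_review_standard_${ts}.md"
cp workflows/weekly_review_workflow.md "workflows/archive/weekly_review_workflow_${ts}.md"
```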

Common Failure Modes

Agent feels generic

Cause: Standards too vague

Fix: Tighten thresholds and rules

Agent suggests obvious things

Cause: Inputs lack funnel context

Fix: Include funnel failure map

Agent misses real problems

Cause: Metrics list too shallow

Fix: Add one diagnostic metric

Why This Works

  • Same logic every week
  • No emotional bias
  • No recency bias
  • Decisions are auditable

This is how teams scale judgment.

Next: Module 2: Creative Performance Review Agent →