A manual Google Ads audit takes two to three days. You export reports, sort data in spreadsheets, and try to spot patterns across hundreds of search terms, ad groups, and campaigns. Claude Code compresses this into under an hour. This guide shows you exactly what to export, what prompt to use, and how to turn the findings into a prioritised action plan that covers account structure, quality scores, wasted spend, bidding strategy, conversion tracking, and ad copy quality.
A thorough Google Ads audit examines six areas. Skipping any one of them leaves money on the table.
New to Claude Code? Install it in under 10 minutes here.
Claude Code works with the files you give it. The richer the export, the better the audit. You need three reports at minimum.
In Google Ads, go to Reports > Predefined reports > Search terms. Set the date range to the last 90 days (or 30 days minimum). Include these columns: Search term, Match type, Campaign, Ad group, Impressions, Clicks, Cost, Conversions, Conv. value, Quality score. Download as CSV.
Go to Campaigns. Set the same date range. Include: Campaign name, Campaign type, Status, Budget, Impressions, Clicks, Cost, Conversions, CPA, ROAS, Bid strategy type. Download as CSV.
Go to Ads and assets > Ads. Filter for Enabled status. Include: Campaign, Ad group, Ad type, Headlines, Descriptions, Final URL, Impressions, Clicks, CTR, Ad strength. Download as CSV.
Folder tip: Save all three CSVs into a single folder, e.g. /ads-audit-march/. Then open Claude Code from that directory. Claude Code will be able to read all three files without you specifying paths manually.
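Before handing the folder to Claude Code, it is worth a quick sanity check that each export exists and carries the columns listed above - a missing column means a weaker audit. Here is a minimal stdlib sketch; the file names `search-terms.csv`, `campaigns.csv`, and `ads.csv` are assumptions, so rename them to match whatever Google Ads produced. The `utf-8-sig` encoding strips the byte-order mark that CSV exports sometimes include.

```python
import csv
from pathlib import Path

# A subset of the columns each export should contain (per the steps above).
REQUIRED = {
    "search-terms.csv": {"Search term", "Campaign", "Cost", "Conversions"},
    "campaigns.csv": {"Campaign name", "Budget", "Cost", "Bid strategy type"},
    "ads.csv": {"Campaign", "Headlines", "CTR", "Ad strength"},
}

def check_exports(folder):
    """Report any export that is missing or lacks required columns."""
    problems = []
    for name, cols in REQUIRED.items():
        path = Path(folder) / name
        if not path.exists():
            problems.append(f"{name}: file not found")
            continue
        with open(path, newline="", encoding="utf-8-sig") as f:
            header = set(next(csv.reader(f)))  # first row = column names
        missing = cols - header
        if missing:
            problems.append(f"{name}: missing columns {sorted(missing)}")
    return problems
```

Run `check_exports(".")` from the audit folder; an empty list means the exports look complete.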
Open Claude Code in your audit folder. Paste the prompt below exactly. The specificity matters - vague prompts produce vague audits. This prompt tells Claude Code what to look for, how to rank findings, and what format to produce the output in.
Claude Code will read all three CSV files and work through each section. On a large account (50+ campaigns), expect this to take two to four minutes. On smaller accounts it will be under a minute.
A good Claude Code audit output will include specific numbers, not generalities. You should see exact search terms listed, exact ad groups flagged, and specific dollar amounts attached to the wasted spend estimate. If the output is vague ("you have some quality score issues"), ask Claude Code to go deeper:
"List every keyword with Quality Score below 5, including the campaign name, ad group, keyword text, and current bid. Sort by spend in the last 90 days, highest first."

Cross-reference the wasted spend findings against your knowledge of the account. Some flagged search terms may be intentional (e.g., brand terms you exclude from conversion tracking). Claude Code does not know your business - it finds patterns in the data. Your job is to validate which findings are genuine problems.
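You can also compute that list yourself as a cross-check. A sketch using the column names from the search terms export described earlier - note that the export has no bid column, so this version sorts by spend only, and terms without a score (shown as "--") are skipped:

```python
def low_quality_keywords(rows, threshold=5):
    """Return search terms below the Quality Score threshold, highest spend first.

    `rows` are dicts parsed from the search terms CSV (e.g. via csv.DictReader).
    """
    flagged = []
    for r in rows:
        qs = r.get("Quality score", "").strip()
        if not qs.isdigit():
            continue  # "--" or blank: no score recorded for this term
        if int(qs) < threshold:
            flagged.append({
                "campaign": r["Campaign"],
                "ad_group": r["Ad group"],
                "keyword": r["Search term"],
                "quality_score": int(qs),
                "cost": float(r["Cost"]),
            })
    return sorted(flagged, key=lambda k: k["cost"], reverse=True)
```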
Signs of a thorough audit output: named search terms rather than categories, specific ad groups flagged by name, and a dollar figure attached to every wasted spend claim.
The audit output will include a priority list. Use it to build a two-tier plan: quick wins you can implement this week, and structural changes you schedule for the next sprint.
Use Google Ads Editor for bulk negative keyword uploads and ad group restructuring - it is significantly faster than the web interface for large-scale changes. For each set of changes, add a note in Google Ads change history so you can correlate performance shifts to specific actions.
Set a calendar reminder to re-run the audit in 30 days using the same three CSV exports. Compare the wasted spend total, the number of keywords with Quality Score below 5, and the average CTR across RSAs. These three numbers will tell you whether the audit drove real improvement.
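To make the 30-day comparison mechanical rather than eyeballed, you can compute all three numbers from the same exports. A sketch under two stated assumptions: wasted spend is defined here as cost on search terms with zero conversions, and the CTR column is formatted like "5.2%".

```python
def audit_snapshot(search_rows, ad_rows, qs_threshold=5):
    """Compute the three re-audit comparison numbers from the CSV exports."""
    # Wasted spend: total cost on search terms that produced no conversions.
    wasted = sum(float(r["Cost"]) for r in search_rows
                 if float(r["Conversions"]) == 0)
    # Count of keywords below the Quality Score threshold.
    low_qs = sum(1 for r in search_rows
                 if r.get("Quality score", "").isdigit()
                 and int(r["Quality score"]) < qs_threshold)
    # Average CTR across ads, parsing "5.2%" -> 5.2.
    ctrs = [float(r["CTR"].rstrip("%")) for r in ad_rows if r.get("CTR")]
    avg_ctr = sum(ctrs) / len(ctrs) if ctrs else 0.0
    return {"wasted_spend": wasted, "low_qs_count": low_qs, "avg_ctr": avg_ctr}
```

Save the dictionary this returns alongside the audit, then diff it against next month's snapshot.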
For ongoing automated monitoring without manual exports, the Ads OS system runs continuous account analysis and surfaces issues before they compound.
Manual audits rely on sorting spreadsheet columns and scanning for obvious outliers. Three patterns are reliably missed because they require cross-referencing multiple data sets simultaneously.
A manual audit might catch that one campaign is wasting spend on irrelevant queries. Claude Code reads the entire search terms file and clusters similar terms across all campaigns. It is common to find the same problem keyword theme appearing in five different campaigns simultaneously - a pattern invisible in a column sort.
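The clustering idea can be approximated even without Claude Code. A crude token-based sketch (a short stopword list is assumed; real query themes would need smarter grouping, but this is enough to surface a keyword theme bleeding across campaigns):

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "for", "to", "of", "in", "and", "with"}

def theme_campaign_spread(rows):
    """Map each non-stopword token to the campaigns it appears in and its
    total cost, so a theme recurring across campaigns stands out."""
    themes = defaultdict(lambda: {"campaigns": set(), "cost": 0.0})
    for r in rows:
        for token in r["Search term"].lower().split():
            if token in STOPWORDS:
                continue
            themes[token]["campaigns"].add(r["Campaign"])
            themes[token]["cost"] += float(r["Cost"])
    # Keep only themes seen in two or more campaigns, biggest spend first.
    multi = {t: v for t, v in themes.items() if len(v["campaigns"]) >= 2}
    return sorted(multi.items(), key=lambda kv: kv[1]["cost"], reverse=True)
```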
When two campaigns target overlapping audiences or keywords, they bid against each other in the same auction. Claude Code identifies campaigns with significant keyword overlap, which manual auditors rarely check because it requires comparing ad group keyword lists across the full campaign structure.
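The overlap check itself is a set intersection. A sketch that uses matched search terms as a proxy for the keyword lists (the search terms export shows which queries each campaign actually entered auctions for, even though it is not the keyword list itself):

```python
from itertools import combinations

def campaign_overlap(rows, min_shared=1):
    """Find campaign pairs whose search terms intersect."""
    terms_by_campaign = {}
    for r in rows:
        terms_by_campaign.setdefault(r["Campaign"], set()).add(
            r["Search term"].lower())
    overlaps = []
    for (a, ta), (b, tb) in combinations(terms_by_campaign.items(), 2):
        shared = ta & tb  # queries both campaigns matched on
        if len(shared) >= min_shared:
            overlaps.append((a, b, sorted(shared)))
    return overlaps
```

Any pair this returns is worth inspecting for cannibalisation before you touch bids.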
Manual auditors look at ad strength scores. Claude Code can analyse the actual headline and description text across all RSAs and find the specific word patterns, call-to-action structures, or value proposition framings that correlate with CTR below the account average - giving you direction for rewriting, not just a flag that something is underperforming.
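A rough version of that word-pattern analysis fits in a few lines. Two assumptions in this sketch: headlines are pipe-separated in the Headlines column, and CTR is formatted like "5.2%" - it simply surfaces words that occur more often in below-average-CTR ads, which is a crude frequency signal, not the correlation analysis Claude Code can do.

```python
from collections import Counter

def underperformer_words(ad_rows):
    """Words more common in headlines of ads with below-average CTR."""
    parsed = []
    for r in ad_rows:
        words = r["Headlines"].lower().replace("|", " ").split()
        parsed.append((words, float(r["CTR"].rstrip("%"))))
    avg = sum(ctr for _, ctr in parsed) / len(parsed)
    low = Counter(w for ws, ctr in parsed if ctr < avg for w in ws)
    high = Counter(w for ws, ctr in parsed if ctr >= avg for w in ws)
    # Keep words that skew toward the underperforming ads.
    return [(w, n) for w, n in low.most_common() if n > high.get(w, 0)]
```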
Quality score analysis and search term patterns require volume to be meaningful. An account with 200 clicks in 30 days will produce findings that are technically correct but statistically thin. If your account is small, extend the export to 90 days. If you have fewer than 500 clicks in 90 days, the audit findings will be directionally useful but not reliable enough to drive structural decisions.
Adding negative keywords to a poorly structured campaign is useful but temporary. If ad groups contain 30 keywords with no thematic coherence, irrelevant queries will keep appearing because the match logic is too broad. Read the full audit output before touching anything. Understand the structural picture first, then sequence the fixes in the right order.
An audit without a before-and-after measurement is just a list of opinions. Before making any change, note the current wasted spend total, average CPA, and the number of sub-5 quality score keywords. Check the same numbers 30 days later. Without this baseline, you cannot attribute performance improvement to the audit or defend the time investment to stakeholders.
A minimum of 30 days of data is required for meaningful search term and quality score analysis. Ninety days is better for identifying seasonal patterns and budget allocation trends. Accounts with fewer than 500 clicks in the period will produce limited findings because the statistical signal is too weak for Claude Code to draw confident conclusions about which patterns are genuine and which are noise.
No. Claude Code analyses the data you export and generates recommendations. You implement those changes yourself in Google Ads or Google Ads Editor. This maintains a human approval step before anything goes live, which is important for accounts with significant spend. Automated direct-write integrations exist through the Google Ads API but require separate engineering work.
A full structural audit covering all six areas is worth running every 60 to 90 days. For active accounts spending more than $5,000 per month, a lighter monthly review - focused on search terms and quality scores - keeps problems from compounding between full audits. The export-and-analyse workflow takes under an hour once you have done it once.
Claude Code is more flexible than purpose-built audit tools. Standard tools apply fixed rules (flag ad groups with over 20 keywords, flag CTR below 2%). Claude Code can identify patterns specific to your account structure, your industry, and your goals, because you give it context in the prompt that those tools never have. The trade-off is that you must provide data manually via CSV export rather than through a direct API connection.
The audit is a starting point. Ads OS turns Claude Code into a continuous performance layer for your Google Ads account. Or explore the full playbook for Google Ads managers using Claude Code.