
How to Rescue a Failing PMax Campaign: A Diagnostic Checklist

Duke Labs Team · March 2026 · 11 min read

The instinct when a PMax campaign fails is to pause it and start fresh. That's usually the wrong call.

When you pause a PMax campaign and recreate it, you reset the learning period from zero. You throw away whatever auction signals, audience data, and conversion patterns the algorithm has accumulated, even if that accumulation took months. You're not fixing anything; you're just delaying the problem by 4-6 weeks while the new campaign crawls back to where the old one was.

Before you nuke it, work through this checklist. Most failing PMax campaigns have a diagnosable root cause. Find it and fix it, and you'll recover performance faster than any fresh-start approach.


Layer 1: Conversion Tracking

This is always the first diagnostic layer. Every time.

A PMax campaign running on bad conversion data isn't just performing poorly; it's actively learning the wrong things. The longer broken tracking runs, the worse the situation becomes. Fixing it later doesn't undo what the algorithm learned from garbage inputs.

What to check:

  • Is your primary conversion action firing on actual purchases? Open Google Tag Manager (or your tag implementation) and trigger a test purchase on your staging environment. Verify the conversion tag fires once, with the correct transaction value.
  • Are you double-counting? The most common culprit is both a GA4 import and a native Google Ads purchase tag firing on the same transaction. Check your Google Ads → Conversions list for any actions flagged as "may count duplicates."
  • Are you optimising for the right conversion type? If your conversion action is "Add to Cart" rather than "Purchase," the algorithm is optimising for a micro-conversion with weak purchase intent signal. It will find lots of people who add to cart and abandon, because that's what it was trained to find.
  • Is there significant conversion lag in your category? High-consideration categories (furniture, B2B software, luxury goods) can have multi-day or multi-week attribution windows. If your attribution window is set to 7 days and your customers typically convert on day 10, your recent data will consistently look worse than it is.

What bad looks like:

  • Zero or near-zero conversions despite meaningful spend (>$500 over 2+ weeks)
  • Conversion rate that varies wildly day-to-day without an obvious external explanation
  • Conversion value in Google Ads that doesn't reconcile with your order management system revenue
  • Suspiciously high conversion counts relative to revenue (double-counting signal)

What to do:

Use Google Tag Assistant (tagassistant.google.com) to debug your tag implementation on a live test purchase. Compare conversion counts in Google Ads over the last 30 days against your actual orders from your e-commerce platform or CRM. If they don't reconcile within a reasonable margin, you have a tracking problem.
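The reconciliation step above can be sketched as a simple ratio check. This is a minimal illustration, not a Google Ads API integration: the conversion and order counts are assumed to come from a manual report export and your e-commerce platform, and the 10% tolerance is an example threshold, not an official figure.

```python
# Sketch: reconcile Google Ads conversion counts against actual orders
# over the same window. All numbers below are illustrative.

def reconcile(ads_conversions: int, actual_orders: int, tolerance: float = 0.10) -> dict:
    """Flag a likely tracking problem if counts diverge by more than `tolerance`."""
    if actual_orders == 0:
        return {"ratio": None, "ok": ads_conversions == 0, "likely_issue": None}
    ratio = ads_conversions / actual_orders
    return {
        "ratio": round(ratio, 2),
        "ok": abs(ratio - 1.0) <= tolerance,
        # A ratio well above 1.0 suggests double-counting;
        # well below 1.0 suggests tags not firing on some purchases.
        "likely_issue": ("double-counting" if ratio > 1 + tolerance
                         else "missed conversions" if ratio < 1 - tolerance
                         else None),
    }

print(reconcile(ads_conversions=187, actual_orders=120))
# ratio 1.56 -> flags likely double-counting
```

If the ratio sits well above 1.0, start with the GA4-import-plus-native-tag duplication check from the list above before touching anything else.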

If you're running micro-conversions as your primary optimization target, switch to purchase conversions. Yes, the volume will be lower and it'll extend your learning period, but the algorithm will make better decisions with purchase signal than with cart signal. The short-term pain is worth it.


Layer 2: Asset Quality and Asset Strength

PMax's serving eligibility is directly gated by Asset Strength. Poor or Good strength means the campaign is running with one hand tied behind its back.

What to check:

Navigate to your PMax campaign → Asset Groups → open each Asset Group and check the Asset Strength indicator in the top right. This is a composite signal Google uses to determine how broadly your Asset Group can serve across its inventory of placements.

What bad looks like:

  • "Poor" strength on any Asset Group (severely limits serving eligibility)
  • "Good" strength (still limiting โ€” "Excellent" is the target)
  • Missing video assets (video absence is one of the largest single penalties to Asset Strength)
  • Headlines that are all variations of the same message (low diversity score)
  • Images below minimum size requirements or with overlaid promotional text
  • Single-line description sets with minimal word count

What to do:

Video is the highest-leverage fix. PMax serves across YouTube, Discover, and Gmail in addition to Search and Shopping; without video assets, you're locked out of those placements. If you don't have professionally produced video, a 15-30 second slideshow of product images with a voiceover is sufficient to unlock video serving eligibility. Google Ads' built-in video creation tool can generate a basic video from your existing image assets in minutes.

For headlines: Google's Asset Strength algorithm rewards diversity of value proposition, length variation, and keyword inclusion. If all your headlines are short brand taglines, add longer benefit-focused headlines. If all your headlines are price-focused, add social proof and specificity-focused variants. Aim for at least 15 headlines with genuine variety.


Layer 3: Product Feed Health

A PMax campaign is only as healthy as the products it has to serve. Feed problems are silent campaign killers: they don't produce error messages in Google Ads, just declining performance.

What to check:

Log into Google Merchant Center → Products → All products. Note how many products have "Active" status versus "Disapproved" or "Limited." Navigate to the Diagnostics tab for a categorised breakdown of feed errors and policy issues.

What bad looks like:

  • More than 5% of your product catalogue in "Disapproved" status
  • Recurring policy warnings (price mismatch, unavailable products, prohibited content)
  • Large numbers of products in "Limited" status (serving, but with reduced eligibility)
  • Diagnostic tab showing the same error categories week after week without resolution

What to do:

Prioritise disapprovals by volume. Price mismatch (where your feed price doesn't match your website landing page price) and availability mismatch (serving out-of-stock products) are typically the highest-volume disapproval categories and the fastest to fix at the source.

While you're resolving feed errors, consider temporarily excluding the affected products from your PMax Asset Groups rather than letting them continue dragging down your campaign's overall health signals. A smaller, fully healthy product set performs better than a large product set with significant disapproval rates.
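Checking the 5% threshold from the list above against a Merchant Center product export is a one-liner worth automating. This sketch assumes a CSV export with `item id` and `status` columns; match the column names to your actual export before relying on it.

```python
import csv
import io

# Sketch: compute the disapproval rate from a Merchant Center product
# export. The column names and sample rows are assumptions for illustration.

SAMPLE = """item id,status
sku-001,Active
sku-002,Disapproved
sku-003,Limited
sku-004,Active
"""

def disapproval_rate(csv_text: str) -> float:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    disapproved = sum(1 for r in rows if r["status"] == "Disapproved")
    return disapproved / len(rows)

rate = disapproval_rate(SAMPLE)
print(f"{rate:.0%} disapproved")  # 25% in this sample; anything over 5% warrants action
```

Run it weekly against a fresh export and you'll catch the "same error categories week after week" pattern from the checklist before it compounds.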


Layer 4: Budget Constraints

Budget and target ROAS interact in ways that aren't always obvious. Both "too little budget" and "too high a tROAS target" produce the same symptom: underperformance.

What to check:

Is your campaign displaying a "Limited by budget" status indicator? What is your budget utilisation rate, meaning your actual daily spend as a percentage of your daily budget cap? What is your target ROAS, and are you actually hitting it (or are you spending below budget because the algorithm can't find enough qualifying opportunities)?

What bad looks like:

Budget consistently hitting 100% of daily cap: you're artificially capping reach at the point where performance could scale further. Or the opposite: spend consistently running at 40-60% of budget cap with poor ROAS, meaning the algorithm is trying to find opportunities at your tROAS constraint and failing.

What to do:

Budget-limited campaigns: your tROAS target is mathematically irrelevant if you're hitting your budget cap. The algorithm can't optimise toward a target it has no room to pursue. Either increase budget if the unit economics justify it, or accept that your effective ROAS target is whatever the campaign delivers at the budget cap.

Underspending campaigns: your tROAS target is too high. The algorithm is looking for opportunities that meet your return threshold and not finding enough of them. Lower your tROAS target in 10-15% increments, not all at once, and give each adjustment 2 weeks to settle before evaluating. Each reduction gives the algorithm access to a wider inventory of auctions.
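The decision logic for this layer reduces to a couple of comparisons. A minimal sketch, with the 95% and 60% utilisation cutoffs and the 12% step size chosen as illustrative values within the ranges discussed above:

```python
# Sketch: classify a campaign as budget-limited or tROAS-constrained
# from daily spend and budget, and propose the next tROAS step.
# Thresholds are illustrative assumptions, not official guidance.

def diagnose(daily_spend: float, daily_budget: float, troas: float, hitting_troas: bool):
    utilisation = daily_spend / daily_budget
    if utilisation >= 0.95:
        # tROAS is moot at the cap: the algorithm has no room to pursue it
        return ("budget-limited: raise budget or accept delivered ROAS", troas)
    if utilisation <= 0.60 and not hitting_troas:
        # Step tROAS down ~12% (within the 10-15% range); wait ~2 weeks between steps
        return ("underspending: lower tROAS and re-evaluate in 2 weeks", round(troas * 0.88, 2))
    return ("no budget/tROAS constraint detected", troas)

print(diagnose(daily_spend=48, daily_budget=100, troas=5.0, hitting_troas=False))
# -> underspending branch, suggested next tROAS step 4.4
```

Note the asymmetry the text describes: at the cap, the recommendation touches budget, not tROAS; below the cap, it touches tROAS, not budget.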


Layer 5: Audience Signals

Audience signals are often misunderstood. They're not targeting; they're hints. The algorithm uses them as a starting point, not a boundary.

What to check:

Navigate to your Asset Group's audience signal section. Review the lists and segments you've added. Check the member counts for any custom lists you're using. Look for any signals that are very narrow (highly specific custom intent audiences built on a small set of keywords, or precise demographic overlays).

What bad looks like:

  • Audience lists with fewer than 1,000 members, which is too small to be statistically useful as a signal
  • Signals built on narrow custom intent audiences that may be starving the algorithm early in the campaign's life
  • Over-specified signals that don't cover enough of the potential customer journey

What to do:

Broaden your signals. Add your full website visitor list (all pages, 30-day window minimum). Add a customer match list if you have sufficient CRM data. Add broad in-market segments relevant to your product category. These signals give the algorithm a larger starting population to analyse patterns from.

Remember: the algorithm will expand beyond your signals if it identifies auction opportunities. Overly narrow signals don't prevent the algorithm from finding new customers, but they can starve it of useful learning signal early, which extends the time to stable performance. Give it more to work with, not less.
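Auditing signal sizes against the 1,000-member rule of thumb is trivial to script once you've noted the member counts from the Asset Group's audience signal section. The list names and sizes below are made-up examples:

```python
# Sketch: flag audience signal lists likely too small to be useful
# as signals, per the 1,000-member rule of thumb above.
# Names and member counts are illustrative examples.

MIN_MEMBERS = 1_000

signals = {
    "All site visitors (30d)": 42_000,
    "Custom intent: niche keywords": 340,
    "Customer match: newsletter": 8_500,
}

too_small = [name for name, size in signals.items() if size < MIN_MEMBERS]
print(too_small)  # ['Custom intent: niche keywords']
```

Anything flagged is a candidate for replacement with a broader list (full site visitors, customer match, broad in-market segments) rather than outright removal.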


Layer 6: Learning Period Resets

The most self-inflicted PMax failure mode: panic-editing during the learning period, causing a cascade of resets that keeps the campaign perpetually unstable.

What to check:

Open your campaign's Change History (Tools → Change history, filtered to this campaign). Look at what changes were made in the last 30 days. Significant budget changes, tROAS changes, Asset Group additions or removals, and conversion action changes all reset the learning period.

What bad looks like:

  • "Learning" status badge in the campaign list view (hover over the status dot to confirm)
  • ROAS or CPA varying by more than 50% day-to-day without seasonal explanation
  • CPCs that are 2x+ your historical average
  • Impression share that spikes high one day and collapses the next
  • A change history showing multiple significant edits in a 2-week window

What to do:

Stop touching the campaign. This is genuinely the hardest advice to follow when performance is bad, but it's the right call. Each significant change during the learning period extends the recalibration timeline. A campaign that keeps getting edited never reaches stability.

If you need to make changes, keep them small. Stay within ±15% on budget and tROAS adjustments. Space changes at least 2 weeks apart. Document your changes and their dates so you can correlate them with performance shifts.
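Those two guardrails, at most ±15% per step and at least 2 weeks between steps, are easy to validate against a documented change log. A sketch, where the change records themselves are illustrative:

```python
from datetime import date

# Sketch: validate a log of budget or tROAS edits against the
# guardrails above (within ±15% per step, at least 14 days apart).
# The sample change records are made up for illustration.

def check_changes(changes: list[tuple[date, float, float]]) -> list[str]:
    """Each change is (date, old_value, new_value), in chronological order."""
    issues = []
    for i, (day, old, new) in enumerate(changes):
        step = abs(new - old) / old
        if step > 0.15:
            issues.append(f"{day}: change of {step:.0%} exceeds 15%")
        if i > 0:
            gap = (day - changes[i - 1][0]).days
            if gap < 14:
                issues.append(f"{day}: only {gap} days since previous change")
    return issues

print(check_changes([
    (date(2026, 1, 5), 100.0, 110.0),   # +10%, fine
    (date(2026, 1, 12), 110.0, 150.0),  # +36%, and only 7 days later
]))
```

An empty result means the edit cadence is within the guardrails; each issue it returns is a likely learning-period reset you can line up against dips in the performance chart.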


Bonus: Brand Cannibalisation Check

Before concluding your diagnosis, check whether PMax is cannibalising your branded Search campaigns.

Navigate to Insights → Search term insights within your PMax campaign. Look for your brand name and branded product terms appearing as served queries. If PMax is capturing branded traffic, it's competing with your dedicated brand campaign, which almost certainly has a better conversion rate and lower CPA for branded queries than PMax does.

As of 2025, you can add brand exclusions directly at the PMax campaign level (Campaign settings → Brand exclusions). Do this. It routes branded traffic to your brand campaign where it belongs, which improves both campaigns' performance simultaneously.


The Nuclear Option: When to Actually Restart

Sometimes a fresh start is the right call. But the bar should be high:

Restart when:

  • Conversion tracking was fundamentally broken for more than 60 days, and the algorithm has spent months learning from garbage data. The accumulated model is worse than no model; a fresh start with clean tracking will outperform a corrupted model within its learning period.
  • You're making a complete strategic pivot: entirely different products, entirely different audience, target ROAS that's more than 2x different from current. The existing model is calibrated for a different campaign than the one you're running.
  • The campaign has been in a "Learning" state for more than 8 weeks despite no significant changes and sufficient budget; this suggests a fundamental structural issue that accumulated learning can't fix.

Even then: consider launching a new campaign alongside the existing one rather than pausing. Run both for 2-3 weeks and compare performance as the new campaign accumulates learning. This gives you a live benchmark and lets you confirm the new campaign is actually performing better before you commit to retiring the old one.


Work through these six layers in order. Most PMax campaigns that appear to be failing have a diagnosable root cause in Layer 1 (conversion tracking) or Layer 4 (budget/tROAS mismatch). Fix the diagnosis before you reach for the nuclear option.
