Performance Max

Why Your PMax Campaign Is Wasting Budget on Your Worst Products (And How to Stop It)

Duke Labs Team · February 2026 · 9 min read

# Why Your PMax Campaign Is Wasting Budget on Your Worst Products (And How to Stop It)

You check your Performance Max campaign and the ROAS looks... fine. 4.2x. Respectable. You leave it alone.

What you're not seeing: underneath that aggregate number, Product A is running at 3.2x ROAS and Product B is running at 0.84x ROAS — and Product B is eating nearly twice the budget.

That's not a hypothetical. That's what happens when you let Google's algorithm allocate spend across your full catalogue without segmentation. The aggregate ROAS hides the carnage happening at the product level.

Google Optimises for Conversions. Not Your Margin.

This is the foundational misunderstanding that causes most PMax accounts to underperform.

Google's Smart Bidding objective in a PMax campaign — even when you set a target ROAS — is to maximise conversion value within the efficiency constraint you've set. If Google's model predicts a conversion is likely, it will bid for it. The algorithm doesn't know that Product B has a 12% gross margin and Product A has a 68% gross margin. It doesn't know that Product B's customers are serial returners. It doesn't know that you need 3x ROAS minimum to cover fulfilment costs on that SKU.

It knows one thing: conversion value is conversion value.

Here's a concrete example to make this visceral:

| Product | CPA | AOV | ROAS | Daily Spend | Daily Revenue |
|---|---|---|---|---|---|
| Product A (Widget Pro) | $12 | $40 | 3.2x | $180 | $576 |
| Product B (Widget Lite) | $45 | $38 | 0.84x | $320 | $269 |

Product B is generating $269 in revenue from $320 in ad spend. It's losing money on advertising. And Google is spending 78% more on it than on Product A — because it converts, and converting is what Google's model is rewarded for.

The pair together? $500/day in spend, $845 in revenue, a blended 1.69x ROAS. Dilute that into the rest of the catalogue and you get the comfortable account-level number that feels fine — while you're burning $320/day on a product that's destroying value.
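
The arithmetic behind that table is worth making explicit. A minimal sketch in plain Python, using the figures from the example above:

```python
# Figures from the table above: (daily spend, daily revenue) per product
products = {
    "Widget Pro": (180.0, 576.0),
    "Widget Lite": (320.0, 269.0),
}

# Per-product ROAS: revenue / spend
roas = {name: rev / spend for name, (spend, rev) in products.items()}

# The blended number is all the account-level report shows
total_spend = sum(spend for spend, _ in products.values())
total_revenue = sum(rev for _, rev in products.values())
blended = total_revenue / total_spend

print(roas)     # Widget Pro 3.2x, Widget Lite ~0.84x
print(blended)  # ~1.69x: one number hiding two very different products
```

The blended figure is neither product's reality; it is a spend-weighted average that the losing product drags down while the winner props it up.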

Why PMax Can't Tell the Difference Without Your Help

Performance Max's architecture is fundamentally different from Standard Shopping. In Standard Shopping, you could bid at the product level — literally set different CPCs or bid adjustments per SKU. PMax doesn't work that way.

In PMax, the bidding signal flows through Asset Groups, and Asset Groups contain Listing Groups. If your entire catalogue lives in a single Asset Group with a single listing group ("All Products"), Google treats every product as having the same strategic value. The algorithm distributes budget based on predicted conversion probability, not your profitability framework.

The algorithm isn't broken. It's doing exactly what it's designed to do. The problem is that it's working with incomplete information — it has no idea which of your products should be prioritised and which should be suppressed. That information doesn't exist in your feed unless you put it there.

The Three Levers Everyone Tries First (And Why They're Not Enough)

When advertisers notice efficiency problems in PMax, they reach for one of three controls. Each one is a partial fix at best.

1. Raising the Target ROAS

Logic: if you set tROAS to 6x instead of 4x, the algorithm will only bid on high-value opportunities.

The problem: raising tROAS lifts the efficiency floor for the entire campaign. It might improve average ROAS, but it doesn't redirect budget away from bad products and toward good ones. The algorithm simply becomes more conservative across the board — it bids less aggressively on everything, including your stars. You get less waste, but you also get less volume on your best performers. You've traded one problem for another.

2. Cutting Overall Budget

Logic: spend less, waste less.

The problem: you can't cut the budget in half and expect your winners to keep performing. Budget reduction is non-selective. Your stars and your dogs get cut simultaneously. You're sacrificing growth on high-ROAS products to reduce waste on low-ROAS ones — when the real solution is to separate them.

3. Excluding Problem Products

Logic: if Product B is the problem, just exclude it from the campaign.

This is closer to useful, but it's still a blunt instrument. Exclusion is binary — in or out. You lose all data on excluded products. Some "bad" products are bad because of seasonal timing or a temporary pricing issue, not because they're fundamentally unviable. And manually reviewing every product in a catalogue of 500+ SKUs to decide what to exclude is not a sustainable process.

None of these approaches solve the core problem: you need different strategic treatment for different product tiers — and that requires segmentation.

The BCG Segmentation Framework: The Actual Fix

The BCG Matrix — originally designed for corporate portfolio strategy — maps directly onto a Google Ads product catalogue. The four quadrants give you a clear, actionable taxonomy.

Contenders (High ROAS + High Revenue)

These are your best products. High efficiency, high volume. They're generating most of your conversion value and doing it profitably. Your job: protect their budget, scale them carefully. Don't strangle them with an overly aggressive tROAS target — they're already performing.

Cash Cows (High ROAS + Stable/Lower Revenue)

Efficient performers with limited scale headroom. These might be high-price, low-volume products or niche SKUs with small search audiences. They punch above their weight on ROAS. Strategy: maintain, don't over-invest. Good candidates for higher tROAS targets to protect margins on constrained spend.

Question Marks (Low ROAS + High Revenue Potential)

These products have volume — people are finding them, clicking, sometimes buying — but the economics aren't working yet. Low ROAS doesn't necessarily mean give up. It might mean a feed quality issue, a landing page conversion problem, or a pricing mismatch. These deserve investigation before suppression. Test carefully with a separate Asset Group and a slightly relaxed tROAS to gather data.

Dogs (Low ROAS + Low Revenue)

Neither volume nor efficiency. These products aren't converting, and they're not contributing meaningful revenue. The default action is suppression — either exclude them entirely or isolate them in a low-priority Asset Group with an extremely aggressive tROAS that effectively prevents spend. Review quarterly; some dogs become question marks as product positioning improves.

How to Run the Audit Right Now

You don't need third-party tools to identify which products fall into which quadrant. The data is already in your Google Ads account.

Step 1: Pull the Product Performance Report

In Google Ads, navigate to Products in the left-hand navigation (under the Shopping campaigns section, or via the Products tab in PMax). Filter by your PMax campaign. Set the date range to the last 30 days minimum — 90 days is better if you have seasonal products.

Step 2: Export the Right Columns

You need these columns:

  • Product title (or Item ID)
  • Conversion value (total revenue attributed)
  • Cost (total ad spend)
  • Conversions
  • Conv. value / cost — this is your ROAS column
  • Impressions and Clicks for context

Export to Google Sheets or CSV.

Step 3: Identify the 20/80 Distribution

Sort by Cost descending. In almost every account, you'll find that 15–25% of products are consuming 60–80% of total spend. That's expected — Pareto is real. The question is whether those top spenders are also your top revenue generators.

Add a column: Revenue % = each product's conversion value divided by total conversion value across all products. Add another: Spend % = each product's cost divided by total cost.

Products where Spend % >> Revenue % are your budget sinks. Products where Revenue % >> Spend % are your underfunded stars.

Step 4: Calculate Relative ROAS

Calculate your account average ROAS (total conversion value / total cost). Then add a column: Relative ROAS = each product's ROAS divided by the account average. Products below 0.8 are underperforming. Products above 1.5 are outperforming.

Cross-reference with Revenue % to slot each product into the BCG quadrant.
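
Steps 3 and 4 can be sketched in a few lines over the exported rows. This is an illustrative helper, not an official API: the dict keys (`id`, `conv_value`, `cost`) are assumptions to be matched against your export's column headers, and the cut-offs are simple starting points to tune against the 0.8/1.5 relative-ROAS bands described above.

```python
def classify(products):
    """Slot each exported product row into a BCG quadrant.

    Each row is a dict with 'id', 'conv_value', 'cost' (key names
    are assumptions: map them to your export's actual columns)."""
    total_value = sum(p["conv_value"] for p in products)
    total_cost = sum(p["cost"] for p in products)
    account_roas = total_value / total_cost

    tiers = {}
    for p in products:
        roas = p["conv_value"] / p["cost"] if p["cost"] else 0.0
        rel_roas = roas / account_roas                 # Step 4
        revenue_share = p["conv_value"] / total_value  # Step 3
        # Simple binary cut-offs as a starting point
        high_roas = rel_roas >= 1.0
        high_revenue = revenue_share >= 1.0 / len(products)
        if high_roas and high_revenue:
            tiers[p["id"]] = "Contender"
        elif high_roas:
            tiers[p["id"]] = "Cash Cow"
        elif high_revenue:
            tiers[p["id"]] = "Question Mark"
        else:
            tiers[p["id"]] = "Dog"
    return tiers
```

Run against the Widget Pro / Widget Lite example, this tags Pro as a Contender and Lite as a Dog, which matches the manual read of the table.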

Step 5: Build Your Segmentation Structure

Create custom labels in your Merchant Center feed to tag each product with its quadrant. Then restructure your PMax campaign to use separate Asset Groups — one per tier — each with its own tROAS target calibrated to that tier's strategic role.

This is where the real leverage lives. Not in account-level settings, but in the architecture underneath.

The Problem with Doing This Manually

The audit above is tractable as a one-time exercise. The problem is that product performance doesn't stay static. A product's ROAS quadrant can change month to month as competitive dynamics shift, seasonality hits, inventory changes, or pricing adjusts. A product that was a Contender in Q4 might be a Question Mark in Q1.

Manually re-running this analysis every four to six weeks, updating your Merchant Center feed, and adjusting tROAS targets across Asset Groups is a significant operational overhead — one that most teams don't have capacity for. So the segmentation gets set once and goes stale. Which means the waste creeps back.

Automating the Segmentation

DukesMatrix solves this with a bivariate optimisation algorithm that continuously evaluates each product across two dimensions simultaneously — ROAS efficiency and revenue contribution — and updates your segmentation labels and Asset Group tROAS targets accordingly.

Instead of a one-time audit, you get a continuously maintained segmentation layer. Contenders stay protected. Dogs get suppressed before they drain budget. Question Marks get the structured testing environment they need. And the system adjusts as your catalogue evolves, not just when you have time to run a spreadsheet.

The result isn't just less waste — it's active reallocation. Budget that was going to Dogs gets redirected to Contenders. That's not a marginal efficiency gain. In a catalogue with hundreds of SKUs, that's a structural improvement in returns.


If you're running PMax on a catalogue larger than 50 products and you haven't segmented by performance tier, you're leaving significant ROAS gains on the table. The question isn't whether your products need different treatment — it's whether your campaign structure currently allows for it.

Start your free DukesMatrix audit → ` },

"google-shopping-custom-labels-guide": {
  meta: {
    title: "Google Shopping Custom Labels: The Complete Guide (Labels 0–4 Explained)",
    excerpt: "Custom labels are the most underused lever in Google Shopping. Here's exactly what to put in each of the 5 slots — and how to structure them for PMax Asset Groups.",
    category: "shopping-ads",
    categoryName: "Feed Management",
    date: "March 2026",
    readTime: "11 min read",
    author: "Duke Labs Team",
    publishedAt: "2026-03-01T07:00:00+11:00"
  },
  markdown: `# Google Shopping Custom Labels: The Complete Guide (Labels 0–4 Explained)

Most Google Shopping accounts use zero custom labels. Of the ones that do use them, most use one or two, inconsistently, with values like "sale" or "clearance" that never get updated. Almost nobody uses all five with a coherent, data-driven strategy.

That's a significant competitive advantage being left on the table — because custom labels are the only mechanism that lets you segment products inside a Performance Max campaign based on your logic, not Google's.

This guide covers what custom labels actually are, why they matter architecturally for PMax, and exactly what to put in each of the five slots.

What Custom Labels Are (And What They're Not)

Custom labels are Merchant Center feed attributes: custom_label_0 through custom_label_4. Five slots. Each accepts a string value of up to 100 characters. They flow from your product feed through Merchant Center and become available in Google Ads as segmentation dimensions inside Asset Group Listing Groups.

Here's the critical thing to understand: Google never reads these values for auction matching or relevance scoring. They have zero impact on whether your ad shows for a given query. Google doesn't use them for targeting. It doesn't infer anything from them.

They exist entirely for you — as a way to impose your own organisational logic on your catalogue inside the Google Ads interface.

Without custom labels, your only options for splitting products across Asset Groups are the standard feed attributes: brand, product category (Google taxonomy), product type, item ID, or condition. These are useful dimensions, but they're Google's taxonomy, not yours. They tell you what a product is, not how it's performing or what strategic role it plays.

With custom labels, you control the segmentation logic entirely. And that segmentation logic determines how your campaign budget is distributed.

Why This Matters for Performance Max Specifically

In Performance Max, an Asset Group is both a creative container (headlines, descriptions, images, videos) and a product container (via its Listing Group). When you want different tROAS targets for different product cohorts, you need different Asset Groups. When you want different creative messaging for different product types, you need different Asset Groups.

The only way to split products into different Asset Groups without creating separate campaigns is via Listing Group conditions — and the most flexible Listing Group conditions are custom labels.

Without custom labels, if you want to put your high-margin products in one Asset Group and your low-margin products in another, you're stuck hoping they happen to align neatly with a brand or category boundary. They rarely do.

With custom labels, you define the boundary. You tag every high-margin product with custom_label_1 = "High-margin" and you're done. Google Ads will respect that segmentation exactly.

Label 0 — Performance Tier

Recommended values: Star / Cash-Cow / Question-Mark / Dog (Alternative: High-ROAS / Mid-ROAS / Low-ROAS if BCG terminology feels too abstract for your team)

Label 0 should reflect the product's current advertising performance, updated from actual account data. This is your primary bidding segmentation dimension — the one that maps most directly to tROAS targets in your Asset Groups.

  • Star: High ROAS relative to account average, high revenue contribution. Scale with investment.
  • Cash Cow: High ROAS, lower revenue contribution. Maintain, protect margins, don't over-invest.
  • Question Mark: Below-average ROAS but meaningful revenue or growth potential. Test and investigate.
  • Dog: Below-average ROAS, minimal revenue. Suppress or impose heavy bid constraints.

This label needs to be updated regularly — at minimum monthly, ideally weekly for large catalogues. Stale performance labels are worse than no labels, because they misdirect budget based on outdated data. Automation is strongly recommended for any catalogue over 100 SKUs.

Label 1 — Margin Tier

Recommended values: High-margin / Mid-margin / Low-margin

This is your profitability signal from outside Google Ads — your internal cost data mapped into the feed. A product with an 80% gross margin can tolerate a much lower ROAS target than a product at 15% margin and still be profitable. Google Ads doesn't know any of this unless you tell it.

The formula for minimum viable ROAS given margin is:

Min ROAS = 1 / Gross Margin %

At 20% margin, you need 5x ROAS to break even on ad spend before any other costs. At 60% margin, 1.67x ROAS keeps you profitable. These are fundamentally different products from a bidding standpoint — and treating them identically is one of the fastest ways to bleed margin at scale.
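
The rule comes from requiring ad-driven profit to be non-negative: revenue × margin − spend ≥ 0, which rearranges to revenue/spend ≥ 1/margin. As a one-function sketch:

```python
def min_viable_roas(gross_margin: float) -> float:
    """Break-even ROAS on ad spend.

    Requiring profit >= 0 means revenue * margin - spend >= 0,
    which rearranges to revenue / spend >= 1 / margin."""
    if not 0.0 < gross_margin <= 1.0:
        raise ValueError("gross margin must be a fraction in (0, 1]")
    return 1.0 / gross_margin

print(min_viable_roas(0.20))  # 5x to break even at 20% margin
print(min_viable_roas(0.60))  # ~1.67x at 60% margin
```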

Use your internal cost-of-goods data to generate this label. It typically doesn't change frequently, but review it quarterly for products where COGS fluctuates (commodities, supplier price changes, etc.).

Combined with Label 0, you now have a two-dimensional view: a Star with high margin can have its tROAS target set conservatively (let it scale), while a Star with low margin needs a tighter efficiency floor.

Label 2 — Price Competitiveness

Recommended values: Competitive / At-market / Above-market

Source: Merchant Center's Price Competitiveness report (under Growth → Price competitiveness). This shows you how your prices compare to other merchants selling the same or similar products.

Why does this belong in a custom label? Because price position affects your auction eligibility and conversion rate before bidding even happens. A product priced 25% above the market benchmark will have:

  • Lower impression share (Google's algorithm deprioritises uncompetitive listings)
  • Lower click-through rate
  • Lower conversion rate
  • Therefore higher effective CPA

If you're bidding aggressively on an above-market product, you're fighting the algorithm from the start. Tagging these products lets you either (a) apply more conservative bids to limit waste, or (b) create a separate Asset Group with adjusted messaging that justifies the premium positioning.

Conversely, Competitive products warrant more aggressive bidding — you have a structural advantage in the auction that Smart Bidding should be encouraged to exploit.

Update this label monthly, or automate from the Merchant Center Price Insights API.

Label 3 — Seasonality and Promotion Status

Recommended values: In-season / Off-season / On-promotion / Clearance

This is your temporal dimension. It lets you respond to promotions, seasonal peaks, and clearance events without restructuring your campaign or creating new campaigns for each event.

When a sale starts, update Label 3 to On-promotion for the relevant products. Your "Promotions" Asset Group — already built, with appropriate promotional creative — automatically picks up those products because they now match the Listing Group condition. When the sale ends, revert the label. No campaign changes required.

The same logic applies to seasonality. Winter apparel moves to In-season in April (Southern Hemisphere) or October (Northern Hemisphere). You've pre-built the Asset Group with seasonal creative. The label change activates it.

Clearance is a distinct case — products you need to move at any cost to recover inventory value. These warrant their own Asset Group with lower tROAS targets (more aggressive bidding) and clearance-specific messaging.

Label 4 — Product Lifecycle

Recommended values: New-launch / Evergreen / Discontinuing / Bestseller

Where a product is in its lifecycle fundamentally changes how you should be advertising it.

New-launch products have no performance history. Smart Bidding has no signal. If you set an aggressive tROAS on a new product, the algorithm has nothing to bid on — you'll get minimal impressions and no data. New launches need a dedicated Asset Group with a relaxed tROAS target (or Maximise Conversion Value with no ROAS floor) to enter a discovery phase. Once you have 30 days of data, reassess and move it to the appropriate Label 0 performance tier.

Bestsellers need protection. These are your proven high-volume products. They should have dedicated budget, appropriate creative, and tROAS targets that prioritise stability over aggressive efficiency gains.

Discontinuing products should be suppressed. There's no strategic value in investing ad spend on a product you're winding down. Add them to a high-tROAS holding Asset Group that effectively prevents spend, or exclude them entirely.

Evergreen products are your stable catalogue — performing adequately, no special treatment required. They get the default Asset Group treatment.

Combining Labels in Asset Group Listing Groups

The real power of custom labels is in combining them. Google Ads Listing Groups support AND logic — a product must match all specified conditions to be included.

For example, your highest-priority Asset Group might be:

  • Label 0 = Star
  • AND Label 1 = High-margin

These are your best products on both dimensions. They get the most creative investment, the most carefully calibrated tROAS target, and dedicated performance monitoring.

A secondary Asset Group might be:

  • Label 0 = Star
  • AND Label 1 = Mid-margin OR Low-margin

Same ROAS performance, but the efficiency target needs to be higher to maintain profitability given lower margins.

Conflicts to avoid:

  • Don't create overlapping Asset Group conditions where a product could match multiple groups. Google will include it in the first matching group, which may not be your intent.
  • Always have an "Everything else" catch-all Asset Group at the bottom of your Asset Group hierarchy with a conservative tROAS. This captures any products that fall through all your label-based groups (e.g., products with missing labels).
  • Test your Listing Group logic on a small product set before rolling out to the full catalogue.
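
Those rules are easy to check offline before touching the account. A minimal sketch (group names, label values, and the `assign` helper are all illustrative, not a Google Ads API) mirroring first-match-wins evaluation with a guaranteed catch-all:

```python
# Ordered Asset Group conditions: first match wins, mirroring the
# Listing Group hierarchy described above. Names/values are examples.
ASSET_GROUPS = [
    ("Top tier", lambda p: p.get("custom_label_0") == "Star"
                       and p.get("custom_label_1") == "High-margin"),
    ("Star mid/low", lambda p: p.get("custom_label_0") == "Star"),
    ("Everything else", lambda p: True),  # catch-all: always last
]

def assign(product: dict) -> str:
    """Return the first Asset Group whose condition the product matches."""
    for name, condition in ASSET_GROUPS:
        if condition(product):
            return name
    raise AssertionError("unreachable: the catch-all matches everything")
```

A product with missing labels lands in "Everything else", which is exactly the property the catch-all group exists to guarantee: nothing falls through and silently stops serving.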

Implementation: Supplemental Feed vs Direct Feed Update

There are two ways to populate custom labels.

Method 1: Direct Feed Update

Add custom_label_0 through custom_label_4 as columns directly in your primary product feed. This works if you maintain your feed programmatically and can generate label values as part of your feed build process.

Downside: requires rebuilding and re-uploading your primary feed whenever labels change. For daily or weekly performance tier updates, this creates significant feed management overhead.

Method 2: Supplemental Feed (Recommended)

A supplemental feed overrides or augments attributes from your primary feed. Create a Google Sheets supplemental feed with two columns: id (matching your product IDs) and the label attributes you want to set.

Merchant Center → Feeds → Add supplemental feed → Google Sheets.

Why this is better:

  • Your primary feed stays clean and stable
  • You can update the Sheets feed without touching your primary feed infrastructure
  • You can generate label values programmatically (from a script querying Google Ads API for performance data) and write them directly to the Sheet
  • Changes propagate to Merchant Center on the next feed fetch (configurable, typically daily)

For performance tier labels that need frequent updates, the supplemental feed + automated script combination is the most practical architecture.
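
As a sketch of what such a script's output looks like, the following renders an item-id-to-tier mapping as the two-column sheet a supplemental feed expects. The `id` and `custom_label_0` attribute names follow the Merchant Center feed spec; the function name and tier values are illustrative.

```python
import csv
import io

def supplemental_feed_rows(tiers: dict) -> str:
    """Render {item_id: performance_tier} as the two-column CSV a
    Google Sheets supplemental feed expects: 'id' plus the label
    attribute being overridden."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "custom_label_0"])
    for item_id, tier in sorted(tiers.items()):
        writer.writerow([item_id, tier])
    return buf.getvalue()

print(supplemental_feed_rows({"SKU-001": "Star", "SKU-002": "Dog"}))
```

The same rows pasted (or written via the Sheets API) into the supplemental feed sheet will override `custom_label_0` on the matching products at the next feed fetch.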

Common Mistakes

Using labels to duplicate standard attributes. If you're labelling products by brand or category, you're burning a label slot on data that's already available in the standard feed. Labels are for your logic — margin, performance, lifecycle — that isn't captured anywhere else.

Too many values per label slot. Each label slot can technically hold any string value, but if you create 20 different performance tiers, your Listing Group structure becomes unmanageable. Cap each label at 4–6 distinct values. Enough to create meaningful segmentation; not so many that the structure becomes unmaintainable.

Setting labels once and forgetting them. A performance tier label that was set six months ago is worse than no label. It actively misdirects budget. If you can't commit to updating labels regularly, start with the lifecycle and margin labels (which change infrequently) before tackling the performance tier labels that need frequent refresh.

Not having a catch-all Asset Group. Any product without matching labels will be excluded from all your label-based Asset Groups and potentially receive no impressions at all. Always maintain a catch-all group.


Custom labels are the unsexy infrastructure of a well-optimised PMax campaign. But they're the mechanism that lets everything else — tROAS targets, Asset Group creative strategies, budget segmentation — operate with actual precision instead of blunt-instrument account-level settings.

Five label slots. Infinite structural leverage.

See how DukesMatrix manages custom labels automatically → ` },

"troas-vs-budget-pmax-performance": {
  meta: {
    title: "tROAS vs Budget: Which Lever Actually Controls Your PMax Performance?",
    excerpt: "Most advertisers pull one lever or the other. Here's why treating tROAS and budget as independent controls leads to suboptimal results — and what to do instead.",
    category: "tutorials",
    categoryName: "Bidding Strategy",
    date: "March 2026",
    readTime: "8 min read",
    author: "Duke Labs Team",
    publishedAt: "2026-03-02T07:00:00+11:00"
  },
  markdown: `# tROAS vs Budget: Which Lever Actually Controls Your PMax Performance?

There are two types of PMax advertisers, and both are doing it wrong.

The tROAS obsessive checks ROAS every morning. When it dips, they raise tROAS. When volume dips, they lower tROAS. Their target ROAS has been adjusted 14 times in the past two months. The algorithm is in a permanent state of relearning, and their impression share looks like an EKG.

The budget thrower treats Google Ads like a vending machine. ROAS looks good? Add budget. ROAS drops? Cut budget. Rinse and repeat. Their account has stable-ish ROAS but consistently underperforms because they're leaving efficiency gains untouched while managing spend like a tap.

The insight that changes how you manage PMax: tROAS and budget are not independent controls. They're coupled variables in a single optimisation system, and adjusting one without accounting for the other creates a predictable cascade of unintended consequences.

What tROAS Actually Does

Target ROAS sets the efficiency floor for Google's Smart Bidding algorithm.

When you set tROAS to 400%, you're telling the algorithm: "I want you to generate $4 in conversion value for every $1 I spend. Don't bid in auctions where you can't meet that threshold."

The algorithm responds by becoming selective. For each auction, it predicts the conversion value it expects to generate and only bids where the modelled return clears the floor. The higher you set tROAS, the more selective (and therefore more conservative) the algorithm becomes.

This has a direct, predictable effect on impression share. Set tROAS to 800% and Google will only bid on the most conversion-certain opportunities — which typically means branded searches, warm remarketing audiences, and a narrow slice of high-intent queries. Everything else gets passed.

The trade-off is explicit: higher tROAS = higher efficiency floor, lower volume ceiling. You can't have a 12x ROAS target and expect high impression share. The auctions that would generate 12x are rare. Google will find some of them, but it will skip the vast majority of the catalogue.

This is fine as a deliberate strategy for protecting margin on a specific product tier. It's catastrophic as a response to a temporary ROAS dip across the whole account.

What Budget Actually Does

Budget sets the volume ceiling for the campaign.

Google's pacing algorithm will attempt to spend the full daily budget. If the budget is $500/day, Google will find $500/day worth of auctions to enter. The question is which auctions — and that's where the interaction with tROAS comes in.

Budget alone doesn't control efficiency. A campaign with a $10,000/day budget and no tROAS constraint will find a way to spend $10,000 — including on auctions where the modelled conversion probability is low and the ROAS will be poor. Budget without an efficiency floor is a blank cheque.

Conversely, a campaign with a $100/day budget and an aggressive tROAS is artificially constrained in both dimensions simultaneously. The algorithm has a small pool of money and a high bar for spending it. It will often fail to spend the full budget not because opportunities don't exist, but because the combination of constraints eliminates most of them.

This is the scenario where budget "utilisation" drops below 80% — a common signal that your tROAS target is too aggressive for your budget level.

The Interaction Effect: Why This Is the Real Issue

Here's the core insight that most PMax documentation glosses over: tROAS and budget aren't just two separate dials. They define a constraint system, and the algorithm operates within the feasible region defined by both simultaneously.

Let's model two common mistake scenarios.

Scenario A: Raising tROAS Without Touching Budget

You're running at $500/day, 400% tROAS, generating 6.2x average ROAS. It's been a great month. You decide to push efficiency and raise tROAS to 600%.

What happens: the algorithm recalibrates. At 600% tROAS, a large portion of the auctions that previously generated 400–599% ROAS are now below the floor. The algorithm steps back from those auctions. Impression share drops — let's say by 40%.

The campaign is now only spending $280–320/day against a $500 budget. Your ROAS might tick up slightly (say, 6.8x), but your revenue drops by roughly 40% because you've exited most of the addressable market. You've got a high-efficiency, low-volume campaign when what you wanted was a high-efficiency, same-volume campaign.

The correct adjustment was to either raise budget proportionally (to access more high-ROAS auctions) OR accept the volume reduction as a deliberate trade-off — not assume the efficiency gains were free.

Scenario B: Raising Budget Without Adjusting tROAS

You're running at $500/day, 400% tROAS, hitting roughly 4.1x average ROAS. Revenue looks good and you want to scale. You double the budget to $1,000/day.

What happens: Google has $1,000 to spend and a 400% tROAS floor. It quickly exhausts the auctions that reliably generate 400%+. To spend the remaining budget, it starts entering lower-quality auctions — broader match queries, less-warm audiences, more competitive verticals. Your ROAS starts dropping.

By the end of week two, you're at $1,000/day spend with a 2.8x ROAS. The nominal tROAS target is still 400%, but the algorithm is effectively ignoring it because it needs to find somewhere to put the money.

This is the most common scaling failure in PMax accounts: budget increases that destroy ROAS because the tROAS target wasn't recalibrated to match the new volume expectations.

The Whack-a-Mole Problem

When advertisers try to fix either scenario, they create a feedback loop.

  • ROAS drops (Scenario B) → raise tROAS → impressions collapse (Scenario A) → lower tROAS → ROAS drops again → raise budget to compensate → ROAS drops...

Every intervention introduces a new variable before the algorithm has stabilised from the last one. The campaign spends weeks in relearning phases, performance data becomes unreliable for decision-making, and the advertiser loses confidence in the system.

The algorithm isn't at fault. It's responding rationally to a rapidly changing constraint environment. The problem is the adjustment strategy.

The Bivariate Approach

The correct way to manage tROAS and budget is to model them as a coupled system — not as independent variables.

For any given campaign, there is an efficient frontier: a curve of tROAS × budget combinations that maximise conversion value subject to a minimum ROAS constraint. At one extreme, you have very high tROAS + low budget = high efficiency, low volume. At the other, very low tROAS + high budget = high volume, low efficiency. The goal is to find the point on the efficient frontier that matches your strategic objective.

This requires modelling what the algorithm will do at different combinations — not just adjusting one and waiting to see what happens.
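
To make "modelling the combination" concrete, here is a deliberately toy sketch: an assumed diminishing-returns response curve (a pure illustration, not a fitted model and not how any real bidder works) plus a grid search for the best feasible tROAS × budget pair. tROAS is expressed as a multiple here (2.0 means 200%).

```python
import itertools

def simulate(troas: float, budget: float):
    """Toy response model (an assumption for illustration only):
    revenue = 100 * sqrt(spend), so marginal ROAS = 50 / sqrt(spend).
    The bidder stops spending when marginal ROAS falls below the tROAS
    floor, or when the daily budget runs out, whichever comes first."""
    spend_at_floor = (50.0 / troas) ** 2  # spend where marginal ROAS == troas
    spend = min(budget, spend_at_floor)
    revenue = 100.0 * spend ** 0.5
    return spend, revenue, revenue / spend

def best_setting(troas_grid, budget_grid, min_roas):
    """Grid-search the tROAS x budget plane: maximise revenue subject to
    an average-ROAS constraint. A crude stand-in for frontier modelling."""
    best = None
    for troas, budget in itertools.product(troas_grid, budget_grid):
        spend, revenue, avg_roas = simulate(troas, budget)
        if avg_roas >= min_roas and (best is None or revenue > best[2]):
            best = (troas, budget, revenue, spend, avg_roas)
    return best
```

Under this made-up curve, `best_setting([1, 2, 3, 4, 6, 8], [250, 500, 1000], 4.0)` picks tROAS 2.0 with the $1,000 budget: the lower floor only pays off once the budget rises with it, which is exactly the coupling the two scenarios above describe.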

DukesMatrix's Bivariate Portfolio Optimisation does exactly this. It maps the tROAS × budget grid across your campaign's historical performance data, identifies the efficient frontier, and recommends the combination of settings that maximises conversion value at or above your minimum ROAS threshold. Instead of reactive adjustments, you get a proactive setting recommendation grounded in your account's specific performance model.

Practical Rules for Manual Management

Until you have a system that models the interaction automatically, here are the ground rules for managing this manually:

1. Make Incremental Changes Only

Never adjust tROAS or budget by more than 15% in a single change. The algorithm's learning phase is triggered by significant changes to its operating parameters. Small incremental changes allow it to adjust continuously without a full relearn cycle.

A 15% tROAS increase is typically absorbed within a few days. A 50% increase can trigger a multi-week learning phase where performance data is unreliable.

2. When Making Significant Moves, Adjust Both Together

If you want to scale to significantly higher revenue, model what happens: higher budget requires lower effective tROAS (or the same tROAS needs proportionally more high-ROAS inventory to exist). If you're raising budget by 30%, consider whether your tROAS target needs to come down by 10–15% to give the algorithm room to spend efficiently.

The two adjustments should be planned simultaneously, not sequentially.
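Rules 1 and 2 combine into a simple planning helper: break any large move into simultaneous steps of at most 15%. The `plan_steps` function and the 15% cap come from this article's rules of thumb, not from any Google Ads API.

```python
# Break a significant change into multiplicative steps of <= max_step each,
# so tROAS and budget can be moved together without triggering a full relearn.

def plan_steps(current: float, target: float, max_step: float = 0.15):
    steps = []
    value = current
    while abs(target / value - 1) > max_step:
        direction = 1 if target > value else -1
        value = round(value * (1 + direction * max_step), 2)
        steps.append(value)
    if abs(value - target) > 1e-9:
        steps.append(target)
    return steps

# Scale budget $500 -> $650 (+30%) while easing tROAS 4.0 -> 3.5 (-12.5%):
budget_steps = plan_steps(500, 650)  # two steps: [575.0, 650]
troas_steps = plan_steps(4.0, 3.5)   # fits in one <=15% step: [3.5]
```

Apply each budget step and its paired tROAS step at the same time, then wait out the stabilisation window (rule 3) before the next step.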

3. Wait for Stabilisation Before Evaluating

After any meaningful change, allow a minimum of 14 days before drawing conclusions or making further adjustments. Seven days is insufficient — weekly seasonality cycles and algorithm learning phases mean you need two full weeks to see a stable performance signal.

If you're reviewing performance at day 5 and the numbers look bad, the instinct is to adjust. Resist it. That signal is noise from the relearning phase, not a stable read on whether the new settings are working.

4. Track Impression Share Alongside ROAS

ROAS in isolation is an incomplete signal. A campaign with 90% impression share and 4x ROAS is a very different beast from a campaign with 30% impression share and 4x ROAS. The second campaign has massive headroom for scaling; the first is near its ceiling.

When you raise tROAS and impression share drops sharply, that's the algorithm telling you your tROAS target is above what the available market can support at that budget level.
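As a rough sketch, the two signals can be read together like this; the thresholds are illustrative choices for this example, not Google's.

```python
# Read ROAS and impression share as a pair, per the rule above.

def scaling_headroom(roas: float, impression_share: float) -> str:
    if roas >= 4.0 and impression_share < 0.5:
        return "headroom: scale budget"
    if roas >= 4.0:
        return "near ceiling: protect efficiency"
    return "fix efficiency before scaling"

print(scaling_headroom(4.0, 0.30))  # the second campaign in the example
print(scaling_headroom(4.0, 0.90))  # the first: same ROAS, no headroom
```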


The fundamental shift is from thinking of tROAS and budget as independent dials to treating them as coordinates on a performance surface. Moving one changes your position on that surface. Moving both together lets you navigate it intentionally.

Most advertisers are navigating blind. The ones who model the interaction first move in a straight line toward their target.

See how DukesMatrix models your tROAS × budget curve → ` },

"star-cash-cow-dog-products-google-ads": {
  meta: {
    title: "How to Identify Your Star, Cash Cow, and Dog Products in Google Ads",
    excerpt: "The BCG Matrix applied to your Google Ads account. The exact metrics and thresholds to classify every product in your catalogue — plus a spreadsheet methodology.",
    category: "bcg-matrix",
    categoryName: "Product Segmentation",
    date: "March 2026",
    readTime: "10 min read",
    author: "Duke Labs Team",
    publishedAt: "2026-03-03T07:00:00+11:00"
  },
  markdown: `# How to Identify Your Star, Cash Cow, and Dog Products in Google Ads

You already know the BCG Matrix — four quadrants, two dimensions, invented by the Boston Consulting Group in 1970 for corporate portfolio strategy. This post isn't about that theory. It's about applying it to your Google Ads account specifically, using actual ad performance data instead of market share estimates. The framework translates directly. The axes just need redefining.

Redefining the Axes for Google Ads

The original BCG Matrix uses market growth rate (y-axis) and relative market share (x-axis). These are useful for corporate strategy but unmeasurable at the product level in a Google Ads context — you don't have SKU-level market share data.

For Google Ads, we substitute with dimensions you do have:

  • Y-axis: ROAS efficiency — how efficiently each product converts ad spend into revenue, expressed as ROAS relative to your account average
  • X-axis: Revenue contribution — what percentage of your total account conversion value this product generates

This 2×2 maps directly onto your advertising portfolio. Every product in your catalogue can be plotted on it. The quadrants tell you exactly what to do.

The Four Quadrants, Redefined

Stars: High Relative ROAS + High Revenue Contribution

Stars are your best performers on both dimensions. They're generating a disproportionately large share of your total revenue and doing it at above-average efficiency. These are the products that, if you scale them correctly, drive compound growth.

The right action: invest aggressively. Raise budget on the Asset Groups containing Stars, and hold the tROAS target at or near account average (pushing it up aggressively throttles volume) so the algorithm can capture all available high-intent traffic. Give Stars the best creative, the highest bid priority, and dedicated reporting.

What "Star" doesn't mean: complacency. A Star can become a Cash Cow if the category matures or competition intensifies. Monitor impression share — if it's declining while ROAS holds, a competitor is taking ground. Respond.

Cash Cows: High Relative ROAS + Low Revenue Contribution

Cash Cows are efficient but limited in scale. They might be high-ticket products with small search volumes, niche items with narrow audiences, or products that convert well but don't get many searches. They punch above their weight on ROAS, but they can't carry the account.

The right action: maintain, don't over-invest. Set a slightly higher tROAS target than account average to protect margins on constrained spend. These products don't need — and won't respond well to — aggressive budget increases, because the market for them is simply small. Trying to scale a Cash Cow typically results in falling ROAS as you exhaust the high-intent audience and start bidding on progressively weaker signals.

Use Cash Cows to fund your Contenders. The efficient revenue they generate at low budget is a net positive — just don't mistake their efficiency for scale potential.

Question Marks: Low Relative ROAS + High Revenue Contribution

Question Marks are the most interesting and most mishandled quadrant. These products are spending a lot (they have high revenue contribution, which means they're getting significant traffic and conversions) but doing it inefficiently. Low ROAS, high volume.

The common mistake: treating Question Marks like Dogs and suppressing them. That's wrong. A product with high revenue contribution has demand. People are searching for it, clicking on it, sometimes buying it. The low ROAS is a signal that something in the funnel is broken — not necessarily that the product is unsalvageable.

Diagnose before you suppress:

  • Feed quality: Is the title optimised? Does the description include relevant search terms? Is the product type attribute correctly populated?
  • Pricing: Check the Merchant Center Price Competitiveness report. Are you priced significantly above market? Above-market pricing drives down conversion rate and increases effective CPA.
  • Landing page: Is the product page fast, mobile-optimised, and conversion-focused? A high-traffic product with a poor landing page is a funnel leak.
  • Ad relevance: Is the creative in the Asset Group actually relevant to this product type?

Question Marks need a structured testing environment. Isolate them in their own Asset Group with a tROAS target set at or slightly below account average — enough to allow volume, but not so low that you're bleeding spend recklessly. Run for 60–90 days with clear improvement milestones. If ROAS improves to within 20% of account average, begin migrating to the Cash Cow or Star treatment. If it doesn't move, reclassify as Dog.

Dogs: Low Relative ROAS + Low Revenue Contribution

Dogs have neither efficiency nor volume going for them. They're spending money, generating minimal revenue, and the traffic signal is too weak to diagnose or fix.

The right action: suppress. There are two mechanisms:

  1. Exclude from PMax entirely. Use product exclusions at the campaign level in Google Ads. Clean and simple for clear Dogs with no redemption path.

  2. Isolate in a high-tROAS Asset Group. Set tROAS so high (e.g., 15–20x) that the algorithm effectively stops bidding, but you maintain data collection. Use this for products that might re-emerge as Question Marks during peak seasons.

Review your Dog quadrant quarterly. Market conditions change, competitors exit, pricing dynamics shift. A Dog in Q1 might have legitimate Question Mark potential in Q4.

Pulling the Right Metrics from Google Ads

Where to Find the Data

Navigate to the Products tab in Google Ads (left navigation → Products, or via your PMax campaign → Products sub-tab). This gives you a product-level breakdown of performance.

If you're running PMax, segment by campaign to isolate your PMax data from any Standard Shopping campaigns you might also be running (they share the Products tab).

Date range: Use a minimum of 30 days. For products with moderate traffic, 90 days gives more statistical stability and smooths out short-term anomalies. For seasonal catalogues, use a comparable seasonal window from the prior year alongside the current period.

The Columns You Need

Add these columns to your Products report (click the columns icon → Modify columns):

| Column | What It Tells You |
|---|---|
| Conv. value / cost | Product-level ROAS — your primary efficiency metric |
| Conversion value | Total revenue attributed — your volume metric |
| Conversions | Number of purchases — supporting volume data |
| Cost | Total ad spend — needed to calculate spend % |
| CPC | Average cost per click — signals auction competitiveness |
| Search lost IS (budget) | Whether you're budget-constrained on this product |

Export to CSV or Google Sheets.

The Spreadsheet Methodology

Here's the exact step-by-step process to classify every product in your catalogue.

Step 1: Export the Products Report

From the Products tab, apply your date range, add the columns above, and export to CSV.

Paste into a new Google Sheet. Columns should be roughly: Product ID, Product Title, Cost, Conversion Value, Conversions, Conv. Value/Cost (ROAS), CPC.

Step 2: Calculate Account Average ROAS

In an empty cell, calculate:

= SUM($D$2:$D$1000) / SUM($C$2:$C$1000)   (Conversion Value in column D, Cost in column C per the Step 1 layout; adjust the ranges to your data)

Name this cell account_avg_roas. This is your benchmark — every product ROAS will be evaluated relative to this number.

For a sample account: total conversion value $485,000, total cost $112,000 → account average ROAS = 4.33x.

Step 3: Add a Relative ROAS Column

Add a column labelled Relative ROAS; with the Step 1 export filling columns A–G, this lands in column H. Formula (Conv. Value/Cost sits in column F):

= F2 / account_avg_roas

A relative ROAS of 1.0 means this product performs exactly at account average. Above 1.5 = outperforming. Below 0.8 = underperforming.

Step 4: Add a Revenue Contribution (%) Column

Add a column labelled Revenue % (column I). Formula (Conversion Value is in column D):

= D2 / SUM($D$2:$D$1000)

(Adjust the range to match your data.) This tells you what fraction of total account revenue each product represents.

Step 5: Apply the Classification Formula

Add a column labelled Quadrant. This is the formula that classifies every product automatically. With Relative ROAS in column H and Revenue % in column I:

=IF(AND(H2>1.5,I2>0.5%),"Star",IF(AND(H2>1.5,I2<=0.5%),"Cash Cow",IF(AND(H2<=0.8,I2>0.5%),"Question Mark",IF(AND(H2<=0.8,I2<=0.5%),"Dog","Mid-tier"))))

Products sitting between the 0.8 and 1.5 relative ROAS cut-offs are labelled Mid-tier rather than defaulting to Dog, which matches the example output below.

Copy this formula down for every row. Every product is now classified.
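For catalogues too large for a spreadsheet, the same five steps translate directly to pandas. The rows below are hypothetical, and with only four products the 0.5% revenue threshold is purely illustrative (it's calibrated for full catalogues); the `quadrant` function includes the explicit Mid-tier band.

```python
import pandas as pd

# Hypothetical export rows; real data comes from the Products report CSV.
df = pd.DataFrame({
    "product": ["Earbuds Pro", "Phone Case", "Charger", "Old Cover"],
    "cost": [2190.0, 8030.0, 1110.0, 150.0],
    "conv_value": [15549.0, 23287.0, 9213.0, 200.0],
})

df = df[df["cost"] >= 100].copy()                    # minimum spend filter
avg_roas = df["conv_value"].sum() / df["cost"].sum()           # Step 2
df["rel_roas"] = (df["conv_value"] / df["cost"]) / avg_roas    # Step 3
df["revenue_pct"] = df["conv_value"] / df["conv_value"].sum()  # Step 4

def quadrant(row):  # Step 5, with the explicit mid-tier band
    high_eff, low_eff = row.rel_roas > 1.5, row.rel_roas <= 0.8
    high_rev = row.revenue_pct > 0.005
    if high_eff:
        return "Star" if high_rev else "Cash Cow"
    if low_eff:
        return "Question Mark" if high_rev else "Dog"
    return "Mid-tier"

df["quadrant"] = df.apply(quadrant, axis=1)
print(df[["product", "quadrant"]])
```

The same frame can then feed the custom-label update described further down.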

Example Classification Output

Here's what this looks like in practice across a sample of 6 products:

| Product | ROAS | Rel. ROAS | Revenue % | Classification |
|---|---|---|---|---|
| Wireless Earbuds Pro | 7.1x | 1.64 | 3.2% | Star |
| Premium HDMI Cable | 6.8x | 1.57 | 0.3% | Cash Cow |
| Budget Phone Case (Multi) | 2.9x | 0.67 | 4.8% | Question Mark |
| Smart Watch Series X | 6.2x | 1.43 | 2.1% | Mid-tier (neither threshold met) |
| Keyboard Cover (Discontinued) | 1.4x | 0.32 | 0.08% | Dog |
| Portable Charger 10000mAh | 8.3x | 1.92 | 1.9% | Star |

Note: The Smart Watch at 6.2x ROAS / 1.43 relative ROAS sits just below the Star threshold. That's fine — use your thresholds as starting points and exercise judgement on borderline cases. A product at 1.43 relative ROAS deserves Star-like treatment even if it doesn't quite make the cut numerically.

Calibrating the Thresholds for Your Account

The thresholds in the formula above (1.5× relative ROAS for "high", 0.8× for "low", and 0.5% revenue contribution separating high from low volume) are starting points, not absolutes. Calibrate them based on:

Catalogue size: A catalogue with 5,000 SKUs should use a lower revenue % threshold (even 0.1% = 5 products) than a catalogue with 50 SKUs (where 0.5% = 1 product and is probably too narrow).

Account ROAS distribution: If your account has a wide ROAS spread (products ranging from 0.5x to 20x), the 1.5× threshold will classify many products as Stars. Consider tightening it to 2× for more selectivity.

Minimum spend filter: Apply a minimum spend filter before classification (e.g., at least $100 spend in the period). Products with $5 in spend and 10x ROAS are statistical noise, not genuine Contenders.

Mapping Classifications to PMax Asset Groups

Once every product has a quadrant classification, update your Merchant Center custom labels (Label 0) to reflect the classification. Then structure your PMax campaign Asset Groups accordingly:

| Asset Group | Listing Group Condition | tROAS Target |
|---|---|---|
| Stars | Custom Label 0 = "Star" | Account average ROAS |
| Cash Cows | Custom Label 0 = "Cash-Cow" | 10–20% above account average |
| Question Marks | Custom Label 0 = "Question-Mark" | 10–15% below account average |
| Dogs / Low Priority | Custom Label 0 = "Dog" | Very high (effective suppression) |
| Catch-all | All other products | Account average or conservative |

The tROAS differentials are intentional. Stars should be allowed to scale — don't strangle them with an overly aggressive target. Cash Cows should be efficiency-protected. Question Marks need room to breathe and generate the data you need to diagnose them. Dogs need to be effectively frozen.
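Once classified, the labels can be pushed via a supplemental feed. A minimal sketch (the SKUs and classifications here are hypothetical; Merchant Center accepts tab-delimited supplemental feeds keyed on the `id` attribute):

```python
import csv
import io

# Hypothetical classification output from the spreadsheet or script above.
classified = {
    "SKU-001": "Star",
    "SKU-002": "Cash-Cow",
    "SKU-003": "Question-Mark",
    "SKU-004": "Dog",
}

# Build a two-column TSV: product id + custom_label_0.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t")
writer.writerow(["id", "custom_label_0"])
for sku, label in classified.items():
    writer.writerow([sku, label])

print(buf.getvalue())
```

Upload the resulting file as a supplemental feed, and the listing group conditions in the table above pick up the new labels on the next feed refresh.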

The Maintenance Problem

This classification is not a one-time exercise. Product performance changes — competitor pricing shifts, seasons turn, feed quality updates improve conversion rates, inventory depletes. A Star in October might be a Cash Cow in January.

The analysis above should be re-run monthly at minimum. For large catalogues, weekly. The custom labels in your Merchant Center feed need to be updated to reflect current classifications.

Manual re-runs are workable up to about 100–200 SKUs. Beyond that, the operational overhead becomes significant — and the risk of stale classifications misdirecting budget compounds with catalogue size.

DukesMatrix automates this classification loop: pulling product performance data from your Google Ads account, running the bivariate classification, updating Merchant Center custom labels, and adjusting Asset Group tROAS targets — on a continuous basis, not just when you have time to run the spreadsheet.


The BCG Matrix didn't make Boston Consulting Group famous because it's a clever framework. It made them famous because it turns complex portfolio decisions into actionable, unambiguous directives. That's exactly what your Google Ads account needs.

Classify your products. Act on the classification. Repeat.

See how DukesMatrix automates product classification → ` },

// === Scheduled posts ===
"price-competitiveness-google-shopping": {
  meta: {
    title: "Price Competitiveness in Google Shopping: Why You're Losing Auctions Before the Bidding Even Starts",
    excerpt: "Being 10–15% above market price suppresses your impression share before you even enter the auction. Here's how Google's price competitiveness signals work — and what to do about it.",
    category: "pmax-optimisation",
    categoryName: "Price Strategy",
    date: "March 2026",
    readTime: "9 min read",
    publishedAt: "2026-03-04T07:00:00+11:00",
  },
  markdown: `

The Thing Most Advertisers Don't Know: Google Judges You Before the Auction

Most Google Ads practitioners understand the auction. You set bids, Google weighs your bid against Quality Score and expected CTR, the winner gets the impression. Simple enough.

What far fewer people realise is that for Shopping and Performance Max, there's a pre-auction filter — and price is one of the primary inputs.

If your product is priced materially above the market benchmark for that item, Google can deprioritise or suppress your listing before a single bid is placed. Your campaign budget, your bid strategy, your beautifully optimised feed — none of it matters if Google has already decided your product is too expensive to show.

This is separate from Quality Score. It's a price signal, and it's baked into how Google's Shopping algorithm decides which products deserve impressions.

How Google Knows What "Market Price" Is

Google doesn't need to guess at market pricing. It has the most comprehensive view of retail pricing on the planet: every merchant that submits a Merchant Center feed is, in effect, contributing to a global price database.

Google aggregates pricing data across all retailers listing the same or similar products. From this, it derives a benchmark price — the market-representative price for a given product at a given point in time. This benchmark is dynamic. It updates as prices across the market move.

Merchant Center surfaces this data in the Price Competitiveness report, found under Products → Price competitiveness. The report shows:

  • Your price — what you're currently listing the product at
  • Benchmark price — what the market is charging, on aggregate
  • Price competitiveness % — how your price compares to the benchmark (positive means you're cheaper, negative means you're more expensive)

This is first-party data from Google itself. If you're not checking this report regularly, you're operating without one of the most important signals in your Shopping ecosystem.

What Happens When You're Above Benchmark

Here's the mechanism. Google's algorithm is optimised to surface products that users are likely to click on and buy. Competitively priced products get higher CTR. Higher CTR means better user experience on Google's platform, which means more ad revenue for Google in aggregate.

The system therefore has a structural incentive to favour competitive pricing. Products priced significantly above benchmark see two forms of suppression:

  1. Organic Shopping demotion — in free listings, above-benchmark products are ranked lower or omitted
  2. Paid Shopping and PMax de-prioritisation — the algorithm reduces impression share for above-benchmark products even in paid placements, because it predicts lower CTR

This isn't Google penalising you arbitrarily. It's the algorithm doing exactly what it's designed to do: maximise user satisfaction and click-through rates across the platform.

The ~15% Threshold (And What Happens Beyond It)

Based on published Google guidance and the collective experience of practitioners managing large Shopping catalogues, the practical thresholds work roughly like this:

  • 0–10% above benchmark: Minimal suppression. Some marginal impression share impact, but generally manageable.
  • 10–15% above benchmark: Meaningful impression share drops. Products in this range start to see noticeably reduced serving, particularly in competitive categories.
  • 15–20% above benchmark: Significant suppression. These products will consistently underperform their feed quality and bid levels would suggest.
  • 20%+ above benchmark: Severe suppression. At this point, serving can be near-zero for some product categories regardless of how much budget you throw at the campaign.

The threshold isn't a hard line — it varies by category, competition density, and how many other merchants are listing the same product. In commoditised categories (consumer electronics, branded apparel), the algorithm is more aggressive about price signals. In niche categories with fewer comparables, there's more tolerance.
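Those bands can be expressed as a small lookup. The sign convention matches the Merchant Center report described below (negative = above benchmark), and the band edges are this article's rules of thumb, not published Google thresholds:

```python
# Map a price competitiveness % to the rough suppression tiers above.

def suppression_tier(price_competitiveness_pct: float) -> str:
    above_benchmark = -price_competitiveness_pct  # e.g. -12% -> 12% above
    if above_benchmark <= 10:
        return "minimal"
    if above_benchmark <= 15:
        return "meaningful"
    if above_benchmark <= 20:
        return "significant"
    return "severe"

print(suppression_tier(-12))  # 12% above benchmark -> "meaningful"
```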

Finding Your Price Competitiveness Report

Merchant Center → Products → Price competitiveness

The columns you care about:

Column What It Tells You
Your price Current listed price
Benchmark price Google's market aggregate
Price competitiveness % How far off benchmark you are

Start with a filter: price competitiveness % < -10% (i.e., your price is more than 10% above benchmark).

Export that list. Cross-reference it with your PMax impression data. You'll almost certainly find a strong correlation between products that are above benchmark and products that are getting minimal PMax impressions despite being in your feed.

That's not a budget problem. That's a pricing signal problem.
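The cross-reference step is a straightforward join. The column names and rows here are assumptions for illustration; match them to your actual Merchant Center and Google Ads exports.

```python
import pandas as pd

# Hypothetical exports: price competitiveness report + product impressions.
price = pd.DataFrame({
    "item_id": ["A1", "A2", "A3"],
    "price_competitiveness_pct": [-18.0, 2.0, -4.0],
})
perf = pd.DataFrame({
    "item_id": ["A1", "A2", "A3"],
    "impressions": [120, 54000, 31000],
})

report = price.merge(perf, on="item_id")
# Flag items that are >10% above benchmark AND below-median on impressions.
report["likely_price_suppressed"] = (
    (report["price_competitiveness_pct"] < -10)
    & (report["impressions"] < report["impressions"].median())
)
print(report[report["likely_price_suppressed"]]["item_id"].tolist())
```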

If You Can Lower Your Prices

This is the cleanest path. The calculation is straightforward:

Minimum viable price = cost × (1 + minimum acceptable margin %)

If the benchmark price is above your minimum viable price, you have room to move. The target: get within 5% of benchmark. You don't need to win on price — you just need to stop being flagged as materially above market.

For a concrete example: if your cost is $80 and your minimum margin is 20%, your minimum viable price is $96. If you're currently listed at $115 against a $105 benchmark, you can drop to $105 and stay well above minimum margin while getting back into the competitive range.

Prioritise your highest-traffic products first. A 5% price reduction on a product that drives 30% of your Shopping impressions will have a disproportionate impact on overall campaign performance.
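The worked example above, as arithmetic. This uses the article's markup-style formula; `repricing_decision` is a hypothetical helper, not an API.

```python
# Decide whether a product can be repriced toward benchmark without
# breaching the minimum acceptable margin.

def repricing_decision(cost, min_margin, current_price, benchmark):
    min_viable = cost * (1 + min_margin)      # article's formula
    # Target: move to benchmark, but never below the minimum viable price.
    target = max(min_viable, benchmark)
    can_reprice = min_viable <= benchmark and current_price > benchmark
    return round(min_viable, 2), round(target, 2), can_reprice

print(repricing_decision(cost=80, min_margin=0.20,
                         current_price=115, benchmark=105))
# -> (96.0, 105.0, True): room to drop from $115 to $105
```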

If You Cannot Lower Your Prices

Some products genuinely can't be priced at benchmark — MAP agreements, margin constraints, positioning strategy. That doesn't mean you're helpless, but it does mean you need to compensate through other quality signals and be realistic about expectations.

1. Strengthen Your Other Quality Signals

Google's ranking algorithm is multi-factor. While you can't outbid your way past a price competitiveness problem, you can partially offset it with stronger signals elsewhere:

  • Images: Use high-resolution lifestyle images, not just white-background product shots. Google's image quality signals affect CTR predictions.
  • Titles: Richer, more specific titles help Google match your product to relevant queries. Include brand, model, key specs, and size/colour where relevant.
  • Reviews and ratings: Google product ratings aggregate from Google Shopping reviews and third-party review partners. Products with strong ratings see higher CTR. If you're not actively working to generate reviews, start.

2. Verify Your Google Product Category

This is underappreciated. If your product is in the wrong Google Product Category, Google is benchmarking your price against the wrong set of competitors. A premium kitchen knife listed under a broad "Kitchen Tools" category might look wildly overpriced compared to silicone spatulas. Re-categorise to the most specific applicable GPC — this ensures your benchmark is calculated against actual comparables.

3. Use Promotional Pricing Strategically

The sale_price attribute in your feed does two things: it signals a lower price during the promotional period, AND it can trigger Google's price drop badge in Shopping results (subject to Google's price-history requirements). The badge improves CTR even if your sale price is still slightly above benchmark. Reserve this for your highest-traffic periods — peak seasons, key retail events.

4. Accept Reduced Impression Share and Redirect Investment

Some products simply aren't Shopping viable at their current price. The pragmatic answer: exclude them from your Shopping/PMax campaigns or give them a minimal budget, and redirect your investment to the products where you're price-competitive. A dollar spent on a competitively priced product generates more return than a dollar wasted fighting a suppressed one.

The PMax Angle: Why Some Products Get Almost No Impressions

This is where the price competitiveness issue becomes a PMax diagnostic issue.

When you look at asset group reporting in PMax and see that certain products are getting near-zero impressions despite healthy campaign budgets, the default assumption is that budget is being allocated elsewhere. Sometimes that's true. But often, it's the price competitiveness signal at work.

Google's PMax algorithm builds a serving model that predicts the probability of conversion for every product in your feed. Price competitiveness is one of the inputs to that model. Products that are above benchmark receive lower predicted conversion probability, which means the algorithm allocates less serving budget to them — automatically, without any explicit setting you can adjust.

This is the hidden tax of poor price competitiveness in PMax: it's not just that those products underperform, it's that they consume feed slots and campaign overhead while generating almost no impressions. The budget doesn't disappear — it flows to other products. But if your best-margin products are the ones with pricing issues, you have a systematic problem.

How DukesMatrix Surfaces This

DukesMatrix integrates Merchant Center price competitiveness data into its product segmentation analysis. When a product is showing poor performance metrics — low impressions, low conversion rate, weak ROAS — the platform flags whether price competitiveness is a likely contributing factor.

This matters because the diagnosis changes the prescription. A product underperforming because of a feed quality issue needs different action than one underperforming because it's 18% above benchmark. Without visibility into the price signal, you're optimising blind.

The segmentation layer in DukesMatrix lets you filter your product catalogue by performance tier and price competitiveness together, so you can identify: which underperforming products have a pricing problem that's solvable, and which ones need to be deprioritised in your PMax investment.

The Practical Checklist

  1. Pull the Price Competitiveness report in Merchant Center today
  2. Filter for products >10% above benchmark
  3. Cross-reference with PMax impression data — confirm the correlation
  4. For products where you can lower price: calculate minimum viable price and adjust
  5. For products where you can't: audit GPC assignment, improve image/title quality, evaluate whether they belong in your Shopping campaign at all
  6. Build this into a monthly review cadence — benchmark prices shift as the market moves

Price competitiveness isn't a one-time fix. It's an ongoing variable in a dynamic market. The advertisers who treat it as a routine operational metric will consistently outperform those who only look at bids and budgets.

Ready to optimise your PMax campaigns?

Start free. Connect in 5 minutes. First month on us.

Start Free Trial