Google launched Performance Max in 2021 promising to simplify ecommerce advertising. A single campaign, all Google channels, automation that optimizes in real time. For many teams, it was a relief to stop managing Search, Display and Shopping separately.

The problem appeared later. Performance Max makes budget distribution decisions that the team can't fully see. The system doesn't exactly fail; it optimizes based on its own signals, and those signals tend to favor what worked before.

The most common result is this: the products with the best sales record account for most of the spending. New products don't generate data because they don't receive traffic.

And there is a third group, products with real potential, that have gone months without significant exposure and remain that way because the algorithm has no reason to test them.

Why organizing by category doesn't solve the Performance Max problem

The instinctive response of many teams is to organize campaigns by category. Shoes in one campaign, accessories in another. It makes sense from a catalog perspective, but not from a performance perspective.

The PMax algorithm does not distribute the budget evenly within a campaign. Within “shoes”, it will favor the models with a conversion history, even though you're paying for all the shoes to compete. The ones that were already selling well keep receiving the spend. The rest wait.

The organization that does work is by actual performance. Group products based on how they behave, not what they are.

The segmentation that changes how PMax works

The most direct framework divides the catalog into three groups.

The first group is star products

High ROAS, consistent conversions, clicks that justify the spend. The objective with this group is to maintain profitability while maximizing volume. The ROAS targets are higher, between 3x and 5x depending on the business margin.

The second group is zombie products

They have been in the catalog for some time but with insufficient exposure to generate relevant data. They may be bad products, or they may be good products that the algorithm never tested because they had no track record.

For this group, the goal is visibility, not immediate profitability. The ROAS targets are lower, between 0.5x and 2x, because the goal is to obtain data to make a real decision about each SKU.

The third group is new products

Newly added products that can't compete in the same campaign as star products because they don't have a track record. They need a separate campaign with different evaluation criteria: the KPI is not ROAS, it's visibility and initial behavioral data.

The thresholds that define each PMax segment

For segmentation to work in practice, the team needs to precisely define which metrics determine which group each product is in. Some reference parameters:

  1. Stars: ROAS above 3x to 5x, volume of clicks sufficient for the data to be statistically relevant, consistent conversions over the period of analysis.
  2. Zombies: ROAS below 2x, or insufficient data to evaluate, or low clicks in relation to the catalog average. The exact threshold depends on the margin and volume of the business.
  3. New products: criteria based on the date added to the catalog, for example products added in the last 30 days, with awareness and data-accumulation objectives before the product is evaluated against the other groups.

These thresholds are not universal. A business with high margins can tolerate a lower star ROAS than one with tight margins. The point is to define them clearly so that the classification is consistent and does not depend on the criteria of the analyst on duty.
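As a sketch, the thresholds above can be turned into a single classification rule so that the grouping doesn't depend on the analyst on duty. The cutoffs below (3x ROAS, 100 clicks, 30 days) and the function name `classify` are illustrative placeholders, not recommendations or tool settings; each business substitutes its own numbers.

```python
from datetime import date, timedelta

# Hypothetical thresholds -- the article stresses these depend on margin,
# volume, and season. None of these names come from Google Ads itself.
STAR_ROAS = 3.0    # ROAS at or above this, with enough clicks -> star
MIN_CLICKS = 100   # minimum clicks for the ROAS figure to be trusted
NEW_DAYS = 30      # products added within this many days -> new

def classify(roas, clicks, added_on, today=None):
    """Assign a SKU to 'new', 'star', or 'zombie'."""
    today = today or date.today()
    if (today - added_on).days <= NEW_DAYS:
        return "new"      # judged on visibility and data, not ROAS
    if clicks >= MIN_CLICKS and roas >= STAR_ROAS:
        return "star"
    return "zombie"       # low ROAS, or too little data to evaluate

# A product added yesterday is 'new' regardless of its early ROAS:
print(classify(roas=4.2, clicks=500,
               added_on=date.today() - timedelta(days=1)))  # -> new
```

The point of writing it down, even this crudely, is that the same inputs always produce the same group.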

The analysis period matters more than it seems

Many teams use 30-day windows to evaluate performance. For catalogs that change fast, that's too slow.

With a 30-day window, a product that performed well three weeks ago and started to fall this week still seems profitable in the aggregate.

And a seasonal product that took off ten days ago doesn't yet show, in the aggregate, the potential it has demonstrated in recent days.

A 14-day window provides more up-to-date signals. It is especially relevant in fashion, home, and any category where demand changes with trends or seasonality.

The trade-off is that with less data there is more noise, so it makes sense to combine the short window with a minimum volume of clicks before making reclassification decisions.
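A minimal sketch of that combination, assuming daily revenue/cost/click records per product: compute ROAS over the last 14 days, but refuse to emit a signal when clicks fall below a chosen minimum. The record layout and the `windowed_roas` name are assumptions for illustration.

```python
def windowed_roas(daily, window=14, min_clicks=100):
    """ROAS over the last `window` days of {"revenue", "cost", "clicks"}
    records. Returns None when there are too few clicks (or no spend)
    to trust the signal, so noisy SKUs are not reclassified."""
    recent = daily[-window:]
    revenue = sum(d["revenue"] for d in recent)
    cost = sum(d["cost"] for d in recent)
    clicks = sum(d["clicks"] for d in recent)
    if clicks < min_clicks or cost == 0:
        return None   # not enough data -- hold the current group
    return revenue / cost
```

Returning "no signal" instead of a shaky number is what lets the short window coexist with stable group assignments.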

The most time-saving step: automating movement between groups

Segmentation works if products move between groups when their performance changes. If a person does that manually, reviewing SKU by SKU, the system doesn't scale to large catalogs.

The way to make it sustainable is to define rules that automatically move products.

If a zombie product exceeds a ROAS of 3x in 14 days, it goes to the star group. If a star product falls below 2x in the same period, it goes down to the zombie group for review. New products always enter the group of novelties and migrate to zombie or star after a defined period of data accumulation.
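Those movement rules can be expressed as a small state function. The 3x and 2x cutoffs and the 30-day graduation period come from the examples in this section; the `next_group` function itself is a hypothetical sketch of the logic a feed tool would run per SKU.

```python
def next_group(current, roas_14d, days_in_catalog, new_period=30):
    """Apply the movement rules between groups.
    roas_14d of None means insufficient data: the product stays put."""
    if current == "new":
        if days_in_catalog <= new_period:
            return "new"     # still accumulating data, don't judge yet
        current = "zombie"   # graduation: evaluate like any other SKU
    if roas_14d is None:
        return current       # too little signal to move anyone
    if current == "zombie" and roas_14d >= 3.0:
        return "star"        # promotion: proven over the 14-day window
    if current == "star" and roas_14d < 2.0:
        return "zombie"      # demotion for review
    return current
```

Run nightly over the whole catalog, a function like this replaces the SKU-by-SKU review entirely.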

Feed management tools allow this logic to be automated without the team having to review each product individually.

The case documented by Channable with La Maison Simons, a Canadian fashion retailer, illustrates the kind of result this approach can produce: ROAS that nearly doubled in three years, a lower cost per click, and a 14% increase in average order value.

Products that didn't receive exposure before ended up being some of the best performing once they had a campaign designed to give them visibility.

The same logic applies to other channels

Once the star/zombie/new segmentation exists for Google, it makes sense to replicate it on Meta, TikTok, Pinterest and any other paid channel where the team is active.

A zombie product on Google may have traction on TikTok. The audience profile that converts on one channel doesn't necessarily convert on another.

Having the same classification on every channel makes it possible to see where each product actually works, and to distribute the budget accordingly, instead of assuming that what didn't work on Google doesn't work anywhere.

Consistency between channels also simplifies reporting. Instead of analyzing the performance of each platform separately, the team can evaluate how each product segment moves across all channels, and detect patterns that wouldn't be visible by looking at Google or Meta in isolation.

What the paid media team needs to optimize Performance Max

Three specific things.

  1. First, clarity on what metrics define each group for the specific business. ROAS thresholds vary depending on the margin, category and objectives for each season. There is no universal number.
  2. Second, centralized visibility of SKU-level performance. PMax doesn't natively deliver that granularity, so in most cases you need a feed management tool or integration that consolidates product data from all campaigns and channels.
  3. Third, defined rules for automatic movement between groups. Without automation, segmentation becomes another manual process that the team has to carry out week to week, and that doesn't scale.

The starting point doesn't have to be perfect. Starting with three simple campaigns and thresholds is already better than a single campaign organized by category where the algorithm distributes the budget according to its own history.

Performance segmentation doesn't eliminate PMax automation; it makes the automation work with better information about what deserves spend and what still has to prove it's worth it.

Your brand deserves to be visible. Let's create an impactful strategy together.