- We suggest testing at least 5-7 new creative sets per week, but you can add creative sets at any time; adding more creatives gives the model more options to test with new audiences.
- Recommendation: Clone existing creative sets and swap in new videos/LPs instead of creating new sets from scratch. See: Directions for uploading videos and ad sets.
- Let the optimizer manage spend: our optimizer will automatically reduce spend on creatives that don't perform. There is no need to pause them manually, and doing so may limit a creative's opportunity to explore.
- Aim for a strong CTR on our platform.
- Higher CTR indicates users are more interested and engaged, which helps our ad platform find relevant consumers.
- If CTR is low, it likely means your ad is not resonating with users, and you should be consistently testing new creatives.
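For reference, CTR is simply clicks divided by impressions. A minimal sketch; the numbers are illustrative, not platform benchmarks:

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks divided by impressions."""
    return clicks / impressions if impressions else 0.0

# Illustrative numbers only: 1,200 clicks on 80,000 impressions.
print(f"CTR = {ctr(1200, 80000):.2%}")  # CTR = 1.50%
```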
HTML End Cards account for more than 50% of a creative set's clicks, so it is critical to keep testing new ones to maximize the scale of your campaign.
As you find new static winners on other platforms, experiment with turning them into end cards with your HTML generator. Also experiment with GIFs and animations, as well as creating more custom end cards for even better performance. Apply each new end card to your top 2-3 videos, keeping the landing URL consistent. That data should give you a good read on whether the new end cards perform well, and limiting the test to 2-3 top performers ensures you don't redistribute too many impressions away from the existing top creative sets.
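For orientation, an HTML end card is a self-contained HTML page shown as a screen of the ad. The toy sketch below writes a static-image end card; the markup, file names, and URL are illustrative assumptions, not AppLovin's end card spec, so consult the platform's requirements for sizing and click handling:

```python
# Toy illustration only: a minimal static-image end card.
# The markup, file names, and URL are placeholders, not AppLovin's spec.
end_card_html = """<!DOCTYPE html>
<html>
  <head><meta name="viewport" content="width=device-width, initial-scale=1"></head>
  <body style="margin:0">
    <a href="https://example.com/landing">
      <img src="winner_static.jpg" alt="Offer" style="width:100%;height:auto">
    </a>
  </body>
</html>"""

with open("end_card.html", "w") as f:
    f.write(end_card_html)
```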
The Dynamic Product Catalog is a third screen in your ad unit that dynamically showcases your products. Enabling DPA can be a strong performance lever, often driving up to a 30% lift in checkouts.
To activate DPA in your creative set, complete the required integration steps.
By default, we can create a catalog using data from your AppLovin pixel; please note that exclusions and customizations are not supported at this time.
If you opt to use a feed provider or CSV method, this will override the pixel-based catalog and you’ll have greater control over which items are displayed.
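As a rough sketch of the CSV method, the snippet below writes a minimal product feed. The column names are assumptions for illustration only; use the schema specified in the integration documentation:

```python
import csv

# Hypothetical feed columns -- illustrative only, not a documented schema.
products = [
    {"id": "SKU-001", "title": "Canvas Tote", "price": "29.99 USD",
     "image_link": "https://example.com/img/tote.jpg",
     "link": "https://example.com/products/tote"},
    {"id": "SKU-002", "title": "Travel Mug", "price": "19.99 USD",
     "image_link": "https://example.com/img/mug.jpg",
     "link": "https://example.com/products/mug"},
]

with open("product_feed.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(products[0].keys()))
    writer.writeheader()
    writer.writerows(products)
```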
Brand information appears on ads and helps establish a consistent brand experience for users. It can be configured at the Advertiser, Campaign, or Creative level. By default, creatives will inherit advertiser-level brand info unless overridden at the campaign or creative level. When creating a new account, it's recommended to populate the advertiser-level brand info to establish a baseline experience.
Advertisers are encouraged to test variations at the creative level—such as descriptions, CTAs, and button colors—to identify top-performing combinations.
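Conceptually, the inheritance works as a fallback chain: creative-level values override campaign-level values, which override advertiser-level defaults. A minimal sketch of that resolution order (field names are illustrative, not the platform's actual schema):

```python
from typing import Optional

def resolve_brand_info(advertiser: dict, campaign: Optional[dict] = None,
                       creative: Optional[dict] = None) -> dict:
    """Merge brand info: advertiser defaults, overridden by campaign,
    overridden in turn by creative-level settings."""
    merged = dict(advertiser)
    merged.update(campaign or {})
    merged.update(creative or {})
    return merged

# Illustrative: the creative overrides only the CTA; the rest is inherited.
advertiser = {"brand_name": "Acme", "cta": "Shop Now", "button_color": "#0055FF"}
creative = {"cta": "Get 20% Off"}
print(resolve_brand_info(advertiser, creative=creative))
# {'brand_name': 'Acme', 'cta': 'Get 20% Off', 'button_color': '#0055FF'}
```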
To test new landing pages, take the top 2-3 creative sets, clone them exactly as they are, and swap out the Landing URL for a new one.
Adjust the name of each cloned creative set so it's easy to compare the new versions against the originals (a hypothetical naming sketch follows below).
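One way to keep the comparison clean is a consistent naming convention for clones. A hypothetical sketch; the dict structure is illustrative, not an AppLovin API:

```python
from copy import deepcopy

def clone_with_new_lp(creative_set: dict, new_url: str, variant_label: str) -> dict:
    """Clone a creative set config, swap the landing URL, and suffix the name
    so the variant is easy to compare against the original."""
    clone = deepcopy(creative_set)
    clone["landing_url"] = new_url
    clone["name"] = f'{creative_set["name"]}__{variant_label}'
    return clone

original = {"name": "UGC_Video_A", "landing_url": "https://example.com/lp1"}
test = clone_with_new_lp(original, "https://example.com/lp2", "LP2")
print(test["name"])  # UGC_Video_A__LP2
```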
To maximize campaign performance and ensure sustained growth, it's essential to adopt a multi-campaign structure that uses both CPP and ROAS campaign types. Running multiple campaigns targeting different audience segments helps expand reach and bring in new users. ROAS campaigns focus on converting high-intent users to meet specific profitability goals, while CPP campaigns prioritize acquiring users at a cost-effective rate. Together, these models ensure your ads are served in the most effective way on our platform.
D0 and D7 optimization windows are now supported for both CPP and ROAS campaigns. Advertisers exhibiting a strong D7:D0 performance multiple are highly encouraged to test the D7 optimization window to better capture downstream value.
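To sanity-check whether the D7 window is worth testing, compute the D7:D0 revenue multiple from your own cohort data. A minimal sketch; the numbers and the 2.0x threshold are illustrative assumptions, not AppLovin guidance:

```python
def d7_d0_multiple(d0_revenue: float, d7_revenue: float) -> float:
    """Ratio of a cohort's cumulative D7 revenue to its D0 revenue."""
    return d7_revenue / d0_revenue if d0_revenue else float("inf")

# Illustrative cohort: $1,000 of D0 revenue growing to $2,400 by day 7.
multiple = d7_d0_multiple(1000.0, 2400.0)
print(f"D7:D0 multiple = {multiple:.2f}x")
if multiple >= 2.0:  # threshold is illustrative, not an official guideline
    print("Strong downstream value -- consider testing the D7 window.")
```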
Consider the following actions when evaluating campaign performance:
- Evaluate cohort maturity: When analyzing campaign performance, consider maturity over multiple days rather than focusing solely on D0 ROAS. For instance, a campaign may initially underperform against its target on D0 yet achieve or exceed goals by Day 3 (D3) or Day 7 (D7). This approach is especially useful for campaigns targeting audiences with longer purchase cycles; looking at performance across extended timeframes can reveal profitability that isn't immediately apparent.
- Assess performance weekly: We recommend assessing campaign performance over a weekly time frame rather than daily. Variations in daily results, such as missing targets one day and exceeding them the next, balance out when aggregated over a longer period. Evaluating weekly data provides a more reliable view of performance, especially when daily budgets are too small to reach statistical significance, and ensures a more accurate assessment of overall campaign success (see the sketch after this list).
- Align reporting sources: Ensuring alignment between AppLovin-reported performance data and internal or third-party tracking systems is essential. The models run off AppLovin-reported data, so if you use a different platform to assess performance, some translation may be needed. Small differences between reporting platforms are not unusual due to variance in attribution modeling and time zones, but if significant differences persist between data sources, flag them to your AppLovin growth advisor for further investigation.
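To make the first two points concrete, the sketch below rolls a week of illustrative daily cohorts up to D0/D3/D7 ROAS views and a weekly aggregate. All numbers, including the 0.70 D0 target, are hypothetical:

```python
# Illustrative daily cohorts: (spend, cumulative revenue at D0, D3, D7).
cohorts = [
    (500.0, 350.0, 520.0, 610.0),
    (500.0, 290.0, 470.0, 560.0),
    (500.0, 410.0, 600.0, 700.0),
    (500.0, 260.0, 450.0, 540.0),
    (500.0, 380.0, 560.0, 660.0),
    (500.0, 330.0, 510.0, 600.0),
    (500.0, 300.0, 490.0, 580.0),
]

# Daily D0 ROAS swings above and below a hypothetical 0.70 target...
for day, (spend, d0, d3, d7) in enumerate(cohorts, start=1):
    print(f"Day {day}: D0 {d0/spend:.2f}  D3 {d3/spend:.2f}  D7 {d7/spend:.2f}")

# ...but the weekly aggregate smooths the noise, and D7 shows the matured value.
total_spend = sum(spend for spend, *_ in cohorts)
weekly_d0 = sum(d0 for _, d0, _, _ in cohorts) / total_spend
weekly_d7 = sum(d7 for *_, d7 in cohorts) / total_spend
print(f"Weekly D0 ROAS {weekly_d0:.2f} -> Weekly D7 ROAS {weekly_d7:.2f}")
```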