- We suggest testing 5-7 new creative sets per week at under $10k/day of spend, and 7-10+ sets per week above $10k/day (see the cadence sketch after this list).
- Recommendation: Clone existing creative sets and swap in new videos/landing pages (LPs) instead of creating new sets from scratch; follow the platform's directions for uploading videos and ad sets.
- Let the optimizer handle underperformers: our optimizer will automatically reduce spend on creatives that don't perform. There is no need to pause them manually, and doing so may limit the creative's opportunity to explore.
- Aim for a strong CTR on our platform.
- Higher CTR indicates users are more interested and engaged, which helps our ad platform find relevant consumers.
- If CTR is low, it likely means your ad is not resonating with users, and you should be consistently testing new creatives.
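To make the testing cadence above concrete, here is a minimal sketch of how you might encode it as a rule of thumb. The function name and the return ranges are illustrative assumptions, not part of any official tooling:

```python
def recommended_weekly_creative_tests(daily_spend_usd: float) -> range:
    """Map daily spend to a suggested number of new creative sets per week.

    Thresholds follow the guidance above: 5-7 sets/week under $10k/day,
    7-10+ sets/week at or above $10k/day.
    """
    if daily_spend_usd < 10_000:
        return range(5, 8)   # 5-7 new sets per week
    return range(7, 11)      # 7-10+ new sets per week (treat 10 as a floor, not a cap)

# Example: a campaign spending $12k/day should test at least 7 new sets this week.
print(min(recommended_weekly_creative_tests(12_000)))  # -> 7
```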
HTML End Cards make up 50% of a creative set's clicks, so it is critical to keep testing new ones to maximize the scale of your campaign.
As you find new static winners on other platforms, experiment with turning them into end cards using your HTML generator. Also experiment with GIFs and animations, as well as more custom end cards, for even better performance. Apply each new end card to your top 2-3 videos (keeping the landing URL consistent as well). That data should give you a good read on whether the new end cards perform, and limiting the test to 2-3 top performers ensures you don't redistribute too many impressions away from the existing top creative sets.
To test new landing pages, take the top 2-3 creative sets, clone them exactly as they are, and swap out the Landing URL for a new one.
Adjust the name of each cloned creative set so it's easy to compare the new variants against the originals; a minimal sketch of this clone-and-swap workflow follows below.
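The clone-and-swap workflow for both end cards and landing pages can be expressed as a simple data transformation. This is a sketch against a hypothetical in-house representation of a creative set; `CreativeSet`, `clone_with`, and the naming convention are illustrative assumptions, not AppLovin API calls:

```python
from dataclasses import dataclass, replace
from datetime import date

@dataclass(frozen=True)
class CreativeSet:
    name: str
    video: str
    end_card: str
    landing_url: str

def clone_with(original: CreativeSet, *, suffix: str, **changes) -> CreativeSet:
    """Clone a creative set, swap one element, and rename it for easy comparison."""
    clone = replace(original, **changes)
    return replace(clone, name=f"{original.name}_{suffix}_{date.today():%Y%m%d}")

# Top 2-3 performers (placeholder data).
top_sets = [
    CreativeSet("UA_Video_A", "video_a.mp4", "ec_v1.html", "https://example.com/lp1"),
    CreativeSet("UA_Video_B", "video_b.mp4", "ec_v1.html", "https://example.com/lp1"),
]

# End card test: apply the new EC to the top videos, keeping the landing URL as-is.
ec_tests = [clone_with(s, suffix="ECtest", end_card="ec_v2.html") for s in top_sets]

# Landing page test: clone the top sets exactly, swapping only the Landing URL.
lp_tests = [clone_with(s, suffix="LPtest", landing_url="https://example.com/lp2")
            for s in top_sets]

for s in ec_tests + lp_tests:
    print(s.name, s.end_card, s.landing_url)
```

The date-stamped suffix makes the new variants easy to line up against their originals in reporting, per the naming advice above.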
To maximize campaign performance and ensure sustained growth, it's essential to adopt a multi-campaign structure that uses both CPP and ROAS campaign types. Running multiple campaigns targeting different audience segments helps expand reach and bring in new users. ROAS campaigns often focus on converting high-intent users to meet specific profitability goals, while CPP campaigns prioritize acquiring users at a cost-effective rate. Together, these models ensure your ads are served in the most effective way on our platform.
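As an illustration of that structure, the plan below sketches one ROAS and one CPP campaign split by audience segment. The field names, goal values, and segments are hypothetical placeholders, not platform settings or recommended targets:

```python
# Hypothetical campaign plan illustrating a mixed CPP + ROAS structure.
campaign_plan = [
    {
        "name": "ROAS_HighIntent_US",
        "type": "ROAS",             # optimizes toward a profitability goal
        "goal": {"d7_roas": 0.40},  # placeholder target, not a recommendation
        "audience": "high-intent purchasers",
    },
    {
        "name": "CPP_Broad_US",
        "type": "CPP",              # optimizes toward a cost-per-purchase target
        "goal": {"cpp_usd": 25.0},  # placeholder target, not a recommendation
        "audience": "broad prospecting",
    },
]

for c in campaign_plan:
    print(f'{c["name"]}: {c["type"]} campaign targeting {c["audience"]}')
```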
Consider the following actions when evaluating campaign performance:
- Look beyond D0: When analyzing campaign performance, consider maturity over multiple days rather than focusing solely on D0 ROAS. For instance, while a campaign may initially underperform against a target on D0, it could achieve or exceed goals by Day 3 (D3) or Day 7 (D7). This approach is especially useful for campaigns targeting audiences with longer purchase cycles. Looking at performance across extended timeframes can reveal profitability that isn't immediately apparent (see the cohort sketch after this list).
- Evaluate weekly, not daily: We recommend assessing campaign performance over a weekly time frame rather than daily. Variations in daily results, such as missing targets one day and exceeding them the next, balance out when aggregated over a longer period. Evaluating weekly data provides a more reliable view of performance, especially when daily budgets are too small to reach statistical significance, and gives a more accurate assessment of overall campaign success.
- Align your data sources: Ensuring alignment between AppLovin-reported performance data and internal/third-party tracking systems is essential. The models run off of AppLovin-reported data, so if you use a different platform to assess performance, some translation may be needed. Small differences between reporting platforms are not unusual due to variance in attribution modeling and time zones, but if significant discrepancies persist between data sources, flag them to your AppLovin growth advisor for further investigation (see the reconciliation sketch below).
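To illustrate the first two points, the pandas sketch below computes D0/D3/D7 ROAS per install cohort and then rolls daily cohorts up to a weekly view. The column names and sample numbers are assumptions about what a reporting export might look like, not a prescribed schema:

```python
import pandas as pd

# Hypothetical daily cohort export: spend plus cumulative revenue at each maturity day.
df = pd.DataFrame({
    "install_date": pd.to_datetime(["2024-06-01", "2024-06-02", "2024-06-03"]),
    "spend": [1000.0, 1200.0, 900.0],
    "rev_d0": [300.0, 250.0, 280.0],
    "rev_d3": [700.0, 650.0, 640.0],
    "rev_d7": [1100.0, 1150.0, 950.0],
})

# Per-cohort ROAS at each maturity window: a cohort that misses target on D0
# may still reach or exceed goal by D3 or D7.
for day in ("d0", "d3", "d7"):
    df[f"roas_{day}"] = df[f"rev_{day}"] / df["spend"]

# Weekly rollup: sum spend and revenue first, then compute ROAS, so small daily
# budgets don't produce noisy day-by-day reads.
weekly = df.resample("W", on="install_date")[["spend", "rev_d7"]].sum()
weekly["roas_d7"] = weekly["rev_d7"] / weekly["spend"]

print(df[["install_date", "roas_d0", "roas_d3", "roas_d7"]])
print(weekly)
```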
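For the data-alignment point, a lightweight reconciliation check like the one below can surface dates where AppLovin-reported revenue and your internal/MMP numbers diverge beyond a tolerance. The 10% threshold, column names, and sample figures are illustrative assumptions:

```python
import pandas as pd

# Hypothetical daily revenue by reporting source: AppLovin vs. internal/MMP.
report = pd.DataFrame(
    {
        "applovin": [1100.0, 980.0, 1250.0],
        "internal": [1060.0, 700.0, 1230.0],
    },
    index=pd.to_datetime(["2024-06-01", "2024-06-02", "2024-06-03"]),
)
report["pct_diff"] = (report["applovin"] - report["internal"]).abs() / report["internal"]

# Small gaps are expected (attribution modeling, time zones); persistent large
# ones are worth flagging to your AppLovin growth advisor.
TOLERANCE = 0.10  # illustrative 10% threshold, tune to your own observed variance
flagged = report[report["pct_diff"] > TOLERANCE]
print(flagged)
```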