We recommend launching campaigns with at least 10 diverse video creatives and 1–2 end cards, followed by a Dynamic Product Catalog (DPA). This setup allows for robust creative testing, maximizes engagement across formats, and ensures strong performance by optimizing for conversion at the most critical touchpoint.
Testing Cadence
After launch, we recommend adding new creative sets on an ongoing basis (ideally at least 5–7 per week) to help optimize your campaign performance.
There is no need to pause creatives: our optimizer automatically reduces spend on creatives that don't perform. Manually pausing a creative may limit its opportunity to explore.
Performance Benchmark
Aim for an average CTR of 3-5% on our platform.
Higher CTR indicates users are more interested and engaged, which helps our ad platform find relevant consumers.
If CTR is low, it likely means your ad is not resonating with users, and you should be consistently testing new creatives.
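As a quick illustration, here is a minimal Python sketch of checking per-creative CTR against the 3-5% benchmark; the creative names, impressions, and clicks are hypothetical values, not fields from any AppLovin reporting API.

```python
# Minimal sketch: checking per-creative CTR against the 3-5% benchmark.
# Creative names and impression/click counts are hypothetical examples.

creatives = [
    {"name": "video_a", "impressions": 120_000, "clicks": 5_400},
    {"name": "video_b", "impressions": 95_000, "clicks": 1_900},
]

CTR_LOW, CTR_HIGH = 0.03, 0.05  # the 3-5% benchmark

for c in creatives:
    ctr = c["clicks"] / c["impressions"]
    if ctr < CTR_LOW:
        note = "below benchmark: keep testing new creatives"
    elif ctr > CTR_HIGH:
        note = "above benchmark"
    else:
        note = "within benchmark"
    print(f"{c['name']}: CTR = {ctr:.2%} ({note})")
```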
Testing Additional Videos
We start each campaign with as many videos as you have available.
Files should be uploaded at the highest possible quality in .mov or .mp4 format.
Take advantage of Rewarded 30–60 second unskippable ad opportunities with longer videos; 60 seconds is the maximum length for all videos.
End Cards account for more than 50% of the clicks in a creative set, so it is critical to continue testing new ones to maximize your campaign's scale.
As you find new static winners on other platforms, experiment with turning them into end cards. Also experiment with GIFs and animations, as well as more custom end cards, for even better performance.
Apply the new end card to the top 2–3 videos (keeping the landing URL consistent as well). That data should give you a good read on whether the new end cards perform well, and limiting the test to 2–3 top performers ensures you don't redistribute too many impressions away from your existing top creative sets.
An End Card can be created with either HTML or a Portrait Image; when using a Portrait Image, no CTA button needs to be added.
Testing Dynamic Product Catalog
The Dynamic Product Catalog (DPA) is a screen in your ad unit that dynamically showcases your products. Enabling DPA can be a strong performance lever, often driving up to a 30% lift in checkouts. Integration methods are available here.
Multi-Campaign Structure
To maximize campaign performance and ensure sustained growth, we recommend experimenting with different campaign types.
ROAS vs CPP: Running multiple campaigns targeting different audience segments helps expand reach and bring in new users. ROAS campaigns often focus on converting high-intent users to meet specific profitability goals, while CPP campaigns prioritize acquiring users at a cost-effective rate (both metrics are illustrated in the sketch after this list).
Optimization Window: D0 and D7 optimization windows are now supported for both CPP and ROAS campaigns. D0 models optimize toward purchases made within the first 24 hours of the initial impression, while D7 models optimize toward purchases made within the first 7 days. Advertisers with a longer consideration period (for instance, those whose reported results mature from D0 to D7 by 1.5x or more) are highly encouraged to test the D7 optimization window to better capture downstream value.
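To make the two goal types concrete, here is a minimal sketch of how ROAS and CPP are computed; the spend, revenue, and purchase figures are illustrative assumptions, not reported data.

```python
# Minimal sketch: the two optimization goals, computed from illustrative numbers.

spend = 10_000.00   # total campaign spend (hypothetical)
revenue = 4_500.00  # attributed purchase revenue (hypothetical)
purchases = 300     # attributed purchases (hypothetical)

roas = revenue / spend   # ROAS campaigns optimize toward a profitability target
cpp = spend / purchases  # CPP campaigns optimize toward a cost-per-purchase target

print(f"ROAS: {roas:.0%}")   # 45%
print(f"CPP:  ${cpp:.2f}")   # $33.33
```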
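Likewise, the D0-to-D7 maturation check can be sketched with made-up revenue figures; if the multiple comes out at 1.5x or higher, the D7 window is worth testing.

```python
# Minimal sketch: estimating the D0 -> D7 maturation multiple (figures are made up).

d0_revenue = 2_000.00  # purchase revenue reported within 24h of the first impression
d7_revenue = 3_400.00  # purchase revenue reported within 7 days of the first impression

maturation = d7_revenue / d0_revenue
print(f"D0 -> D7 maturation: {maturation:.2f}x")  # 1.70x

if maturation >= 1.5:
    print("Longer consideration period: consider testing the D7 optimization window.")
```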
Campaign Performance Reporting
Consider the following actions when evaluating campaign performance (each is illustrated with a short sketch after the list):
Evaluate Maturity Beyond Day 0 (D0) ROAS: When analyzing campaign performance, consider maturity over multiple days rather than focusing solely on D0 ROAS. For instance, while a campaign may initially underperform against a target on D0, it could achieve or exceed goals by Day 3 (D3) or Day 7 (D7). This approach is especially useful for campaigns targeting audiences with longer purchase cycles. By looking at performance across extended timeframes, campaigns may reveal profitability that isn’t immediately apparent.
Analyze Weekly Aggregates Instead of Daily Performance: We recommend assessing campaign performance over a weekly time frame rather than daily. Variations in daily results—missing targets one day and exceeding them the next—can be balanced out when aggregated over a longer period. Evaluating weekly data provides a more reliable view of performance, especially in cases where daily budgets are too small to reach statistical significance. This method ensures a more accurate assessment of overall campaign success.
Monitor for and Minimize Data Discrepancies Across Platforms: Ensuring alignment between AppLovin-reported performance data and internal/3rd-party tracking systems is essential. The models run off of AppLovin-reported data, so if you use a different platform to assess performance, some translation may be needed. Small differences between reporting platforms are not unusual due to variance in attribution modeling and time zones, but if significant differences persist between data sources, they can be flagged to your AppLovin growth advisor for further investigation.
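The first check above, evaluating maturity beyond D0, can be sketched as follows; the spend, cumulative revenue figures, and break-even target are hypothetical.

```python
# Minimal sketch: the same cohort's ROAS measured at D0, D3, and D7 (hypothetical figures).

spend = 5_000.00
cumulative_revenue = {"D0": 1_250.00, "D3": 3_500.00, "D7": 5_500.00}
roas_target = 1.0  # e.g. break even by D7 (assumption)

for day, revenue in cumulative_revenue.items():
    roas = revenue / spend
    status = "meets target" if roas >= roas_target else "below target"
    print(f"{day}: ROAS = {roas:.0%} ({status})")
# D0 looks like a miss (25%), but the same cohort exceeds the target by D7 (110%).
```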
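The second check, reading performance on a weekly aggregate rather than daily, looks like this; the daily spend and revenue pairs are invented for illustration.

```python
# Minimal sketch: noisy daily ROAS vs. the weekly aggregate (invented figures).

daily = [  # (spend, revenue) for seven consecutive days
    (700, 500), (700, 900), (700, 650), (700, 800),
    (700, 720), (700, 610), (700, 790),
]

for i, (spend, revenue) in enumerate(daily, start=1):
    print(f"Day {i}: ROAS = {revenue / spend:.0%}")  # swings from 71% to 129%

weekly_spend = sum(s for s, _ in daily)
weekly_revenue = sum(r for _, r in daily)
print(f"Week:  ROAS = {weekly_revenue / weekly_spend:.0%}")  # ~101%, a steadier read
```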
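The third check, monitoring cross-platform discrepancies, can be approximated with a simple percentage comparison; the revenue figures and the 10% flagging threshold are assumptions, not AppLovin guidance.

```python
# Minimal sketch: comparing AppLovin-reported revenue to an internal/3rd-party tracker.
# The 10% threshold is an assumption, not an official guideline.

applovin_revenue = 12_400.00  # hypothetical
tracker_revenue = 10_900.00   # hypothetical

discrepancy = abs(applovin_revenue - tracker_revenue) / applovin_revenue
print(f"Discrepancy vs. AppLovin reporting: {discrepancy:.1%}")  # ~12.1%

if discrepancy > 0.10:
    print("Persistent gap: worth flagging to your growth advisor for investigation.")
```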