Adjusted ROAS is a leap forward in measurement
Disclaimer: The opinions represented here are those of the individual and do not necessarily represent those of their current or former employer.
For mobile performance marketers, scaling retargeting profitably requires navigating the tension between standard attribution and true incrementality. Traditional Return on Ad Spend (ROAS) often overstates advertising's contribution by taking credit for users who would have converted organically. Conversely, rigorous incrementality testing provides causal ground truth, but holdout tests are difficult to scale on a daily basis.
Senior marketers are turning to new strategies to bridge the gap. Adikteev sat down with Brett Patterson, VP of User Acquisition at the rewards application Copper, to outline how user acquisition leaders can deploy "Adjusted ROAS"—a continuous, machine-learning-driven proxy developed by Adikteev—to bring the rigor of incrementality into daily optimization.
Watch the full interview below, or read on for a selection of key takeaways.
Key Takeaways
Sophisticated performance organizations rely on sophisticated testing. For platforms like Copper, incrementality testing is a foundational practice. However, standard incremental Return on Ad Spend (iROAS) is historically binary—an organization either has a controlled test running, or it does not.
Patterson summarizes the traditional operational flow: "Typically, I'll run an incrementality through Adikteev... take total costs and the difference between the control and test and see what the incrementality will look like."
While the method is effective for periodic audits, relying on holdout groups introduces bottlenecks. "The biggest thing for me is having the holdout and the test be weighted correctly," Patterson explains. "Especially in gaming, when you're working with very big whale-driven audiences, you can have some skews between different... control tests."
Adjusted ROAS solves this gap. It is not another competing attribution model, but rather a transformation of the revenue numerator within the standard ROAS calculation. By discounting the revenue of "would-have-converted" users based on predictive machine learning, the metric reflects anticipated incremental revenue rather than total gross revenue.
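As a rough sketch of the idea (all function names, weights, and figures here are illustrative, not Adikteev's actual implementation), the transformation can be expressed as a per-event reweighting of the revenue numerator, while spend stays untouched:

```python
def attributed_roas(events, spend):
    """Standard ROAS: gross attributed revenue over spend."""
    return sum(rev for rev, _ in events) / spend

def adjusted_roas(events, spend):
    """Adjusted ROAS: each revenue event is discounted by a
    model-predicted incremental weight in [0, 1]."""
    adjusted_revenue = sum(rev * w for rev, w in events)
    return adjusted_revenue / spend

# Two hypothetical attributed events: (revenue, incremental weight)
events = [
    (100.0, 0.9),  # user the model scores as largely ad-driven
    (100.0, 0.1),  # likely "would-have-converted" user, heavily discounted
]
spend = 50.0
# attributed_roas(events, spend) ≈ 4.0, adjusted_roas(events, spend) ≈ 2.0
```

The point of the sketch is that Adjusted ROAS is not a new attribution model: the denominator and the event set are unchanged, and only the revenue credited to each event shrinks toward its predicted incremental portion.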
For UA leaders like Patterson, this evolution is a leap forward: "A more sophisticated way of doing this without having to constantly run tests, I think, is obviously on the forefront of where we want to be as re-targeters."
The foundation of Adjusted ROAS relies on a simple premise: similar users behave similarly over time. By utilizing historical data from thousands of users who previously participated in standard holdout tests, Adikteev’s machine learning models learn to identify two distinct behavioral patterns among ad-addressable audiences: users whose conversions are genuinely ad-driven, and "would-have-converted" users whose revenue would have materialized organically.
The algorithm trains on an array of signals, including recency of activity, prior spend tiers, session depth, engagement density, and app-category-specific baselines, like distinguishing RPG players from Fintech users.
At Copper, a rewards platform incentivizing users to download and engage with mobile games, these signals are highly specific. Patterson notes:
"Most of the time we're looking at what type of down-funnel engagement they're doing... For traditional Copper revenue, we want that valuable user who's generated some kind of value in the past to do more of that value, which is a game install for us. So that's a clear, that's definitely the clear and obvious signal for an ML to kind of train off of."
When a live campaign generates an attributed revenue event, the model analyzes the user's profile against these historical patterns. If the model determines the user exhibits behavior identical to those who previously converted organically in a control group, it applies a smaller incremental weight to that revenue event.
To maintain structural integrity and prevent artificial inflation of campaign value, Adjusted ROAS relies on a conservative mathematical framework. The system is designed strictly as a calibration tool, ensuring outputs remain comparable to real-world lift test results.
Crucially, probability weights are mathematically capped at 1.0. Even if a user appears highly incremental, the system prohibits weights greater than 1.0 to prevent an "arms race" of inflated value, ensuring the metric acts as a reliable guardrail for media buyers.
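A minimal sketch of that guardrail (assuming a hypothetical raw model score as input) is simply a clamp to the [0, 1] interval, so an event's revenue can only ever be discounted relative to standard attribution, never inflated above it:

```python
def capped_weight(raw_score: float) -> float:
    """Clamp a raw incrementality score into [0.0, 1.0].

    Scores above 1.0 are capped so no revenue event can be credited
    with more than its attributed value; negative scores floor at 0.
    """
    return max(0.0, min(raw_score, 1.0))

# capped_weight(1.4) -> 1.0   (capped, never inflates value)
# capped_weight(0.3) -> 0.3   (passed through unchanged)
```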
Because standard holdout tests are expensive, require large sample sizes, and take weeks to reach statistical significance, marketers often face blind spots in their mid-campaign decision-making. Adjusted ROAS provides continuous decision-support leverage in the gaps between these periodic tests.
Deploying this capability is streamlined, relying on the ingestion of first-party historical data. Upon understanding the capabilities, Patterson's immediate response focused on operational integration: "What are the needs that I need to push back to you? Do you need some kind of... additional lists, additional criteria, additional creative?"
The implementation follows a distinct five-step onboarding process, yielding its first usable output in approximately ten business days.
This implementation has proven highly effective across the mobile ecosystem. For example, mobile game publisher Devsisters successfully utilized Adjusted ROAS modeling to empirically validate the need for a shorter inactivity window before classifying users as "lapsed," allowing them to capture incremental value much earlier in the churn lifecycle.
It is vital for UA practitioners to understand the boundaries of predictive metrics. Adjusted ROAS is a decision-support layer, not a replacement for causal experiments. It does not negate the necessity of periodic geo-tests, deep VIP forensic analysis, or macroeconomic MMM for top-level channel budgeting.
Its primary utility lies in interpreting the divergence between standard and adjusted metrics. Consider a campaign yielding an Attributed ROAS of 4.0 but an Adjusted ROAS of 1.5. This variance indicates that while the campaign captures significant revenue, the majority stems from users who were likely to convert organically. An Adjusted ROAS of 1.5 remains positive, indicating incremental success, but it warns the UA manager that scaling this specific audience further will not yield linear returns. The optimal operational response is to expand targeting to audiences with lower attribution saturation, refresh creatives, or shift the budget to lapsed, rather than active, players.
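The divergence itself yields a useful number: dividing Adjusted by Attributed ROAS gives the share of attributed revenue the model treats as truly incremental. For the hypothetical campaign figures above:

```python
attributed_roas = 4.0  # standard platform-reported ROAS
adjusted_roas = 1.5    # incrementality-weighted ROAS

# Share of attributed revenue estimated to be genuinely ad-driven:
incremental_share = adjusted_roas / attributed_roas  # 0.375, i.e. 37.5%
```

In this reading, roughly 62.5% of the campaign's reported revenue would likely have arrived organically, which is the quantitative basis for the operational responses above.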
The era of scaling user acquisition and retargeting campaigns based solely on standard platform attribution has ended. As UA costs rise and retention becomes the ultimate driver of profitability, subsidizing organic conversions represents an unacceptable waste of marketing capital.
Adjusted ROAS effectively closes the gap between the rapid pacing of digital media buying and the slow, rigorous science of incrementality testing. By training machine learning models on first-party holdout data, publishers can accurately discount the revenue of "would-have-converted" users, generating a continuous, daily read on true ad-driven value. To see how Adjusted ROAS can improve your retargeting efficiency, contact Adikteev for a tailored walkthrough.