
Tap In Digital's Ted Coxworth on the Future of Incrementality

Marketers have seen a lot of shifts in recent years — the largest are still to come.

Disclaimer: The opinions represented here are those of the individual and do not necessarily represent those of their current or former employer.

In the fast-evolving world of mobile apps and games, there’s no single magic formula for marketing success. But learning from industry experts can give senior marketers a roadmap to refine their workflows. We recently spoke with Ted Coxworth of Tap In Digital to uncover how he designs, runs, and communicates incrementality tests in today’s competitive landscape. Ted’s experience has taught him to balance experiment design, segment selection, stakeholder communication, and more.

Watch the full interview below, or read on for a selection of key takeaways.

 

Key Takeaways 

  1. Design incrementality tests for scalable, repeatable results. This ensures the insights you gain will translate to full-funnel impact, not just isolated short-term wins.

  2. Select user segments that are large enough to yield statistically valid test results. Avoid niche or undersized groups that result in inconclusive data and waste your marketing budget.

  3. Run tests that reveal non-incremental channels so you can cut inefficiencies. Negative results often save more money than positive results generate—if you act on them.

  4. Lead with the test’s key outcome using clear, intuitive language. A 15% incremental lift sounds more convincing than a dense chart of statistical assumptions.

  5. Build repeatable processes and a prioritized testing backlog. Treat experimentation as an engineered system, not an occasional tactic, to drive consistent growth.

Design Experiments for Broad, Real-World Impact 

When running incrementality tests, Ted says it’s key to always consider generalizability – ensure your experiment’s conditions and results can scale to your broader marketing efforts. A common mistake is designing narrowly, such that the outcome doesn’t hold outside the test. Instead, craft experiments that span various user segments, time periods, or geographies so a positive result truly indicates an initiative that will drive lift at scale. 


“The biggest thing for me is generalizability … it’s the idea that you want the outcome of your experiment to be something that you can expect to see if your experiment is successful and you roll out whatever change you’re testing more broadly. Rather than just having a specific segment or a specific period of time for which the result is applicable, you want to design an experiment that allows you to see that kind of an effect across seasons and across segments… Some of the ways that people tend to miss the boat… is not having a design that fosters that kind of an outcome.”

Execution is an equally important aspect of incrementality test design. Make sure your test parameters (budgets, targeting, creatives) are implemented exactly as planned. A well-designed experiment can fail to translate into business lift if it’s executed poorly or in a siloed context. In short, think big – an effective incrementality test should simulate a real campaign rollout, not a one-off anomaly.
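
To make the mechanics concrete, here is a minimal sketch in Python (our illustration, not Ted’s actual tooling) of a deterministic holdout split. Because the salted hash is uniform, the test/control split stays balanced across every segment, geography, and time window – exactly the property that lets a measured lift generalize beyond one slice. The experiment name and split share below are assumptions for the example.

```python
import hashlib

EXPERIMENT = "reengagement_q3"   # hypothetical experiment name, used as a salt
HOLDOUT_SHARE = 0.5              # assumed 50/50 test/control split

def assign_group(user_id: str) -> str:
    """Return 'test' or 'control' for a user, stable across reruns."""
    digest = hashlib.sha256(f"{EXPERIMENT}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return "test" if bucket < HOLDOUT_SHARE * 10_000 else "control"

# The same user always lands in the same group, so the analysis can be
# rerun months later without reshuffling anyone.
assert assign_group("user-123") == assign_group("user-123")
```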

Target Meaningful, High-Impact Audiences

Ted emphasizes picking audience segments that are meaningful to your business’s bottom line – not just those that are convenient or novel. It’s a common trap to choose an easily accessible subset for a quick experiment. But if that segment isn’t significant to revenue, a lift there won’t move the needle for the business. Instead, focus on user groups that represent substantial value. Moreover, ensure the segment is large enough to produce statistically robust results. Too narrow an audience leads to tests with low power, high variance, and the dreaded “inconclusive” outcome.


“So the thing to start with when identifying a good segment is picking a piece of the market that’s meaningful for your business. It can be a common trap to go after a segment that’s easy to test or very unique… But the goal is to have a result that affects the business in a meaningful way. … And the last thing I’ll mention around segmentation is picking a segment that has enough individuals in it to produce statistically robust analytical data. Very important. Otherwise you get mired in the worst possible outcome, which is an inconclusive test – no one wants that, it’s a waste of time and money. Not including the test segment in your control is probably the number one thing I see going wrong with segments… we need to have that segment in both the test condition and the comparison set, meaning enough people to split between those two groups.”
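
Ted’s point about segment size can be sanity-checked before launch with a standard power calculation. The sketch below uses statsmodels with hypothetical inputs – a 5% baseline conversion rate and a 15% relative lift worth detecting – to estimate the minimum audience needed per group before a test stands a fair chance of being conclusive.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05   # assumed control-group conversion rate
lift = 0.15       # assumed relative lift worth detecting (15%)

# Cohen's h for the gap between treated and control conversion rates.
effect = proportion_effectsize(baseline * (1 + lift), baseline)

# Minimum users per group for an 80%-powered, two-sided test at alpha = 0.05.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Need ~{n_per_group:,.0f} users per group "
      f"(~{2 * n_per_group:,.0f} total) before the test is worth running.")
```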

The importance of segment selection is illustrated by LALALAB., a photo printing app. LALALAB had grown its user base through broad user acquisition, but to boost retention and revenue, they turned to segmentation in their re-engagement strategy. The team segmented users by behavior and value, separating new “openers,” product viewers, hot prospects, and past purchasers. Critically, they didn’t treat all customers alike: they identified top spenders by geography, grouping high-value users in a Tier 1 segment and allocating the bulk of budget to them. This meaningful segmentation ensured that any lift achieved would materially improve revenue. The results were dramatic – by targeting each segment with tailored creative (including dynamic product ads) and focusing on the most valuable users, LALALAB doubled its return on ad spend and increased revenue by 242%. 

Embrace Negative Results

Don’t fear a negative lift in your incrementality test; instead, treat it as a potential goldmine of insight. As Ted points out, no marketer wants to see a campaign show zero or negative incremental return. However, these counterintuitive results can be extremely powerful. A negative incrementality result essentially tells you that stopping or reducing an activity won’t hurt – which means you’ve likely found wasteful spend or a misallocated budget. Often, negative lifts surface through “natural experiments,” when an external event forces a pause in marketing. Smart teams can take advantage of these moments to analyze outcomes. If turning off a channel yields no drop (or a drop smaller than spend savings), that budget can be reallocated to more fruitful channels. In Ted’s words, negative results are far better than inconclusive ones – at least you learn something and can act on it. The actionable mindset here is: actively look for indications of non-incremental spend, and be willing to pivot. A failed test isn’t a failure if it leads to smarter budget allocation.


“The funny thing about negative lift tests is that nobody wants to run them, ever. This includes marketers, but also in the scientific world – nobody wants a result that’s anything other than a positive result. They can be really powerful. The way I’ve typically seen them show up is in the form of natural experiments. So marketers stop spending in a channel for one reason or another (either an operational hiccup or a lack of budget), and they cut something off. That means we’ve got this really rich, unplanned-for windfall of data to see – this channel went away for a period of time, what did we see in terms of the KPIs that matter? So that’s typically the form I see them take.”
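
A back-of-the-envelope version of that natural-experiment read might look like the sketch below (all figures hypothetical). The key comparison is whether the KPI drop during the pause is smaller than the spend saved.

```python
# All figures hypothetical: average daily revenue with the channel on vs.
# during the unplanned pause, and the channel's daily spend.
baseline_daily_revenue = 50_000
paused_daily_revenue = 48_500
daily_spend = 4_000

revenue_drop = baseline_daily_revenue - paused_daily_revenue
net_effect_of_pausing = daily_spend - revenue_drop  # savings minus lost revenue

if net_effect_of_pausing > 0:
    print(f"Channel looks non-incremental: pausing nets "
          f"${net_effect_of_pausing:,.0f}/day to reallocate.")
else:
    print(f"Channel looks incremental: pausing costs "
          f"${-net_effect_of_pausing:,.0f}/day on net.")

# Caveat: a real read-out should also control for seasonality, e.g. by
# comparing against the same window in an unaffected geography.
```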

A striking example comes from Tally, a fintech app that consolidates credit card debt. Tally’s growth team ran a series of incrementality experiments and discovered that some of their hefty marketing investments weren’t truly pulling their weight. In particular, they questioned certain “must-buy” media channels – for instance, linear TV ads – that had long been assumed effective. The tests revealed little incremental lift from those channels, effectively a negative result versus expectations. Instead of being discouraged, Tally used this insight to confidently cut back on low-impact spend. The outcome: they plan to redistribute millions of dollars annually from those low-incremental tactics to higher-performing channels, and project a 4% increase in total borrowers as a result.

Communicate Incrementality in Plain Terms

It’s not enough to run sophisticated experiments; you must also communicate the results in a clear, compelling way to non-technical stakeholders. Marketing leaders often need to justify decisions to a CEO, CFO, or board who may not live and breathe attribution models. Ted’s advice: start with the punchline and lead with the core result of the test. By immediately answering the “so what,” you capture executives’ attention with a digestible takeaway. Then, only if needed, provide the supporting details and caveats. A common mistake is front-loading your presentation with methodology, statistical nuances, or technical jargon – this can confuse or undermine your message. Most business leaders will grasp an A/B test story if told without fluff. Also, design your tests to be as clean and straightforward as possible; a simple test yields a simple story.


“Communication is a huge part of this process… My advice always is to start with the punchline. What’s the result of the test? And keep it as simple as you possibly can when communicating this. Most people can intuitively get it: you didn’t do a marketing effort in one place or with one group, and you did do it with another, and in the place where you pursued this effort, we saw more sales … Design a really clean, simplistic test that’s easy to analyze and communicate. That way you don’t have to summarize a lot of complexity.”

Let’s say you run an incrementality test on a new app re-engagement campaign, and the result is a +20% uplift in purchases among the tested audience. Communicating this to a CFO or CMO would start with a headline like: “Campaign X drove a 20% incremental increase in sales.” You might follow with a one-liner of how: “For example, users who saw the ads bought 120 items, vs. 100 in the identical holdout group – a clear lift of 20% that can be attributed to the campaign.” Framing it this way focuses on the business outcome (more sales) in plain language. As a marketing leader, when you deliver clear, outcome-focused messages like this, you not only justify your testing program but also build trust with the leadership team. They see incrementality not as academic, but as a practical decision-making tool that speaks their language of revenue and ROI.
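
The arithmetic behind that headline is deliberately trivial, and that’s the point. As a hedged illustration (assuming equal-sized test and holdout groups, as the “identical holdout group” in the example implies), a few lines of Python can turn raw counts into the punchline-first message:

```python
def punchline(test_sales: int, holdout_sales: int, campaign: str) -> str:
    """Lead with the outcome in plain language; details come second."""
    lift = test_sales / holdout_sales - 1
    return (f"{campaign} drove a {lift:.0%} incremental increase in sales. "
            f"(Detail if asked: {test_sales} purchases with ads vs. "
            f"{holdout_sales} in the identical holdout group.)")

print(punchline(120, 100, "Campaign X"))
# -> Campaign X drove a 20% incremental increase in sales. ...
```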

Interested in applying these principles to your own campaigns? Adikteev empowers marketers to design robust incrementality tests, target high-value app users, and turn insights into action. Explore how Adikteev can help you drive incremental growth in your mobile app strategy.

Build a Scalable “Test-and-Learn” Culture with Systems and Process

Ted suggests that scaling from a few tests to dozens per quarter requires treating your testing program as an engineered system. That means establishing clear processes for ideation, test design, execution, and analysis that everyone follows. Create templates and checklists for each step so that teams aren’t reinventing the wheel every time; a consistent approach ensures quality and efficiency. Coordination and communication are also critical: as you increase test velocity, you need mechanisms to avoid conflicts and to disseminate learnings. Ted advises having a centralized backlog of test ideas prioritized by potential impact, so you always tackle the most valuable questions first.


“Having robust processes in place for test design, ideation, analysis, deployment – all of those pieces – is essential, so that you don’t have to think about each one of these tests as a custom process. There will be details that differ, but having the same approach across all tests… allows you to have high-quality, robust tests at a high rate of throughput… The testing program becomes an engineered system in its own right... My guidance always is: start with the thing you think is going to be most impactful and work down the list.” 
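
What might such a prioritized backlog look like in practice? Here is an illustrative sketch (our construction, not Ted’s actual system) of a simple impact-scored queue; the test ideas and dollar figures below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    expected_impact: float   # e.g. estimated annual dollars at stake
    confidence: float        # 0-1: how sure we are the impact is real
    effort_weeks: float      # rough cost to design, run, and analyze

    @property
    def priority(self) -> float:
        """Simple ICE-style score: impact x confidence per unit of effort."""
        return self.expected_impact * self.confidence / self.effort_weeks

backlog = [
    TestIdea("Pause linear TV in two geos", 2_000_000, 0.6, 6),
    TestIdea("Re-engagement holdout, payer segment", 500_000, 0.8, 3),
    TestIdea("New creative concept, Tier 1 users", 300_000, 0.5, 2),
]

# Work top-down from the most valuable open question, per Ted's guidance.
for idea in sorted(backlog, key=lambda t: t.priority, reverse=True):
    print(f"{idea.priority:>10,.0f}  {idea.name}")
```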

 

Booking.com is widely regarded as a benchmark for large-scale experimentation, extending its test-and-learn philosophy beyond product into marketing execution. The company reportedly runs more than 25,000 experiments per year, with over 1,000 tests live at any given time, spanning areas like ad messaging, landing page flows, pricing presentation, and promotional strategy. Crucially, experimentation at Booking.com is democratized, as teams are empowered to launch tests without layers of managerial approval, allowing insights to compound quickly across channels. This disciplined, high-velocity approach to testing has enabled the company to systematically optimize marketing performance, reduce reliance on assumptions, and scale decisions backed by real behavioral data rather than intuition alone.

Incrementality With Scientific Rigor

An incrementality program is only as valuable as the conviction it builds in your organization, so tell the story clearly and build a framework that makes testing efficient at scale. As the marketing landscape evolves, those who cultivate an experimentation culture now will be the big winners in the next 3–5 years. In Ted’s words, incrementality testing is removing so much friction that “everyone can design and deploy tests quickly and easily to start getting more effective with their marketing portfolio.” For more from Ted, follow him on LinkedIn or check out Tap In Digital.

Ready to put incrementality into action? Contact Adikteev to see how you can test, measure, and scale your app marketing with confidence.