From launch to 3 million installations in one year: How to scale an app on Android

Katia Pietukhova is a marketer at OBRIO, part of the Genesis business ecosystem. In this article for AIN.ua, she shares how to launch an Android application, reach 3 million downloads, and become number one in a niche in just one year.

Katia Pietukhova. Photo courtesy of the author

OBRIO, one of Genesis’s businesses, works in four product directions: Mobile, Web, GameDev, and SaaS. Currently, the astrology app Nebula is the company’s largest project and the most downloaded app in the astrology niche in the US. It is used by over 13 million people in 50 countries. It has also been App of the Day in the App Store in the US and Great Britain, and has even outranked Tinder several times.

Launch of the Android version

Although Android is the most popular OS for smartphones, companies involved in mobile app development usually invest more effort into the iOS platform. This is quite logical, since iOS users typically generate more revenue than Android users. However, after the iOS 14.5 release, it became vital to use every opportunity to drive purchases with clear and timely metrics.

I started working at OBRIO, and in marketing, in June 2020. I began with marketing for Nebula, the company’s only product at the time, which was available only on iOS. Our marketing team comprised three media buyers, each responsible for one or several ad platforms. My first platforms were Snapchat and TikTok.

These platforms brought us good results but were quite unstable. They are better suited to one-off scaling, when you have a top-performing ad creative, than to a well-organized strategy with stable volumes.

Later on, new products appeared on the project. One of them was Nebula for Android. This forced us to change how the marketing team worked: from then on, each marketer’s focus shifted from ad platforms to a particular product. We wanted the new products to grow as steeply as Nebula iOS had, but that was impossible without the corresponding resources. Furthermore, we had noticed that our team often ran into communication difficulties. For instance, some team members could not understand why we made particular changes, or why we ran certain tests and not others. This pushed us to create separate marketing mini teams inside OBRIO, each responsible for Nebula iOS, Nebula Android, or one of the other apps.

It may seem illogical to split the app versions for different operating systems into separate products. The thing is that the Android version was released much later than the iOS one. While the iOS version was in its scaling stage, outpacing Tinder in the rankings and setting new records every day, the Android version was still at the MVP stage, and we did not yet know its audience or unit economics well.

Our mini teams were quite small. Back then, we lacked a product manager and an analyst for all the apps. The marketer was responsible for choosing which ad platforms to invest time in to reach business goals. At the mini-team level, we also identified product problems and brainstormed ideas to solve them.

Prioritization and idea testing

We estimated, or rather tried to estimate, the pool of available ideas according to the ICE framework. But we discovered that it was rather hard to assess ideas and their impact on the product objectively, even with a prepared grading scale.
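For context, ICE scores each idea as the product of three ratings on the same scale (commonly 1 to 10): Impact, Confidence, and Ease. A minimal sketch of the mechanics, with made-up ideas and ratings rather than our actual backlog:

```python
# ICE prioritization: score = Impact * Confidence * Ease, each rated 1-10.
# The ideas and ratings below are purely illustrative.

def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply the three ratings; a higher product means higher priority."""
    return impact * confidence * ease

ideas = {
    "Free basic compatibility report": (8, 6, 7),
    "Extended onboarding questions":   (6, 5, 8),
    "Limit compatibility checks":      (7, 3, 9),
}

ranked = sorted(ideas.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for name, (i, c, e) in ranked:
    print(f"{ice_score(i, c, e):4d}  {name}")
```

The framework’s weakness shows up in the Confidence and Impact columns: those ratings are the hardest to justify objectively, which matches what we found in practice.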

One of the failed plans was to limit the most popular functionality. Nebula has a zodiac compatibility section in which a user can check up to 10 zodiac pairs in one session. The idea formed naturally: what would happen if we restricted the number of checks per day and offered more as premium functionality? In hindsight it seems obvious that motivating users through restriction is not the best idea, but now we also have test results with numbers to back that up.

We have learned three lessons out of this case:

  1. If you want to make a product successful, you should communicate its value rather than force users to believe it is valuable.
  2. When you generate an idea, choose instruments that help you clearly estimate its impact and the resources it requires. You need experience to make those instruments work right.
  3. Every brilliant idea has to be backed by MVP functionality against which you can test the hypothesis.

Nebula Android pipeline: the problems

To understand which changes a product needs, you have to carefully analyze previous marketing results and product metrics. Pipeline analysis helps considerably in forming ideas and prioritizing tasks.

The Nebula Android pipeline differed significantly from the iOS version at every stage: conversion from install to trial was 1.5 times lower, and conversion from trial to payment 2.4 times lower. These metrics hugely impacted the efficiency and volume of purchases, as the trial LTV (the effective value of a user who began the trial) was too low to scale volumes with a positive ROMI (Return on Marketing Investment).
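To see how those two conversion gaps compound, here is a back-of-the-envelope sketch. All absolute rates are hypothetical; only the 1.5x and 2.4x ratios come from the comparison above:

```python
# The two gaps multiply: Android's install-to-payment rate ends up
# 1.5 * 2.4 = 3.6 times lower than iOS at the same install volume.
# The baseline rates below are hypothetical, chosen only for illustration.

ios_install_to_trial = 0.30                              # hypothetical
ios_trial_to_payment = 0.24                              # hypothetical

android_install_to_trial = ios_install_to_trial / 1.5    # 1.5x lower (observed)
android_trial_to_payment = ios_trial_to_payment / 2.4    # 2.4x lower (observed)

def payers_per_1000_installs(i2t: float, t2p: float) -> float:
    """Expected paying users from 1000 installs, given the two rates."""
    return 1000 * i2t * t2p

ios_payers = payers_per_1000_installs(ios_install_to_trial, ios_trial_to_payment)
android_payers = payers_per_1000_installs(android_install_to_trial, android_trial_to_payment)
print(ios_payers, android_payers, ios_payers / android_payers)  # gap compounds to ~3.6x

# ROMI = (revenue - ad spend) / ad spend; scaling only makes sense above zero.
def romi(revenue: float, spend: float) -> float:
    return (revenue - spend) / spend
```

With 3.6 times fewer payers per install, the ad spend that is profitable on iOS pushes ROMI below zero on Android, which is exactly why scaling was blocked until the pipeline improved.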

Additionally, we discovered that over 70% of users who began a trial did so immediately after onboarding. So it was vital to communicate the value of the product from the first seconds of use.

Apart from the product metrics, a potential obstacle was our score in the Play Market. In April 2021, it was 2.9. Users complained that the Android version was incomplete, that the subscription was too expensive, and that there were too many ads. A score can either motivate a user to download an app or push them to look for an alternative; in effect, it represents reputation. A 2.9 score is objectively unattractive: I personally would not trust or want to try such a product.

We could have chosen to improve only the app content and add new sections, but we decided instead to focus on two metrics: conversion from install to trial and from trial to subscription. To boost them, we resolved to improve the first two stages, onboarding and user activation. And it gave remarkable results.

What we did

  1. Conversion from trial to subscription 

This was the first thing we worked on. We ran monetization tests, changed subscription options and prices, and tested different variants of the sales screens. In practice, though, the test groups performed worse than the existing variants, and where some performed better, the benefit was still minor.

Ordering compatibility reports was one of the most popular services among our users at the time. A report was prepared by astrologers based on the partners’ data and gave detailed information about the relationship between two people. Our idea was to give subscribers a basic compatibility report for free. It was also prepared by astrologers and provided accurate information, though it was more generalized and did not take all personal data into account. The idea did not require major development resources but boosted conversion from trial to subscription by more than 2 percentage points.

  2. Trial conversion

This metric is shaped by the app’s onboarding: the user’s experience during their first use of the product, and how useful and pleasant it feels.

At that moment, Nebula Android had a standard onboarding process that asked for basic information about the user: name, place, and date of birth. On the one hand, the onboarding was simple and understandable. It required a minimum of personal information, which felt safe for users, since we did not collect detailed data about them. On the other hand, such onboarding could signal that we provided only basic knowledge that could be found elsewhere.

We decided to run a test: add more questions to better understand why users downloaded the app, and later introduce more unique content and features.

The new onboarding consisted of extended questions about diverse life spheres and took more time to complete. However, the test results turned out positive, and trial conversion was boosted by more than 1 percentage point.

  3. Collecting ratings in the app

Besides running tests, we also worked on other app features and content. The Play Market score remained low, but we had a hypothesis that it would rise as we improved the product. We also decided to implement a rating pop-up in the app. People are generally more willing to complain about problems than to express gratitude, so it was vital for us to receive more user feedback to understand whether our product was helpful or needed drastic changes. We did not expect that collecting ratings would change our Play Market score, but when the new version with the pop-ups was released, our score rose from 2.9 to 4.6 in less than a day.

Screenshot courtesy of the author

All we needed to do was ask active users how much they liked our app.

With such seemingly simple steps, we improved every stage of the pipeline, from the click on an ad to activation and user retention.

Marketing 

The individual improvements we made did not guarantee audience growth on their own. The situation changed drastically only when we rolled out all the product changes to 100% of users. Within roughly a month, volumes increased fourfold.

Facebook is the primary source of traffic for Nebula Android. For quite some time, the platform did not deliver the results we needed. The deterring factors, besides Facebook itself, also included our approach to marketing and the efficiency of our ad campaigns.

We had one question in mind: how could we boost volumes from Facebook? Back then, almost all campaigns were launched on bid cap, which meant volumes were minimal: spend could reach only ~5% of the daily budget over the whole life cycle of a campaign. Meanwhile, lowest-cost purchases, which could boost our traffic, were expensive, and those campaigns burnt out in a few days. Yet these were exactly the campaigns we had to learn to optimize, since they would help us scale in the long run. Here are several rules we defined during this period:

  1. If you have any doubts about working with lowest cost, you had better move these campaigns into a separate account, or revise the automated rules on the main one to reduce their impact on the lowest-cost campaigns. Facebook’s automated rules for turning ads on and off can interfere with optimization and prevent you from noticing positive dynamics.
  2. Start with small budgets. I cannot give exact recommendations, but budget for approximately ten target actions.
  3. Try not to decide on shutting a campaign down during its first day; observe what conversion rate you get over time. Our model is subscriptions, and we optimize for trials, so when starting out with these campaigns I watched how trials performed over the first 2-3 days. If, over 2-3 days at small volumes, the price approaches the target, I raise the budget. Keep in mind that this window may be longer for you if you are not optimizing for actions a user takes immediately (you may be better off observing a campaign for about 7 days).
  4. I do not follow the rule that advises not to increase the budget by more than 20% when I see positive dynamics; I may double it to collect more data for a campaign. If your budgets are bigger, play it safer.
  5. Duplicate campaigns from the beginning:
  • your budget will be divided into smaller portions among several campaigns; if some of them fail to optimize, you will lose much less than if you had invested everything in one large campaign.
  • there is a real chance that the indicators of these campaigns will differ, so you can pick a “winner” and scale it.
  • besides, one campaign with a small budget will hardly let you scale volumes quickly.
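Rules 3 and 4 above boil down to a simple decision rule: wait out the observation window, then raise the budget only when the observed cost per trial approaches the target. A sketch of that logic, where the 10% tolerance, the doubling factor, and the function itself are my own illustrative assumptions rather than any Facebook feature:

```python
# Sketch of rules 3-4: judge a lowest-cost campaign only after the
# observation window, and double the budget when cost per trial nears target.
# The 10% tolerance and the doubling factor are illustrative assumptions.

def next_budget(daily_budget: float, spend: float, trials: int,
                target_cpa: float, days_observed: int,
                min_days: int = 2, tolerance: float = 1.10) -> float:
    if days_observed < min_days or trials == 0:
        return daily_budget                  # too early to judge (rule 3)
    cpa = spend / trials                     # observed cost per trial
    if cpa <= target_cpa * tolerance:        # price is approaching the target
        return daily_budget * 2              # double instead of +20% (rule 4)
    return daily_budget                      # hold; don't shut down hastily

# After 3 days: $120 spent, 10 trials, $12 target -> cpa on target, so double.
print(next_budget(50.0, spend=120.0, trials=10, target_cpa=12.0, days_observed=3))
```

If your optimization event happens later in the user journey, lengthen `min_days` accordingly, just as rule 3 suggests observing for about 7 days.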

This experiment showed that such campaigns really can run for a whole month. Market seasonality also played a positive role. However, we could not have increased volumes while keeping traffic profitable without improving the product and rethinking the marketing strategy.

Conclusions 

Here I would like to share several conclusions:

  1. Many effective ideas have already been implemented; do not be shy about using them. When you start working on something new, you want to generate a brilliant new idea that will produce a breakthrough in your industry. That ambition is commendable. But keep in mind that many working approaches already exist, and sometimes all you need is to apply them properly.
  2. Numerous books and articles have been written about soft skills for a clear reason: communication is vital. Any team sometimes faces problems: developers do not understand how marketing works or why scaling cannot happen in a day; marketers do not understand why some crucial tasks cannot be done immediately; analysts launch tests, and someone does not see their purpose. It is important to track every case when someone on the team “did not know” or “did not understand”, because that is a marker of miscommunication. In our case, creating mini teams helped us better understand the process and each other.
  3. Concentration. It is impossible to make a product successful by devoting 10%, 20%, or even 50% of your time to it. Success requires 150% of your attention.

I hope you enjoyed reading about our experience in product scaling. With this article, I wanted to show that it is vital to work as a team, make decisions quickly, and stay flexible. I will be thankful for any comments and feedback!

Author: Katia Pietukhova, Marketing Specialist at OBRIO
