Get ready for some great content coming to your inbox from the team at UserTesting!
A: One of the most common problems we see in mobile apps is poor user onboarding. On mobile, users make decisions about your app very quickly, and first impressions matter. According to research, 25% of apps are abandoned after a single use. A lot of teams out there are focused on building a whole bunch of awesome features for their users, but if a huge chunk of your users are deleting your app after only a few minutes, all that effort goes to waste.
If you don’t nail onboarding, your developers may as well have been drinking beers instead of building those features that no one saw.
It’s not a contest of who has the most features. The apps that succeed are the ones that clearly convey their value proposition to the users from the get-go. Instead of focusing on building out more cool features, app teams should focus on how to showcase them properly.
The first step is to drive users toward the “aha” moment, the instant when users realize what value the app will provide in their lives. This can be done using onboarding tutorials, good clean UI design, and quickly driving users toward core functionality. Once your users reach the “aha” moment, they’re far less likely to delete your app. Then, you can start showing off all your other features.
The “aha” moment is reached relatively early on in use, so the next key is to show the user what else they can do in an app, and get them deeper into your ecosystem.
Another common mistake is to hide additional features in an app drawer, the most common of which is the hamburger menu. The hamburger menu is problematic. It signals to users that features tucked away in the menu are not important. Since screen space is incredibly limited, only the most valuable features are immediately viewable. Even if your feature is amazing, the impression that users get is that it’s neglected, cast aside in the same drawer as user settings, share buttons, legal, and other non-essentials. Users don’t explore as much as we think they do, so showing features they’ll value front and center is vital.
One of our clients, a social fashion app, increased retention rates by 18% by simply moving a few key features from the hamburger menu into a tabbed menu at the bottom of the screen. We also recently talked to Slickdeals and they shared with us a similar problem: users were only using what was immediately displayed on the homescreen. Their other features that are very popular on the web were neglected in the side drawer, so now they’re working on a redesign to better showcase their top features.
The point is that users generally don’t spontaneously discover all your features. If you have something your users will love, display it prominently to make sure that it’s being seen and utilized. But of course, you can’t overdo it either and bombard your users with ten thousand different features at once. This is where user testing and A/B testing come in to determine your top features and the ideal layout.
All app development teams (including product managers, developers, designers, marketers, QA, etc.) want to provide a better user experience for their end users and do that iteratively, quickly, and with better data. Regardless of the end goal—be it revenue, marketing, user engagement, etc.—creating an app that users love is what app creators want to do.
That’s actually where A/B testing helps a great deal: it helps you create a better app in less time.
One of the best things that app developers can do is to A/B test changes before deploying them to all of their users. Because the app marketplaces are so slow, iterating quickly and understanding how your users are reacting to your feature changes can be incredibly difficult, and under that model, any mistake costs precious time and resources.
With A/B testing, teams can deploy new features to a small percentage of their users, and test to analyze how the features affect user behavior. If it’s beneficial, they can then instantly deploy that feature to 100% of their users, without having to resubmit.
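The partial rollout described above usually comes down to deterministic user bucketing. As a rough sketch (the function and experiment names here are illustrative, not any particular SDK's API), hashing a user ID together with an experiment name gives each user a stable bucket, so the same user always sees the same variant, and raising the rollout percentage later enables the feature for everyone without resubmitting to the app store:

```python
import hashlib

def in_rollout(user_id: str, experiment: str, rollout_pct: float) -> bool:
    """Deterministically decide whether a user is in a feature rollout.

    Hashing user_id + experiment name means each user always gets
    the same answer for a given experiment, and bumping rollout_pct
    to 100 turns the feature on for everyone.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the hash to a number in [0, 100) and compare to the threshold
    return (int(digest, 16) % 10000) / 100.0 < rollout_pct

# Ship a hypothetical new checkout flow to 5% of users first
show_new_checkout = in_rollout("user-42", "new_checkout_flow", 5.0)
```

Using the experiment name in the hash (not just the user ID) keeps buckets independent across experiments, so the same 5% of users aren’t in every test.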
App teams can also do user testing to get qualitative feedback from their users and hear them describe their experience in their own words. It’s good for pinpointing exactly what is confusing—or delightful—to the user.
The traditional app cycle is incredibly long, drawn-out, and rigid: build, QA, submit, wait for app store review, release—with dev cycles ranging anywhere from a few weeks to a few months.
There are two key roles here that A/B testing plays while testing and optimizing an app.
The first is to use instant A/B testing to shorten the QA process and bypass app store approvals. With it, you’re able to make changes and roll them out to your users instantly, without waiting for an app store review.
The second place A/B testing plays a key role is getting better, more actionable data and analytics after launch. On a typical mobile release, teams usually roll out a bunch of different changes all at once, ranging from bug fixes, to feature additions, new UI elements, and so on. If your metrics are affected, it’s very difficult to figure out why.
If they go up, great! But you don’t know what specifically contributed to those increases. Was it the bug fixes? Did users love the new feature? Or was it the easier-to-use UI? A/B testing takes the guesswork out of it, allowing you to isolate each change and learn exactly what effect it has. The more you understand how your users react, the better you can serve them.
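Isolating a change only answers the question if you can also tell whether the difference in the metric is real or just noise. One common way to check (a sketch, not any specific platform’s analysis, with made-up numbers) is a two-proportion z-test comparing conversion rates between the control and the variant:

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: control converted 400 of 5000 users,
# the variant with the new UI converted 480 of 5000
z = two_proportion_z(400, 5000, 480, 5000)
significant = abs(z) > 1.96  # roughly 95% confidence
```

If `significant` is false, the honest conclusion is “we can’t tell yet,” and the experiment needs more users or a bigger effect before rolling the change out to everyone.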
User testing and A/B testing go hand in hand. While user testing provides the qualitative feedback, A/B testing takes care of the quantitative. User testing is great for macro opinions: it will give you early feedback on macro issues with your design. Using it will help you understand why users behave in a certain way, and what they think and feel about your app.
A/B testing, on the other hand, allows you to experiment with more detailed aspects of your app that users may not know they’re responding to. For example, users can tell you when a checkout flow is downright confusing, but they might not know whether cutting a step out of that flow would help them buy more things. The two are different approaches, so you can attack a problem from different angles.
Ultimately, both are essential tools that complement each other well. User testing results will give you ideas on what to A/B test. A/B testing results will give you ideas on what to ask your users.
The most common mistake is assuming that the changes you’re building are going to delight users. More broadly, mobile app creators assume that they are good at predicting what their users want and how their users will behave and feel. And hindsight bias allows us to feel like we knew the answer all along. But really, you need to ask your users and experiment in a data-rigorous way to know how your users will act and react to your decisions.
Just because people ask for a change, doesn’t mean they really want it. A great example of this was when digital magazine The Next Web was inundated with requests and pleas to build an Android app in addition to their popular iOS app. So they did. Turns out, people weren’t downloading and using it.
“We had gotten enough requests for it and had gotten the impression there were thousands of anxious Android tablets owners holding their breath for an Android version of our magazine. Unfortunately we’ve found out that although Android users are very vocal they aren’t very active when it comes to downloading and reading magazines.” —Boris Veldhuijzen van Zanten, Co-Founder of The Next Web
Without hard data, it’s very difficult to judge what people say they want and what they really want. Figure out ways to validate your features before deploying them to your entire user base.
The most successful apps in the field are staying on top of new technologies and trying out cutting edge methods of development. They’re the ones that are always learning from their peers, keeping their ears to the ground, and aren’t afraid to make some mistakes.
The App Store almost discourages experimentation. Along with the arduous processes required to make any changes, every time you release a new version, you lose all your ratings. The best apps don’t let these types of things stop them. Instead, they learn to validate their hypotheses and incrementally compound their successes.
The best ones are also extremely user focused. Rather than guessing and assuming they know their users, they ask them and test out many hypotheses. They constantly question what they can do to improve the user experience. Sometimes that involves questioning industry “best practices,” and understanding that what works for others doesn’t necessarily work with their app. Most of all, they want to delight users with extremely good service.
ALL. THE. TIME.
“We don’t have time to test right now because we’re working on this big release that’s coming out in 3 months.”
“We don’t have the resources to support a nice-to-have.”
At Apptimize, we hear this a lot. But the truth is that your release cycle shouldn’t be 3 months. While you’re working on your 3-month waterfall, your competitors are staying agile, talking to customers, learning from customer behavior, and improving their product 10x with 6 smaller iterations in the time it took you to do one big release. We’re not in the 90s anymore. Mobile isn’t boxed enterprise software.
Staying agile and user focused is critical to staying alive in this space where customers not only have a lot of choice, but switching costs are low and expectations are high.
This is why top apps like Facebook and Netflix built their own user testing and A/B testing processes and platforms before anyone else in the space even thought about doing these things. This is why these companies have found so much continued success. And they spent a lot of time and resources building these capabilities at a time when building your own was the only way to do it. Now any app can buy the same functionality for a hundredth of the cost of building it themselves. Not doing so would be tantamount to getting left behind.
We actually interviewed Lacy Rhodes, iOS Engineer at Etsy, a while ago on this very topic. Essentially, there is no one silver bullet. It’s about demonstrating value incrementally and showing how positive learnings and results from testing compound into huge gains.
Small early wins definitely help to get buy-in from a larger team. Ultimately, testing is about proving value. Both user testing and A/B testing will help you prove the value of your ideas to the rest of the team and get everyone genuinely excited about knowing what’s actually working and what’s not. It helps everyone waste less time, be more focused, and be heard.