Postlight
Prioritizing Performance
Building an Android app for India with React Native
Here at Postlight we’re big fans of React Native, Facebook’s open source framework for building native mobile applications with React. React Native enables any developer with modern JavaScript experience to hop the fence into mobile application development with relative ease and build cross-platform mobile apps in the same declarative, composable, component-based fashion they would build a React web front-end.
So when a client approached us about building an Android application with a laundry list of features including real-time text and image chat, video streaming, and an education component consisting of user-completed flash cards and quizzes, naturally our initial thoughts around implementation leveraged React Native.
Even though we were building an application targeting only Android, the benefits of being able to use React Native were obvious.
We had many features to build and very little time to build them. We needed to work at the speed of JavaScript. We needed to share entire components and chunks of business logic seamlessly across the application. We needed to leverage many of the 500,000 npm packages available, so we could focus on solving our client’s problems and not on reinventing wheels. We needed to scaffold, build, and iterate quickly.
Additionally, there are many more JavaScript developers than there are Android developers out there in the world (and at Postlight!), so we knew that finding developers capable of building and maintaining the application after handoff would be much easier.
There was just one catch, one further consideration, one additional “feature request”: all the people who will use this app? They live in India. And most of them use inexpensive, low-end Android devices connected over slow and spotty mobile networks. That meant the app had to be lean and fast enough to run on slow hardware, tolerate inconsistent networks, and work offline when there was no network available at all. It didn’t just need to be usable; because these constraints are endemic to the Indian market, the application needed to perform well there.
When faced with engineering constraints like these, JavaScript is a notorious actor. It has a well-deserved reputation for bloating pages unnecessarily and for executing especially slowly on Android devices, not for enabling performance-critical, offline-first native applications. We weren’t sure if low-end Android devices had the hardware resources required to execute a high-level framework like React Native, let alone whether we could build something performant on top of it. We decided to find out.
Get to know your audience
Let’s face it: most of us living in a developed western country are living in a technical bubble. Those of us working in tech in a developed western country are living in a bubble inside of a bubble. Your product’s users are much less likely than you to be on the latest Apple hardware, connected over the most expensive internet plan available.
The importance of dogfooding the products you build is widely acknowledged and practiced. It’s much easier to identify superfluous features and spot what’s missing when you become a consumer of the software you’re building. Importantly, “dogfooding” entails testing the real-world usage of a product, and the real world is probably very different from the one inside our bubble within a bubble. It’s probably full of slower hardware and internet connections that aren’t consistent, fast, or reliable. How many of us develop, test, and “dogfood” the software we build in real-world conditions?
On average, 94% of the population in the 75 countries included in the Index live within range of a mobile signal. However, only 43% have access to a 4G signal.
— State Of Connectivity 2016: Using Data to Move Towards a More Inclusive Internet
Our client recognized the disconnect that existed between their target market and our bubble within a bubble and addressed it with a Google Drive full of research and statistics detailing the state of consumer technology in India. There were graphs of mobile network classes and how they were distributed, market trends of the popular devices and their hardware and software specifications, and analyses of usage habits we’d never encountered. (Did you know that most of the phones popular in regions like India are sold with multiple SIM card slots? We didn’t! They enable users to subscribe to multiple phone service providers and take advantage of differences in the pricing of texts, calls, and data.)
From this research we identified a set of Android devices and mobile network properties that aimed to be fairly representative of those used by average Indians.
We landed on a Karbonn Alfa A112 and a Micromax Q336 as the primary devices we’d use throughout development. These devices were among the most purchased in India in recent years and represented common trends among manufacturers: a single-core 1 GHz processor with 512 MB of RAM, running Android Jelly Bean or KitKat. Other devices included the Xiaomi Redmi 4A, Samsung Galaxy J1 Ace, and OnePlus One.
Finding representative network specifications proved more difficult. In an emerging market like India, the overall availability of and access to mobile networks, and the types of networks themselves, were changing drastically year to year. India continues to rank among the fastest-growing countries in internet users, adding tens of millions of new users every year, a rate of growth that necessitates equally rapid growth in internet infrastructure.
Facebook’s 2016 “State of Connectivity” report ranks India 46th in the world in overall internet “Availability,” with an average mobile download speed of 5,633 Kbps and just under half of the population having access to a 4G network. Since this class of network is very different from the 50+ Mbps Ethernet connections most of us enjoy at the office, we resorted to software to emulate it. A variety of tools and open source software exist for throttling network speeds and artificially dropping packets; we found success with Augmented Traffic Control and Network Link Conditioner.
Over the course of the project, these devices and networks became our new bubble. We developed, tested, and evaluated our milestone progress against them. Application performance made its way into our ticket-tracking system, and regressions in performance were treated the same as regressions in functionality: a bug to be fixed.
Throttling our development and QA cycles to these conditions may have slowed our output (each ListView took just a bit longer to load and render), but it put the performance of what we were building front and center. When developers, product managers, and designers use a product in real-world conditions and are forced to face its real-world shortcomings, application performance shifts from a vague notion you can occasionally feel good about to a product priority. The performance of a software product is just as dependent on the process you used to build it as it is on the technical architecture you built it on.
Be selfish: Solve your problems (your client’s problems) and delegate the rest
Confronted with the full scope of difficulties imposed by limited hardware capabilities and spotty network connections, in the context of the app’s full feature list, our initial discussions around technical architecture focused on how to minimize risk.
We’re big believers of reaching for off-the-shelf services and OSS dependencies over re-inventing wheels in our work. As Gina Trapani notes in “How to Build for the Handoff”, these dependencies are typically much easier to understand and maintain, especially as a project transitions from Postlight to our client’s in-house engineering team.
Firebase Realtime Database formed the core of our architecture. We’ve had great success using it in a variety of internal and client web-based work, but this was our first experience using it in a mobile app. We were delighted to find that it solved many of the demanding network requirements out of the box. It was “offline first” and optimistic, persisting data locally before trying to sync it with a central server when a network connection was available. It handled spotty connections excellently, syncing what it could and queueing up requests to be retried as the network state faltered. It even tunneled requests over a persistent WebSocket connection, which cut down on latency by avoiding the overhead inherent in issuing independent HTTP requests.
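Firebase handles all of this for you behind its API, but the underlying offline-first pattern is worth sketching. The following is a minimal, hypothetical illustration of the idea in plain JavaScript, not Firebase’s actual implementation; the real thing persists the queue to disk and syncs asynchronously.

```javascript
// Hypothetical sketch of an offline-first, optimistic write queue.
// Writes apply to a local view immediately; a flush loop drains the
// queue whenever the network cooperates. (A real implementation would
// persist `pending` to disk and send asynchronously.)
class OptimisticQueue {
  constructor(trySend) {
    this.trySend = trySend; // returns true if the server accepted the write
    this.pending = [];      // writes awaiting sync with the server
    this.local = {};        // optimistic local view of the data
  }

  // Apply the write locally right away, then remember it for syncing.
  write(key, value) {
    this.local[key] = value;
    this.pending.push({ key, value });
  }

  // Attempt to sync; anything that fails stays queued for the next attempt.
  flush() {
    this.pending = this.pending.filter((item) => !this.trySend(item));
  }
}
```

The key property, and the one that made spotty networks tolerable for us, is that the user-visible state (`local`) never waits on the network.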
We leveraged other pieces of the full Firebase product suite heavily, using Firebase Cloud Functions to maintain indices (more on this below) and offload computationally intensive tasks from clients, Firebase Phone Auth, and the Analytics and Crash Reporting products.
By delegating the hard problems of network resiliency and offline data synchronization to Firebase, we saved months of engineering time and could focus on solving our client’s problems: building and optimizing the user-facing features of the application.
Make it fast, make it right, make it work… Wait, what?
Now that we were emulating the network and hardware conditions faced by average Indians right from our home office, we set out to validate whether our proposed technical architecture decisions would land us anywhere near the realm of acceptable performance. We knew it would be an uphill battle full of optimization and compromise, but we needed to be sure it wasn’t going to be Sisyphean.
The best way to do this, we decided, was to craft a sort of upside-down initial prototype. We didn’t care about the normal order of priorities when building a prototype: it didn’t need to really work (“here’s a single ListView and nothing else!”), it certainly didn’t have to be right (“what was that about session persistence?”), it just had to be fast. Really, very, truly fast.
Over the course of two weeks, we bootstrapped the project and built the stereotypical bare-bones chat app: a ListView of conversations, with each conversation linking to a ListView of the messages sent between users. The initial version was fast for a prototype, but not fast enough for our prototype. It took another week of optimizing queries and selectors and judiciously tuning `shouldComponentUpdate` until single-user performance was acceptable. But we were building a chat app. How would our prototype handle many users? We spun up virtual machines running scripts that faked load on Firebase, sending fake messages to fake users several times each minute to test message receipt latency in real-world conditions.
Soon we were four weeks into a relatively short client engagement, with not much to show toward the actual product other than our (really, truly fast) chat prototype. In reality, the time was well spent, as we had accomplished our goals:
- We validated that our core technical architecture met performance requirements. It was possible to build an app with React Native and Firebase that ran performantly on target devices.
- We learned about the compromises and optimizations we would have to make upfront, particularly with respect to our data schema. Data stored in Firebase is idiomatically highly denormalized, but in our case we couldn’t afford the round-trip latency inherent in performing client-side joins. Data had to be duplicated and joined directly in Firebase in critical spots. In other spots, just the fields of associated data needed to display a particular screen were duplicated, while the full data structure could be fetched after we rendered the screen. We ended up with ad-hoc indices maintained by cloud functions, with some Firebase nodes general and denormalized and others highly coupled to particular screens. This type of schema adjustment would have been a very expensive optimization to make further down the line of development.
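To make the denormalization concrete, here is a hypothetical sketch of a “fan-out” write of the kind described above. The path names and fields are illustrative, not our actual schema: one message write also duplicates just the fields the conversation-list screen renders, so that screen never has to fetch and join full message objects.

```javascript
// Hypothetical fan-out write: build a multi-path update map that stores
// the full message once and duplicates only the display fields a second
// time under the conversation node. (Path names are illustrative.)
function buildMessageFanOut(conversationId, messageId, message) {
  return {
    // Full message under the conversation's message list.
    [`/messages/${conversationId}/${messageId}`]: message,
    // Only what the conversation-list screen needs, duplicated to avoid
    // a client-side join on a slow network.
    [`/conversations/${conversationId}/lastMessage`]: {
      text: message.text,
      sentAt: message.sentAt,
    },
  };
}
```

In the real app, a map like this would be applied atomically with a single Firebase multi-path `update()` call, and a cloud function would keep any derived indices consistent.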
In a way, our prototype phase followed the normal order of events except that “Make it fast” was a critical component of “Make it work.” The app shouldn’t be considered “working” if it wasn’t fast enough to be usable on target devices.
Artisanal, free-range, small batch optimizations
With the project bootstrapped and architecture validated, we had a high level framework in which to start fleshing out product features. Building around the right architecture meant that further development was more about keeping the application performant as complexity was added rather than making it performant, which is still easier said than done. We realized most of the remaining necessary performance gains through a small number of optimizations:
- Use the Platform: Native components and APIs will always be more performant than JavaScript re-implementations. Do not use (or reinvent) JavaScript implementations of components like navigators, dialogs, or tab bars; these are primitives provided by the platform. Components should be thin wrappers around platform-provided implementations whenever possible.
- Implement custom `shouldComponentUpdate` handlers: `redux-connect` and `React.PureComponent` bring their own implementations, but those have to be general enough to work with most React applications by default. You know your data model and can realize sizable performance improvements by leveraging that domain knowledge.
- Navigation libraries break component lifecycle assumptions: In most React Native navigation libraries, when a new screen is transitioned to, the components making up the previous screen are not unmounted. This behavior breaks the “clean up” logic encapsulated in `componentWillUnmount` in libraries that subscribe to a centralized store and trigger re-renders, e.g. `react-redux`. Our solution was to subscribe to screen visibility events and add an `isScreenVisible` flag to the React context. If a component was on a screen that was not currently visible, we knew we could short-circuit its render cycle with `shouldComponentUpdate` to avoid wasting time re-rendering a non-visible view.
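The visibility short-circuit can be sketched as follows. This is a simplified illustration with the React specifics stripped away: we assume some navigation focus/blur listener flips the flag (in our app, the flag traveled through React context), and the shallow props comparison stands in for whatever `shouldComponentUpdate` logic a component would otherwise run.

```javascript
// Simplified sketch of a component that skips re-renders while its
// screen is off-screen. React and navigation wiring omitted; the flag
// would really arrive via context from screen focus/blur events.
class VisibilityAwareComponent {
  constructor(props = {}) {
    this.props = props;
    this.isScreenVisible = true;
  }

  // Called by (hypothetical) navigation focus/blur listeners.
  setScreenVisible(visible) {
    this.isScreenVisible = visible;
  }

  // Off-screen? Skip the render no matter what changed. Otherwise fall
  // back to a shallow comparison of props.
  shouldComponentUpdate(nextProps) {
    if (!this.isScreenVisible) return false;
    const keys = new Set([
      ...Object.keys(this.props),
      ...Object.keys(nextProps),
    ]);
    for (const key of keys) {
      if (this.props[key] !== nextProps[key]) return true;
    }
    return false;
  }
}
```

The payoff on a single-core device is substantial: store updates driven by background screens (say, new chat messages arriving) no longer steal render time from the screen the user is actually looking at.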
React Native is here to stay
The fact that we were able to take React Native and make it work in our performance-critical application is a testament to its power and flexibility.
React Native is a high-level framework for building a mobile application, but it is surprisingly unprescriptive about how you work within that framework. It defines the contract but leaves the specifics mostly up to you.
If a view or platform API is missing from what is bundled in the standard library, it is simple to build it yourself, fit it into place, and have your application code be none the wiser. If the performance of a particular component or screen is insufficient, you are free to replace it with a pure native implementation. React Native provides you with a nicely furnished house, but it still shows you where all of the exits are.
This flexibility enabled us to abstract over the details of the Android platform when we could, speeding development, while still letting us dip down into native code where it was necessary to tune performance. Combine this with the benefits gleaned from the React architecture itself (the composable, declarative, component-based approach to UI) and it’s evident that React Native will continue its rapid rise as the framework of choice for mobile development.
React Native is a natural result of Atwood’s law, which states:
any application that can be written in JavaScript, will eventually be written in JavaScript.
And we couldn’t be happier about it.
Daniel Ramirez is an Engineer at Postlight, a New York based digital product studio. Have a cross-platform or otherwise technically challenging mobile application that needs building? Get in touch.
Story published on Oct 19, 2017.