Last September at TechCrunch Disrupt, Mark Zuckerberg responded to questions about his company's widely criticized mobile strategy. He said one of the company's biggest mistakes was "betting too much on HTML5 as opposed to native" and implied that HTML5 wasn't ready for an app like Facebook. The company was going forward with a native app strategy and had already released its first "native" iOS app, though it too drew broad criticism for its poor scrolling and rendering performance.
At Sencha we build Web application frameworks that are used by millions of developers worldwide. Every day, we see our customers bring more and more amazing apps to market leveraging pure open Web technology. We'd seen firsthand what HTML5 was capable of, and we knew for a fact that Web technologies were not the reason why the Facebook app was sluggish.
Zuckerberg's comments reignited the simmering Web vs. native debate: which is better, and which makes sense for app developers? In the weeks after he made his comment, many articles were written and many voices clamored to make a point one way or another. We decided the best response to the debate was to "shut up and code." So, we spent a few weeks of our spare time re-creating the core Facebook experience in pure Web technology. The result was Fastbook, a pure HTML5 application that showcases what you can really do with HTML5 on modern devices.
Since we launched Fastbook, nearly 40,000 people from all around the world have tried the HTML5 app, and the response from the Web community has been amazing. Our original blog post received over 4,000 tweets and 2,500 Likes! Fastbook has shown the developer community, and the Web ecosystem at large, what is truly possible with open Web technology.
This four-minute video gives you a quick overview of Fastbook, and shows you a side-by-side comparison of how well our HTML5 app performs against both the native iOS and the native Android Facebook apps (versions 5.2 and 1.9.12 respectively, the latest available when we made this video on December 10, 2012).
So, how did we do it?
Running the old Facebook app through an HTTP monitoring proxy makes it clear that the "native" app was mostly a collection of UIWebViews displaying Web pages from m.facebook.com. As the user scrolls down to reveal more content, new chunks of HTML are appended to the bottom of the page. When the user navigates to a different view, the current page is discarded; coming back to the old view means reloading the whole page again. This is clearly not ideal for delivering the best end-user experience or for minimizing network traffic.
This is a common practice when designing Web pages; however, when it comes to matching the fast and fluid experience that native apps provide, this approach falls short in many ways.
Today's mobile apps set a very high bar for user experience: animating content at 60 FPS while loading and rendering a potentially infinite amount of data, all within the constraints of limited mobile hardware. On mobile, the mental model and technical implementation of developing a "webpage" simply don't make sense.
In any application, regardless of technology and platform, limiting the application's memory footprint is a massive factor in achieving high performance. Appending more and more DOM nodes (i.e. more HTML markup) to the same document means ever-higher memory consumption, which directly translates to a sluggish user experience. Additionally, as the DOM tree keeps growing, it becomes dramatically more expensive for the browser to lay out the Web application. New layouts happen a lot more often than you'd think: virtually any change to the DOM tree (adding, removing or changing content) requires a re-layout. The layout calculation is very CPU-intensive and can pause the user experience for hundreds of milliseconds while the browser does the math. Moreover, throwing away thousands of nodes and recreating them wastes precious time by forcing the browser to garbage-collect.
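One common way to keep layout costs down is to batch all DOM reads together and all DOM writes together, so the browser never has to recompute layout in the middle of a read-write-read sequence. The sketch below models that idea with plain callbacks instead of real DOM calls; `createFrameScheduler`, `measure` and `mutate` are illustrative names (in a browser, `flush` would typically run inside a `requestAnimationFrame` callback).

```javascript
// Minimal read/write batching scheduler (sketch, not Fastbook's actual code).
// Running every queued read before any queued write avoids the interleaved
// read-write-read pattern that forces the browser into repeated re-layouts.
function createFrameScheduler() {
  var reads = [];
  var writes = [];
  return {
    // Queue a layout read (e.g. checking an element's offsetHeight).
    measure: function (fn) { reads.push(fn); },
    // Queue a DOM mutation (e.g. setting a style or appending a node).
    mutate: function (fn) { writes.push(fn); },
    // Drain the queues: all reads first, then all writes.
    // In a browser this would run once per animation frame.
    flush: function () {
      reads.splice(0).forEach(function (fn) { fn(); });
      writes.splice(0).forEach(function (fn) { fn(); });
    }
  };
}
```

Even though mutations may be requested before measurements, a flush always performs the measurements against a layout that has not been invalidated mid-frame.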
We've found that the right approach to building these kinds of applications is to recycle the DOM tree as much as possible. For Fastbook, we implemented an Infinite List component that can handle items of unknown sizes. Only a very small set of DOM nodes is actually created to fill the visible screen area, and those nodes are then constantly recycled to render the next and previous data on demand. As a result, performance stays constant regardless of the amount of data the application requires. Low memory consumption, cheap layout, minimal data transfer on the wire: infinite by design is a must for applications like this.
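The core of the recycling idea can be reduced to two pure functions: one that maps a scroll position to the range of data indices currently visible, and one that maps each data index to a slot in a fixed-size node pool. This is a simplified sketch assuming fixed-height items (Fastbook's actual Infinite List also handles items of unknown height); all names here are illustrative.

```javascript
// Given the scroll offset, viewport height, per-item height and total item
// count, compute which data indices are on screen. Only these need DOM nodes.
function visibleRange(scrollTop, viewportHeight, itemHeight, total) {
  var first = Math.max(0, Math.floor(scrollTop / itemHeight));
  var last = Math.min(total - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) - 1);
  return { first: first, last: last };
}

// Each data index always renders into pool slot (index % poolSize), so
// scrolling re-binds existing nodes instead of creating and destroying them.
function slotFor(index, poolSize) {
  return index % poolSize;
}
```

With a pool only slightly larger than the number of on-screen items, memory use stays flat whether the feed holds fifty stories or fifty thousand.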
The latest Facebook application is built in native code, and it does in fact implement the main News Feed this way. Fastbook, however, makes it clear that reaching this level of performance is a matter of design strategy, not of technology choice.
As the user interacts with the app, multiple heavy tasks have to be performed at the same time: extra data is transferred in the background, high-resolution images are downloaded and decoded, new list items are rendered and measured, visual layers are composited and translated, and so on. Mobile apps aim for fluid performance, and at 60 FPS, with only about 16 ms between frames, the window to complete all of these tasks without dropping frames is very narrow. Delivering great perceived performance is therefore the goal, and prioritization is what separates a slow, jittery list from one that feels fast and smooth. In Fastbook, we split rendering into two different modes: progressive and buffered.
Progressive rendering kicks in while the scrollable area is animating at 60 FPS and extra content has to be rendered as the list approaches its bottom. In this mode, smooth animation and text rendering are given top priority, while loading and drawing high-quality images is temporarily suspended. Low-resolution versions of the same images are displayed as soon as they are downloaded, saving time on image decoding. Deferring work and making these choices creates the perception of fast loading and high responsiveness.
Buffered rendering takes over when the app is idle: the animation has stopped and the user is reading their News Feed. This is the golden opportunity to run the heavy-lifting tasks, such as downloading and painting large images, and to proactively render a batch of extra items. When the user continues scrolling, the content is already there and the experience stays smooth.
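The split between the two modes can be modeled as a small work queue that defers heavy tasks while scrolling and flushes them once the app goes idle. This is a toy sketch of the idea, not Fastbook's actual scheduler; `createRenderQueue`, `setScrolling` and `schedule` are illustrative names, and in the real app "idle" would be detected from scroll momentum ending.

```javascript
// Two-mode render queue (sketch): tasks run immediately while idle, but are
// buffered during scrolling so the 16 ms frame budget goes to animation.
function createRenderQueue() {
  var scrolling = false;
  var deferred = [];
  var done = [];
  return {
    // Flip between progressive (scrolling) and buffered (idle) modes.
    // Leaving scrolling mode flushes everything that was deferred.
    setScrolling: function (isScrolling) {
      scrolling = isScrolling;
      if (!scrolling) {
        deferred.splice(0).forEach(function (task) { done.push(task()); });
      }
    },
    // Heavy work, e.g. decoding a full-resolution image for a list item.
    schedule: function (task) {
      if (scrolling) { deferred.push(task); }
      else { done.push(task()); }
    },
    completed: function () { return done.slice(); }
  };
}
```

The same queue naturally supports the proactive pre-rendering described above: once idle, extra off-screen items can be scheduled and will run right away.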
One of the biggest problems many Web developers face when they start building large Web applications, with many visual elements ("views") and complex styling, is scalability. They soon realize that while a single view may perform well on its own, putting many of them together in the same Web document slows things down significantly.
In Fastbook, we demonstrated a technique called sandboxing, which programmatically detaches complex views and renders them into their own documents (i.e. IFRAMEs), thus partitioning the DOM tree. For all the reasons discussed above, keeping the DOM tree slim improves performance across the board, and sandboxing helps us achieve that. Our sandboxing technique ensures that events, styling and positioning are seamlessly proxied between the sandbox and the master view. Sandboxing isolates layouts from one another and keeps the primary DOM tree as light as possible.
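The event side of this proxying can be illustrated with a minimal model: anything fired inside the sandboxed view is re-dispatched on the master view, so the rest of the app never needs to know the view lives in a separate document. This is a toy stand-in for the real iframe plumbing; `createEmitter` and `proxyEvents` are illustrative names, not Fastbook APIs.

```javascript
// Minimal event emitter standing in for a view's DOM event surface.
function createEmitter() {
  var handlers = {};
  return {
    on: function (type, fn) {
      (handlers[type] = handlers[type] || []).push(fn);
    },
    fire: function (type, payload) {
      (handlers[type] || []).forEach(function (fn) { fn(payload); });
    }
  };
}

// Re-dispatch every listed event type from the sandboxed view onto the
// master view, so listeners attached to the master see sandbox events
// as if the view had never been detached into its own document.
function proxyEvents(sandbox, master, types) {
  types.forEach(function (type) {
    sandbox.on(type, function (payload) { master.fire(type, payload); });
  });
}
```

In the real implementation the same proxying has to cover styling and positioning as well, so that an iframe-hosted view measures and paints exactly as an inline one would.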
Fastbook shows that on modern browsers, HTML5 is ready for powerful, high-performance applications. Developers who treat the Web browser as an application platform and follow the straightforward techniques described here can build complex apps that scale, and with that mindset, HTML5 is ready for nearly any kind of application.
Jacky Nguyen is the lead architect of Sencha Touch. You can try Fastbook and see what HTML5 is capable of at fb.html5isready.com.