Since the earliest days of mobile phones that could support software applications, the industry has been promising itself a set of standards that would unify devices and architecture. I've sat through countless presentations where, invariably, I stare at a warped seven-layer OSI stack diagram that is somehow supposed to make it all better (usually in the shaded areas). The next slide always has some kind of cliché, like "write once, play many!" followed by a third that lists all the advantages of a device-converged world.
Yeah, we get it, but it never happens. The problem with being critical of any effort to create a set of standards that cuts across multiple devices, allowing developers to scale up their application development, is that it's politically incorrect. Imagine if you were a candidate for public office and you dared to say that you were against "better education" or "improved healthcare." Who could actually go on record with a straight face and say that device convergence is a bad idea?
I can! And let me tell you why...
The complexities associated with device fragmentation have kept the bar pretty high for new entrants. The companies that remain, whether they're writing lower-level software, applications, or even mobile websites, have probably created a war chest full of tools, processes and strategies for dealing with the complexity.
Interestingly, if you talk to a dozen top mobile publishers about this subject, every one of those 12 companies will be absolutely convinced that it uniquely possesses the secret mojo needed to manage the problem of device divergence. Each will further consider its solution superior to any other and the reason for its success. Even more amazing, none of those 12 solutions will be the same. That just goes to show how many ways there really are to tackle this problem.
Further, if it really were possible to write a single application that cut across a majority of mobile devices with almost no effort, then the user experience across those devices would be nearly identical. That doesn't really provide the user with a lot of choice. It would essentially "commoditize" the business, however unpopular it is for me to say it. All of those interesting device UIs (trackballs, touchscreens, soft keys, pointers) demand a rethinking of the human interface for applications, and when successfully executed they give users a dramatically improved experience. Or let's put it this way: at least some users will love it, others may not, but the ones who do remain loyal to the carrier or device manufacturer who brought them that choice.
Why won't we ever solve the problem of device fragmentation?
I know the reason. It's because of the carriers themselves. They buy the handsets, they spec what goes inside them, and they have a vested interest in keeping device fragmentation alive. Take, for example, the Palm Pre. The device has highly distinctive UI characteristics and by all accounts is quite difficult to develop for. Why on earth did Sprint want that phone in its lineup? Simple: because if customers fall in love with the Palm Pre, they have to get their service from Sprint. The same is true of the iPhone, of course, at least for the moment. If you want a production iPhone in the U.S., you're signing up with AT&T Mobility.
So long as carriers have the ability to buy handsets on an exclusive basis, the strategy of using unique devices to capture and secure new users will be alive and well. So if you as a developer want in on those new users (you know, the ones who purchase 70 percent of their content in the first three months of device ownership), you'll need to customize your code to target those devices. Furthermore, you'll be rewarded for that effort by having only a limited number of competitors to deal with. The more complex or unusual the device, the fewer the companies that will go out and support it, and the higher your ROI is likely to be.