Reuse and Recycle: The Mobile Web

Quick! Name two topics that have dominated discussion in technology circles over the past few years.

If you said “mobile” and “HTML5,” then you’ve hit the right page on my site.

With the incredible growth of the smartphone market, staying abreast of the latest speculation, leaks and reviews of the hottest mobile devices has become an international pastime. From Boston to Bangalore, the ins and outs of the newest iPhone are dissected in the minutest detail, with Apple aficionados analyzing everything from the glass used and the finish on the back of the case to the smallest UI enhancement.

In parallel to the explosion of mobile devices, we’ve had several years of hype surrounding the latest iteration of the HTML specification, HTML5. Updated for the first time in more than a decade, the HTML specification includes numerous additions aimed at allowing developers to build more robust web applications using just the tools that ship with the browser: HTML, CSS and JavaScript. This work has energized the already hot web development community and might just represent the final move away from plugin-based architectures for heavy-duty web application development.

This post is about the convergence of these two trends.

Your mind might immediately flash to some of the headline-grabbing collisions between the two that you’ve seen over the years. The lack of support for Adobe’s Flash on iOS devices alone has driven more than a few headlines.

No, while those collisions are certainly part of the conversation, the important convergence isn’t quite so dramatic. It relies on something simple that almost all of the interesting smartphones sold today have in common: a good web browser.

It’s true. Starting with the decision to package a full-powered version of Apple’s web browser, Safari, with the iPhone, the trend in mobile devices has been towards providing powerful web browsers. Many are based on the open source WebKit rendering engine and provide solid support for a variety of emerging web standards, including those marshaled under the HTML5 banner.

If you’ve spent any time wrestling with legacy web browsers on the desktop, the idea of developing for an ecosystem centered on new, standards-compliant web browsers seems like nirvana.

It’s not quite nirvana, but it’s still pretty exciting.

It’s a pretty easy story to follow. Building for the mobile web lets you build an application with one codebase and one set of technologies that works out of the box on any mobile device with a capable web browser.

That’s pretty cool.

The rest of this post will look at both of these trends in more detail, giving you some context on how we got here and what “here” actually means in terms of standards, devices and browsers.

The Emergence of the Open Web Platform

While the web is a wonderfully messy and vibrant place, where sites can go from a sketch on the back of a napkin to a headline-making enterprise with a billion-dollar valuation in the course of a few months, the World Wide Web Consortium (W3C), the organization responsible for the standards the web is built upon, moves more like it’s overseeing transatlantic shipping in the 1800s. If, in the early 2000s, you were the kind of person who paid attention to its activities, you could wait for years for anything interesting to happen. There was lots of discussion, lots of tweaking of existing specifications and, really, not much else.

Let’s look at some dates.

  • In December 1997 the W3C published the HTML 4.0 specification
  • In early 1998 they published the XML 1.0 standard
  • In May 1998 they published the Cascading Style Sheets (CSS) level 2 specification
  • In January 2000 they published XHTML 1.0, the specification that reformulated HTML as an application of XML

After that, not much happened.

Things happened, of course. Meetings were held, road maps were mapped and progress, of a sort, towards the web of the future was visible in incremental revisions to standards. This orderly progress, to someone with only a passing interest in these sorts of things, probably seemed like a positive trend.

The reality on the open web was different. Out in the real world, the web was busy taking over the world. Fueled by a heady mixture of interest, money and a belief in the web as a platform with the power to change the world, it was very quickly being pushed and pulled in directions the standards bodies never dreamed of when they were drafting their documents. Compare the needs of the web of the mid-1990s, when these documents were being written, to the web of the dot-com era and you’ll see why so many problems fell to the creativity of web developers to solve. Solve them they did, with a mix of cleverness and practicality that created a number of excellent solutions, some of which are still used, as well as a number of clearly terrible solutions. Some of those terrible solutions are still used, too, but the less said about those the better.

Still, all the cleverness in the world wasn’t enough to get around the many limitations of the standards and the limitations present in the browsers themselves. Something as simple as generating rounded corners for a box was a topic worthy of hundreds of articles, blog posts and probably a patent or two.

Other topics were either impossible or took many years and herculean efforts to solve. The many creative, technically complicated solutions to serving custom fonts over the web fall into this category. Libraries like sIFR and Cufón leveraged third-party technologies such as Flash, VML and canvas to bring custom type to the web, through heroic individual effort and at the cost of third-party dependencies. Even developers who believed in the open web stack had to rely on closed technologies simply to get a corporate typeface onto the web in a maintainable way.

Something had to give.

Web Standards, Flash and the Rebirth of the Open Web Platform

All that really needs to be said about the immediate effectiveness of the late-1990s standards is that the era that directly followed was dominated by Adobe Flash as the platform of choice for anything even remotely interesting on the web. Everything from financial dashboards and video delivery to the slickest marketing work was handled by Adobe’s ubiquitous Flash plugin. Unfortunately for people who believed that the web was most effective when it was built on top of open tools, Flash provided a far richer platform for developing interactive applications and “cool” effects for marketing sites. It really became the de facto standard for deep interaction on the web.

The core web technologies were basically ignored.

JavaScript was relegated to the status of a toy language used for occasional form validation, CSS was poorly understood and even more poorly implemented in web browsers, and HTML was most useful as a dumb container for serving Flash files.

At least, that was the perception of many people who were making and consuming web sites at the time. The truth, as is often the case, was more complicated than the surface indicated. Beyond that façade, an enormous amount of work was being done to understand and take full advantage of the standard web technologies. The promise of the open web platform was clear, even if it wasn’t obvious to the majority of the people working on the web at the time.

From organizations like the Web Standards Project (WaSP) and the wildly influential mailing list/online magazine A List Apart, through the heroic work of individuals like Peter-Paul Koch and Eric Meyer, the fundamental technologies that drove the web were being reevaluated, documented and experimented with at a furious pace. Quietly, invaluable work was being done documenting browser behavior, crafting best practices and solving implementation issues. While it wasn’t the most fashionable portion of the web in those days, there was plenty of activity using open standards in imaginative, evocative ways. That research and work served as the foundation for the revolution that would follow and change the course of the web.

Ajax and the WHATWG

That revolution had two “shots heard round the world.” One took place under the eye of the W3C itself and the other came straight from the front lines. The following section examines those two galvanizing moments in the history of the web standards movement.

THE WHATWG

The first event was the formation of the Web Hypertext Application Technology Working Group (WHATWG) on June 4, 2004.

The WHATWG was actually born three days earlier at the W3C Workshop on Web Applications and Compound Documents. This W3C meeting was organized around the (at the time) new W3C activity in the web application space and included representatives from all the major browser vendors as well as other interested parties. There, in the first ten-minute presentation of session 3 on the first day of the two-day event, representatives from Mozilla and Opera presented a joint paper describing their vision for the future of web application standards. The position paper was authored in response to the slow pace of innovation at the W3C and the W3C’s focus on XML-based technologies like XHTML over HTML. It presented a set of seven guiding design principles for web application technologies. Because these principles have been followed so closely and have driven so much of what’s been developed over the last several years, they’re repeated in full below.

Backwards compatibility, clear migration path

Web application technologies should be based on technologies authors are familiar with, including HTML, CSS, DOM, and JavaScript.

Basic Web application features should be implementable using behaviors, scripting, and style sheets in IE6 today so that authors have a clear migration path. Any solution that cannot be used with the current high-market-share user agent without the need for binary plug-ins is highly unlikely to be successful.

Well-defined error handling

Error handling in Web applications must be defined to a level of detail where User Agents do not have to invent their own error handling mechanisms or reverse engineer other User Agents’.

Users should not be exposed to authoring errors

Specifications must specify exact error recovery behaviour for each possible error scenario. Error handling should for the most part be defined in terms of graceful error recovery (as in CSS), rather than obvious and catastrophic failure (as in XML).

Practical use

Every feature that goes into the Web Applications specifications must be justified by a practical use case. The reverse is not necessarily true: every use case does not necessarily warrant a new feature.

Use cases should preferably be based on real sites where the authors previously used a poor solution to work around the limitation.

Scripting is here to stay

But should be avoided where more convenient declarative markup can be used.

Scripting should be device and presentation neutral unless scoped in a device-specific way (e.g. unless included in XBL).

Device-specific profiling should be avoided

Authors should be able to depend on the same features being implemented in desktop and mobile versions of the same UA.

Open process

The Web has benefited from being developed in an open environment. Web Applications will be core to the web, and its development should also take place in the open. Mailing lists, archives and draft specifications should continuously be visible to the public.

The paper was voted down, 11 to 8.

Thankfully, rather than packing up their tent, going home and accepting the decision, they decided to strike out on their own. They bought a domain, opened up a mailing list and started work on a series of specifications. They started with three:

  • Web Forms 2.0: An incremental improvement of HTML 4.01’s forms.
  • Web Apps 1.0: Features for Application Development in HTML.
  • Web Controls 1.0: A specification describing mechanisms for creating new interactive widgets.

Web Controls has since gone dormant, but the other two, Web Forms and Web Apps, have gone on to form the foundation of the new HTML5 specification.

Score one for going it alone.

As was mentioned, they’ve stuck to their principles over the years. Arguably the most important of these was the open nature of the standards process in the hands of the WHATWG. Before the birth of the WHATWG, the standards process and surrounding discussion took place on a series of very exclusive mailing lists, requiring both W3C membership and inclusion in the particular Working Group under discussion. There were public discussion mailing lists, but those were far from where the real action was taking place. It was a very small group of people, operating in a vacuum, completely separated from the people working with these technologies.

Instead of that exclusionary approach, the WHATWG made sure its activities took place in the open. If you subscribed to the mailing list and commented, you were suddenly part of the solution. This has led to a vibrant, high-volume discussion. Follow the list closely and you’ll see high-profile engineers from some of the web’s biggest companies involved in discussion of proposed features. They’re giving feedback and offering proposals on problems they’re actually seeing on a day-to-day basis. This is an incredibly valuable approach. Instead of closed-loop discussions between a handful of high-level representatives of the various browser vendors and other interested parties, engineers from the practical side of the ledger are inserting themselves into the conversation and providing a wealth of real-world data and experience.

In addition, since the need for valid use cases was built into the process from the beginning, the features that were proposed and implemented were very much based on the problems faced by the people in the trenches actually building sites.

This doesn’t always ensure that they end up getting their way, of course. They weren’t kidding when they stated that “every use case does not necessarily warrant a new feature.” If you follow the WHATWG mailing list, it seems like not a month goes by without someone proposing a new feature, complete with valid use cases, and failing to get their proposal approved. As one example, months of discussion around a standardized mechanism to control the way scripts are loaded and executed went nowhere despite well-reasoned arguments from a number of high-profile sources.

This works because the editor of the specification, Google’s Ian Hickson, serves as a sort of benevolent dictator (or less-than-benevolent, depending on whom you ask). After all the shouting, he’s got final say and isn’t shy about using it.

Not everyone is happy all the time, but the process works.

It works so well, in fact, that the W3C invited the WHATWG to come in from the cold, transforming the wild band of outlaws into part of the functioning heart of the establishment. In 2007, just a few years after it turned its nose up at the Mozilla/Opera proposal, the W3C adopted the work of the WHATWG to form the basis of a new HTML working group. The momentum of that working group eventually led to the W3C shuttering development of XHTML 2.0, which had been planned as a complete rewrite of the HTML specification as an application of XML. In 2004 the future of the web was going to be XML and the WHATWG was a band of outsiders. In 2009 XHTML 2.0 was dead and the WHATWG were the ones driving the web forward.

Since that point, work on the HTML5 specification has continued apace, with an incredible amount of attention given to the core specification and several related specifications.

AJAX

On 18 February 2005, Jesse James Garrett, co-founder and president of design consultancy Adaptive Path, wrote an article entitled “Ajax: A New Approach to Web Applications.” In it, he described a trend, new at the time, in apps like Gmail and Google Maps that focused on smooth, application-like experiences. He coined the term Ajax to describe it and called the pattern “a fundamental shift in what’s possible on the Web.”

He was certainly right.

He described Ajax, the technology, this way:

  • A standards-based presentation layer
  • Dynamic interaction using the Document Object Model
  • Data interchange using XML
  • Asynchronous data retrieval using XMLHttpRequest (XHR)
  • JavaScript binding it all together

While the specific technology stack is a little out of date, as JavaScript Object Notation (JSON) has replaced XML as the data interchange format of choice, the basic pattern of dynamic interfaces fed by XHR has survived and thrived in the years since. A minimal sketch of that pattern appears below.
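
To make the pattern concrete, here’s a minimal sketch of an Ajax request in plain JavaScript. The endpoint URL and the element ID are hypothetical, and real code would add error handling, but the asynchronous XHR request followed by a DOM update is the core of the pattern Garrett described, with JSON standing in for XML:

```javascript
// Minimal Ajax sketch: fetch data asynchronously, then update the page
// without a full reload. The endpoint and element ID are invented.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/messages', true); // true = asynchronous
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var messages = JSON.parse(xhr.responseText); // JSON instead of XML
    document.getElementById('message-count').textContent = messages.length;
  }
};
xhr.send();
```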

I’m not sure how instrumental he expected the article to be in that “fundamental shift,” but in hindsight it was a watershed moment in the history of standards-based development.

Garrett’s article didn’t invent the technology pattern, of course. It had actually been growing organically for several years, with the fundamental technologies in place as early as 2000. What the article did do was give focus to the trend with an intelligent, easy-to-understand definition and a very marketable name. With that focus, the pattern went from a vague collection of sites and apps tied together by a common design sensibility, interaction feel and technology stack to something you could easily market and discuss. Instead of saying “I want to build a fast app that doesn’t rely on multiple page loads, like Google Maps, using standard web technologies,” you could say “I want this app to use Ajax” and people would instantly get what that meant.

Ajax proved to be a remarkably popular term. It actually morphed well beyond its original meaning to become a generic term for any dynamic interaction. Techniques that once would have been called Dynamic HTML (DHTML) and quietly laughed out of the building as a vestige of the late-1990s dot-com frenzy were suddenly hot “new” techniques that everyone wanted to leverage in their sites and applications.

The immediate popularity of Ajax meant that a number of very intelligent developers started to look at the infrastructure for developing web applications in a cross-browser way. Before Ajax, standards-based development was mostly focused on markup and style, which was valuable when doing content sites but didn’t provide the full solution when approaching the complexities of a browser-based application. After Ajax, it included a dynamic, interactive component that drew engineers from other programming disciplines in droves. There were a lot of problems to solve, and it seemed like every company in America was looking for someone to solve them. Libraries like Prototype, Dojo, MooTools and, eventually, jQuery rose up to fill in the gaps between the various browser implementations. This activity has never really slowed down. As Ajax and JavaScript-based applications have become more and more entrenched in the web development ecosystem, libraries, plugins and frameworks have been written for every conceivable task.

Into the Enterprise

As is often the case with new technologies and new design patterns, Ajax and JavaScript development were taken up first by more nimble, technology-focused organizations. It’s a lot easier to experiment with a new technology when you’re starting from zero, and the many startups that flooded the web at the time did just that. More established web companies with strong engineering teams, like Yahoo!, also took up the cause. It took slightly longer for Ajax to make its way into the enterprise. Larger organizations with less web-specific talent move at a much slower pace, especially when dealing with large legacy codebases.

It has happened, however. A lot of those startups turned into web giants and brought the influence of the Ajax approach to the masses. As more and more large organizations take up site redesigns on their long cycles, they’ve taken that influence and run with it, looking for sites and components that leverage modern JavaScript techniques. It’s taken a while, but the typical Ajax “thick client” architecture has gained a real foothold in the enterprise. Three to five years ago, it was very typical for the entire presentation layer to be controlled by a massive, all-powerful server-side framework like ASP.NET or Spring. These days, the server is much more likely to be providing streamlined data services for consumption by a jQuery-based Ajax application. Data is handled by the server and application logic happens in the client.
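
As a rough sketch of that division of labor (the endpoint and field names below are invented for illustration), the server’s only job is to return JSON; the jQuery-based client takes care of turning it into markup:

```javascript
// Hypothetical thick-client pattern: the server exposes a JSON data service
// and all of the presentation logic lives in the browser.
$.getJSON('/api/orders/recent', function (orders) {
  var items = $.map(orders, function (order) {
    return '<li>Order ' + order.id + ': $' + order.total + '</li>';
  });
  $('#recent-orders').html(items.join(''));
});
```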

This thick-client architecture is especially important to consider when working on mobile devices, where we want to limit the amount of unreliable, potentially slow network traffic and push as much as we can into the browser up front in order to create a snappy application feel. There’s also the desire to create offline web applications. Clearly, pushing as much into the browser as possible becomes more and more important the more you expect your application to function when the device is offline.
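
One simple way to push data into the browser up front is the HTML5 localStorage API. The sketch below (the storage key, endpoint and render function are all hypothetical) shows the last successful response immediately and refreshes it whenever the network cooperates:

```javascript
// Hypothetical offline-friendly loader: render cached data right away for a
// snappy feel, then swap in fresh data once the request completes.
function loadOrders(render) {
  var cached = window.localStorage.getItem('orders');
  if (cached) {
    render(JSON.parse(cached)); // stale data beats a blank screen
  }
  $.getJSON('/api/orders/recent', function (orders) {
    window.localStorage.setItem('orders', JSON.stringify(orders));
    render(orders);
  });
}
```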

This architecture trend can be seen throughout the mobile web.

Full Circle

At this point, JavaScript and the rest of the web platform have completed a remarkable transition. Thanks to the efforts of the WHATWG and the popularity of JavaScript-based application architectures like Ajax, the web platform has completely turned the corner from the dark days when Flash was dominant. Now you’re more likely to read about the “death of Flash” (greatly exaggerated) or about Adobe’s latest tool for HTML5.

Now let’s look at the other important piece of this post: the remarkable growth of mobile devices and, most importantly for web developers everywhere, the remarkable growth of the mobile web.

The Explosion of Mobile and the Birth of the Mobile Web

The potential of the mobile web has been a tantalizing dream for technologists since the earliest days of web-enabled phones. Early attempts at unlocking that power were hindered by poor phone usability, limited bandwidth and other technological limitations. The most notable of these early attempts was the combination of the Wireless Application Protocol (WAP) and the Wireless Markup Language (WML). While it was much hyped, poor developer acceptance and a limited feature set for end users basically killed that effort before it ever had a chance to really succeed. It had some success in Japan (“big in Japan,” just like the band Cheap Trick), but it never did much in the United States and had only limited success in Europe and the UK.

In the mid-2000s it became common for smartphones and PDAs to ship with web browsers that were able to access the regular “HTML web.” As anyone who used one of those browsers can tell you, the experience was less than optimal. They were seriously underpowered and had poor standards compliance, and while the screen size and usability were better than the tiny grayscale screens of the earlier web-enabled phones, they were still lacking. They worked in a pinch, but it wasn’t exactly fun.

It wasn’t until the release of Apple’s iPhone in June 2007 that the power of the mobile web was truly unlocked. The wildly successful device shipped with two critical features that served as the key: a large screen and, most importantly, a full-powered version of the Safari web browser. That combination meant that the browser was powerful enough to run most web pages, and there was enough screen real estate to really enjoy the web once the pages loaded.

At least for iPhone users, the mobile web had finally arrived. The rest of the world was soon to follow.

Enter Android

From the day of its release, Apple’s iPhone absolutely dominated the smartphone market. Monster sales figures and unrelenting hype surrounded Apple’s flagship device.

The race was on to match Apple’s high standard.

Several contenders lined up to try to stem the Apple tide. Existing mobile players like Palm, Research In Motion, Nokia, Samsung and others started work on competing platforms aimed at taking down the giant from Cupertino. While some of these entries, like Palm’s ill-fated but excellent webOS, generated some initial excitement, none really took off and all of the existing players started to see declining market share. As it turned out, the most interesting entry into the field wasn’t an existing mobile player at all. It was an entirely new player in the mobile space: the web giant Google.

On November 5, 2007, Google, as part of the Open Handset Alliance, a consortium of companies dedicated to “open standards for mobile devices,” unveiled the Android operating system. Built on top of the Linux kernel, Android was positioned as an open source alternative to Apple’s tightly controlled iOS.

Initial reaction to the Android platform was mixed. Able to leverage both the power of the open source community and Google’s deep pockets, the platform had obvious potential. Still, the initial user interface fell short of what Apple provided. Couple that subpar user interface with an initial run of devices that failed to generate popular excitement, and it was clear the Android platform had its work cut out for it if it was ever going to contend with iOS.

It took a while, but it did contend.

With the November 6, 2009 release of the Motorola Droid, Android finally had its first hit. Estimates from the firm Flurry had it selling over 250,000 handsets in its first week and over one million in its first 74 days on the market, a pace slightly faster than that of the original iPhone.

Android’s potential was suddenly unlocked. With the right marketing, an attractive form factor and the continued improvement of the OS, Android became an attractive option. It took off and, so far, hasn’t looked back. According to Gartner Research, it went from 3.9% of the mobile OS market in Q4 2009 to slightly over 50% of a much larger overall market in Q4 2011.

The Introduction of Tablets

Living in a space somewhere between the familiar desktop web and the smartphone, the tablet form factor, led at present by Apple’s iPad, probably represents the most exciting piece of the mobile space. With more powerful processors, desktop-like screen resolutions and the same portability and gesture-based interface options as a smartphone, tablets are a real playground for developers.

The market is very much dominated by Apple at present. Since the release of the original iPad in April 2010, Apple has been the only real player in the market, with Gartner estimating its share at nearly 75%.

Whether Android-based tablets will see a Droid moment of their own, or some other player like Microsoft’s tablet-friendly Windows 8 comes along to capture market share from Apple, remains to be seen.

It’ll certainly be interesting to see how it plays out.

The Full Picture

In addition to the two giants mentioned above, which in the same Gartner survey accounted for 74.7% of the smartphone market, there are several other platforms and browsers that fill out the full mobile landscape. Considering the scale of this market, ignoring the “other” 25% is a dangerous proposition.

While Android and iOS have made the most headlines and are, in a lot of ways, responsible for driving the development of the mobile space, they’re not the only players. Considering the number of devices out there, even an OS or handset maker with just 5-10% of the market can represent tens of millions of mobile users. It’s also important to learn the lesson presented by the phenomenal growth of Android. This market is still young and therefore fluid. It’s important to keep an eye on trends and new players in order to stay ahead of the curve and react accordingly if there’s another Android-like growth spurt looming somewhere out on the horizon.

The New Browser Wars

With these two parallel trends focusing so much attention on web standards, it’s no surprise that browser vendors have reacted strongly. Over the past several years a brand-new browser war has been brewing, as all the major browser vendors have started to ramp up their standards support in a huge way.

For those of us who were around for the first browser war, the idea of improving standards support to gain a competitive advantage in the browser space is basically insane. Compare that concept to the previous browser war between Netscape and Microsoft and you’ll understand why. That war was a complete disaster for web standards. At its heart it prominently featured Netscape Navigator 4.0, one of the two major candidates for the worst browser release of all time in terms of standards support, and Microsoft’s eventual victory allowed it to release Internet Explorer 6.0, the other candidate, and then do nothing for several torturous years because there was no real competition to push it out of its slumber.

The current trend is just about the complete opposite.

Two players were already in place before mobile and HTML5 came into play. Mozilla Firefox, born out of the ashes of Netscape Navigator, had already been blazing a trail of standards-based software development throughout the 2000s. The open source project won developer hearts and minds very early with a strong standards-based approach, and it eventually began to sway consumers, taking market share away from the dominant Internet Explorer. Opera, a consistent niche player in the desktop browser market, also continued to provide excellent standards support throughout this period.

It was the introduction of two new players in the browser market that really kicked things off in terms of standards development and feature experimentation. Safari, Apple’s browser built on top of the open source WebKit rendering engine, was released for Mac OS X in 2003 and for Windows in 2007, and Google’s Chrome browser, also built on top of WebKit, was released in 2008. With strong attention to standards, Chrome’s emphasis on speed, and the Safari team’s introduction of the canvas element as well as several popular extensions to the library of CSS modules, these two browsers injected incredible life into the browser space and set off competitions on several fronts.

After Chrome and Safari debuted, instead of simply staring at Internet Explorer and hoping it would die a quick death, technologists the world over were now measuring browsers against one another on standards support, speed and other features.

An example of this can be seen in the race to pass the Acid3 test, a test that examines a series of markup, style and behavior-based standards in order to grade browsers on their standards support. From the moment the test was released, the competition was on to see which browser would score 100/100 first. Opera claimed the first perfect score with a development build just a few weeks after the test’s release, and Safari eventually became the first major production browser to pass the test, with version 4.0.

For people who had been through the standards dark ages, having a high-profile race toward standards support was a cork-popping moment.

Which isn’t to say this process is now perfect and the web is one big, happy, standards-drunk family; it isn’t. As the controversy that erupted in February 2012 over the use of vendor prefixes in CSS shows, there’s still competition in the browser space, and competition eventually means rough patches. Vendor prefixes are strings like -moz- or -webkit- prepended to CSS properties to indicate the presence of an experimental feature. They’re designed to let browser vendors experiment with features for a period of time before shipping a final, unprefixed version once the feature has stabilized. The controversy surrounds the -webkit- prefix and the unfortunate practice of web developers including only that variation when implementing newer features for the mobile space. Mozilla representative Tantek Çelik opened discussion with the CSS Working Group around Mozilla’s plan to alias certain -webkit- prefixes so that Mobile Firefox could contend with the perceived “WebKit monopoly” on the mobile web. The days of often excited, hyperbolic and occasionally nuanced discussion that followed were entertaining, thought-provoking and ample proof that there’s still room for the more fractious side of competition in this new set of browser wars.
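
For illustration, here’s what the prefixing practice looks like in a stylesheet; the selector and property values are invented, not drawn from the actual discussion. Shipping only the first declaration is exactly the habit the controversy was about, while including each vendor’s prefix plus the unprefixed form leaves the door open for other engines:

```css
/* Illustrative only: an experimental feature written with the prefixes of the era. */
.menu-panel {
  -webkit-transition: opacity 0.3s ease; /* Safari, Chrome, most mobile browsers */
  -moz-transition: opacity 0.3s ease;    /* Firefox */
  -o-transition: opacity 0.3s ease;      /* Opera */
  transition: opacity 0.3s ease;         /* the final, unprefixed standard */
}
```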

Still, the overall trend has been positive, with the combination of standards bodies and browser vendors pushing the web forward in ways it hadn’t seen since the dawn of the web.

The pressure from developers and users has been so great, in fact, that even Microsoft is going all in on standards. They’ve already released a very strong, standards-compliant browser, Internet Explorer 9, and have adopted a much more aggressive schedule for new browser releases, with version 10 due just a year after the release of version 9. For a company that went several years between versions 6 and 7, cutting the revision time down to a year is a major step in the right direction. They’re also strongly focused on providing an excellent, standards-compliant mobile browsing experience.

All this means there’s very real pressure on browser vendors, and the handset makers that use their technologies, to provide the best possible browsing experience on their devices. While the pace of advancement has been great on the desktop, the presence of legacy browsers like Internet Explorer 6 means the benefits of that accelerated innovation have been somewhat stunted there. Mobile devices are another matter entirely.

Why Is This Good for Web Developers?

All of these devices have good-to-great web browsers. The iOS decision to ship a full-powered version of Safari has been copied consistently by other device and OS vendors. Android, BlackBerry and others ship browsers based on the same WebKit rendering engine as Safari. Additionally, the latest version of Microsoft’s mobile Internet Explorer has strong standards support, as do standalone mobile efforts from desktop stalwarts Mozilla and Opera. While it’s not exactly easy to develop consistently across what feels like an infinite number of devices, the opportunity to work with these brand-new standards in production code is liberating. Where once it would have meant a wait of years (or decades in some cases) to really use some of these new standards, the wait can be practically nonexistent for certain projects. If, for example, you’re developing an internal sales application meant for use by a company’s sales force supplied with the latest iPad, you can just dive in and work at the bleeding edge. Chances are pretty good you’ve got a browser capable of handling a decent chunk of the web platform sitting in your pocket right now. If you’re working in the mobile space, the future of web development isn’t at some far-flung point in the future when older versions of Internet Explorer fade into obscurity.

The future is here, right now.
