A Long Journey Reaches a Happy Conclusion: The Uncertain Web is Out In All Formats


It’s true. After a two-year odyssey, The Uncertain Web is finally out in every format that matters. The book has actually been out for a little over a month in multiple ebook formats from O’Reilly and Amazon, but owing to a slight production hiccup it hasn’t been regularly available in print until now (technically this Friday, but Amazon is on the case). The book is in full color (and looks great), so I wanted to wait until you had the option to get your hands on the print edition before I really started to spread the news.

That time has come. I’m spreading some news…

I’m really happy with the finished product. One of the reasons I’m proud of it is that it’s the first book I’ve written that I could really recommend to my peers: my jQuery book was a hybrid effort with another writer aimed at intermediate developers, and Beginning HTML and CSS is a book for beginners. The Uncertain Web, on the other hand, works for everyone from a project manager or designer (who could read the first two and final chapters with ease) all the way up to the most technical front end engineer, who might be surprised by some of the things that I kick around in my precious 224 pages. Not everyone is going to agree with everything I say in those pages, but there’s definitely something there that will make you stop and think about the way you approach the web, no matter who you are.

Here’s how O’Reilly describes the book:

What’s the best way to develop for a Web gone wild? That’s easy. Simply scrap the rules you’ve relied on all these years and embrace uncertainty as a core tenet of design. In this practical book, veteran developer Rob Larsen outlines the principles of what he calls The Uncertain Web, and shows you techniques necessary to successfully make the transition.

By combining web standards, progressive enhancement, an iterative approach to design and development, and a desire to question the status quo, your team can create sites and applications that will perform well in a wide range of present and future devices. This guide points the way.

Topics include:

  • Navigating thousands of browser/device/OS combinations
  • Focusing on optimal, not absolute solutions
  • Feature detection, Modernizr, and polyfills
  • RWD, mobile first, and progressive enhancement
  • UIs that work with multiple user input modes
  • Image optimization, SVG, and server-side options
  • The horribly complex world of web video
  • The Web we want to see in the future

The book is also solely my idea, which isn’t the case with any of my other books. Typically, publishers have a book they want written and they search for an author. This time I had some ideas I wanted to share, and O’Reilly stepped up to let me share them (thanks to Simon St. Laurent). That, in particular, has been rewarding for me.


Have I mentioned I have a back cover blurb? I do. It’s provided by Jeremy Keith and it makes me smile.

“A refreshingly honest look at the chaotic, wonderful world of web development, with handy, practical advice for making future-friendly, backward-compatible websites.”


All in all I’m a happy dude.

Check Out My Upcoming Webcast, Navigating the Uncertain Web

Hey! I’m doing a webcast on November 11, 2014 at 10 AM Pacific.* It’s called Navigating the Uncertain Web and you should check it out since it will be ridiculously great and it’s free.

Here’s what I had to say about it.

The modern web is a wild place. The proliferation of new browsers, new devices, new input models and new additions to the web platform have combined to create an environment full of uncertainty. It’s never been more difficult to create widely compatible sites. This webcast will show you how to approach compatibility in a nimble way and will help you to solve problems confidently when you’re faced with the web’s uncertainty.

If you’re interested in my book, The Uncertain Web, then this webcast will be right up your alley.

Speaking of the book, I’m finally going to be done with the first draft this week and will be working on getting it ready for production throughout September. It’s been an epic journey getting to this point so I’m ecstatic to see the finish line approaching.


* 6pm – London | 1pm – New York | Wed, Nov 12th at 5am – Sydney | Wed, Nov 12th at 3am – Tokyo | Wed, Nov 12th at 2am – Beijing | 11:30pm – Mumbai

My New Book, The Uncertain Web, is Available in Early Release


The raw stuff, available for your reading pleasure. I’m really excited to see what people think.

The best way to approach the web today is to forgo hard and fast rules and design for uncertainty. Embracing uncertainty as a core tenet of web development, and scrapping the rules we’ve relied on in the past few years, is the best bet for creating future-proof web solutions. By combining web standards, progressive enhancement, a full technical toolbox, an iterative, hybrid approach to design and development, and a desire to question the status quo and perceived wisdom, teams can create dynamic web sites and applications that should perform admirably on future devices with unknown capabilities. By focusing on optimal solutions with intelligent fallbacks, and forgoing the desire for absolute solutions, design and development can work together to create the web of today and tomorrow.

Introducing My New Book: The Uncertain Web

Hey everybody, I’ve got a new book coming out. It’s called The Uncertain Web and it’s being published by O’Reilly. Right now we’re targeting a November release.

Just in time for Christmas. Buy a dozen and hand them out like candy canes.

If that seems like too long to wait, never fear. I’m about 50% finished and once that chunk has been hammered on by a hand-picked team of geniuses, it will be available as an Early Release.

So, what’s it all about?

Here’s my original pitch:


The Uncertain Web

The best way to approach the web today is to forgo hard and fast rules and design for uncertainty. Embracing uncertainty as a core tenet of web development, and scrapping the rules we’ve relied on in the past few years, is the best bet for creating future-proof web solutions.

In the early 2000s, there was basically one browser (Internet Explorer 6), one platform (Windows XP) and one screen resolution (1024 by 768) that mattered. With that setup you could design, develop and test against the vast majority of web users with one desktop computer. The biggest question on the horizon, it seemed, was when it would be viable to design for 1280-pixel screens.

This limited field of play meant that there was an expectation that sites and applications would look the same everywhere for everyone. Best practices were honed and codified into hard and fast rules that drove design and development. Major choices, such as the size of the basic grid to design for, were no longer choices. You started with a static, 960-pixel grid and sliced and diced it as needed.

Today, things couldn’t be more different. With the launch of the iPhone and the iPad, the rise of Android, and the growth of not just one but two real contenders to Internet Explorer’s position as the dominant web browser (Firefox and Chrome), developers and designers have an ocean of variables to navigate. Every question around a site design is now pregnant with options.

Initially, developers and designers tried to navigate this new reality by creating new rules.

The problem was, the goalposts kept moving. As soon as a new hard and fast rule was created, some new wrinkle would render it impotent.

People designed and built “iPhone” sites, assuming that Apple’s dominance in the smartphone market was somehow a permanent condition. They tested for touch capabilities and assumed that touch users would never have a mouse.

Android’s huge growth over the past few years, and the presence of devices like Chromebooks and Windows 8 laptops with both mouse and touch capabilities, have proved that those new rules have a short shelf life.

Even patterns like Responsive Web Design, which some saw as a single solution for design and development moving forward, fall apart when applied against complicated application patterns and the vagaries of bandwidth and mobile performance.

By combining web standards, progressive enhancement, an iterative approach to design and development, and a desire to question the status quo and perceived wisdom, teams can create dynamic web sites and applications that should perform admirably on future devices with unknown capabilities. By focusing on optimal solutions with intelligent fallbacks, and forgoing the desire for absolute solutions, design and development can work together to create the web of today and tomorrow.

This book will outline both the concept and the underlying principles of the Uncertain Web and introduce some of the techniques necessary to make a successful transition.


So, that’s the thing.

I’m really excited about this one. I hope you enjoy it.

You’re So Smart You Turned JavaScript into xHTML

Have you ever seen an error message like this one? XML parsers are designed to call it quits as soon as they encounter an error. XML grammar is strict, so the parsers need to be strict too. This particular error is from an & in a URL. Ampersands in URLs are a super common occurrence, but if you throw one into a plain XML document without inserting it as a character entity (&amp;) or a numeric character reference (&#38;), you get an error.

Sweet.

[Screenshot: an XML parsing error]
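
To see just how unforgiving this is, here’s a quick sketch using DOMParser (a standard browser API) to parse the same markup with and without the escaped ampersand. One bad character and you don’t get a document at all; you get a parser error.

    // A minimal sketch of XML's all-or-nothing parsing, using the
    // browser's DOMParser API.
    var parser = new DOMParser();

    // A raw & in the URL makes the whole document ill-formed.
    var bad = parser.parseFromString(
        '<a href="http://example.com/?a=1&b=2">link</a>',
        'application/xml'
    );
    // Instead of a usable document you get a parsererror document
    // (exactly where the error element lands varies by browser).
    console.log(bad.getElementsByTagName('parsererror').length); // 1

    // Escape it as &amp; and everything parses fine.
    var good = parser.parseFromString(
        '<a href="http://example.com/?a=1&amp;b=2">link</a>',
        'application/xml'
    );
    console.log(good.documentElement.nodeName); // "a"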

To be honest, I’m not a huge XML hater. I’m a technical guy, so ensuring that some of the documents I create adhere to a technical standard doesn’t kill me.

I mean, Ant.

That said, I don’t like the way XML’s strict error handling inserted itself into the web with xHTML. A good part of the reason the web blossomed the way it did was that the language underpinning it, HTML, was so forgiving that people could get by without knowing what they were doing. My initial HTML was so bad, I’m surprised I ever did anything.

And yet, I did.

And I’ve been building sites ever since. 17 years and counting! All because I could create a tag-soup mess that kinda/sorta did what I wanted it to do, and the browsers at the time did their best to make sense of it. That was great.

XML/xHTML took that possibility away. Which is why, while many people (including myself) wrote xHTML documents, we didn’t actually serve them as XML. The danger of a catastrophic error was too great, so all but the hardiest developers basically treated xHTML like HTML 4.0, just with XML-style syntax.
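
For the record, the difference is just a Content-Type header. Here’s a hypothetical Node/Express sketch (the routes are made up for illustration) serving the same markup both ways: as text/html it gets the browser’s forgiving HTML parser, as application/xhtml+xml it gets draconian XML error handling.

    // A hypothetical Express app: identical markup, two Content-Types,
    // two completely different error-handling models in the browser.
    var express = require('express');
    var app = express();

    var page = '<!DOCTYPE html>' +
        '<html xmlns="http://www.w3.org/1999/xhtml">' +
        '<head><title>Same markup</title></head>' +
        '<body><p>Hello</p></body></html>';

    app.get('/forgiving', function (req, res) {
        // Forgiving HTML parsing; errors are quietly recovered from.
        res.type('text/html').send(page);
    });

    app.get('/draconian', function (req, res) {
        // True XML parsing; one well-formedness error and the user
        // sees a parser error instead of the page.
        res.type('application/xhtml+xml').send(page);
    });

    app.listen(3000);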

Which is kinda’ goofy.


Remember when they said “Users should not be exposed to authoring errors”?

The people who went on to found the WHATWG (the group responsible for the current HTML specification) recognized that this was a serious flaw and made specific reference to this issue with xHTML in their proposal for the future of web application development. It was one of their seven guiding design principles. In case you’re unaware of the history, that paper is the foundation of the modern HTML specification. One piece of that foundation is as follows:

Users should not be exposed to authoring errors

Specifications must specify exact error recovery behaviour for each possible error scenario. Error handling should for the most part be defined in terms of graceful error recovery (as in CSS), rather than obvious and catastrophic failure (as in XML).

These are concepts embedded in the very heart of modern web development.

  • Avoid catastrophic errors.
  • Allow for graceful error recovery.
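
Applied to application code, those two bullets might look something like this sketch (the analytics object is hypothetical): contain the failure, then carry on with something useful.

    // A minimal sketch: a flaky third-party dependency is contained
    // rather than being allowed to take the whole page down.
    function initAnalytics() {
        // The third-party global may never have loaded at all.
        if (!window.hypotheticalAnalytics) {
            return; // No catastrophic error; the page works without it.
        }
        try {
            window.hypotheticalAnalytics.init();
        } catch (e) {
            // Graceful recovery: log it and move on. Analytics is an
            // enhancement, not the product.
            if (window.console && console.error) {
                console.error('Analytics failed; carrying on.', e);
            }
        }
    }

    initAnalytics();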

Why, then, do so many people create web applications that rely so heavily on JavaScript that they fail catastrophically? The web, at its best, should be resilient. Nothing warms my heart more than a 20-year-old page that’s still kicking, a 10-year-old link that redirects properly to a completely new domain or platform, or a modern site that can serve something useful to a 15-year-old browser. To me, that’s the web at its best.

The opposite end of that spectrum, from my perspective at least, is occupied by sites that do nothing at all without a relatively modern (or, in some places, completely modern) browser with JavaScript turned on and all dependencies (many of which live on third-party domains) loaded. While you can make an argument that JavaScript should be a hard requirement for certain kinds of applications (I’ll give you games written against the Canvas 2D API, for example), I don’t think it’s the way of the web to have your site fail and show a blank screen because some third-party dependency doesn’t load, JavaScript is turned off, or your developer left a trailing comma in a JavaScript object and didn’t test in Internet Explorer.
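
That last one isn’t hypothetical, by the way. This sketch (the values are made up) shows the classic trailing-comma failure: modern engines don’t care, but old versions of Internet Explorer threw a syntax error, which killed the entire script file and everything that depended on it.

    // Fine in modern browsers; a fatal syntax error in old IE.
    var config = {
        endpoint: '/api/data',
        retries: 3, // <- this trailing comma took down the whole script
    };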

I understand that creating a completely static version of your site or application can be impractical. Granted. Still, showing nothing if there’s a problem is just terrible. And yet, people are happy to do it.

Client-side JavaScript all the way

Nicholas Zakas has talked about this a little bit in his presentation, Enough With the JavaScript Already. We’ve moved so far toward JavaScript in some circles that people are starting to use tools like Backbone even in use cases that don’t require complicated interactions, like a static content site. Whether it’s a case of being obsessed with the new-shiny or simply not knowing any better, people are parsing and rendering entire pages in the client, even when there’s no real dynamic content. Beyond the fact that that’s going to be slower (why would you serve 1MB of JavaScript/JSON to render 100KB of HTML when the server can just serve the parsed/rendered 100KB to begin with?), you’re just setting yourself up for catastrophic failure if, for example, your static content server goes down and all that fancy Backbone code disappears.
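
To make the failure mode concrete, here’s a sketch of that anti-pattern (the endpoint and element are made up): static content that only exists if a script and a web service both cooperate. If either fails, the user gets a blank region instead of articles the server could have rendered once and cached.

    // The anti-pattern: client-side rendering of static content.
    // A hypothetical endpoint feeds a hypothetical #content element.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/articles.json');
    xhr.onload = function () {
        var articles = JSON.parse(xhr.responseText);
        document.getElementById('content').innerHTML = articles
            .map(function (article) {
                return '<article><h2>' + article.title + '</h2>' +
                    '<p>' + article.body + '</p></article>';
            })
            .join('');
    };
    // No onerror handler, no fallback: if anything above fails,
    // the page stays empty.
    xhr.send();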

What’s wrong with progressive enhancement?

Nothing, that’s what. Just because we can do everything in JavaScript doesn’t mean we should. In fact, bringing more complexity and more dependencies into the front end, perversely, makes the danger of something catastrophic happening even more likely. Third-party scripts, web service calls, browser bugs, users with JavaScript disabled, and your own application errors can all cause catastrophic failures if you’re not careful. That’s why, in my mind at least, the principles of progressive enhancement are more important now than they’ve ever been. Building solutions that are robust and resistant to changes in the environment (including many that will be forever beyond your control) will ensure that you can always reach your audience.
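
Here’s what that looks like in its simplest form, with hypothetical markup and a hypothetical lightbox enhancement: the content is real server-rendered HTML that works everywhere, and JavaScript only layers behavior on top after checking that it can.

    // Progressive enhancement in miniature. The server sends real
    // links, e.g.:
    //
    //   <ul id="gallery">
    //     <li><a href="/photos/1.jpg">Photo 1</a></li>
    //   </ul>
    //
    // With JavaScript off, broken, or unsupported, the links still work.
    function openLightbox(url) {
        // Hypothetical fancy enhanced view, stubbed for this sketch.
    }

    var gallery = document.getElementById('gallery');

    // Feature-detect before enhancing; bail quietly if anything's missing.
    if (gallery && gallery.addEventListener) {
        gallery.addEventListener('click', function (e) {
            var link = e.target;
            if (link.tagName === 'A') {
                e.preventDefault();
                openLightbox(link.href);
            }
        }, false);
    }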

Your users don’t care how fancy your stack is. They care that they’re getting their content, that they’re getting it as fast as possible, and that it appears consistently from visit to visit. If your stack is getting in the way of any of those goals, you need to revisit the way you’ve designed your web solutions.