Christian Heilmann

Progressive Enhancement is not about JavaScript availability.

Wednesday, February 18th, 2015 at 4:05 pm

I have been telling people for years that in order to create great web experiences and keep your sanity as a developer you should embrace Progressive Enhancement.

Escalators are great: when there is an issue, they become stairs and still work.

A lot of people do the same; others question the principle and advocate graceful degradation instead; and yet others don’t want anything to do with people who don’t have the newest browsers, fast connections and great hardware to run them on.

People have been constantly questioning the value of progressive enhancement. That’s good. Lately there have been some excellent thought pieces on this, for example Doug Avery’s “Progressive Enhancement Benefits”.

One thing that keeps cropping up as a knee-jerk reaction to the proposal of progressive enhancement is boiling it down to whether JavaScript is available or not. And that is not progressive enhancement. It is a part of it, but it forgets the basics.

Progressive enhancement is about building robust products and being paranoid about availability. It is about asking “if” a lot. That starts even before you think about your interface.

Having your data in a very portable format and having an API to retrieve parts of it instead of the whole set is a great idea. This allows you to build various interfaces for different use cases and form factors. It also allows you to offer this API to other people so they can come up with solutions and ideas you never thought of yourself. Or you might not want to, as offering an API means a lot of work and you might disappoint people, as Michael Mahemoff eloquently debated the other day.
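
As a sketch of that idea – the endpoint and field names here are made up for illustration – an interface only ever asks for the slice of data it needs:

var request = new XMLHttpRequest();
// hypothetical endpoint: ask for ten articles, not the whole set
request.open('GET', '/api/articles?offset=0&limit=10');
request.onload = function() {
  if (request.status === 200) {
    var data = JSON.parse(request.responseText);
    // each interface renders data.items in whatever way suits it
  }
};
request.send();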

In any case, this is one level of progressive enhancement. The data is there and it is in a logical structure. That’s a given, that works. Now you can go and enhance the experience and build something on top of it.

This very post is a great example: you might read it through my HTML, CSS and JavaScript interface. Or in an RSS reader. Or as a cached version in some content aggregator. Fine – you are welcome. I don’t make assumptions as to what your favourite way of reading this is. I enhanced progressively, as I publish full articles in my feed. I could publish only a teaser and make you come to my blog and look at my ads instead. I chose to make it easy for you, as I want you to read this.

Progressive enhancement in its basic form means not making assumptions, but starting with the most basic thing and checking at every step along the way whether we are still OK to proceed. This means you never leave anything broken behind. Every step along the way results in something usable – not necessarily enjoyable, or following a current “must have” format, but usable. It is checking the depth of a lake before jumping in head-first.
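
In JavaScript, this “asking if” can be as simple as feature-testing everything before relying on it – a minimal sketch:

// only enhance when the browser supports what we need
if ('querySelector' in document && 'addEventListener' in window) {
  var button = document.querySelector('.fancybutton');
  // the element might not be in this particular view, either
  if (button) {
    button.addEventListener('click', function(event) {
      // enhanced behaviour goes here
    }, false);
  }
}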

Markup progressive enhancement

Web technologies and standards have this concept at their very core. Take for example the img element in HTML:

<img src="threelayers.png" 
     alt="Three layers of separation 
          - HTML(structure), 
          CSS(presentation) 
          and JavaScript(behaviour)">

By adding an alt attribute with a sensible description, you know what this image is supposed to tell you. If it can be loaded and displayed, you get a beautiful experience. If not, browsers display this text instead. People who can’t see the image at all also get this text explanation. Search engines get something to index. Everybody wins.

<h1>My experience in camp</h1>

This is a heading. It is read out as such by assistive technology, and screen readers, for example, allow users to jump from heading to heading without having to listen to the text in between. By applying CSS, we can turn this into an image, we can rotate it, we can colour it. If the CSS cannot be loaded, we still get a heading, and the browser renders the text larger and bold, as it has a default style sheet for headings.
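
A small sketch of that – the colour value is arbitrary. Browsers that understand these declarations enhance the heading; those that don’t still render a large, bold, usable heading from their default style sheet:

h1 {
  color: #663399;
  /* browsers that don't understand transform skip this line
     and still show the coloured heading */
  transform: rotate(-2deg);
}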

Visual progressive enhancement

CSS has the same model. If the CSS parser encounters something it doesn’t understand, it skips to the next instruction. It doesn’t get stuck; it just moves on. That way, progressive enhancement has been part of CSS itself for ages, too:

.fancybutton {
  color: #fff;
  background: #333;
  background: linear-gradient(
    to bottom, #ccc 0%,#333 47%,
    #000 100%
  );
}

If the browser doesn’t understand linear gradients, the button shows white text on a dark grey background. Sadly, what you are more likely to see in the wild is this:

.fancybutton {
  color: #fff;
  background:  -webkit-gradient(
    linear, 
    left top, left bottom, 
    color-stop(0%,#ccc), 
    color-stop(47%,#333), 
    color-stop(100%,#000));
}

Which, if the browser doesn’t understand -webkit-gradient, results in a white button with white text – only because the developer was too lazy to first define a background colour the browser could fall back on. Instead, this code assumes that the user has a webkit browser. This is not progressive enhancement. This is breaking the web. So much so that other browsers had to consider supporting webkit-specific CSS, thus bloating browsers and making the web less standardised, as a browser-prefixed, experimental feature becomes a necessity.
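
The fix costs one line: declare a solid colour every browser understands first, then the experimental syntax, then the standard one. Later declarations win when the browser understands them, and everything else falls back gracefully:

.fancybutton {
  color: #fff;
  /* solid fallback for every browser */
  background: #333;
  /* prefixed, experimental syntax */
  background: -webkit-gradient(
    linear,
    left top, left bottom,
    color-stop(0%,#ccc),
    color-stop(47%,#333),
    color-stop(100%,#000));
  /* standardised syntax last */
  background: linear-gradient(
    to bottom, #ccc 0%,#333 47%,
    #000 100%
  );
}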

Progressive enhancement in redirection and interactivity

<a href="http://here.com/catalog">
See catalog
</a>

This is a link pointing to a data endpoint. It is keyboard accessible; I can click, tap or touch it; I can right-click and choose “save as”; I can bookmark it; I can drag it into an email and send it to a friend. When I hover over it with my mouse, the cursor changes, indicating that this is an interactive element.

That’s a lot of great stuff I get for free, and I know it works. If there is an issue with the endpoint, the browser tells me. It shows me when the resource takes too long to load; it shows me an error when it can’t be found. I can try again.

I can even use JavaScript to attach a handler to that link, override the default behaviour with preventDefault() and show the result in the current page without reloading it.

Should my JavaScript fail to execute for any of the reasons that can befall it (slow connection, resources blocked at firewall level, ad blockers, flaky connection), no biggie: the link stays clickable and the browser navigates to the page. On the backend, you probably want to check for a special header that you send when requesting the content with JavaScript, rather than via a full browser navigation, and return different views accordingly.
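
Put together, the enhancement could look like this minimal sketch. The #content container is made up for illustration, and X-Requested-With is merely a common convention, not a requirement:

var link = document.querySelector('a[href$="catalog"]');
var container = document.querySelector('#content'); // hypothetical container
if (link && container && window.XMLHttpRequest) {
  link.addEventListener('click', function(event) {
    event.preventDefault();
    var request = new XMLHttpRequest();
    request.open('GET', link.href);
    // the backend can check this header and return a partial view
    request.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    request.onload = function() {
      if (request.status === 200) {
        container.innerHTML = request.responseText;
      } else {
        // server error? fall back to a normal page load
        window.location.href = link.href;
      }
    };
    request.onerror = function() {
      // network failure? the link still works as a link
      window.location.href = link.href;
    };
    request.send();
  }, false);
}

Compare that to what you all too often see instead: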

<a href="javascript:void()">
Catalog</a> or 
<a href="#">Catalog</a> or 
<a onclick="catalog()">Catalog</a>

None of these offer that working model. When JavaScript doesn’t work, you get nothing. You still have a link that looks enticing – the cursor changed, you promised the user something – and you failed at delivering it. It is your fault, your mistake. And a simple one to avoid.

XHTML had to die, HTML5 took its place

When XHTML was the cool thing, the big outcry was that it would break the web. XHTML meant we delivered HTML as XML. This meant that any HTML syntax error – an unclosed tag, an unencoded ampersand, an unclosed quote – meant the end user got a cryptic error message instead of the thing they came for.

HTML5 parsers are forgiving. Errors happen silently and the browser tries to fix them for you. This was considered necessary to stop the web from breaking. It was considered bad form to punish our users for our mistakes.

If you don’t progressively enhance your solutions, you do the same thing. Any small error will result in an interface that is stuck. It is up to you to include error handling, timeout handling, user interaction like right-click -> open in new tab and many other things.
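
Timeout handling, for instance, is two more properties on the request from the sketch above – if the enhanced route takes too long, hand control back to the browser:

// building on the earlier sketch: give up after five seconds
// and let the browser do what it does best
request.timeout = 5000;
request.ontimeout = function() {
  window.location.href = link.href;
};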

This is what progressive enhancement protects us and our users from. Instead of creating a solution and hoping things work out, we create solutions that have a safety belt. Things cannot break horribly, because we planned for failure.

Why don’t we do that? Because it is more work in the first place. However, this is just intelligent design. You measure twice, and cut once. You plan for a door to be wide enough for a wheelchair and a person. You have a set of stairs to reach the next floor when the lift is broken. Or – even better – you have an escalator, that, when broken, just becomes a set of stairs.

Of course I want us to build beautiful, interactive and exciting experiences. However, a lot of the criticism of progressive enhancement doesn’t take into consideration that nothing stops you from doing that. You just have to think more about the journey to reach the final product. And that means more work for the developer. But it is very important work, and every time I did this, I ended up with a smaller, more robust and more beautiful end product.

By applying progressive enhancement to our product plan we deliver a lot of different products along the way – each working in a certain environment, without us having to specifically test for it, and yet all being the same code base. All by turning our assumptions into if statements. In the long run you save time and effort that way, as you do not have to maintain various products for different environments.

We continuously sacrifice robustness of our products for developer convenience. We’re not the ones using our products. It doesn’t make sense to save time and effort for us when the final product fails to deliver because of a single error.
