Christian Heilmann


Stumbling on the escalator

Thursday, February 16th, 2012

I am always amazed by the lack of support for progressive enhancement on the web. Whenever you mention it, you face a lot of “yeah, but…” and you find yourself having to defend something that should be ingrained in the DNA of anyone who works on the web.

Escalator

When explaining progressive enhancement in the past, Aaron Gustafson and I quoted the American stand-up comedian Mitch Hedberg and his escalator insight:

An escalator can never break – it can only become stairs. You would never see an “Escalator Temporarily Out Of Order” sign, just “Escalator Temporarily Stairs. Sorry for the convenience. We apologize for the fact that you can still get up there.”

This is really what it is about. Our technical solutions should be like escalators – they still work when the technology fails or there is a power outage (if you see CSS animations, transforms, transitions and JavaScript as the power) – they just become less convenient to use. Unlike real-world escalators, we never even have to block ours off to repair them.

We could even learn from real-world escalators that shut down when nobody has used them for a while and start up again once people step on them. On the web we call this script loading, or the conditional application of functionality. Why load a lot of images up front when they cannot be seen because they are far away from the viewport?

An interesting thing you can see in the real world is that when an escalator breaks down and becomes stairs, people stumble when they step onto it. Our bodies have been conditioned to expect movement, and our motor memory does a “HUH?” when there isn’t any.

This happens on the web as well. People who have never been without a fast connection, or without a new and shiny computer or phone running the latest browsers, have a hard time thinking about these situations – it just feels weird to them.

Travelator

Another interesting case is the horizontal moving walkways you find in airports. These are meant to accelerate your walking, not replace it. Still, you find people standing on them, complaining about their speed.

On the web these are the people who constantly complain that new technology is cool and all, but that they’d never be able to use it in their current client or development environment. Well, you don’t have to. You can walk between the walkways and still reach the other side – it just takes a bit longer.

So the next time someone praises flexible development and design practices and you have the knee-jerk reaction to condemn them for not using the newest and coolest (“everybody has an xyz phone and browser abc”), or you just don’t see the point in starting from HTML and reaching your goal by re-using what you structured and explained in HTML (“GMail and Facebook don’t do it either”), think about the escalator and how handy it is in the real world.

Think about it when you are tired (accessibility), when you carry a lot of luggage (performance), or when you just want to have a quick chat whilst being transported up without getting out of breath. Your own body has different needs at different times. Progressively enhancing our products allows us to cater for lots of different needs and environments. Specialising and optimising for one will give a much more impressive result, but a lift, for example, is pointless when it doesn’t work – no matter how shiny and impressive it looks.

Our job is to make sure people can do the things they went online for – get from where they start to their desired goal. This could be convenient and fast, or it could require a bit of work. What it must not be is the promise of a faster and more convenient way that fails as soon as people try to take it.

You can comment on Google Plus if you want to.

Seven things I want to see on the web in 2011

Sunday, January 2nd, 2011

As I’ve been particularly nice all year, I think I deserve to be allowed to have a wish list of things that should change on the web in 2011. So here is what I want to see:

  1. HTML5 everywhere
  2. Death of the password (antipattern)
  3. Backup APIs
  4. Focus on Security
  5. Governments embracing the web instead of fighting it
  6. Cloud based apps with sharing facilities
  7. More hardware-independent interface innovation

1. HTML5 everywhere

Those who know about my new job have heard that I am putting my full energy this year into making HTML5 the replacement for the hacky efforts we currently go through to write web applications. I want to see 2011 as the year HTML5 turns mainstream:

  • I want amazingly beautiful and useful software to be built and put in front of the luddites of the web who force their users to have IE6 and not support any other browser.
  • I want to use native form controls like date pickers in travel web sites and finance sites.
  • I want every video on the web to be open and I want to be able to save it with a link and manipulate it without having to re-encode it.
  • I want collaborative software to use web sockets (once the protocol has been fixed) and I want to see web workers used to avoid interfaces grinding to a halt when some calculation needs to be done.
  • I want online converters that use the cloud to make video conversion into open formats dead easy – I also want to have a subtitling format for that.
  • I want interfaces to natively be progressively enhanced by using the same widgets server and client side.
  • I want systems to use Geolocation and Local Storage to be responsive and clever in getting and storing my information rather than having to enter the same data over and over again.

And I want a donkey and a happy puppy to play with – but that’s a different story.

2. Death of the password (antipattern)

I hate memorising passwords. Everybody does. The recent hack of Gawker Media, for example, showed that people use amazingly clever passwords like “1234567” or “lifehacker” (on lifehacker.com, no less) instead of choosing one that is safer but harder to remember.

As I don’t use the same password everywhere and I don’t like staying logged in at sites I don’t frequently use, a huge chunk of my online life right now is resetting passwords. Not fun – but neither is typing in very complex passwords on my mobile.

There are alternatives. Using Facebook, Twitter, Google or Yahoo and OAuth you can allow people to sign in to your site without having to remember another password or do the dance of going from your site to email and back. Using OpenID you can allow people to use their homepage as their login. These systems also have the benefit that you can tap into the social identity of the users on those systems rather than asking for the same data over and over again. I would love more people to use them in 2011 rather than slavishly sticking to the old idea of having to collect user data on your own system. This is the web – use it.

3. Backup APIs

Yahoo’s recent involuntary announcement that del.icio.us is under the hammer (or halfway in the blender) makes it obvious that nothing is safe to use in the long term (I will write a longer article about this, as the Yahoo bashers also live in a dream world, IMHO).

Therefore I would love to see startups and API providers always offer a backup API in addition to the normal read/write/update APIs. If I don’t like a system any longer, it should be easy for me to take with me all the data I spent a lot of time and effort on over the years. Dopplr was a great example of doing this right. In the current rush for more and more realtime web apps we forget that backups are important and simply the decent thing to offer our users.

4. Focus on Security

Yeah, I get it – we need to innovate. We need to innovate hard, because only the ones shipping cool new features every week are the ones who win. Rah rah rah.

I disagree, though, that innovation means sacrificing security – and that is what happens all over the place now. Hell, I’ve even heard speakers at startup conferences say that security can come later and that privacy is not really an issue. That is bullshit, and anyone with half a technical mind should know it.

The web is a mess right now, and it doesn’t have to be. Storing data unencrypted, transporting identity in clear text over HTTP, XSS vulnerabilities, backdoors and SQL injection are not misdemeanours – they are just sloppy development and will bite you in the arse sooner or later. Sure, Facebook can pay for a lawsuit from people who got their identities stolen. Can your startup?

I dread the day when stealing online identities becomes as profitable as credit card fraud and the organised crime institutions of this world start targeting it. If we want the web to be awesome, we have to make it secure. Otherwise other people will try to solve the security issues for us – and boy, are they clueless – which brings me to the next wish.

5. Governments embracing the web instead of fighting it

Wikileaks was a very necessary incident this year. There is information out there that is kept from us. True, a lot of times knowledge can be dangerous and some information should be kept away from people who don’t know how to read or handle it properly. The same piece of information can be displayed in one way or another to cause one emotion or another – this is what TV is for.

However, if there is one thing that Wikileaks showed, it is that the people who should have all the knowledge are not necessarily the governments. They’ve proven before that a lot of classified information gets lost by leaving laptops and printouts on trains.

One thing that is mentioned less is that Wikileaks showed the web to be an incredibly efficient medium for distributing information and getting people to defend your cause. LOIC and the attacks on Visa and Mastercard showed that you can leverage the power of every user out there and make their computer part of a cause – even without them knowing much about computers. Right now only the baddies do that – with zombie botnets and viruses.

How about a government programme that allows every citizen to download some data and crunch through it for the state? How about making the job of creating a more efficient state the job of every citizen? If you censor people, you have them against you. If you are open in your communication and share the challenges and ask for help you make people your allies.

Instead of seeing this obvious opportunity, governments right now are afraid of the web and try to control it – in essence turning the read+write medium that is the web into a lame consumption channel much like TV.

Recently the UK proposed removing pornography from the internet by default, so that you have to contact your ISP beforehand if you want to consume it. I am hard pushed to find a lamer excuse for monitoring people’s online behaviour. I am also hard pushed to even fathom how that would work. Are Rubens’ paintings of huge naked ladies pornography? What is that file called qweaasdwewweq.part2.rar on Rapidshare or Hotfile? Sure, pornography sites that rated themselves with a meta tag are simple to block, but surely if you want to remove porn from the web you also have to block Blogger and any other simple publication platform people use to store naughty pictures or links to rar-ed full movies. Or maybe that is actually the end goal?

6. Cloud based apps with sharing facilities

I have not gotten my Google laptop yet (I asked for one though, let’s see if that works out) but I love the idea of not having to install anything on my computer any longer. When I joined Mozilla I was amazed that the company laptop came completely empty (I was also amazed at just how much information Apple wants from you when you install OSX). The reason is that everything the company does is online.

We use Zimbra as our mail, BaseCamp, Google Docs, Etherpad and some others. This rocks, and it would rock even more if cloud-based systems talked to each other more:

  • Instead of sending a URL to someone to open a Google Doc, why not have it as a virtual attachment that allows me to save it as a PDF for on-the-go reading directly from a mail client?
  • Why can’t I just upload a movie to S3 and it automatically creates embeddable WebM versions for me?
  • There are some very cool image editing tools on the web now, but where are the video editors? (Yes, there was Jumpcut, but it got the Old Yeller treatment from Yahoo.)
  • We need some cool SVG editors online, which could convert other path-based formats on the go.
  • We need better editors for HTML5 content and put them in the cloud rather than install them locally.
  • We need a good web-standards-based slide system which allows us to sync video and audio easily.
  • We need a web-based version control system that handles textual and binary data and does not require you to know your way around a CLI. The HTML5 File API could be used for that.
  • Why are all expense and travel systems in the style of the 90s? Why can’t I just link my online bank transaction PDF to an invoice system and tick the ones I spent for the company to get the money back?
  • Why don’t systems use the new technologies we have right now to allow for storing data locally and offline?

In other words, we use web-based systems but we forget that they could talk to each other, and that we have much more to play with in browsers than we had in the 90s. Many a time I had to create a PDF and attach it to an email so someone in the expenses department could copy and paste from it into another system. That is just wasted time and duplicated effort. Once things are digital they can be re-used.

A lot of cool online systems are in place already, now it would be great to build some collaboration frameworks that allow me to sync them and connect them. There are some very cool things in the making right now – let’s hope this year will be the one where they become industrial strength and get a lot of use.

7. More hardware-independent interface innovation

2010 was the year of hardware innovation. Apple’s iPad and iPhone and the Android systems leapfrogged the old grey boxes, and netbooks and sub-notebooks made us more mobile than ever before. Small screens and touch interfaces bring up new and exciting challenges and mean that we should question some of the “standards” we use right now (the best example being lightboxes, which are simply awful to use on a mobile).

However, instead of taking these learnings and simplifying all interfaces, we build hardware-specific solutions. A lot of the CSS innovation done by Apple is very much targeted at iPad solutions, and it will take other browsers some time to take it on – especially when nobody asks browser vendors to do so. When the iPad came out, people asked me whether I would now change all my sites to work for it. No, I won’t. I will tweak them to work with it alongside all the other systems out there, but I fail to see why I would want to leave out the hundreds of millions of web users who do not have an iPad.

So instead of tweaking our designs and interfaces to cater for one single solution, I would love to see original patterns being enhanced and changed according to new use cases. Hardware is fleeting and ever-changing. Patterns stay.

Conclusion

That’s it! I have a few more requests (like free wireless connectivity in public spaces instead of charging 10 Euro for half an hour like this friggin airport does) but for the web, this would make an awesome 2011. Let’s get to it!

Diving into the web of data – the YQL talk at boagworld live 200

Friday, February 12th, 2010

I just finished a quick podcast demo for the 200th podcast of Boagworld, streamed live on ustream. I thought I had an hour, but it turned out to be half an hour. My topic was YQL and I wanted to actually do something like what is shown in the video that was just released (click through to watch or download the video in English or German):

YQL and YUI video

The story of this is:

  • We spend a lot of time thinking about building the interface, using the right semantic markup and trying to make browsers work (or expecting certain browsers in certain settings, as we gave up on that idea).
  • What we should be concentrating on more is the data that drives our web sites – it is boring to have to copy and paste texts from Word documents or get a CMS to generate something that is almost but not quite like useful HTML.
  • You could say that once published as HTML the data is available, but for starters HTML4 is a bad data format to store information in. Furthermore, too many pieces of software can access web sites, and the cleanest HTML you release somewhere can be messed up by somebody else with a CMS or any other means of access further down the line. Sadly enough, most content editing software still produces HTML that is tied to its presentation rather than structuring and defining the content.
  • Having worked on the datasets provided by the UK government at data.gov.uk, I’ve realised that as a market we are nowhere near being able to provide re-usable and easily convertible data to each other. XML was meant to be that, but got lost in the complexities of dictionaries, taxonomies and other things you can spend days on defining for English content, only to have to re-think them anyway once you go multilingual. Most content – let’s face it – is maintained in Excel sheets and Word documents – which is OK, because people should not be forced to use a system they don’t like.
  • If you really think about the web as a platform and as a media then we should have simple ways to provide textual data or binary information (for videos and images) instead of getting bogged down in how to please the current generation of browsers.
  • If you really want to be accessible to any web user – and that is anyone who can get content over HTTP – you should think about making your content available as an API (there is a small sketch of that after this list). This allows people to build the interfaces necessary for edge cases you didn’t even know existed.
  • YQL is a simple way to use the web as a database and mix and match data, and also a very simple way to provide data in easy-to-digest formats – give it a go.
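To illustrate what I mean by offering your content as an API (this is just a sketch – the data and field names here are made up for the example), the same information that drives your HTML templates can be offered as JSON with a few lines of PHP:

<?php
// A minimal sketch, not production code: expose the data that drives
// your pages as JSON so others can build their own interfaces on it.
// The $articles array and its fields are purely illustrative.
header('content-type: application/json');
$articles = array(
  array('title' => 'Stumbling on the escalator', 'url' => '/escalator'),
  array('title' => 'cURL - your "view source" of the web', 'url' => '/curl')
);
echo json_encode(array('articles' => $articles));
?>

Anyone can then build their own interface on top of that data – a mobile view, a mashup, a screen-reader-friendly version – without you having to predict every use case.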

In any case, after the 2 o’clock podcast – where most of my questions were eloquently answered by Jeremy Keith and the Skype connection died in the last 5 minutes – I spent the afternoon putting together some demos for this YQL talk, as YQL is really easiest to explain with examples, and I wanted something for the people on flaky connections to play with. So if you go to:

You can see what I talked about during the podcast. People in the chat asked if this will be open source. Yes, it is – the passcode is pressing Cmd+U in Firefox, or whatever other way you choose to “view source” in your browser of choice.

Normally I would not do any of these calls purely in JavaScript (as explained in the video), but this was the quickest solution, and it gives you an insight into just how easy it is to use information you have requested, filtered and converted with YQL.
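For completeness, here is a rough sketch of how I would do the same thing on the server with PHP and cURL – assuming the public YQL web service endpoint as it was at the time, and an example feed URL rather than a real one:

<?php
// A sketch only: ask YQL for the titles of a feed and print them as a list.
// The endpoint is the public YQL web service; the feed URL is just an example.
$yql = 'select title from rss where url="http://example.com/feed"';
$url = 'http://query.yahooapis.com/v1/public/yql?q='.
       urlencode($yql).'&format=json';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
curl_close($ch);
$data = json_decode($output);
echo '<ul>';
foreach($data->query->results->item as $item){
  echo '<li>'.$item->title.'</li>';
}
echo '</ul>';
?>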

cURL – your “view source” of the web

Friday, December 18th, 2009

What follows here is a quick introduction to the magic of cURL. This was inspired by a comment from Bruce Lawson on my 24 ways article:

Seems very cool and will help me with a small Xmas project. Unfortunately, you lost me at “Do the curl call”. Care to explain what’s happening there?

What is cURL?

OK, here goes. cURL is your “view source” tool for the web. In essence it is a program that allows you to make HTTP requests from the command line, or from implementations in different languages.

The cURL homepage has all the information about it but here is where it gets interesting.

If you are on a Mac or on Linux, you are in luck – you already have cURL. If you are operating-system challenged, you can download cURL in different packages.

On the aforementioned systems you can simply go to the terminal and do your first cURL thing: load a web site and see its source. To do this, simply enter

curl "http://icant.co.uk"

And hit enter – you will get the source of icant.co.uk (that is the rendered source, like a browser would get it – not the PHP source code of course):

showing with curl

If you want the code in a file you can add a > filename.html at the end:

curl "http://icant.co.uk" > myicantcouk.html

Downloading with curl

(The speed will vary of course – this is the Yahoo UK pipe :))

That is basically what cURL does – it allows you to do any HTTP request from the command line. This includes simple things like loading a document, but it also allows for clever stuff like submitting forms, setting cookies, authenticating over HTTP, uploading files, faking the referer and user agent, setting the content type and following redirects. In short, anything you can do with a browser.

I could explain all of that here, but this is tedious as it is well explained (if not nicely presented) on the cURL homepage.

How is that useful for me?

Now, where this becomes really cool is when you use it inside another language that you use to build web sites. PHP is my weapon of choice for a few reasons:

  • It is easy to learn for anybody who knows HTML and JavaScript
  • It comes with almost every web hosting package

The latter is also where the problem is. As a lot of people write terribly shoddy PHP, the web is full of insecure web sites. This is why a lot of hosters disallow some of the useful things PHP comes with. For example, you can load and display a file from the web with readfile():

<?php
  readfile('http://project64.c64.org/misc/assembler.txt');
?>

Actually, as this is a text file, it needs the right header:

<?php
  header('content-type: text/plain');
  readfile('http://project64.c64.org/misc/assembler.txt');
?>

You will find, however, that a lot of hosters will not allow you to read files from other servers with readfile(), fopen() or include(). Mine, for example:

readfile not allowed

And this is where cURL comes in:

<?php
header('content-type:text/plain');
// define the URL to load
$url = 'http://project64.c64.org/misc/assembler.txt';
// start cURL
$ch = curl_init(); 
// tell cURL what the URL is
curl_setopt($ch, CURLOPT_URL, $url); 
// tell cURL that you want the data back from that URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
// run cURL
$output = curl_exec($ch); 
// end the cURL call (this also cleans up memory so it is 
// important)
curl_close($ch);
// display the output
echo $output;
?>

As you can see, the options are where things get interesting, and the ones you can set are legion.
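To give you a taste (the values here are only examples, not a recommendation), a few of the options cover the clever stuff mentioned earlier – following redirects, faking the user agent and referer, and sending cookies:

<?php
// A sketch of a few more options - the values are made up for illustration.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://icant.co.uk');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);                // follow redirects
curl_setopt($ch, CURLOPT_USERAGENT, 'MyLittleBrowser/1.0'); // fake the user agent
curl_setopt($ch, CURLOPT_REFERER, 'http://example.com/');   // fake the referer
curl_setopt($ch, CURLOPT_COOKIE, 'name=value');             // send a cookie
$output = curl_exec($ch);
curl_close($ch);
echo $output;
?>

The full list is documented in the PHP manual under curl_setopt().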

So, instead of just including or loading a file, you can now alter the output in any way you want. Say, for example, you want to get some Twitter stuff without using the API. This will get the profile badge from my Twitter homepage:

<?php
$url = 'http://twitter.com/codepo8';
$ch = curl_init(); 
curl_setopt($ch, CURLOPT_URL, $url); 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
$output = curl_exec($ch); 
curl_close($ch);
$output = preg_replace('/.*(<div id="profile"[^>]+>)/msi','$1',$output);
$output = preg_replace('/<hr.>.*/msi','',$output);
echo $output;
?>

Notice that Twitter’s HTML uses a table for the stats, where a list would have done the trick. Let’s rectify that:

<?php
$url = 'http://twitter.com/codepo8';
$ch = curl_init(); 
curl_setopt($ch, CURLOPT_URL, $url); 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
$output = curl_exec($ch); 
curl_close($ch);
$output = preg_replace('/.*(<div id="profile"[^>]+>)/msi','$1',$output);
$output = preg_replace('/<hr.>.*/msi','',$output);
// strip the table tags and turn rows and cells into list markup
$output = preg_replace('/<\/?table[^>]*>/','',$output);
$output = preg_replace('/<(\/?)tr>/','<$1ul>',$output);
$output = preg_replace('/<(\/?)td>/','<$1li>',$output);
echo $output;
?>

Scraping stuff off the web is but one thing you can do with cURL. Most of the time what you will be doing is calling web services.

Say you want to search the web for donkeys; you can do that with Yahoo BOSS:

<?php
$search = 'donkeys';
$appid = 'appid=TX6b4XHV34EnPXW0sYEr51hP1pn5O8KAGs'.
         '.LQSXer1Z7RmmVrZouz5SvyXkWsVk-';
$url = 'http://boss.yahooapis.com/ysearch/web/v1/'.
       $search.'?format=xml&'.$appid;
$ch = curl_init(); 
curl_setopt($ch, CURLOPT_URL, $url); 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 
$output = curl_exec($ch); 
curl_close($ch);
$data = simplexml_load_string($output);
foreach($data->resultset_web->result as $r){
  echo "<h3><a href=\"{$r->clickurl}\">{$r->title}</a></h3>";
  echo "<p>{$r->abstract} <span>({$r->url})</span></p>";
}
?>

You can also do that for APIs that need POST or other authentication. Say, for example, you want to use Placemaker to find locations in a text:

<?php
$content = 'Hey, I live in London, England and on Monday '.
           'I fly to Nuremberg via Zurich, Switzerland (sadly enough).';
$key = 'C8meDB7V34EYPVngbIRigCC5caaIMO2scfS2t'.
       '.HVsLK56BQfuQOopavckAaIjJ8-';
// define the URL to POST to and the POST body
define('POSTURL',  'http://wherein.yahooapis.com/v1/document');
define('POSTVARS', 'appid='.$key.'&documentContent='.
                    urlencode($content).
                   '&documentType=text/plain&outputType=xml');
// start cURL with the POST URL
$ch = curl_init(POSTURL);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, POSTVARS);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  
$x = curl_exec($ch);
$places = simplexml_load_string($x, 'SimpleXMLElement',
                                LIBXML_NOCDATA);    
echo "<p>$content</p>";
echo "<ul>";
foreach($places->document->placeDetails as $p){
  $now = $p->place;
  echo "<li>{$now->name}, {$now->type} ";
  echo "({$now->centroid->latitude},{$now->centroid->longitude})</li>";
}
echo "</ul>";
?>

Why is all that necessary? I can do that with jQuery and Ajax!

Yes, you can, but can your users? Also, can you afford to have a page that is not indexed by search engines? Can you be sure that none of the other JavaScript on the page will cause an error that wipes out all of your functionality?

By letting your server do the hard work, you can rely on things working. If you use web resources in JavaScript, you are first of all hoping that the user’s computer and browser understand what you want, and you also open yourself up to all kinds of dangerous injections. JavaScript is not secure – every script executed in your page has the same rights. If you load third-party content with JavaScript and you don’t filter it very cleverly, the maintainers of the third-party code can inject malicious code that allows them to steal information from your server and log in as your users or as you.
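As a trivial illustration (not taken from the examples above, just a sketch), if all you want from a third-party source is text, you can at least strip and escape it on the server before echoing it out:

<?php
// A sketch: when you only want text from a third-party source, remove the
// markup and escape what is left instead of trusting what you received.
$untrusted = '<p>Hello <script>alert("owned")</script></p>';
$clean = htmlspecialchars(strip_tags($untrusted), ENT_QUOTES, 'UTF-8');
echo '<p>'.$clean.'</p>';
?>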

And why the C64 thing?

Well, the lads behind cURL actually used to do demos on the C64 (as did I). Just look at the difference:

horizon 1990

haxx.se 2000

Interfaces that have to die – the onchange select box

Thursday, July 10th, 2008

It is scary how some obviously bad practices refuse to go away, with developers jumping through hoops to defend them and make them work rather than using simpler alternatives.

My pet peeve is the “chess rules” select box (you touched it, it needs to move):

This is the most evil permutation of this interface idea. A lesser evil version is one that is part of a large form and changes the rest of the form or – if we venture back into more evil lairs – submits it to reflect those changes.

The arguments for this interface are not many:

  • It saves the user one step of pressing a submit button
  • The action is immediate

(OK, these are actually the same argument – comment if you know any others.)

The drawbacks of the solution are immense:

  • People who use a keyboard will submit the form or leave the page without having selected the right option.
  • People who depend on a keyboard for web access will possibly know that you can press Alt+Arrow Down to expand the whole dropdown before selecting with the arrow keys and Enter.
  • Most people won’t know this, though, and a lot of people use keyboards, tabbing from field to field to fill out forms.
  • Forms are a terrible annoyance of our times – nobody likes filling them in. Most of the time we won’t keep our eyes on the screen but look up information on paper or our credit card, or read the fine print that – of course – is not on the screen but only in the letter we were sent.
  • I have no clue how something like that works with switch access or voice recognition – probably it doesn’t.
  • It is scripting-dependent – no JavaScript, no service.

The latter is actually a reason why some companies use this from time to time. Too many links on the same page are bad for SEO – Google considers your page not content but a sitemap and ranks it lower. That’s why some companies offer the first two levels of navigation as links and the rest as a dropdown in this manner.

However, that still doesn’t make this a good solution; it just means that you offer far too many options to your users to begin with.

Here’s what people do to make this work – Aaron Cannon on the WebAIM mailing list:


A coworker and I recently devised a way to make a slightly more accessible version of the onchange dropdown navigation box. Basically, using javascript, we determine whether they are using their mouse or the keyboard to select each item in the list. If they used the mouse, it works as usual, immediately taking them to the selected page. If however the selection was made by the keyboard, we display a spinner and delay x number of seconds. If the user doesn’t make another selection within that time, they are taken to the page they selected. If they do, the clock is restarted.

I am not attacking Aaron here; he was probably asked to find a solution no matter what and did his best to find a technical way. The big scare in this for me is “determine whether they are using their mouse or the keyboard”. My guess is that they check the event type, but assistive technology like voice recognition may simulate clicks without being a mouse. The other danger signal is the timer and the spinner – these complicate the interface even further (“Is this loading?”).

I simply don’t understand why we constantly try to make things work just because that is what the design spec says or what we saw somewhere else. How about really testing it with users and seeing what happens then?