Christian Heilmann

Archive for the ‘ajax’ Category

Scripting Enabled at @mediaAjax 2008

Monday, September 15th, 2008

I am right now at @media Ajax 2008 getting ready to go on stage to deliver my “Scripting Enabled” talk, explaining that the main issue with accessibility is that we just don’t talk enough to each other. Technology is rarely the real barrier to accessibility; the problem is that we don’t understand how people work and what the technology is capable of.

[Slides: “Scripting Enabled” – embedded from SlideShare]

Links in the presentation

Oh look, using Ajax in a stupid way is not a good idea?

Tuesday, April 29th, 2008

It is quite fascinating to me that the newest article on dev.opera.com entitled ‘Stop using Ajax!’ is such a big thing right now. Tweets, shared bookmarks and Google Reader items are pouring in and people seem to consider it an amazingly daring article.

Here’s the truth: James is right. He also was right when he more or less gave the same information as a talk at Highland Fling last year following my presentation on progressive enhancement and JavaScript.

However, there is nothing shocking or daring or new about this. All he says is:

  • Don’t use any technology for the sake of using it
  • Consider the users you want to reach before using a technology that may not be appropriate
  • Make sure your solution is usable and accessible
  • Build your solution on stuff that works, then enhance it.

This is what I consider to be a normal practice when developing any software or web solution.

However, the real question now is why we are in this state – how come we see this information as daring, shocking or controversial, and how come a lot of comments still say “I don’t care about accessibility because it is not needed for my users”? How come the assumptions and plain accessibility lies prevail while the good stuff goes unheard?

Well, the truth is that we have been preaching far too long to the choir. I’ve been in the web accessibility and standards preaching community for a long time, and whenever I asked about enterprise development and CMSs I was told it is not worth fighting that fight as “we will never reach them”. Well, this is where the money and a lot of jobs are, and it is a fact that both accessibility and standards activists in a lot of instances don’t even know the issues that keep the stakeholders in these areas busy. My Digital Web article ‘10 reasons why clients don’t care about accessibility’ and the follow-ups Seven Accessibility Mistakes Part One and Part Two listed these issues and the wrong ways we try to tackle them three years ago. My talk at the AbilityNet conference last week, Fencing-in the habitat, also covered this attitude and these problems.

Here’s where I am now: I am bored and tired of people fighting the good fight by blaming each other’s mistakes or pointing out problems in systems that are within reach. When people ask for accessibility or Ajax usability advice you get a lot of bashing and “go validate then come back” answers, but not much information that can be used immediately, or even questions about what led to the state of the product. You’d be surprised what you can find out by asking this simple question.

We have to understand that large systems, frameworks and companies still run the show, even when we think that bloggers, books on web design and mashups push the envelope. They do, but so far they are a minor discomfort for companies that sell Ajax and other out-of-the-box solutions that are inaccessible and to a large extent unusable for humans. When was the last time you used a clever expense or time tracking system in a company that is not a startup or a small web agency? When I was at the AjaxWorld conference in NYC earlier this year I heard a lot about security, ease of deployment and scalability, but only a little bit about accessibility (in the Dojo talk and the YUI talk, actually). People are a lot more concerned about the cost of software and the speed of release than about quality or maintainability. It is cheaper to buy a new system every few years than to build one that is properly tested and works for all users. Does your company still have systems or third-party solutions that only work on IE/Windows? I am sure there is at least one – ask the HR or finance department.

It doesn’t help to coin another term and call an accessible and usable Ajax solution Hijax, either. As much as I like the idea of it, I have to agree with James’ comment – we don’t need another word, we need a reason for people not to just use things out of the box without thinking about them or – even better – to offer help to the companies that build the solutions on assumptions in the first place. When I ranted about a system by a large corporation some weeks ago on Twitter, their marketing manager for EMEA started following me and I am now starting some talks with them.

I have heard numerous times that my ideas about progressive enhancement and accessibility are just a “passing fad” and that “in the real software market you don’t have time for that”. Challenging this attitude is what makes a difference – by proving that using the technologies we are given in a predictable and secure way does save you time and money. However, there are not many case studies on that…

I cannot change the world when I don’t know what obstacles people have to remove to do the right thing. Deep down every developer wants to do things right, in a clean and maintainable fashion, and be proud of what they’ve done. Bad products happen because of rushed projects, bad management and developers getting so frustrated that they are OK with releasing sub-par work just to get the money or finally get allocated to a different project.

This is the battle we need to fight – where do these problems come from? Not what technology to avoid. You can use any technology in a good way, you just need to be able to sell it past the hype and the assumption that software is developed as fast as it takes to write a cool press release about it.

My wishlist for a great Ajax API

Tuesday, April 8th, 2008

Coming back from The Highland Fling it was interesting to see that people seem not quite convinced yet about the necessity of APIs and the large part they will play in the next few years of web development. I guess this is partly based on experiences with APIs that aren’t properly explained to non-geeks and are inconsistent or hard to use. There is just not much fun in trying to find information bit by bit if all you want to do is write some code (unless you have the old-school hacker/cracker mind and didn’t consider spending hours looking at hexdumps trying to find a way to get endless lives in a game a waste of time).

During my interview with Paul Boag I pointed out that designing a good API is as important as designing any other user interface – including your web page. Gareth Rushgrove agreed in his splendid talk How to be a first class web citizen. I also pointed out that there is a lack of clear and easy tutorials and articles on the matter, so I decided to have a go at it now.

Designing a great Ajax API

As an example I will use the recently released Google translation API, point out its good parts and list the things I consider missing. I will not go into actually writing the API but instead explain why I consider the missing parts important. This is not an attack on Google – I really liked working with this API and just wanted it to be a bit easier to use, so no hard feelings; I take my hat off to you for offering an API like this at all!

Here are the points I consider important when we’re talking about Ajax APIs in JavaScript (Ajax implies JavaScript, but you’d be surprised how often a plain REST API is advertised as Ajax):

  • Good documentation
  • Usage examples to copy + paste
  • Modularity
  • Link results to entries
  • Offer flexible input
  • Allow for custom object transportation
  • Cover usability basics

Documentation and presentation

Let’s start with a positive: the documentation of the Google Ajax Language API is great. You have all the information you need on one page including copy and paste examples. This allows you to work through the API online, read it offline and even print it out to read it on a crowded bus without having to take out your laptop.

Tip: If you are offering copy and paste examples – which by all means you should as this is what people do as a first step – make sure they work! I learnt the hard way in my book Beginning JavaScript with DOM Scripting and Ajax that there is nothing more dangerous than showcasing code snippets instead of full examples – people will copy and paste parts of a script, try to run it and either email you that your code is broken or – even worse – complain in book reviews on Amazon. If you offer copy and paste examples make sure all of them work independently.

Google offer explanations of what the API is, what you can do with it, a list of all the parameters and what they mean. This is great for a first-glance user. For the hard-core audience they also offer a class reference.

Usage example

The first code example is quite good: you can copy and paste it and, if your computer is connected to the Internet, it will work – or it would, if the HTML got some fixes.

First of all it lacks a DOCTYPE, which is a bit annoying as that is a very important part of an HTML document. The more important bit is that the encoding is not set. The live example version has both – a bit of a nuisance, because especially when we are talking about different languages, with traditional Chinese as the example, the correct encoding is a must.

(Note the irony: it seems WordPress doesn’t get this right either, for some reason…)

[The example markup that followed here got mangled – the traditional Chinese sample text only survives as a row of question marks.]
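For reference, here is a sketch of what the fixed example could look like. It is reconstructed from the parts discussed here (DOCTYPE, encoding, the jsapi include and google.load()) and from memory of Google’s documented example rather than copied from the original post, and the Chinese sample text is only a stand-in:

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Google AJAX Language API example</title>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load("language", "1");
function initialize() {
  var text = document.getElementById("text").innerHTML;
  // detect the language of the sample text, then translate it to English
  google.language.detect(text, function(result) {
    if (!result.error && result.language) {
      google.language.translate(text, result.language, "en", function(result) {
        if (!result.error) {
          document.getElementById("translation").innerHTML = result.translation;
        }
      });
    }
  });
}
google.setOnLoadCallback(initialize);
</script>
</head>
<body>
<div id="text">你好，很高興見到你。</div>
<div id="translation"></div>
</body>
</html>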
Tip: make sure you explain to people that your code examples need an internet connection and have other dependencies (like requiring HTTP and thus having to run on a local server). JavaScript historically didn’t have any dependency other than a browser; this is changing lately and can be confusing, especially when you use Ajax behind the scenes like some Flash/Ajax APIs do!

Modularity is good!

The first bit that threw me off to a certain degree was the google.load("language","1") line, but there is an immediate explanation of what it means.

The first script include loads a generic Google Ajax API that has a load() method to add other, smaller APIs built on top of this one. In this case the line means you want to load the language API in version 1.

This appears clunky and you will get bad feedback for it (it seems nothing woos the masses better than a one-script-include solution), but it is actually rather clever.

By modularizing the Ajax code into a base library, changes to the core functionality become easy. And by asking implementers to include the APIs they need with a version number, you make upgrading the implementer’s choice instead of breaking older implementations or having to carry the weight of full backwards compatibility.

Yes, the perfect world scenario is that you’ll never have to change the functionality of your API - just add new features – but in the real world there are constant changes that will make it necessary for you to mess with the original API. There is no such thing as perfect code that is built for eternity. Using a loader function in the base API is also pretty clever, as it means that implementers don’t need to change URLs.
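As an illustration of the pattern (this is my own sketch with a made-up URL scheme, not Google’s actual loader code), a load() method in the base API can simply write a script element whose URL contains the requested module and version:

// minimal sketch of a versioned module loader – not Google's implementation
var baseAPI = {
  load: function(module, version) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    // the version becomes part of the URL, so existing implementations
    // keep getting the release they were written against
    script.src = 'http://api.example.com/' + module + '/' + version + '/' + module + '.js';
    document.getElementsByTagName('head')[0].appendChild(script);
  }
};

// implementers pick the module and the version they code against
baseAPI.load('language', '1');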

What goes in should come out.

This is where Google created a problem. Both the google.language.detect() and the google.language.translate() methods are quite cool insofar as they let you send a string and define a callback method that gets invoked when the API returns a value. However, the returned object in both cases gives you a result and a status code, but not what was entered. You get all kinds of other information (described in the class documentation), but having the original entry would be very useful.

Why? Well the great thing about Ajax is that it is asynchronous, and that is also its weakness. It means that I can send lots of requests in the background in parallel and wait for the results. However, this does not mean that the requests also return in the right order!

This means that if you want to loop through an array of texts to translate, the following is an unsafe way of doing it:

var translations = ['one','two','three','four','five','six','seven','eight','nine','ten'];
var gtl = google.language.translate;
for (var i = 0, j = translations.length; i < j; i++) {
  gtl(translations[i], 'en', 'de', function(result) {
    if (!result.error) {
      var container = document.getElementById('translation');
      // translations get appended in whatever order the responses arrive
      container.innerHTML += result.translation;
    }
  });
}

Instead you need to wrap the incrementation of the array counter in a recursive function:

var translations = ['one','two','three','four','five','six','seven','eight','nine','ten'];
var gtl = google.language.translate;
var i = 0;
function doTranslation() {
  if (translations[i]) {
    gtl(translations[i], 'en', 'de', function(result) {
      if (!result.error) {
        var container = document.getElementById('translation');
        container.innerHTML += result.translation;
      }
      // only move on to the next text once this one has returned,
      // so the results keep the order of the original array
      i++;
      doTranslation();
    });
  }
}
doTranslation();

This is safer, but we lost the opportunity to have several connections running in parallel and thus getting results faster. If the result of the API call had the original text in it, things would be easier, as we could for example populate a result object and match the right request with the right result that way:

var translations = ['one','two','three','four','five','six','seven','eight','nine','ten'];
var gtl = google.language.translate;
var results = {};
for (var i = 0, j = translations.length; i < j; i++) {
  gtl(translations[i], 'en', 'de', function(result) {
    if (!result.error) {
      // result.input is what I wish the API returned – it does not exist yet
      results[result.input] = result.translation;
    }
  });
}

Even easier would be a transaction ID to pass in which could be the counter of the loop. Another option of course would be to allow more flexibility in the data that goes in.

Offering flexible input

Both the matching of the input text with the result and a transaction ID would still mean a lot of requests to the API, which is not really nice as it costs money and clobbers the server and the client alike. An easier option would be to allow not only a string as the text parameter but also an array of strings. The return value would then also become an array, and a lot of the overhead of calling the translation engine would happen on the server in a single call instead of lots and lots of API calls.

This is not hard to do and most JavaScript framework methods work that way, by checking the type of the first argument and branching accordingly. You can even go further and allow implementers to send their own bespoke object as a third parameter.
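A quick sketch of that type-check-and-branch idea (the function name is made up for illustration – the real win comes from the API doing the batching on the server):

// normalize the first argument so a single string and an array of
// strings can travel through the same code path
function normalizeInput(text) {
  if (typeof text === 'string') {
    return [text];   // wrap the single string
  }
  return text;       // assume it is already an array of strings
}

normalizeInput('one');            // -> ['one']
normalizeInput(['one', 'two']);   // -> ['one', 'two']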

Transporting a custom object allows implementers to write a lot less code

The benefit of a custom object going out and in is that you can add more parameters to the API call that are only specific to the implementation. Most likely this could be a reference to a namespace to avoid having to repeat long method names or global variables. You could start by providing parameters that make sense to any Ajax call in terms of usability.
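Until an API supports this, a thin wrapper can fake it by closing over the custom object and handing it back to the callback together with the result. This is my own workaround, not part of the Google API:

// carry an implementer-defined object through the asynchronous call
function translateWithContext(text, from, to, custom, callback) {
  google.language.translate(text, from, to, function(result) {
    callback(result, custom);   // the custom object comes back untouched
  });
}

// usage: the loop counter travels along, so results can be matched to inputs
var texts = ['one', 'two', 'three'];
var matched = [];
for (var i = 0; i < texts.length; i++) {
  translateWithContext(texts[i], 'en', 'de', { index: i }, function(result, custom) {
    if (!result.error) {
      matched[custom.index] = result.translation;
    }
  });
}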

Thinking Ajax usability

The main thing any Ajax call should offer a user is a timeout. There is nothing more disappointing than getting the promise of a brave new Ajax world with much more interactive interfaces and then getting stuck looking at spinning wheels or, worse, hitting a link and getting nothing. Right now the language API has nothing like this, and you’d have to roll a solution by hand. You’d also have to check the error status code to see whether the data could not be retrieved and call a failure handler for the connection that way.

A nice API would offer me these options, most likely all rolled into one parameters object.
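Here is a rough sketch of how you could roll that by hand today, racing the API call against a timer – my own code, not something the language API provides, and the error object passed to failure is made up:

// wrap the translation call with a timeout; whichever fires first wins
function translateWithTimeout(text, from, to, options) {
  var finished = false;
  var timer = setTimeout(function() {
    if (!finished) {
      finished = true;
      options.failure({ error: { message: 'timeout' } });   // made-up error shape
    }
  }, options.timeout);
  google.language.translate(text, from, to, function(result) {
    if (finished) { return; }   // the timeout already fired, ignore the late result
    finished = true;
    clearTimeout(timer);
    if (result.error) {
      options.failure(result);
    } else {
      options.success(result);
    }
  });
}

translateWithTimeout('one', 'en', 'de', {
  timeout: 5000,
  success: function(result) {
    document.getElementById('translation').innerHTML = result.translation;
  },
  failure: function(result) {
    alert('Could not translate right now, please try again later.');
  }
});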

My dream translation API

Taking all of this into consideration, it would be perfect if the API offered these options:

google.language.translate(input,parameters);

The parameters would be:


input      // string or array of strings
parameters // object with the following properties:
  sourceLanguage: string,
  targetLanguage: string,
  transactionId: string,
  customparameters: object,          // to transport
  timeout: integer,                  // in milliseconds
  failure: function(result, params), // method to call when there is a timeout or an error
  success: function(result, params)  // method to call when all is fine

The returned data from the API should have both the result and the parameters provided. This would make the life of implementers dead easy.
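To make that concrete, a call against this imagined API could look like the following – all of it hypothetical, including the result.translations array:

// hypothetical usage of the wishlist API – nothing like this exists (yet)
google.language.translate(['one', 'two', 'three'], {
  sourceLanguage: 'en',
  targetLanguage: 'de',
  transactionId: 'numbers-batch-1',
  customparameters: { container: 'translation' },
  timeout: 5000,
  success: function(result, params) {
    // the translations would come back in the same order as the input
    var out = document.getElementById(params.customparameters.container);
    for (var i = 0; i < result.translations.length; i++) {
      out.innerHTML += result.translations[i];
    }
  },
  failure: function(result, params) {
    // called on a timeout or an error status code
    alert('Translation failed for transaction ' + params.transactionId);
  }
});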

Summary

In summary, here’s what I expect from a great Ajax API:

  • Have a good documentation with immediate copy and paste examples backed up by a full class documentation
  • Build your APIs modular and allow the implementer to choose the version they want to have
  • Provide a hook to link the result of the API methods to the initial data entered. The easiest way is to repeat this data; a more sophisticated option is to allow for a connection ID.
  • Allow for multiple values to be sent through; it’ll save you API calls and save the implementer from hacking around the problem of an unreliable return order.
  • Allow implementers to send their own object and get it back, to allow for namespacing and other state retention.
  • Allow for a timeout – connections are not to be trusted.

This is a work in progress

I hope you found something here to agree with and if you know things to add, just drop a comment.

AjaxWorld East – Cold weather, Arctic Snowcruisers and a shift in perception

Thursday, March 27th, 2008

I am just sitting at the airport in Montréal, waiting for the delayed flight that will take me back to New York City and from there to London. I’ve spent the last few days first in New York to speak at the AjaxWorld East conference, then in Montréal, Canada, to speak at the Coder’s Saturday and give a two-presentation workshop at Nurun. Suffice to say, I am pretty bombed out, but this is a good chance to jot down my experiences during this trip. Let’s start with AjaxWorld.

The audience

The AjaxWorld conference was a bit of a surprise to me. Judging by the price of a ticket and the web site explaining all the other presentations, I had braced myself to stand in front of a large room full of suits who needed to go to the conference rather than wanting to go. Turns out I was wrong. The audience was a mixed crowd of company owners, project managers, designers and developers, and everybody was very involved and interested. I’ve met a lot of companies selling developer tools like frameworks and IDEs, a few implementing companies (including the lovely people from the Food Network who took me out for a real NYC-style Italian dinner) and really surprising edge developers, like someone who runs software that controls geostationary satellites! I got a lot of interesting questions and feedback and I am happy to have met everybody there.

The location

The conference took place in the Roosevelt Hotel, located a block away from New York’s Grand Central Station. The hotel is a 1920s affair full of the charm and grandeur of a bygone era. You enter the place and feel like you are in a Hercule Poirot episode. However, for a tech conference this size it was not a great choice. For starters there was no internet connection that was either affordable or stable. Even the $24/day room connection went up and down like a roller coaster. This meant that I was driven to the bagel place (Cosi) on the other side of the street a lot, as they had free Wi-Fi (I am sure they made a killing these few days).

As for presentation facilities, there was the main presentation hall, a ballroom with really high ceilings reserved for sponsor talks and the keynote. This one easily allowed all attendees to fit in, but was excruciatingly cold. All the other presentations took place in smaller meeting rooms not quite big enough to hold all interested delegates. The projectors were put in between rows of chairs, which made it tricky to reach the screen, and from time to time attendees would bump against them.

The hotel itself is also grossly overpriced for the comfort the rooms offer. My hotel in Montréal was about a third of the price per room per night, ultra-modern, had free connectivity, a fax and a safe in each room and came with complimentary breakfast. OK, it is New York City and the Roosevelt was built in the 20s, but a fleecing is a fleecing, no matter how you dress it up.

The organization and coverage

Apart from that, the hotel had a cozy atmosphere in the bar and the organizers did their best to keep everything in check. They did a splendid job filming the whole event and live-streaming parts of it. When you needed help, they were immediately there to help you out, something that is quite an uncommon event in NYC. There were more than enough handouts of the current schedules, signs were put up everywhere you needed them, and every delegate got a free book on Ajax along with the massive badge telling others who that person is and what he or she does. I felt pretty well looked after and was easily convinced to deliver an extra presentation in place of another presenter who had missed his flight.

The only thing I didn’t like too much was the number of parallel presentations. I missed a lot of things I wanted to see, but couldn’t because I can’t split myself in two. Clever companies made sure they sent several people there to allow for full coverage.

My presentations

My first presentation was a gap-filler for a speaker who didn’t show up, and I wrote it the day before. I talked about the opportunity we have right now to use Flash to deliver rich media content on web sites and control the experience with JavaScript, provided that we get a proper API. I have gotten really interested in this lately, working with great people like Aral Balkan, Niqui Merret and Steve Webster. As examples I showed how SWFObject allows you to progressively enhance HTML to contain Flash, how the YUI image uploader allows you to batch-upload files using Flash and how the YouTube API allows you to control online video with JavaScript.

My main and planned presentation was about architecting JavaScript for large applications using event-driven design, showing off Yahoo Maps and Eurosport as examples. The unexpected bit of the presentation was that I explained it using the Antarctic Snowcruiser as an analogy.

The Antarctic Snowcruiser was a massive vehicle built in 1937 to explore Antarctica. The task the cruiser had to fulfil was amazingly problematic: it was to host a crew of five and keep them safe from the terrible cold for a year while crossing 5,000 miles of hostile environment. The cruiser was an absolute marvel of technology and its inventors thought up amazingly clever solutions to the problems they anticipated, including the ability to retract the wheels into the body of the cruiser to overcome crevasses and to re-use the engine heat to keep the crew from freezing to death. The only thing they forgot to plan for was traction, and when the cruiser arrived in Antarctica its wheels spun uselessly and the engine overheated almost immediately. The cruiser was abandoned and found 20 years later encased in ice. Where it is now is unknown, as the piece of ice it was last seen on broke off and drifted out to sea.

This was my example of how we build applications – we assume a lot and over-engineer on the server side without realizing that most of the application work will be done by a browser in an environment unknown to us. I wanted to inspire people to consider the usability and restrictions of web applications before assuming that everything will work and that we can predict the environment our apps run in.

Both presentations were packed and I got good feedback; I’d be interested in more from people who went there, and I am looking forward to seeing the videos.

Other presentations

I didn’t see many other presentations, but here is what I got: Douglas Crockford’s keynote was a stark reminder of how broken the technologies we use to create web applications really are; he showed security flaws and ideas for how we can try to fix them. Caja was mentioned along with ADsafe, and Douglas explained his idea of vats to secure the web. More and more detail has been published on his blog over the last few days, so make sure to check there.

In terms of security, there was a full-on attack vector talk by Danny Allan of Watchfire which showed exactly how far an unrestricted scripting injection will get attackers. It is pretty easy to dismiss XSS as a banal penetration of your site, but this presentation showed how a fully patched new Firefox running on a Windows machine with a firewall and up-to-date virus software can be accessed with malicious JavaScript to get all kinds of passwords and deeper access, using a mix of phishing and social engineering.

The IBM people involved in the Dojo framework showed off the internationalization and accessibility options of Dojo in a joint presentation, and I was very impressed by how far the framework embraces ARIA and how much information was given on how to do both i18n and a11y with usability in mind. Great work!

I’ve spent a lot of time with the people who build qooxdoo, a widget framework written exclusively in JavaScript. Their presentation showed what the framework is and how it works cross-browser. They also showed the very impressive performance of the widgets and how they can be skinned. The really cool stuff in qooxdoo is still in the making though, and I will make sure we watch the guys more closely and reveal some of the clever tricks they use to boost performance and build application code cleverly using a Python build script.

The last presentation I saw introduced ways to build iPhone applications that have the native look and feel, and it was more or less a re-hash of the developer guidelines given out by Apple mixed with an introduction to iUI. I was not at all inspired by the talk, which is amazing, as the iPhone is a really nice platform and highly interesting at the moment.

Summary and learnings

All in all it was well worth going to the conference. The mix of presentations and topics covered was well done, and I myself realized that it is high time we stop advocating web standards as a technical solution. Web standards are there to ensure maintainability and predictability, but in the enterprise world we are already one step further, and technologies like Flex, Silverlight and Comet are what need our close attention to ensure that we have predictable and maintainable solutions there as well. Douglas’ view of the browser model as broken can be considered too strong a message, but I realized that a lot of the things end users expect these days are just not possible with the HTML/CSS/JS trinity we keep preaching about. This does not mean standards are unimportant – they are best practice in terms of implementation. Technologically we have to brace ourselves to be surprised by the changes in the next year.

Code tutorials for lazy people with Ajax Code Display

Monday, January 28th, 2008

Currently I am writing a lot of tutorials for an online self-training course about web standards and I ran into the annoyance of having to maintain example code in two places: the code itself and the HTML document with the explanations. Therefore I took jQuery and wrote a small script that automatically turns links to HTML code examples into embedded listings with escaped HTML entities and line numbers. You can define which lines to display, which lines should be highlighted, and you can add a live preview in an IFRAME when the link is clicked.
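The core of the idea is simple enough to sketch in a few lines of jQuery. This is an illustration of the approach, not the actual Ajax Code Display source; the ‘codeexample’ class name is made up, and the line-range, highlighting and IFRAME preview options are left out:

// sketch: load each linked example via Ajax and turn it into a numbered listing
$(document).ready(function() {
  $('a.codeexample').each(function() {
    var link = $(this);
    $.get(link.attr('href'), null, function(code) {
      var lines = code.split('\n');
      var list = $('<ol class="code"></ol>');
      for (var i = 0; i < lines.length; i++) {
        // creating a text node escapes the HTML entities for us,
        // and the ordered list provides the line numbers
        list.append($('<li></li>').text(lines[i]));
      }
      link.after(list);
    }, 'text');
  });
});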