Christian Heilmann

Archive for August, 2011

Browserfountain – playing with Canvas particles

Wednesday, August 24th, 2011

Last week I attended the great Creative JavaScript training by Seb Lee-Delisle. One of the things we talked about was simple particle systems using canvas with simulated physics. I was pretty blown away to see how easy that is, especially when you fake the physics rather than using the real formulas.
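
Here is a minimal sketch of that faking idea – my own illustration, not Seb’s code and not the fountain source. Each particle just carries a position and a velocity, and every frame a constant gets added to the vertical velocity instead of working out real gravity (it assumes a canvas element is already in the page):

    // minimal faked-physics particle loop (illustrative sketch)
    var canvas = document.querySelector('canvas');
    var context = canvas.getContext('2d');
    var particles = [];

    function spawn() {
      particles.push({
        x: canvas.width / 2,
        y: canvas.height,
        vx: (Math.random() - 0.5) * 6,  // random sideways kick
        vy: -(Math.random() * 10 + 5)   // shoot upwards
      });
    }

    function tick() {
      context.clearRect(0, 0, canvas.width, canvas.height);
      spawn();
      for (var i = particles.length - 1; i >= 0; i--) {
        var p = particles[i];
        p.vy += 0.3;   // "gravity" is just a constant added every frame
        p.x += p.vx;
        p.y += p.vy;
        context.fillRect(p.x, p.y, 3, 3);
        if (p.y > canvas.height) {
          particles.splice(i, 1);  // drop particles that fell off the bottom
        }
      }
      setTimeout(tick, 1000 / 30);
    }
    tick();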

I just uploaded one of my experiments to Mozilla’s Demo Studio: the Browser fountain (source code on GitHub).

Here’s what it looks like on YouTube:

The performance is pretty amazing, although I am doing a few things that made me cringe while writing them – at least if you approach a demo like this the way you approach building production web sites. Sometimes it makes sense to let go, I guess.

I am currently discussing with Seb whether he is OK with me writing a simple tutorial on particles, so let’s see if that goes ahead.

Recording talks on a shoestring – a session at encampment 2011 in London

Monday, August 22nd, 2011

Today I spent a thoroughly enjoyable day at Encampment London and held a quick impromptu session on “recording talks on a shoestring”. There weren’t any slides but – to prove my point – I recorded the audio of my presentation and published it on the web:

Here are the main things I talked about:

  • You can easily record audio with the built-in microphone of computers these days using Audacity. As it does not need installing, it could even be run from a memory stick.
  • VLC not only plays almost anything on this planet, but it can also record the screen as a video (screencast here). This is incredibly handy when you have speakers who don’t have slides but instead jump around in different programs for live demos.
  • Once you have recorded your audio or video you need to convert it to web formats and host it somewhere. You can do the conversion inside the tools (Audacity allows for OGG and MP3 saving), but you would still need to upload the files somewhere. As I am releasing my talks as Creative Commons I normally use Archive.org for hosting and converting.
    • If you recorded your video you can use MPEGStreamclip to simply chop off the start and end and Miro video converter to convert it to WebM
    • Audio files I normally tend to convert in iTunes as I can tag them there and add cover art (as I am listening to them on my iPod)
    • Once uploaded, archive.org will give you the video as MPEG4, OGV and animated GIF (demo here). Audio gets converted to OGG (like this talk)
  • If you want your videos converted and hosted without the publicity or licensing of archive.org, I normally use Vid.ly, which converts your videos on the fly to lots and lots of formats and provides you with a short link that sends the right video format to the right device when you call it. It is an incredible service.
  • The next step would be to allow for subtitling and captioning. This is incredibly useful, as it lets people jump to the part of the talk they are really interested in and gives search engines a chance to find your content. Captioning and subtitling are expensive. If you see, however, how cool it is at TED (for example at the excellent Roger Ebert – remaking my voice) you can see the benefits. To do this on the cheap there is Universal Subtitles, a JavaScript that adds a subtitle display as well as a subtitling and translation interface to all the videos in a document. Universal Subtitles is based on the Popcorn.js library. Some people already use it to link transcripts with the audio, as you can see in this demo from Minnesota Public Radio. If you don’t want to do all that by hand there is also the Butter App project.
  • One last thing I mentioned was Screenr, a free tool where you log in with Twitter and get a screencasting app that lets you record five minutes of your screen and actions, then converts and hosts the final video for you. You can also save the MP4 or send it to YouTube.

This was just a quick introduction to what you can use for free to provide simple recordings of your event. None of it replaces a professional recording, but it also doesn’t replace a few thousand pounds in your pocket with air.

What I didn’t mention was that Keynote and QuickTime both have presentation and screen recording facilities. But as they are not free, I skipped them.

CSS challenge: 90 degree turned headings in CSS3 with a fallback?

Wednesday, August 17th, 2011

OK, during an IRC session the benevolent overlord of the MDN documentation, Sheppy, asked me to help him make the MDN docs look more awesome with breakouts where the heading is turned 90 degrees.

In essence, what we want is this:

rotating headlines

The above version should be for browsers that support CSS rotation and the one below should be the fallback version. Now, with my efforts and those of the ever amazing Chris Coyier, we got quite far down the line (on JSFiddle):

However, we are not quite there yet. As you can see, the fallback (the first example) still covers the text and is not the full width. Chris of course proposes using Modernizr to check for CSS support, but that is something we don’t want. And, let’s face it, shouldn’t have to use.

I think this is a fundamental flaw of CSS – it offers us a lot of design opportunities but there is no “if this is possible, do this” (other than media queries). You almost always need JS to test before you can apply a CSS effect – or disappoint people who do not use your cool new browser.
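
To illustrate what that JS test looks like in practice – a minimal hand-rolled sketch, not Modernizr and not what we ended up using – you end up checking for the transform property (and its vendor prefixes) and adding a class hook for the CSS to key off:

    // minimal sketch of a transform feature test (the "csstransforms"
    // class name is just an example hook, not part of the MDN solution)
    var candidates = ['transform', 'WebkitTransform', 'MozTransform',
                      'OTransform', 'msTransform'];
    var supportsTransform = false;
    for (var i = 0; i < candidates.length; i++) {
      if (candidates[i] in document.documentElement.style) {
        supportsTransform = true;
        break;
      }
    }
    if (supportsTransform) {
      document.documentElement.className += ' csstransforms';
    }

Which is exactly the kind of extra scripting a simple heading style should not need.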

In the past, this was easier. We had CSS hacks and filters that targeted certain browsers. All of them were hacks exploiting issues in the rendering engine of the browser, sometimes in bizarre ways (remember the Box Model Hack’s use of voice CSS?).

Isn’t it time that, with the power CSS gives us, we also get a checking statement to apply transforms only if they can be used? And even then, there is no real graceful degradation.

So, who can make this work without Modernizr?

Getting rusty – we need new best practices for a different development world

Monday, August 15th, 2011

Here’s the good news: we, those who promoted open web standards, have won! The web of today uses less and less closed technology and fewer plugins, and HTML, CSS and JavaScript are the tools used to create a lot of great web experiences.

For example, nearly nobody uses Flash for a simple image gallery any more, and more and more companies advertise themselves – to us and to the world – as “supporting web standards” and “using open technology”.

Of course, there is a lot of lip service going on and – as Bruce Lawson put it – we get HTML5, hollow demos and forgetting the basics. Bruce points out that a lot of HTML5 demos don’t have any semantic markup, don’t even create working links, and have a lot of traits we saw in Flash tunnels in the nineties and at the beginning of the millennium.

On the other side of this argument, a few people keep telling me they are working on blog posts on “why semantic markup and JavaScript fallbacks are not important any longer”.

I think there is a happy middle ground to be found, and it mostly means that we need to understand the following: what we do as web developers is very much dependent on the medium our work is consumed in.

How we use the internet has changed over the years and if we don’t want to be seen as enemies of progress we need to alter our best practices and give them a bit more flexibility when it comes to applying them.

Much like making a web site look and work the same in every browser means catering to the lowest common denominator and not using our platforms smartly, insisting on one stack of technologies used in a certain fashion limits us in reaching people who are just starting on the web and simply want to get some work done.

Are our best practices really rooted in reality?

Altering best practices? How is that possible? Well, for starters, I think a lot of what we preach is cargo cult rather than based on what really happens. A lot of the things we tell people are “absolutely necessary to make something work on the web” are not needed, and many an excited explanation of the usefulness of semantic markup is not actually based on facts. A lot of what we do as best practices is done for us, not for the end users or the technology we use. But more on that later.

Looking back (not in anger)

How did we get to where we are now, where newly built showcase sites violate the simplest concepts like providing alternative text for an image or using structured HTML rather than a few empty DIVs?

In order to learn how we got into this perceived mess it is important to understand what we did in the past. A lot of talks, books and posts paint a picture of the brave new world of web standards brazenly cutting a path through the jungle of closed technology towards a bright future. This is far from what really happened. If we are honest, a lot of what we did was hacking around to make things work and then trying to find a way to make what we did sustainable. And that last step is what brought us semantics. When I hear praise of POSH – plain old semantic HTML – as the way we built things in the past and a skill we forgot over time, I have to snigger. We did nothing of the sort – at least not in production.

Humble beginnings – HTML and CGI

In the long time ago, HTML was used for presentation, behaviour and structure

In the beginning there were no plugins and there was no JavaScript. We had HTML and images, and the biggest mistake people made even then was showing text as images without any alternative text. This meant that text-only browsers (which were still in use) and those on slow connections got the short end of the stick. Interaction was defined as clicking on links and submitting forms.

What we already started to try was speeding things up by using frames. For example, we kept a “sticky navigation” and only loaded content pages without any menus. This was the start of breaking basic browser functionality like bookmarking for the sake of performance. I remember working around that by using cookies to store the state of the page and re-writing the frameset accordingly on subsequent visits. That only fixed it for the current user – sending a link out to others was not possible any more. But the pages loaded much faster.

Layout was achieved in HTML – with horizontal lines, PRE elements, lots of &nbsp; entities and tables. What mattered most was making the thing look right across all the browsers, not what the HTML really was.

What we tell people instead though is that these were simpler times where the HTML and its semantic value really mattered. I remember it differently.

DHTML days (1)

When JavaScript got supported we started to go properly nuts. Whole menus were written out with document.write() and we used popup windows with frames written dynamically inside them (for example for image galleries):

JavaScript allowed for richer interaction - and more mistakes
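
A typical menu of that era looked roughly like this – a reconstruction from memory, not code from any particular site – and it only existed at all if JavaScript ran:

    // reconstruction of a nineties document.write() menu: the navigation
    // markup is generated while the page loads, so no JavaScript, no menu
    var pages = ['home', 'gallery', 'guestbook', 'links'];
    document.write('<table border="0" cellpadding="2">');
    for (var i = 0; i < pages.length; i++) {
      document.write('<tr><td><a href="' + pages[i] + '.html">' +
                     pages[i] + '</a></td></tr>');
    }
    document.write('</table>');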

We even started checking which browser was in use and – in the more sensible cases – rendered different experiences. In the lesser thought-out solutions we just told people that “This site needs Internet Explorer 4 to work”.

We also started hiding and showing content with JavaScript. Sometimes we wrote it out with JS and gave text browsers (or those in companies where JS was turned off by default by means of a proxy) either no content at all or far too much to take in, without caring much about structure.

When SEO started to matter we also used the NOSCRIPT tag to provide fallback text and links – most of the time laden with keywords instead of meaning.

DHTML days (2)

When CSS got supported things really took off – we could not only create dynamic things and show and hide them but really go to town moving, rotating, animating and stacking them. And that we did. DHTML library sites had hundreds of effect menus and image sliders and rotating buttons and whatnot:

JavaScript and CSS gave us the chance to build a lot of shiny things

Most of the behaviour was done with JavaScript, but we also started to play with CSS and :hover pseudo-selectors to build “CSS only multi level dropdown menus” and other things that couldn’t be used without a mouse.

This was the heyday of DHTML, and the first line of almost any script was checking for IE with document.all or Netscape with document.layers. The speed of computers also forced us to go through all kinds of dangerous hacks and tricks (dangerous as they made maintenance very hard, because the hacks tended not to get documented) to make things look smooth.
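
Paraphrased from memory rather than lifted from any specific library, those first lines and the branching that followed looked something like this:

    // the classic DHTML-era sniff: detect the object model, then branch
    var isIE = (typeof document.all !== 'undefined');
    var isNetscape = (typeof document.layers !== 'undefined');

    function moveLayer(id, x, y) {
      if (isIE) {
        document.all[id].style.left = x + 'px';
        document.all[id].style.top = y + 'px';
      } else if (isNetscape) {
        document.layers[id].moveTo(x, y);
      }
      // any other browser simply got nothing
    }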

The gospel (according to Zeldman)

When the book of Zeldman came out, this was the message: let’s stop trying to fix things for browsers and jumping through hoops to make our products work in an environment that is not to be trusted, and let’s rely on the standards instead. The main tool for this message was the separation of technologies:

In order to bring sanity to the web development world we claimed that HTML is structure, CSS is presentation and JS is behaviour

HTML is the structure, CSS is for the visual look and feel and JavaScript is for behaviour. If we keep all of these things separated, then we have a good web product that is easy to maintain, works for everyone and is clean to extend and work with.

That was the idea, and we took it further by coining the term Unobtrusive JavaScript (I remember having lots of fun writing this course) and subsequently DOM Scripting (with the DOM Scripting task force of the WaSP driving a lot of it and Jeremy Keith’s and my books giving good examples of how to use it).

The state today

Nowadays it seems we have gone full circle back to the world of mixing and matching the concerns and layers of development:

Today, it seems, all layers of separation are mixed and matched again

With great power comes great responsibility and right now I get the feeling that the latter is very low on our radars as it is far too much fun playing with the cool new things we have at our disposal. We have mobile phones with incredibly fast processors, we have supersonic JavaScript engines capable of 3D animation and hardware accelerated CSS animations. This makes it hard to get excited about semantic values.

Almost all the technologies in the stack get washed out in this new web technology world and separation becomes much harder. What good is a CANVAS without any scripting? Should we animate in CSS or in JavaScript? Is animation behaviour or presentation? Should elements needed solely for visual effects be in the HTML or generated with JavaScript or with :after and :before in CSS?

We do a lot more on the client these days than we did in the past. It is time we give the client more credit in our best practices. Yes, old browsers are not likely to go away anytime soon (and this is sometimes on purpose, with Microsoft, for example, not offering IE upgrades to Windows XP – and soon Vista – users).

Separation of concerns vs. the image of web developers

In its meaning and approach the separation first explained by Zeldman is still an incredibly good idea – the thinking behind the separation of the different technologies is great. Some companies very much embraced the concept in their training; Yahoo, for example, even goes a step further in making it understandable by calling it “separation of concerns” rather than layers of development.

This subtle difference also shows partly why this great idea was not always implemented in real products: you need to have an understanding of how the different technologies work and how to write them properly. In essence, you want a team of people with different subject-matter expertise working together to build a kick-ass product.

In reality, though, web development is still seen as something any developer with a bit of training could do – or, when you hire a dedicated web development team, they are considered to be experts across the board and are not allowed to concentrate on semantics, CSS or JavaScript.

This is the main reason why final web products out there do not have clear separation. In most cases, the developers are aware that they could have done a better job but they got forced to rush it or work with a technology they did not care much for. If you ever had to debug and optimise CSS written by Java developers you’ll know what I mean.

Web standards showcases and attrition

Whenever we praised a new product for using web standards the right way, it was not a big product. It was almost never the result of an enterprise framework or CMS. And – in a lot of cases – it was actually built to make a point about using web standards rather than to streamline the process of building a web product.

Take the site that was most likely the main cause for the breakthrough of CSS in the view of the community: CSS Zen Garden. The garden was a simple XHTML document, semantically correct but already with a lot of IDs and classes as handles to apply CSS rules to. Its job was to show that by separating look and feel you can redesign a web site easily and make it look (and later on react to the user) totally different from one case to another.

This went incredibly well, until we got too excited about the possibilities we had with image replacement. Later submissions to the garden had large parts of text in background images, which was ironic as the original argument was that all the content should be in HTML.

In the real world, however, we never had a fixed HTML document to play with – we had CMSes creating web pages for us and everything was in flux. You can’t control the number of menu elements, you can’t control the amount of text, and you will not be able to “simply add a class to an element” to give it some extra functionality. It is time we understood that we can inspire with showcase sites and presentations, but we don’t really help the people who develop web sites and fight the notion that “everyone can do frontend, it is not hard code”.

Nobody wants to hear about the depth and composition of the sea when what we do is riding jet skis

Right now “best practice web development” talks, presentations and tutorials are incredibly self-referential. We speak to the same people about the same subjects and claim that people use semantics whilst out there the web is in a struggle to survive against walled garden development and native mobile development for a very small part of the world-wide market.

People happily say they “only build for webkit” as this is “the best and fastest and most stable browser”. People are OK to see a showcase site completely and utterly failing when you don’t have the right browser and the right OS.

We start to recede into our respective specialist areas and speak at specialist conferences. A lot that is taught at design conferences is the total opposite of what you hear at performance conferences. We build abstraction layer above abstraction layer to work around browser issues and release dozens of “miracle” libraries and scripts at conferences without even caring if anyone will ever use them.

Speed is still the main thing we talk about. How to shave 20 milliseconds off a script loader? How to make an animation run at 50fps instead of 30fps?

Best practices for a new market of developers

My favourite example was attending the Google IO accessibility talk. For about an hour we were taught how to turn an element into a button and keep it accessible. Not once was it mentioned why we didn’t just use a BUTTON element for the job. There was lots of great information in that talk – but none of it would be needed if we didn’t simulate with JS and CSS something the browser readily gives us.
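
To make that contrast concrete, here is a rough sketch of what “turn a DIV into a button” entails – my illustration, assuming a hypothetical div with the id “fake-button” in the page, not the code from the talk:

    // re-creating by hand what a BUTTON element gives you for free:
    // a role, focusability and keyboard activation
    var fakeButton = document.getElementById('fake-button');
    fakeButton.setAttribute('role', 'button');
    fakeButton.setAttribute('tabindex', '0');

    function activate() {
      // whatever the button is supposed to do
    }

    fakeButton.onclick = activate;
    fakeButton.onkeydown = function (event) {
      event = event || window.event;
      // Enter (13) and space (32) should trigger it, like a native button
      if (event.keyCode === 13 || event.keyCode === 32) {
        activate();
      }
    };

A native BUTTON element with a single click handler does all of this – focus, role and keyboard support – without any of the extra code.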

The new generation of developers we have right now is very excited about technology. We, the educators and explainers of “best practices”, are tainted by years and years of being let down by browsers. jQuery and other environments propagated “write less, achieve more” as the main road to success. Most of what we tell people is “add this and that to give things meaning”, and when they ask us “Why?” we have to come up with lies, because, for example, not a single browser cares about the outline algorithm right now.

The only real benefit of using web standards

Using web standards means first and foremost one thing: delivering a clean, professional job. You don’t write clean markup for the browser, and you don’t write it for the end users. You write it for the person who takes over the job from you. Much like you should use good grammar in a CV and not write it in crayon, you cannot expect to get respect from the people maintaining your code if you leave a mess that “works”.

And this is what we need to try to make new developers understand. It is about pride in delivering a clean job, not about using the newest technology and chasing the shiny. For ourselves we have to understand that the only ones who really care about our beloved standards and separation of concerns are us – because we think about maintainability rather than quick deployment and continuous iteration of code. The web is not code – the web is a medium where we use a mix of technologies fit for the purpose at hand to deliver a great experience for the end users.

Good governments allow for informed citizens – help prevent the social media lockdown attempt

Friday, August 12th, 2011

When bad things happen, people look for a scapegoat. When a government is threatened it tries to analyse what happened and find a quick solution that gives people hope and shows them that those “up there” are in control and can protect us from evil. In the last few days the United Kingdom burnt and got looted, and the people doing it were the next generation of citizens who should build the future of this country. Others stood by, too shell-shocked to realise that mere presence and a “what are you doing, stop this” could have prevented a lot of the damage.

A lot of further damage was prevented as people organised themselves, communities kept close together and stood their ground. People learnt first-hand from Twitter, Facebook, text messages, emails and phone calls what was happening, when to board up their shops and where rioters were. Of course there was a lot of speculation, but so there was in the news. The official BBC news channel during the riots had a few false positives and wrong locations and names of burning buildings. Communication errors happen – if the media is swift enough they can also be verified very fast and sorted out.

The police, the courts and the government have done incredible work to stop the mindless destruction and violence – and that is what it was. This was not an organised terrorist attack; it was pure pent-up anger and the frustration of not having the prospect of a future in a world where you get bombarded with messages to consume more and think less.

Now the government is starting to misunderstand the opportunity communication systems pose for doing good and keeping information flowing. If you keep people in the dark, they assume the worst. We are conditioned not to assume “all is fine”. Deep down in our subconscious we fear getting eaten by some vicious animal as soon as we go out in the dark, and that makes us scared, defensive and ineffective at doing good.

When David Cameron talks about a clampdown and “banning rioters from social media” I get scared. I am scared to pay tax and support a government that has not a single clue how the new media its citizens use every day works, while spending millions on funding “IT innovation programs” and “enticing UK entrepreneurs to show technical excellence”. Social media is branded a scapegoat and a big part of why the riots happened and why the police were less effective than they could have been. The government and old school media conjure a scary image of social media as a free flow of information that is not governed by anyone, is not controllable and is therefore a threat and a playground for evildoers to organise themselves much more efficiently than the law enforcement agencies can.

There are so many wrong arguments in this, it hurts me even to think about it. Regardless of the technical impossibility of “blocking people from social media” – which assumes that people can only have one, fixed identity there (guess what, I can buy a £10 pre-pay SIM card and I have a new identity) – the quick-fix proposal to shut down social media access completely misunderstands the concept of social media: it is a massive group of people!

Social media is an incredible evolution of media. Instead of waiting for news to be written, organised and honed to instill one reaction or another (and this is what a lot of our media has come to), it means a free flow of information. This is scary to some, but to me it means that it empowers everyone to chime in, make up their own mind and speak up when things are wrong. This has always been the case on the web. I spent a long part of my life on IRC as an admin, and I kicked, banned and told people off for spreading hate or soliciting illegal behaviour. Wikipedia editors spend a lot of time deleting articles that are wrong or politically incorrect. People contact admins to delete content, solve issues or investigate the behaviour of users. We do police social media by being part of it. That the government has not understood how to embrace a new medium and do the same hurts me. Your citizens message each other and talk to each other – when I see official use of social media all I see is a news feed being pushed out to yet another channel. Using social media efficiently would save the government money and time and make it appear approachable and human rather than an entity that is far removed and won’t understand us anyway.

My city burned, my country (yes, I have been here for 10 years and I do consider it my country) is shell-shocked that a few kids can take over, destroy people’s livelihoods and make us scared to go out in the dark. This is not because they organised themselves with Blackberries and Twitter – this is because the government has failed to listen and people have lost faith in and respect for those in charge.

During the riots I didn’t sleep much. I was on every social media channel, taking in information, comparing it with the mainstream media reporting and seeing what was going on. I was on my balcony and out talking to shop owners. I tip my hat to the people on the london-journos Twitter list (especially Paul Lewis, who followed the riots up close and gave information from the streets), The West Londoner, Birminghamriots and ##londonriots on freenode (with David Singleton doing a great job keeping rumours in check and keeping people stating information rather than their political views).

Social media is now being used for good – much more swiftly than any other channel. I was amazed by the swift organisation of #riotcleanup, and the picture of brooms in the air still makes me choke. As does the courage of the lady telling rioters off for destroying their own neighbourhoods. I love how it was people on social media who proposed sending money and help to Ashraf Haziq and others before newspapers thought of it.

Even the police are using social media right now to find the rioters – Flickr groups ask you to identify them and they name the rioters on Twitter.

There are so many issues to fix right now. Calling the riots a wake-up call would be the understatement of the year. Maybe the rioters were organising themselves, maybe there is a big, evil orchestration behind the madness. It does not matter, because if we now concentrate on silencing people instead of listening and understanding their needs, then we fail even more as a society and as a sophisticated, first world country. Countries that have to rule by fear and by silencing their citizens are the ones we accused of doing wrong in the world courts and in some cases invaded to “liberate”. We should now liberate ourselves.

For years our media has painted a picture of young people as no-good hoodies who are more likely to stab you than to talk to you. Cheap, emotional headlines cast them out as leeches on the system, unwilling to work and be good citizens. This made people scared of confronting them or even communicating with them. This has to stop. We faced a total breakdown of communication. These kids felt outside the law, outside of society and invincible, as they have nothing to fear and in some cases nothing to lose. We need to understand them and fight the causes that drove them into this thinking.

For us, the privileged people on the web who have the luxury of debating for hours which technology is best to rotate a logo, I think it is time to show this government why a free flow of information is a damn good thing.

  • Our job right now in the UK is to show with hard evidence that communication systems like social media helped to prevent damage
  • Our job is to show that social media is more than celebrities, advertising and organising illegal activity. It is extremely effective as a means to give out information and get feedback – if you use it right
  • Our job is to convince the government that informed citizens are citizens who can act and prevent bad things from happening

Let’s use the power we have at our hands to help the police and government do their job. Let’s stop soiling amazing opportunities like Facebook, Twitter and Google+ with mindless trolling, hatred and ignorance. Let’s be social on social media, rather than opinionated. Let’s collect positive things about social media and show them to traditional media and the official channels of the government. Let’s stop telling people how much money Zuckerberg made without going to uni and how many millions are spent buying products that make it easier on the web to buy other products, and instead share knowledge and show that the web can be an amazing opportunity for learning and sharing.

I love the internet; I love being able to get information and having the freedom to make up my own mind about it. I love getting raw data which can be turned into facts after verification. I don’t want to wait for information until the news is out or the paper is printed. This is 2011. Media has to move on – if governments and media professionals do not take part, it will move on without them anyway. Ever since the first pamphlet was printed, people have realised the power of distribution. And we have an immediate, world-wide distribution channel right at our fingertips. This can be used for good or it can be used for propaganda and organising crime. I believe in people, and I think that if we stop concentrating on consumption and instead focus on sharing information, educating people and understanding the background before passing judgement, we are on the way to a better society.