Christian Heilmann

Posts Tagged ‘outreach’

UK Government says no to upgrading IE6 – who is to blame?

Thursday, August 5th, 2010

Back in June, Dan Frydman of Inigo Media Ltd submitted a petition to the UK government encouraging government departments to upgrade from IE6; 6,223 people signed it.

A short time ago we got an answer from Her Majesty’s government, which was – of course – a no.

Government says no

Disregarding the horrible PR mumbo-jumbo re-assuring us that the government takes security seriously (when they are not leaving personal data files on trains), it actually gets interesting:

Complex software will always have vulnerabilities and motivated adversaries will always work to discover and take advantage of them. There is no evidence that upgrading away from the latest fully patched versions of Internet Explorer to other browsers will make users more secure. Regular software patching and updating will help defend against the latest threats.

This of course is a wonderful example of stating the bleeding obvious, but it is interesting that there is supposedly no evidence that moving away from a fully patched IE6 makes users more secure. I wonder why Microsoft then keeps advertising that IE8 is more secure? True, IE6 can get all the patches against massive attacks, but phishing warnings and other interface improvements in the latest browsers never get added to it. So we protect users under the hood but still leave the barn door wide open for social engineering attacks. A malware warning like the ones Firefox, Chrome or more modern versions of IE have would help there (unless it gets removed when it affects advertising). If there really is no proof, this would be a good opportunity for Apple, Google and Mozilla to collect some numbers and publish them – not on blogs or other “in crowd” media, but in the magazines read by the people who make IT decisions for governments and large corporates.

Security patching not an issue?

The government statement then continues to stress the great relationship they have with Redmond for security related matters:

The Government continues to work with Microsoft and other internet browser suppliers to understand the security of the products used by HMG, including Internet Explorer and we welcome the work that Microsoft are continuing do on delivering security solutions which are deployed as quickly as possible to all Internet Explorer users.

There is a distinct lack of information here – about who the other suppliers are and what the measures entail. My guess is that Google is starting to approach governments with Chrome and its online office suite. Let’s note one thing down, though: apparently there is no problem deploying fixes very quickly to all IE users – we will come back to that.

No centralised security mandate?

Each Department is responsible for managing the risks to its IT systems based on Government Information Assurance policy and technical advice from CESG, the National Technical Authority for Information Assurance. Part of this advice is that regular software patching and updating will help defend against the latest threats. It is for individual departments to make the decision on how best to manage the risk based on this clear guidance.

So, wait – beforehand we were told that continuous patching is easy because Microsoft helps a lot, and now we learn that it is up to each department whether it actually follows that advice. It is not a mandate, only guidance. This means there are probably terribly outdated IE6 installations in use, as changing the IT infrastructure is quite low on the list of priorities for a lot of departments when there are people in the waiting rooms complaining. And if upgrading and patching is not centrally mandated, there is no chance we’ll ever have a secure and homogeneous IT environment in government bodies.

A departmental decision?

Public sector organisations are free to identify software that supports their business needs as long as it adheres to appropriate standards. Also, the cost-effectiveness of system upgrade depends on the circumstances of the individual department’s requirements.

Which means that a department could switch to other software – especially when it could save money? The catch here is “appropriate standards”, which probably means a EULA. Or what, exactly? The other big “oh well, we really can’t do that, can we” here is the cost-effectiveness of a system upgrade. In many Microsoft environments this probably means that the hardware in use is not up to scratch to support anything other than Windows 2000 or XP.

Upgrading is an issue?

It is not straightforward for HMG departments to upgrade IE versions on their systems. Upgrading these systems to IE8 can be a very large operation, taking weeks to test and roll out to all users.

How so? Earlier we heard that patching IE is not an issue, so how is replacing it one? Unless of course we own up and admit that it is the infrastructure and the hardware that were defined and set in stone around the millennium, when everyone was scared about Y2K and believed that the IE6/XP combination would never have to be upgraded.

No time for testing?

The other issue seems to be that testing their systems is hard:

To test all the web applications currently used by HMG departments can take months at significant potential cost to the taxpayer. It is therefore more cost effective in many cases to continue to use IE6 and rely on other measures, such as firewalls and malware scanning software, to further protect public sector internet users.

This tells me that there are systems that were built in a short-sighted manner a long time ago – for IE6 and Windows 2000, when they were the new black and every consultant got his Microsoft certification training and all of a sudden was a real expert who could predict the next 10 years. So instead of fixing and replacing the rotten core of the system, we add new doors with shiny hinges, put a security guard in front, and call it fine. This is like hiring a bouncer for a club where people fight on the dance floor.

The fascinating part about the firewall and malware scanning software is that it makes the lives of end users even more of a hell than surfing with IE6 already is. One of my favourite things since I switched to Mac/Linux is that my processor can now deal with the stuff I want to do rather than analysing my traffic and incoming requests, and that I can work without being interrupted by a “scanning all your files, come back in 2 hours” message.

Who is to blame?

The answer from the government was not only predictable but (in a very short-sighted and limited view) also understandable. Nobody wants to own up to having been cheated. And consultants telling people that a network will never have to change do cheat people – no software is 100% future-proof, and you cannot run an office on 10-year-old hardware without upgrading. The speed of innovation and the wealth of information we encounter these days cannot be easily consumed on systems that were designed when putting a 100KB JPG on the homepage was a huge decision that meant losing a third of your visitors.

Funnily enough, the easiest and favourite target of web geeks in this issue – Microsoft – is not to blame. They offer a simple way to make newer versions of IE render software built for older ones: a meta tag or – much more appropriately – a header sent by the server (IIS in this case). So the argument that software built for IE6 has to be tested by every department on IE8 is moot, as Microsoft solved that issue for us. That the government probably didn’t even know about that option is where it gets interesting:
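As a reference sketch of the mechanism meant here: the switch is the X-UA-Compatible value, which IE8 understands either per page or as an HTTP response header (the example below uses the EmulateIE7 mode; there is no dedicated IE6 mode, but IE=5 triggers quirks-mode rendering):

```html
<!-- Per page: tell IE8 to render this document with the older IE7 engine -->
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7">
```

The same name/value pair can be configured once as a custom response header in IIS, which covers every page on the server without anyone touching the markup.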

Reactions like this to an obvious upgrade are our fault

To a degree I have to say, after all my years on the web as a developer, writer, blogger and editor, that we are the first to blame for the lack of movement in large corporations and government.

When luminaries of the web design and development world only showcase things made up to demonstrate a certain new technique instead of real-world examples, it is not surprising that developers working for government agencies don’t get sent to conferences or given their books.

When famous designers say that working for a large company or government is “boring work” and that “there is no point for a creative person to deal with politics in companies”, then I really wonder if we have become self-sustaining and complacent. We have moved on from shaking the foundations of web development and making people understand the massive opportunity that the web as a medium and open web technologies as tools represent, to inventing for ourselves rather than for the end user. Who will have more users, and much more frustrated ones when something doesn’t work: the readers of a famous design blog, or the people who have to pay their council tax online?

When industrial-grade research and tools from companies like Yahoo, Google and Microsoft are never read – or, even worse, reproduced in a shinier but less consistent manner by one-man-army companies and considered better (until the one-man army gets bored a month later and never updates them) – then it is no wonder that other companies don’t believe in these solutions either. It also means that these companies – who really formed and run the internet as we know it – will stop sharing their tricks or spending time and money writing them down in a manner that makes sense to people not on the inside.

Shifting our focus

The only way I can see to prevent responses like the one from the UK government in the future is by shifting our focus:

  • Instead of design prototypes and made-up web sites to show off a certain technique, let’s demand real production case studies and their effects (I remember one @media where the redesign of Blogger was shown, along with how much traffic switching to CSS saved the company – more of that, please).
  • Ask Microsoft to invite experts, host videos and tutorials from experts with modern solutions, and distribute them across their network of clients
  • Make a massive comparison of government web sites and praise what some have done well (nothing works better than competitiveness)
  • Collect success stories of switching to open source solutions and how it saved money and time
  • Take a horrible IE6-only solution and show what it could look and work like if HTML5 and CSS3 were supported
  • Stop plotting shiny pixels on canvas elements and calling it a cool HTML5 solution; instead build a complex online form or spreadsheet system using all the goodies of HTML5
  • Stop applauding people for redesigns of their blogs and instead shift the limelight onto people who made a difference in an environment like large financial systems or local government

I’ve had these and other points in 1:1 discussions for years now, and I have yet to see movement in these areas. Right now we happily think we innovate and push the envelope, when in reality we are making each other go “Oooohhhh” while a large chunk of the audience that could benefit from our knowledge is stuck with really poor experiences on the web. I’d like to pay my council tax in my mobile phone’s browser and get notified when it is due – right now there is no way to do that.

Reaching out to developers – a brown bag talk at Sky in London

Monday, June 15th, 2009

Last week I visited Sky in London for a brown bag session during lunch. Their head of innovation, Paul Kane, had asked me to come around and talk about API design, the history of YQL and, in general, tips and tricks for getting innovation and developer outreach working in companies and media. One part of the presentation was the following talk.

Here are the detailed notes:

Reaching Out

Strategies and ideas

  • Third party developers make your product better
  • Reaching them can be tough
  • Here are some ideas and strategies that worked in the past

Third party developers are an amazing force for making your products better and more relevant to the users you want to reach. Reaching them can be a bit of a headache, so today I am going to talk about some of the ideas and strategies that have worked well in the past, and some I’d love to see followed more consistently.

The importance of being open

  • Opening your data and services multiplies your reach
  • Developers are happy to use your products and test them at the same time
  • You cannot test for all environments; other people can

The first thing to be aware of is that being open to developers multiplies your reach and makes it much easier to create a solid product. With each developer using your APIs or data feeds you reach audiences you don’t have access to and have another developer looking over your product.

Developers are very happy to point out issues and in fact do report them – users are not that vocal. This means that a lot of companies are happily polluting the web with terribly unusable products without being aware of it. You can’t anticipate all the environments and use cases to test in – if you open your data to the world, the developers using it can do that for you.

Building for web use

  • Be platform independent
  • Be RESTful
  • Release all documentation and code examples in a portable format

The first thing to make sure of is that, to achieve ultimate reach, you concentrate on building systems that are meant to be used on the web. This means first of all that your web service should be platform independent.

REST does this for you. If you build a system that returns data over HTTP, implementers can use a browser, cURL on the command line or any framework they want to get at your information.

Your documentation, information about the API and code examples should be available in formats that are as easy as possible to consume, portable and platform independent. HTML, PDF and zipped code do all this – you cannot expect people to download and install an SDK like a piece of software; we are busy developers and hate polluting our machines with unnecessary installs.

Thinking of endpoints

  • Good URL structures
  • Easy to understand method names and parameters
  • Versioning of APIs

The success of a REST-based API stands and falls with the quality of its endpoints. Make sure that your URLs are human-readable and make sense. A good web service should be navigable from the browser’s address bar, and I shouldn’t have to read the docs to understand where I want to go.

Keep your method names and parameters short and to the point – nobody likes long URLs as there is no tab completion in the browser bar (yet).

One thing that is terribly important is to version your API in the URL. If you find a bug or need to rename something in a newer version as new dependencies come up, you don’t want to break implementations based on an older version. A v2, v3 or v4 in the URL is not long but prevents that from happening.
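Put together, a hypothetical versioned endpoint (the service name and parameters here are invented purely for illustration) might look like this:

```
https://api.example.com/v1/photos/search?query=sunset&count=10
https://api.example.com/v2/photos/search?query=sunset&count=10
```

A reader can tell at a glance what the call does, and a v2 with renamed parameters can ship without breaking anyone still calling v1.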

Thinking of output

  • Provide data that people need – a lot of data is good but don’t add information that only makes sense internally.
  • Provide data formats that people can use: XML, RSS, JSON
  • Allow for callback parameters for json to use as JSON-P
  • Maybe allow for object transportation / ID for JSON-P calls
  • Allow people to turn off diagnostics and filter down to specific needs

These are very important points. As a general rule it is a good idea to provide as much data as possible in your APIs. However, if this data is of no use to a third-party developer, or there is simply no documentation for it, then it shouldn’t be in there, as it only distracts from the necessary information. The Yahoo music API, for example, has a catzilla and a hotzilla ID in its results – I have no clue what those are.

The data format question is pretty easy, XML for a full-fat result, RSS to really make it easy for people to use the data and JSON for people like me who don’t like to wade through XML with namespaces.

JSON-P is JSON wrapped in a callback parameter, which allows the end user to use the data immediately in JavaScript without having to use a proxy to reach your content. The callback ID is an idea I’ve mentioned before: the problem with JSON-P and generated script nodes is that you can never be sure in which order several calls return. Therefore it is a good idea to allow an ID to be sent and returned with the call, so I can match data to requests in the callback method. This could be as simple as repeating the request terms in the result set, but a real transaction ID would make it even handier.
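As a sketch of the provider side (the function name, the payload shape and the id parameter are all made up for illustration), the server simply wraps the JSON payload in the caller-supplied callback and echoes the transaction ID back:

```javascript
// Build a JSON-P response body: the callback name and transaction ID
// come from the request's query string (hypothetical parameter names).
function wrapJsonp(payload, callback, transactionId) {
  // Only allow safe callback names to avoid script injection.
  if (!/^[\w.]+$/.test(callback)) {
    throw new Error('invalid callback name');
  }
  var body = { id: transactionId, result: payload };
  return callback + '(' + JSON.stringify(body) + ')';
}

// On the client, a generated script node triggers the callback:
// <script src="https://api.example.com/v1/photos?format=json&callback=handle&id=42"></script>
// function handle(data) { /* data.id tells you which request this answers */ }
```

Because the ID travels with the response, the callback can match each answer to its request no matter in which order the script nodes come back.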

The diagnostics part of an API response can help me find out what went wrong or what was going on, but it is good when you can turn it off. YQL, for example, offers that option – running the same query with and without the diagnostics=false parameter shows the difference. In general it is a good idea to allow developers to filter down the amount of data being sent back (e.g. with an amount parameter), as there are environments where really every byte counts.

Providing easy access

  • For data APIs that are read only, don’t bother with authentication – just use a developer key
  • Have sandboxes where developers can play without having to sign up

Nothing is more annoying than having to authenticate before you even get to see the kind of content you can access with an API. Therefore it is a good idea to keep the pain of authentication as low as possible.

A great idea is to cluster access for different kinds of developers or to have a no-frills access version. Vimeo is doing a great job offering different APIs, and so is Yahoo Maps. Flickr provides an API explorer, and the Guardian API also has a console for running and filtering down requests. The Rolls-Royce, of course, is the YQL console.

Documentation

  • Provide proper documentation that is quick to read and easy to navigate
  • Offer cheatsheets and quick introductions
  • Have a documentation team – developers are not writers and are too close to the product to actually write readable docs

None of these need much more explanation. Simply be aware that documentation is something you shouldn’t take lightly: developers go to the docs when they get stuck, and if all they find there is more frustration and hard-to-understand information, you are not likely to be a big success.

Personally, I don’t think you should bother localising documentation unless you need to provide different content to different markets and have to explain to developers why some content is not available to them. Let bloggers do the advertising in the local markets for you.

Demo Code

  • Provide demo code and SDKs (if you really must)
  • Have excellent demo code – not ‘look, just two lines of code’
  • Offer different language demos

Demo code is a very powerful tool and shows the quality of your development team. Of course every developer can wade through quickly put together code and get something done, but your code should entice developers to use your systems and create excellent solutions.

Your demo code should not have any security holes, accessibility omissions or simply bad code practices. This code is your advertisement to other developers – it should be easy to copy and paste it and hit the ground running but you should not leave a trail of security concerns and global variables behind.

If you provide copy-and-paste examples, make sure they work outside your environment – all links to dependencies should be absolute, not relative (starting with http at best), so that you don’t get complaints about code not working. I found it best to list a whole example and then explain chunk by chunk what is going on.

Consistency is key here – stick to an agreed coding standard and you won’t confuse implementers. Beyond that, your code should be as easy to customise as possible – use config objects to allow for that.
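A minimal sketch of that config-object pattern (the gallery widget and its option names are invented for illustration): defaults live in the library, and the caller’s config object overrides only the keys it supplies:

```javascript
// Merge a caller-supplied config object over library defaults.
// Unknown keys in the defaults object drive the merge, so callers
// can pass only what they want to change (or nothing at all).
function createGallery(config) {
  var defaults = { thumbsPerPage: 12, showCaptions: true, theme: 'light' };
  var settings = {};
  for (var key in defaults) {
    if (defaults.hasOwnProperty(key)) {
      settings[key] = (config && config.hasOwnProperty(key)) ?
        config[key] : defaults[key];
    }
  }
  return settings;
}
```

Calling createGallery({ theme: 'dark' }) keeps every other default and only swaps the theme, and calling it with no argument at all still yields a working configuration.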

Be responsive

  • Have a mailing list / forum that is monitored
  • Keep your eyes out for things people do with your code (blogsearch, twitter)

Developers will do a lot for you, and nothing gives them more satisfaction than getting their problems answered by an expert. Have a mailing list or forum staffed by your expert developers, or at least by people who can ask the right person. Monitor the buzz and chime in to give praise or set things straight. There is nothing cooler than seeing your tweet about an implementation you’ve built with a certain API get retweeted by an official account or by experts connected with that API.

On the other hand there is nothing more deadly than a real implementation problem remaining unanswered for days. Be visible, be there for people and you’ll have happy users.

Invite people to play

  • Invite selected people for a private beta – this is amazingly powerful, as people’s integrity will work for you – or you get great feedback on what to change
  • Give people a chance to build “official mashups” before release to show on the site and in the press conference.
  • Maybe launch with a developer day

This is pretty important. Releasing an API is one thing; having people whom the whole audience trusts advocate for you has a much bigger impact. Open betas up to some people beforehand, have a handful of API keys ready for people to play with, and have keys that people can give out on Twitter or their blogs – all of this will have quite an impact.

The Guardian did a great job with this when they released their APIs. It also works for product launches, as Spotify has shown. Having people build official mashups with official blog coverage also allows them to write “making of” articles and blog posts showing tips and tricks you might not have thought of.

You can take this idea further – ship some undocumented but cool new features every few weeks or so (or previews of upcoming features) and leak them to Twitter or developer lists. A lot of developers are very happy to be the first to show the world what your API is working on, and will hack around the lack of support by writing their own docs.

Collaborate with other API providers

  • Reach out to other API providers
  • Take part in groups dealing with the same topic
  • Create an open YQL table, add your data to gnip

Last but not least, remember that you are not alone. We all release APIs and can learn from each other’s mistakes and gains. This presentation is living proof of that, and I am not alone in reaching out to companies who want to join the ranks of those who build and maintain the web of data. An open YQL table is written pretty quickly and gives you a chance to let developers play with your data without you having to worry about limitations and caching for now – the YQL access restrictions and servers do that for you.
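As a sketch of how small such a table is (the table metadata, endpoint URL and key name below are entirely made up; the real schema is defined by YQL’s table.xsd), an open data table is just an XML file binding a SELECT to your REST endpoint:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical open data table: author, URL and key are illustrative only -->
<table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
  <meta>
    <author>Your Company</author>
    <description>Look up an item by ID (example sketch)</description>
  </meta>
  <bindings>
    <select itemPath="" produces="XML">
      <urls>
        <url>http://example.com/api/items/{id}.xml</url>
      </urls>
      <inputs>
        <key id="id" type="xs:string" paramType="path" required="true"/>
      </inputs>
    </select>
  </bindings>
</table>
```

Once such a table is published, developers can run something like select * from yourcompany.items where id = "abc" in the YQL console without you writing any infrastructure for throttling or caching.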

Translating or localising documentation?

Friday, December 19th, 2008

We just had an interesting meeting here discussing plans for how to provide translations of our documentation in different languages. I am a big fan of documentation and have given several presentations on the why and how of good docs. In essence, the “good code explains itself” claim is a big fat arrogant lie. You should never expect the people using your code to be on the same level as you – that would defeat the purpose of writing the code in the first place.

The fact is that far too much software has no proper documentation at all, let alone translations into different languages. This is because there are a few issues with translating documentation:

  • Price – translation is expensive, which is why companies started crowdsourcing it.
  • Quality – crowdsourcing means you need to trust your community – a lot (a look at Facebook’s translation app shows plenty of translation-agency spam in the comments – not a good start)
  • Updating issues – in essence, you are trying to translate a moving target. With books, courses and large enterprise level software the documentation is fixed for a certain amount of time. With smaller pieces of software and APIs the docs change very frequently and you need to find a way to alert the translators about the changes. If you crowdsourced your translation this is a request on top of another request!
  • Keeping sync – different translations take different amounts of time. Translating English to German is pretty straightforward, whereas English to Hindi or Mandarin is a tad trickier. This means that translations could be half finished when the main docs change – which is terribly frustrating to those who spent time translating now-outdated material.
  • Relevancy – your documentation normally spans your complete offer – however in different markets some parts of your product may not be available or make any sense. Translating these parts would be pointless, but it also means that tracking the overall progress of translation becomes quite daunting.

All of these are things to consider and not that easy to get around. Frankly, it is very easy to waste a lot of time, effort and money on translation. It makes me wonder if there really is a need for translation, or if it doesn’t make much more sense to invest in a localisation framework instead. What you actually need to make your English product available to people in other locations is:

  • Local experts – this could be someone working for you or a group of volunteers that see the value of your product for their markets.
  • Easy access collaboration tools – this could be a wiki, a mailing list or a forum. Anything that makes it easy for people to contribute in their language and with their view of the world. If these tools make collaboration easy without having to jump through hoops people will help you make your product more understandable in their region.
  • A local point of reference – this could be a blog or small presence of your company in the local language or a collaboration with an already established local partner or group.
  • Moles – most international developer communities have their favourite hangouts already. Instead of building one that might not be used (not every country goes “wahey” when a US company offers them a new channel), it makes sense to have people you trust who speak the language be on these channels, listen to what people need, and offer the right advice pointing to your documentation.

What do you think? Do you have examples of where this is done well, and of what worked and didn’t? Here are two I found: