Getting your bearings with JavaScript on iOS

This post is an extension of my previous post on accessing the gyroscope in your iOS device from JavaScript for a web application. So be sure to read that one first or you'll probably get squat.

New in iOS 5 is the ability to access the compass using JavaScript. This is done in precisely the same way as all the other «orientation data». The «deviceorientation» event on the window object will (on iPhone 4 or newer running iOS 5 or newer) contain two properties related to the compass, these are:

  • «webkitCompassHeading» [0-360°] - Where 0 is magnetic north
  • «webkitCompassAccuracy» - How many degrees the heading is off (-1 if error)
So for the demo I'm using the same arrow as in the gyroscope post, except now with gorgeous «north» and «south» indicators added. The goal of the demo is to make the arrow point towards north. When you're done playing with it, take a look at the source and marvel at its simplicity.
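As a rough sketch (not the demo's exact code, and the «arrow» element here is made up), reading the compass values looks something like this:

// Minimal sketch: rotate a hypothetical «arrow» element so it keeps pointing north
var arrow = document.getElementById('arrow');

window.addEventListener('deviceorientation', function(e){
 if(typeof e.webkitCompassHeading !== 'undefined'){
  var heading  = e.webkitCompassHeading;  // 0-360, where 0 is magnetic north
  var accuracy = e.webkitCompassAccuracy; // degrees the heading may be off, -1 on error
  arrow.style.webkitTransform = 'rotateZ(' + (-heading) + 'deg)';
 }
}, false);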


The HTML 5 History API

The HTML 5 History API will allow us to update the browser history and keep the all-important functionality of the back and forward browser buttons when the site is loading content dynamically using XMLHttpRequest.

You want to use XMLHttpRequests (AJAX) to avoid unnecessary page loads, thereby reducing network traffic and speeding up your web-app; you probably already do this in your web-apps. However, before the advent of the HTML 5 History API there was a problem with this otherwise smart approach to building web-apps. When a page updated its content, this was not reflected in the browser history, and you had to do a lot of work to make the back and forward buttons work as expected. Thanks to the HTML 5 History API this is now much simpler. In this article I'll explain an approach to "mimicking" a common server-side approach to "dynamic" page loading using parameters in the URL, while retaining the functionality of the browser history. Take a look at the demo.

What this typical AJAX demo does is pretty basic:

  • At startup it loads a page depending upon the presence of a parameter.
  • When you click a link (at the top), it fetches and swaps the content of the "content" holder of the main page.
Pushed history items
However, after having clicked the links you'll also notice that the back and forward buttons are working, without actually navigating you away from the main page, like they're integrated with the app. Also, items are "pushed" onto the browser history, making it possible to navigate directly to a previously loaded page. It's just updating the content of the content box when you click a link, not the entire page. This is what happens:
  • When a link is clicked the new content is fetched using a XMLHttpRequest.
  • The current content is replaced and the browser history is updated.
All of these operations are accomplished using only three API calls (a rough sketch follows the list):
  • The «onpopstate» event allows us to listen to events fired when the history is being navigated, like when using the back button. (see line 57'ish in the demo code for how this is handled).
  • The «replaceState» method allows us to replace the current state in the browser history. This is used to create a state on, for example, the initial page load.
  • The «pushState» method allows us to add things to the history stack (it's a stack, hence push and pop).
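Put together, the pattern looks roughly like this (the element ID, URLs and the shape of the state object are made up for illustration; the demo does it slightly differently):

// Hypothetical content loader: fetch a fragment and swap it into the "content" holder
function loadContent(url){
 var xhr = new XMLHttpRequest();
 xhr.onload = function(){
  document.getElementById('content').innerHTML = xhr.responseText;
 };
 xhr.open('GET', url, true);
 xhr.send();
}

// Give the initial page a state object without adding a new history entry
history.replaceState({content: 'pages/start.html'}, '', window.location.href);

// Called when one of the links at the top is clicked
function navigateTo(url){
 loadContent(url);
 history.pushState({content: url}, '', '?page=' + encodeURIComponent(url));
}

// Back/forward: reload the content belonging to the state we popped back to
window.onpopstate = function(e){
 if(e.state){ loadContent(e.state.content); }
};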
Now, this does not work in all browsers (IE..duh!) and as of writing this post still has «Working Draft» status. However, support is pretty good and when Microsoft finally adds support in IE 10, all major browsers will support the History API.

Take a look at the demo and the source code (which is heavily commented), and feel free to use the source in your own projects.

Bye!

Creating a simple WYSIWYG editor with HTML and JavaScript


An updated and better version of this post with a new and better demo can be found here: http://www.kinderas.com/technology/2014/2/18/a-simple-rich-text-editor

Wouldn't it be nice to provide the users of your website with a WYSIWYG rich text editor instead of boring forms and a bunch of textfields? Enter «contenteditable». This is a simple property which can be set on any DOM element, like divs, sections and so on. What it does is make the element editable for the user, directly inline with all your predefined styles and content.
contenteditable="true"
To get set up, set the «contenteditable» property of your selected element to "true". In my demo I'm using a «section» element, but as noted above, you can use any DOM element. And that is it actually. You now have a working rich text editor running on your web-page. You can use keyboard shortcuts like «cmd+b» to make selected text bold and so on. But wait, there's more.
document.execCommand('bold',false,null);
This simple JavaScript command will allow you to programmatically make text bold, and that's not all: there are dozens of built-in commands which can be executed on the selected text. For an overview, take a look at the WHATWG specification.

When combining «contenteditable» with the HTML 5 storage APIs you can pretty easily make a powerful rich text editor for your users. Take a look at my simple demo to see how you can accomplish this. As a side note, iOS 5 brings support for «contenteditable» on mobile platforms as well.
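As a rough illustration of that combination (not the demo's code; the element ID and storage key are made up), persisting the editor's content with «localStorage» could look like this:

var editor = document.getElementById('editor'); // a contenteditable «section», hypothetical ID
editor.setAttribute('contenteditable', 'true');

// Restore any previously saved content
var saved = localStorage.getItem('editorContent');
if(saved){
 editor.innerHTML = saved;
}

// Save the rich text (as HTML) whenever the user types something
editor.addEventListener('keyup', function(){
 localStorage.setItem('editorContent', editor.innerHTML);
}, false);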

Accessing the gyroscope and accelerometer using JavaScript

With the advent of "mobile" devices such as the iPad, iPhone and all the gazillion Android devices, an increasing demand arises for browser applications to sport the features and functionality of native applications. In this article I'll have a quick look at one of these features, namely «device orientation». First of all, take a look at the demo. In this demo I'm using the "deviceorientation" event listener of the «window» object to listen for orientation events. Orientation events are, in short, events triggered when your device is twisting and turning, sort of. On my iPhone, these events originate in the accelerometer and the gyroscope. For your phone this might be different, but it doesn't matter, as the APIs for accessing the data are pretty much the same and part of the «DeviceOrientation Event Specification». Note that this demo is only tested on the iPhone 4, so if you have something else it might not work. Let's get to it then.
window.addEventListener('deviceorientation', orientationHandler, false);

First of all we need to add an event listener for the «device orientation» event. This event is fired by the «window» object as mentioned above. We then simply set up our event handler.

function orientationHandler(e)
{
 image.style.webkitTransform = "perspective(500) rotateZ(" + e.alpha + "deg) rotateX(" + e.beta + "deg) rotateY(" + e.gamma + "deg)";
}

There are three properties of the «orientation event» we want to get at.

  • «alpha» - This is rotation about the Z axis. In other words left and right rotation.
  • «beta» - This is rotation about the X axis. This means how much you are tilting the device towards you.
  • «gamma» - This is the rotation about the Y axis, or the angle of the device screen, if you will.
There are also two other properties present in iOS 5 which give you access to the compass and its accuracy, but those are not used in this example.

The «perspective(500)» transform simply defines how "far away" from the object you are when it's rotated, or the depth if you like. Since the properties of the «device orientation» event correlate to the values handled by the CSS transform properties, no calculation is needed. Try it out! (Should work on iPhone 4 and iPad 2)


HTML 5 Web Workers and image processing

One of the more exciting things we are getting via the new HTML specification is Web Workers. If you have no idea what Web Workers (let's call them Workers from now on) are, you can basically think of them as "sandboxed" threads. By threads I do mean threads as in «multithreading», an architectural feature of modern CPUs and operating systems. Workers allow us to run tasks separately from the main thread where all the drawing, animation and DOM manipulation is going on. This allows us to do calculations without leaving the user with the impression that the browser is "hanging". If you have worked with other "thread wrapping" technologies like Grand Central Dispatch (GCD) you will get Workers without fuss, however Workers do have some limitations that other similar technologies don't have.

Worker limitations and usage
First of all, Workers are separate code blocks of JavaScript not found in the spawning script, kinda.. In most cases a Worker is a separate JavaScript file which you DO NOT refer to in your script tags in the HTML document. However, Workers can also be defined inline through the BlobBuilder interface. In this post I will use only external Workers. Also, there are two kinds of Workers, «Shared» and «Dedicated». I will only use «Dedicated Workers» in this post.

Workers cannot access the following:
  • The DOM or the DOM APIs
  • The window object
  • The document and the parent object. 
A worker can access:
  • The «navigator» object
  • «XMLHttpRequest»
  • A read-only version of the «location» object
  • The Application cache
  • setTimeout(), clearTimeout() and setInterval(), clearInterval()
A Worker can also spawn other Workers and import external scripts via «importScripts()».
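For example (the file names here are purely hypothetical):

// Inside a Worker: pull in helper code and spawn a sub-worker
importScripts('helpers.js', 'filters.js');
var subWorker = new Worker('DWInvert.js');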

The demo (aka. the fun part)
For the demo this time I have created a simple web-app which loads and displays 3 pictures n times. You can randomize the position of the images by clicking the randomize button. You can pick the number of images displayed as well as their size. The two image manipulation buttons will grayscale and invert the images, if you check the checkbox the images will perform the randomize animation concurrently with the image processing. It's this latter part which calls for the usage of Workers.

The demo will show you how to spawn a separate Worker for each of the images, thereby allowing for concurrent processing and animation without much degradation in performance. I won't explicitly go through all the layout and animation code here, rather focusing on the usage of the Workers.

var worker = new Worker('DWGrayscale.js');
worker.addEventListener('message', function(e){
   ctxAr[e.data.index].context.putImageData(e.data.imagedata,0,0);
});
worker.postMessage({index: i, imagedata: imageData}); // sketch: the message mirrors what the handler above reads, an index plus the image data

On lines 126 - 133, within the grayscale function, we create a Worker for each of the images, then we add a listener for the «message» event, allowing the Worker to communicate back to the main thread. We finish off by posting a message to the Worker using «postMessage», passing the image data. These messages are the only way to communicate with a Worker. This means that we also need to implement this interface in the Worker itself.
addEventListener('message', function(e){
 var imageData = e.data.imagedata;
 //....process image data....
 postMessage({index: e.data.index, imagedata: imageData}); // send the processed pixels (and the index) back to the main thread
});
In the Worker «DWGrayscale.js» we grab the passed image data from the «data» property of the event. We then go on to process the image data; when done, we post a message back to the main thread passing the image data, which in turn can be used to update the canvas element on screen.
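For illustration, the per-pixel work inside such a grayscale Worker could look roughly like this (the simple channel-averaging formula is my own, not necessarily what the demo uses):

// DWGrayscale.js (sketch): average each pixel's RGB channels
addEventListener('message', function(e){
 var imageData = e.data.imagedata;
 var px = imageData.data; // flat array: r,g,b,a, r,g,b,a, ...
 for(var i = 0; i < px.length; i += 4){
  var gray = (px[i] + px[i + 1] + px[i + 2]) / 3;
  px[i] = px[i + 1] = px[i + 2] = gray; // leave the alpha channel untouched
 }
 postMessage({index: e.data.index, imagedata: imageData});
}, false);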

Why not pass the Canvas element or at least the Canvas context?
Workers do not have access to the DOM, because it would not be thread safe. Workers are limited to passing objects which can be serialized into JSON, which does not support cyclic objects. The Canvas element is a DOM element and the context is a cyclic object, hence we need to get at the actual pixel data, which we can pass back and forth to the Worker.

Take a look at the demo and the source code to learn in more detail how the demo was created.
Note also that you should probably run the demo in Safari, as it has no limitation on the number of Workers running concurrently like Chrome has. Also, the animations use the WebKit prefix and will not work in non-WebKit browsers. So, please don't use this code in a production environment!


Quicktip: Skewing an image with JavaScript and CSS3

Have you ever found yourself in a situation where you needed to skew a rabbit? If you haven't, you'll probably enter this place soon, so here I am to prepare you. The task is to skew an image on an HTML page using CSS and JavaScript. This could be attached to, for instance, a button click event. It's a one-liner..
document.getElementById('myImage').style.transform = "skew(-15deg)"
The «skew» function of the «transform» CSS property takes degrees of skew as input. Negative numbers will skew the image top right, bottom left, and positive numbers..the other way around.
Note that in order for this to actually work, you will need to use vendor prefixes at the time of writing this post. For Safari (desktop and iOS) and Chrome (and Android) you would use «webkitTransform», for Firefox «MozTransform» and so on.
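A small feature-detection sketch (the element ID is made up):

var el = document.getElementById('myImage');
if(el.style.webkitTransform !== undefined){
 el.style.webkitTransform = 'skew(-15deg)';
}else if(el.style.MozTransform !== undefined){
 el.style.MozTransform = 'skew(-15deg)';
}else{
 el.style.transform = 'skew(-15deg)';
}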

The Peacekeeper

Peacekeeper on an iMac 3.0 GHz
I decided to try out Futuremark's spanking "new" Peacekeeper browser tester today. In contrast to other more "nerdy" tests like Acid3 and SunSpider, which are useful for people like me, Peacekeeper's tests are targeted more at actual normal usage of a browser, making it slightly more interesting for users without a degree in computer science.

On the other hand, if you want to understand the six tests, well.. it gets a bit technical. The first test, «rendering», is a test of how fast the browser can draw stuff to the screen while doing other calculations and operations at the same time. The second test, which Futuremark calls «social networking», is actually a test of how fast the browser can create a SHA1 hash, parse some XML, and filter and sort the elements of an array. Thirdly we arrive at the «complex graphics» test, which tests canvas drawing operations, canvas being part of the new HTML 5 standard. The «data» test, which I have renamed to "arrays", does work on exactly that: array manipulation. The «DOM manipulation» test is one of the most important tests, as it tests the browser's ability to look up elements in the DOM fast. This should be of particular interest to jQuery fans. The last test, «text parsing», is what its name suggests: parsing and searching in strings. To get a full, detailed explanation of what the individual tests entail, see Futuremark's FAQ site.

My test computer was an iMac 3.06 GHz with 8GB of RAM and an ATI Radeon HD 4670 graphics card, running OS X 10.6.7. It matters, because these results will vary, a lot, depending on your computer and OS.

As for the results, one can clearly see that the WebKit browsers beat the snot out of Firefox. Chrome is clearly the fastest browser overall, however for DOM manipulation Safari is slightly faster than Chrome. Again, beware of this one, jQuery fans, because this is where all those selectors come into play. Chrome rules array manipulation speed, which might be due to the V8 JavaScript engine. All the browsers basically suck when it comes to rendering HTML 5 Canvas graphics. As a side-note, I observed a test that was run on a Windows based computer as well. To our surprise, both Chrome and Opera beat Internet Explorer 9 into the slippers when it came to canvas performance. Beauty of the web my ass; in your own internally made tests perhaps.

Conclusion
So, judging from this, Google Chrome is the browser you should go for. And in my opinion, it is. For me, I'm sticking with Safari for my day to day use, but I'm using Chrome when writing this blog post.

Scaling and rotating an elephant using JavaScript

This second post about «touch» is somewhat of an extension of my previous post «Drawing by touching using JavaScript». However, in this post the focus will be on «gesture events». A gesture event is an event which will fire when more than one finger is present on the screen. Due to the somewhat complex nature of the accompanying demo, I will not walk through it line by line, but rather focus on some key areas which are important when dealing with touches and gestures.
Step 1 is to take a look at the demo and read through my comments. To understand this you will need a basic understanding of object oriented JavaScript and I will not walk through that part of the demo in this post. What I will explain in some detail is how the gesture events work (to see how touch events work, refer to my previous post) and some nifty tricks you can do with event handlers and transforms. Oh..and you will be needing a touch device to test this thing. Let's get cracking then.

The basic functionality of this demo is to place images on the page which have some touch and gesture events applied to them. This will allow us to move, scale and rotate the images using gestures. An array «elephants» is used to keep track of all the images on the page. On line 70 (in the demo JavaScript file) an Object called «TouchImage» is defined. This object will keep track of all events and transforms associated with one image. We create a new «TouchImage» on line 52. This happens every time you hit the «Create elephant» button. Then on lines 99 - 104:
tImage.image.addEventListener('gesturestart', tImage, false);
tImage.image.addEventListener('gesturechange', tImage, false);
tImage.image.addEventListener('gestureend', tImage, false);
tImage.image.addEventListener('touchcancel', tImage, false);
tImage.image.addEventListener('touchstart', tImage, false);
tImage.image.addEventListener('touchend', tImage, false);

Now, this might look a bit strange: we're passing the object itself («tImage», a reference to «this») as the event handler for the listeners. This is because at line 117 we define a method called «handleEvent» on the «TouchImage». This is a magical method in JavaScript which will handle any event passed to its object. Within this method we check whether the object has a method named after the event type and, if it does, we simply call it.
TouchImage.prototype.handleEvent = function(event)
{
 if(typeof(this[event.type]) === "function"){
  return this[event.type](event);
 }
}
This way we can extend the «TouchImage» object with methods called «gesturestart» and so on, using the same names as the events. An additional plus is that we stay within the scope of the object, avoiding passing closures around like rag dolls. Now to the gesture handlers. In contrast to «touch» handlers, there are only three «gesture» handlers: «gesturestart», «gesturechange» and «gestureend». Each of these will trigger only if there is more than one finger on the screen.
TouchImage.prototype.gesturestart = function(event)
{
 event.preventDefault();
 this.startRotation = this.rotation;
 this.startScale    = this.scale;
}
In «gesturestart» which triggers when a second finger is placed on screen, we capture the current scale and rotation values of the «TouchImage» in question.
TouchImage.prototype.gesturechange = function(event)
{
 event.preventDefault();
 
 this.scale    = this.startScale * event.scale;
 this.rotation = this.startRotation + event.rotation;
 
 this.applyTransforms(); 
}
In «gesturechange» we calculate the scale and rotation using the start values and the gesture's values. The start scale is multiplied by the gesture's scale, and the rotation is calculated by adding the gesture's rotation to the start rotation. This works because both values in the change handler give us the amount changed since the start event. Then we call the «applyTransforms()» method where the actual transform is done.
TouchImage.prototype.applyTransforms = function()
{
 var transform = 'translate3d(' + this.posX + 'px,' + this.posY + 'px, '+this.posZ+'px)';
 transform += ' rotate(' + this.rotation + 'deg)';
 transform += ' scale(' + this.scale + ')';
 this.image.style.webkitTransform = transform;  
}
The transform is done on all properties at the same time. You cannot separate them from each other, because we are in effect overwriting the entire transform style each time something changes. Do also note that we are in fact using «translate3d», which is quite different from the regular 2D «translate». The reason for this is twofold. First, «translate3d» will render the image on its own 3D layer, at times triggering hardware rendering. This will yield a significant performance increase. Secondly, «translate3d» allows us to stack elements along the z-axis, hence we can move the active element to the front. We do this sorting by calling the «sortDepth» method, which simply sorts the «elephants» array.

That's the gesture events. Now, there are regular touch events in this application as well. These are used to move the «TouchImage» objects around the screen. I won't cover how this is done in detail here because it's much the same as in the previous post. However, one thing to notice is that an offset is calculated in the «touchstart» handler. This is done so that, when moving the image, it tracks from the point where the user places her finger on the image. If we didn't do this the image would snap to 0,0 under the user's finger when moved, making for an unexpected user experience.
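A rough sketch of that offset trick (the «posX»/«posY» field names are my own assumptions, and the demo registers and wires its handlers slightly differently through «handleEvent», so check the source for the real thing):

TouchImage.prototype.touchstart = function(event)
{
 event.preventDefault();
 var touch = event.touches[0];
 // Remember where inside the image the finger landed
 this.offsetX = touch.pageX - this.posX;
 this.offsetY = touch.pageY - this.posY;
}

TouchImage.prototype.touchmove = function(event)
{
 event.preventDefault();
 var touch = event.touches[0];
 // Subtract the offset so the image follows the finger instead of snapping its corner to it
 this.posX = touch.pageX - this.offsetX;
 this.posY = touch.pageY - this.offsetY;
 this.applyTransforms();
}
That's it and that's that!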

Drawing by touching using JavaScript


An updated version of this post and demo is available here: http://www.kinderas.com/technology/2014/3/13/a-drawing-application


The web is no longer exclusive to desktop and laptop computers. With the introduction of the iPhone and the iPad, Apple changed how we interact with the web. In the wake of Apple's success with iOS devices we see the emergence of a slew of "handheld" devices. They are all different, some are small, others bigger and more powerful, however most of them utilize touch as a means of input. In this article I take a look at how we can create a simple touch enabled drawing application using only JavaScript and a tinsy-winsy bit of HTML 5.

First, you can try the finished app (with comments) and download the source code from here.

The first thing we need to do is to make sure that the user visiting our web-app is on a device which can understand touch events. This is pretty straightforward. We accomplish this by asking the «window» DOM element if it knows about one of the touch events.
if('ontouchstart' in window == false){
   alert('Sorry, you need a touch enabled device to use this app');
   return;
}
If this does not stop our script, we know that the current device supports the touch events we need. The next step is to prevent the screen itself from scrolling when you drag your finger across it. We need to do this because touch events «bubble» in JavaScript. This means that all the parent elements will get a chance to handle the touch event after we have handled it in our function. So after we have handled the «touchmove» event in our canvas element it will bubble right up to the window element, where the browser will try to scroll the window. PS: Sometimes you might want this behavior, for example when you're scrolling a page. For our demo, we don't need it. To disable page scrolling we listen for the «touchmove» event on the document element, then we flag the event as handled using «event.preventDefault()».
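In code, disabling the page scroll boils down to something like this:

// Stop the browser from scrolling the page when a finger is dragged across it
document.addEventListener('touchmove', function(event){
 event.preventDefault();
}, false);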

Now to the construction part. We need to create a canvas element.
var canvas = document.createElement('canvas');
canvas.width  = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);
We now have a «canvas» element which is the same size as our entire page. Next up we need to create a drawing context for the canvas. A context is kinda like a page within a drawing pad. The context can receive JavaScript drawing commands, as we will see later on.
ctx  = canvas.getContext('2d');
ctx.strokeStyle = "rgba(255,0,0,1)";
ctx.lineWidth   = 5;
ctx.lineCap     = 'round';
The context is now set up using the color red with a 5 pixel wide stroke and rounded ends. You can change these values as you like of course.

In order for our fingers to be able to produce wonderful line drawings, we need to tell our application to listen for touch events. We'll need three of them. The «touchstart» event is where we set our starting position for the drawing operation. This event fires when a finger is added to the screen. The «touchmove» is where we draw the lines, this event fires when we move a finger on the screen. The «touchcancel» is where we handle exceptions, like what happens if you receive a call in the middle of your art creation extravaganza. This event fires whenever it needs to.
canvas.addEventListener("touchstart",touchstartHandler,false);
canvas.addEventListener("touchmove", touchmoveHandler,false);
canvas.addEventListener("touchcancel", touchcancelHandler,false);

Now we'll need to handle those events as well, and here is where the real fun begins! Let's have a look at the «touchstart» handler first.
function touchstartHandler(event)
{
   ctx.moveTo(event.touches[0].pageX, event.touches[0].pageY);
}
It's just one line, but do take notice of the «touches» object contained within the event. This is an array (well, kind of) of touches. The reason for this is that most humans have more than one finger, so the «touches» object could actually contain several values. We only need one, so we refer to the first (0) element in the «touches» object. Now that we have found the first finger we get its location on the screen by asking for the «pageX/Y» values. We then move the canvas context pointer to the coordinates of this finger using the «moveTo» method. This happens every time we place a finger on the screen. But only for the first finger.
function touchmoveHandler(event)
{
    ctx.lineTo(event.touches[0].pageX, event.touches[0].pageY);
    ctx.stroke();
}
In the «touchmove» handler we do the actual work of telling the canvas context to draw a line from its last location to where the finger is now. The first time this is called, the line is drawn from where we placed the finger on the screen, set in the «touchstart» handler. After that, the canvas context will update its starting position to the location of the last drawing operation. Like the «moveTo» command in «touchstart», the «lineTo» will update the coordinate position, but it will also issue a drawing command which is rendered to the screen when we call the «stroke» method.

That's it. Try it out on your iPad, iPhone or any other touch device which understands the new HTML 5 APIs by clicking this magical link.


Quicktip: Flipping an image with JavaScript


Let's say you wanted to flip an image in your web-app horizontally or vertically. By using a tiny bit of JavaScript and CSS 3 this is really easy.

//Get the image
var img = document.getElementById('myimage');
//Flip it horizontally
img.style.webkitTransform = 'scaleX(-1)';


And you're done! Note that this will only work in webkit browsers. The equivalent for Mozilla browsers would be «img.style.MozTransform». To flip the image vertically you could use: «scaleY(-1)». And to flip both horizontally and vertically at the same time: «scale(-1,-1)».

[edit 17.03.2011]
You can of course do this in other browsers besides Gecko or WebKit based browsers, as pointed out in the comments. Safari will actually support both the «webkit» and the «Moz» prefix, but this will most likely go away soon. So what you'll need to use is feature detection. This will still not work in ALL browsers, but the usable ones will most likely support it.
// Check for the property's existence (an empty string means "supported but not set",
// so a simple truthiness test would always fall through to the last branch)
if(img.style.webkitTransform !== undefined){
  img.style.webkitTransform = 'scaleX(-1)';
}else if(img.style.MozTransform !== undefined){
  img.style.MozTransform = 'scaleX(-1)';
}else if(img.style.OTransform !== undefined){
   img.style.OTransform = 'scaleX(-1)';
}else if(img.style.msTransform !== undefined){
   img.style.msTransform = 'scaleX(-1)';
}else{
   img.style.transform = 'scaleX(-1)';
}

HTML 5 Offline data storage

One of the most anticipated features of HTML 5 and all its accompanying technologies and APIs is «offline storage». In this post I'll take a "real world" approach to using the HTML 5 Web Database and the «HTML 5 Offline application cache». I will take you through an example where a complete elephant gets stored on your computer. That's right! A pink one as well! The point of this demo is to investigate how to store both static data as well as dynamic data which might not be known at design time. For this occasion I have created a demo which you are free to download and inspect. It is fully commented and somewhat verbose for easier reading. What this application does in essence is store an HTML and a JavaScript file on your computer, then it goes on to store a picture (which could be dynamic data) in a local database, hence making the application usable when not online. Let's have a look!

So, our goal is to store both static data and some dynamic data. To achieve this we'll need to use two approaches as mentioned above. The first one, called «HTML 5 Offline Application Storage» is almost automatic once you have it configured. We will utilize this method to store our main JavaScript file so that it will work when you're not online. Cool!
This approach uses a simple text file, called a «manifest file», which tells the browser which data to store locally. In order for this file to be interpreted correctly by the browser you'll need to configure your web-server to serve it with the «text/cache-manifest» mime-type. It doesn't matter what kind of extension you use, but I prefer either «.manifest» or «.cache». Now that that's out of the way we need to create the «manifest file». Mine looks something like this.

CACHE MANIFEST
# Cache manifest version 1.0.5
# If you change the version number in this comment,
# the cache manifest is no longer byte-for-byte
# identical.

main.js

NETWORK:
# All URLs that start with the following lines
# are whitelisted.

http://web.kinderas.com/

The first line MUST be the text «CACHE MANIFEST». After that you go on to specify the files you'd like to be cached. Here you'll put stuff like JavaScript files, HTML pages, CSS files and so on. Mine only has the one «main.js». Note that you do NOT need to specify the HTML file which declares the manifest file. This will typically be your «index.html» file or something like that. More on that later. The next section is the «NETWORK:» section. In this section you specify a "white list" containing URLs from which your application is allowed to get its data. If you don't specify this your app will not download any data, not from the server it's hosted on or from anywhere else. I have specified my own domain since this is where the elephant is hosted.
You declare the manifest file in your HTML file like so:
<html manifest="cache.manifest">
That is it for the manifest file actually. If you are simply going to host a static site, a game or something like that, this will work just fine as is. Note that to update your files, you do need to make a change in the manifest file itself, like incrementing the version number.

Next we need to add some sexy JavaScript in order to make the pink elephant available for your viewing pleasure in locations the WiFi gods have forsaken. There are three main steps to this: (1) Opening and creating the database and the table if it's not already present. (2) Reading the image from the database, or saving it to the database if it's not already in there. (3) Displaying the image. I will not be explaining every line of the code in this post; instead, take a look at the JavaScript file and read the comments. However I will discuss some of the more important points briefly.
Not all browsers will support the HTML 5 database APIs, so we need to check for this before we can do anything. To do this we check for the existence of the «openDatabase» method on the «window» object, like so:

if(!window.openDatabase){
// No support for HTML 5 db
return;
}

This will detect if the browser has support for the methods we need. If not, give the user a message or some alternative content.
Then, to open / create the database we simply write:
db = openDatabase('testdb','1.0','Offline Elephant DB',1024*1024);
We have just created a 1MB database or opened one if it already existed. Now it's ready to execute SQL queries using transactions.

var sql = 'CREATE TABLE IF NOT EXISTS offline_image (id INTEGER ....);';
db.transaction(
 function(transaction){
  transaction.executeSql(sql,[],
  function(transaction, result){
   //The table was created
  },
  errorHandler);
 }
);

The database table has now been created if it wasn't already there. We now need to check if there is an image already saved in the database; this happens on line 56 in the «readImage()» function. If there is an image with a matching filename saved, we use that; if not, we go on to load and serialize the image, as follows.

var canvas = document.createElement('canvas');
var ctx    = canvas.getContext('2d');
var img    = document.createElement('img'); 
img.onload = function(){
 canvas.width  = img.width;
 canvas.height = img.height;
 ctx.drawImage(img,0,0,img.width,img.height);
 var base64Image = canvas.toDataURL();
 showImage(base64Image);  
}
img.src = sImgURL;

To be able to save the image we need to create a text version of its data. We can accomplish this by loading the image and then rendering it in a canvas element. The Canvas element has a method called «toDataURL» which will create a base64 representation of the Canvas content. Base64 images, also referred to as data URLs, can easily be saved to the database. Base64 data can also be read directly by the «img» element, so there is no need to decode the base64 string again once it's encoded. But do note that the canvas element will give you the PNG base64 version of the image, and it's quite a bit larger than the original binary file.
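To give an idea of the database part (the «offline_image» table is from the snippet above, but the column names, filename and element ID here are assumptions; the demo's schema may differ), saving the base64 string and reading it back could look roughly like this:

// Save the base64 string (sketch; the 'filename' and 'data' columns are assumed)
db.transaction(function(tx){
 tx.executeSql('INSERT INTO offline_image (filename, data) VALUES (?, ?);',
               ['elephant.png', base64Image]);
});

// Read it back later and hand it straight to an img element
db.transaction(function(tx){
 tx.executeSql('SELECT data FROM offline_image WHERE filename = ?;', ['elephant.png'],
  function(tx, result){
   if(result.rows.length > 0){
    document.getElementById('elephant').src = result.rows.item(0).data;
   }
  });
});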

That's the gist of it, but you do need to take a look at the demo and the JavaScript in order to really understand what is going on here. Once you get the hang of it, this is a really powerful approach to making web-applications much more accessible and more interesting.


9 things I've learned about web-development

I've been a developer for quite some time now, meddling with both the web-centric and the native side of things. As of late, that is, for the last 3 months or so, I have focused more in depth on "standards" based web development, meaning HTML, CSS and JavaScript. Even if I've been writing HTML since 1996 (yeah, I'm getting old), I have never really bothered to focus in depth on the basic building blocks of HTML, CSS and JavaScript, until now. 4 books and many many hours of training videos later I feel that I've learned something, which is that an iPad makes for a lousy pillow if you keep trying to sleep on it! But more than that, I found 9 things that I have learned, which may be blatantly obvious to you if you are a fancy pantsy web-developer, but then again, maybe you'll learn something new?

1. Web-designers and web-developers think differently.
What do I mean by that? Consider how you're tagging your HTML with classes and IDs. The designer approach to this would be classes all the way, then using the clever selector syntax of CSS to reach the nested areas of the document. This makes sense when working with formatting. Developers tend to view the HTML as a framework with components. Classes are used for formatting while IDs are used to identify areas of the markup one would need to reach via JavaScript. Designers will prioritize the "flow" of the «document» whilst developers don't consider it a document at all. I don't think either approach is «correct» or «wrong», it's just two ways of approaching the same challenge.

2. CSS is not a layout language.
Almost all layout in modern HTML based webpages is done using CSS, so how can anybody claim that it's not a layout language? I too was somewhat surprised when time and time again the "people who know what they are talking about" kept repeating this. From the WHATWG to the W3C, and several books on the topic, everybody kept driving this home: «not a layout language». Turns out, CSS is a formatting language with some layout features. Semantics? Well, not really. If you compare CSS 2.1 to a "real" layout language like MXML you'll soon notice a big difference. Layout languages, or UI markup languages, have special layout components to group elements, flow elements, and sort elements both horizontally and vertically. CSS 2.1 does not have this; it's totally reliant on either floats or absolute positioning. This will get better with the introduction of the flexible box model in CSS 3, but it will still remain a formatting language at heart.

3. CSS selectors are incredibly powerful...stuff
This is one of the areas in which I have discovered the most new stuff. CSS 2.1 embodies most of them, but with the introduction of the CSS 3 standard, selectors can now do some amazing stuff. Conditional child selection and substring matching within attributes while traversing child elements are just some examples. And the best part is that they are largely supported in all modern browsers, including Internet Explorer 8. If I were to recommend one area on which to focus your attention, if you don't already know this, it would be CSS selectors. It's not complicated or hard to understand; there is just so much under the hood which can make your CSS writing a lot more enjoyable.

4. Things will look different in different places.
If you try to create pixel perfect designs across all browsers and platforms you will go bonkers, that is unless you are a technophile masochist who derives enjoyment from banging your head against the gigantic wall of Internet Explorer inconsistency. You're much better off using either the progressive enhancement or the graceful degradation approach, where you serve different users the same content, only wrapped in a slightly different presentation. This way Internet Explorer users can sit and stare at that black and white page of rich text all day long, while us WebKit fanboys can swim in the loveliness that is a modern browser with animations and shit.

5. A position is not the same position elsewhere, or everywhere.
Since this is more or less an extension of point 4, I can't be bothered to find a pretty picture for this one. (As a side note, if you use Google image search for the term "position", turn safe search on!) What this point concerns, and what I've found, is this: Firefox and Chrome (or any other browsers) do not necessarily have the same interpretation when it comes to rendering a point at, say, x: 100, y: 100. Since it's up to the user agent (browser) to parse your CSS and HTML and then draw it on the screen, you will at certain points end up in a situation where an absolute point in the top left coordinate system will differ from user agent to user agent, and even for the same user agent on different operating systems. This can be avoided by using floats and containers instead of absolute positioning. But it's better to accept that things might not look exactly the same everywhere.

6. Modernizr Rocks!
If you ever need to use conditional CSS 3 or HTML 5, Modernizr is by far the best solution I've found. The way Modernizr works is actually twofold. You can first use it with CSS directly. Let's say you want to take advantage of the RGBA color model found in CSS 3, but you also need to support those pesky Internet Explorer users. All you need to do in your stylesheet is to prefix your class or id selector with «.rgba». Let's say I wanted to apply a style via the selector «h2[class*="onkey"]», but only for browsers supporting RGBA. It would look like this: «.rgba h2[class*="onkey"]». The «.rgba» class simply adds itself as an ancestor class. This style would then only apply if RGBA was supported. The other way of using Modernizr is with JavaScript. To check for support for H.264 video playback, simply use «Modernizr.video.h264» and it'll return an empty string (falsy), "maybe" or "probably". Yep, that is the HTML 5 spec for video format detection..
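A tiny sketch of the JavaScript side (assuming Modernizr is loaded on the page):

if(Modernizr.rgba){
 // This browser understands rgba() colours
 document.body.style.backgroundColor = 'rgba(0, 0, 0, 0.5)';
}
if(Modernizr.video && Modernizr.video.h264){
 // h264 is "", "maybe" or "probably", so a truthy check is a rough "can play" test
 console.log('H.264 playback: ' + Modernizr.video.h264);
}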

7. jQuery selectors are convenient, JavaScript is fast!
If you don't know what jQuery is, you should go find out! Someone once wrote that jQuery is God's gift to JavaScript. Where God is of course John Resig, and I was the one that wrote that, just now in fact. jQuery is seriously awesome, it allows for fast and easy cross browser development. Underscoring cross browser, which is what it does really really well. It is just a JavaScript framework, a tiny one at that, but crammed with neat functionality, from animations and ajax loading to the usage of CSS selectors to get a hold of DOM elements. You can say things like «jQuery('div.donkey')», which would give you all the donkey divs on the page. This is really convenient, because you can find stuff in the DOM with the same syntax used in the CSS. There is a downside however. When using CSS selectors in a CSS file or in the HTML file, the user agent takes care of all the heavy lifting required to traverse the DOM and find those elements. This is fast because the user agent does this natively using a compiled language such as C. However, jQuery is not written into the native part of the browser and will therefore need to use JavaScript to traverse the DOM to find the elements based on the CSS selector. Hence using the built-in JavaScript functions «getElementById», «getElementsByTagName» or the spanking new «getElementsByClassName» is much faster than using the jQuery selectors. What you can do is combine them, like so: «jQuery(document.getElementById('id-name'))». Then you get the speed of the native JavaScript functions while keeping all the jQuery goodness as well.

8. There are datatypes in JavaScript!
!!Nerd alert!! Even if you don't explicitly declare datatypes in JS, there is a difference between, for example, «"2"» and «2». If you were to add these two values together, «"2" + 2», you'd get «"22"», while «2 + 2» returns «4». Now, what if «a = [2,3,"a"]» and «b = 2»? What would «a + b» give you? That's right: «a + b => "2,3,a2"». JS has a lot of functions for dealing with this. You have «toString()» to convert a number to a string, «parseInt(strNum)» to convert a string into a whole number and so on. The important thing is to be astutely aware of it, even if you can't declare a strict datatype.
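A few console experiments illustrate the point:

console.log("2" + 2);               // "22" (string concatenation)
console.log(2 + 2);                 // 4
var a = [2, 3, "a"], b = 2;
console.log(a + b);                 // "2,3,a2" (the array is turned into a string first)
console.log(parseInt("2", 10) + 2); // 4
console.log((2).toString() + "2");  // "22"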

9. The WebKit Web Inspector is your most important tool
If you're in any WebKit browser, Safari, Chrome and so on, you have access to the built-in "Web Inspector". This tool is your best friend when working with HTML, CSS and JavaScript. It will do anything the more famous Firebug does, and more. Don't get me wrong, I'm not railing on Firebug, which is an awesome tool; I am however highly recommending the WebKit Web Inspector because of its speed, advanced profiling tools and unsurpassed JavaScript debugger. Try it and you'll see why I'm praising it!

So that's it, or more likely, that's some of it. If you have actually read all the way down to this point I truly hope that this post has contributed in some small way to your blah blah blah blah..you know the drill. Now get back to work, those recursive functions won't be creating themselves ;-)

The truth about Google and H.264

 If you leave this post remembering only one thing, let it be this: Making something Open Source does not automatically make it better, it just makes it Open Source. How's that for flamebait?

Unless you have been living in a box buried underground lately, you have probably noticed the shitstorm surrounding Google's decision to drop support for the H.264 codec in its Chrome web-browser. Of course, as a web-developer I do indeed have some thoughts on why this happened and whether it's a smart move. However, before sharing from my goldmine of biased opinion we'll need to get the REAL facts straight, so you can judge for yourself.

Open vs. Free
H.264 is a codec constructed from a bunch of patented technologies, and it was developed by the «Video Coding Experts Group» and the «MPEG Group». H.264 is a video standard (ISO/IEC 14496-10) handled by the «International Organization for Standardization». H.264 is an open standard, however it is not a free standard. All the patents included in H.264 are handled by MPEG-LA, not to be confused with the MPEG Group. Microsoft and Apple are some of the minor patent holders of H.264. H.264 is free for non-commercial usage by end users for its lifetime. For commercial usage (e.g. the iTunes Store) and for usage in decoders (e.g. in browsers) one will have to pay a license fee, capped at $6.5M. H.264 sports a bunch of hardware decoders, from iOS devices to televisions and DVD/Blu-ray players. To create H.264 files you can use everything from Apple QuickTime to FFMPEG. H.264 is supported by Apple Safari, Internet Explorer 9 and Google Chrome via the MPEG-4 container.

VP8, the codec within the WebM container, was developed as a proprietary product by a company called On2 Technologies. In 2010 On2 Technologies was acquired by Google, which went on to release the patents within VP8 under a Creative Commons license. VP8 is free to use and free to implement. As of today there is slim to no hardware support for VP8 decoding. To decode/encode VP8 today you'll most likely use Google's own software-based «libvpx» library or the FFMPEG (ffvp8) implementation. VP8 is today supported by Opera and Firefox 4 via the WebM container.

The thing to notice here is that H.264 is indeed an open standard, developed jointly by independent companies and approved as a standard by ISO, but it's not free to implement. VP8 was developed as a proprietary product by one company, and later released into the public domain as Open Source. VP8 is free to implement.

What does it all mean?
In my opinion Google's move has nothing to do with being open; it might have something to do with being free however. For a company like Mozilla, which develops a truly Open Source product, it totally makes sense not to implement a video codec for which you have to pay patent royalties. Note that Mozilla does not support MP3 either, which also contains commercial patents. It's not that Mozilla can't pay the license fees; it is more the fact that you cannot freely distribute as Open Source a product which implements a patent-encumbered video decoder. Mozilla (Firefox) has never had support for H.264, which is a decision I completely support on the basis of their completely open approach in other areas of the implementation as well.
Google on the other hand made the rather strange decision to remove its already existing support for a widely used codec in a browser littered with other patented technologies. Under the flag of being "open". It's this "being open" part which rubs me the wrong way. Google Chrome is NOT an open browser in the same sense as Firefox is; it implements MP3 and its own embedded Flash Player, to mention a few things. As far as I can see, the only logical explanation for Google's move is that this is a business decision. They don't want to pay for a license partly under the control of Apple and Microsoft, and would rather control their own codec, namely VP8, which is under Google's control even if it's Open Source. VP8 has never been through any independent standards organization like H.264 has; it was developed as a proprietary product, in sharp contrast to the development of H.264, which was a joint operation shared by many companies. Codec developers also claim the superiority in quality of H.264 over VP8. It's estimated that about 60-70% of all web video is already encoded in H.264, and a large part of that is YouTube, owned by Google. The reason you can watch video for hours on your Android, Microsoft or iOS device is the H.264 hardware decoder which sits inside of it. Despite all of this Google chooses to exclude H.264 from its web-browser.. to be open.. yeah right!

Graceful degradation vs. Progressive enhancement

There are three possible things you might be feeling after seeing that header. (1) No interest at all (confused?), (2) somewhat intrigued and curious (and confused?) or (3) already in the trenches ready to defend your position.

First of all, I know that this is somewhat of a "controversial" area, but in the end you do not need to agree with what I have to say, it's just my point of view. But, before we get to the subjective part, let's take an objective look at what is involved when considering these two approaches to (in this case) making web-applications.

Graceful degradation (or fault-tolerant system).
In web design, this is the basic idea that we design and write code for the most capable browsers first, then we add support for less capable browsers. An example of this is the "alt" attribute of the «img» tag. Most users will get the image, while those whose browsers do not support (or choose not to display) images get the "degraded" text representation. The «noscript» tag is another example.

Progressive enhancement.
When subscribing to «progressive enhancement», you will first design and write code for the least capable browsers (like Internet Explorer). Then you'll add in functionality to enhance the experience for users with more capable browsers (like Opera, Firefox, Chrome and Safari). The linked stylesheet is a much used example of this. First you create the web-app in pure HTML (which works almost everywhere), then you link a stylesheet to the page (which is ignored by old browsers), making the experience better for more up to date users. The Flash-plugin based «sIFR» method is also an example of «progressive enhancement».

Aren't those two identical (or the subjective part)?
Both of these practices will lead to the same result in most cases. That is, the goal of both approaches is to give the best user experience no matter what browser might be trying to display the web-application. The difference is the starting point. Whereas «progressive enhancement» assumes the lowest common denominator as a starting point, «graceful degradation» assumes the opposite, starting with the newest and adding in support for less capable browsers later.

It is my opinion that the starting point of the lowest common denominator is not such a good idea, because this line of thought will slow down the adoption of new technology, especially within large enterprise environments. I don't think people in general will catch up on anything unless there is an incentive to do so. A "degraded" experience is a great incentive. Also, web-developers should in my opinion be allowed to adopt new technology as early as possible. A bunch of pig-headed Internet Explorer users should not stand in the way of that. There is also the issue of security, speed and support for assistive devices. I would highly recommend the online book "20 things I have learned about browsers and the Internet" from the Google Chrome team, where you can read about why up to date browsers are important for the Internet itself.

I know what you are thinking: "Wouldn't both approaches allow for rapid technology adoption?". The answer is of course yes...and no. You can use «progressive enhancement» and still push the limits of technology, forcing the world forward. However, to put it bluntly, that is not the intention of this approach; its intention is to first support the slackers, then the new technology. «Graceful degradation» aims at first supporting new technology, then giving the slackers the content, but with a lesser experience. There's a subtle, but important difference here.

Note that «graceful degradation» has its drawbacks as well. It is somewhat more complicated to add "old technology" support to a "new technology" project than to build "new technology" on top of "old technology" like «progressive enhancement» does. But, as time goes by, a «progressive enhancement» approach will have more and more of its foundation deprecated, while the root building blocks of a «graceful degradation» based project will become more and more relevant.