The Peacekeeper

Peacekeeper on an iMac 3.06 GHz
I decided to try out Futuremark's spanking "new" Peacekeeper browser benchmark today. In contrast to more "nerdy" tests like Acid3 and SunSpider, which are useful for people like me, Peacekeeper's tests are targeted at actual, normal usage of a browser, making it slightly more interesting for users without a degree in computer science.

On the other hand, if you want to understand the six tests, well.. it gets a bit technical. The first test, «rendering», measures how fast the browser can draw stuff to the screen while doing other calculations and operations at the same time. The second test, which Futuremark calls «social networking», is actually a test of how fast the browser can create a SHA-1 hash, parse some XML, and filter and sort the elements of an array. Thirdly, we arrive at the «complex graphics» test, which exercises canvas drawing operations, canvas being part of the new HTML5 standard. The «data» test, which I have renamed "arrays", does exactly that: array manipulation. The «DOM manipulation» test is one of the most important, as it measures the browser's ability to look up elements in the DOM quickly. This should be of particular interest to jQuery fans. The last test, «text parsing», is what its name suggests: parsing and searching in strings. For a full, detailed explanation of what the individual tests entail, see Futuremark's FAQ site.
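To give a feel for what a test like «data» actually hammers on, here is a rough sketch of my own (not Futuremark's actual code) of the kind of array creation, filtering and sorting such a benchmark loops over:

```javascript
// My own sketch of the kind of work the «data» test exercises:
// build an array of objects, filter it, then sort the survivors.
function dataWorkload(n) {
  var items = [];
  for (var i = 0; i < n; i++) {
    items.push({ id: i, score: (i * 31) % 97 });
  }
  // Keep only the high scores, then sort them descending.
  var filtered = items.filter(function (item) {
    return item.score > 50;
  });
  filtered.sort(function (a, b) {
    return b.score - a.score;
  });
  return filtered;
}

var result = dataWorkload(1000);
console.log(result.length, result[0].score);
```

A benchmark would run a loop like this many times and measure the elapsed time; the numbers and field names here are just placeholders.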

My test computer was an iMac 3.06 GHz with 8 GB of RAM and an ATI Radeon HD 4670 graphics card, running OS X 10.6.7. This matters, because the results will vary a lot depending on your computer and OS.

As for the results, one can clearly see that the WebKit browsers beat the snot out of Firefox. Chrome is clearly the fastest browser overall, although for DOM manipulation Safari is slightly faster than Chrome. Again, take note, jQuery fans, because this is where all those selectors come into play. Chrome rules array manipulation speed, which might be due to the V8 JavaScript engine. All the browsers basically suck when it comes to rendering HTML5 Canvas graphics. As a side note, I observed a test run on a Windows-based computer as well. To our surprise, both Chrome and Opera beat Internet Explorer 9 into the slippers when it came to canvas performance. Beauty of the web, my ass; in your own internally made tests, perhaps.

So, judging from this, Google Chrome is the browser you should go for. And in my opinion, it is. Still, I'm sticking with Safari for my day-to-day use, but I'm using Chrome while writing this blog post.

Scaling and rotating an elephant using JavaScript

This second post about «touch» is somewhat an extension of my previous post, «Drawing by touching using JavaScript». This time, however, the focus will be on «gesture events». A gesture event fires when more than one finger is present on the screen. Due to the somewhat complex nature of the accompanying demo, I will not walk through it line by line, but rather focus on some key areas which are important when dealing with touches and gestures.
Step 1 is to take a look at the demo and read through my comments. To follow along you will need a basic understanding of object-oriented JavaScript, which I will not cover in this post. What I will explain in some detail is how the gesture events work (to see how touch events work, refer to my previous post), along with some nifty tricks you can do with event handlers and transforms. Oh, and you will need a touch device to test this thing. Let's get cracking then.

The basic functionality of this demo is to place images on the page with some touch and gesture events applied to them. This allows us to move, scale and rotate the images using gestures. An array, «elephants», is used to keep track of all the images on the page. On line 70 (in the demo JavaScript file) an object called «TouchImage» is defined. This object keeps track of all events and transforms associated with one image. We create a new «TouchImage» on line 52. This happens every time you hit the «Create elephant» button. Then on lines 99 - 104:
tImage.image.addEventListener('gesturestart', tImage, false);
tImage.image.addEventListener('gesturechange', tImage, false);
tImage.image.addEventListener('gestureend', tImage, false);
tImage.image.addEventListener('touchcancel', tImage, false);
tImage.image.addEventListener('touchstart', tImage, false);
tImage.image.addEventListener('touchend', tImage, false);

Now, this might look a bit strange: we're passing the object itself («tImage» holds a reference to «this») as the event handler for the listeners. This works because on line 117 we define a method called «handleEvent» on the «TouchImage». The DOM allows an object with a «handleEvent» method to be registered as a listener, and every event is then dispatched to that method. Within it we check whether the object has a function named after the event type, and if it does, we simply call it.
TouchImage.prototype.handleEvent = function(event){
 if(typeof(this[event.type]) === "function"){
  return this[event.type](event);
 }
};
This way we can extend the «TouchImage» object with methods called «gesturestart» and so on, using the same names as the events. An additional plus is that we stay within the scope of the object, avoiding passing closures around like rag dolls. Now to the gesture handlers. In contrast to «touch» handlers, there are only three «gesture» handlers: «gesturestart», «gesturechange» and «gestureend». Each of these triggers only if there is more than one finger on the screen.
TouchImage.prototype.gesturestart = function(event){
 this.startRotation = this.rotation;
 this.startScale    = this.scale;
};
In «gesturestart» which triggers when a second finger is placed on screen, we capture the current scale and rotation values of the «TouchImage» in question.
TouchImage.prototype.gesturechange = function(event){
 this.scale    = this.startScale * event.scale;
 this.rotation = this.startRotation + event.rotation;
};
In «gesturechange» we calculate the scale and rotation from the start values and the changed values. The scale is the start scale multiplied by the event's scale, and the rotation is the start rotation plus the event's rotation. This works because both values in the change handler give us the amount changed since the start event. Then we call the «applyTransforms()» method, where the actual transform is done.
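To make the arithmetic concrete, here is a tiny standalone sketch with fake event values (the numbers are mine, just for illustration):

```javascript
// Sketch of the gesturestart/gesturechange math with fake values.
// event.scale and event.rotation are cumulative since gesturestart,
// which is why we multiply/add against the captured start values.
var img = { scale: 2, rotation: 30, startScale: 0, startRotation: 0 };

// gesturestart: capture the current values.
img.startScale = img.scale;
img.startRotation = img.rotation;

// gesturechange: fingers pinched to 1.5x and twisted 10 degrees.
var event = { scale: 1.5, rotation: 10 };
img.scale = img.startScale * event.scale;          // 2 * 1.5 = 3
img.rotation = img.startRotation + event.rotation; // 30 + 10 = 40

console.log(img.scale, img.rotation); // 3 40
```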
TouchImage.prototype.applyTransforms = function(){
 var transform = 'translate3d(' + this.posX + 'px,' + this.posY + 'px, ' + this.posZ + 'px)';
 transform += ' rotate(' + this.rotation + 'deg)';
 transform += ' scale(' + this.scale + ')'; = transform;
};
The transform applies all the properties at the same time. You cannot separate them from each other, because we are in effect overwriting the entire transform style property each time something changes. Do also note that we are in fact using «translate3d», which is quite different from the regular «translate». The reason for this is twofold. First, «translate3d» renders the image on its own 3D layer, at times triggering hardware-accelerated rendering. This can yield a significant performance increase. Secondly, «translate3d» allows us to stack elements along the «z-axis», hence we can move the active element to the front. We do this sorting by calling the «sortDepth» method, which simply sorts the «elephants» array.
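The string-building itself is easy to verify in isolation. Here is the same concatenation pulled out into a standalone function (a sketch of mine, not the demo's exact code):

```javascript
// Rebuild the full transform string from scratch on every change.
// Because the whole transform style is one string, all parts must be
// written together each time; you cannot update just the rotation.
function buildTransform(posX, posY, posZ, rotation, scale) {
  var transform = 'translate3d(' + posX + 'px,' + posY + 'px, ' + posZ + 'px)';
  transform += ' rotate(' + rotation + 'deg)';
  transform += ' scale(' + scale + ')';
  return transform;
}

console.log(buildTransform(10, 20, 0, 40, 3));
// translate3d(10px,20px, 0px) rotate(40deg) scale(3)
```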

That covers the gesture events. Now, there are regular touch events in this application as well. These are used to move the «TouchImage» objects around the screen. I won't cover how this is done in detail here, because it's much the same as in the previous post. However, one thing to notice is that an offset is calculated in the «touchstart» handler. This is done so that when moving the image, it tracks from the point where the user places her finger on the image. If we didn't do this, the image would snap to 0,0 under the user's finger when moved, making for an unexpected user experience. That's it and that's that!
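The offset trick can be sketched in a few lines (function and property names are mine, not the demo's; the touch objects are faked):

```javascript
// The touchstart offset trick: remember where inside the image the
// finger landed, so the image tracks the finger instead of snapping
// its top-left corner (0,0) to the touch point.
function touchStart(img, touch) {
  img.offsetX = touch.pageX - img.posX;
  img.offsetY = touch.pageY - img.posY;
}

function touchMove(img, touch) {
  img.posX = touch.pageX - img.offsetX;
  img.posY = touch.pageY - img.offsetY;
}

var img = { posX: 100, posY: 100, offsetX: 0, offsetY: 0 };
touchStart(img, { pageX: 120, pageY: 150 }); // finger lands 20,50 into the image
touchMove(img, { pageX: 220, pageY: 250 });  // drag 100px right and down
console.log(img.posX, img.posY); // 200 200
```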