Scaling and rotating an elephant using JavaScript

This second post about «touch» is something of an extension to my previous post «Drawing by touching using JavaScript». However, in this post the focus will be on «gesture events». A gesture event is an event which fires when more than one finger is present on the screen. Due to the somewhat complex nature of the accompanying demo, I will not walk through it line by line, but rather focus on some key areas which are important when dealing with touches and gestures.
Step 1 is to take a look at the demo and read through my comments. To follow along you will need a basic understanding of object-oriented JavaScript, and I will not walk through that part of the demo in this post. What I will explain in some detail is how the gesture events work (to see how touch events work, refer to my previous post) and some nifty tricks you can do with event handlers and transforms. Oh, and you will need a touch device to test this thing. Let's get cracking then.

The basic functionality of this demo is to place images on the page which have some touch and gesture events applied to them. This allows us to move, scale and rotate the images using gestures. An array, «elephants», is used to keep track of all the images on the page. On line 70 (in the demo JavaScript file) an object called «TouchImage» is defined. This object keeps track of all events and transforms associated with one image. We create a new «TouchImage» on line 52. This happens every time you hit the «Create elephant» button. Then on lines 99 - 104:
tImage.image.addEventListener('gesturestart', tImage, false);
tImage.image.addEventListener('gesturechange', tImage, false);
tImage.image.addEventListener('gestureend', tImage, false);
tImage.image.addEventListener('touchcancel', tImage, false);
tImage.image.addEventListener('touchstart', tImage, false);
tImage.image.addEventListener('touchend', tImage, false);

Now, this might look a bit strange: we're passing the object itself («tImage» is a reference to the «TouchImage» instance) as the event handler for the listeners. This works because on line 117 we define a method called «handleEvent» on the «TouchImage». When an object (rather than a function) is registered with «addEventListener», the browser calls that object's «handleEvent» method for every event it receives. Within this method we check whether the object has a method named after the event type and, if it does, we simply call it.
TouchImage.prototype.handleEvent = function(event){
 if(typeof(this[event.type]) === "function"){
  return this[event.type](event);
 }
};
This way we can extend the «TouchImage» object with methods called «gesturestart» and so on, with the same names as the events. An additional plus is that we stay within the scope of the object, avoiding passing closures around like rag dolls. Now to the gesture handlers. In contrast to «touch» handlers, there are only three «gesture» handlers: «gesturestart», «gesturechange» and «gestureend». Each of these will trigger only if there is more than one finger on the screen.
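To see the «handleEvent» pattern in isolation, here is a minimal sketch that runs outside the browser (the «TouchThing» name and the fake event objects are mine, for illustration; in the demo the browser does the dispatching):

```javascript
// Any object with a handleEvent method can be passed to
// addEventListener; the browser then calls obj.handleEvent(event)
// with `this` bound to the object itself.
function TouchThing(){
 this.log = [];
}

TouchThing.prototype.handleEvent = function(event){
 // Dispatch to a method named after the event type, if one exists.
 if(typeof(this[event.type]) === "function"){
  return this[event.type](event);
 }
};

TouchThing.prototype.touchstart = function(event){
 // `this` is the TouchThing instance, no closures needed.
 this.log.push('touchstart handled, still in scope: ' + (this instanceof TouchThing));
};

// In the browser you would write:
//   element.addEventListener('touchstart', thing, false);
// Here we simulate the browser dispatching two events:
var thing = new TouchThing();
thing.handleEvent({ type: 'touchstart' });
thing.handleEvent({ type: 'gesturestart' }); // no such method: silently ignored
console.log(thing.log[0]); // → "touchstart handled, still in scope: true"
```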
TouchImage.prototype.gesturestart = function(event){
 this.startRotation = this.rotation;
 this.startScale    = this.scale;
};
In «gesturestart», which triggers when a second finger is placed on the screen, we capture the current scale and rotation values of the «TouchImage» in question.
TouchImage.prototype.gesturechange = function(event){
 this.scale    = this.startScale * event.scale;
 this.rotation = this.startRotation + event.rotation;
};
In «gesturechange» we calculate the scale and rotation by combining the start values with the changed values. The scale is multiplied by the new scale and the rotation is calculated by adding the start rotation to the current rotation. This works because both values in the change handler give us the amount changed since the start event. Then we call the «applyTransforms()» method, where the actual transform is done.
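The arithmetic can be checked with plain numbers. The values in this sketch are made up, but the formulas are the ones above; the key point is that «event.scale» and «event.rotation» are cumulative since «gesturestart», not deltas between «gesturechange» events:

```javascript
// Simulate a gesture on an image that is already scaled and rotated.
var img = { scale: 2, rotation: 30 };        // state before the gesture

// gesturestart: remember where we were.
var startScale    = img.scale;
var startRotation = img.rotation;

// gesturechange fires repeatedly; each event carries the total change
// since gesturestart, NOT the delta since the last gesturechange.
[{ scale: 1.1, rotation: 5 },
 { scale: 1.5, rotation: 12 }].forEach(function(event){
 img.scale    = startScale * event.scale;       // scale factors multiply
 img.rotation = startRotation + event.rotation; // degrees add
});

console.log(img.scale);    // → 3  (2 * 1.5)
console.log(img.rotation); // → 42 (30 + 12)
```

If we had instead multiplied «img.scale» by «event.scale» on every change event, the image would zoom exponentially, because each cumulative factor would be applied on top of the previous one.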
TouchImage.prototype.applyTransforms = function(){
 var transform = 'translate3d(' + this.posX + 'px,' + this.posY + 'px, '+this.posZ+'px)';
 transform += ' rotate(' + this.rotation + 'deg)';
 transform += ' scale(' + this.scale + ')'; = transform;
};
The transform is done on all properties at the same time. You cannot separate them from each other, because we are in effect overwriting the entire transform style property each time something changes. Do also note that we are in fact using «translate3d», which is quite different from the regular «translate». The reason for this is twofold. First, «translate3d» renders the image on its own 3D layer, at times triggering hardware-accelerated rendering. This yields a significant performance increase. Secondly, «translate3d» allows us to stack elements along the «z-axis», hence we can move the active element to the front. We do this sorting by calling the «sortDepth» method, which simply sorts the «elephants» array.
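As a sanity check, the string-building part of «applyTransforms» can be run on its own (the «buildTransform» helper is mine; the demo assigns the resulting string to the element's transform style instead of returning it):

```javascript
// Build the complete transform string in one go. Because assigning to
// the transform style replaces any previous value, translate, rotate
// and scale must all be emitted on every update.
function buildTransform(img){
 var transform = 'translate3d(' + img.posX + 'px,' + img.posY + 'px, ' + img.posZ + 'px)';
 transform += ' rotate(' + img.rotation + 'deg)';
 transform += ' scale(' + img.scale + ')';
 return transform;
}

console.log(buildTransform({ posX: 10, posY: 20, posZ: 2, rotation: 45, scale: 1.5 }));
// → "translate3d(10px,20px, 2px) rotate(45deg) scale(1.5)"
```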

That's the gesture events. Now, there are regular touch events in this application as well. These are used to move the «TouchImage» objects around the screen. I won't cover how this is done in detail here because it's much the same as in the previous post. However, one thing to notice is that an offset is calculated in the «touchstart» handler. This is done so that when moving the image, it tracks from the point where the user places her finger on the image. If we didn't do this, the image would snap to 0,0 under the user's finger when moved, making for an unexpected user experience. That's it and that's that!
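For completeness, the offset trick can also be checked with plain numbers (the positions here are made up for illustration; in the demo they come from the touch event and the image's current position):

```javascript
// touchstart: record where inside the image the finger landed.
var imagePos = { x: 100, y: 200 };         // current top-left of the image
var touch    = { pageX: 130, pageY: 250 }; // where the finger went down

var offsetX = touch.pageX - imagePos.x;    // 30px into the image
var offsetY = touch.pageY - imagePos.y;    // 50px into the image

// touchmove: subtract the offset on every move, so the image follows
// the finger instead of snapping its top-left corner to the touch point.
var move = { pageX: 300, pageY: 400 };
imagePos.x = move.pageX - offsetX;
imagePos.y = move.pageY - offsetY;

console.log(imagePos.x, imagePos.y); // → 270 350
```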

